• Explainability through provenance and lineage
    April 19, 2022 · Alexis Zumwalt
    - In our previous post, we discussed how trustworthy AI adoption and its successful integration into our country’s critical infrastructure and systems are paramount. In this post, we discuss how explainability in AI is crucial to ensuring the transparency and auditability of your AI deployments. Outputs from trustworthy AI applications must be explainable…
  • Spring 2022 Snorkel Flow release roundup
    April 14, 2022 · Molly Friederich
    - Latest features and platform improvements for Snorkel Flow. 2022 is off to a strong start as we continue to make the benefits of data-centric AI more accessible to the enterprise. With this release, we’re further empowering AI/ML teams to drive rapid, analysis-driven training data iteration and development. Improvements include streamlined data…
  • Introduction to trustworthy AI
    April 7, 2022 · Alexis Zumwalt
    - The adoption of trustworthy AI and its successful integration into our country’s most critical systems are paramount to achieving the goal of employing AI applications to accelerate economic prosperity and national security. However, traditional approaches to developing AI applications suffer from a critical flaw that leads to significant ethics and…
  • How to better govern ML models? Hint: auditable training data
    April 6, 2022 · Jonathan Dahlberg
    - ML models will always have some level of bias. Rather than relying on black-box algorithms, how can we make the entire AI development workflow more auditable? How do we build applications where bias can be easily detected and quickly managed? Today, most organizations focus their model governance efforts on investigating…
  • Algorithms that leverage data from other tasks with Chelsea Finn
    March 31, 2022 · Team Snorkel
    - The Future of Data-Centric AI talk series. Background: Chelsea Finn is an assistant professor of computer science and electrical engineering at Stanford University, whose research has been widely recognized, including in the New York Times and MIT Technology Review. In this talk, Chelsea discusses algorithms that use data from…
  • Learning with imperfect labels and visual data with Anima Anandkumar
    March 18, 2022 · Team Snorkel
    - The Future of Data-Centric AI talk series. Background: Anima Anandkumar holds dual positions in academia and industry. She is a Bren professor at Caltech and the director of machine learning research at NVIDIA. Anima also has a long list of accomplishments, ranging from the Alfred P. Sloan scholarship to the…
  • Weak Supervision Modeling with Fred Sala
    March 17, 2022 · Team Snorkel
    - Understanding the label model. Machine learning whiteboard (MLW) open-source series. Background: Frederic Sala is an assistant professor at the University of Wisconsin-Madison and a research scientist at Snorkel AI. Previously, he was a postdoc in Chris Ré’s lab at Stanford. His research focuses on data-driven systems and weak supervision. In…
  • Tips for using a data-centric AI approach
    March 9, 2022 · Team Snorkel
    - The Future of Data-Centric AI talk series. Background: Andrew Ng is a machine-learning pioneer, founder and CEO of Landing AI, and a former team leader at Google Brain. Recently he gave a presentation at the Future of Data-Centric AI virtual conference, where he discussed some practical tips for responsible data-centric…
Results: 1-8 of 67
  • Request a demo to see how you can accelerate AI development with Snorkel Flow’s revolutionary data-centric, programmatic workflow.
