Crossing the demo-to-production chasm with Snorkel Custom

We’re excited to announce Snorkel Custom to help enterprises cross the chasm from flashy chatbot demos to real production AI value.

Alex Ratner
April 11, 2024
