Weekly Demo

with Live Q&A

See a demo of Snorkel Flow for generative AI use cases such as RAG optimization and LLM fine-tuning, and chat with one of our machine learning solution engineers.

Each week, we’ll show you (step by step) how Snorkel Flow is used to support a different AI use case, whether it’s classifying chatbot utterances or fine-tuning a RAG embedding model.

We’ll be happy to answer any questions about the demo, Snorkel Flow, and what we’re seeing in enterprise AI.

Meet Our ML Engineers

Interact with us in a live Q&A

Vignesh Ramesh

Machine Learning Solutions Engineer,
Snorkel AI

Haley Massa

ML Solutions Engineer,
Snorkel AI

Bryan Wood

Principal ML Solutions Engineer,
Snorkel AI (ex-Bank of America)

Chris Borg

Solutions Engineer,
Snorkel AI

Upcoming Demos

Sign up for one or all of them in the form below

Evaluating LLMs for domain-specific use cases

We'll demonstrate how to create an LLM evaluation that is specialized for a specific domain and use case. We’ll combine ground truth, LLM-as-a-judge, and labeling functions to identify where and why the model may be responding poorly.
February 26, 2025
10-10:30 AM PT

Classifying chatbot utterances

We’ll create a predictive model for classifying the intent of chatbot utterances by applying labeling functions to sample conversations and creating high-quality training data.
March 12, 2025
10-10:30 AM PT

Optimizing RAG retrieval

See how we create labeled question/context/answer triplets and document metadata, and use them to fine-tune the embedding model and chunk retrieval.
March 19, 2025
10-10:30 AM PT

Sign Up Now

By submitting this form, I agree to the Terms of Use and acknowledge that my information will be used in accordance with the Privacy Policy.