Weekly Demo with Live Q&A

See a demo of Snorkel Flow for generative AI use cases such as RAG optimization and LLM fine-tuning, and chat with one of our machine learning solution engineers.

Each week, we’ll show you (step by step) how Snorkel Flow is used to support a different AI use case, whether it’s classifying chatbot utterances or fine-tuning a RAG embedding model.

We’ll be happy to answer any questions about the demo, Snorkel Flow, and what we’re seeing in enterprise AI.

Meet Our ML Engineers

Interact with us in a live Q&A


Michael Dalman

Machine Learning Engineer,
Snorkel AI


Haley Massa

ML Solutions Engineer,
Snorkel AI


Bryan Wood

Machine Learning Solutions Engineer,
Snorkel AI


Chris Borg

Solutions Engineer,
Snorkel AI

Registration Coming Soon

Fine-tuning embedding models

We’ll create functions to extract information, structure, and metadata from documents, and use them to fine-tune an embedding model for RAG.
January 15, 2025
10-10:30 AM PT

Evaluating LLMs for domain-specific use cases

We'll demonstrate how to create an LLM evaluation that is specialized for a specific domain and use case. We’ll combine ground truth, LLM-as-a-judge, and labeling functions to identify where and why the model may be responding poorly.
January 22, 2025
10-10:30 AM PT
Registration will be available shortly; please check back soon.

Sign Up Now

By submitting this form, I agree to the Terms of Use and acknowledge that my information will be used in accordance with the Privacy Policy.