Upcoming demos with live Q&A:

- Optimizing RAG retrieval: November 20
- Fine-tuning embedding models: December 04
- Evaluating LLMs for domain-specific use cases: December 11



Weekly demo with live Q&A

See a demo of Snorkel Flow for generative AI use cases such as RAG optimization and LLM fine-tuning, and chat with one of our machine learning solution engineers.

Each week, we’ll show you (step by step) how Snorkel Flow is used to support a different AI use case, whether it’s classifying chatbot utterances or fine-tuning a RAG embedding model.

We’ll be happy to answer any questions about the demo, Snorkel Flow, and what we’re seeing in enterprise AI.

Sign up for one or for all: just select the sessions you want on the form.

November 20, 10:00–10:30 AM PT

Optimizing RAG retrieval

See how we create labeled question/context/answer triplets and document metadata, and use them to fine-tune the embedding model and chunk retrieval.
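To make the idea concrete, here is a minimal sketch (plain Python, not Snorkel Flow's API) of how labeled question/context pairs can score chunk retrieval. The toy bag-of-words "embedding," the sample chunks, and the `retrieval_accuracy` helper are all illustrative assumptions; a real pipeline would use a trained embedding model.

```python
# Illustrative sketch only: score chunk retrieval against labeled
# question/gold-context pairs using a toy bag-of-words "embedding"
# and cosine similarity. All data and helper names are hypothetical.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words term counts (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieval_accuracy(pairs, chunks):
    """Fraction of questions whose labeled gold chunk ranks first."""
    hits = 0
    for question, gold_chunk in pairs:
        best = max(chunks, key=lambda c: cosine(embed(question), embed(c)))
        hits += best == gold_chunk
    return hits / len(pairs)

chunks = [
    "returns are accepted within 30 days of purchase",
    "our support line is open monday to friday",
]
pairs = [
    ("how many days do i have to return a purchase", chunks[0]),
    ("when is the support line open", chunks[1]),
]
print(retrieval_accuracy(pairs, chunks))  # 1.0
```

A metric like this, computed before and after fine-tuning the embedding model on the labeled pairs, is one way to quantify retrieval improvement.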

December 04, 10:00–10:30 AM PT

Fine-tuning embedding models

We’ll create a predictive model for classifying the intent of chatbot utterances by applying labeling functions to sample conversations and creating high-quality training data.
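The labeling-function idea can be sketched in a few lines of plain Python (this is not the Snorkel SDK; the heuristics and intent labels are hypothetical): several keyword rules each vote on a label or abstain, and a simple majority vote produces a weak training label per utterance.

```python
# Illustrative sketch of labeling functions (plain Python, not the Snorkel
# SDK): keyword heuristics vote on an intent label per utterance, and a
# majority vote produces a weak training label. Rules are hypothetical.
from collections import Counter

ABSTAIN = None

def lf_billing(utterance):
    return "billing" if "invoice" in utterance or "charge" in utterance else ABSTAIN

def lf_password(utterance):
    return "account" if "password" in utterance else ABSTAIN

def lf_greeting(utterance):
    return "greeting" if utterance.startswith(("hi", "hello")) else ABSTAIN

LFS = [lf_billing, lf_password, lf_greeting]

def weak_label(utterance):
    """Majority vote over non-abstaining labeling functions."""
    votes = [v for v in (lf(utterance) for lf in LFS) if v is not ABSTAIN]
    return Counter(votes).most_common(1)[0][0] if votes else ABSTAIN

print(weak_label("i was charged twice on my invoice"))  # billing
print(weak_label("i forgot my password"))               # account
```

In practice a learned label model, rather than a raw majority vote, weights the labeling functions by their estimated accuracies and correlations.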

December 11, 10:00–10:30 AM PT

Evaluating LLMs for domain-specific use cases

We’ll demonstrate how to create an LLM evaluation that is specialized for a specific domain and use case. We’ll combine ground truth, LLM-as-a-judge, and labeling functions to identify where and why the model may be responding poorly.
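As a rough sketch of how those three signals can fit together (every name here is hypothetical, and the judge is a stub standing in for a real LLM call): score each response against ground truth when it exists, fall back to an LLM judge otherwise, and run criteria-style labeling functions to tag likely error modes.

```python
# Illustrative sketch only: a domain-specific eval that combines ground
# truth (exact match when available), an LLM-as-a-judge stand-in, and
# criteria-style labeling functions that tag error modes. All names are
# hypothetical, not Snorkel Flow's API.

def judge(question, response):
    """Stub for an LLM-as-a-judge call; a real system would prompt a model."""
    return 1.0 if response.strip() else 0.0

def lf_has_disclaimer(response):   # hypothetical domain criterion
    return "not financial advice" in response.lower()

def lf_cites_source(response):     # hypothetical domain criterion
    return "[source]" in response.lower()

def evaluate(example):
    q, resp, gold = example["question"], example["response"], example.get("gold")
    # Prefer ground truth when it exists; otherwise fall back to the judge.
    score = (1.0 if resp.strip() == gold.strip() else 0.0) if gold else judge(q, resp)
    tags = {
        "missing_disclaimer": not lf_has_disclaimer(resp),
        "missing_citation": not lf_cites_source(resp),
    }
    return {"score": score, "error_tags": [t for t, bad in tags.items() if bad]}

report = evaluate({
    "question": "can i deduct home office costs?",
    "response": "Generally yes, if the space is used only for work. Not financial advice. [source]",
    "gold": None,
})
print(report)  # {'score': 1.0, 'error_tags': []}
```

Aggregating the error tags across a benchmark set is what lets you see not just how often a model fails, but on which domain criteria it fails.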