Weekly demo with Q&A

Fine-tune embedding models: July 17
Classifying chatbot utterances: July 24
Optimizing RAG retrieval: July 31
Fine-tune LLMs: August 7



See a demo of Snorkel Flow for generative AI use cases such as RAG optimization and LLM fine-tuning, and chat with one of our machine learning solution engineers.

Each week, we'll show you, step by step, how Snorkel Flow is used to support a different AI use case, whether it's classifying chatbot utterances or fine-tuning a RAG embedding model.

We'll be happy to answer any questions about the demo, Snorkel Flow, and what we're seeing in enterprise AI.

Sign up for one session or all of them; select the one(s) you want on the form.

July 24, 10-10:30 AM PT

Classifying chatbot utterances

We’ll create a predictive model for classifying the intent of chatbot utterances, applying labeling functions to sample conversations to create high-quality training data.
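The labeling-function idea in that session can be pictured with a minimal plain-Python sketch. The intent labels, keyword rules, and majority vote below are illustrative stand-ins, not Snorkel Flow's actual API (which learns a label model rather than taking a simple majority):

```python
# Toy labeling functions for chatbot-utterance intent, in the spirit of
# weak supervision. Labels and keyword rules are invented for illustration.
ABSTAIN, BILLING, SUPPORT = -1, 0, 1

def lf_billing_keywords(utterance: str) -> int:
    """Vote BILLING if the utterance mentions money-related terms."""
    words = ("invoice", "refund", "charge")
    return BILLING if any(w in utterance.lower() for w in words) else ABSTAIN

def lf_support_keywords(utterance: str) -> int:
    """Vote SUPPORT if the utterance mentions trouble-related terms."""
    words = ("error", "broken", "crash")
    return SUPPORT if any(w in utterance.lower() for w in words) else ABSTAIN

def majority_label(utterance: str, lfs) -> int:
    """Combine labeling-function votes by simple majority.

    (A real pipeline learns a label model over the vote matrix instead.)
    """
    votes = [v for v in (lf(utterance) for lf in lfs) if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)

lfs = [lf_billing_keywords, lf_support_keywords]
print(majority_label("I was charged twice, please refund me", lfs))  # -> 0 (BILLING)
```

The resulting labels become the training set for the intent classifier; noisy individual rules are tolerable because their combined votes are denoised before training.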

July 31, 10-10:30 AM PT

Optimizing RAG retrieval

We'll create labeled question/context/answer triplets and document metadata, and use them to fine-tune the embedding model and improve chunk retrieval.
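As a rough picture of the data side of that workflow, here's a hand-rolled sketch that turns question/context/answer triplets into contrastive training pairs. The field names and random-negative sampling are assumptions for illustration; a real pipeline would feed pairs like these into an embedding-model trainer:

```python
import random

# Labeled question/context/answer triplets of the kind described above
# (toy examples, invented for illustration).
triplets = [
    {"question": "What is the refund window?",
     "context": "Refunds are accepted within 30 days of purchase.",
     "answer": "30 days"},
    {"question": "How do I reset my password?",
     "context": "Use the 'Forgot password' link on the login page.",
     "answer": "Use the 'Forgot password' link"},
]

def to_training_pairs(triplets, seed=0):
    """Turn triplets into (query, positive, negative) examples for
    contrastive fine-tuning of a retrieval embedding model.

    Negatives are sampled from other triplets' contexts (random negatives;
    production pipelines often mine hard negatives instead).
    """
    rng = random.Random(seed)
    pairs = []
    for i, t in enumerate(triplets):
        others = [u["context"] for j, u in enumerate(triplets) if j != i]
        pairs.append({"query": t["question"],
                      "positive": t["context"],
                      "negative": rng.choice(others)})
    return pairs

pairs = to_training_pairs(triplets)
```

Training then pushes each query's embedding toward its positive chunk and away from the negative, which is what improves chunk retrieval at query time.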

August 7, 10-10:30 AM PT

Fine-tuning LLMs for enterprise alignment

We’ll create functions to label a set of compliant and non-compliant question/answer pairs as approved or rejected, and use them to fine-tune an LLM.
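A minimal sketch of that labeling step, assuming invented compliance rules; the rule logic and record format here are illustrative, not the actual workflow in Snorkel Flow:

```python
import json

# Toy rule-based functions that mark question/answer pairs approved or
# rejected. The compliance rules are invented for illustration.
def contains_guarantee(pair):
    """Flag answers that promise investment outcomes."""
    return "guarantee" in pair["answer"].lower()

def shares_account_details(pair):
    """Flag answers that leak account numbers."""
    return "account number" in pair["answer"].lower()

RULES = [contains_guarantee, shares_account_details]

def label_pair(pair):
    """A pair is rejected if any compliance rule fires, else approved."""
    return "rejected" if any(rule(pair) for rule in RULES) else "approved"

def to_finetune_records(pairs):
    """Emit JSONL-style records usable as LLM alignment fine-tuning data."""
    return [json.dumps({"prompt": p["question"],
                        "completion": p["answer"],
                        "label": label_pair(p)}) for p in pairs]

qa_pairs = [
    {"question": "Will this fund grow?",
     "answer": "We guarantee 10% returns."},
    {"question": "How do I close my account?",
     "answer": "Visit the account settings page."},
]
records = to_finetune_records(qa_pairs)
```

The approved/rejected labels are what the fine-tuning step consumes, steering the LLM toward compliant responses.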

August   10-10:30 AM PT

Fine-tuning embedding models

We’ll create functions to extract information, structure, and metadata from documents, and use them to fine-tune an embedding model for RAG.
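To make that concrete, here is a toy extractor in plain Python. The document format, regexes, and field names are assumptions; the point is only the shape of the structured records that could seed embedding fine-tuning data:

```python
import re

# A toy document with recoverable structure and metadata
# (format invented for illustration).
DOC = """Title: Quarterly Compliance Report
Date: 2024-06-30

Section 1. Overview
This report summarizes audit findings.

Section 2. Findings
Three minor issues were identified.
"""

def extract_metadata(doc: str) -> dict:
    """Pull title, date, and section headings via simple regexes."""
    title = re.search(r"^Title:\s*(.+)$", doc, re.M)
    date = re.search(r"^Date:\s*(\d{4}-\d{2}-\d{2})$", doc, re.M)
    sections = re.findall(r"^Section \d+\.\s*(.+)$", doc, re.M)
    return {
        "title": title.group(1) if title else None,
        "date": date.group(1) if date else None,
        "sections": sections,
    }

meta = extract_metadata(DOC)
print(meta["sections"])  # -> ['Overview', 'Findings']
```

Records like these pair document chunks with their structural context, giving the embedding model metadata-aware training signal for RAG.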