On-demand webinar
Speakers
Haley Massa
ML Solutions Engineer
Snorkel AI
I'm a Machine Learning Solutions Engineer at Snorkel, driven by my love for tackling real-world ML challenges and my passion for education. In my role, I collaborate with prospects and clients to demonstrate how Snorkel can empower them to reach their AI production goals. When I'm not at work, I enjoy volunteering as an AI mentor and teacher with several non-profit organizations.
Shane Johnson
Senior Director of Product Marketing
Snorkel AI
I started out as a developer and architect before pivoting to product marketing. I'm still a developer at heart (and love coding for fun), but I love advocating for innovative products -- particularly to developers.
I've spent most of my time in the database space, but lately I've been going down the LLM rabbit hole.
How to optimize RAG pipelines for domain- and enterprise-specific tasks
Retrieval-augmented generation (RAG) is the first step in building LLM-powered AI applications for enterprise use cases. However, RAG pipelines must be optimized to ensure accurate, helpful, and compliant responses.
Optimizing RAG requires using only the most relevant information as context, which can be achieved with techniques such as semantic document chunking, fine-tuning embedding and reranking models, and efficient context-window utilization.
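The retrieve-then-rerank pattern behind these techniques can be illustrated with a minimal, self-contained sketch. This is an assumption-laden toy: it uses a bag-of-words counter in place of a real embedding model, fixed-size chunking in place of semantic chunking, and reuses cosine similarity as a stand-in reranker. All function names here (`embed`, `chunk`, `retrieve`, `rerank`) are hypothetical, not Snorkel Flow APIs.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" standing in for a trained embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def chunk(document, max_words=8):
    # Naive fixed-size chunking; semantic chunking would instead split
    # on topic or section boundaries so each chunk is self-contained.
    words = document.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def retrieve(query, chunks, top_k=3):
    # Stage 1: fast similarity search over all chunk embeddings.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]

def rerank(query, candidates, top_n=1):
    # Stage 2: a reranking model re-scores the small candidate set so only
    # the most relevant chunks enter the LLM's context window.
    q = embed(query)
    ranked = sorted(candidates, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_n]

docs = ("Snorkel Flow helps teams label data programmatically. "
        "RAG pipelines retrieve relevant context for an LLM.")
chunks = chunk(docs)
query = "retrieve context for an LLM"
context = rerank(query, retrieve(query, chunks))
```

In a production pipeline, each stand-in above becomes a tunable component: the embedding and reranking models are fine-tuned on domain data, and chunking is driven by document semantics rather than word counts.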
In this webinar, we introduce basic RAG concepts and a standard pipeline. Next, we explain how to optimize each stage of a sophisticated RAG pipeline to ensure the LLM has proper context. Finally, we demo how to optimize RAG pipelines with Snorkel Flow.
Watch this webinar to learn how to:
- Improve LLM responses by eliminating retrieval errors
- Optimize different stages of the RAG pipeline
- Accelerate the delivery of production RAG applications