On-demand webinar

Speakers

Colin Toal

Principal, Business Development, AI/ML
AWS

Chris Borg

Solutions Engineer
Snorkel AI

Building specialized LLMs with Bedrock + SageMaker

Your LLM needs to be fine-tuned on your data. Here’s how.

Large language models (LLMs) can drive operational efficiency, boost revenue, and open new growth channels for enterprises. But to unlock this value, off-the-shelf models need customization with your unique data. In this webinar + live demo, you’ll discover how to use Snorkel Flow with Amazon AI services to fine-tune LLMs for specific tasks and align them with your domain-specific knowledge—fast.

Watch this session to learn how to:

  • Curate high-quality instruction and preference data 10-100x faster for reliable and scalable LLM fine-tuning
  • Apply emerging methods for fine-tuning and alignment using Snorkel Flow with Amazon Bedrock and SageMaker
  • Evaluate LLMs for production readiness and alignment with business needs

Snorkel AI powers efficient data labeling and management for LLM fine-tuning, natively integrating with Amazon Bedrock and SageMaker for seamless, end-to-end development of specialized models.
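As a rough illustration of what the Amazon Bedrock side of such a pipeline can look like (this is a generic sketch, not Snorkel Flow’s actual integration), the Python snippet below uses boto3 to start a Bedrock model customization (fine-tuning) job from curated instruction data already uploaded to S3. The job name, custom model name, IAM role ARN, S3 paths, base model identifier, and hyperparameters are all placeholders you would replace with your own values.

# Minimal sketch: launch an Amazon Bedrock fine-tuning (model customization) job
# from curated instruction data in S3. All names, ARNs, and paths are placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_model_customization_job(
    jobName="specialized-llm-finetune",                        # hypothetical job name
    customModelName="my-domain-llm",                           # hypothetical model name
    roleArn="arn:aws:iam::123456789012:role/BedrockFineTune",  # placeholder IAM role
    baseModelIdentifier="amazon.titan-text-express-v1",        # example base model; check which models support customization in your region
    trainingDataConfig={"s3Uri": "s3://my-bucket/curated/train.jsonl"},   # curated instruction data
    outputDataConfig={"s3Uri": "s3://my-bucket/finetune-output/"},
    hyperParameters={"epochCount": "2", "batchSize": "1", "learningRate": "0.00001"},
)
print("Started customization job:", response["jobArn"])

Once the job completes, the resulting custom model can be evaluated and invoked through Bedrock like any other model; the webinar demo walks through how Snorkel Flow fits around this loop for data curation and evaluation.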

Schedule

Tuesday, March 12, 2024

  • 7:45 PM to 8:30 PM: Arrive and mingle
  • 8:30 PM to 10:45 PM: Dinner and conversation with data science leaders