LIVE WEBINAR WITH DEMO

Building specialized LLMs with Bedrock + SageMaker

November 14, 2024
10 AM - 11 AM PT

Your LLM needs to be fine-tuned. Here’s the solution.

Large language models (LLMs) can drive operational efficiency, boost revenue, and open new growth channels for enterprises. But to unlock this value, off-the-shelf models need customization with your unique data. In this webinar + live demo, you’ll discover how to use Snorkel Flow with Amazon AI services to fine-tune LLMs for specific tasks and align them with your domain-specific knowledge—fast.

Join this live session and demo to learn how to:

  • Curate high-quality instruction and preference data 10-100x faster for reliable and scalable LLM fine-tuning
  • Apply emerging methods for fine-tuning and alignment using Snorkel Flow with Amazon Bedrock and SageMaker
  • Evaluate LLMs for production readiness and alignment with business needs

Snorkel AI powers efficient data labeling and management for LLM fine-tuning, natively integrating with Amazon Bedrock and SageMaker for seamless, end-to-end development of specialized models.
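
For a concrete picture of the Amazon Bedrock side of this workflow, the sketch below shows what handing curated instruction data to a Bedrock fine-tuning (model customization) job can look like with boto3. This is a minimal illustration of the AWS API, not Snorkel Flow's own interface; the S3 paths, IAM role ARN, model names, and hyperparameters are placeholders.

  # Illustrative sketch only: bucket, role ARN, and model names are placeholders.
  import boto3

  bedrock = boto3.client("bedrock", region_name="us-west-2")

  # Submit curated instruction data (JSONL in S3) as a Bedrock fine-tuning job.
  response = bedrock.create_model_customization_job(
      jobName="specialized-llm-fine-tune",
      customModelName="my-domain-specialized-model",
      roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
      baseModelIdentifier="amazon.titan-text-express-v1",
      customizationType="FINE_TUNING",
      trainingDataConfig={"s3Uri": "s3://my-bucket/curated-instruction-data.jsonl"},
      outputDataConfig={"s3Uri": "s3://my-bucket/customization-output/"},
      hyperParameters={"epochCount": "2", "batchSize": "1", "learningRate": "0.00001"},
  )
  print(response["jobArn"])  # use this ARN to monitor the fine-tuning job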

  • Date: November 14, 2024
  • Time: 10 AM - 11 AM PT

Register now

By submitting this form, I agree to the Terms of Use and acknowledge that my information will be used in accordance with the Privacy Policy.

Speakers

Colin Toal

Principal, Business Development, AI/ML
AWS

Chris Borg

Solutions Engineer
Snorkel AI