On-demand webinar
Speakers
Marty Moesta
Lead Product Manager, Generative AI
Snorkel AI
Marty Moesta is the lead product manager for Snorkel’s Generative AI products and services. Before that, Marty was part of the founding go-to-market team at Snorkel, focusing on success management and field engineering with Fortune 100 strategic customers across financial services, insurance, and healthcare. Prior to Snorkel, Marty was a Director of Technical Product Management at Tanium.
Tom Walshe
Senior Research Scientist
Snorkel AI
Tom Walshe is a Senior Research Scientist at Snorkel AI. Before Snorkel, Tom worked in LegalTech and financial services, where he focused on building end-to-end AI systems and researching data-centric AI. Prior to industry, Tom completed a PhD in Computer Science at the University of Oxford.
How to fine-tune LLMs to perform specialized tasks accurately
LLMs must be fine-tuned and aligned on domain-specific knowledge before they can accurately and reliably perform specialized tasks within the enterprise.
The key to transforming foundation models such as Meta’s Llama 3 into specialized LLMs is high-quality training data, applied through fine-tuning and alignment.
In this on-demand webinar, we provide an overview of fine-tuning and alignment methods such as DPO, ORPO, and SPIN; explain how to curate high-quality instruction and preference data 10-100x faster (and at scale); and give a demo showing how we fine-tune, align, and evaluate LLMs.
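As background for the methods named above: DPO trains a policy model directly on preference pairs, scoring each pair by how much more the policy prefers the chosen response over the rejected one relative to a frozen reference model. The per-pair objective can be sketched in a few lines of Python (a minimal illustration of the published DPO loss, not the implementation shown in the webinar):

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair.

    logp_* are summed token log-probabilities of the chosen/rejected
    responses under the policy being trained; ref_logp_* are the same
    quantities under the frozen reference model.
    """
    # Implicit reward margin: how much more the policy favors the chosen
    # response over the rejected one, compared to the reference model.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Negative log-sigmoid of the margin: minimized when the policy
    # ranks the chosen response above the rejected one.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When the policy and reference agree exactly, the margin is 0
# and the loss is log(2) ≈ 0.6931.
print(round(dpo_loss(-10.0, -12.0, -10.0, -12.0), 4))  # → 0.6931
```

Minimizing this loss over a dataset of (prompt, chosen, rejected) triples pushes the policy toward the preferred responses without training a separate reward model, which is why curating high-quality preference data matters so much.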
Watch this webinar to learn more about:
- Curating high-quality training data 10-100x faster
- Emerging LLM fine-tuning and alignment methods
- Evaluating LLM accuracy for production deployment