Speakers
Marty Moesta
Lead Product Manager, Generative AI
Snorkel AI
Marty Moesta serves as the lead product manager for Snorkel's Generative AI products and services. Before that, he was part of Snorkel's founding go-to-market team, focusing on success management and field engineering and working closely with Fortune 100 strategic customers in industries such as financial services, insurance, and healthcare. Prior to joining Snorkel, Marty was Director of Technical Product Management at Tanium.
Amit Kushwaha
Director of AI Engineering
SambaNova Systems
Amit Kushwaha is the Director of AI Engineering at SambaNova Systems, leading the development and implementation of AI solutions that leverage SambaNova's differentiated hardware. Previously, as Principal Data Scientist at ExxonMobil, he led the organization's digital transformation efforts, driving the strategy and execution of a multi-million-dollar AI/ML portfolio that fostered innovation and enhanced operational efficiency at scale.
Passionate about harnessing technology to solve complex challenges, Amit specializes in developing innovative, business-focused solutions at the intersection of artificial intelligence, high-performance computing, and computer simulations. He holds a Ph.D. in Engineering from Stanford University.
Fine-Tuning and Aligning LLMs with Enterprise Data
LLMs often require fine-tuning and alignment on domain-specific knowledge before they can accurately and reliably perform specialized tasks within the enterprise.
The key to transforming foundation models such as Meta's Llama 3 into specialized LLMs is high-quality training data, applied through fine-tuning and alignment.
In this session, we'll provide an overview of methods such as supervised fine-tuning (SFT) and direct preference optimization (DPO), show how to curate high-quality instruction and preference data 10-100x faster and at scale, and demonstrate how to fine-tune, align, and evaluate an LLM.
Join us, and learn more about:
- Curating high-quality training data 10-100x faster
- Emerging LLM fine-tuning and alignment methods
- Evaluating LLM accuracy for production deployment
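For readers curious about the mechanics behind the alignment methods named above, DPO optimizes a simple pairwise objective: increase the policy model's preference margin for the chosen response over the rejected one, relative to a frozen reference model. A minimal sketch of that loss in plain Python (the log-probability values in the usage line are hypothetical, not from the session):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for a single preference pair.

    Each argument is the summed log-probability of a full response
    under either the trainable policy or the frozen reference model.
    beta controls how strongly the policy may drift from the reference.
    """
    chosen_logratio = policy_chosen_logp - ref_chosen_logp
    rejected_logratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_logratio - rejected_logratio)
    # -log(sigmoid(margin)), written as softplus(-margin) for stability
    return math.log1p(math.exp(-margin))

# Hypothetical log-probs: the policy prefers the chosen response more
# than the reference does, so the loss falls below log(2) ~= 0.693.
loss = dpo_loss(-10.0, -15.0, -12.0, -13.0)
```

In practice this loss is computed over batches of curated preference pairs and backpropagated through the policy model only; the reference model stays frozen, which is what keeps the aligned model from drifting too far from its SFT starting point.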