LIVE WEBINAR WITH DEMO

Improving the accuracy of domain-specific tasks
with LLM distillation


Thursday, Mar. 20, 2025

10:00 AM PT / 1:00 PM ET
Join us and learn how to:
  • Train small language models (SLMs) for specialized tasks
  • Choose between LLM fine-tuning and distillation
  • Reduce inference costs while preserving response quality

Register now

By submitting this form, I agree to the Terms of Use and acknowledge that my information will be used in accordance with the Privacy Policy.

The reasoning capabilities of large, state-of-the-art open models such as DeepSeek R1 rival those of popular closed models, but such broad generalization is rarely needed for domain-specific tasks, and the compute resources required for low-latency inference often put these models out of reach.

However, LLM distillation has proven to be an effective technique for creating small language models (SLMs) that excel at specialized tasks, preserving the reasoning capabilities of large models while significantly reducing inference costs.

In this webinar, we’ll provide an overview of LLM distillation, explain how it compares with fine-tuning, and introduce the latest techniques for training SLMs using foundation models and knowledge transfer methods.
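To give a rough sense of the core idea before the session, the sketch below shows response-based distillation in its simplest form: a "student" model is trained to match the softened output distribution of a "teacher" model via a temperature-scaled KL-divergence loss. The models here are toy linear layers and the inputs are random tensors, stand-ins for real LLM/SLM checkpoints and task data; it is an illustration of the general technique, not the specific methods covered in the webinar.

```python
# Minimal sketch of knowledge distillation with a temperature-scaled KL loss.
# Teacher/student are placeholder linear layers over a toy vocabulary;
# a real pipeline would use pretrained LLM (teacher) and SLM (student) weights.
import torch
import torch.nn.functional as F

vocab_size, hidden, batch = 100, 32, 8
temperature = 2.0

teacher = torch.nn.Linear(hidden, vocab_size)   # placeholder "large" model
student = torch.nn.Linear(hidden, vocab_size)   # placeholder "small" model
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(batch, hidden)              # stand-in for task inputs
    with torch.no_grad():
        teacher_logits = teacher(x)             # teacher is frozen
    student_logits = student(x)

    # Soften both distributions with the temperature and train the student
    # to match the teacher; the T^2 factor is standard distillation practice.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```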

Speakers


Shane Johnson

Senior Director of Product Marketing
Snorkel AI


Charles Dickens

Applied Research Scientist
Snorkel AI