Improving the accuracy of domain-specific tasks with LLM distillation
- Train small language models (SLMs) for specialized tasks
- Choose between LLM fine-tuning and distillation
- Reduce inference costs while preserving response quality
Register now
The reasoning capabilities of large, state-of-the-art open models such as DeepSeek R1 rival those of popular closed models, but for domain-specific tasks their broad generalization is rarely needed, and the compute resources required for low-latency inference often put them out of reach.
However, LLM distillation has proven to be an effective technique for creating small language models (SLMs) that excel at specialized tasks, preserving the reasoning capabilities of large models while significantly reducing inference costs.
In this webinar, we’ll provide an overview of LLM distillation, explain how it compares with fine-tuning, and introduce the latest techniques for training SLMs using foundation models and knowledge transfer methods.
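To make the knowledge-transfer idea concrete, here is a minimal sketch of the classic distillation loss (soft targets from a teacher blended with hard ground-truth labels, following Hinton et al.'s formulation). This is an illustrative toy example, not the specific method covered in the webinar; the `temperature` and `alpha` hyperparameters and all function names are assumptions for demonstration.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature softens the
    # teacher's distribution so the student sees richer "dark knowledge".
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, hard_label,
                      temperature=2.0, alpha=0.5):
    # Soft loss: cross-entropy between the teacher's and student's
    # temperature-softened distributions, scaled by T^2 so its gradient
    # magnitude stays comparable to the hard loss.
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    soft = -sum(ti * math.log(si) for ti, si in zip(t, s)) * temperature ** 2
    # Hard loss: standard cross-entropy against the ground-truth label.
    hard = -math.log(softmax(student_logits)[hard_label])
    # Blend the two objectives; alpha controls how much the student
    # imitates the teacher versus fitting the labels directly.
    return alpha * soft + (1 - alpha) * hard
```

In practice the same blended objective is applied per token over a training corpus, with the large model producing the teacher logits and the SLM being optimized as the student.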
Speakers

Shane Johnson
Senior Director of Product Marketing
Snorkel AI

Charles Dickens
Applied Research Scientist
Snorkel AI