Latest posts
- Snorkel teams with Microsoft to showcase new AI research at NVIDIA GTC
- Microsoft infrastructure facilitates Snorkel AI research experiments, including our recent high rank on the AlpacaEval 2.0 LLM leaderboard. ...
- How Skill-it! enables faster, better LLM training
- Humans learn tasks better when taught in a logical order. So do LLMs. Researchers developed a way to exploit this tendency called “Skill-it!” ...
- Fine-tuned representation models boost LLM systems. Here’s how
- Fine-tuned representation models are often the most effective way to boost the performance of AI applications. Learn why. ...
- Enterprise GenAI to surge in 2024: survey results
- Enterprise GenAI 2024: applications will likely surge toward production, according to Snorkel AI Enterprise LLM Summit survey results. ...
- Large language model training: how three training phases shape LLMs
- Training large language models is a multi-layered stack of processes, each with its unique role and contribution to the model's performance. ...
- LoRA: Low-Rank Adaptation for LLMs
- Low-rank adaptation (LoRA) lets data scientists customize GenAI models like LLMs faster than traditional full fine-tuning methods. ...
- LLM distillation demystified: a complete guide
- LLM distillation isolates task-specific LLM performance and mirrors it in a smaller format—creating faster and cheaper performance. ...
- Enterprises must shift their focus from models to data in AI development
- Snorkel AI CEO Alex Ratner explains his view on the importance of AI in data development and illustrates his position with two case studies. ...
Results: 25–32 of 265