Latest posts
- Here’s how Snorkel Flow + Google AI built an enterprise-ready model in a day - Google and Snorkel AI customized PaLM 2 using domain expertise and data development to improve performance by 38 F1 points in a matter of hours. ...
- Snorkel teams with Microsoft to showcase new AI research at NVIDIA GTC - Microsoft infrastructure facilitates Snorkel AI research experiments, including our recent high rank on the AlpacaEval 2.0 LLM leaderboard. ...
- How Skill-it! enables faster, better LLM training - Humans learn tasks better when taught in a logical order. So do LLMs. Researchers developed "Skill-it!", a method that exploits this tendency. ...
- Fine-tuned representation models boost LLM systems. Here’s how - Fine-tuned representation models are often the most effective way to boost the performance of AI applications. Learn why. ...
- Enterprise GenAI to surge in 2024: survey results - Enterprise GenAI 2024: applications will likely surge toward production, according to Snorkel AI Enterprise LLM Summit survey results. ...
- Large language model training: how three training phases shape LLMs - Training large language models is a multi-layered stack of processes, each with its unique role and contribution to the model's performance. ...
- LoRA: Low-Rank Adaptation for LLMs - Low-rank adaptation (LoRA) lets data scientists customize GenAI models like LLMs faster than traditional full fine-tuning methods. ...
- LLM distillation demystified: a complete guide - LLM distillation isolates task-specific LLM performance and reproduces it in a smaller model, delivering faster and cheaper performance. ...
Results: 9-16 of 249