Latest posts
- How Skill-it! enables faster, better LLM training - Humans learn tasks better when taught in a logical order. So do LLMs. Researchers developed a way to exploit this tendency, called "Skill-it!" ...
- Fine-tuned representation models boost LLM systems. Here’s how - Fine-tuned representation models are often the most effective way to boost the performance of AI applications. Learn why. ...
- Enterprise GenAI to surge in 2024: survey results - Enterprise GenAI 2024: applications will likely surge toward production, according to Snorkel AI Enterprise LLM Summit survey results. ...
- Large language model training: how three training phases shape LLMs - Training large language models is a multi-layered stack of processes, each with its unique role and contribution to the model's performance. ...
- LoRA: Low-Rank Adaptation for LLMs - Low-rank adaptation (LoRA) lets data scientists customize GenAI models like LLMs faster than traditional full fine-tuning methods. ...
- LLM distillation demystified: a complete guide - LLM distillation isolates task-specific LLM performance and mirrors it in a smaller format—creating faster and cheaper performance. ...
- Enterprises must shift their focus from models to data in AI development - Snorkel AI CEO Alex Ratner explains his view on the importance of data development in AI and illustrates his position with two case studies. ...
- Insurance’s GenAI revolution: a business perspective - Snorkel CEO Alex Ratner talks with QBE Ventures' Alex Taylor about the future of AI, LLMs and multimodal models in the insurance industry. ...
Results: 1 - 8 of 237