Enterprise GenAI in 2024: applications will likely surge toward production, according to survey results from Snorkel AI's Enterprise LLM Summit.
LLM distillation isolates task-specific LLM performance and reproduces it in a smaller model, delivering faster and cheaper predictions.
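For readers new to the technique, here is a minimal sketch of the objective commonly used to distill a large teacher into a small student. This is illustrative only, not Snorkel's implementation; it assumes a PyTorch setup where both models emit classification logits, and the temperature `T` and mixing weight `alpha` are hypothetical defaults.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend a soft term (match the teacher) with a hard term (match the labels)."""
    # Soft targets: the student learns the teacher's output distribution,
    # softened by temperature T; the T*T factor keeps gradient scale consistent.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard cross-entropy against the ground-truth task labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```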
Snorkel CEO Alex Ratner talks with QBE Ventures’ Alex Taylor about the future of AI, LLMs, and multimodal models in the insurance industry.
We’ve developed new approaches that scale human preference data and align LLM output with enterprise users’ expectations by magnifying SME impact.
Enterprises that aim to build valuable GenAI applications must view them from a systems-level perspective; LLMs are just one part of the ecosystem.
QBE Ventures made a strategic investment in Snorkel AI because it provides what insurers need: scalable and affordable ways to customize AI.
LLMs have claimed the spotlight since the debut of ChatGPT, but BERT models quietly handle most enterprise production NLP tasks.
When done right, advanced classification applications drive business value and automation, unlock new lines of business, and reduce costs.
A brief guide on how financial institutions could use Google Dialogflow with Snorkel Flow to build better chatbots for retail banking.
A proof-of-concept project that combines predictive and generative AI to minimize LLMs’ risks while preserving their advantages.
Large language models open many new opportunities for data science teams, but enterprise LLM challenges persist—and customization is key.
LLM distillation will become more important in 2024, according to a poll of attendees at Snorkel AI’s 2023 Enterprise LLM Virtual Summit.
LLMs have broad but shallow knowledge and fall short on specialized tasks. For best performance, enterprises must fine-tune their LLMs.
The Biden administration issued an executive order that establishes new AI standards and poses new challenges. AI data development can help.
Snorkel AI’s Enterprise LLM Virtual Summit drew 1,000 attendees, with speakers from Contextual AI, Google, Meta, Stanford, and Together AI.