Retrieval-augmented generation (RAG) enables LLMs to produce more accurate responses by finding and injecting relevant context. Learn how.
AI alignment ensures that AI systems align with human values, ethics, and policies. Here’s a primer on how developers can build safer AI.
LLM distillation transfers task-specific performance from a large model into a smaller one, delivering faster, cheaper inference.
It’s critical that enterprises can trust and rely on GenAI evaluation results, and that requires SME-in-the-loop workflows. In my first blog post on enterprise GenAI evaluation, I discussed the importance of specialized evaluators as a scalable proxy for SMEs. It simply isn’t practical to task SMEs with performing manual evaluations – they can take weeks if not longer, unnecessarily…
We’re taking a look at the research paper, LLMs can easily learn to reason from demonstration (Li et al., 2025), in this week’s community research spotlight. It focuses on how the structure of reasoning traces impacts distillation from models such as DeepSeek R1. What’s the big idea regarding LLM reasoning distillation? The reasoning capabilities of powerful models such as DeepSeek…
GenAI needs fine-grained evaluation for AI teams to gain actionable insights.
Specialized GenAI evaluation ensures AI assistants meet business requirements, reflect SME expertise, and comply with industry regulations, all of which are critical for production-ready AI.
Ensure your LLMs align with your values and goals using LLM alignment techniques. Learn how to mitigate risks and optimize performance.
Discover common RAG failure modes and how to fix them. Learn how to optimize retrieval-augmented generation systems for maximum business value.
Learn about large language model (LLM) alignment and how it maximizes the effectiveness of AI outputs for organizations.
To tackle generative AI use cases, Snorkel AI + AWS launched an accelerator program to address the biggest blocker: unstructured data.
Snorkel takes a step on the path to enterprise superalignment with new data development workflows for enterprise alignment.
We’re excited to announce Snorkel Custom to help enterprises cross the chasm from flashy chatbot demos to real production AI value.
Snorkel AI will be at Google Cloud Next. The event will feature more than 700 sessions, so we picked five that we think you shouldn’t miss.
Snorkel AI helped a client solve the challenge of social media content filtering quickly and sustainably. Here’s how.
Fine-tuned representation models are often the most effective way to boost the performance of AI applications. Learn why.