Data-centric Foundation Model Development
With Snorkel Flow’s Data-centric Foundation Model Development, enterprise AI/ML teams overcome the adaptation and deployment challenges that currently block them from adopting foundation models, radically accelerating AI development.
Adapt
Build large, domain-specific training sets to fine-tune foundation models in minutes.
Auto-label
Automatically jumpstart training data labeling by distilling knowledge from foundation models.
Refine
Easily address mistakes foundation models make on your complex, domain-specific task.
Deploy
Build smaller, specialized models deployable within governance and cost controls, as sketched in the example below.
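To make the workflow concrete, here is a minimal end-to-end sketch of the adapt/auto-label/refine/deploy loop. It assumes a hypothetical fm_auto_label stand-in for a foundation model and uses scikit-learn for the small deployable model; it illustrates the loop, not Snorkel Flow code.

```python
# End-to-end sketch of the adapt/auto-label/refine/deploy loop (illustrative;
# not Snorkel Flow code). A foundation model proposes labels, a targeted rule
# refines a known mistake, and a small deployable model is trained.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["Refund my duplicate charge.", "My Wi-Fi keeps dropping.",
        "Invoice shows the wrong amount.", "Router reboots at night."]

def fm_auto_label(doc):
    """Hypothetical stand-in for a zero/few-shot foundation model labeler."""
    return 1 if "charge" in doc.lower() or "invoice" in doc.lower() else 0

def refine(doc, label):
    """Data-centric correction: fix an assumed FM mistake with a targeted rule."""
    return 1 if "refund" in doc.lower() else label

labels = [refine(d, fm_auto_label(d)) for d in docs]

# Deploy step: a small, cheap model trained on the FM-distilled labels.
small_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
small_model.fit(docs, labels)
print(small_model.predict(["Charged twice on my invoice"]))
```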
“With Snorkel Flow, we applied data-centric workflows to distill knowledge from foundation models and build high-cardinality classification models with more than 90% accuracy in days.”
Jackie Swansburg Paulino
CPO, Pixability
Supercharge enterprise AI with foundation models
Snorkel Flow’s unique programmatic labeling capabilities and data-centric development workflow give enterprises the tools they need to put foundation models to use for complex, performance-critical use cases.
Foundation Model Fine-tuning
Create large, domain-specific training datasets to fine-tune and adapt foundation models for enterprise use cases with production-grade accuracy.
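As a minimal sketch of the fine-tuning step (using the Hugging Face transformers library, not Snorkel Flow’s API; the base model and two-example toy dataset are assumptions):

```python
# Fine-tuning sketch with Hugging Face transformers (illustrative only).
# Assumes a programmatically labeled, domain-specific text dataset.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Hypothetical programmatically labeled examples (texts + integer labels).
train_data = Dataset.from_dict({
    "text": ["Claim denied due to missing documentation.",
             "Policyholder requested a coverage upgrade."],
    "label": [0, 1],
})

model_name = "distilbert-base-uncased"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_data = train_data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=train_data,
)
trainer.train()
```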
Foundation Model Warm Start
Use state-of-the-art zero- and few-shot learning with foundation models to auto-label training data at the push of a button and train deployable models.
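The warm-start idea can be illustrated with an off-the-shelf zero-shot classifier. A minimal sketch, assuming Hugging Face’s zero-shot-classification pipeline, an NLI model, and a 0.8 confidence threshold (all illustrative assumptions, not Snorkel Flow internals):

```python
# Zero-shot auto-labeling sketch (illustrative, not Snorkel Flow's Warm Start API).
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

unlabeled = ["The router keeps dropping my Wi-Fi connection.",
             "I was double-charged on last month's invoice."]
candidate_labels = ["networking issue", "billing issue"]  # assumed label space

for text in unlabeled:
    result = classifier(text, candidate_labels)
    # Keep only confident predictions as training labels; 0.8 is an assumed threshold.
    if result["scores"][0] >= 0.8:
        print(text, "->", result["labels"][0])
```

The high-confidence predictions become an initial training set that a deployable model can then be trained on and refined against.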
Foundation Model Prompt Builder
Develop, evaluate, and combine prompts that tune and correct foundation model outputs, precisely labeling datasets to train deployable models.
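A minimal sketch of combining prompts, assuming a hypothetical query_llm wrapper and a simple majority vote; the prompt templates and vote rule are illustrative assumptions, not Snorkel Flow’s Prompt Builder:

```python
# Prompt-ensembling sketch (illustrative; `query_llm` is a hypothetical LLM call).
from collections import Counter

def query_llm(prompt: str) -> str:
    """Hypothetical wrapper around a foundation model completion API."""
    raise NotImplementedError("plug in your LLM client here")

PROMPTS = [
    "Is this support ticket about billing? Answer yes or no.\n\n{doc}",
    "Label the ticket as BILLING or OTHER:\n\n{doc}",
    "Does the customer mention a charge, refund, or invoice? yes/no\n\n{doc}",
]

def prompt_label(doc: str) -> str:
    """Run each prompt variant, map raw outputs to labels, and majority-vote."""
    votes = []
    for template in PROMPTS:
        raw = query_llm(template.format(doc=doc)).strip().lower()
        votes.append("BILLING" if raw.startswith(("yes", "billing")) else "OTHER")
    return Counter(votes).most_common(1)[0][0]
```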
Built a deployment model 1,000x smaller than a fine-tuned GPT-3 model, with the same quality, using less than 1% as many ground-truth labels.
Case study by Snorkel AI Research Team
Learn more
Use all knowledge sources
Intelligently combine foundation models with rich enterprise knowledge sources as inputs to programmatic labeling, using the advanced weak supervision algorithms we pioneered.
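For a rough picture of how weak supervision aggregates such sources, here is a sketch using the open-source snorkel library (related to, but distinct from, the Snorkel Flow platform); the three labeling functions and their data are illustrative assumptions:

```python
# Weak supervision sketch using the open-source `snorkel` library
# (illustrative sources; Snorkel Flow itself is a separate platform).
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

BILLING, OTHER, ABSTAIN = 1, 0, -1

@labeling_function()
def lf_keyword(x):
    """Simple enterprise heuristic: billing vocabulary."""
    words = ("invoice", "refund", "charge")
    return BILLING if any(w in x.text.lower() for w in words) else ABSTAIN

@labeling_function()
def lf_knowledge_base(x):
    """Hypothetical lookup against a known-billing-customer list."""
    return BILLING if x.customer_id in {"c-102", "c-315"} else ABSTAIN

@labeling_function()
def lf_foundation_model(x):
    """Stand-in for a zero-shot foundation model vote (precomputed here)."""
    return x.fm_vote

df = pd.DataFrame({
    "text": ["Please refund the duplicate charge.", "Wi-Fi drops every hour."],
    "customer_id": ["c-102", "c-990"],
    "fm_vote": [BILLING, OTHER],
})

L = PandasLFApplier([lf_keyword, lf_knowledge_base, lf_foundation_model]).apply(df)
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L, n_epochs=200, seed=42)
print(label_model.predict(L))
```

Rather than taking a simple majority vote, the LabelModel weighs each source by its estimated accuracy and correlations, which is what lets noisy foundation model votes and enterprise heuristics be combined productively.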
Foundation model research by Snorkel AI
Ask Me Anything: A Simple Strategy for Prompting Language Models
S. Arora et al. • Oct 2022
Learning to Compose Soft Prompts for Compositional Zero-Shot Learning
N. Nayak et al. • Sep 2022
Multitask Prompted Training Enables Zero-Shot Task Generalization
V. Sanh et al. • Aug 2022
On the Opportunities and Risks of Foundation Models
R. Bommasani et al. • Jul 2022
Contrastive Adapters for Foundation Model Group Robustness
M. Zhang et al. • Jul 2022
Language Models in the Loop: Incorporating Prompting into Weak Supervision
R. Smith et al. • May 2022
Shoring Up the Foundations: Fusing Model Embeddings and Weak Supervision
M. Chen et al. • Mar 2022
Learn more
Are you ready to dive in?
Build high-quality AI 100x faster with Snorkel Flow, the AI data development platform.
Get started