Snorkel AI CEO and co-founder Alex Ratner recently spoke with five researchers about their published research on creative new ways to get value out of foundation models. The researchers used prompting and weak supervision to build better, smaller models, showed how foundation models can bolster programmatic labeling, demonstrated how an Ask Me Anything approach improves foundation model performance, probed how to sharpen and shrink GPT-3, and examined how contrastive learning boosts foundation model specialization.

These techniques can help organizations take greater advantage of foundation models, and could have important impacts on the field going forward.

Below are videos from the interview series. You can click on any of the titles to see a transcript.

Prompting and weak supervision to build better models

Snorkel researcher Ryan Smith talks about his paper on using foundation models to build compact, deployable, and effective models. By asking multiple questions, or prompts, of the foundation model, it is possible to refine the output and use it to train smaller, specialist models. This lets organizations capture the benefits of foundation models while still placing governance constraints around them and reducing deployment costs.
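As a rough illustration of that recipe, the sketch below prompts a foundation model for pseudo-labels and then trains a small scikit-learn classifier on them. The prompt templates and the `query_foundation_model` call are hypothetical placeholders, not the paper's actual setup.

```python
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression


def distill_from_prompts(texts, prompt_templates, query_foundation_model):
    # 1. Ask the foundation model several prompts per example and keep the most
    #    common answer as a pseudo-label.
    pseudo_labels = [
        Counter(
            query_foundation_model(p.format(text=t)) for p in prompt_templates
        ).most_common(1)[0][0]
        for t in texts
    ]
    # 2. Train a compact specialist model on the pseudo-labeled examples.
    vectorizer = TfidfVectorizer()
    features = vectorizer.fit_transform(texts)
    small_model = LogisticRegression(max_iter=1000).fit(features, pseudo_labels)
    return vectorizer, small_model
```

The small model is cheap to serve and easy to audit, while the foundation model only has to run once, at labeling time.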

How foundation models bolster programmatic labeling

Mayee Chen, a PhD student at Stanford, published a paper on the intersection of two promising data-centric AI techniques: foundation models and weak supervision. The paper demonstrates how combining the two can improve the effectiveness of programmatic labeling. Chen explains that foundation model embeddings add nuance to weak supervision by letting the system learn separate accuracy parameters for subgroups of the data, and by projecting labels from labeled points onto unlabeled points that sit close to them in the embedding space.
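The label-projection idea can be pictured with a small sketch: weakly labeled points pass their labels to nearby unlabeled points in the foundation model's embedding space. This is a simplified, nearest-neighbor illustration of the concept, not the paper's exact algorithm; `embed` stands in for any foundation model encoder.

```python
from sklearn.neighbors import NearestNeighbors


def propagate_labels(labeled_emb, labels, unlabeled_emb, max_dist=0.5):
    """Copy each weak label to unlabeled embeddings within max_dist of a labeled
    point; points with no close labeled neighbor stay unlabeled (None)."""
    nn = NearestNeighbors(n_neighbors=1).fit(labeled_emb)
    dist, idx = nn.kneighbors(unlabeled_emb)
    return [labels[i] if d <= max_dist else None
            for d, i in zip(dist.ravel(), idx.ravel())]

# labeled_emb, unlabeled_emb = embed(labeled_texts), embed(unlabeled_texts)
# extended_labels = propagate_labels(labeled_emb, weak_labels, unlabeled_emb)
```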

How a Brown professor sharpened and shrunk GPT-3

In this conversation, Alex and Brown professor Stephen Bach discuss Bach’s paper, “Multitask Prompted Training Enables Zero-Shot Task Generalization,” which was presented at ICLR this year. The paper explores how to improve prompting for zero-shot learning or zero-shot inference. Bach and his collaborators were able to create a model that was 16 times smaller than GPT-3, yet outperformed it on a number of benchmark tasks. The paper highlights the importance of data curation and multitask supervised training in building successful foundation models.
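To make the multitask prompted training recipe concrete, the sketch below turns a single labeled example into several natural-language prompt/target pairs; the templates shown are illustrative, not the crowdsourced templates used in the paper.

```python
def to_prompted_pairs(example):
    """example: dict with 'premise', 'hypothesis', and 'label' in {'yes', 'no'}."""
    templates = [
        ("{premise}\nQuestion: {hypothesis} True or False?",
         {"yes": "True", "no": "False"}),
        ("Does \"{premise}\" imply that \"{hypothesis}\"? yes or no?",
         {"yes": "yes", "no": "no"}),
    ]
    # Each template yields one (prompt, target) training pair for the same example.
    return [(t.format(**example), verbalizer[example["label"]])
            for t, verbalizer in templates]
```

In the paper, pairs like these, drawn from dozens of datasets, are pooled to fine-tune a text-to-text model, which is then evaluated zero-shot on held-out tasks.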

Contrastive learning boosts foundation model specialization

Stanford PhD student Ananya Kumar discusses his paper “Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation.” The paper focuses on how foundation models like BERT, GPT-3, and SimCLR can be pre-trained on a variety of data domains and still achieve state-of-the-art results when fine-tuned on a single domain. Ananya and Alex discuss the importance of having unlabeled data from the domain you care about, as well as the need for augmentations that create connectivity between domains. They also discuss the possibility of automatically optimizing augmentations to further improve performance.
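For readers unfamiliar with contrastive pre-training, the sketch below shows a common InfoNCE-style objective: two augmented views of the same example are pulled together while other examples in the batch are pushed apart. This is a generic, one-directional simplification for illustration, not the exact objective analyzed in the paper.

```python
import torch
import torch.nn.functional as F


def info_nce_loss(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of two augmented views of the same inputs."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature    # similarity of every view-1 to every view-2
    targets = torch.arange(z1.size(0))  # the matching (positive) pair sits on the diagonal
    return F.cross_entropy(logits, targets)
```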

Ask Me Anything approach bolsters foundation models

Stanford PhD student Simran Arora discusses her recent research on how prompting methods can enable a 6-billion-parameter model to outperform the 175-billion-parameter GPT-3. The method, Ask Me Anything (AMA), applies multiple prompts to each example and aggregates the predictions using weak supervision. AMA has demonstrated significant boosts for 14 open-source language models and could reduce the scalability challenges posed by large foundation models. Arora and Ratner also discuss the importance of prompt engineering and the need for more principled approaches to fine-tuning and guiding these models.
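A rough sketch of that pipeline appears below: each input is rewritten with several question-style prompt templates, each template yields a noisy vote, and the votes are aggregated with a weak-supervision label model rather than trusting any single prompt. The prompt templates and the `lm_answer` call are placeholders; the `LabelModel` usage follows Snorkel's open-source API.

```python
import numpy as np
from snorkel.labeling.model import LabelModel  # open-source Snorkel label model


def ama_votes(examples, prompt_templates, lm_answer):
    """Build an (n_examples, n_prompts) matrix of votes: 1 = yes, 0 = no, -1 = abstain."""
    to_label = {"yes": 1, "no": 0}
    return np.array([
        [to_label.get(lm_answer(t.format(x=x)).strip().lower(), -1)
         for t in prompt_templates]
        for x in examples
    ])

# votes = ama_votes(texts, prompts, lm_answer)
# label_model = LabelModel(cardinality=2)
# label_model.fit(votes)
# predictions = label_model.predict(votes)
```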

Learn More

Follow Snorkel AI on LinkedIn, Twitter, and YouTube to be the first to see new posts and videos!

Image by DeepMind