Research roundup: dive into the latest foundation model research
Snorkel AI CEO and co-founder Alex Ratner recently spoke with five researchers about their published work on creative new ways to get value out of foundation models. The researchers used prompting and weak supervision to build better, smaller models, explored how foundation models can bolster programmatic labeling, and showed how an “Ask Me Anything” approach can improve foundation model performance. They also probed how to sharpen and shrink GPT-3, and how contrastive learning boosts foundation model specialization.
These techniques can help organizations take greater advantage of foundation models, and could have important impacts on the field going forward.
The videos from the interview series appear below. Click on any of the titles to see a transcript.
Prompting and weak supervision to build better models
Snorkel researcher Ryan Smith talks about his paper on using foundation models to build compact, deployable, and effective models. By posing multiple questions or prompts to the foundation model, it is possible to refine the output and use it to train smaller, specialist models. This lets organizations take advantage of the benefits of foundation models while still putting governance constraints around them and reducing deployment costs.
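The loop is easy to picture. The sketch below is a minimal illustration of the idea rather than the paper's implementation: it assumes a hypothetical query_llm helper for calling a hosted foundation model, uses made-up prompts, and stands in a scikit-learn classifier for the smaller specialist model.

```python
# Minimal sketch: turn prompted foundation-model answers into training
# labels for a small, deployable model. Not the paper's implementation.
# `query_llm` is a hypothetical helper that sends a prompt to a foundation
# model and returns its text completion; the prompts are illustrative.
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

PROMPTS = [
    "Is the sentiment of this review positive? Answer yes or no.\n{text}",
    "Would the author recommend this product? Answer yes or no.\n{text}",
]

def pseudo_label(text: str, query_llm) -> int:
    """Ask the foundation model several rephrased questions and take a vote."""
    votes = []
    for template in PROMPTS:
        answer = query_llm(template.format(text=text)).strip().lower()
        votes.append(1 if answer.startswith("yes") else 0)
    return Counter(votes).most_common(1)[0][0]

def train_specialist(texts, query_llm):
    """Distill the foundation model's answers into a small specialist model."""
    labels = [pseudo_label(t, query_llm) for t in texts]
    vectorizer = TfidfVectorizer()
    features = vectorizer.fit_transform(texts)
    small_model = LogisticRegression().fit(features, labels)
    return vectorizer, small_model
```

The small model can then be deployed and governed on its own, with no runtime dependency on the foundation model.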
How foundation models bolster programmatic labeling
Mayee Chen, a PhD student at Stanford, published a paper on the intersection of two promising data-centric AI techniques: foundation models and weak supervision. The paper demonstrates how combining the two can improve the effectiveness of programmatic labeling. Chen explains that you can add nuance to weak supervision by allowing the system to learn separate accuracy parameters for data subgroups. The approach also lets the system project labels from labeled points onto unlabeled points that sit close to them in the embedding space.
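The embedding trick can be sketched in a few lines. The function below is a rough illustration of the intuition, not the paper's algorithm: it copies each labeled point's vote onto unlabeled neighbors in foundation-model embedding space and abstains elsewhere. The threshold and function names are assumptions made for the example.

```python
# Rough sketch: extend labels to nearby unlabeled points in embedding space.
# Not the paper's method; just the "project labels onto neighbors" intuition.
import numpy as np

def propagate_labels(labeled_emb, labels, unlabeled_emb, threshold=0.8):
    """Copy each labeled point's vote onto unlabeled points whose cosine
    similarity to it exceeds the threshold; others stay abstained (-1)."""
    labeled_emb = labeled_emb / np.linalg.norm(labeled_emb, axis=1, keepdims=True)
    unlabeled_emb = unlabeled_emb / np.linalg.norm(unlabeled_emb, axis=1, keepdims=True)
    sims = unlabeled_emb @ labeled_emb.T            # (n_unlabeled, n_labeled)
    nearest = sims.argmax(axis=1)                   # closest labeled point
    propagated = np.where(sims.max(axis=1) >= threshold,
                          np.asarray(labels)[nearest], -1)
    return propagated
```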
How a Brown professor sharpened and shrunk GPT-3
In this conversation, Alex and Brown professor Stephen Bach discuss Bach’s paper, “Multitask Prompted Training Enables Zero-Shot Task Generalization,” which was presented at ICLR this year. The paper explores how to improve prompting for zero-shot learning or zero-shot inference. Bach and his collaborators were able to create a model that was 16 times smaller than GPT-3, yet outperformed it on a number of benchmark tasks. The paper highlights the importance of data curation and multitask supervised training in building successful foundation models.
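The data-curation step at the heart of that work is recasting existing supervised datasets as natural-language prompts and training on many tasks at once. The snippet below is a toy illustration of that reformatting; the templates are made up for the example and are not drawn from the paper.

```python
# Toy illustration of casting one supervised example into several
# natural-language (input, target) pairs for multitask prompted training.
# The templates and verbalizers here are invented for illustration.
TEMPLATES = [
    ("Premise: {premise}\nHypothesis: {hypothesis}\n"
     "Does the premise entail the hypothesis? Yes or no.", {0: "yes", 1: "no"}),
    ("Given that \"{premise}\", is it true that \"{hypothesis}\"? Yes or no.",
     {0: "yes", 1: "no"}),
]

def to_prompted_examples(example: dict) -> list[tuple[str, str]]:
    """Turn one labeled record into several (input_text, target_text) pairs."""
    pairs = []
    for template, verbalizer in TEMPLATES:
        prompt = template.format(**example)          # extra keys are ignored
        pairs.append((prompt, verbalizer[example["label"]]))
    return pairs

# Example:
# to_prompted_examples({"premise": "A dog runs.", "hypothesis": "An animal moves.", "label": 0})
```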
Contrastive learning boosts foundation model specialization
Stanford PhD student Ananya Kumar discusses his paper “Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation.” The paper focuses on how foundation models like BERT, GPT-3, and SimCLR can be pre-trained on a variety of data domains and still achieve state-of-the-art results when fine-tuned on a single domain. Ananya and Alex discuss the importance of having unlabeled data from the domain you care about, as well as the need for augmentations that create connectivity between the domains. They also discuss the possibility of automatically optimizing augmentations to further improve performance.
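For readers unfamiliar with the mechanism being discussed, the toy function below shows a generic InfoNCE-style contrastive objective over two augmented "views" of the same batch. It is a stand-in sketch of contrastive pre-training in general, not the paper's training setup.

```python
# Toy InfoNCE-style contrastive loss: embeddings of two augmentations of the
# same examples should match row-for-row. Generic sketch, not the paper's code.
import numpy as np

def info_nce_loss(view_a, view_b, temperature=0.1):
    """view_a, view_b: (batch, dim) embeddings of two augmented views.
    Each row of view_a is pulled toward the matching row of view_b and
    pushed away from all other rows in the batch."""
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                   # (batch, batch) similarities
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))            # matching pairs on the diagonal
```

The augmentations matter because they are what connect examples across domains; without that connectivity, each domain's representations can collapse into separate clusters.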
Ask Me Anything approach bolsters foundation models
Stanford PhD student Simran Arora discusses her recent research on how prompting methods can enable a 6-billion-parameter model to outperform the 175-billion-parameter GPT-3 model. The method, Ask Me Anything (AMA), applies multiple prompts to each example and aggregates the predictions using weak supervision. AMA has demonstrated significant boosts across 14 open-source language models and could reduce the scalability challenges posed by large foundation models. Arora and Alex also discuss the importance of prompt engineering and the need for more principled approaches to fine-tuning and guiding these models.
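To make the recipe concrete, here is a hedged sketch of the AMA pattern: each prompt acts like a noisy labeling function, and a weak-supervision label model learns how much to trust each one without ground-truth labels. Using Snorkel's LabelModel as the aggregation step is an assumption for this sketch, and the query_llm helper and prompts are hypothetical.

```python
# Sketch of the AMA pattern: several reformatted prompts per example,
# each treated as a noisy labeling function, aggregated with a label model.
# `query_llm` is a hypothetical helper; the prompts are illustrative.
import numpy as np
from snorkel.labeling.model import LabelModel

PROMPTS = [
    "Answer the question with yes or no: {text}",
    "True or false: {text}",
    "Claim: {text}\nIs the claim correct? Yes or no.",
]

def vote_matrix(texts, query_llm):
    """Build an (n_examples, n_prompts) matrix of votes in {1, 0, -1 (abstain)}."""
    rows = []
    for text in texts:
        row = []
        for template in PROMPTS:
            answer = query_llm(template.format(text=text)).strip().lower()
            if answer.startswith(("yes", "true")):
                row.append(1)
            elif answer.startswith(("no", "false")):
                row.append(0)
            else:
                row.append(-1)   # abstain on off-format answers
        rows.append(row)
    return np.array(rows)

def ama_predict(texts, query_llm):
    L = vote_matrix(texts, query_llm)
    label_model = LabelModel(cardinality=2, verbose=False)
    label_model.fit(L)           # learns per-prompt accuracies without labels
    return label_model.predict(L)
```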
Matt Casey leads content production at Snorkel AI. In prior roles, Matt built machine learning models and data pipelines as a data scientist. As a journalist, he produced written and audio content for outlets including The Boston Globe and NPR affiliates.