Instruction Tuning LLMs with Weak Supervision: A Case Study with RedPajama
In partnership with Together AI, Snorkel researchers recently demonstrated a 24% improvement in response win rate against ChatGPT by programmatically categorizing, scoring, and filtering the original corpus of prompt/response training examples for the open-source RedPajama chat LLM, all with less than one day of work.
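As a rough illustration of the kind of programmatic curation described above, the sketch below scores prompt/response pairs with a few simple heuristic signals and keeps only the examples that clear a quality threshold. The function names, heuristics, and threshold are illustrative assumptions for this post, not the actual pipeline used in the RedPajama case study.

```python
# Minimal sketch of programmatic scoring and filtering of instruction-tuning data.
# The heuristics and threshold below are illustrative assumptions, not the
# actual curation pipeline used in the RedPajama case study.

def score_example(example: dict) -> float:
    """Combine simple heuristic signals into a quality score in [0, 1]."""
    prompt, response = example["prompt"], example["response"]
    signals = [
        len(response.split()) >= 15,                   # response is not trivially short
        not response.lower().startswith("i'm sorry"),  # avoid unhelpful refusals
        response.count("\n") < 50,                     # avoid degenerate, repetitive output
        prompt.strip().endswith("?") or len(prompt.split()) >= 5,  # substantive prompt
    ]
    return sum(signals) / len(signals)


def filter_corpus(corpus: list[dict], threshold: float = 0.75) -> list[dict]:
    """Keep only examples whose heuristic quality score clears the threshold."""
    return [ex for ex in corpus if score_example(ex) >= threshold]


if __name__ == "__main__":
    corpus = [
        {"prompt": "Explain weak supervision in one paragraph.",
         "response": "Weak supervision combines noisy, programmatic labeling "
                     "sources into a higher-quality training signal instead of "
                     "relying solely on hand-labeled examples."},
        {"prompt": "hi", "response": "I'm sorry, I can't help with that."},
    ]
    curated = filter_corpus(corpus)
    print(f"Kept {len(curated)} of {len(corpus)} examples")
```

In practice, weak supervision frameworks aggregate many such noisy signals into a single label or score rather than applying a simple average; the sketch only conveys the overall shape of scoring and filtering a prompt/response corpus.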
Join this open discussion with Snorkel AI co-founder and Head of Technology Braden Hancock and Snorkel AI staff research scientist Chris Glaze to learn more about our results. Get a firsthand look at how instruction tuning, combined with careful curation of training data using weak supervision, can improve the performance of open-source LLMs like Llama 2 and RedPajama.
In advance of the webinar, you can read our blog post for more detail, including the complete results of our research. Please bring your questions to ask live during the event.
About the Presenters
Braden Hancock
Co-founder and Head of Technology
Snorkel AI
Chris Glaze
Staff Research Scientist
Snorkel AI