Better FM performance sans fine-tuning
Watch on demand
During this research talk, you'll see how to achieve higher performance from foundation models such as CLIP without spending days, weeks, or months fine-tuning them.
PhD student Dyah Adila from the University of Wisconsin-Madison will discuss how the ROBOSHOT method works and how to apply it. ROBOSHOT improves the robustness of zero-shot embeddings by querying a large language model for helpful and distracting features, then using the output to build a kind of corrective lens for the foundation model used in the classification task.
The talk will address how to:
- Improve the robustness of pretrained model embeddings in a fully zero-shot fashion.
- Reduce the impact of harmful biases that can affect the performance of pretrained models.
- Understand and characterize the conditions under which performance can be boosted.
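The "corrective lens" idea above can be sketched as simple vector projections: remove an embedding's components along LLM-identified distracting-feature directions, and amplify its components along helpful-feature directions. The sketch below is illustrative, not the authors' implementation; the function name `roboshot_adjust` and the single-pass remove-then-boost loop are assumptions, and in practice the direction vectors would come from encoding LLM-generated feature descriptions with the same foundation model (e.g. CLIP's text encoder).

```python
import numpy as np

def roboshot_adjust(z, harmful, helpful):
    """Illustrative ROBOSHOT-style correction of a zero-shot embedding.

    z        : (d,) embedding to correct
    harmful  : list of (d,) direction vectors for distracting features
    helpful  : list of (d,) direction vectors for helpful features
    """
    z = np.asarray(z, dtype=float).copy()
    # Remove harmful components: subtract the projection of z onto each
    # distracting-feature direction (vector rejection).
    for v in harmful:
        u = v / np.linalg.norm(v)
        z = z - (z @ u) * u
    # Boost helpful components: add back the projection of z onto each
    # helpful-feature direction, doubling its weight.
    for v in helpful:
        u = v / np.linalg.norm(v)
        z = z + (z @ u) * u
    return z
```

After correction, the embedding is orthogonal to every harmful direction, so classification (e.g. cosine similarity against class embeddings) no longer depends on those distracting features; no labels or gradient updates are involved, keeping the procedure fully zero-shot.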
Presented by
Dyah Adila
PhD Student
University of Wisconsin-Madison
Dyah Adila hails from Indonesia and studies under Fred Sala. She has interned at Amazon AWS AI and at JPMorgan Chase in Singapore. Her research interests center on building robust and reliable machine learning solutions, especially in settings where access to labeled data is limited.