Better FM performance sans fine-tuning

April 05, 2024 | 12:00 pm - 1:00 pm Pacific Time

Watch on demand


During this research talk, you’ll see how you can achieve higher performance from foundation models such as CLIP without spending days, weeks, or months fine-tuning them.

PhD student Dyah Adila from the University of Wisconsin-Madison will discuss how the ROBOSHOT method works and how to apply it. ROBOSHOT improves the robustness of zero-shot embeddings by querying a large language model for helpful and distracting features, then using the output to create a kind of corrective lens for the foundation model used in the classification task.
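To make the idea concrete, here is a minimal sketch of that "corrective lens" step: harmful concept directions are projected out of an embedding and helpful ones are re-emphasized, with classification still done by cosine similarity. This is an illustrative approximation, not the speaker's implementation; the function names (`robustify`, `zero_shot_predict`), the `alpha` weight, and the assumption that the direction vectors come from embedding LLM-generated descriptions with the same encoder are all assumptions made for this example.

```python
import numpy as np

def l2_normalize(v):
    # Scale a vector to unit length.
    return v / np.linalg.norm(v)

def robustify(embedding, harmful_dirs, helpful_dirs, alpha=1.0):
    """Adjust a zero-shot embedding without any fine-tuning:
    remove components along harmful (distracting) concept directions
    and boost components along helpful ones. Direction vectors are
    assumed to be embeddings of LLM-generated concept descriptions
    produced by the same encoder (e.g., CLIP's text encoder)."""
    z = embedding.copy()
    # Project out each harmful direction.
    for d in harmful_dirs:
        d = l2_normalize(d)
        z = z - np.dot(z, d) * d
    # Re-emphasize the components along helpful directions.
    for d in helpful_dirs:
        d = l2_normalize(d)
        z = z + alpha * np.dot(z, d) * d
    return l2_normalize(z)

def zero_shot_predict(image_emb, class_text_embs, harmful_dirs, helpful_dirs):
    # Hypothetical usage: classify a corrected image embedding
    # against class-prompt embeddings by cosine similarity.
    z = robustify(image_emb, harmful_dirs, helpful_dirs)
    scores = [np.dot(z, l2_normalize(t)) for t in class_text_embs]
    return int(np.argmax(scores))
```

Because the correction only manipulates existing embeddings, it stays fully zero-shot: no labels and no gradient updates to the foundation model are required.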

The talk will address how to:

  • Improve the robustness of pretrained model embeddings in a fully zero-shot fashion.
  • Reduce the impact of harmful biases that can affect the performance of pretrained models.
  • Understand and characterize the conditions under which performance can be boosted.

Presented by


Dyah Adila

PhD Student
University of Wisconsin-Madison

Dyah Adila hails from Indonesia and studies under Fred Sala. She has interned at Amazon AWS AI and at JPMorgan Chase in Singapore. Her research interests center on building robust and reliable machine learning solutions, especially in settings where access to labeled data is limited.