Anomaly Detection with Multiple Reference Datasets
This paper proposes generalizations of CWoLa and SALAD that exploit multiple reference datasets to improve performance in resonant anomaly detection, and provides finite-sample guarantees that go beyond existing asymptotic analyses.
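As a rough illustration of the CWoLa-style idea behind these methods, the sketch below trains an ordinary classifier to separate a signal-region sample from two pooled reference datasets and uses its output as an anomaly score; the toy Gaussian data and model choice are illustrative assumptions, not the paper's estimator.

```python
# CWoLa-style weak classifier pooling several reference (background-only)
# datasets; illustrative sketch, not the paper's method.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Toy features: two reference datasets and a "signal region" sample that is
# mostly background plus a small injected anomalous component.
ref_a = rng.normal(0.0, 1.0, size=(2000, 4))
ref_b = rng.normal(0.1, 1.1, size=(2000, 4))   # slightly shifted reference
bkg_sr = rng.normal(0.0, 1.0, size=(1900, 4))
sig_sr = rng.normal(2.0, 0.5, size=(100, 4))   # injected anomaly
signal_region = np.vstack([bkg_sr, sig_sr])

# CWoLa-style labels: signal region vs. pooled references.
X = np.vstack([signal_region, ref_a, ref_b])
y = np.concatenate([np.ones(len(signal_region)),
                    np.zeros(len(ref_a) + len(ref_b))])

clf = GradientBoostingClassifier().fit(X, y)

# The classifier output serves as an anomaly score in the signal region.
scores = clf.predict_proba(signal_region)[:, 1]
print("mean score, background vs. injected signal:",
      scores[:1900].mean(), scores[1900:].mean())
```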
Ask Me Anything: A simple strategy for prompting language models
This paper proposes “Ask Me Anything” (AMA), a prompting method that uses weak supervision to combine the noisy predictions of multiple prompts, themselves generated by the LLM, yielding an average 10.2% performance lift over the few-shot baseline across a variety of open-source models.
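A minimal sketch of the aggregation step: each prompt yields a noisy prediction, and predictions are combined across prompts. AMA learns a weak-supervision label model over the prompt outputs; the plain majority vote and the hypothetical votes below are simplified stand-ins.

```python
# Combine noisy per-prompt predictions; a plain majority vote stands in for
# AMA's learned weak-supervision label model.
from collections import Counter

def aggregate(votes_per_prompt):
    """Majority-vote over per-prompt predictions for one example."""
    return Counter(votes_per_prompt).most_common(1)[0][0]

# Hypothetical outputs of three prompt chains on four examples.
prompt_votes = [
    ["yes", "yes", "no"],
    ["no",  "no",  "no"],
    ["yes", "no",  "yes"],
    ["no",  "yes", "no"],
]
print([aggregate(v) for v in prompt_votes])  # ['yes', 'no', 'yes', 'no']
```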
Contrastive Adapters for Foundation Model Group Robustness
The authors propose Contrastive Adapting, an efficient adapter-training strategy that improves the group robustness of large pretrained foundation models (FMs) without finetuning them, yielding accuracy gains of up to 56.0 percentage points over zero-shot baselines.
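A minimal sketch of the adapter idea, assuming a supervised contrastive loss over frozen embeddings: only the small adapter is trained, never the foundation model. The random embeddings, labels, and adapter architecture below are placeholders, not the paper's setup.

```python
# Train a small adapter on frozen FM embeddings with a contrastive loss.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n, emb_dim = 256, 512
frozen_embs = torch.randn(n, emb_dim)   # stand-in for frozen FM embeddings
labels = torch.randint(0, 4, (n,))      # class labels

adapter = torch.nn.Sequential(
    torch.nn.Linear(emb_dim, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, emb_dim),
)
opt = torch.optim.Adam(adapter.parameters(), lr=1e-3)
diag = torch.eye(n, dtype=torch.bool)

for step in range(100):
    z = F.normalize(adapter(frozen_embs), dim=1)
    sim = (z @ z.T / 0.1).masked_fill(diag, -1e9)  # temperature-scaled sims
    pos = (labels[:, None] == labels[None, :]).float()
    pos.fill_diagonal_(0)                          # same-class pairs only
    logprob = F.log_softmax(sim, dim=1)
    # Pull same-class embeddings together, push others apart.
    loss = (-(pos * logprob).sum(1) / pos.sum(1).clamp(min=1)).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print("final contrastive loss:", float(loss))
```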
Zero-Shot Learning with Common Sense Knowledge Graphs
This paper introduces a general-purpose zero-shot learning framework with a novel transformer graph convolutional network that generates class representations from common sense knowledge graphs, improving over existing WordNet-based methods on zero-shot learning tasks.
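A simplified sketch of the pipeline: run graph convolutions over a common sense graph to produce class representations, then classify by nearest class representation. The paper's network aggregates neighborhoods with a transformer; the mean-aggregation layer and toy graph below are stand-ins.

```python
# Generate class representations from a tiny concept graph, then do
# zero-shot classification; mean aggregation stands in for the paper's
# transformer-based graph convolution.
import numpy as np

rng = np.random.default_rng(0)
dim = 16

nodes = ["cat", "dog", "pet", "wild", "tiger"]
edges = [("cat", "pet"), ("dog", "pet"), ("tiger", "wild"), ("cat", "tiger")]
feats = {n: rng.normal(size=dim) for n in nodes}  # initial node features
W = rng.normal(size=(dim, dim)) / np.sqrt(dim)    # layer weight

def neighbours(n):
    return [b if a == n else a for a, b in edges if n in (a, b)]

def gcn_layer(feats):
    out = {}
    for n in nodes:
        agg = np.mean([feats[m] for m in neighbours(n) + [n]], axis=0)
        out[n] = np.tanh(agg @ W)                 # aggregate, then transform
    return out

class_reps = gcn_layer(gcn_layer(feats))          # two hops of graph structure

# Zero-shot prediction: nearest class representation to an image embedding.
image_emb = class_reps["tiger"] + 0.1 * rng.normal(size=dim)
scores = {c: float(image_emb @ class_reps[c]) for c in ["cat", "dog", "tiger"]}
print(max(scores, key=scores.get))
```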
Binary Classification with Positive Labeling Sources
This paper shows that WEAPO, a weak supervision method for binary classification with only positive labeling sources, is both effective and efficient, achieving the highest label quality and final classifier accuracy among the tested weak supervision approaches on 10 benchmark datasets.
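A minimal sketch of the setting, assuming sources that either vote positive or abstain: examples are scored by how many sources fire. WEAPO itself derives principled source weights from abstention statistics; the uniform vote count here is a simplification.

```python
# Score examples from positive-only labeling sources; a uniform vote count
# stands in for WEAPO's principled source weighting.
import numpy as np

# Rows = examples, cols = labeling sources; 1 = votes positive, 0 = abstain.
votes = np.array([
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 0],
    [0, 0, 1, 0],
])

scores = votes.mean(axis=1)            # fraction of sources firing
labels = (scores >= 0.5).astype(int)   # threshold into hard labels
print(scores, labels)
```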
Tight Lower Bounds on Worst-Case Guarantees for Zero-Shot Learning with Attributes
This paper presents a mathematical analysis of zero-shot learning with attributes, deriving a tight lower bound on the worst-case error of the best map from attributes to classes and showing that this bound is predictive of how standard zero-shot methods behave in practice.
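A toy illustration of the quantity being bounded (not the paper's bound itself): the worst-case, over classes, error of a map from noisy attribute detections to classes, estimated by simulation with made-up attribute signatures and a fixed detector flip rate.

```python
# Estimate the worst-case error of nearest-signature decoding from noisy
# attribute detections; signatures and flip rate are made up.
import numpy as np

rng = np.random.default_rng(0)
signatures = np.array([[1, 1, 0],      # class 0's attribute signature
                       [1, 0, 1],      # class 1
                       [0, 1, 1]])     # class 2
flip_rate, trials = 0.2, 20000

errors = []
for c, sig in enumerate(signatures):
    flips = rng.random((trials, 3)) < flip_rate
    noisy = np.logical_xor(sig, flips).astype(int)
    # Decode to the class with the closest signature (Hamming distance).
    dists = np.abs(noisy[:, None, :] - signatures[None, :, :]).sum(-1)
    pred = dists.argmin(axis=1)
    errors.append((pred != c).mean())

print("per-class error:", np.round(errors, 3))
print("worst-case error:", max(errors))
```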
AutoWS-Bench-101: Benchmarking Automated Weak Supervision with 100 Labels
AutoWS-Bench-101 is a framework for evaluating automated weak supervision techniques against baselines such as zero-shot foundation models and supervised learning, given an initial budget of 100 labels, to help practitioners choose the best method for generating additional labels.
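A sketch of the shape of such an evaluation, assuming a 100-label budget and a scikit-learn dataset as stand-ins: each method is fit under the same budget and compared on held-out accuracy, with the automated-WS pipeline stubbed out.

```python
# Compare methods under a fixed 100-label budget; dataset and baseline are
# illustrative stand-ins for the benchmark's tasks and methods.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)
idx = rng.permutation(len(X))
budget, test = idx[:100], idx[1000:]   # 100-label budget, held-out test set

def supervised_100(Xtr, ytr):
    return LogisticRegression(max_iter=2000).fit(Xtr, ytr)

methods = {"supervised-100": supervised_100}
# An automated-WS method would go here: generate labeling functions from
# the 100 points, aggregate pseudo-labels for the rest, then train.

for name, fit in methods.items():
    model = fit(X[budget], y[budget])
    print(f"{name}: test accuracy = {model.score(X[test], y[test]):.3f}")
```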
Lifting Weak Supervision To Structured Prediction
This paper shows that weak supervision can be lifted beyond classification to structured prediction settings, including rankings, graphs, and manifolds, while providing generalization guarantees nearly identical to those of models trained on clean data.
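A minimal sketch of one structured instance, rankings: several weak sources each emit a noisy ranking, aggregated here with Borda counts. The paper develops general estimators and guarantees for metric-space labels; Borda is just a simple illustrative aggregator.

```python
# Aggregate noisy rankings from weak sources with Borda counts; a simple
# stand-in for the paper's general structured aggregation.
from collections import defaultdict

noisy_rankings = [
    ["a", "b", "c", "d"],
    ["b", "a", "c", "d"],
    ["a", "c", "b", "d"],
]

scores = defaultdict(float)
for ranking in noisy_rankings:
    for pos, item in enumerate(ranking):
        scores[item] += len(ranking) - pos     # Borda points per source

aggregated = sorted(scores, key=scores.get, reverse=True)
print(aggregated)   # ['a', 'b', 'c', 'd']
```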