This paper proposes generalizations of CWoLa and SALAD that exploit multiple reference datasets to improve performance in resonant anomaly detection, and it provides finite-sample guarantees that go beyond existing asymptotic analyses.
This paper proposes “Ask Me Anything” (AMA), a prompting method that uses weak supervision to combine noisy predictions from multiple prompts generated by an LLM, yielding an average 10.2% performance lift over the few-shot baseline across a variety of open-source models.
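AMA aggregates the prompts' noisy predictions with a weak-supervision label model; as a much simpler stand-in for that idea, the sketch below combines per-prompt predictions with a plain majority vote. The function name and data layout are illustrative, not from the paper.

```python
from collections import Counter

def aggregate_prompt_votes(predictions_per_prompt):
    """Combine noisy per-prompt predictions by majority vote.

    predictions_per_prompt[i][j] is prompt i's predicted label
    for example j. (AMA itself learns prompt accuracies with a
    weak-supervision label model; majority vote is a simplification.)
    """
    n_examples = len(predictions_per_prompt[0])
    combined = []
    for j in range(n_examples):
        votes = Counter(p[j] for p in predictions_per_prompt)
        combined.append(votes.most_common(1)[0][0])
    return combined

# Three prompts vote on four yes/no questions.
votes = [
    ["yes", "no", "yes", "no"],
    ["yes", "yes", "yes", "no"],
    ["no",  "no", "yes", "yes"],
]
print(aggregate_prompt_votes(votes))  # → ['yes', 'no', 'yes', 'no']
```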
The authors propose contrastive adapters, an efficient adapter-training strategy that improves the group robustness of large pretrained foundation models (FMs) without finetuning them, yielding accuracy gains of up to 56.0 percentage points over zero-shot inference.
Zero-shot learning with Common Sense Knowledge Graphs is a general-purpose framework that uses a novel transformer graph convolutional network to generate class representations from common sense knowledge graphs, improving over existing WordNet-based methods on zero-shot learning tasks.
This paper demonstrates that WEAPO, a Weak Supervision method for binary classification tasks with only positive labeling sources, is effective and efficient, achieving the highest label quality and final-classifier accuracy among the tested Weak Supervision approaches on 10 benchmark datasets.
This paper presents a mathematical analysis of zero-shot learning with attributes, deriving a tight lower bound on the worst-case error of the best map from attributes to classes and showing that this bound predicts how standard zero-shot methods behave in practice.
AutoWS-Bench-101 is a framework for evaluating automated weak supervision techniques against baselines such as zero-shot foundation models and supervised learning, helping practitioners choose the best method for generating additional labels.
This paper shows that weak supervision extends beyond classification to settings including rankings, graphs, and manifolds, with generalization guarantees nearly identical to those of models trained on clean data.