Latest posts

MLOps: Towards DevOps for data-centric AI with Ce Zhang

The future of data-centric AI talk series. Don’t miss the opportunity to gain an in-depth understanding of data-centric AI and learn best practices from real-world implementations. Connect with fellow data scientists, machine learning engineers, and AI leaders from academia and industry across over 30 virtual sessions. Save your seat at The Future of Data-Centric AI, happening on August 3-4, 2022…

June 2, 2022

What to expect at The Future of Data-Centric AI 2022

30+ sessions by 40+ speakers in 2 action-packed days. Last year we organized The Future of Data-Centric AI conference to explore the shift from model-centric to data-centric AI. Speakers included researchers and industry experts such as Andrew Ng (Landing AI), Anima Anandkumar (NVIDIA), Chris Re (Stanford AI Lab), Michael D’Andrea (Genentech), Skip McCormick (BNY Mellon), Imen Grida Ben Yahia (Orange)…

Devang Sachdev
June 1, 2022

Auto LF generation: Lots of little models, big benefits

Constructing labeling functions (LFs) is at the heart of using weak supervision. We often think of these labeling functions as programmatic expressions of domain expertise or heuristics. Indeed, much of the advantage of weak supervision is that we can save time: writing labeling functions and applying them to data at scale is much more efficient than hand-labeling huge numbers of…
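
As a hedged illustration of the “lots of little models” idea in the title (a sketch under our own assumptions, not the method the post describes), one could fit many small scikit-learn models on tiny labeled slices and wrap each as a labeling function that abstains when unconfident:

```python
# Sketch: each small model becomes a labeling function that votes only
# when its predicted probability clears a confidence threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression

ABSTAIN = -1  # Snorkel's convention for "no vote"

def make_model_lf(X_small, y_small, threshold=0.8):
    """Train a small model and wrap it as an abstain-capable labeling function."""
    model = LogisticRegression(max_iter=1000).fit(X_small, y_small)

    def lf(x):
        probs = model.predict_proba(np.asarray(x).reshape(1, -1))[0]
        if probs.max() >= threshold:
            return int(model.classes_[np.argmax(probs)])
        return ABSTAIN

    return lf
```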

May 31, 2022

Building a COVID fact-checking system with external knowledge

Powerful resources to leverage as labeling functions. In this post, we’ll use the COVID-FACT dataset to demonstrate how to use existing resources as labeling functions (LFs) to build a fact-checking system. The COVID-FACT dataset contains 4,086 claims about the COVID-19 pandemic, along with evidence for the claims and contradictory claims refuted by the evidence. The evidence retrieval is formulated…
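
For instance, one such labeling function might compare a claim against its retrieved evidence (a minimal sketch; the field names x.claim and x.evidence, the threshold, and the label scheme are assumptions about the dataset schema, not from the post):

```python
from snorkel.labeling import labeling_function

ABSTAIN, REFUTED, SUPPORTED = -1, 0, 1

@labeling_function()
def lf_evidence_overlap(x):
    # Hypothetical heuristic: if most claim tokens also appear in the
    # retrieved evidence, vote SUPPORTED; otherwise abstain.
    claim_tokens = set(x.claim.lower().split())
    evidence_tokens = set(x.evidence.lower().split())
    overlap = len(claim_tokens & evidence_tokens) / max(len(claim_tokens), 1)
    return SUPPORTED if overlap > 0.8 else ABSTAIN
```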

Annie Yang
May 26, 2022

Snorkel AI FAQ

Browse through these FAQs to find answers to commonly asked questions about Snorkel AI, Snorkel Flow, and data-centric AI development. Have more questions? Contact us. Programmatic labeling use cases: 1. What is a labeling function? A labeling function (LF) is an arbitrary function that takes in a data point and outputs a proposed label or abstains. The logic used to…
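
For instance, with the open-source Snorkel library (the field name, keyword, and label scheme below are illustrative, not from the FAQ itself):

```python
from snorkel.labeling import labeling_function

# In Snorkel's convention, -1 means the labeling function abstains.
ABSTAIN, OTHER, SPAM = -1, 0, 1

@labeling_function()
def lf_contains_free_offer(x):
    # Heuristic: messages mentioning a "free" offer are likely spam;
    # abstain whenever the heuristic does not fire.
    return SPAM if "free" in x.text.lower() else ABSTAIN
```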

May 25, 2022

Panel discussion: Academic and industry perspectives on ethical AI

This post showcases a panel discussion on academic and industry perspectives on ethical AI, moderated by Alexis Zumwalt, Director of Federal Strategy and Growth. Panelists included Swati Gupta, Fouts Family Early Career Professor and Lead of Ethical AI (NSF AI Institute AI4OPT), Georgia Institute of Technology; Thomas Sasala, Chief Data Officer, Department of the Navy; and the Senior Manager of Responsible…

May 24, 2022

Event recap: Adopting trustworthy AI for government

We are experiencing a rapid AI revolution, with adoption of technologies ranging from autonomous cars to virtual assistants, robotic surgeries, and much more, making it challenging for our government agencies to keep up. Adding AI technologies to the mix can make systems even harder to manage. The crucial adoption of trustworthy AI and its successful integration…

Alexis Zumwalt
May 23, 2022

Programmatic labeling

The founding team of Snorkel AI has spent over half a decade—first at the Stanford AI Lab and now at Snorkel AI—researching programmatic labeling and other techniques for breaking through the biggest bottleneck in AI: the lack of labeled training data. This research has resulted in the Snorkel research project and 150+ peer-reviewed publications. Snorkel’s programmatic labeling technology has been…

May 22, 2022

Weak supervision

The founding team of Snorkel AI has spent over half a decade—first at the Stanford AI Lab and now at Snorkel AI—researching weak supervision (WS) and other techniques for breaking through the biggest bottleneck in AI: the lack of labeled training data. This research has resulted in the Snorkel research project and 150+ peer-reviewed publications. Snorkel’s technology, which applies weak…

May 17, 2022

Data-centric AI: A complete primer

The founding team of Snorkel AI has spent over half a decade—first at the Stanford AI Lab and now at Snorkel AI—researching data-centric techniques to overcome the biggest bottleneck in AI: the lack of labeled training data. In this video, Snorkel AI co-founder Paroma Varma gives an overview of the key principles of data-centric AI development. What is data-centric AI?…

May 17, 2022

Data extraction from SEC filings (10-Ks) with Snorkel Flow

Leveraging Snorkel Flow to extract critical data from annual reports (10-Ks). Introduction: It can surprise those who have never logged into EDGAR how much information is available in annual reports from public companies. You can find tactical details like the names of senior leadership and top shareholders, as well as more strategic information like earnings, risk factors, and the company’s strategy and vision. Warren…

May 10, 2022

Liger: Fusing foundation model embeddings & weak supervision

Showcasing Liger, a system that fuses foundation model embeddings with weak supervision techniques. Machine learning whiteboard (MLW) open-source series. In this talk, Mayee Chen, a PhD student in computer science at Stanford University, focuses on her work combining weak supervision and foundation model embeddings to improve two essential aspects of current weak supervision techniques. Check out the full episode here or…
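
A rough sketch of one intuition behind fusing the two (a simplified illustration under our own assumptions, not the paper’s actual algorithm): extend a labeling function’s votes to nearby points in foundation-model embedding space, raising coverage where the LF abstained:

```python
# Sketch: propagate each non-abstain vote to close neighbors in the
# embedding space produced by a foundation model.
import numpy as np

ABSTAIN = -1

def extend_votes(embeddings, votes, radius=0.1):
    """Copy the nearest in-radius vote onto points where the LF abstained."""
    extended = votes.copy()
    voted = np.where(votes != ABSTAIN)[0]
    for i in np.where(votes == ABSTAIN)[0]:
        # Distance from the abstained point to every point that has a vote.
        dists = np.linalg.norm(embeddings[voted] - embeddings[i], axis=1)
        if len(dists) and dists.min() < radius:
            extended[i] = votes[voted[np.argmin(dists)]]
    return extended
```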

May 9, 2022

AI in cybersecurity: an introduction and case studies

An introduction to AI in cybersecurity with real-world case studies in a Fortune 500 organization and a government agency Despite all the recent advances in artificial intelligence and machine learning (AI/ML) applied to a vast array of application areas and use cases, success in AI in cybersecurity remains elusive. The key component to building AI/ML applications is training data, which…

Nic Acton
May 5, 2022

Active learning: an overview

A primer on active learning presented by Josh McGrath. Machine learning whiteboard (MLW) open-source series. This video defines active learning, explores variants of and design decisions within active learning pipelines, and compares it to related methods. It contains references to some seminal papers in machine learning that we find instructive. Check out the full video below or on YouTube. Additionally, a…
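
As a minimal example of the most common variant, uncertainty sampling (the model and pool names here are illustrative, not from the talk), the query-selection step might look like:

```python
# Pick the unlabeled pool points the current model is least confident
# about; these are the ones sent to an annotator next.
import numpy as np

def pick_queries(model, X_pool, batch_size=10):
    """Return indices of the least-confident points in the pool."""
    probs = model.predict_proba(X_pool)      # shape: (n_points, n_classes)
    confidence = probs.max(axis=1)           # confidence of top prediction
    return np.argsort(confidence)[:batch_size]
```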

May 4, 2022

Using few-shot learning language models as weak supervision

Utilizing large language models as zero-shot and few-shot learners with Snorkel for better quality and more flexibility. Large language models (LLMs) such as BERT, T5, GPT-3, and others are exceptional resources for applying general knowledge to your specific problem. Being able to frame a new task as a question for a language model (zero-shot learning), or showing it a few…
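
A hedged sketch of the pattern (ask_llm is a hypothetical helper standing in for whatever LLM client is used; the prompt and label scheme are illustrative):

```python
from snorkel.labeling import labeling_function

ABSTAIN, NEGATIVE, POSITIVE = -1, 0, 1

@labeling_function()
def lf_zero_shot_sentiment(x):
    # ask_llm is a hypothetical helper that sends a prompt to a large
    # language model and returns its text completion.
    answer = ask_llm(f"Is the following review positive or negative?\n{x.text}")
    if "positive" in answer.lower():
        return POSITIVE
    if "negative" in answer.lower():
        return NEGATIVE
    return ABSTAIN  # the model's answer was not parseable
```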

May 3, 2022

Accelerating AI in healthcare

How data-centric AI speeds up your end-to-end healthcare AI development and deployment. Healthcare is a field that is awash in data, and managing it all is complicated and expensive. As an industry, it benefits tremendously from the ongoing development of machine learning and data-centric AI. The potential benefits of AI integration in healthcare can be broken down into two categories:…

April 29, 2022

Bill of materials for responsible AI: collaborative labeling

In our previous posts, we discussed how explainable AI is crucial to ensure the transparency and auditability of your AI deployments and how trustworthy AI adoption and its successful integration into our country’s critical infrastructure and systems are paramount. In this post, we dive into making trustworthy and responsible AI possible with Snorkel Flow, the data-centric AI platform for government and federal agencies. Collaborative labeling and…

Alexis Zumwalt
April 28, 2022

ICLR 2022 recap from Snorkel AI

We are honored to be part of the International Conference on Learning Representations (ICLR) 2022, where Snorkel AI founders and researchers will be presenting five papers on data-centric AI topics. The field of artificial intelligence moves fast! This is a world we are intimately familiar with at Snorkel AI, having spun out of academia in 2019. For over half a…

April 20, 2022

Explainability through provenance and lineage

In our previous post, we discussed how trustworthy AI adoption and its successful integration into our country’s critical infrastructure and systems are paramount. In this post, we discuss how explainability in AI is crucial to ensure the transparency and auditability of your AI deployments. Outputs from trustworthy AI applications must be explainable in understandable terms based on the design and implementation of…

Alexis Zumwalt
April 19, 2022

Spring 2022 Snorkel Flow release roundup

Latest features and platform improvements for Snorkel Flow. 2022 is off to a strong start as we continue to make the benefits of data-centric AI more accessible to the enterprise. With this release, we’re further empowering AI/ML teams to drive rapid, analysis-driven training data iteration and development. Improvements include streamlined data exploration and programmatic labeling workflows, integrated active learning and AutoML,…

Molly Friederich
April 14, 2022

Introduction to trustworthy AI

The adoption of trustworthy AI and its successful integration into our country’s most critical systems is paramount to achieving the goal of employing AI applications to accelerate economic prosperity and national security. However, traditional approaches to developing AI applications suffer from a critical flaw that leads to significant ethics and governance concerns. Specifically, AI today relies on massive, hand-labeled training datasets…

Alexis Zumwalt
April 7, 2022

How to better govern ML models? Hint: auditable training data

ML models will always have some level of bias. Rather than relying on black-box algorithms, how can we make the entire AI development workflow more auditable? How do we build applications where bias can be easily detected and quickly managed? Today, most organizations focus their model governance efforts on investigating model performance and the bias within the predictions. Data science…

April 6, 2022

Algorithms that leverage data from other tasks with Chelsea Finn

The future of data-centric AI talk series. Background: Chelsea Finn is an assistant professor of computer science and electrical engineering at Stanford University, whose research has been widely recognized, including in the New York Times and MIT Technology Review. In this talk, Chelsea talks about algorithms that use data from tasks you are interested in and data from other tasks…

March 31, 2022

Learning with imperfect labels and visual data with Anima Anandkumar

The future of data-centric AI talk series. Background: Anima Anandkumar holds dual positions in academia and industry. She is a Bren Professor at Caltech and the director of machine learning research at NVIDIA. Anima also has a long list of accomplishments, ranging from an Alfred P. Sloan Research Fellowship to the prestigious NSF CAREER Award and many more. She recently joined…

March 18, 2022

Weak supervision modeling with Fred Sala

Understanding the label model. Machine learning whiteboard (MLW) open-source series. Background: Frederic Sala is an assistant professor at the University of Wisconsin-Madison and a research scientist at Snorkel AI. Previously, he was a postdoc in Chris Re’s lab at Stanford. His research focuses on data-driven systems and weak supervision. In this talk, Fred focuses on weak supervision modeling. This machine…
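
In the open-source Snorkel library, the label-model step looks roughly like this (lf_1 through lf_3 and df_train stand in for your own labeling functions and DataFrame):

```python
# Apply LFs over a DataFrame, then fit the label model to denoise their
# votes into probabilistic training labels.
from snorkel.labeling import PandasLFApplier
from snorkel.labeling.model import LabelModel

applier = PandasLFApplier(lfs=[lf_1, lf_2, lf_3])
L_train = applier.apply(df_train)            # (n_examples, n_lfs) vote matrix

label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=500, seed=42)
probs = label_model.predict_proba(L_train)   # probabilistic training labels
```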

March 17, 2022

Tips for using a data-centric AI approach

The future of data-centric AI talk series. Background: Andrew Ng is a machine-learning pioneer, founder and CEO of Landing AI, and a former team leader at Google Brain. Recently, he gave a presentation at The Future of Data-Centric AI virtual conference, where he discussed some practical tips for responsible data-centric AI development. This presentation dives into tips for data-centric AI applicable…

March 9, 2022

Resilient enterprise AI application development

Using a data-centric approach to capture the best of rule-based systems and ML models for enterprise AI. One of the biggest challenges to making AI practical for the enterprise is keeping the AI application relevant (and therefore valuable) in the face of ever-changing input data and evolving business objectives. Practitioners typically use one of two approaches to build these AI applications:…

March 3, 2022

How AI can be used to rapidly respond to information warfare in the Russia-Ukraine conflict

Proliferating web technology has contributed to information warfare in recent conflicts. Artificial intelligence (AI) can play a significant role in stemming disinformation campaigns and cyber-attacks, and in informing diplomacy in the rapidly evolving situation in Ukraine. Snorkel AI is dedicated to supporting the National Security community and other enterprise organizations with state-of-the-art AI technology. We see this as our responsibility in the…

Nic Acton
February 28, 2022

How Genentech extracted information for clinical trial analytics with Snorkel Flow

Genentech, a global biotech leader and member of the Roche Group, leveraged Snorkel Flow to extract critical information from lengthy clinical trial protocol (CTP) PDF documents. They built AI applications that used NER, entity linking, text extraction, and classification models to determine inclusion/exclusion criteria and to analyze Schedules of Assessments. Genentech’s team achieved 95-99% model accuracy by using Snorkel…

February 26, 2022