We’re experiencing a rapid AI revolution, with technologies ranging from autonomous cars to virtual assistants to robotic surgery being adopted faster than government agencies can keep pace with. The crucial adoption of trustworthy AI, and its successful integration into our country’s most critical systems, is paramount to employing AI applications that accelerate economic prosperity and national security. Last month, on April 21, Snorkel AI hosted a virtual event, Trustworthy AI: A Practical Roadmap for Government, exploring these issues and the solutions Snorkel Flow provides.

Watch the full event below:
These are the topics that we covered:
- Trustworthy AI challenges, policy frameworks, and practical solutions
- Explainability and Transparency
- Detecting and Mitigating Bias
- Supply Chain Integrity for AI/ML
- Governance and Auditability
- Provenance and Lineage
- The challenges of relying on manual data labeling
- New data-centric approaches to Trustworthy AI
- Securely operationalizing AI/ML
- AI/ML use cases for the Government
The following article captures some of the highlights from the event:
Government keynote presentation with FBI’s CTO, Gregory Ihrie
Our government speaker, Gregory Ihrie, CTO of the FBI, delved into how recent events around the world have accelerated the need for auditable machine learning models with traceable lineage. Gregory explained that the Federal Bureau of Investigation (FBI) has a long history of seeking out and identifying technologies that can enhance its capabilities and allow it to better carry out its mission. At the same time, the FBI, as a part of the Department of Justice and the Executive Branch, is bound by the Constitution and must always operate within the bounds of established law and policy. In other words, the FBI needs to deliver AI that is both effective and trustworthy.
Academic and industry perspectives on ethical AI
A panel of academic and industry experts shared their perspectives on the current state of ethical AI. Speakers included:
- Swati Gupta – Fouts Family Early Career Professor and Lead of Ethical AI (NSF AI Institute AI4OPT), Georgia Institute of Technology
- Thomas Sasala – Chief Data Officer, Department of the Navy
- Sakshi Jain – Senior Manager of Responsible AI (Equity and Explainability), LinkedIn
- Skip McCormick – Data Science Fellow, BNY Mellon
- Moderated by Alexis Zumwalt – Director of Strategy and Growth, Snorkel AI
A data-centric AI roadmap for trustworthy AI with Alex Ratner
Snorkel AI’s CEO and Co-founder, Alexander Ratner, explained how a data-centric roadmap with Snorkel Flow can streamline the development and deployment of trustworthy AI for the government.
The benefits of programmatic labeling for trustworthy AI with Braden Hancock
Snorkel AI’s Head of Technology and Co-founder, Braden Hancock, discussed how to use the Snorkel Flow platform to achieve trustworthy AI across a variety of use cases through programmatic labeling.
Snorkel Flow aims to solve the most pressing problems government agencies face when deploying trustworthy AI, including data governance, budget tracking, and managing third-party risk. If you’d like to learn more about trustworthy AI, stay tuned: we plan to release a series of posts discussing these important topics, how trustworthy AI can be adapted for federal entities across the US, and how Snorkel Flow can help you achieve just that.

Snorkel AI has successfully delivered products and results to multiple federal government partners. To speak with our federal team about how Snorkel AI can support your efforts to understand and develop trustworthy and responsible AI applications, contact firstname.lastname@example.org.