
AI Workshop Series

GenAI Evaluation Workshop

New York

Tuesday, April 22, 2:00 – 5:00 PM

Learn GenAI Evaluation Techniques

Unlock the power of GenAI evaluation through an interactive, hands-on workshop designed for AI practitioners. This 3-hour workshop simplifies the process of evaluating Generative AI outputs with SME input. Our framework empowers organizations to identify where LLM responses diverge from organizational needs—a precursor to better aligning AI systems.

The workshop will be at Ludlow House, 139 Ludlow Street, NY, NY 10002.

Bryan Wood

Principal AI Architect
Snorkel AI (ex Bank of America)

James Wang

Applied ML Engineer
Snorkel AI

Register for NYC workshop

By submitting this form, I agree to the Terms of Use and acknowledge that my information will be used in accordance with the Privacy Policy.

Agenda

2:00 PM

Registration

2:30 PM

Hands-on Workshop

4:30 PM

Networking

Hands-On Training for Specialized AI

Attendees will learn the fundamentals of GenAI evaluation and how to apply them with the Snorkel AI Data Platform.

The workshop will include exercises that guide attendees through the process of building evaluators and data slices to identify how individual responses perform along multiple axes, and how the model performs in aggregate on task-specific subsets of prompts.

In this interactive workshop, you’ll learn how to:

  • Create evaluations based on domain and use case requirements
  • Implement LLM-as-a-judge evaluators to enforce acceptance criteria
  • Validate evaluator correctness with SME-provided ground truth
  • Categorize evaluation inputs to identify failures in a business context
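As a preview of the kind of evaluator covered above, here is a minimal, hedged sketch of an LLM-as-a-judge check. The `call_llm` function is a hypothetical stand-in for any chat-completion client (here stubbed with a trivial heuristic so the snippet runs on its own); in the workshop, evaluators like this are built and validated against SME-provided ground truth.

```python
# Illustrative LLM-as-a-judge sketch. `call_llm` is a hypothetical
# placeholder for a real LLM API call, stubbed here so the example runs.
def call_llm(prompt: str) -> str:
    # Stub heuristic standing in for a real model call.
    return "PASS" if "cite" in prompt.lower() else "FAIL"

JUDGE_TEMPLATE = (
    "You are a strict reviewer. Acceptance criterion: {criterion}\n"
    "Response to evaluate:\n{response}\n"
    "Answer PASS or FAIL."
)

def judge(response: str, criterion: str) -> bool:
    """Return True if the judge model says the response meets the criterion."""
    verdict = call_llm(JUDGE_TEMPLATE.format(criterion=criterion, response=response))
    return verdict.strip().upper().startswith("PASS")
```

In practice, the judge prompt, acceptance criteria, and pass/fail parsing would all be tuned per use case and validated against labeled examples before being trusted in aggregate metrics.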

Sample Exercise

Create an LLM evaluator specific to a use case and domain.
