AI Explained: The Agentic Gap: What Enterprises Think vs. What Actually Works
April 9, 2026
10:00AM PT / 1:00PM ET
Registration is now closed. Please check back later for the recording.
Enterprises are piloting agents, but most are still thinking in terms of models, not systems. As agentic deployments ramp up, teams need a clear picture of what to measure. Join Jeff Dalton, Head of AI and Chief Scientist at Valence, for a candid conversation about where agentic AI really stands today and what it takes to move from ambition to deployment.
What you'll learn:
- How agentic architectures are evolving beyond single models, and what that shift implies for how systems are built.
- How frontier research in memory, context, and orchestration is shaping the next generation of agentic systems.
- What makes evaluating agentic systems fundamentally harder than evaluating models, and how to think about measuring agents.
AI Explained is our AMA series featuring experts on the most pressing issues facing agentic and ML teams.
Featured Speakers
Jeff Dalton
Head of AI and Chief Scientist at Valence
Jeff Dalton is Head of AI and Chief Scientist at Valence, the company behind Nadia, the AI coach that nearly 100 of the Fortune 500 use. Jeff has more than 100 published research papers on search, information retrieval, and natural language understanding, and at Valence, his work focuses on best-in-class context and memory for AI coaching. Jeff is a Turing Fellow, and prior to joining Valence, he was at Google, where he developed language understanding capabilities for Google Assistant and built next-generation knowledge graphs for Google Search.
Joshua Rubin
Head of AI Science at Fiddler AI
Joshua Rubin is Head of AI Science at Fiddler AI, an enterprise AI observability company. He built and led a data science team that developed novel explainability tools for computer vision and multimodal deep-learning models, along with techniques for measuring model robustness and drift in unstructured data, both key components of Fiddler's LLM observability product. Most recently, he has been developing small BERT-scale models to close the feedback loop on measuring large language model performance, serving customers including cloud-native travel platforms, large financial services firms, ad-tech companies, and cryptocurrency exchanges.
