Watch this keynote from Hanlin Tang on the science and engineering behind LLMs in enterprise applications, the choice between buying LLM services from external providers and building custom models, specialized domain-specific models, and more.
Learn about Fiddler Auditor, the open source robustness library that facilitates red teaming of LLMs. Robustness testing is a critical step in pre-production to minimize hallucinations, bias, and adversarial attacks.
Watch this panel session with Ricardo Baeza-Yates, Director of Research at the Institute for Experiential AI, Northeastern University; Miriam Vogel, President and CEO at EqualAI and Chair of the National AI Advisory Committee; and Toni Morgan, Responsible Innovation Manager at TikTok, to learn why responsible AI principles and frameworks are necessary, why model governance is important, and the limitations of LLMs.
Hear from Amit Prakash, CTO and Co-founder at ThoughtSpot; Diego Oppenheimer, Partner and CEO in Residence at Factory; and Roie Schwaber-Cohen, Staff Developer Advocate at Pinecone on how LLMOps optimizes MLOps for LLMs, the key pieces of a generative AI workflow, and how ML teams can leverage LLMs in their applications.
Watch this session from Ali Arsanjani, PhD, Head of the AI Center of Excellence at Google, to learn about the importance of explainability, adaptability, and risk minimization in enterprise generative AI.
Watch this panel session with Payel Das, Principal Research Staff Member and Manager, Trusted AI at IBM Research; Srinath Sridhar, CEO and Co-founder at Regie.ai; and Casey Corvino, CTO and Co-founder at Lavender AI, to learn how engineering teams are using generative AI to improve productivity and how IBM is using generative AI to drive medical advances.
Watch this session with George Mathew, Managing Director at Insight Partners, to learn about the current state of AI, the different modalities of human and machine interaction, and the advancements and challenges AI presents.
Watch Chaoyu Yang, Founder and CEO at BentoML, discuss the challenges in moving LLM app prototypes into production, when to consider using an open-source LLM vs OpenAI endpoints, and more.
Watch our discussion with Jure Leskovec, Professor of Computer Science at Stanford University and Co-founder at Kumo.AI, on considerations for incorporating GNNs into generative AI models and LLM workflows, and examples of real-world AI applications using GNNs.
Watch this latest on-demand webinar with Peter Norvig, Distinguished Education Fellow at Stanford’s Human-Centered Artificial Intelligence Institute, to learn human-in-the-loop best practices for generative applications, considerations for AI safety, and more!
On this episode, we’re joined by Peter Norvig, a Distinguished Education Fellow at the Stanford Institute for Human-Centered AI and co-author of popular books on AI, including Artificial Intelligence: A Modern Approach and more recently, Data Science in Context.