Season 1 | Episode 7

AI Safety and Alignment with Amal Iyer

In this episode, we’re joined by Amal Iyer, Sr. Staff AI Scientist at Fiddler AI. 

Large-scale AI models trained on internet-scale datasets have ushered in a new era of technological capabilities, some of which now match or even exceed human ability. This progress underscores the importance of aligning AI with human values to ensure its safe and beneficial integration into society. In this talk, we provide an overview of the alignment problem and highlight promising areas of research, spanning scalable oversight, robustness, and interpretability.

About the guest
Amalendu (Amal) Iyer is a Sr. Staff AI Scientist at Fiddler, where he is responsible for developing systems and algorithms to monitor, evaluate, and explain ML models. He also leads the development of Fiddler Auditor, an open-source project for evaluating the robustness and safety of large language models. Prior to joining Fiddler, Amal worked at HP Labs, where he led research on self-supervised learning techniques for improving the data efficiency of ML models and on deep reinforcement learning. Before that, at Qualcomm AI Research, he was part of the team that developed the Snapdragon Neural Processing SDK and built speech recognition models for voice UI applications. Amal obtained his M.S. from the University of Florida and his B.S. from the University of Mumbai, both in Electrical and Computer Engineering.