On this episode, we’re joined by Patrick Hall, Co-Founder of BNH.AI.
We will delve into critical aspects of AI, such as model risk management, generating adverse action notices, addressing algorithmic discrimination, ensuring data privacy, fortifying ML security, and implementing advanced model governance and explainability.
Mary Reagan: Welcome to Fiddler's AI Explained. I'm with Patrick Hall. Patrick Hall is the co-founder and principal scientist of BNH.AI. He's also visiting faculty at the George Washington University School of Business.
Mary Reagan: He advises clients on AI risk, and we're going to be diving into a lot today. I'm really excited to hear what Patrick has to say. My name's Mary Reagan. I've worn a couple of different hats in my lifetime: I started out as a data scientist, moved into product management, and most recently I've been helping with our community efforts here at Fiddler.
Mary Reagan: Patrick, is there anything you want to say to kick us off and introduce yourself that I might have missed?
Patrick Hall: Just briefly, some of the things we were talking about before people jumped on: I've helped my firm support the NIST AI Risk Management Framework, which I think is a really important development for people working in our space.
Patrick Hall: And also, as we were discussing, I sit on the board of what's known as the AI Incident Database, which is an open source intelligence and information sharing effort around public failures of AI and machine learning systems. I bring those up because I hope they frame some of the points we'll get into later.
Mary Reagan: Great, so let's get started. Your law firm helps businesses manage a variety of things: privacy, fairness, security, and transparency of AI. You're providing highly technical advice on managing AI models, and this ends up affecting hundreds of thousands to millions of people around the world.
Mary Reagan: Can you give us an overview of your work?
Patrick Hall: Yes. And one more thing I should have said when you asked if I had anything to add: I'm not a lawyer, and nothing I'm saying is legal advice. If anybody in the audience would like legal advice on these topics, I can connect you to actual attorneys. I run the technical side of the house at BNH.AI, and you gave a good basic description of what we do. To layer in some more details: we do a lot of data privacy work, though I'm probably less involved in that; some of my attorney partners are better qualified to work on specifically legal data privacy issues.
Patrick Hall: I do sometimes advise on the privacy-enhancing technologies side of data privacy. We also do a lot of work in non-discrimination. As I'm sure audience members are aware, for many reasons, not just bad data, AI and machine learning systems have a nasty tendency to perpetuate existing social biases, and we do a lot of work around policies, governance, testing, and remediation of those biases in machine learning systems. We do a lot of model audits, too, and that can mean different things to different firms.
Patrick Hall: A model audit is essentially when we come in, investigate a model, and try to hold it to account against some standard, whether that's an existing law or the NIST AI Risk Management Framework; it depends on the client's needs. For transparency, it's often about adverse action notices: lots of finance companies have questions about how to generate accurate adverse action notices with machine learning models. And there's even red teaming, looking into the security of different machine learning systems.
Patrick Hall: So that's the range of it. Yeah, please.
Mary Reagan: Can you define that? We actually had a conversation a few weeks ago with someone from Lavender AI who mentioned red teaming, and it was actually a new term for me. So can you define what it means?
Patrick Hall: Yeah, it's an interesting term. I want to take one step back and say that one of the most interesting things we've done with NIST is work on a glossary of trustworthy AI terms.
Patrick Hall: If I have a second during the conversation, I'll try to find the link and put it in the chat. But I think there's a huge vocabulary problem in AI and data science in general, and trustworthy or responsible or ethical AI is no different; we're seeing what the issues are just in trying to determine what these terms mean.
Patrick Hall: And I would say red teaming is the same way. I hear people using red teaming in the more traditional information security sense, where you have a separate team or an external team come in and really test your systems for vulnerabilities in a very adversarial and thorough manner.
Patrick Hall: I also hear red teaming used almost interchangeably with a model audit or a model validation exercise. Especially in our work with generative AI, that seems to be the preferred way to talk about validation efforts around generative AI systems. So instead of saying we're validating this generative AI system, I oftentimes hear people say we're red teaming the system.
Patrick Hall: And I think that's totally fine. In some ways we need clear terms just to communicate, and that kind of bugs me, but in the end, as long as people are testing systems in a thorough, adversarial, and objective way, I don't much care what you call it. So red teaming can mean something very specific: bringing in an external or separate team to adversarially test the system for vulnerabilities, in the information security context.
Patrick Hall: But it seems to have taken on this broader sense of model validation in the generative AI world, and I think both are fine.
Mary Reagan: That totally makes sense, so thank you for that answer. And you're sort of in the thick of it, right? Every company that comes to you is actively seeking out ways to improve their responsible AI. But that maybe isn't the whole landscape; there isn't necessarily pressure, legally or from a policy perspective, to get companies to adopt any sort of responsible AI framework. I'm curious how you view that, and what you think the most significant barriers to widespread adoption of responsible AI practices are.
Patrick Hall: Well, I do think the market pressures in AI and machine learning are immense, right? People just want to develop and deploy as quickly as possible. That holds true, and in my opinion it's the main barrier to adoption of responsible AI practices, except in regulated verticals.
Patrick Hall: When you're working in employment, consumer finance, or housing, it's really like being in a different world; it's just a different world than, say, a big unregulated tech company. So if market incentives are the main blocker to the adoption of responsible AI practices, then I would say the main accelerant of adoption is regulation.
Patrick Hall: When we started BNH.AI, our thesis was that AI regulation would be coming, and at least to a certain extent, that's true. There's the new New York City law that mandates AI bias audits for tools used in employment. There's the EU AI Act, which for smaller companies in the US may not be such a big deal, but a lot of our clients are large multinational organizations, and for them the EU AI Act is a huge deal.
Patrick Hall: For them, the development and use of AI is really expected to be regulated in the EU by the end of this year. And there's constant talk of AI bills and AI regulation in the US. I'm not optimistic about that, but I do expect the EU AI Act to hit like a ton of bricks.
Patrick Hall: And if you're paying attention, which I don't know why most people would be, there are more and more local laws around the use of AI in the United States. That's similar to what made cybersecurity, both products and services, take off: state-level breach reporting requirements, which started to hit about 20 years ago.
Patrick Hall: So I see insanely high market pressures preventing the adoption of responsible AI, but I also see a steady drumbeat of regulation that should accelerate its adoption.
Mary Reagan: I see. And would you say your customer base right now is primarily in verticals that already have heavy policy, like the finance example you mentioned, or are you seeing a mix?
Patrick Hall: I'd say about half and half, really.
Mary Reagan: Interesting. I was curious too, because I was looking over your website before this, and you wrote there that the biggest barriers to the adoption of AI are not technical; they're legal, ethical, and policy related. I thought that was such an interesting statement, and I would love to hear you break it down.
Patrick Hall: I guess I still think that's true. We know how to do machine learning. We certainly don't always get it right, but the basics of training machine learning systems to make specific decisions about a task in a controlled environment, I would say that's a fairly well-known commodity.
Patrick Hall: It turns out we can also teach machine learning systems to write documents and generate pictures pretty well, too. So I think a lot of the basics of the technology are worked out. And you'll have to forgive me: I don't consider the giant deep learning hyperparameter Easter egg hunt that happened between roughly 2006 and today to be some kind of huge step forward in technology.
Patrick Hall: We had big breakthroughs in visual pattern recognition starting around the mid-aughts and into 2010 to 2015, and now we seem to have big breakthroughs in generative AI. I know that for technologists there's this constant focus on, oh, we figured out how to make this part of the optimization process better, or we made some attention mechanism work a little better. But, not to be mean to practitioners, I think for consumers those are fairly irrelevant developments. From a consumer or end user perspective, machine learning is somewhat well figured out.
Patrick Hall: The question is: can we manage the risk around it? Can we make sure it's legal and used legally? Can we align it to human values and human ethics? So my thesis continues to be that the basics of the technology are mostly worked out. Now it's about real-world adoption, in particular in high-impact use cases.
Patrick Hall: And both decision making and content generation in many high-impact use cases are already regulated. Whether it's done by a person or a computer is, again, a little bit irrelevant. It may be harder to understand how the laws apply, and harder to regulate or enforce them, when a computer does these things.
Patrick Hall: But for me, it's really these alignment, risk management, and policy questions around AI that are going to be the biggest roadblocks, not for the adoption of responsible AI, but for the adoption of AI and machine learning in general over, say, the next 10 to 20 years.
Mary Reagan: In that vein, I imagine everyone in our audience has an example that comes to mind of how AI risk was handled poorly. But I'm curious, from the position you sit in, whether you could walk us through some examples you've seen, while protecting your clients, of course.
Patrick Hall: Yeah. It's probably safer to talk about the AI Incident Database in this case. If you haven't looked at the AI Incident Database, you should Google it and go check it out. You might be better off just looking at AI incidents than listening to me talk for the next 40 minutes. There are just all kinds of failures of these systems, anything from a chess robot breaking somebody's finger to a robot in an Amazon...
Mary Reagan: Let me pause you for a second.
Patrick Hall: Yeah, yeah.
Mary Reagan: Can you explain what the Incident Database is, for people who don't know? What is its purpose? Let's start there.
Patrick Hall: Okay, so I think this is actually a really important point. In my experience, large organizations, and even individual people, really struggle with notions of fairness and privacy and transparency. Right? We all have different expectations about what those mean and how they might be implemented.
Patrick Hall: And your definition of fairness is probably as good as my definition of fairness. But when we go to implement them mathematically, we might find out that we have mutually exclusive norms. So these ethical notions, I think, are really hard for large organizations to deal with.
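(Those mutually exclusive norms can be made concrete with a small sketch, using hypothetical numbers not from the episode: when two groups have different base rates, even a perfectly accurate classifier can satisfy one common fairness definition while violating another.)

```python
# Toy numbers (hypothetical) showing two fairness definitions pulling apart.
# Group A: 60 of 100 people truly qualified; Group B: 30 of 100 qualified.
# Suppose a perfectly accurate classifier selects all and only the qualified.
qualified = {"A": 60, "B": 30}
total = {"A": 100, "B": 100}
selected = dict(qualified)  # perfect accuracy: selected == qualified

# Equal opportunity (equal true positive rates): satisfied, 100% in both groups.
tpr = {g: selected[g] / qualified[g] for g in qualified}

# Demographic parity (equal selection rates): violated, 60% vs. 30%.
selection_rate = {g: selected[g] / total[g] for g in total}

print(tpr)             # {'A': 1.0, 'B': 1.0}
print(selection_rate)  # {'A': 0.6, 'B': 0.3}
```

(This is one face of the well-known impossibility results in algorithmic fairness: with unequal base rates, several intuitive fairness criteria cannot all hold at once.)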
Patrick Hall: Incidents, on the other hand, are a known quantity from information security and transportation. Incidents don't really have to involve anyone's politics or ethics; they're just bad things that happen that cost you money or hurt people, and it's not really debatable whether money was spent or people were harmed.
Patrick Hall: So I think incidents are a good thing to build around to motivate the adoption of responsible machine learning or AI, because they can help you sidestep some of the really challenging issues around fairness and privacy. And the Incident Database is a database of incidents. It's indexed, searchable, and interactive, and at this point it has thousands of public reports covering more than 500 known public AI and machine learning system failures.
Patrick Hall: There are two goals of incident reporting in a database. The first, and most important, is: don't repeat past AI failures. Let's come back to that as probably the best example of mishandling AI risk. And second, there's a reputational ding that may disincentivize companies from acting on their highest-risk deployments if they have some inkling they might show up in this public database.
Patrick Hall: So the Incident Database has two purposes: one, information sharing to prevent repeated incidents; and two, making organizations and the public aware of these failures, and in that way maybe disincentivizing some of the highest-risk uses of AI. Getting back to what I'd consider a really good, slash bad, example of mishandling AI risk: there was a chatbot released in 2021 on South Korean social media, on the Kakao app.
Patrick Hall: That's a very popular Korean-language app. The chatbot started making denigrating comments about various types of people and had to be shut down. Now, this was a near-exact repeat of Microsoft Research's very high-profile Tay chatbot failure, where Tay was poisoned by Twitter users and started making all kinds of racist and obscene statements. And if you look at the marketing of these chatbots, they're both anthropomorphized, which is bad: they're both anthropomorphized cartoon images of young women, and they have light around their faces.
Patrick Hall: It was like the Scatter Lab designers just fully repeated the Tay failure, and I think that points to the real level of maturity in a lot of the design of these AI systems. People are still repeating the most famous AI incidents. Had they checked the good old AI Incident Database before they got started, they might have thought: hey, there's a risk that this chatbot might make biased statements toward different kinds of individuals.
Patrick Hall: And I have to add, on Lee-Luda, this is all public: it was also handing out people's personal information, because they apparently failed to scrub personal information from the training data. So it was both insulting people and violating data privacy law.
Patrick Hall: So there's a lot wrong there, but to me the most wrong thing is repeating a failed design. That's a major reason why I'm into the AI Incident Database: to get as much information out there about failed designs so that people hopefully stop repeating them.
Mary Reagan: You said one thing there: that the chatbot was anthropomorphized, and that that's bad. I'm curious.
Patrick Hall: Yeah. In my opinion.
Mary Reagan: Yeah. Well, yeah. Can you talk about why you think that's bad?
Patrick Hall: Yeah, I think it's bad because AI and machine learning systems are nowhere near as smart as people, and if we over-rely on them the way we might rely on human intelligence, people are going to get hurt. And I think self-driving cars...
Patrick Hall: Go ahead.
Mary Reagan: Yeah, so people are gonna get hurt just because, yeah, maybe you're gonna go right into one.
Patrick Hall: Yeah, I think the most obvious example of this is self-driving cars. I just don't think they're ready yet to drive around crowded, congested streets.
Patrick Hall: We can argue back and forth: can they operate in closed environments? Can they operate on predefined routes? Maybe. But what's absolutely clear to me is that we do not yet know how to make self-driving vehicles that can operate in really dense urban environments.
Patrick Hall: So if we anthropomorphize AI systems, the public, the consumers of the system, often start to think they're as smart as people, right? And I think that's been a big part of the issues around ChatGPT: it's presented as this human-like intelligence.
Patrick Hall: The main danger is that by anthropomorphizing these systems, we imbue them, in the public's mindset, with human-like intelligence. And that's dangerous, because these systems do not have human-like intelligence. Go look at the AI Incident Database; they fail in myriad ways.
Patrick Hall: It's just not a good thing to overhype the capabilities of consumer products, especially when we know they can lead to harmful outcomes. And I think anthropomorphization plays a large role in that kind of overhyping of current AI and machine learning capabilities.
Patrick Hall: I do not think we're at the dawn of artificial general intelligence, okay? Sorry, I'm sorry.
Mary Reagan: Yeah, I think that's really clear, and I agree with you on that a lot, so thanks for spelling it out. You mentioned, right before we got on, how the incident rate in the database had changed.
Patrick Hall: Yeah.
Mary Reagan: I don't know if you want to.
Patrick Hall: Yeah, sure, I'm happy to get into that. At one point, I was a leading contributor of incident reports to the database. That's how I got into it. And again, I got into it because...
Mary Reagan: I want to hear that story too.
Patrick Hall: Yeah, no, it's simple.
Patrick Hall: It's just the thing I was talking about. A lot of Americans with advanced degrees who work in tech might think we have good ideas about fairness and privacy. But it turns out those ideas are often unworkable inside of large organizations, and often don't align with other valid notions of fairness and privacy.
Patrick Hall: I just found that to be a really difficult sticking point in implementing responsible AI, and I found incidents to be a much easier thing to motivate people with, because we might disagree on what fairness or privacy or transparency means, but we don't want to look stupid, we don't want to hurt people, and we don't want to cost our company money.
Patrick Hall: So I think incidents are just a really important motivator for responsible AI. But yeah...
Mary Reagan: I just want to underline that, because obviously at Fiddler we think about responsible AI, and I think you're absolutely right. That's kind of a brilliant insight for me, and I'm so thankful to you for sharing it. You're right: incidents are cold data we can just point to, a clear metric. We don't have to argue, exactly as you're saying, about the nitty-gritty: how are we going to measure fairness for this group? Who's defining fairness? Is it individual fairness or group fairness? What are we asking for? No, let's just look at the incidents.
Patrick Hall: Yeah, I'm glad that makes sense to you. Now, I don't know the exact numbers, maybe I should as a board member, but I'm also a contributor and a user. There was a commercial when I was growing up, which will date me, that went something like: I'm not just the president, I'm also a client. So I don't just sit on the board; I'm an active user of the AI Incident Database too. Around the end of last year, there were about 300 separate incident reports in the Incident Database. Now, just a few months into 2023, there are two or three hundred more incidents, and from a quick qualitative analysis, the majority of them are from generative AI systems and from self-driving cars.
Patrick Hall: There are others, but the bulk of the new incidents are from generative AI systems and self-driving cars.
Mary Reagan: Yeah. And the way you put this to me before, for our audience: since the release of ChatGPT, which was in November, the incident rate has basically doubled.
Patrick Hall: Yeah, I think that's roughly true, or at least something along those lines is roughly true.
Mary Reagan: Yeah, that's wild. I have a question laid out that kind of follows from this: what do you think are the most significant AI risks that businesses should be aware of today?
Patrick Hall: It's a good question. I think for business people, there are really three main uses of AI; two of them have similar risks, and one has very different risks. For my whole career, until generative AI really started taking off, machine learning was used for two things.
Patrick Hall: It was used for pattern recognition, like facial recognition, and it was used for decision support or decision making. This is essentially classifiers, right? Whether it's a big fancy classifier that picks your face out of a million other faces, or a more traditional classifier that decides whether you're likely to pay your loan back or not.
Patrick Hall: We were basically using AI and machine learning to make decisions, or to detect patterns that we would subsequently make decisions about. And for those types of applications, which I think still make up the bulk of real-world applications, your main risks are the things we've been talking about:
Patrick Hall: bias and discrimination, data privacy violations, poor performance, problems with robustness and reliability, and issues with transparency. That's where the field of responsible AI grew up: around the risks of decision-making AI systems.
Patrick Hall: What I really want business people to understand is that generative AI is not for decision making, okay? You should not take a system that was trained to generate content, not trained to make decisions, and then use that system to make decisions. Generative AI, to me, has a very different risk profile.
Patrick Hall: A major risk with generative AI, and I'd say this is shared with traditional decision-making or pattern recognition applications but is more acute with generative AI, is automation complacency, or over-reliance. That's the fancy way to say the exact issue I was just talking about.
Patrick Hall: You can ask ChatGPT for stock tips, and they might be as good as a human investor's, but they're still silly; it just predicted the next word, right? A lot of humans aren't good at stock picking either, because it's a random walk, but I'm just picking an example off the top of my head to make the point: you can ask ChatGPT a decision-making question, and it will give you a good-sounding response, but it's really critical to understand that the only thing that has happened is that it generated the most likely response conditioned on the tokens you gave it. That is not an adequate mechanism for decision making.
Patrick Hall: So the major risk to me with generative AI is automation complacency, or over-reliance, because people don't understand that it's not for decision making. There is also significant risk, or at least significant unanswered questions, around intellectual property, right?
Patrick Hall: There are lawsuits flying around like crazy on intellectual property with generative AI. There are significant questions around data privacy. And there are also issues of, and I hate this term, but I'll say it so people know what I'm talking about, hallucination: what in the good old days we just called errors.
Patrick Hall: These systems make errors all the time, because all they're doing is predicting the next word, and sometimes they get that prediction wrong. So there's a lot of risk with these systems, depending on how they're deployed; they can be deployed in low-risk or high-risk settings.
Patrick Hall: I think all machine learning today is mostly better off deployed in low-risk settings unless you really know what you're doing. But generative AI really does have a different risk profile than more traditional decision-making or pattern recognition applications.
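(Patrick's "it just predicted the next word" point can be sketched with a toy bigram model, a deliberately simplified stand-in, nothing like how a production LLM is actually built: the system only returns the statistically most likely continuation of its input, which is why a fluent answer is not evidence of a reasoned decision.)

```python
# A toy bigram "language model" (purely hypothetical training text): it only
# returns the most likely next word given the previous word. Fluent-sounding
# output, but no reasoning or decision making anywhere in the pipeline.
from collections import Counter, defaultdict

corpus = "buy low sell high buy low sell low buy high".split()

# Count word -> next-word transitions in the training text
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    return transitions[word].most_common(1)[0][0]

print(predict_next("buy"))  # 'low' -- the likeliest next token, not advice
```

(Real language models condition on far longer contexts with learned representations, but the failure mode Patrick describes is the same in kind: the output is the likeliest continuation, not a vetted decision.)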
Mary Reagan: And all of that, I think, goes back to your earlier point about the dangers of anthropomorphizing a system.
Patrick Hall: Yeah, this is why we don't want to anthropomorphize: because humans have the ability to detect patterns and make decisions, and I just don't think we're there yet with 2023 AI and machine learning.
Patrick Hall: And I don't understand why people feel pressured not to say things like that. But I just think, if we're realistic, we're not up to human intelligence yet, and may never be. Anyway, I'll be quiet, but please go ahead.
Mary Reagan: Yeah, and I guess it's also in the interest of any of the companies releasing the large language models, GPT-4, so OpenAI, whoever, to say, oh, it's near human intelligence, right?
Mary Reagan: To create some buzz around the model itself.
Patrick Hall: Yeah.
Mary Reagan: So there's a publicity aspect I think to that.
Patrick Hall: That's those market incentives driving people away from responsible AI.
Mary Reagan: Right, right. I'm not sure if I should jump to some audience questions yet. We are getting a few, but there's something I'm curious about before we get there.
Mary Reagan: One question I see is: how do you foresee the role of AI ethics evolving in the coming years?
Patrick Hall: All right. I'm bleak on the outlook for AI ethics, and not in academia; in academia, in the research world, I think it's likely to flourish slowly, like other branches of ethics do. But in the commercial world, I think I've seen the future of AI ethics, and it's kind of sad.
Mary Reagan: Don't break my heart, Patrick. Don't break my heart.
Patrick Hall: All right, all right. Well, it doesn't have to be all companies. But the trend I've seen has been that it's almost like companies accidentally hired people with ethics to do AI ethics, and that led to real commercial problems, right?
Patrick Hall: And, and while I might agree with a lot of the sort of famous AI ethicists that saw their big tech careers come to an end, you know, you can also see it from the company's perspective, like, their job is to make money and sell products and get products out the door. Um, and so, So what I've seen though is, is the companies kind of fixed this, that bug in their system, right, like now I feel like AI ethics groups in large research, large commercial research labs are focusing on sort of the, the fake Terminator apocalypse, which is You know, I, I, I do think generative AI and other as AI systems can cause catastrophic risk.
Patrick Hall: I, I think a deepfake, you know, a well-timed deepfake, a well-done deepfake could, could cause a war. I mean, I, I don't think that there are, I think catastrophic risks are possible, but I think this, this sort of pretending that we're at the dawn of AGI and the computers are gonna link themselves together and do bad things is, is pretty silly. And I see that, you know, being a focus of AI ethics. And then also I see just people investigating what I would also call really silly topics around, like, the feelings of large language models. Um, I don't possibly see how large language models have feelings. And so I, you know, I could be wrong.
Patrick Hall: I, I could have, I could just be too traditionalist here, but, but unfortunately I think I've seen a commercial sort of, um, practice of AI ethics head in, in pretty silly directions that, that have no chance of sort of impeding a company's commercial goals. And so, you know, it, it may have been a mistake to mix for-profit companies and anything involving ethics together in the first place. Um, but, but what I see, the remnants there, are, are pretty silly and pretty useless. And so I'm hoping that, that in academia, AI ethics will be more serious and focus on more real and substantive topics. Um, and, and I really look towards the future of risk management. And so I think for companies, ethics will continue to be very difficult.
Patrick Hall: Um, but, but hopefully risk management is is a better lens for companies to think through how to align their systems with sort of their own corporate or other human values. So, um, you know, that, that's my answer there. I hope it's not too bleak or too upsetting, but, um,
Mary Reagan: No, I think, yeah, you're, it's a really real answer, right?
Mary Reagan: I mean, I think everyone I imagine on this call knows, you know, that at the end of the day, you know, it's your own business's profit, right? Like that's what's going to make or break you. So even I think that's why, you know, it's hard, it's difficult to have regulation come, as you're saying, you know, from the inside, right?
Mary Reagan: Because ultimately that first piece, you know, your profit is going to, is going to win out over any sort of ethical.
Patrick Hall: Yeah, yeah, in an organization that's, that's basically mandated to make profits. Yeah, it's, it's just right. It's, uh, it's, it's somewhat straightforward. It's somewhat straightforward.
Mary Reagan: Yes. Um, so, okay, let's see.
Mary Reagan: We've had a few questions coming in. I want to make sure that the audience feels heard and gets the chance to ask.
Patrick Hall: Please. Yeah, yeah, please.
Mary Reagan: So, um, Praveen is asking, how do you handle data protection if AI can easily scrape some of your personal and professional data, as was the case with ChatGPT and the incident at Samsung?
Mary Reagan: Oh, I think it was with KakaoTalk.
Patrick Hall: Mm hmm. Mm hmm. Well, alright, so, so one, I mean, it's, it's a good question. I, I'd like to sort of correct or push back on one premise of the question, which is that companies can easily get your personal data. Um, that's less and less true, at least in the legal sense.
Patrick Hall: Now, technically, can they go out and kind of scrape and buy or, or, you know, by hook and by crook, get your data? In the U.S. and some other countries, yes. In other places, like the EU or China, not, not so much. And so, so one, I, you know, I think the public needs to advocate better for their own data privacy rights, because while it's absurd that we don't have AI regulation in this country,
Patrick Hall: It's even more absurd that we don't have data privacy regulations, or federal-level data privacy regulations. So I think, um, one, there are some federal data privacy laws in different verticals. Two, there are more and more state and local laws around data privacy. So I think it should be harder and harder for companies to sort of access your personal data in, um, predatory and deceptive ways.
Patrick Hall: But the reality is, today, at least in the U.S., um, that, that it is still mostly possible. Um, so I think, sadly, you just have to take responsibility and, you know, be careful what you sign up for, be careful what you post on social media, um, be careful, uh, how you store your information. I mean, I'm, I'm a, you know, I use password managers, VPNs, um,
Patrick Hall: You know, all, all these kinds of things in an attempt to, to keep my, my data as private as possible, but, but really the issue is that we should just have better data privacy regulation in the U.S., generally speaking.
Mary Reagan: So, this makes me have a question about that. I want to go a little bit deeper, but actually, realizing that what this person was asking about was the incident with Samsung and ChatGPT in Korea, where someone used ChatGPT and inputted their own sensitive data, and now OpenAI has that data. So the slight,
Patrick Hall: I think there's some, I, I wanna be, I think there's some question as to whether OpenAI has the data. Um, so I want to be clear about that. But, but the, the very clear advice, I'm aware of this incident, and, and, you know, I can talk about that more, but to me, the very clear advice, and I think this really pushes back at, at the value of these generative AI systems, they're great for writing Christmas cards, they're great for writing, you know, poems to my four-year-old, I don't see really how they can be used today in high-risk applications within large organizations because of the controls that are needed.
Patrick Hall: And so the controls that are needed there are, you should not copy and paste into or from the UI, okay? So you have to type in information or get it in there another way that doesn't just replicate your company's proprietary data. Not, not necessarily because OpenAI has access to it, but more because we're not sure what OpenAI does with it.
Patrick Hall: We're not sure what hackers or attackers could do with it if OpenAI does have it. And we're really not sure about the intersection of, um, local and federal, depending on the country, and international data privacy laws and the use of these complex new AI systems, right? So I, I don't think it's as clear-cut as, as OpenAI stole their data, but I do think there's significant risk there.
Patrick Hall: And that, um, we don't really know what OpenAI does with the data. Um, we, we don't have any good answers for how the data privacy laws and IP laws that exist are enforced and work with these systems. So I think, in all of that uncertainty, the only thing you can do is not copy and paste into it, which makes it pretty hard to use.
Patrick Hall: And I think, you know, I, I'm aware of many, so companies don't like to come out and say this, but I'm aware of many companies that just ban ChatGPT and other sort of generative AI user interfaces from their servers, the same way they ban Facebook and Snapchat and everything else. Um, and, and so, you know, I'll leave it at that, but, but the, the, a normal human being would have given the answer of, don't just copy and paste into the user interface. In fact, you really should not copy and paste into or from the user interface.
Mary Reagan: Um, so I'm really curious, I actually don't know much about data privacy laws, so can you just quickly break down how we don't have data privacy law protection? Like, how you see that?
Patrick Hall: But getting to your, you know, getting back to your question, the U.S., the U.S. tends to, and again, I'm a, I'm a self-taught policy person, and if there's any real policy people on the line, they'll, they'll see that immediately, but the U.S. tends to regulate within verticals, within industry verticals.
Patrick Hall: And so there are strong data privacy laws in education. There are strong data privacy laws in healthcare. Um, but there's not, like there is in the EU, or in China, or in many other countries, a broad sort of overarching, um, country wide law that sets a, that sets out, um, That sets out sort of basic rights around how organizations can use data.
Patrick Hall: And so in the EU, there's, there's like six legal bases under which data can be used. And none of those legal bases include training your cool AI system. Um, and so literally you have to, you have to have a legal basis, you know, one of the most basic things about data privacy law is, is it sets up legal reasons for which you can use data.
Patrick Hall: And if you are using data outside of those reasons, then it might be illegal. Um, consent is problematic, but, but a big part of data privacy law is where we consent to having our data used, um, you know, things about how long data can be stored, how long data has to be kept, um, the conditions under which it's stored, those are all kind of basics of data privacy law.
Mary Reagan: Okay, yeah, thank you, that's very clear. Um, I'm gonna fold, so here's another audience question, and I'm going to fold it into an existing one that I had as well. Um, so their question is: is there a document or a checklist or even a rubric which can be used to gauge the AI risk of AI applications? And I want to fold that kind of into the larger question that I was going to ask you, which is, how can companies, very broadly, you know, ensure that they're developing and deploying AI systems in an accountable way?
Patrick Hall: So broadly, I would send people to the new, you know, as of, as of January 2023, the NIST AI Risk Management Framework.
Patrick Hall: Um, there's a million, you can Google a million different responsible AI checklists or trustworthy AI checklists, and, and some, some are better than others. Um, but likely the best ones come from institutions like NIST and ISO, the International Standards Organization. Um, and so, if, if you're looking for the best advice, I would send you to places like NIST and ISO.
Patrick Hall: They have different standards, but, but they do work on sort of making sure the standards align. And, um, you know, it depends what kind of guidance you're looking for. ISO is more like a checklist. You know, ISO basically does supply these very long checklists for, like, you know, validating machine learning models, ensuring reliability in neural networks.
Patrick Hall: There's, there are now ISO standards for this, and they really are kind of like deep, long, technical checklists that you can apply to your system. NIST is a little bit different. Um, NIST is, does more sort of research, and then synthesizes that research into guidance. And so, so NIST has just put forward, um, really a mountain of AI risk management guidance with their AI Risk Management Framework Version 1, and that, that would be one of the best resources in the world that I would send people to.
Patrick Hall: Now, accountability is one of my favorite topics, and um, I'm gonna, I'm gonna kind of tackle that a little separately. Um, how, how do you ensure accountability in AI systems? Um, I think there's one direct way, and it's all about explainability, and the kind of stuff that Fiddler does, which is, uh, you enable actionable recourse.
Patrick Hall: Okay, so you tell people how a decision was made and provide them a process to appeal it in a logical and timely manner. Okay, that, that is probably the key, rubber-hits-the-road answer to how do I make an AI system accountable: I give people the opportunity to appeal wrong decisions. That's like, that's the point of the spear on AI accountability, in my, uh, in my opinion.
Patrick Hall: Now, you know, one other thing that I always like to point out is, uh, banking regulations, where the use of machine learning and predictive models, um, has been regulated for a long time, for decades. Uh, at the largest banks, um, they mandate, or, or not mandate, but, but sort of, it's become tradition, and regulators would have a lot of questions for you if you didn't do this, um, they put one single human being in charge of AI risk, and that one single human being gets, you know, they're, they're well paid and they get big bonuses when the AI and machine learning systems work well, and they get penalized and potentially fired when they don't.
Patrick Hall: Moreover, that one single human being, which, this stands in direct contrast to most of the responsible AI directors I run into, um, has a very large staff and a very large budget. Um, and, very crucially, they are independent from the technology organization. You know, I run into all kinds of responsible AI directors who have no budget and no staff and report up through the technology chain.
Patrick Hall: That's not, that's not, that doesn't do anything. And so in model risk management, they instate a single human being with all of the accountability, and that person does not report through technology; they report to the board risk committee, and they're hired and fired by the board, not by the CEO and the CTO.
Patrick Hall: So there's both sort of governance internal structures that are needed to make AI systems accountable, like chief model risk officers, and then there's also this notion, I think there's a lot of things you can do to make AI systems accountable, but on the technical side I think the point of the spear is allowing people to appeal wrong decisions, which oftentimes entails describing how a decision was made and allowing people to appeal that, you know, the logic of the decision or the data on which the decision was based.
Mary Reagan: Yes, that really makes sense. Yeah, I think it's also really interesting how you're seeing most people sort of being folded into the technology wing. Yeah. I mean, yeah. Great points. So I'm going to fold in together sort of two questions that I'm seeing that are sort of broadly on the topics of ethics and sort of who, you know, so this person is saying, when we say ethics, who's setting the baseline?
Mary Reagan: And the second person is, is asking, you know, AI and data are deeply political and how do we balance the interests of countries at different economic and social trajectories? And I think these questions are kind of tied.
Patrick Hall: Right. And, and I think,
Mary Reagan: Globally agreed upon statements.
Patrick Hall: Yeah. Yeah. Yeah. Yeah. I, I think, so you can, if you want to see not globally agreed-upon standards, but standards on which, you know, bureaucrats and civil society in a large number of countries have agreed,
Patrick Hall: You can go look at the OECD AI work. Um, and, and so OECD does have AI standards just like NIST and ISO. And OECD is sort of, um, a larger group of countries. And so it is possible, it is possible to come to some kind of understanding. But, but I think, you know, this gets at what we were saying earlier. You might have good ideas about fairness, I might have good ideas about fairness.
Patrick Hall: And they might be really different. Also, I mean, it's just obvious that there's too many, uh, white dudes and, and just not enough diversity when it comes to setting the baselines of what's considered fair in technology. So, so what I really try to do is Um, one, you know, work with many different kinds of people, both in terms of, of sort of demographic background, but, but perhaps as importantly in terms of, um, professional backgrounds, like I like to work with statisticians, and economists, and psychologists, and social scientists, and I like to talk to the people who use the systems, right?
Patrick Hall: I like to get a broad spectrum of perspectives on systems. But, but, you know, regardless of what we can do to deal with these issues around bias and privacy, and their sort of normative meanings across different cultures, that's why I look to different kinds of solutions. Like, I look to the AI Incident Database and incident response.
Patrick Hall: I look to risk management, which is actually a pretty boring state field that that is fairly mature. Um, and does a good job handling risk, whether that risk is, um, an airplane, you know, crashing, or an algorithmic system illegally discriminating against millions of people. So, um, I, I think that I'm not an ethicist, I'm not a trained ethicist, um, I almost, in my professional work, try to sidestep this, this notion of ethics, because it's a wicked problem, and that, that's a real term, wicked problem.
Patrick Hall: Um, it, it's You know, it's borderline unsolvable, but what we can't, but there are things that we can do, which are, you know, focus on incident response and reducing incidents. There are things that we can do, like just bring sane risk management practices into the AI world, that I think are actually a lot more easy and direct than trying to to tackle these really tough ethical and political, and yeah, I mean, the question is right.
Patrick Hall: These are, data and models are very political, and they will be used, they are and will be used for political purposes. So I think, you know, we need to be aware of that. We need to deal with those risks. They are very serious, and I, for me, the way that I do it is by focusing on more concrete things.
Patrick Hall: Incidents, incident response, preventing incidents, and risk management.
Mary Reagan: Excellent. Maybe could you say, like, because just in that answer, you said, you know, of course, focusing on incident response, but then also doing, you said, sane risk management. Could you just give a couple of bullet points on some things that you think would be universally applicable?
Patrick Hall: Yeah, so, so having one single person with actual responsibility and accountability in charge of AI risk is important. Um, not a responsible AI director with no staff and no budget who reports to the CTO; that doesn't do anything. Um, having executive support for risk management around AI systems, so having senior executives and the board understand what's going on in an organization around AI, having them approve of those things, that, that really changes the tenor of what organizations want to do with AI.
Patrick Hall: Um, having clear and transparent and written policies and procedures that people get education on so that the rules of the road are, are well known and available to everyone. Um, and, and really this is, this is the most basic thing. This is the most basic thing. If you want people to do risk management, you have to pay them to do it.
Patrick Hall: Okay, it's just, it's really simple. It's, it's really, that's really pretty straightforward, right? Risk management is a boring, tedious, difficult job that has to be done by people that are just as skilled as the developers. So, having equal stature is another basic risk management point between validators and, and developers is another really basic point.
Patrick Hall: And, you know, the risk management work is never going to be as exciting as the development work, so you just have to pay people to do it. So people have to be incentivized, people have to be properly incentivized to take, um, to take on risk management. And then again, you know, like I was bringing up, you really have to have the right organizational structures to do risk management well.
Patrick Hall: And so if, if people are wondering how I know all this, it's actually all in one document called, uh, SR 11-7, by the Federal Reserve. It was released in 2011. Um, it's about 20 pages long, written in plain language. You can and should go read that, and you will have a very, you know, a much more clear idea about how to do risk management around these systems.
Patrick Hall: So, um, those were just, and all right, so one more basic risk management. Nobody, not JPMorgan, not Meta, not the U.S. government, has enough resources to manage all of their risk. And so you have to measure and prioritize risk and focus risk management resources on the most, um, perceived most dangerous risk or most threatening risk.
Patrick Hall: And that's another way that we do risk management.
Mary Reagan: So super helpful. Thank you. Um, so a few more audience questions in our last minutes. So, is there clarity emerging in case law that deals with the, quote, "right to be forgotten" from ML training data? If a single individual wants to be, quote, "forgotten," does that automatically mean the death penalty for all models derived from datasets that included the individual who wants to be forgotten entirely?
Patrick Hall: No, I, I don't, I, I'm not a lawyer, and I don't practice, and even more than not being a lawyer, I'm not a lawyer that practices in the EU where they actually have a right to be forgotten. We don't have that in the U.S., okay? So, um, what, but, but many countries do have this, and this kind of gets back to the discussion we were having about ChatGPT.
Patrick Hall: I assure you, I work with some of the best data privacy people in the world, at least I think I do, and the, the honest truth is, like I was saying, people do not have a clear idea of how the right to be forgotten is going to interact with complex AI systems. People don't have, um, a clear idea of whether there truly is a right to explanation under the GDPR in the EU, and if that right does exist, how is that going to work with AI systems?
Patrick Hall: Um, how do already existing data retention limits and data storage requirements interact with AI systems? People don't, the, you know, the answers to these questions are sort of coming up on a case-by-case basis, like, like the questioner said, in, in various court cases or, or other sort of technical problems, um, and, and my impression is there's just no real clear answer, and that's why I would say that technologies like ChatGPT are risky for large organizations. Because, you know, say that you use ChatGPT in a country or, or a jurisdiction where there is a strong right to be forgotten. What's going to happen? People don't know the answer to that question. And so it's not that OpenAI is stealing everybody's data, it's just more that there are these unknown risks around using machine learning, any kind of machine learning, um, and sort of the burgeoning, uh, field of data privacy law.
Patrick Hall: There's just a lot of open questions. And if you're a large organization with a big operating bank account that can be sued for a lot of money, or face really serious regulatory damages, um, those are risks that you might not want to take, despite the technology seeming so cool.
Mary Reagan: We have, gosh, there's so many, still so many good questions coming up.
Mary Reagan: I'm going to ask this last one before we sort of wrap up, which is, so what do you think about the position of Elon Musk and others, you know, who think about holding back on AI development in any way?
Patrick Hall: Oh, that's really simple. Elon Musk needs people to think AI is better than it actually is. That's why he signed that letter.
Patrick Hall: Okay, um, OpenAI needs its competitors to slow down their development. That's why they signed that letter. Um, I don't think anyone of the sophistication of an Elon Musk or or the CEO of a major AI company thinks that we're at the dawn of AGI, right? I think that they signed these letters for sort of, um, uh, you know, political reasons.
Patrick Hall: Like we were just saying, oh, this is all political, right? Uh, Tesla needs, needs the government and consumers to be confused about the current state of self-driving AI, which isn't that good. Go look at the AI Incident Database. Um, OpenAI needs their competitors to slow down, because many of their competitors have a lot more money and a lot more people than them.
Patrick Hall: And so, you know, just to be as direct as possible, that's why I think those companies signed those letters. And, and, you know, I know we might be going over here a lot, but this is actually one thing that, that makes me the most mad. Um, I am very, very frustrated by efforts that divert AI risk management resources to a perceived, you know, future or near-future catastrophic Terminator-type risk.
Patrick Hall: It's fake, and in the worst cases, it's, it's deceitful, right? Literally, there are people involved in, at the highest levels of these efforts who are trying to move dollars away from efforts that might actually work to manage risk. Because that will slow down their progress and move money into sort of these fake efforts around, you know, this is the dawn of AGI and we need to be worried about Skynet.
Patrick Hall: And so I get really animated on these topics, but just to be clear, you know, there's no Skynet. We're not close to Skynet. We're not close to the Terminator. If people are advocating moving spending towards those kinds of efforts, then I would question, you know, whether they're doing so in good faith or not.
Mary Reagan: Yeah, I'm so glad we asked that question. What an excellent, um, point to end on. Patrick, I've just enjoyed this conversation so much. I've learned so much from you today. Um, I'm so glad that you could be here. I want to say that we'll probably continue this conversation in our community, so if people want to sign up on our Slack channel, um, we can, uh, populate it with some of the questions we didn't get to and answer them there.
Mary Reagan: Um, otherwise, I just want to say thanks to all the attendees for coming. And again, I know I already said thank you, Patrick, but man, really fascinating hour. I've learned a lot. Um.
Patrick Hall: Very kind of you. Very kind. Happy to be here. And, you know, audience, please feel free to connect on LinkedIn. That's the easiest place to find me where you can tell me I'm smart or stupid.
Patrick Hall: People do it all the time. So you can have your turn. But, uh, great to be here. And, and thank you so much.