Responsible AI Podcast with Anjana Susarla – “The Industry Is Still in a Very Nascent Phase”

In our Responsible AI podcast, we discuss the practice of building AI that is transparent, accountable, ethical, and reliable. We chat with industry leaders, professors, and AI experts. This week, we spoke with Anjana Susarla, who holds the Omura-Saxena Professorship in Responsible AI at Michigan State’s Broad College of Business. You can watch or listen to the podcast in full, or read the highlights from our conversation below.  

What is Responsible AI?

“There’s not been one authoritative or single definition,” Anjana said. When Responsible AI comes up in industry or the media, the conversation is usually about one particular aspect of it. What are these different dimensions? AI ethics, transparency, explainability, and methods to address bias would all be part of our larger understanding of Responsible AI.

Anjana emphasized that our decisions are increasingly automated: “Whether we order something from Uber Eats, or we’re listening to something on Spotify, or we go to Netflix or YouTube for some entertainment — all of our choices are essentially dictated by some kind of black-box algorithms.” AI has only become more prevalent with the pandemic, as we rely more heavily on automated systems online. 

There are two sides to interacting with AI responsibly. First, Anjana said, “As a citizen, what are your rights and responsibilities?” And second, what are your responsibilities as the designer of an algorithm or machine learning model? 

The risks of “irresponsible” AI

Bringing up an infamous example of AI gone wrong, Anjana mentioned Amazon’s attempt to build an automated tool for resumé screening — where one of the biggest predictors of success was possibly having the name “Jared.” To prevent issues of bias like this, Anjana said, “I think that the main thing that concerns people like me who look at businesses and how businesses use AI is: Do we have any norms? Whether it’s accepted social norms, or do we have some professional bodies that can ensure that we use these technologies in a responsible manner?” 

According to the law, there are already some provisions: you aren’t supposed to treat users differently based on gender, for example. And it’s easy to decide not to use one or two sensitive attributes in a predictive model. The problems arise, Anjana explained, when you have proxy attributes. For example, there was a famous case in the Netherlands where they were trying to design an algorithm to detect welfare fraud. One of the system’s predictive factors was a person’s zip code. And the courts in the Netherlands found that the algorithm was violating the law by discriminating against minority communities on the basis of the zip code.

We’ve seen similar cases in the United States too, where systems for predicting recidivism (or the likelihood that someone will go on to commit another crime) used features like zip code. This translated to people from minority communities being more likely to get a harsher sentence. “Bias and fairness are not abstract things,” Anjana said. “There are real consequences.”

Challenges of Responsible AI

What is stopping organizations from implementing AI responsibly today? Is it a lack of knowledge, a lack of tools, people, processes?

“I think the important thing to understand is, when we talk about the implementation of AI, there is a huge gap between what’s happening in Silicon Valley and some of the cutting-edge technologies vs. what regular businesses are doing,” Anjana said. 

In Anjana’s work consulting with companies, she’s seen that the adoption of AI is still very varied: some organizations are quite sophisticated, others are not. Many companies are still trying to figure out their dashboards, toolkits, and monitoring solutions. But even for a company like Facebook, scaling responsible AI is a major challenge. 

“Things like content moderation, you can do some of it for smaller-sized projects, but as the number of people grows, your problems with human-in-the-loop methods of detecting misinformation and hate speech (what I would term ‘algorithmic harm’) grow substantially. There’s just too much content out there.”

Finally, AI is not like traditional software, which people have worked with for 20+ years and invested resources and time in. “With AI, we are relying so much on black-box AI models,” Anjana said. “I actually worry about biases in systems being magnified, in a sense, because we’re all depending on the same AI toolkit and practitioners. We’re still sort of at the ‘AI is like magic’ kind of wave — an unreasonable faith in the effectiveness of AI that’s not so much worried about what the societal consequences are and how it will affect the end user. That to me is the most sobering part.”

Solutions for Responsible AI

Anjana’s goal for industry applications would be to design “bias-aware systems.” After all, computer scientists have come up with metrics to assess bias: criteria like disparate impact, which measures whether we are disproportionately targeting a certain community. But we have to decide where to draw the line. “There’s already some disproportionate burden on some communities — a history of redlining, for instance,” Anjana said. “Should we do something more than maintain the status quo?”
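To make that criterion concrete, here is a minimal Python sketch (hypothetical data and column names, not from the podcast) of the disparate impact ratio: the favorable-outcome rate for the group of concern divided by the rate for a reference group, with values below roughly 0.8 (the commonly cited “80% rule”) treated as a warning sign.

```python
import pandas as pd

def disparate_impact_ratio(df, outcome_col, group_col, protected_value, reference_value):
    """Ratio of favorable-outcome rates: protected group vs. reference group.
    Values below ~0.8 are often flagged under the '80% rule'."""
    protected_rate = df.loc[df[group_col] == protected_value, outcome_col].mean()
    reference_rate = df.loc[df[group_col] == reference_value, outcome_col].mean()
    return protected_rate / reference_rate

# Hypothetical screening decisions (1 = favorable outcome, e.g. resume advanced)
decisions = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "outcome": [ 1,   1,   0,   1,   1,   0,   0,   0 ],
})

ratio = disparate_impact_ratio(decisions, "outcome", "group",
                               protected_value="B", reference_value="A")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33 here, well below the 0.8 threshold
```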

Regardless, “there have to be some directives by government agencies,” Anjana said. Responsible AI is not something an individual or a company is going to take on fully without the right incentives. However, is some self-regulation possible?

Self-regulation

Anjana mentioned that it’s unlikely for regulation of the tech sector to change the current landscape, where the web is relatively open and generates huge amounts of content. Yet the status quo with tech companies is problematic. “Since they’re operating without gatekeepers or filters that govern the traditional use of information and news, and they have the ability to micro-target their users, that’s somehow almost ripe for creating unfortunate outcomes.”

The solution might be to have tech companies look at metrics that shift the focus away from engagement alone. Although this might seem counterintuitive — tech companies always want more engagement — solutions like this are already being used today in small ways. For example, if there’s a known piece of misinformation, Facebook’s algorithms tend to stop recommending it.

Better labeling

Another step companies can take to make their AI systems more responsible is to produce more high-quality data through partnerships with researchers and the news media. 

“Better labeling would help everything, I think,” Anjana said. “We’re seeing conversational AI with gender biases and so on. How can we overcome that problem? There has to be a concerted effort where there are more concerned people working with companies. In facial recognition, for example, some of the concerns have been pointed out by researchers. And the extent to which you have huge crowd-sourced datasets and labeled data has helped advance the state of AI.”

Explainability

“Build systems that are explainable,” was Anjana’s advice. “This is something we need to emphasize more as a best practice.” Certain kinds of models (like linear regressions) are more explainable than others (like black-box neural networks), and this may be something to consider when implementing AI.
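To illustrate that contrast, here is a small, hypothetical Python sketch (invented feature names and data, not an example from the conversation): a linear model’s learned coefficients can be read off directly as an explanation of how each feature pushes a prediction up or down, something a black-box network does not offer out of the box.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical screening features; the data below is made up for illustration.
feature_names = ["years_experience", "num_prior_roles", "referral"]
X = np.array([
    [5, 2, 1],
    [1, 1, 0],
    [8, 4, 1],
    [2, 1, 0],
    [6, 3, 1],
    [0, 0, 0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = favorable decision

model = LogisticRegression().fit(X, y)

# Each coefficient is a direct, global explanation of a feature's influence.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```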

“Auditing just an algorithm is not going to solve the problem completely,” said Anjana. “We are only ever going to be able to verify that the algorithm is working as the designer intended. I think that we will need some kind of concerted effort by the practitioners and the industry to create some of these frameworks that are more strategic.” This might mean going to the executives at companies and taking a bigger-picture look at how the organization is accountable for explaining the algorithm’s predictions.

Regulations

“We need some pressure,” Anjana said. “Most companies are struggling with enough things. This may be something desirable — everyone likes the idea of responsible AI — but unless there are some consequences...it’s not going to be very widespread.” 

However, change may be coming very soon: new laws may be passed in Europe in the next few months, and we may see regulations catching up in the United States as a result. “I’m really excited about some of the newer regulations coming out of Europe and the fact that we’re seeing a lot more discussion about the effects of AI,” Anjana said.

Think about the big picture

Responsible AI is about stepping back and thinking about the right way to solve a problem rather than trying to force in the most tech-heavy solution. Anjana shared an example: “There’s a software called COMPAS that was used for recidivism prediction — and a group of researchers did a study... the algorithm used 147 features and the study said just two or three features are needed to predict recidivism.”

When designing an algorithm responsibly, it makes sense to take a moment to ask: Do you need 150 features, or do you maybe need only five features? As tech people, “we love all the complexity,” Anjana said. “But maybe the real world doesn’t, always.”
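As a rough illustration of that question, here is a hypothetical Python sketch on purely synthetic data (not the COMPAS study) comparing a model trained on 150 features against one trained on only 5:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic dataset: 150 features, but only a handful actually carry signal.
X, y = make_classification(n_samples=1000, n_features=150, n_informative=5, random_state=0)

full_score = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

# Keep only the 5 most informative features and retrain.
X_small = SelectKBest(f_classif, k=5).fit_transform(X, y)
small_score = cross_val_score(LogisticRegression(max_iter=1000), X_small, y, cv=5).mean()

print(f"150 features: {full_score:.3f} accuracy vs. 5 features: {small_score:.3f} accuracy")
```

On data where only a few features carry signal, the two scores tend to be close, which is the spirit of Anjana’s point: complexity is not always needed.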

---

For previous episodes, please visit our Resource Hub. You can also find us on Clubhouse, where we host a chat and have a club on Responsible AI and ML.