
How the AI Bill of Rights Impacts You

As machine learning (ML) tools become more efficient and more widely deployed, so does the need for thoughtful training and monitoring that prevents or reduces bias, discrimination, and threats to fundamental human rights. The White House Office of Science and Technology Policy (OSTP) published The Blueprint for an AI Bill of Rights (The AI Bill of Rights) on October 4, 2022, as a “national values statement…[to] guide the design, use, and deployment of automated systems.”

The AI Bill of Rights presents five key principles that guide automated decision-making system design, deployment, and monitoring, to facilitate transparency, fight bias and discrimination, and promote social justice in AI. To create this blueprint, the OSTP spent a year consulting community members, industry leaders, developers, and policymakers across partisan lines and international borders. The resulting document leverages both the technical expertise of ML practitioners and the social knowledge of impacted communities.

In AI Explained: The AI Bill of Rights Webinar, Merve Hickok, founder of AIethicist.org, joined me to discuss the AI Bill of Rights, how to interpret and implement its principles, and the impact its framework will have on ML practitioners.

What is the AI Bill of Rights?

The AI Bill of Rights is a non-binding whitepaper that provides five key principles to protect the rights of the American public, guidelines for their practical implementation, and suggestions for future protective regulations. The five principles are:

  1. Provide safe and effective systems for users affected by system outcomes
  2. Maintain algorithmic discrimination protections
  3. Protect data privacy
  4. Provide notice and explanation when using an automated system
  5. Ensure human alternatives, consideration, and fallback allow users to opt out of automated systems

These principles present a holistic approach to assessing and protecting both individual and community rights. The AI Bill of Rights states that testing and risk monitoring are a shared responsibility performed by developers, developer organizations, implementers, and governance systems. It requires audits independent of developers and users, and it suggests that companies not deploy a system that might threaten any fundamental right. Anyone working on autonomous systems should read the full AI Bill of Rights for recommendations and examples specific to their industry and role.

The AI Bill of Rights shares principles with existing AI regulations and documents that strive to protect users from AI systems. Risk identification, mitigation, ongoing monitoring, and transparency are also called for in the EU Artificial Intelligence Act (EU AI Act), a regulatory framework presented by the European Commission in 2021. All five principles from the blueprint are also proposed in the Universal Guidelines for Artificial Intelligence (UGAI), a global policy framework created by researchers, policymakers, and industry leaders in 2018. The AI Bill of Rights does not have the regulatory power of the EU AI Act, nor does it call out specific prohibitions on secret profiling or unitary scoring like the UGAI. Instead, it consolidates the values shared between these global frameworks and provides a clear path for their implementation in the U.S.

Hickok praised the document as “one of the greatest AI policy developments in the U.S.” She applauded the blueprint’s call for transparency and explainable AI and agreed that users need clear information about automated systems early in their development. “If you don’t know a system is there, you don’t have a way of challenging the outcome,” Hickok said. Informing users that an autonomous system is in place “is the first step toward oversight, accountability, and improving the system.”

Why do we need the AI Bill of Rights?

As autonomous systems become more complex and humans are removed from the loop, biased results can be amplified at alarming rates. There is a clear need to protect users affected by these systems. 

Although the AI Bill of Rights is non-binding, it provides the next steps for legislative bodies to create laws that enforce these principles. We’ve seen policy documents translated into enforceable protections before. The Fair Information Practice Principles (FIPPs) were first presented as guidelines in a 1973 Federal Government report; today those principles are the backbone of numerous state and federal privacy laws. Like the FIPPs, the AI Bill of Rights is a public commitment to protect user rights, opportunities, and access to resources. It provides groundwork for agencies and regulatory bodies seeking guidance as they develop their own legislation for AI development and implementation. Individual states will likely consult this blueprint when passing future anti-bias laws. I think it is simpler for vendors to operate as though local laws apply nationwide, so local legislation could encourage nationwide or even global changes in AI development.

Still, there is more work to do. The AI Bill of Rights states that law enforcement may require “alternative” safeguards and mechanisms to govern autonomous systems rather than being held to the same five principles laid out for other industry applications. There is also “a huge need for Congress to take this into legislative action” and provide consumer protection agencies with clear processes and additional resources.

Who does the AI Bill of Rights impact?

The AI Bill of Rights will have the highest impact in domains with existing regulations like healthcare, employment, and recruiting. The safeguards provided in the AI Bill of Rights will likely improve efficiency and bolster future innovation. You can build “more creative and deliberate products when you slow down a bit and think through the consequences and harms,” Hickok said. I believe that it’s in the best interest of a company to proactively adopt these principles. The model monitoring and proactive audits recommended by the AI Bill of Rights will help to identify model performance issues and risks, especially since ML models may not present obvious signs of failure or indicate that data quality has degraded.
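
To make that concrete, here is a minimal sketch of one common monitoring approach: comparing the distribution of an incoming feature against its training-time baseline to catch silent data degradation. The Kolmogorov-Smirnov test, the threshold, and the income example are illustrative assumptions on my part, not requirements from the blueprint.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(baseline: np.ndarray, live: np.ndarray,
                    p_threshold: float = 0.01) -> bool:
    """Flag drift when live data differs significantly from the
    training baseline (two-sample Kolmogorov-Smirnov test)."""
    _, p_value = ks_2samp(baseline, live)
    return p_value < p_threshold

# Hypothetical example: an 'income' feature shifts after deployment.
rng = np.random.default_rng(seed=0)
baseline_income = rng.normal(55_000, 12_000, size=10_000)  # training data
live_income = rng.normal(61_000, 12_000, size=2_000)       # production traffic

if feature_drifted(baseline_income, live_income):
    print("Drift detected: review the model before trusting its outputs.")
```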

Once these principles become law, future regulations will be domain dependent, and accountability will be shared between AI system developers, business system owners, and monitoring tool providers. For example, if a recruitment AI system discriminates against particular groups, the employer using it would be held responsible for implementing a biased system. If a vendor marketed that AI system as fair or equitable, it would be held accountable by regulatory bodies like the Federal Trade Commission (FTC) for providing a system that does not perform as described. Similarly, a vendor providing a monitoring tool might be held accountable for providing a product that cannot perform the specific bias-related functions it claims.

As companies work to show compliance with the blueprint’s principles, they will need to carefully choose vendors that also uphold the principles and reduce risks. This will likely encourage developers to strive for explainability and ML monitoring across the full model development lifecycle.

How can I use the AI Bill of Rights to build responsible AI?

ML practitioners, data scientists, and business owners should consult the complete AI Bill of Rights as a guide for full system structure, not simply as a set of rules to avoid bias. The five principles are relevant to any automated decision-making system, and future legislation will likely apply to a wide range of autonomous systems, not only AI.

Key practices for developers that will likely become the focus of future regulations include:

  • Documenting decisions and tradeoffs made during model development
  • Documenting data quality, sources, limitations, and how the data is updated (see the documentation sketch after this list)
  • Providing clear explanations of objective functions and how they relate to overall system goals
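
One lightweight way to put these documentation practices into code is a structured record that ships with the trained model. This is a minimal sketch; the ModelRecord name and its fields are my own illustrative choices, not terms from the blueprint.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """Documentation that travels with a trained model."""
    model_name: str
    objective_function: str   # what the model optimizes, and why
    system_goal: str          # how the objective maps to the product goal
    data_sources: list[str]
    data_limitations: list[str]
    data_update_cadence: str  # how and when training data is refreshed
    design_tradeoffs: list[str] = field(default_factory=list)

# Hypothetical record for a lending model.
record = ModelRecord(
    model_name="loan-default-v3",
    objective_function="Minimize log loss on 24-month default labels",
    system_goal="Rank applications for manual underwriter review",
    data_sources=["internal applications 2018-2022", "credit bureau extract"],
    data_limitations=["underrepresents applicants under 25"],
    data_update_cadence="Quarterly retrain with a documented data snapshot",
    design_tradeoffs=["Interpretable gradient-boosted model over a deep net"],
)
```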

Key practices within businesses developing these models include:

  • Performing independent evaluations before and after model deployment (one toy evaluation is sketched after this list)
  • Setting clear roles, responsibilities, and controls within individual teams and across the entire organization
  • Providing safe feedback mechanisms within MLOps teams and across the organization, so anyone involved in development or monitoring can raise concerns without fear of repercussions
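
As one illustration of what an independent pre-deployment evaluation might check, the sketch below measures the accuracy gap between demographic groups on a labeled holdout set and blocks sign-off when the gap exceeds a threshold. The data, the metric, and the 10-point threshold are hypothetical choices for illustration.

```python
import numpy as np

def max_group_accuracy_gap(y_true, y_pred, groups):
    """Largest accuracy difference between any two demographic groups."""
    accuracies = [np.mean(y_true[groups == g] == y_pred[groups == g])
                  for g in np.unique(groups)]
    return max(accuracies) - min(accuracies)

# Toy holdout evaluation: group B is misclassified more often than group A.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

THRESHOLD = 0.10  # illustrative sign-off threshold
gap = max_group_accuracy_gap(y_true, y_pred, groups)
print(f"Max group accuracy gap: {gap:.2f}; deploy ok: {gap < THRESHOLD}")
```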

Examples of how these principles and practices become laws and regulations are already apparent in state and local laws. In Illinois, the Biometric Information Privacy Act does not allow any private entity to obtain biometric information about an individual without providing written notice. In California, under the Warehouse Quotas Bill, companies that use algorithmic monitoring in quota systems must disclose how the system works to employees. New York City has passed Local Law 144, which requires independent evaluations of automated employment decision tools, including a review of possible discrimination against protected groups. With new state laws likely to emerge, ML practitioners can proactively prepare by following the technical companion within the AI Bill of Rights.
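
Audits like New York City’s are typically built around selection rates and impact ratios: each group’s selection rate divided by the highest group’s rate. Here is a minimal sketch of that calculation; the outcome data and group labels are hypothetical.

```python
import numpy as np

def impact_ratios(selected, groups):
    """Selection rate per group, divided by the highest group's rate."""
    rates = {g: selected[groups == g].mean() for g in np.unique(groups)}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

# Toy resume-screening outcomes: 1 = advanced to interview.
selected = np.array([1, 1, 0, 1, 0, 1, 0, 0, 0, 1])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for group, ratio in impact_ratios(selected, groups).items():
    print(f"group {group}: impact ratio {ratio:.2f}")
```

A ratio well below 1.0 for any group is a signal to investigate before the tool is used in hiring decisions.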

By providing principles and practical implementation guidelines at both the team and organization level, the AI Bill of Rights creates a framework for communication and knowledge transfer between users, businesses, and developers. Whether we are data scientists, lawmakers, CEOs, or users, it is our job to engage in the process the AI Bill of Rights provides and to create trustworthy AI systems that protect our fundamental rights.

Request a demo to see how we can help you build responsible AI.