On October 30, 2023, President Biden issued an Executive Order on Safe, Secure, and Trustworthy AI. This is the latest in a string of AI-related actions taken by the White House, such as the Blueprint for an AI Bill of Rights and meetings with the CEOs of OpenAI, Microsoft, Alphabet, and Anthropic on advancing responsible AI innovation.
The Executive Order (EO) is built around eight core goals, including establishing new standards for AI safety and security, protecting Americans’ privacy, and advancing American leadership around the world. The EO itself applies directly only to US government agencies; however, it will indirectly affect private companies and will likely have a ripple effect globally, where policymakers have already been busy finalizing the EU AI Act and other initiatives.
Let’s take a closer look at the goals of the EO, the requirements it will set, and what it means for the use of AI in the US moving forward.
What are the goals of the Executive Order on Safe, Secure, and Trustworthy AI?
The EO sets out eight goals to help shape the future of responsible AI development and use in the US. The White House describes the actions drawn up within the EO as “the most sweeping actions ever taken to protect Americans from the potential risks of AI systems.” The eight goals are:
1. New standards for AI safety and security
Among other things, this will include requirements for the biggest AI developers to share safety test results with the US government and will call on the National Institute of Standards and Technology (NIST), the Department of Homeland Security, and the Department of Energy to develop and implement standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.
2. Protecting Americans’ privacy
Alongside calls to pass bipartisan privacy legislation, the EO directs the acceleration of the development and use of privacy-preserving techniques as well as strengthening privacy-preserving research and technologies.
3. Advancing equity and civil rights
The EO outlines further actions that address algorithmic discrimination as well as fairness in the criminal justice system. For instance, the order will require the Department of Justice to set standards on using AI, as well as offer training to civil rights offices on best practices on prosecuting civil rights violations related to AI.
4. Standing up for consumers, patients, and students
The actions outlined under this goal promote responsible innovation, particularly in healthcare and education. The Department of Health and Human Services will establish a safety program to receive reports of, and act to remedy, harms or unsafe practices involving AI, and will advance the use of AI to develop affordable drugs. In education, the order calls for creating resources to support educators deploying AI-enabled education tools, like personalized tutoring.
5. Supporting workers
In an employment setting, the EO will seek to address potential harm by calling for the development of principles and best practices to safeguard employees, as well as ordering a report on the potential impact of AI on labor markets. Among other things, this will include a requirement for agencies to issue guidance to landlords, federal benefits programs, and federal contractors on how they can implement AI while still protecting against discrimination.
6. Promoting innovation and competition
To further advancements in AI, the EO orders a pilot program for a “National AI Research Resource” to catalyze AI research across the country. The EO further orders open access to technical assistance and resources for small AI developers in an effort to promote a fair, open, and competitive AI ecosystem.
7. Advancing American leadership abroad
To promote global collaboration, the State Department and the Commerce Department will lead an effort to establish robust international AI frameworks and will “expand bilateral, multilateral, and multistakeholder engagements to collaborate on AI.”
8. Ensuring responsible and effective government use of AI
The EO also takes steps to ensure that government agencies deploy AI in a safe and responsible manner by ordering guidance on the matter, accelerated hiring of AI experts, and assistance in procuring AI products and services.
What will be the impact of the EO moving forward?
While this EO is a meaningful step forward in addressing the challenges presented by the rapid and widespread adoption of AI, there’s still action that needs to be taken and questions that need to be answered.
In the fact sheet announcing the EO, the White House states that it’ll remain in discussions with Congress to pass bipartisan AI legislation; various members of Congress have already started introducing legislation that addresses different parts of AI, but so far, no sweeping legislation has been introduced.
Earlier this year, Senate Majority Leader Chuck Schumer introduced a strategy for AI policymaking, which he presented as a starting point for further bipartisan efforts in this space. At the same time, others argue that before legislation can target AI specifically, the US still needs a comprehensive data privacy and security law that will protect consumer data and businesses’ right to innovate.
Put into the context of the Biden administration's EO, the absence of federal legislation on privacy, or on AI more specifically, raises questions of enforceability: without federal law, some aspects of this executive order could be difficult to enforce.
What is the impact on businesses?
Although the critical nature of AI governance programs is becoming more and more apparent, businesses developing these programs still don’t have much of a roadmap to build from and are still looking to examples and research for guidance.
In this instance, the actions discussed in the EO will help organizations define what responsible AI is and how they can implement it in their day-to-day operations. In the long term, the research into new controls that this EO calls for will give companies new tools and data for building and using trustworthy systems and will continue to flag areas of risk that companies need to account for.
Tech leaders in AI have had largely positive responses to the EO. Because there’s so much emphasis placed on protecting American leadership and innovation in this field, companies can continue to research and develop their AI systems while also navigating the safeguards that need to be in place to ensure safe, secure, and transparent use of AI.