On Devil’s Night, two significant AI developments were announced. First, the White House issued its Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (“AI EO”). Second, the Group of 7 (“G-7”) announced its International Guiding Principles on Artificial Intelligence (“G-7 Principles”) and companion Code of Conduct for AI Developers (“G-7 Code”). All three are broad strokes – the devil will be in the details.
Following is a short summary of each, but please check back soon for more analysis and key takeaways for businesses and their AI governance programs.
The AI EO is intended to create a framework for responsible innovation and use of artificial intelligence (“AI”) in the United States. It builds on the White House’s October 2022 Blueprint for an AI Bill of Rights.
Highlights of the AI EO are:
The AI EO is not directly applicable to the private sector; rather, it directs the federal government to create certain AI-related standards, some of which will apply to AI developers and users. Although the AI EO is revocable by a successor President, it offers some structure for AI governance until Congress agrees on comprehensive AI legislation.
The G-7 Principles are directed to “all AI actors, when and as applicable to cover the design, development, deployment and use” of AI, and are intended to cover “advanced AI systems”, which include in particular “the most advanced foundation models and generative AI systems”. The G-7 Code has a narrower focus: developers of advanced AI systems. Both are voluntary – at least for now.
In the introductions to the G-7 Principles and G-7 Code, the G-7 makes clear that both are intended as “innovation friendly” and expected to evolve as AI systems evolve.
The G-7 Principles are:
The G-7 Code incorporates the G-7 Principles into a “risk-based” approach to development, with a focus on accountability and transparency.