Pay attention to these four notable directives in Biden’s AI Executive Order.
This article originally appeared on the Forrester blog.
With much anticipation, and two days before the UK’s AI Safety Summit last week, the White House published the Fact Sheet and full Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, signaling the government’s commitment to AI.
The EO sets the tone for the administration’s agenda to bolster the nation’s competitive advantage by investing in new opportunities that AI will create and fostering AI entrepreneurship while mitigating risks.
The Executive Order Is Broad In Scope, With Big Implications Beyond The Executive Branch
The EO builds on the administration’s previous actions to “drive safe, secure, and trustworthy development of AI.” The tone is optimistic, calling out opportunities for harnessing AI alongside guidelines for risk mitigation, and its broad scope shows a deep understanding of the nuances and impacts of AI.
The EO has teeth beyond mandatory requirements for the executive branch with its commitment to developing new standards, taking a multi-agency approach to enforcement, and holding the government accountable to the same standards for “responsible and effective” use of AI. Ultimately, the EO will have a big and lasting impact on companies and industries that transact with the nation’s biggest employer: the federal government.
Pay Attention To These Four Notable Directives In The EO
The EO calls for a “society-wide effort” from government, the private sector, academia, and civil society to address eight AI priorities. Expect it to impact your enterprise AI strategy in these four critical areas:
New standards for AI safety and security. Currently, no formal AI red-teaming (structured testing to find flaws and vulnerabilities in an AI system) requirements exist. That changes with the mandate for the National Institute of Standards and Technology (NIST) and the Department of Commerce (DOC) to establish red-teaming standards within 270 days. Companies creating foundation models will be required to share AI red-team test results with the federal government, creating an opportunity for enterprises to also request them in their procurements. NIST is also tasked with developing a companion resource to its AI Risk Management Framework (NIST AI 100-1) that addresses generative AI (genAI), as well as a companion to its Secure Software Development Framework, to foster secure development practices for genAI and dual-use foundation models.
Clarification on intellectual property ownership. One far-reaching economic impact is the EO’s intent to protect US companies’ innovation investments — and workers. Section 5.2 (c) (i-iii) directs the DOC and the US Patent and Trademark Office (USPTO) to provide specific guidance on how genAI affects the inventorship process, patent eligibility when genAI is involved in invention, and any special carve-outs or updates necessary as AI is used in other critical and emerging technologies. This section also directs these agencies to evaluate the implications of using copyrighted works to train models.
Protection of Americans’ privacy. The EO recognizes that personal data protection is critical to safe and trustworthy AI. It emphasizes the need for a federal privacy bill, encourages research and adoption of privacy-enhancing technologies (PETs), and demands that agencies create safeguards for the ethical collection and use of citizens’ personal data for AI. But the impact of these mandates is uncertain, as the EO sets no firm timeline and offers few measures that can be taken immediately.
Responsible and safe use of AI through third-party ecosystem risk monitoring. Within 180 days, US infrastructure-as-a-service (IaaS) providers reselling products abroad will be required to verify the identity of any person obtaining an IaaS account from a foreign reseller. The EO also encourages independent agencies outside the executive branch (e.g., EPA, CIA, and EEOB) to use the “full range of authorities” to ensure that regulated entities conduct “due diligence on and monitor any third-party AI services” and to mandate an “independent evaluation of vendors’ claims concerning both the effectiveness and risk mitigation of their AI offerings.” Companies already vet foreign customers for semiconductor chips; now, they’ll have to do the same for AI.
Enterprises: Prepare For Domestic And International AI Standards With AI Governance
Executive orders are typically aimed at directing the operations of the federal government, but they also exert significant influence on large organizations. For example, the EO calls for broad, international collaboration on AI standards and for alignment on risk mitigation. A common, shared framework and standard would be a welcome relief for multinational organizations navigating a fragmented, confusing, and growing regulatory landscape for AI both in the US and around the world.
Also, the EO’s new guidance directing the federal government to invest in AI and overhaul how it procures AI products and services should ease the pain for companies that sell to, service, and support the government. In doing so, the EO requires the federal government to eat its own dog food, as the private sector watches for lessons on what works and what doesn’t.
Enterprises should prepare for the downstream impacts of this executive order by assessing the risks of existing AI use cases against the NIST AI Framework, launching a formal AI governance initiative, and monitoring for the creation of companion documents and new standards mandated in the EO.