Jessica Curyto

Biden and Xi Discuss AI as Executive Order Implementation Begins

President Biden and President Xi met on Wednesday, November 15, 2023, on the margins of the Asia-Pacific Economic Cooperation (APEC) Summit in San Francisco. Among the topics discussed, the two leaders agreed to address the risks of advanced Artificial Intelligence (AI) systems through U.S.-China government talks. The plan to convene experts to discuss risk and safety issues appears to reflect a mutual willingness to work together in areas where interests align. The coordination is expected to address some of the most dangerous risks posed by the use of AI, such as those outlined in President Biden’s whole-of-government Executive Order (EO) on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. While the EO does not directly reference any country, China’s use of AI to advance its military capabilities or to pursue malicious cyber activity is a top concern for the U.S. Government. This week’s meeting is timely, as the Biden Administration sets out to rapidly implement the EO and directs agencies to meet deadlines ranging from 30 to 365 days after its issuance.

Objectives

The EO attempts to harness AI’s technological advances across a wide variety of industry sectors while mitigating significant national security risks and challenges associated with misuse. It does so by establishing Federal standards and reporting requirements to direct the safe, secure, and responsible development of AI systems both inside and outside the government. The standards and reporting requirements build upon the voluntary commitments the White House secured from 15 U.S. companies, which include efforts to red-team AI models, mitigate harmful capabilities, and safeguard AI models against cyber and insider threats. Additionally, the Administration intends to lead efforts with allies and partners to establish a strong international framework for managing risks while realizing the benefits of AI.

Implementation

The EO calls for establishing the White House Artificial Intelligence Council to coordinate implementation activities across more than 20 Federal agencies. The Assistant to the President and Deputy Chief of Staff for Policy will chair the Council.

National Security Risks

Several security-related risks are cited in areas such as biotechnology, cybersecurity, critical infrastructure, privacy, and confidentiality. Some of the more urgent national security risks include the use of dual-use foundation models[1] that substantially lower the barrier to entry for non-experts to develop chemical, biological, radiological, or nuclear (CBRN) weapons, as well as models that enable autonomous offensive cyber capabilities. Certain agencies are directed to conduct risk assessments related to AI in critical infrastructure and in cybersecurity, including the risks of AI being misused to assist in the development or use of CBRN threats. In addition, there are requirements for studies to assess how AI may increase biosecurity risks, especially at the intersection of AI and synthetic biology. The EO calls for a National Security Memorandum to address the U.S. military and Intelligence Community uses of AI, including means to counter adversaries’ application of AI for military purposes.

There are several risk mitigation actions aimed at ensuring the safety and security of AI technology, including:

  • Development of standards, tools, and tests;
  • Provisions to protect against AI use to engineer dangerous biological materials;
  • Guidance and best practices for detecting AI-generated content and authenticating official content (see the illustrative sketch following this list);
  • Establishment of an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software.
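
On the authentication point above, a short sketch may help make the concept concrete. At its core, authenticating official content means cryptographically signing it so consumers can verify origin and integrity against a published public key. The Ed25519 scheme and the Python "cryptography" package below are assumptions chosen for brevity, in the spirit of provenance standards such as C2PA, not tools the EO itself names.

    # Illustrative sketch of content authentication via digital signatures.
    # Requires the third-party "cryptography" package.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Publisher side: sign the content bytes with a private key.
    private_key = Ed25519PrivateKey.generate()
    content = b"Official press release text or media bytes."
    signature = private_key.sign(content)

    # Consumer side: verify against the publisher's published public key.
    public_key = private_key.public_key()
    try:
        public_key.verify(signature, content)
        print("Content is authentic and unmodified.")
    except InvalidSignature:
        print("Content failed authentication.")

In deployed provenance systems, such signatures are additionally bound to metadata about how and by whom the content was produced.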

What Does this Mean?

The Biden Administration is embarking on a Federal Government-wide approach to evaluate and mitigate risks of AI systems while seeking to promote an innovative, competitive AI ecosystem that both supports workers and protects consumers. The broad and ambitious directives are focused on the Federal Government’s perception of risks related to the development and use of AI. There are standards and guidance related to AI safety and security that will impact the private sector, particularly companies engaged in high tech and critical infrastructure. These government efforts will require ongoing collaboration with developers and users of AI.

Key Agency Roles

Agencies charged with significant responsibilities have issued individual fact sheets summarizing their unique roles, including the Office of Management and Budget, the Department of Commerce, the Department of Homeland Security, and the Department of Energy.

Office of Management and Budget

OMB will direct agencies to designate a Chief Artificial Intelligence Officer tasked with coordinating the agency’s use of AI, promoting AI innovation, and managing risks from the agency’s use of AI. OMB will provide guidance on Federal Government use of AI, including guidance on labeling and authenticating AI content. OMB will also convene and chair an interagency council to coordinate the development and use of AI in agencies’ programs and operations, aside from the use of AI in national security systems.

Department of Commerce

The National Institute of Standards and Technology (NIST) will:

  • Develop industry standards for the safe and responsible development of frontier AI models;
  • Create test environments to evaluate these systems;
  • Develop standards on privacy and authenticating AI-generated content;
  • Develop generative AI versions of the AI Risk Management Framework (NIST AI 100-1) and the Secure Software Development Framework;
  • Establish extensive red-team testing standards for AI developers.

NIST will house the United States AI Safety Institute (USAISI), which will operationalize NIST’s AI Risk Management Framework and coordinate with similar institutes in ally and partner countries.

Within 90 days of the date of the EO, the Secretary of Commerce will invoke the Defense Production Act (DPA) to implement reporting requirements for companies developing, or demonstrating an intent to develop, dual-use foundation models. These companies must provide the Federal Government with ongoing information on the physical and cybersecurity protections of their models and the results of AI red-team testing (based on NIST-developed guidance). The Commerce Secretary will also use DPA authorities to compel companies to provide information on the use of large-scale computing clusters beginning 90 days after the date of the EO. In addition, the Bureau of Industry and Security (BIS) will develop regulations to enhance safety as next-generation frontier AI models are developed and tested. Regarding risks posed by synthetic content, the Secretary of Commerce will develop guidance on existing tools and practices for digital content authentication and synthetic content detection.
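
For companies gauging whether these reporting requirements may reach them, the EO’s interim technical threshold for covered models is training compute greater than 10^26 integer or floating-point operations. The back-of-envelope sketch below is only illustrative: the model size and token count are assumed values, and the 6 × parameters × tokens rule is a common heuristic for dense transformer training compute, not a method the EO prescribes.

    # Back-of-envelope check against the EO's interim reporting threshold.
    # Assumptions for illustration only: a 70B-parameter model trained on
    # 2T tokens, with compute estimated via the common 6 * params * tokens rule.
    PARAMS = 70e9        # assumed parameter count
    TOKENS = 2e12        # assumed training tokens
    THRESHOLD = 1e26     # EO Section 4.2 interim threshold (total operations)

    estimated_ops = 6 * PARAMS * TOKENS
    print(f"Estimated training compute: {estimated_ops:.2e} operations")
    if estimated_ops >= THRESHOLD:
        print("May be subject to dual-use foundation model reporting.")
    else:
        print("Below the interim reporting threshold.")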

Department of Homeland Security

DHS is tasked with managing AI in critical infrastructure and in cybersecurity, including the application of NIST’s AI Risk Management Framework to critical infrastructure owners and operators.

DHS will:

  • Utilize AI to improve U.S. cyber defense, including by conducting an operational pilot project to identify, develop, test, evaluate, and deploy AI capabilities, such as large language models, to aid in the discovery and remediation of vulnerabilities in critical U.S. Government software, systems, and networks (see the illustrative sketch following this list);
  • Evaluate risks of AI being misused to assist in the development or use of CBRN threats, particularly biological weapons.
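
As a concrete illustration of the kind of capability the DHS pilot contemplates, the sketch below asks a large language model to review a code snippet for vulnerabilities. The model name, prompt, and vulnerable snippet are our assumptions, not details of the DHS program; the OpenAI Python client (v1+) stands in for any capable code-review model.

    # Illustrative sketch: using an LLM to flag likely vulnerabilities in a
    # code snippet. Requires the "openai" package (v1+) and an API key in the
    # OPENAI_API_KEY environment variable. All specifics are assumptions.
    from openai import OpenAI

    client = OpenAI()

    SNIPPET = '''
    import sqlite3

    def find_user(conn, username):
        # String interpolation into SQL is a classic injection risk.
        cursor = conn.execute(f"SELECT * FROM users WHERE name = '{username}'")
        return cursor.fetchone()
    '''

    response = client.chat.completions.create(
        model="gpt-4",  # assumed model; any capable code-review model would do
        messages=[
            {"role": "system",
             "content": "You are a security reviewer. List likely "
                        "vulnerabilities in the code and suggest a fix for each."},
            {"role": "user", "content": SNIPPET},
        ],
    )

    print(response.choices[0].message.content)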

Cybersecurity and Infrastructure Security Agency (CISA) will:

  • Assess potential cross-sector risks related to the use of AI in critical infrastructure sectors within 90 days of the date of the EO, and annually thereafter.

CISA’s assessment will include ways in which deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyberattacks.

Department of Energy

DOE will develop tools to evaluate AI capabilities to understand and mitigate risks associated with nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy-security threats or hazards. Additionally, DOE will work with the National Science Foundation to establish a training pilot program for scientists with a goal of training 500 new researchers by 2025.

What’s Next?

In addition to the numerous actions and deadlines associated with implementing the EO, there are upcoming regulatory actions and reporting requirements that will affect certain companies. For example, within 90 days of the date of the EO, the Department of Commerce is directed to propose regulations requiring U.S. Infrastructure as a Service (IaaS) providers to report to Commerce when a foreign person uses their products to train large AI models with the potential for misuse in malicious cyber-enabled activity. This upcoming rulemaking complements the recent AI chips rule issued by BIS, which seeks public comments on regulations to control access to IaaS solutions with supercomputer-level computational power that could be used to develop large dual-use AI foundation models with potential capabilities of concern (e.g., weapons modeling, malicious cyber activities).

Also, within 270 days of the date of the EO, the National Telecommunications and Information Administration will solicit public comments on risks and potential benefits of dual-use foundation models with widely available model weights.

What to Do?

Review EO and Plan for Potential Impact: This is a wide-reaching Executive Order. Companies should continuously monitor the implementation of the EO in areas of relevance, as well as review proposed regulations and reporting requirements such as those for dual-use foundation models and large-scale computing clusters.

Provide Comments: As proposed rules and guidance are released, companies should review them closely for any impacts and consider submitting comments. Companies should strongly consider providing comments on the recently proposed semiconductor rules and monitoring developments with those rules. To date, OMB has released a draft policy on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence; public comments are due December 5, 2023.

Consider Applying to AI Advisory Boards: As DHS sets up its AI Safety and Security Advisory Board and as NIST establishes the United States AI Safety Institute, companies should consider encouraging appropriate subject matter experts to apply for membership to help shape recommendations, guidance, and best practices. To date, NIST has begun to establish the Artificial Intelligence Safety Institute Consortium and is inviting organizations to provide letters of interest on an ongoing basis. The Consortium plans to begin its collaborative activities no earlier than December 4, 2023.

Assess and Mitigate AI Risks: Generally, AI-related risks tend to fall into the following three categories:

  • AI used as an instrumentality of threat activity;
  • Malicious targeting of AI systems;
  • Unintended consequences associated with the use of AI systems that could have political, economic, ethical, and other potentially destabilizing effects.

The EO addresses risks in all three categories. Specifically, it includes reporting requirements to guard against the malicious targeting of certain AI systems, and it addresses the use of dual-use foundation models as instruments to perform tasks that pose serious risks to security, national economic security, and national public health or safety. Companies involved in developing or using large-scale AI models should ensure they understand the types of risks the Federal Government seeks to mitigate (e.g., the potential for AI models to be used in the development of advanced weapons systems, weapons of mass destruction, or CBRN weapons, or for significant malicious cyber-enabled activities) and assess whether existing internal physical and cybersecurity protections are adequate to mitigate those risks.

Monitor Activity on Capitol Hill: Given growing bipartisan support to regulate AI and President Biden’s acknowledgement that legislative action will be required, companies should also track bipartisan legislation to assist with EO implementation.

For more details on assessing and managing AI risks, please read our Framework for Managing Risks Associated with Artificial Intelligence.

The Chertoff Group is a specialized advisory firm that helps organizations achieve their business and security objectives in a complex risk environment. Our highly qualified and experienced team includes a diverse mix of commercial and public sector security backgrounds. We serve global Fortune 500 enterprises across multiple sectors, as well as small to medium-sized businesses with specialized needs. Effectiveness, durability, and stakeholder alignment are common themes, and our work is grounded in the principles of anticipating what is next and demonstrating best practice and value.

Our team helps organizations manage cyber, physical and geopolitical risks; navigate evolving regulatory and compliance requirements; and discover opportunities to win business and create value.

To learn more, contact us at info@chertoffgroup.com.


[1] “Dual-use foundation model” is defined in the EO as “an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters…”
