Today, the Biden-Harris Administration is unveiling a new set of measures designed to strengthen responsible U.S. leadership in artificial intelligence (AI) while protecting the rights and safety of the public. These actions build on the Administration’s ongoing work to ensure that emerging technologies enhance, rather than undermine, the everyday lives of Americans, and represent another step toward a coordinated federal approach to both the risks and benefits of AI.
AI is rapidly becoming one of the most influential technologies of the 21st century. To fully realize its potential, however, its risks must be identified and managed from the outset. President Biden has repeatedly emphasized that AI policy must put people and communities first by promoting innovation that serves the public interest, while safeguarding national security, democratic values, and economic stability. A central part of this approach is the expectation that companies bear a core responsibility to rigorously test and secure their products before deploying them or making them publicly available.
As part of this effort, Vice President Harris and senior Administration officials will meet today with the CEOs of four leading American AI firms — Alphabet, Anthropic, Microsoft, and OpenAI. The goal of this discussion is to reinforce the obligation of technology leaders to develop AI in ways that are responsible, trustworthy, and ethical, with clear safeguards in place to reduce risks and prevent harms to individuals and society. This meeting is one component of a broader, continuing dialogue with advocates, industry, researchers, civil rights and not-for-profit organizations, community groups, and international partners on the full range of AI-related challenges.
These new steps add to a series of actions already taken by the Administration to encourage responsible AI development. Recent milestones include the release of the Blueprint for an AI Bill of Rights and related executive actions, as well as the publication of the AI Risk Management Framework and a roadmap for establishing a National AI Research Resource.
The Administration has also moved to protect Americans as AI is increasingly integrated into daily life. In February, President Biden signed an Executive Order directing federal agencies to identify and address bias in their design and use of new technologies, including AI, and to guard the public against algorithmic discrimination. In addition, the Federal Trade Commission, Consumer Financial Protection Bureau, Equal Employment Opportunity Commission, and the Department of Justice’s Civil Rights Division recently issued a joint statement affirming their shared commitment to using existing legal authorities to shield the public from AI-related harms.
At the same time, the federal government is addressing the national security risks that AI poses, particularly in the areas of cybersecurity, biosecurity, and safety. This work includes mobilizing cybersecurity experts from across the national security community to support leading AI companies by sharing best practices for protecting AI models, infrastructure, and networks.
New investments to strengthen responsible AI research and development in the United States.
The National Science Foundation (NSF) is committing $140 million to launch seven new National AI Research Institutes. With this expansion, there will be 25 Institutes distributed across the country, extending participation to institutions in nearly every state. These Institutes foster collaboration among universities, federal agencies, industry, and other partners to drive transformative AI advances that are ethical, trustworthy, and oriented toward the public good. Beyond encouraging responsible innovation, the Institutes enhance the nation’s AI research and development ecosystem and help build a diverse AI workforce. The newly announced Institutes will focus on critical domains such as climate, agriculture, energy, public health, education, and cybersecurity.
Public evaluations of current generative AI systems.
The Administration is also announcing that leading AI developers — including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI — have voluntarily committed to participate in a public assessment of their AI systems. This evaluation, conducted on a platform developed by Scale AI and hosted at the AI Village at DEFCON 31, will follow responsible disclosure principles. Thousands of community participants and AI experts will have the opportunity to rigorously test these models and examine how well they align with the principles and practices laid out in the Blueprint for an AI Bill of Rights and the AI Risk Management Framework. The findings from this open testing will provide valuable insights to researchers and the public about the real-world impacts of these systems and will help developers identify and address issues. Independent testing, outside the control of either the companies or the government, is a key element of robust AI evaluation.
Policies to ensure the U.S. government leads by example in managing AI risks and using AI responsibly.
The Office of Management and Budget (OMB) will release draft policy guidance for public comment on the federal government’s use of AI systems. This guidance will set concrete requirements for federal departments and agencies to ensure that the development, procurement, and deployment of AI prioritize the protection of Americans’ rights and safety. At the same time, it will support agencies in responsibly leveraging AI to fulfill their missions, improve service delivery, and promote equitable outcomes. By establishing clear standards, the guidance is expected to serve as a reference point for state and local governments, private companies, and other organizations in their own AI procurement and use. OMB plans to issue the draft guidance this summer, inviting feedback from advocates, civil society groups, industry, and other stakeholders before finalizing it.