“An industrial revolution, an information revolution, and a renaissance—all at once.” That’s how the Trump administration describes artificial intelligence (AI) in its new AI Action Plan.
The United States remains the world’s leading center of AI innovation, home to tech giants like Microsoft, Google, Anthropic, Nvidia, OpenAI, AMD, and Meta. Yet despite this position, the federal government operates without a comprehensive statutory framework governing the development and deployment of AI. This gap poses a significant challenge for federal agencies seeking to leverage AI technologies.
AI is often described as a convergence of revolutions – industrial, informational, and intellectual. Its potential scale and impact have few historical parallels.
The term “artificial intelligence” was coined in the 1950s, yet only in recent years has AI become integrated into mainstream business functions, advanced analytics, national security systems, and public-sector operations.
Modern AI models are complex: they routinely leverage deep learning and natural language processing (NLP), train on massive datasets, and identify patterns at a scale beyond human capacity. The resulting technological acceleration has created both extraordinary opportunities and profound risks.
Global Competitiveness and Strategic Rivalry
The federal government recognizes AI as a strategic national advantage, yet U.S. leadership is being tested by rapidly maturing regulatory frameworks abroad.
The European Union’s AI Act, the world’s first comprehensive, binding AI regulation, entered into force in August 2024. It takes a risk-based approach, banning unacceptable-risk systems – such as social scoring and real-time biometric identification – as of February 2025. The EU also mandates strict compliance for high-risk AI systems, forcing U.S. companies to adapt their tools for the European market.
Simultaneously, China is aggressively regulating generative AI services. As of September 1, 2025, China mandates that all AI-generated content carry visible watermarks and technical metadata. Furthermore, generative AI providers must file security assessments to ensure training data aligns with state values.
China’s investment in AI has not gone unnoticed by the Trump administration. On April 3, 2025, the Executive Office of the President, Office of Management and Budget (OMB), under Director Russell T. Vought, released memorandum M-25-21, Accelerating Federal Use of AI through Innovation, Governance, and Public Trust, which outlines the U.S. government’s data-related expectations and requirements for federal agencies in the context of their AI adoption. It states, “This memorandum provides guidance to agencies on ways to promote human flourishing, economic competitiveness, and national security.” The document is a blueprint for AI development, not a proposal for regulating it.
The United States’ Patchwork Regulatory Landscape
Federal AI governance currently consists of executive actions, sector-specific guidance, and targeted laws rather than a national policy framework.
This has left the U.S. with a multi-layered but fragmented system. Agencies rely on existing legal authorities – including civil rights, consumer protection, and national security laws – while implementing voluntary frameworks such as the NIST AI Risk Management Framework 1.0. Meanwhile, states have become the primary drivers of binding AI legislation, enacting diverse laws addressing employment AI, biometric data, healthcare AI, and deepfakes.
The current fragmentation of state AI laws has created significant compliance hurdles for developers and federal partners. In response, the Trump Administration is aggressively moving toward federal preemption to establish a unified national standard. The administration argues that a patchwork of conflicting regulations – such as the Colorado AI Act or emerging rules from the California Privacy Protection Agency – impedes interstate commerce and slows American competitiveness.
To enforce this, the new Executive Order directs the establishment of an AI Litigation Task Force, a body empowered to challenge state laws deemed overly burdensome or inconsistent with federal policy. The strategy mirrors the legal logic used by groups like the National Pork Producers Council to challenge state overreach in other industries, prioritizing generally applicable permitting reforms and interstate commerce protections over local mandates.
However, this preemption is not absolute. The attorney general’s judgment will likely preserve state authority in specific high-priority areas. The administration has explicitly stated it will not preempt state laws related to child safety protections, data center infrastructure, or state government procurement. This nuance creates a dual-track system: broad federal deregulation to support AI innovation combined with targeted state authority over physical infrastructure and vulnerable populations.
The AI Chasm
Federal AI implementations are still in the early-adopter phase. Most agencies have only begun dabbling in AI investments, and progress will remain slow until several challenges are addressed.
Data
In the commercial world, most companies are reluctant to invest in building large AI applications; I have heard many CDOs state, “Our data just isn’t ready.” Having worked for over two decades in both federal and commercial industries, I can attest that the data in most federal agencies is even worse: more inaccurate, more misunderstood, more incorrectly formatted, and more needlessly duplicated.
AI Best Practices & Knowledge
AI development is an emerging field. As a result, few professionals have hands-on experience successfully building large AI applications. In addition, best practices are just beginning to emerge, and those that do exist continue to evolve as technology advances.
Inflated Expectations
In a world where a $25-a-month ChatGPT subscription can write a doctoral dissertation, proofread thousands of pages of text, or create a custom image for a birthday party in seconds, executives expect their enterprise AI investments to be just as cost-effective and easy to use.
Fear
Numerous IT incidents have led to public litigation and large settlements. Technology leaders are just beginning to understand AI, and many fear they cannot manage its risks.
The Road Ahead: What Federal Agencies Must Do
Federal agencies increasingly rely on advanced analytics and AI-enabled tools to fulfill missions, deliver services, and uphold regulatory obligations. To leverage AI effectively and responsibly, agencies should implement two foundational capabilities:
1. Establish Enterprise AI Governance
AI governance is an integrated framework of technologies, capabilities, risk management, regulations, and policies that guides the development, deployment, and use of AI. Effective AI governance includes:
Clearly defined roles and accountability structures
Formalized risk-assessment and oversight processes
Transparent reporting and documentation standards
Ethical guidelines aligned with statutory mandates
Clear distinctions between limited-risk AI systems (like internal chatbots) and high-risk systems (like benefits determination) that trigger strict federal reporting and legal obligations (see the sketch below)
Without these elements, agencies face heightened risks, including unintended discrimination, security vulnerabilities, operational failures, and the erosion of public trust.
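As a concrete illustration of how risk-tiering can drive oversight, the Python sketch below models a minimal AI system inventory. It is a hypothetical example, not a prescribed federal schema: the tier names, fields, and oversight rules are illustrative assumptions an agency would replace with its own policy.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers; an agency's actual tiers come from its own policy."""
    LIMITED = "limited"   # e.g., an internal chatbot
    HIGH = "high"         # e.g., a benefits-determination model

@dataclass
class AISystem:
    name: str             # identifier in the agency AI inventory
    owner: str            # accountable official (defined role and accountability)
    risk_tier: RiskTier

def oversight_requirements(system: AISystem) -> list[str]:
    """Map a system's risk tier to the reviews it must complete."""
    requirements = ["register in the agency AI inventory"]
    if system.risk_tier is RiskTier.HIGH:
        requirements += [
            "formal risk assessment and ongoing oversight",
            "transparent reporting and documentation",
            "review against statutory and ethical mandates",
        ]
    return requirements

# A high-risk benefits-determination model triggers the full review set.
scorer = AISystem("benefits-eligibility-scorer", "Chief AI Officer", RiskTier.HIGH)
print(oversight_requirements(scorer))
```

Even a toy inventory like this makes the governance elements above enforceable: every system has a named owner, and high-risk systems cannot skip the assessment and documentation steps.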
2. Implement a Robust Data Governance Program
AI is only as reliable as the data that fuels it. Agencies must expand their data governance programs by establishing:
Strong metadata management that provides the who, what, when, where, why, and how of the data
Data ownership and stewardship roles
Uniform data standards and metadata practices
Data quality management and validation controls
Secure data lifecycle management processes
AI-driven capabilities that identify PII (personally identifiable information), dormant data, expired analytics, and dormant ETL (extract, transform, and load) processes, as illustrated in the sketch below
Strong data governance ensures that federal AI systems operate on trustworthy, well-managed data while maintaining compliance with privacy and security regulations.
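To make the PII-identification item above concrete, here is a minimal rule-based scan for common PII patterns in a tabular record. The regular expressions, field names, and sample values are illustrative assumptions; a production program would rely on vetted, agency-approved detection tools rather than ad hoc patterns.

```python
import re

# Illustrative PII patterns only; real programs use vetted, approved detectors.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scan_record(record: dict[str, str]) -> dict[str, list[str]]:
    """Return which fields of a record match which PII patterns."""
    hits: dict[str, list[str]] = {}
    for field, value in record.items():
        matched = [name for name, rx in PII_PATTERNS.items() if rx.search(value)]
        if matched:
            hits[field] = matched
    return hits

# Flag fields that need masking before the data feeds an AI model.
row = {"notes": "Call 555-123-4567", "contact": "jane.doe@example.gov"}
print(scan_record(row))  # {'notes': ['phone'], 'contact': ['email']}
```

The same pattern-driven approach extends to the other items on the list, such as tagging dormant datasets by last-accessed date or flagging ETL jobs with no recent runs.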
Together, AI governance and data governance form the structural foundation for scalable, responsible, mission-aligned AI adoption across the federal government. Agencies that invest in these capabilities today will be better positioned to harness AI’s transformative potential while upholding the principles and protections that define American democracy.
Aligning with the National AI Strategy
The path forward is defined by the Trump Administration’s July release of the AI Action Plan, which establishes a clear mandate: the federal government will prioritize encouraging AI innovation over imposing restrictive regulations.
When President Trump signed the recent Executive Order, it signaled a shift away from the European Union’s AI Act model: rather than waiting for new federal legislation or a unified AI bill, the administration will pursue a national AI strategy that preempts conflicting state laws. This approach explicitly rejects the existing patchwork of state AI regulation, instead empowering AI developers to drive responsible innovation without the burden of navigating complex, conflicting AI-specific legislation.
Agencies must recognize that compliance with this AI policy is a condition for future resources.
The Special Advisor for AI has indicated that adherence to these standards will be a key factor in awarding discretionary grants and authorizing new AI compute infrastructure. Whatever machine learning or AI tools an agency deploys, it must implement trustworthy AI governance that aligns with OECD AI principles to help maintain global AI leadership.
Failure to do so could leave initiatives vulnerable to oversight by the Federal Trade Commission, making it critical to adopt a unified framework for responsible AI.