Although the United States is currently the undisputed global leader in artificial intelligence (AI), home to tech giants like Google, Nvidia, OpenAI, Microsoft, AMD, and Meta, no comprehensive federal legislation regulates its development there. In its AI action plan, Winning the Race: America’s AI Action Plan, the White House claims, “The United States is in a race to achieve global dominance in artificial intelligence (AI). Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits. Just like we won the space race, it is imperative that the United States and its allies win this race. President Trump took decisive steps toward achieving this goal during his first days in office by signing Executive Order 14179, ‘Removing Barriers to American Leadership in Artificial Intelligence’ calling for America to retain dominance in this global race and directing the creation of an AI Action Plan.”
If anything, the Trump administration takes a laxer view of AI technology than the previous administration. On April 3, 2025, the Office of Management and Budget, under Director Russell T. Vought, released memorandum M-25-21, Accelerating Federal Use of AI through Innovation, Governance, and Public Trust, which outlines the U.S. government’s data-related expectations and requirements for federal agencies in the context of their AI adoption. It states, “This memorandum provides guidance to agencies on ways to promote human flourishing, economic competitiveness and national security.” The memorandum is a blueprint for AI development, not a proposal for regulating it.
The U.S. is forging a fragmented path to AI governance, characterized by a state-led “patchwork” of laws and a federal strategy of targeted oversight and executive action. A deep-seated tension exists between fostering innovation and mitigating profound societal risks.
What is Artificial Intelligence?
According to SAS, “Artificial intelligence (AI) makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks. Most AI examples that you hear about today – from chess-playing computers to self-driving cars – rely heavily on deep learning and natural language processing. Using these technologies, computers can be trained to accomplish specific tasks by processing large amounts of data and recognizing patterns in the data.”
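The idea of “processing data and recognizing patterns” can feel abstract, so here is a deliberately tiny, hypothetical sketch of learning from labeled examples; the data points, labels, and the `classify` function are all invented for illustration, and real AI systems use vastly larger models and datasets. A one-nearest-neighbor classifier simply labels a new point by finding the closest example it has already seen:

```python
# Toy illustration of "learning from data": a one-nearest-neighbor
# classifier labels a new point by finding the closest training example.
import math

# Tiny labeled dataset: (feature vector, label). In a real system these
# would be thousands of numeric features derived from text, images, etc.
training_data = [
    ((1.0, 1.0), "spam"),
    ((1.2, 0.8), "spam"),
    ((8.0, 9.0), "not spam"),
    ((9.0, 8.5), "not spam"),
]

def classify(point):
    """Return the label of the training example nearest to `point`."""
    nearest = min(
        training_data,
        key=lambda item: math.dist(point, item[0]),  # Euclidean distance
    )
    return nearest[1]

print(classify((1.1, 0.9)))   # prints "spam" (close to the spam cluster)
print(classify((8.5, 9.2)))   # prints "not spam"
```

The point of the sketch is only that the “rules” are never written by hand; they emerge from the data, which is why data access and data privacy loom so large in every regulatory debate discussed below.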
Although the term “artificial intelligence” was coined in 1956, it took sixty years for businesses to embrace the technology. Today, AI is everywhere. Businesses promote it as a panacea to fix just about any problem imaginable. It has become the buzzword of all buzzwords. Today, AI is the foundation of chatbots, agentic AI, and Large Language Models (LLMs), which are just machine learning and deep learning models on a vast scale. The biggest AI players in the stock market have seen run-ups in price that make the e-commerce bubble of the 1990s look like a minor market blip rather than a 1929-type stock market crash.
However, this doesn’t mean AI is a technological bust; far from it. AI has already succeeded in almost unimaginable ways. It is the analytical backbone of many business processes, and it will continue to be for the foreseeable future. In his article, McKinsey Maps 400+ AI Use Cases; Finds Trillions in Potential Value, Michael Chui notes AI adds value in 69% of the use cases across the nineteen industries McKinsey studied, including travel, logistics, retail, high tech, oil and gas, insurance, and semiconductors. AI holds enormous promise, but it is not a magic bullet, as many software companies would have their customers and investors believe.
A Race Everyone Fears Losing
The White House, along with many businesses, recognizes the revolutionary potential of AI. The White House calls the technology “An industrial revolution, an information revolution, and a renaissance—all at once.” Somewhat ominously but completely fittingly, the White House adds, “The opportunity that stands before us is both inspiring and humbling. And it is ours to seize, or to lose.” This is a race no one wants to lose. The power of AI could be enormous if it reaches Artificial General Intelligence (AGI) level, a hypothetical type of AI with human-like cognitive abilities, capable of understanding, learning, and applying intelligence to any intellectual task. This differs from current AI, which is specialized in narrow tasks.
However, there is no guarantee we reach AGI; far from it. If we do, it certainly won’t be on the truncated timelines touted by companies like OpenAI and Anthropic. Recent attempts to reach AGI through LLMs have simply failed, taking with them the hopes of many early investors. OpenAI seems to recognize this. Recently, its CEO made waves by implying the company was becoming “too big to fail” and would, like the banking industry in 2008 and 2009, seek federal government assistance should its massive tech investments go south. The statement went down so poorly the CEO had to ‘clarify’ it. He didn’t mean a government bailout was part of his growth plans. Readers misconstrued his words, he says.
All this being said, there are countless other technological avenues to explore with AI. Its ability to discover new techniques and methodologies is what sets it apart from any other previous technology developed by man. It’s that special.
Rivalry Is the Catalyst for Greatness
Federal agencies, such as the National Science Foundation, are investing in AI research and development to support American workers. The White House has established the National Artificial Intelligence Initiative Office to coordinate federal AI research and policy. Other federal agencies, such as the Department of State, are using AI technology to enhance diplomacy and global communication.
AI governance is a key aspect of the US government’s approach to AI, with a focus on responsible use and public trust. The US government is working with industry leaders to develop and implement AI technologies, but the US faces a huge challenge in the race to AGI.
China has the potential to beat the US in the AI race because of the inherent advantages an authoritarian regime has over free democracies. China has a vast and highly engaged population of over one billion internet users who provide massive real-world datasets to the government free of privacy constraints (the government is free to do what it wants, after all). This data is used to rapidly train and scale AI applications. This large-scale user base enables quicker iteration and deployment of consumer-facing AI products. In addition, the Chinese government actively supports AI development through substantial funding, favorable industrial policies, and deep government-business integration.
The “Made in China 2025” initiative is a comprehensive, state-led industrial policy aimed at radically transforming China from the “world’s factory” for low-cost goods into a global leader in high-tech manufacturing and innovation by 2049. AI is a major part of the initiative, particularly focused on embedding AI across manufacturing and physical economy sectors, creating real-world “embodied AI” applications. China’s AI ecosystem is also highly cost-efficient, developing AI systems that run on more affordable hardware, making AI adoption more accessible across industries.
Prisoners of Our AI Devices
There is a modern-day thought experiment, often called the “paperclip maximizer,” which describes an AI given a seemingly harmless goal, such as “make as many paperclips as possible,” that, through superintelligent optimization, ends up converting all matter on Earth, including humans, into paperclips. Even though this is only a thought experiment, it illustrates the worst-case scenario for AI, and for us.
Today, AI regulation is no longer optional. The risk of AI has moved beyond the realm of science fiction into tangible, present-day problems. Algorithmic biases in AI have been found in hiring, lending, and the judiciary. AI erodes privacy in cases like data harvesting for model training and surveillance. Bad actors can use AI to create disinformation, which can threaten national security. Deepfakes can fuel influence operations as well as cyber warfare.
AI can automate and optimize cyberattacks, finding vulnerabilities in software and networks at a scale and speed impossible for humans. This includes creating highly sophisticated phishing emails and malware. AI’s lack of transparency and “explainability” means the technology has a troubling “black box” quality to it. This makes it difficult or even impossible to understand why a specific decision was made. This is particularly problematic in high-stakes fields like medicine or finance, where accountability and an explanation of what is going on under the hood are requirements.
The risks of AI include potential job displacement, economic inequality, algorithmic bias and discrimination, as well as the erosion of privacy. AI is highly effective at automating both routine manual labor and cognitive tasks. This could lead to significant unemployment or underemployment in sectors like transportation, manufacturing, customer service, and even some white-collar jobs, like paralegals and analysts.
The U.S. Approach – A Multi-Layered Patchwork
In October 2023, the Biden administration issued Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” setting out a comprehensive, government-wide framework to ensure responsible development and use of AI. The order established new standards for AI safety, security, privacy, equity, civil rights, and consumer and worker protection while promoting innovation, competition, and U.S. leadership in AI globally. It aimed to balance the promise of AI innovation with the imperative to protect society from its risks. It was grounded in the principles of safety, security, equity, and transparency. Most importantly, it was an order directing federal agencies to act, not a law passed by Congress.
On his first day in office, instead of trying to work within Biden’s well-intentioned framework, President Donald Trump revoked Executive Order 14110. Within three days, he issued a replacement order, marking a major shift in U.S. federal AI policy. President Trump’s Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” rescinded the prior administration’s emphasis on oversight, safety, and equity, replacing it with a deregulatory approach centered on promoting rapid AI innovation and restoring U.S. leadership in this field, even though the U.S. had never relinquished its leadership in this space.
The order revoked previous AI policies deemed to hinder American AI innovation, including those focused on risk mitigation, content provenance, and DEI-related considerations in AI frameworks. It prioritized economic competitiveness, human flourishing, and the protection of national security by accelerating AI development and increasing investment in infrastructure. It also attempted to actively support the global export of American AI technology. The order directed federal agencies to establish procurement and development practices intended to ensure AI systems are “truth-seeking”, “ideologically neutral”, and free from engineered social agendas or “woke” bias in government modeling. Within 180 days, the administration was tasked with developing an AI Action Plan covering policy across technology, education, labor, national security, cybersecurity, energy, trade, and antitrust initiatives.
The U.S. Regulatory Landscape
Currently, the regulatory environment for AI in the United States is highly fragmented. No comprehensive federal legislation governing AI development and its use exists. Several bills aimed at addressing safety, accountability, and ethical concerns are making their way through Congress, but today’s highly partisan political landscape, the inherent complexity of the unique issues involved with AI, and the intense lobbying of the powerful players investing in the technology have slowed progress toward an all-encompassing regulatory solution.
In his article, 2025 May be the Year of AI Legislation: Will We See Consensus Rules or a Patchwork?, Jules Polonetsky states, “In 2024, lawmakers across the United States introduced more than 700 AI-related bills, and 2025 is off to an even quicker start, with more than 40 proposals on dockets in the first days of the new year.” Future legislation should establish guidelines for transparency, protect against misuse (such as with deepfake fraud), and ensure AI technologies grow responsibly while still promoting innovation and public trust.
Three legislative trends rose to the top in 2024 — consumer protection, deepfakes and government use of AI, says Chelsea Canada in her article, 3 Trends Emerge as AI Legislation Gains Momentum. “A couple of states passed first-in-the-nation AI legislation focused on consumer protections. At least half the states addressed deepfakes through new laws targeting the technology’s use in elections and sexually explicit materials. Finally, most states considered or enacted bills related to government use of AI tools,” Canada added.
Algorithmic Accountability Act
According to U.S. Senator Ron Wyden, “The Algorithmic Accountability Act of 2023 requires companies to assess the impacts of the AI systems they use and sell, creates new transparency about when and how such systems are used, and empowers consumers to make informed choices when they interact with AI systems.” The act takes aim at AI systems that may lead to fraud or safety risks, such as deepfakes and autonomous systems. As incidents of AI misuse, including deepfake fraud, continue to rise, these legislative efforts are crucial for establishing regulatory frameworks that protect individuals and organizations while promoting ethical AI development.
Leaving it to the States
California
In the absence of any overriding federal law, the states have stepped up to become the primary legislators. As is so typically the case, California leads the way, quickly becoming the de facto national standard-bearer. The California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA) regulate data, which is the lifeblood of AI. The CCPA, which went into effect on January 1, 2020, established the core set of consumer privacy rights for Californians. It grants California residents extensive rights over any personal information held by businesses, including the right to access, delete, correct, and opt out of the sale or sharing of their data.
The CPRA, which came into effect on January 1, 2023, amended and significantly expanded the CCPA, adding new rights, creating a dedicated enforcement agency, and tightening the rules. Both the CCPA and CPRA are currently in effect, with further amended regulations and new requirements taking effect on January 1, 2026.
On September 29, 2025, California enacted The Transparency in Frontier Artificial Intelligence Act (SB 53), which mandates that developers of large foundational AI models disclose detailed safety protocols, risk assessment procedures, and measures taken to prevent misuse or harmful outcomes from their AI systems. This law targets companies building highly capable AI systems to ensure accountability and public trust.
In addition, California promotes transparency from organizations using AI. They must communicate clearly with users about AI involvement, particularly in consumer-facing applications. California’s AI initiative attempts to balance innovation and consumer protection by pushing companies to proactively address the ethical challenges posed by rapidly advancing AI technologies.
The Remaining Forty-nine
In the 2024 legislative session, “at least 45 states, Puerto Rico, the Virgin Islands and Washington, D.C., introduced AI bills, and 31 states, Puerto Rico and the Virgin Islands adopted resolutions or enacted legislation,” says The National Conference of State Legislatures. These initiatives include:
Colorado’s SB 205, which enacts the most comprehensive AI legislation in the nation. It requires developers and deployers of high-risk AI systems “to use reasonable care to avoid algorithmic discrimination and requires disclosures to consumers.” Additionally, reasonable care must be taken to avoid algorithmic discrimination in consequential decisions related to areas like education, employment, financial services, healthcare, and housing. The law goes into effect in February 2026.
Utah’s Artificial Intelligence Policy Act (SB 149) focuses on transparency by requiring clear and conspicuous disclosure when generative AI is used to interact with consumers. It also created an Office of Artificial Intelligence Policy and an AI Learning Laboratory Program to study and test AI technology. “It also limits an entity’s ability to ‘blame’ generative AI for statements or acts that constitute consumer protection violations,” says the law firm Skadden.
New Hampshire’s House Bill 1432 criminalizes the fraudulent use of deepfakes. However, any media deemed satirical or a parody is not a violation of the law.
South Dakota’s Senate Bill 79 criminalizes the use of AI to create computer-generated child pornography.
The Texas Responsible Artificial Intelligence Governance Act (TRAIGA), effective January 1, 2026, establishes a comprehensive legal framework governing the ethical development, deployment, and use of AI systems in Texas. It applies broadly to entities that develop, deploy, or market AI systems in the state or to Texas residents, setting prohibitions against AI systems designed for harmful purposes.
New York City’s Automated Employment Decision Tool (Local Law 144), effective July 5, 2023, regulates the use of AI and machine learning tools in employment decisions such as hiring and promotions. It requires employers to conduct bias audits of the AI tools used, provide notice to job applicants or employees about their use, disclose the data sources and functions of these tools, and offer alternatives or accommodations if requested. The act aims to ensure transparency, fairness, and accountability in automated decision-making processes.
Arkansas amended the Frank Broyles Publicity Rights Protection Act to explicitly protect individuals from unauthorized commercial use of AI-generated replicas of their voice or likeness without consent. It provides legal recourse such as injunctions and damages. Additionally, Arkansas passed criminal offenses under HB1529 for the unlawful creation or distribution of deepfake visual materials depicting identifiable persons with intent to cause harm, establishing a cause of action and penalties.
Montana’s Government AI Use Law (HB178), which became effective October 1, 2025, limits the use of AI systems by state and local government entities. It also prohibits AI use aimed at manipulating individuals or groups.
In addition to the acts above, the Maryland Department of Information Technology implemented policies and procedures focused on the development, procurement, deployment, use and assessment of AI systems when utilized by the Maryland state government.
Many other states initiated regulatory sandboxes and frameworks encouraging innovation while safeguarding privacy and civil rights, reflecting a burgeoning mosaic of AI governance at the state level that was a direct result of the complete absence of comprehensive federal AI legislation.
Deepfake Legislation
According to The National Conference of State Legislatures, at least 40 states have deepfake legislation pending and at least 50 bills are already in effect. Alabama criminalized a person knowingly creating, recording, or altering a private image without consent if the depicted individual had a reasonable expectation of privacy. California allows individuals to report digital identity theft to a social media platform. The social media platform must then permanently block and remove reported instances of digital identity theft.
Florida requires the addition of disclaimers on political advertisements, electioneering communications, or other miscellaneous advertisements when marketers use deepfakes. Louisiana criminalized the unlawful dissemination or sale of images of another individual created by AI.
Tennessee enacted the “Ensuring Likeness, Voice and Image Security Act of 2024”. This ensures every individual in the state has a property right in the use of his or her name, photograph, voice, or likeness in any medium in any manner.
The AI Regulatory “Patchwork” Problem
In their article, Shaping the future of AI: balancing innovation and ethics in global regulation, Kashefi, Kashefi, and Mirsaraei claim, “As AI technologies become increasingly integrated into societal fabrics, the absence of a unified regulatory approach becomes a critical issue.” The writers note that while the EU’s General Data Protection Regulation (GDPR) offers a comprehensive framework governing data privacy and AI, many other regions don’t have similar, comprehensive regulations. “This disparity leads to a fragmented global AI regulatory landscape, potentially hindering international cooperation and technological advancement,” argue the writers.
The ethical challenges posed by AI include “algorithmic bias, where AI systems may inadvertently perpetuate social inequalities, and the potential for mass surveillance, which raises serious privacy concerns,” say Kashefi, Kashefi, and Mirsaraei.
Recent corporate AI ethics scandals include the Cambridge Analytica scandal, Amazon’s biased AI recruiting tool, and Google’s Project Maven controversy, in which employees protested Google’s partnership with the U.S. military on AI-driven drone surveillance technology. Another is the xAI Grok chatbot data leak, in which xAI publicly exposed over 370,000 private user chatbot conversations through a “share” feature lacking proper privacy controls. Sensitive user data leaked, triggering a major privacy backlash.
“These concerns underscore the urgent need for comprehensive ethical frameworks and robust regulatory mechanisms to guide AI development and deployment. Ensuring that AI development aligns with societal values and ethical principles is crucial for realizing the benefits of AI in a way that is responsible, equitable, and beneficial for all segments of society,” claim Kashefi, Kashefi, and Mirsaraei.
The Road Ahead: Predictions and Possibilities
In the coming years, the regulatory landscape for AI in the U.S. will remain a patchwork of evolving federal principles and diverse state-specific laws, with continued federal efforts to establish a national AI strategy. This complex multi-level system poses challenges for innovators and regulators but also fosters experimentation with varied governance models.
At the federal level, comprehensive AI legislation remains stalled, with no broad regulatory authority enacted yet. Instead, the federal government relies on existing laws (anti-discrimination, privacy, national security), agency guidance, voluntary standards like the NIST AI Risk Management Framework, and focused rules such as the AI Training Act for federal employees. Innovation and competitiveness remain most businesses’ top priority, which drives a desire for reduced regulatory barriers. However, ongoing congressional hearings indicate the AI regulatory debate is far from over.
A narrow, targeted federal bill on a specific issue like deepfakes or election interference is possible, but comprehensive legislation tackling the issues AI raises is probably out of the question. Within the next three to five years, a cooperative federal-state framework or a set of core national standards could emerge that unifies the “patchwork” of laws into a coherent whole.
At the state level, legislative activity has accelerated dramatically, with 38 states enacting about 100 AI laws in 2025 alone. States act as the primary drivers of AI regulation today, crafting laws focused on specific uses such as employment AI, healthcare AI, biometric data, government AI use, consumer protections, and AI deepfake restrictions. Transparency and disclosure requirements are common safeguards. The trend in 2025 shows states favoring transparency and user-facing disclosures over heavy compliance mandates. They are also gradually developing accountability mechanisms, especially around fairness and discrimination.
Key Trends
As AI technology continues to advance rapidly, predictions regarding the evolution of AI regulations suggest the following key trends:
Increased focus on accountability and transparency: organizations will have to demonstrate accountability in their AI systems. They will have to disclose how algorithms make decisions, particularly in high-stakes areas such as finance, healthcare, and law enforcement.
Stricter guidelines for high-risk applications: high-risk AI applications like autonomous vehicles and facial recognition technologies may face restrictions to ensure safety standards are met before deployment.
Enhanced consumer protection measures: as incidents of AI misuse increase, regulatory frameworks may include specific provisions aimed at protecting consumers from deceptive practices and ensuring that individuals are informed any time they interact with AI.
Deeper collaboration between government and industry: the two sides will create best practices for the ethical use of AI to promote innovation while safeguarding public interests.
Adaptability to rapid technological changes: because AI develops at a breakneck pace, regulations will need to adapt quickly to keep up.
Global regulatory alignment: as countries around the world develop their own AI regulations, there may be efforts in the U.S., Europe, and Asia to align with international standards to facilitate cross-border cooperation while ensuring there is a cohesive approach to AI governance.
AI & American Values
American values must guide the development and use of AI. AI systems must be designed and used in ways that respect human dignity and embody the values of fairness, transparency, and accountability. The US government has committed to ensuring that AI is developed and used in ways that support American workers and the economy. AI must enhance economic competitiveness, and a collaborative approach involving government, industry, and academia must guide it.
In the US, the National Institute of Standards and Technology (NIST) and the Department of Defense play a crucial role in developing standards and benchmarks for AI systems. These standards ensure the reliability, safety, and trustworthiness of AI technologies, say Kashefi, Kashefi, and Mirsaraei. “The US Department of Defense has established a set of ethical principles for the use of AI in defence. These principles emphasize responsible, equitable, traceable, reliable, and governable use of AI in military applications,” claim Kashefi, Kashefi, and Mirsaraei.
In addition, several federal agencies, such as the Food and Drug Administration and the Federal Aviation Administration, have created their own AI guidelines to regulate their industries, an approach that allows for regulations tailored to the unique challenges and requirements of their sectors, say Kashefi, Kashefi, and Mirsaraei.
The development of AI systems must focus on performance and efficiency, including speed, accuracy, and reliability. Industry leaders are working to make AI systems faster, more accurate, and more reliable than past iterations, and the US government supports those efforts through initiatives focused on AI research and development. Performance and efficiency gains must be guided by a commitment to supporting the needs of American workers and the economy.
Global HarmonizAItion
“Harmonizing AI regulations across the globe offers significant advantages that can help in managing the rapid development and deployment of AI technologies more effectively,” argue Kashefi, Kashefi, and Mirsaraei. Global consistency in regulation is a fantastic goal to have, but is it realistic? “A harmonized approach to AI regulation can lead to the establishment of universal ethical standards,” claim Kashefi, Kashefi, and Mirsaraei.
The great German philosopher Friedrich Nietzsche once said, “Perhaps no one has yet been truthful enough about what ‘truthfulness’ is.” It’s a statement that should be kept in mind when dealing with our cultures, values, and customs. Doubly so when you’re dealing with authoritarian nations who lack independent judiciary systems and strong IP protection. AI is just too potentially powerful a technology to put blind trust in a competitor who would have few qualms about using the power of AI in an unethical way against you.
Promises Made, Promises Kept?
Obviously, it is not ideal to have different rules and regulations for every state a company does business in or creates products in. Having one overriding piece of federal legislation would be the preference for most businesses, but, unfortunately, this isn’t the landscape we’re currently in. The compliance nightmare this creates for businesses operating across state lines is not insignificant.
AI can revolutionize society. However, a focus on societal implications, including fairness, transparency, and accountability, must guide it. The US government and industry leaders alike are working to support the development of AI in ways that benefit society, including through initiatives focused on AI and education. The needs of the American people must be supported above all else.
The U.S. is not failing to regulate AI; it is pursuing a distinct, decentralized, and complex model of governance. “In the USA, AI regulation is characterized by its decentralized and sector-specific nature,” say Kashefi, Kashefi, and Mirsaraei.
“The rapid advancement of AI is not solely a technological phenomenon but also a socio-ethical challenge. Addressing the potential risks and ethical concerns associated with AI is essential to harness its benefits while safeguarding individual rights, societal values, and ethical principles. This calls for a proactive, multifaceted approach involving stakeholders from various sectors, including technology, policy, ethics, and civil society,” state Kashefi, Kashefi, and Mirsaraei.
Can this multi-layered, somewhat chaotic approach effectively protect citizens and democratic values while also allowing America’s innovation engine to thrive? The world is watching the American experiment in real-time, and its outcome will shape the global technological order for decades to come.
An American RenAIssance
Over the long-term, the U.S. is likely to continue its hybrid approach, leaning heavily on sector-specific regulators and a principles-based executive strategy, positioning itself as a more agile, innovation-friendly counterweight to the EU’s comprehensive compliance model.
Winning the AI race will usher in a new golden age of human flourishing, economic competitiveness, and national security for the American people. AI will enable Americans to discover new materials, synthesize new chemicals, manufacture new drugs, and develop new methods to harness energy—an industrial revolution. It will enable radically new forms of education, media, and communication—an information revolution. And it will enable altogether new intellectual achievements: unraveling ancient scrolls once thought unreadable, making breakthroughs in scientific and mathematical theory, and creating new kinds of digital and physical art—a renaissance.
The federal government is in a tricky place right now; it doesn’t want to over-regulate an industry that holds so much promise, yet it must ensure the marketplace doesn’t get ahead of itself. Capitalistic investment in the U.S. seems to run on promises of exponential future returns. AI technology is so inherently complex and difficult to understand that it requires serious and sober regulation, if only to tamp down unwarranted enthusiasm and check the deceptive promises of overzealous tech executives.
The White House is right when it calls AI an industrial and information revolution as well as a renaissance. However, those revolutions were measured in centuries, not months, years, or even decades. There’s plenty of time for AI to flourish. A technology so powerful and so potentially risky necessitates strong legislation, if only to ensure AGI doesn’t go off the rails and figure out a way to turn us all into paperclips.