Juan Londoño and Jennifer Huddleston

Reports indicate that the White House is considering an executive order establishing a new working group to regulate artificial intelligence (AI) that would “examine potential oversight procedures.” This group would be tasked with devising a system for the government to “approve” the most advanced models before they could launch. The plausibility of the risks associated with the most advanced AI models is unclear, but government control or burdensome regulation of the technology would bring significant risks to innovation and speech. Such an approach would open the door to a level of government control that could lead to regulatory capture, restrictions on expression, and the overall weaponization of government power to punish politically disfavored companies.
Requiring pre-launch approval was criticized as heavy-handed and anticompetitive when it appeared in the Biden administration’s executive order on AI. If the Trump administration follows through with such a requirement, it will raise similar concerns and represent a dramatic departure from the light-touch approach the administration has favored for this emerging technology.
According to additional reporting, the White House had been working on new safety-focused measures prior to the release of Anthropic’s and OpenAI’s recent models, but those efforts appear to have been fast-tracked after these models raised additional cybersecurity concerns. The latest descriptions even call the proposal an “FDA for AI” approval process. This would abandon the approach that has allowed American technology to flourish and replace it with a framework that burdens innovation with a stagnant bureaucracy, among other problems.
Concerns about cybersecurity are valid, but government pre-approval comes with significant tradeoffs. Alternative policies are better positioned to address legitimate cybersecurity risks without the chilling of speech and innovation that mandatory pre-approval would entail.
A prescriptive, top-down approach in which the White House gatekeeps the market would subject a developing industry to unprecedented control driven by the executive branch’s whims. This would not only cause tremendous damage to technological and economic innovation but, for an expressive product such as AI, likely trample on Americans’ free speech rights. Such power could easily be abused not only to favor certain companies but even to engage in jawboning or censorship by controlling what information a model is allowed to produce.
Recent events, such as the Anthropic-Pentagon feud, have shown that disputes between the government and innovators over what AI models should do are not merely hypothetical. While that case was limited to the defense application of an AI model, it was a perfect example of how the government can invoke regulation to retaliate against a company for design choices it disagrees with, particularly if companies must seek government approval before launch. If the White House is given the power to broadly manipulate the AI market, it is likely to wield that power for political purposes.
If an administration considers a model “too woke,” “biased,” or a vehicle for misinformation or disinformation, it would have the power to prevent that model from being rolled out. The establishment of a pre-market approval regime is likely to chill substantial speech, as companies will avoid drawing political attention from the sitting administration to prevent any political clashes that could influence the approval process.
Installing a mandatory review process would also severely damage and slow technological innovation. As some have pointed out, the government will have an incentive to be slow rather than nimble, and an active disincentive to approve models. This could put US companies at the type of global disadvantage typically faced in Europe, where companies have long had to seek government approval first. Political incentives then push the government to require AI developers to prove affirmatively that a model is safe, rather than merely show it has no evident flaws. This is a significantly higher bar that will undoubtedly take more time to clear, delaying the rollout of new features and potentially leaving consumers with fewer or more dated products. When it comes to the underlying safety concerns around AI, there are less restrictive alternatives.
The administration may already be considering some. For example, several frontier AI companies recently agreed to voluntarily share information that allows the Center for AI Standards and Innovation (CAISI) to test and review their models for potential safety- and security-related risks and capabilities, without giving the government final approval or veto power. Such voluntary agreements for government review and safety auditing of AI models enable independent third-party scrutiny of companies’ safety and security claims. But they should not make the government the ultimate arbiter of how the technology develops, as mandatory pre-approval risks doing.
It is important to note that, to this day, frontier models are not completely unregulated or without oversight. As mentioned above, CAISI can already enter into voluntary agreements with companies willing to submit their safety tests to independent auditing. At the same time, the National Institute of Standards and Technology (NIST) has published an AI risk management framework (RMF), a guidance document that shares best practices on AI risk management for developers and deployers. By keeping the framework voluntary, NIST has brought companies to the table to create a rapidly evolving document better suited to the industry’s fast pace of change, making the RMF a valuable “soft law” governance tool. All of these tools are significantly less extreme than pre-market government approval.
Establishing a pre-release review or licensing regime for AI companies would grant the government, particularly the executive branch, significant control over AI technologies that could hinder innovation or control expression. The costs to technological and economic development would be onerous. But the impact on AI-powered speech and content creation could be even worse.