Trump Resets AI Policy to Rely on Private Sector

Trump repealed a Biden order calling for protection against bias. While companies welcome deregulation, some are concerned about the administration's six-month timeline to reshape guidelines.

David Sacks, U.S. President Donald Trump's "AI and Crypto Czar", speaks to President Trump as he signs a series of executive orders in the Oval Office of the White House on Jan. 23, 2025, in Washington, DC. Trump signed a range of executive orders pertaining to issues including crypto currency, Artificial Intelligence, and clemency for anti-abortion activists. (Anna Moneymaker/Getty Images/TNS)
President Trump is sending the country’s artificial intelligence policy back to the drawing board, scrapping Biden-era protections such as those against biased algorithms in a move that some say will let AI proliferate unchecked.

A Trump executive order from January calls for a new AI action plan “free from ideological bias” to be developed within 180 days by a group of White House science and tech officials, including David Sacks, a venture capitalist and former PayPal executive whom Trump named as a special adviser for AI and crypto.

The lifting of Biden’s guardrails and the six-month wait for a new action plan have companies in areas ranging from health care to talent recruitment wondering how to proceed, said Aaron Tantleff, a partner at the law firm Foley & Lardner LLP who specializes in tech policy.

“I have clients saying ‘I can’t wait,’” Tantleff said in an interview. Neither the tech companies developing the AI systems nor the end users of those systems are likely to halt their work until the new policy emerges, he said.

Meanwhile, “everything is off the table, there are different rules, bias is going to be taken out, barriers to innovation are gone, so potentially we are in an era of unchecked development of AI systems,” Tantleff said. “What are the safety measures, what are the guardrails, how do you proceed?”

AI companies aren’t waiting for an answer. They are racing to outcompete rivals for funds as they launch highly advanced models.

A day after he became president, Trump announced a $500 billion, U.S.-based artificial intelligence joint venture between industry titan OpenAI Global LLC, tech company Oracle Corp. and financier SoftBank Group, touting what is essentially a private development with no government input.

At the event to launch the venture, called Stargate, OpenAI CEO Sam Altman predicted that the technology would be able to cure cancers and heart disease. He also lauded Trump, though the president’s role was unclear.

“We wouldn’t be able to do this without you, Mr. President,” Altman said.

Trump said that normally such a venture would’ve gone to China. Indeed, markets were rocked by reports that China’s DeepSeek startup can deliver results similar to those of U.S. models for a fraction of the cost, spotlighting the intensifying competition among nations to be the world’s AI leader.

Repealing Biden's Rule


In repealing Biden’s executive order of October 2023, Trump blamed it for hindering innovation and imposing “onerous and unnecessary government control” on those who develop and deploy the technology.

Trump touted his own order as ensuring “America’s dominance in AI to promote human flourishing, economic competitiveness, and national security,” while being “free from ideological bias or engineered social agendas.”

Biden’s order had required advanced AI developers to share with the government the results of safety tests before unleashing their systems, to prevent risks to national security, public health or the economy. It aimed to ensure that AI development would not violate civil and labor rights or engage in discrimination or unfair labor practices, while protecting consumers and their privacy. It directed federal agencies to assess whether models posed chemical, biological, nuclear and cybersecurity risks.

The order also established the AI Safety Institute at the National Institute of Standards and Technology, and tasked it with designing voluntary standards for safe use.

The Trump administration is likely to keep the institute, though it may rename it to give it its own imprimatur, said Daniel Castro, vice president at the Information Technology and Innovation Foundation, a think tank.

“I suspect the main shift will be a move towards more concrete definitions of harm and away from some of the DEI-related focuses that the Biden administration had,” Castro said in an email, referring to diversity, equity, and inclusion programs.

Private companies will likely continue to ensure that their systems don’t perpetuate biases, but “it may be less of a focus for the federal government,” Castro said.

Trump’s decision to roll back protections that the Biden administration put in place is likely to hurt Americans, Nicole Gill, co-founder and executive director of Accountable Tech, a nonprofit that focuses on digital justice, said in a statement.

Biden’s order “laid the groundwork for basic accountability for this rapidly-evolving technology — ensuring that government agencies could harness AI’s potential without harming the millions of Americans they are entrusted to serve,” Gill said. “Among other common sense safeguards, this order protected Americans against AI fraud; protected families from AI discrimination in housing and the criminal justice system; and protected patients from unsafe AI tools in healthcare settings.”

“With this order gone, these protections disappear,” Gill said.

Even if the Trump administration does away with guardrails, companies may still have to contend with the legal consequences of decisions made using AI tools, Gerry Stegmaier, a partner at the law firm Reed Smith, said in an email.

With the federal government stepping away from tough regulations, state governments are likely to step up regulation and enforcement of rules on AI systems, Stegmaier said.

AI Across Borders


Multinational companies deploying AI tools in different parts of the world face another conundrum, Tantleff said.

The European Union, for example, has the EU AI Act that prohibits discrimination and bias. The law, which took effect in August, applies to multinational corporations that deploy AI systems if they’re used to make decisions that affect EU citizens.

“Multinational companies are saying, ‘I can’t not comply with EU laws,’” Tantleff said. Global companies complying with EU guardrails designed to prevent harm could run afoul of the Trump administration’s new policies, he said.

European companies may not be able to deploy their AI systems in the United States and vice versa. “We don’t know what’s going to happen,” he said.

It’s also unclear how the Trump administration’s AI policies would affect the AI Safety Institute’s cross-border work.

In December, the institute said it had worked with its U.K. counterpart to evaluate OpenAI’s ChatGPT o1 model for a range of cyber skills that might be used to enable malicious tasks, like hacking into a computer system.

The institute said the findings should be considered preliminary and not conclusive, but that the model was tested on its knowledge of advances in the biological sciences that could potentially be used for malicious purposes, and that it “achieves performance that is generally comparable to best-performing reference models tested across an array of question sets.”
___

©2025 CQ-Roll Call, Inc., All Rights Reserved. Distributed by Tribune Content Agency, LLC.