As the U.S. government prepares to implement limits on state artificial intelligence laws through President Donald Trump's recent executive order, White House Office of Science and Technology Policy Director Michael Kratsios indicated to U.S. lawmakers that he could see a scenario where sector-specific AI regulation is viable and necessary.
Kratsios did not provide many details when pressed on specific elements of the new order, including the definition of "onerous" state AI provisions, during a 14 Jan. appearance before the U.S. House Committee on Science, Space and Technology's Subcommittee on Research and Technology. Instead, he deferred to his Trump administration colleagues or prior guidance for answers while encouraging Congress to work with the administration on AI, though he omitted details on the precise role lawmakers should play in crafting the federal legislation mandated by the executive order.
“We want to create a regulatory environment that provides a level of clarity and a level of understanding for all of our innovators, and the most important part of that is promulgating and working towards a use case sector-specific approach to AI regulation,” he said.
“Creating a one-size-fits-all regulation around AI is not the way that we can best deal with all these new AI technologies,” Kratsios continued. “Folks that are developing AI-powered medical diagnostics should continue to be regulated by the FDA, for example. Anyone who’s developing a drone should continue to be regulated by the FAA.”
There is a strong role for the U.S. National Institute of Standards and Technology to play in setting standards for trustworthy AI, according to Kratsios, who added there are areas where lawmakers and the administration can provide clarity.
His appearance, his first before Congress since the executive order was signed, provides some insight into the administration's views on AI issues heading into 2026.
The scope of the executive order
There has been little movement on the federal side since Trump signed the order mandating agency actions to limit the impact of state AI legislation. The Department of Justice's AI Litigation Task Force, charged with suing states over their laws, was established within the allotted 30-day window. Other deliverables from the order are expected 90 days from its signing.
At the House subcommittee hearing, lawmakers on both sides tried to determine next steps following the order. U.S. Rep. Jay Obernolte, R-Calif., said he believed both states and the federal government could regulate AI, but the federal government should go first and establish its role so states know theirs.
“I think what everyone believes is that there should be a federal lane, and that there should be a state lane, and that the federal government needs to go first in defining what is under Article One of the Constitution, interstate commerce, and where those preemptive guardrails are,” he said.
Kratsios said he still opposes state-level regulation because it could hurt smaller developers unable to keep up with varying compliance requirements. He reiterated the order's statement that "lawful" state actions related to child safety protections, AI computing and data infrastructure, and state government procurement will not be touched under the order.
But Kratsios deferred when Rep. Don Beyer, D-Va., asked what authority his role had to define a state's ability to regulate, or how the burdensomeness of state laws would be determined. Most of that work, he said, would be a Department of Commerce endeavor.
"It's a process to be determined," he said, referring to defining onerous laws.
Additionally, Kratsios restated the White House's desire to create a national framework and encouraged lawmakers to reach out to groups like the AI Education Task Force.
The role of NIST, AI standards
Kratsios expressed support for the mission of NIST and its Center for AI Standards and Innovation, formerly the AI Safety Institute, noting the creation of reliable standards is "absolutely important." But he stopped short of saying the latter should be codified under a forthcoming bill from Obernolte.
The fate of NIST's role has been uncertain as Congress has debated how much funding the agency should receive after it lost staff early last year. The administration proposed cutting NIST's funding this year in the latest round of spending bills, but appropriators voted early in January to increase it.
Rep. Suhas Subramanyam, D-Va., said NIST lost 400 staffers last year and asked how Kratsios could reconcile those cuts with the importance of the agency. He also asked what role the government should play in mitigating AI risks.
Kratsios said he was unfamiliar with those cuts, but said the agency has a "very important role" in setting advanced metrics for model evaluation, which could be used across all industries.
“You want to have trust in them so that when everyday Americans are using — whether it be medical models or anything else — they are comfortable with the fact that it has been tested and evaluated,” he said.
Kratsios also said NIST should be "depoliticized," a goal the Trump administration laid out in its AI Action Plan by removing references to bias and discrimination from the agency's internationally referenced AI Risk Management Framework.
"Inserting political rhetoric into their work is something that devalues and corrupts the broader efforts that NIST is trying to do across so many important scientific domains," Kratsios said.
How AI misuse should be handled
Lawmakers also sought insight into how the administration views AI misuse, with its recent announcement of a partnership with Grok, X's AI chatbot, a frequent focus.
The chatbot has been under fire for its generation of nonconsensual explicit deepfakes, something X said it would no longer be able to do after investigations were launched by regulators around the world. The U.S. military recently announced a partnership with Grok as it looks to expand its AI usage.
Kratsios deferred questions about that contract to the U.S. General Services Administration, as well as to an April 2025 guidance document on procurement across the government.
He said the Trump administration is committed to protecting children's safety and privacy online, but "the misuse of AI tools requires accountability for harmful or inappropriate use, not necessarily blanket restrictions on the use and development of that technology." Any federal employee found to be misusing an AI product would be held accountable, he said.
Caitlin Andrews is a staff writer for the IAPP.
