Congress May Finally Take on AI in 2025. Here’s What to Expect

AI tools rapidly infiltrated people's lives in 2024, but AI lawmaking in the U.S. moved much more slowly. While dozens of AI-related bills were introduced in this Congress—either to fund AI research or to mitigate its harms—most got stuck in partisan gridlock or buried under other priorities. In California, a bill aiming to hold AI companies liable for harms easily passed the state legislature, but was vetoed by Governor Gavin Newsom.

This inaction has some AI skeptics increasingly worried. “We’re seeing a replication of what we’ve seen in privacy and social media: of not setting up guardrails from the start to protect folks and drive real innovation,” Ben Winters, the director of AI and data privacy at the Consumer Federation of America, tells TIME. 

Industry boosters, on the other hand, have successfully persuaded many policymakers that overregulation would harm the industry. So instead of trying to pass a comprehensive AI framework, as the E.U. did with its AI Act in 2023, the U.S. may find consensus on discrete areas of concern one by one.

As the calendar turns, here are the major AI issues that Congress may try to tackle in 2025. 

Banning Specific Harms

The AI-related harm Congress may turn to first is the proliferation of non-consensual deepfake porn. This year, new AI tools allowed people to sexualize and humiliate young women with the click of a button. Those images rapidly spread across the internet and, in some cases, were wielded as a tool for extortion.

Tamping down on these images seemed like a no-brainer to almost everyone: leaders in both parties, parent activists, and civil society groups all pushed for legislation. But bills got stuck at various stages of the lawmaking process. Last week, the Take It Down Act, spearheaded by Texas Republican Ted Cruz and Minnesota Democrat Amy Klobuchar, was tucked into a House funding bill after a significant media and lobbying push by those two senators. The measure would criminalize the creation of deepfake pornography and require social media platforms to take down images within 48 hours of being served notice.

But the funding bill collapsed after receiving strong pushback from some Trump allies, including Elon Musk. Still, Take It Down's inclusion in the funding bill means that it received sign-off from all of the key leaders in the House and Senate, says Sunny Gandhi, the vice president of political affairs at Encode, an AI-focused advocacy group. He adds that the Defiance Act, a similar bill that allows victims to take civil action against deepfake creators, may also be a priority next year.

Read More: Time 100 AI: Francesca Mani

Activists will seek legislative action related to other AI harms, including the vulnerability of consumer data and the dangers of companion chatbots encouraging self-harm. In February, a 14-year-old died by suicide after developing a relationship with a chatbot that encouraged him to "come home." But the difficulty of passing a bill on something as uncontroversial as combating deepfake porn portends a challenging road to passage for other measures.

Increased Funding for AI Research

At the same time, many legislators intend to prioritize spurring AI’s growth. Industry boosters are framing AI development as an arms race, in which the U.S. risks lagging behind other countries if it does not invest more in the space. On Dec. 17, the Bipartisan House AI Task Force released a 253-page report about AI, emphasizing the need to drive “responsible innovation.” “From optimizing manufacturing to developing cures for grave illnesses, AI can greatly boost productivity, enabling us to achieve our objectives more quickly and cost-effectively,” wrote the task force’s co-chairs Jay Obernolte and Ted Lieu. 

In this vein, Congress will likely seek to increase funding for AI research and infrastructure. One bill that drew interest but failed to cross the finish line was the Create AI Act, which aimed to establish a national AI research resource for academics, researchers and startups. "It's about democratizing who is part of this community and this innovation," Senator Martin Heinrich, a New Mexico Democrat and the bill's main author, told TIME in July. "I don't think we can afford to have all of this development only occur in a handful of geographies around the country."

More controversially, Congress may also try to fund the integration of AI tools into U.S. warfare and defense systems. Trump allies, including David Sacks, the Silicon Valley venture capitalist Trump has named his "White House A.I. & Crypto Czar," have expressed interest in weaponizing AI. Defense contractors recently told Reuters they expect Elon Musk's Department of Government Efficiency to seek more joint projects between contractors and AI tech firms. And in December, OpenAI announced a partnership with the defense tech company Anduril to use AI to defend against drone attacks.

This summer, Congress helped allocate $983 million toward the Defense Innovation Unit, which aims to bring new technology to the Pentagon. (This was a massive increase from past years.) The next Congress may earmark even bigger funding packages towards similar initiatives. “The barrier to new entrants at the Pentagon has always been there, but for the first time, we’ve started to see these smaller defense companies competing and winning contracts,” says Tony Samp, the head of AI policy at the law firm DLA Piper. “Now, there is a desire from Congress to be disruptive and to go faster.” 

Senator Thune in the Spotlight

One of the key figures shaping AI legislation in 2025 will be Republican Senator John Thune of South Dakota, who has taken a strong interest in the issue and will become the Senate Majority Leader in January. In 2023, Thune, in partnership with Klobuchar, introduced a bill aimed at promoting the transparency of AI systems. While Thune has decried Europe's "heavy-handed" approach to AI, he has also spoken about the need for tiered regulation to address AI's applications in high-risk areas.

“I’m hopeful there is some positive outcome about the fact that the Senate Majority Leader is one of the top few engaged Senate Republicans on tech policy in general,” Winters says. “That might lead to more action on things like kids’ privacy and data privacy.”  

Trump's Impact

Of course, the Senate will have to take some cues about AI next year from President Trump. It's unclear exactly how Trump feels about the technology, and he will have many Silicon Valley advisors with competing AI ideologies vying for his ear. (Marc Andreessen, for instance, wants AI to be developed as fast as possible, while Musk has warned of the technology's existential risks.)

While some expect Trump to approach AI solely from the perspective of deregulation, Alexandra Givens, the CEO of the Center for Democracy & Technology, points out that Trump was the first president to issue an AI executive order, in 2020, which focused on AI's impact on people and on protecting their civil rights and privacy. "We hope that he is able to continue in that frame and not have this become a partisan issue that breaks down along party lines," she says.

Read More: What Donald Trump’s Win Means For AI

States May Move Faster Than Congress

Passing anything in Congress will be a slog, as always. So state legislatures might lead the way in forging their own AI legislation. Left-leaning states, especially, may try to tackle parts of AI risk that the Republican-dominated Congress is unlikely to touch, including AI systems' racial and gender biases or their environmental impact.

Colorado, for instance, passed a law this year regulating AI's usage in high-risk scenarios like screening applications for jobs, loans and housing. "It tackled those high-risk uses while still being light touch," Givens says. "That's an incredibly appealing model." In Texas, a state lawmaker recently introduced a bill modeled on Colorado's, which the state legislature will consider next year. Meanwhile, New York will consider a bill limiting the construction of new data centers and requiring them to report their energy consumption.
