Rachel Lau and J.J. Tolentino work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC. Ben Lennett is the managing editor of Tech Policy Press. Isabel Epistelomogi, a policy and research intern with Freedman Consulting, also contributed to this article.

US Senator Ted Cruz (R-TX), chairman of the Senate Commerce Committee, at a hearing titled “AI’ve Got a Plan: America’s AI Action Plan” on Wednesday, September 10, 2025.

This month, AI policy and deregulation were once again a focus in Congress as Sen. Ted Cruz (R-TX), Chairman of the Senate Commerce Committee, introduced a new federal AI framework built around the SANDBOX Act (S. 2750). The bill would empower the White House Office of Science and Technology Policy (OSTP) to establish a regulatory sandbox in which AI companies could test products under two-year exemptions from existing federal rules. In a similar vein, Rep. Michael Baumgartner (R-WA) introduced “The American Artificial Intelligence Leadership and Uniformity Act” (H.R. 5388) in the House. The bill seeks to revive Sen. Cruz’s legislative efforts to preempt state AI regulation, with Baumgartner’s bill calling for a five-year moratorium on most state and local AI regulations.

At the same time, concerns about political influence over federal agencies intensified following the suspension of comedian Jimmy Kimmel by ABC/Disney. The move came after FCC Chairman Brendan Carr suggested revoking the broadcast licenses of ABC affiliates over Kimmel’s remarks, and broadcaster Nexstar Media, which is also seeking FCC approval for a $6 billion merger, preemptively dropped Kimmel’s program. Observers warned that the Trump administration’s growing use of merger reviews at the FCC and FTC to reward allies and punish critics represents an erosion of agency independence.

Beyond these high-profile headlines, federal agencies, Congress, civil society, industry, and the courts were active as well. The FTC launched new inquiries into how AI companies safeguard children and settled a decade-long case against Pornhub’s parent company. Meanwhile, the courts weighed in with landmark rulings, including Anthropic’s $1.5 billion copyright settlement, remedies in Google’s search monopoly case, and Amazon’s $2.5 billion “dark patterns” settlement.

Read on to learn more about September developments in US tech policy.

Summary

US Senate Commerce Committee Chairman Ted Cruz (R-TX) announced an AI policy framework designed to promote US leadership and innovation in AI development and deployment. As part of the framework, Sen. Cruz unveiled the Strengthening Artificial Intelligence Normalization and Diffusion By Oversight and eXperimentation (SANDBOX) Act (S. 2750), which would require the White House Office of Science and Technology Policy (OSTP) to establish a regulatory sandbox for AI developers “to test, experiment with, or temporarily offer AI products and services.” Cruz described the bill as a light-touch approach to AI regulation that would allow AI companies to apply for two-year exemptions from “obstructive” federal rules in order to compete with China’s growing AI industry. Sen. Cruz said the legislation aligns with the Trump administration’s AI Action Plan, while OSTP Director Michael Kratsios voiced support for the SANDBOX Act.

Industry leaders and experts endorsed Sen. Cruz’s legislative framework for AI policy and applauded the SANDBOX Act. TechNet CEO Linda Moore said the group is “grateful to Senator Cruz for his continued leadership and work to establish an AI policy framework that will support American innovation and strengthen our AI global leadership” and praised the Trump administration’s efforts to establish standards that remove barriers to innovation. NetChoice Director of State and Federal Affairs Amy Bos released a statement supporting the SANDBOX Act as an “innovation-first approach that will keep us ahead of global rivals like China.” A Meta spokesperson commended Sen. Cruz, stating that the “regulatory sandbox proposal offers a broad scope that could enable a wide range of business practices, research, and AI technologies—not just a select few—to benefit from the program.”

In contrast, civil society organizations expressed concerns over Sen. Cruz’s policy framework and warned that industry-friendly policies could exacerbate AI’s risks and harms. Data & Society’s Policy Director Brian J. Chen and Executive Director Janet Haven described the SANDBOX Act as a “liability shield” for the AI industry, enabling companies to “continue to discriminate, spread deepfakes, exacerbate mental health risks and surveil workers.” J.B. Branch, Public Citizen’s Big Tech accountability advocate, issued a statement calling on Congress to prioritize “legislation that delivers real accountability, transparency, and consumer protection in the age of AI” rather than providing companies with “hall passes” to avoid regulations. Sacha Haworth, Executive Director of The Tech Oversight Project, criticized the SANDBOX Act as a way “for the Trump Administration to strip away standards that hold Big Tech accountable for violating privacy, endangering kids online, and letting scammers rip off seniors and veterans.”

What We’re Reading

  • Justin Hendrix and Ben Lennett, “US Senator Ted Cruz Proposes SANDBOX Act to Waive Federal Regulations for AI Developers,” Tech Policy Press.
  • Brian J. Chen and Janet Haven, “Ted Cruz’s AI sandbox enables dangerous self-regulation, not innovation,” The Hill.
  • Senator Ted Cruz, “Will AI’s Future be American?” Real Clear World.

Summary

ABC/Disney briefly suspended comedian Jimmy Kimmel this month following remarks by Federal Communications Commission (FCC) Chairman Brendan Carr suggesting the potential revocation of broadcast licenses held by ABC’s affiliate television stations. Before the suspension, Nexstar Media, which owns 32 affiliate stations, had already preempted the airing of the program. In addition to concerns about its broadcast licenses, Nexstar’s decision may have been influenced by its pending $6 billion merger, which requires FCC approval. Specifically, the company is seeking regulatory changes to ownership rules that currently limit the number of stations a single entity can control. If approved, Nexstar’s planned acquisition of more than 60 stations from TEGNA, Inc. would result in the combined entity reaching roughly 80 percent of US television households.

This was not the first instance in which a merger review under Chairman Carr, appointed by President Trump, raised questions about political influence. For example, the FCC approved the Paramount/Skydance merger only after Paramount resolved a $16 million lawsuit involving President Trump, and Skydance agreed to terminate its Diversity, Equity, and Inclusion (DEI) programs while appointing an ombudsman at CBS to evaluate complaints of bias against conservatives. Similarly, the Federal Trade Commission (FTC) approved the Omnicom–Interpublic advertising agency merger in part on the condition that the combined firm not maintain any policy that “declines to deal with Advertisers based on political or ideological viewpoints.” Although the FTC, like the FCC, is structured as an independent agency with five commissioners, including a chair, who are nominated by the President and confirmed by the Senate, the Trump administration has increasingly exercised direct political influence over its operations. This was exemplified by the removal of Democratic Commissioners Rebecca Kelly Slaughter and Alvaro M. Bedoya earlier this year.

Although the FTC has carried out elements of its consumer protection mandate, as evidenced by its enforcement actions this month on children’s privacy and deceptive practices, there is concern that competition policy could increasingly be used by the Trump Administration to influence the news and information landscape. As former FTC Commissioner Bedoya argued recently, “the president is using his power to block mergers not to protect the public interest, or protect competition, but to punish his enemies and reward his friends.” In contrast, many of President Trump’s supporters welcomed Carr’s comments, though some Senate Republicans expressed concern. These developments highlight the extent to which the politicization of the FTC and FCC under the Trump Administration threatens to undermine the original democratic purposes of policies such as antitrust enforcement and media ownership restrictions, which were designed to curb monopoly power and safeguard a diversity of viewpoints.

What We’re Reading

  • Cristiano Lima-Strong and Anish Wuppalapati, “100 Days of Trump: His Enforcers Are Waging War On Content Moderation. It’s Likely Just The Start,” Tech Policy Press.
  • Angela Fu, “Media consolidation is shaping who folds under political pressure — and who could be next,” Poynter.
  • John Hendel, “‘It’s the threats that are the point’: How Brendan Carr exerts his FCC power,” Politico.

Tech TidBits & Bytes aims to provide short updates on tech policy happenings across the executive branch and agencies, Congress, civil society, industry, and the courts.

In the executive branch and agencies:

  • The Trump administration unveiled and signed an executive order advancing the framework of a long-anticipated deal to separate TikTok’s US operations from ByteDance, a China-based company. The new deal, led by Oracle and the private equity firm Silver Lake, would require the US version of TikTok to obtain a licensed copy of the ByteDance algorithm, with US data retained and monitored by Oracle. ByteDance is expected to retain no more than a 20% stake and will be excluded from TikTok’s security committee. Trump announced plans to issue an executive order granting a 120-day delay to finalize terms. The Chinese government has not formally approved the deal, though officials acknowledged a “basic framework consensus.” Rep. John Moolenaar (R-MI), chair of the House Select Committee on the Chinese Communist Party, requested an urgent White House briefing following President Trump’s executive order to review deal details and determine compliance with the 2024 law.
  • President Trump called on Microsoft to fire Lisa Monaco, a former Biden Administration Justice Department official who now serves as Microsoft’s President of Global Affairs. Trump wrote on social media that Monaco “is a menace to U.S. National Security, especially given the major contracts that Microsoft has with the United States Government.”
  • The Federal Trade Commission (FTC) launched an inquiry into major AI companies, including Alphabet, Character.ai, OpenAI, Snap, xAI, and Meta, requesting information related to how they “measure, test, and monitor potentially negative impacts of this technology on children and teens” and ensure their AI tools are in compliance with the Children’s Online Privacy Protection Act.
  • The FTC took action against robotic toy maker Apitor for violating the Children’s Online Privacy Protection Act (COPPA) by allowing a third-party Chinese software company to collect geolocation data from children without parental consent. The Apitor app, which controls robotic toys for children ages 6-14, required Android users to enable location sharing, which transmitted children’s location data to servers in China without notifying parents. As part of a settlement, Apitor must overhaul its data practices, ensure third-party compliance with COPPA, and delete any improperly collected data. A $500,000 penalty was imposed but suspended due to the company’s inability to pay.
  • The FTC settled a decade-long lawsuit against Pornhub’s parent company, Aylo, over claims that the platform distributed “child sex abuse material (CSAM) and nonconsensual material (NCM) and failed to prevent the spread of the content.” Aylo will pay a $5 million penalty and adopt new compliance, consent, and data privacy programs.
  • The FTC opened a call for public comments on its “Strategic Plan for Fiscal Years 2026-2030,” with comments accepted until October 17, 2025.
  • The FCC initiated proceedings to revoke recognition from seven Chinese government-controlled electronics testing labs under new “Bad Labs” rules aimed at protecting US national security. The action was part of a larger effort to prevent foreign adversaries from overseeing labs that certify devices for the US market. FCC Chair Brendan Carr emphasized that the move will restore trust in equipment safety and America’s supply chain independence under Trump’s economic agenda.

In Congress:

  • The House and Senate versions of the FY26 National Defense Authorization Act (NDAA) included provisions to accelerate AI adoption across the Department of Defense, particularly for logistics, mission-critical tasks, and public-private sandboxes. The House version emphasized AI’s impact on cybersecurity training, workforce development, and international cooperation. The Senate version focused on standardized risk frameworks, model governance, and cybersecurity safeguards. Outcomes remained in flux as the NDAA moved toward reconciliation.
  • Sen. Mark Kelly (D-AZ) published an AI roadmap suggesting that major AI companies fund a federal trust, the AI Horizon Fund, to invest in American workers through upskilling programs, modernized credentialing, and protections for displaced workers. The trust would also require AI firms to help finance the water, power, and grid systems they rely on. The proposal further emphasized growing public trust through stronger AI safety standards, oversight, and transparency.
  • Sen. Chuck Grassley (R-IA) pressed Meta CEO Mark Zuckerberg over claims that the company tried to silence whistleblower Sarah Wynn-Williams, who testified before Congress about Meta’s alleged ties to China, Foreign Corrupt Practices Act (FCPA) violations, and practices targeting teens. Grassley cited a $50,000-per-violation non-disparagement clause and raised concerns that her severance deal may have violated SEC rules. Meanwhile, Sen. Josh Hawley (R-MO) called for Zuckerberg to testify over additional national security concerns.
  • Nine Democratic senators sent a letter to Immigration and Customs Enforcement (ICE) Acting Director Todd Lyons inquiring about ICE’s reported use of a smartphone-based biometric surveillance app, Mobile Fortify. The app scans an individual’s face or fingerprint and connects to vast federal databases. Lawmakers warned that the tool enables real-time “Super Queries” into data including criminal records, immigration status, and personal details from commercial data brokers like LexisNexis. The senators cited concerns about racial bias, wrongful detentions, surveillance of protestors, and chilling effects on free speech. They demanded that ICE disclose usage policies, testing data, database use, and whether US citizens are being targeted.
  • The Cybersecurity Information Sharing Act (CISA) of 2015 was set to expire on September 30 amid a looming government shutdown and congressional deadlock. If CISA lapses, there are concerns over weakened legal protections for private-sector companies sharing cyber threat intelligence with federal agencies. The law has shielded firms from liability when transmitting sensitive data and has underpinned US cyber defense for a decade. Efforts to extend the law failed in the Senate, including a continuing resolution and a scaled-back alternative from Sen. Rand Paul (R-KY). Post-shutdown negotiations are increasingly likely to find a resolution.

In civil society:

  • A new report from the Congressional Progressive Caucus Center warned that AI is accelerating surveillance, discrimination, and job loss across workplaces while eroding worker rights and union power. From biased hiring algorithms to invasive productivity monitoring, the report contextualized how AI is being used to expand employer control with little transparency or legal guardrails. The report called for federal action, including comprehensive AI labor standards, stronger enforcement, public-interest AI tools, and expanded safety nets.
  • The New Democrat Coalition released its 2025 Innovation Agenda, calling for aggressive investment in AI, quantum computing, biotech, and clean energy to counter China’s tech dominance and reignite inclusive US economic growth. The agenda proposed expanded STEM immigration, workforce reskilling, digital privacy protections, AI safety standards, new regional tech hubs, federal AI infrastructure, and a stronger innovation-government partnership built on predictability, trust, and transparency.
  • The NYU Center for Technology & Public Policy released a report urging the US to build equitable, secure, and democratically governed public AI research infrastructure. While proposals like the National AI Research Resource (NAIRR) aim to broaden access to data and models, the report warned that without strong safeguards, such efforts could widen existing inequalities and promote corporate dominance. Key recommendations to democratize innovation included embedding cybersecurity, preventing dual-use risks, rejecting exclusive public-private partnerships, and directly supporting under-resourced institutions.
  • Americans for Responsible Innovation (ARI) published a white paper on the growing national security concerns around the AI data annotation industry, which provides the human-labeled data essential to training advanced AI models. The paper warned that unchecked foreign involvement, particularly from adversaries like China, could erode US leadership in AI, national security, and model integrity. The paper called for expanded screening of foreign investments, potential export controls, and limits on adversary-controlled access, especially in critical sectors like infrastructure and defense.
  • Immigration rights activist Dominick Skinner used AI facial reconstruction and reverse image search tools to identify masked Immigration and Customs Enforcement (ICE) officers. The ICE List Project claimed to have named over 100 ICE employees, prompting backlash from the Department of Homeland Security (DHS) and lawmakers. In response, Sen. Marsha Blackburn (R-TN) proposed the Protecting Law Enforcement from Doxxing Act, which would criminalize exposing and doxxing federal officers, and sent a letter to the CEO of PimEyes asking how its technology is being used to identify ICE officers. Sen. Gary Peters (D-MI) and other Democrats also expressed concern about the dangers of masked law enforcement and AI misuse.
  • The UC Berkeley Labor Center published a report, “The Current Landscape of Tech and Work Policy in the U.S.: A Guide to Key Laws, Bills, and Concepts,” offering an overview of legislative momentum on regulating digital workplace technologies. The report covers bills on electronic monitoring, algorithmic management, data privacy, automation and job loss, and other issues.

In industry:

  • NetChoice, a trade association representing Amazon, Google, Meta, Snap, and other major tech players, launched a new super PAC. Known for its legal fights defending Section 230, NetChoice made the move amid rising bipartisan pressure to reform the provision.
  • Meta launched a new super PAC, the American Technology Excellence Project, which pledges “tens of millions” to back state-level candidates supportive of the artificial intelligence industry. This marked the second super PAC Meta has unveiled in a month, following the launch of META California, focused on AI policy at the state level. The new PAC will focus on electing pro-tech candidates and fending off what Meta calls “poorly crafted” AI bills across the US.
  • Microsoft and OpenAI signed a non-binding deal allowing OpenAI to restructure into a for-profit company. Details on how much of OpenAI Microsoft will own, or whether Microsoft will retain exclusive access to OpenAI’s latest models, were not disclosed. Attorneys general in California and Delaware must approve OpenAI’s new structure for the change to take effect.
  • OpenAI announced new safeguards to detect and respond to users showing signs of mental health distress and danger, including helping users reach suicide hotlines. Meta also announced plans to introduce new mental health safeguards on its AI chatbots for signs of suicide, self-harm, and eating disorders, suggesting that its chatbots will now connect teen users to mental health resources. These recent moves came in response to growing scrutiny over the impact that AI systems have on youth mental well-being.
  • Apple quietly revised internal AI training policies following Trump’s return to the White House, according to documents obtained by POLITICO. Updates included reclassifying DEI as a “controversial” topic, expanding sensitivity around Trump, and flagging references to Apple’s leadership as brand risks. Apple denied the policy changes, citing its Responsible AI Principles.
  • YouTube responded to subpoenas issued by the House Committee on the Judiciary regarding its content moderation policies surrounding the COVID-19 pandemic and freedom of expression, stating that “no matter the political atmosphere, YouTube will continue to enable free expression on its platform, particularly as it relates to issues subject to political debate.”
  • Amid mounting employee and investor pressure, Microsoft terminated cloud and AI services used by Israel’s military after finding that its Azure platform was deployed to store and analyze millions of intercepted Palestinian phone calls. The decision followed an investigation that revealed the scope of Israel’s secret mass surveillance program using Microsoft’s digital infrastructure. Microsoft President Brad Smith told staff the company would not support “mass surveillance of civilians” anywhere in the world.
  • The Business Software Alliance (BSA), a global trade association, urged governments to act now on quantum readiness with a six-point strategy that includes software R&D, workforce development, post-quantum cryptography, and international cooperation. The report highlighted the urgency of investing in quantum-specific software, fostering industry adoption, and upgrading digital infrastructure to meet coming threats.

In the courts:

  • A divided Supreme Court announced that it will hear arguments on the Trump Administration’s removal of FTC Commissioner Rebecca Kelly Slaughter in December and that it will allow President Trump to keep Slaughter out of the agency in the meantime, following Slaughter’s reinstatement by a federal appeals court earlier in the month. The Supreme Court stated that it may consider the broader question of whether presidents can remove independent regulators without cause. The three liberal Supreme Court justices dissented, with Justice Kagan saying that the court’s order allows the president to “extinguish the agencies’ bipartisanship and independence.”
  • Amazon agreed to pay $2.5 billion to settle a Federal Trade Commission (FTC) lawsuit alleging that the company deceptively enrolled consumers in Prime and made it difficult to cancel their subscriptions. The settlement was one of the largest in FTC history and included $1 billion in civil penalties and $1.5 billion in restitution to impacted customers. The FTC argued that Amazon used manipulative design, or “dark patterns,” to lure consumers into recurring subscriptions. Amazon did not admit wrongdoing but will notify eligible customers about compensation and streamline the cancellation process.
  • A federal judge ruled that Google must share its search data with “qualifying competitors” to address its monopoly on search results. The ruling also restricted Google’s payments that guarantee its search engine preferential placement on web browsers and smartphones. The decision was considered one of the most significant attempts to “level the tech playing field” in the last 20 years; however, the court stopped short of Google’s worst-case scenario of forcing the company to sell Chrome. Google is likely to appeal the decision.
  • Anthropic agreed to pay $1.5 billion to settle a landmark copyright lawsuit over accusations that the company used over 500,000 pirated books to train Claude, the largest settlement in US copyright history. Anthropic was accused of illegally downloading books from “shadow or pirated” libraries. As part of the deal, Anthropic must destroy the pirated files and could still face future infringement claims. The ruling clarified that training AI models with legally acquired books is considered “fair use.”

The following bills made progress through the House and Senate in September:

  • Digital Asset Market Clarity Act of 2025 (CLARITY Act) – H.R. 3633. Introduced by Rep. J. French Hill (R-AR), the bill passed the House and was sent to the Senate.
  • Generative AI Terrorism Risk Assessment Act – H.R. 1736. Introduced by Rep. August Pfluger (R-TX), the bill advanced through the House Committee on Homeland Security.
  • Romance Scam Prevention Act – S. 841. Introduced by Sen. Marsha Blackburn (R-TN), the bill advanced through the Senate Committee on Commerce, Science, and Transportation.

The following bills were introduced in the Senate in September:

  • SANDBOX Act – S. 2750. Introduced by Sen. Ted Cruz (R-TX), the bill would “require the Director of the Office of Science and Technology Policy to establish a Federal regulatory sandbox program for artificial intelligence, and for other purposes.”
  • Children Harmed by AI Technology Act (CHAT Act) – S. 2714. Introduced by Sen. Jon Husted (R-OH), the bill would “require artificial intelligence chatbots to implement age verification measures and establish certain protections for minor users, and for other purposes.”
  • RAISE Act of 2025 – S. 2740. Introduced by Sen. Jon Husted (R-OH), the bill would “amend the Elementary and Secondary Education Act of 1965 to encourage States to develop academic standards for elementary school and secondary school for artificial intelligence and other emerging technologies.”
  • Consumer Safety Technology Act – S. 2766. Introduced by Sen. John R. Curtis (R-UT), the bill would “direct the Consumer Product Safety Commission to establish a pilot program to explore the use of artificial intelligence in support of the mission of the Commission and to direct the Secretary of Commerce and the Federal Trade Commission to study and report on the use of blockchain technology and tokens, respectively.”
  • A resolution expressing the sense of the Senate that the comments made by Federal Communications Commission Chairman Brendan Carr… – S.Res. 407. Introduced by Sen. Edward J. Markey (D-MA), the resolution expressed “the sense of the Senate that the comments made by Federal Communications Commission Chairman Brendan Carr on Wednesday, September 17, 2025, threatening to penalize ABC and Disney for the political commentary of ABC late night host Jimmy Kimmel were dangerous and unconstitutional.”

The following bills were introduced in the House in September:

  • The American Artificial Intelligence Leadership and Uniformity Act – H.R. 5388. Introduced by Rep. Michael Baumgartner (R-WA), the bill would “establish a clear, national framework for Artificial Intelligence (AI) development by preempting conflicting state-level regulations and codifying President Trump’s executive order on Artificial Intelligence.”
  • Fair Artificial Intelligence Realization Act (FAIR Act) – H.R. 5315. Introduced by Rep. Harriet Hageman (R-WY), the bill would “prohibit the Federal procurement of large language models not developed in accordance with unbiased AI principles.”
  • AI Sovereignty Act – H.R. 5288. Introduced by Rep. Eugene Vindman (D-VA), the bill would “direct the Secretary of Commerce to submit reports on strategies regarding the development of, and research relating to, critical artificial intelligence technologies, and for other purposes.”
  • Protect Elections from Deceptive AI Act – H.R. 5272. Introduced by Rep. Julie Johnson (D-TX), the bill would “prohibit the distribution of materially deceptive AI-generated audio or visual media relating to candidates for Federal office, and for other purposes.”
  • Literacy in Future Technologies (LIFT) Artificial Intelligence Act – H.R. 5584. Introduced by Rep. Thomas H. Kean (R-NJ), the bill would “improve educational efforts related to artificial intelligence literacy at the K through 12 level, and for other purposes.”
  • Growing University AI Research for Defense Act (GUARD Act) – H.R. 5466. Introduced by Rep. Ronny Jackson (R-TX), the bill would authorize the Secretary of Defense “to establish an AI Institute at a senior military college (SMC) to advance critical defense technologies, workforce development, and innovative applications for artificial intelligence to strengthen America’s national security and defense capabilities.”
  • AI Warnings And Resources for Education (AWARE) Act – H.R. 5360. Introduced by Rep. Erin Houchin (R-IN), the bill would “direct the Federal Trade Commission to develop and make available to the public educational resources for parents, educators, and minors with respect to the safe and responsible use of AI chatbots by minors, and for other purposes.”
  • SHIELD Act of 2025 – H.R. 5215. Introduced by Rep. Haley M. Stevens (D-MI), the bill would “direct the Secretary of Defense to establish a pilot program to develop a training program that teaches members of the Armed Forces to interact with digital information in a safe and responsible manner, and for other purposes.”
  • Algorithmic Accountability Act of 2025 – H.R. 5511. Introduced by Rep. Yvette Clarke (D-NY), the bill would “direct the Federal Trade Commission to require impact assessments of certain algorithms, and for other purposes.”
  • Expressing the sense of the House of Representatives… – H.Res. 694. Introduced by Rep. Greg Landsman (D-OH), the resolution expressed “the sense of the House of Representatives that the Centers for Medicare & Medicaid Services should halt the pilot program and should not jeopardize seniors’ access to critical health care by utilizing artificial intelligence to determine Medicare coverage.”
  • Unleashing Low-Cost Rural AI Act – H.R. 5227. Introduced by Rep. Jim Costa (D-CA), the bill would “conduct a study on the impact of artificial intelligence and data center site growth on energy supply resources in the United States, and for other purposes.”
  • Condemning attempts to use Federal regulatory power or litigation to suppress lawful speech… – H.Res. 748. Introduced by Rep. Yassamin Ansari (D-AZ), the resolution condemned “attempts to use Federal regulatory power or litigation to suppress lawful speech, particularly speech critical of a political party or the President of the United States, and warning against the rise of authoritarianism.”
  • Liquid Cooling for AI Act of 2025 – H.R. 5332. Introduced by Rep. Jay Obernolte (R-CA), the bill would “direct the Comptroller General of the United States to conduct a technology assessment focused on liquid-cooling systems for artificial-intelligence compute clusters and high-performance computing facilities, require the development of Federal Government-wide best-practice guidance for Federal agencies, and for other purposes.”
  • No Social Media at School Act – H.R. 5173. Introduced by Rep. Angie Craig (D-MN), the bill would “require social media companies to use geofencing to block access to their social media platforms on K-12 education campuses, and for other purposes.”
  • HONOR Act – H.R. 5090. Introduced by Rep. Nancy Mace (R-SC), the bill would “amend the Uniform Code of Military Justice to expand prohibitions against the wrongful broadcast, distribution, or publication of intimate visual images, including digital forgeries, and for other purposes.”

We welcome feedback on how this roundup could be most useful in your work – please contact [email protected] with your thoughts.



