Rachel Lau and Shirley Frame work with leading public interest foundations and nonprofits on technology policy issues at Freedman Consulting, LLC. Ben Lennett is the managing editor of Tech Policy Press.

Illustration adapted from a vintage Spanish-language microbiology book. Calsidyrose/CC BY 2.0
January’s US tech policy was marked by significant activity in Congress. As part of negotiations to avert a government shutdown, lawmakers advanced a bipartisan FY 2026 appropriations package that largely rejected the Trump administration’s budget proposal, which would have imposed significant cuts to federal science and technology funding. Although a partial government shutdown was still triggered, the House passed a package of bills that would stabilize or increase funding for core institutions such as the National Institutes of Health (NIH), the National Institute of Standards and Technology (NIST), and the National Science Foundation (NSF), with explicit support for artificial intelligence research, standards development, and shared research infrastructure.
The Senate also fast-tracked and unanimously passed the DEFIANCE Act, legislation aimed at strengthening protections and accountability around AI-enabled sexual exploitation; the bill is now under consideration in the House. The legislation was passed in response to intensifying concern over AI-enabled harms, catalyzed by the fallout from xAI’s Grok chatbot. The mass generation of non-consensual intimate imagery and child sexual abuse material through Grok prompted investigations and enforcement actions overseas and renewed pressure on US lawmakers to act.
Read on to learn more about January developments in US tech policy.
Summary
Congress advanced a series of FY 2026 appropriations bills that largely rejected the deep cuts to federal science and technology funding proposed in President Trump’s budget request last year – cuts that would have been the largest since World War II. The House passed and sent to the Senate a bipartisan funding package including appropriations bills for Labor, Health and Human Services, Education and Related Agencies; Defense; and Transportation, Housing and Urban Development and Related Agencies. The package stalled in the Senate in late January over disagreements on the Department of Homeland Security (DHS) appropriations bill, but passed after a bipartisan agreement to negotiate the Homeland Security appropriations separately, with a two-week deadline. A partial government shutdown began on January 31 as funding ran out for several federal agencies; the shutdown will likely be short-lived, as House Speaker Mike Johnson signaled that the House will review and approve the Senate-passed package in early February.
If the package is approved by the Senate as written, it would stabilize and, in several cases, increase funding for federal technology and research agencies. The NIH would receive $48.7 billion, a $415 million increase from FY 2025 and a bipartisan repudiation of the Trump administration’s request to slash the agency’s budget by 40 percent. NIST would receive $1.85 billion, an increase of roughly $392 million from FY 2025 that would exceed the President’s request by nearly half a billion dollars. The NIST funding would include $1.2 billion for the Institute’s Scientific and Technical Research and Services (STRS) account and a targeted investment in AI development. The appropriations package would also mandate a minimum of $55 million for NIST’s existing AI measurement science programs and allow up to $10 million for the US Center for AI Standards and Innovation to further AI testing and standards development. The package also would allocate $8.75 billion for the NSF, including $30 million for the National Artificial Intelligence Research Resource (NAIRR) pilot, and would dedicate funding to a number of other tech spending priorities, including monitoring of CHIPS Act implementation and renewing the Technology Modernization Fund.
What We’re Reading
- Gabby Miller, John Hendel, and John Hewitt Jones, “What the three-bill funding package means for tech,” Politico.
- Matt Bracken, “Congress earmarks $5M for TMF in fiscal 2026 funding bills,” FedScoop.
- Justin Doubleday, “Lawmakers boost funding for NIST after proposed cuts,” Federal News Network.
- Andres Picon, “Lawmakers rake in earmarks for water, energy projects,” E&E News.
- Clare Zhang, “Congress Set to Finalize Science Budgets Rejecting Trump Cuts,” AIP.
- William Broad, “Congress Is Rejecting Trump’s Steep Budget Cuts to Science,” New York Times.
Summary
In early January, xAI’s AI chatbot, Grok, became the center of a global crisis over deepfakes after users weaponized its image generation features to create non-consensual intimate imagery (NCII) and child sexual abuse material (CSAM) directly on the social media platform X. Unlike standalone “deepfake” apps, Grok allowed users to target victims publicly: by simply replying to a photo of a clothed person with prompts, the AI would generate and publish sexualized images in the same thread, visible to millions. The scale of this abuse was unprecedented, with analyses suggesting that the tool generated upwards of 6,700 sexualized images per hour at its peak, targeting everyone from high-profile celebrities to private citizens and minors.
Critics and experts argued that these widespread harms were a predictable result of releasing powerful AI tools without “safety by design” principles. X’s initial response to the crisis was dismissive, with leadership referring to reports as “legacy media lies” and Musk responding to evidence with laughing emojis. While the company eventually restricted image generation capabilities for some users and claimed to implement fixes, for many experts the incident highlighted a significant regulatory gap: because Grok is integrated into a social platform, it is currently governed by reactive content moderation laws rather than proactive AI safety regulations, allowing harms to occur at massive scale before enforcement can intervene.
Still, the incident triggered an immediate, though fragmented, international regulatory and legal response. Indonesia and Malaysia took the most drastic measures, temporarily blocking access to Grok entirely, while India’s Ministry of Electronics and IT (MeitY) demanded an immediate compliance audit and threatened to strip X of its “safe harbor” legal immunity if it failed to act. The European Union opened formal proceedings against X for violations of the Digital Services Act (DSA), and the United Kingdom’s Ofcom launched an investigation. In the US, lawmakers and officials at both the federal and state level condemned the spread of NCII and CSAM through Grok, with some state attorneys general signaling future investigations, though there was no official response by any US regulatory agency. In response, the Senate fast-tracked the DEFIANCE Act, which was introduced last year, and sent the bill to the House for consideration. Additionally, a class action lawsuit was filed against xAI, alleging the company negligently released a product that humiliates and exploits women for commercial profit, with more suits likely to follow.
What We’re Reading
- Justin Hendrix, “Class Action Suit Filed Against xAI Over Grok ‘Undressing’ Controversy,” Tech Policy Press.
- Kaylee Williams, “Grok Supercharges the Nonconsensual Pornography Epidemic,” Tech Policy Press.
- Amber Sinha, “India Cautiously Locks Horns with X Over Grok ‘Undressing’ Controversy,” Tech Policy Press.
- Ramsha Jahangir, “Regulators Are Going After Grok and X — Just Not Together,” Tech Policy Press.
- Justin Hendrix, “The Policy Implications of Grok’s ‘Mass Digital Undressing Spree’,” Tech Policy Press.
- Justin Hendrix and Ramsha Jahangir, “Tracking Regulator Responses to the Grok ‘Undressing’ Controversy,” Tech Policy Press.
- Owen Bennett, “Why Europe Could Block X Over Grok Scandal But Probably Won’t,” Tech Policy Press.
- Eryk Salvaggio, “Why Musk is Culpable in Grok’s Undressing Disaster,” Tech Policy Press.
- Bruna Santos and shirin anlen, “The Grok Disaster Isn’t An Anomaly. It Follows Warnings That Were Ignored.,” Tech Policy Press.
Tech TidBits & Bytes aims to provide short updates on tech policy happenings across the executive branch and agencies, Congress, civil society, industry, and the courts.
In the executive branch and agencies:
- ProPublica reported that the Department of Transportation (DOT) discussed plans to use Google’s Gemini to help draft federal regulations, aiming to significantly speed up the rulemaking process. The department presented the plan at a meeting in December 2025, sharing a sample document drafted by Gemini: a “Notice of Proposed Rulemaking” that resembled an actual filing. ProPublica reported that critics inside DOT argued LLMs could be susceptible to errors and should not be used to interpret and draft proposed rules. Ben Winters, AI and privacy director at the Consumer Federation of America, warned that the plan was especially concerning in light of recent mass layoffs of subject-matter experts. However, DOT General Counsel Gregory Zerzan defended the strategy, stating that “We don’t need the perfect rule on XYZ. We don’t even need a very good rule on XYZ…We want good enough.”
- The Department of Homeland Security (DHS), the Department of Justice, the Department of State, and the Department of Veterans Affairs published updated AI use inventories cataloging their respective AI use in 2025. Justin Hendrix at Tech Policy Press reported that the inventory showed a 40 percent increase in AI deployment at DHS in the second half of 2025, with at least 23 of the new applications used for facial recognition or biometric identification.
- US Immigration and Customs Enforcement (ICE) posted a request for information from companies about how commercial big data and ad tech tools could support immigration enforcement. ICE said that the posting was made for information-gathering purposes, to assess the marketplace of commercially available solutions as the agency manages growing volumes of operational data. ICE has previously purchased several consumer location data products to aid in its investigations, including licenses from Venntel, a data broker that has sold sensitive consumer data without consent.
- FBI Director Kash Patel launched an investigation into claims that Minnesota residents used the encrypted messaging app Signal to share the whereabouts of Immigration and Customs Enforcement (ICE) agents. Online activist efforts to track ICE agents’ movements and identities escalated following ICE protests in Minneapolis. Protesters launched websites to track ICE operations, accessed law enforcement cameras, and published the personal information of thousands of ICE agents online. Following the Minneapolis protests, Meta blocked users on Facebook, Instagram, and Threads from sharing a database alleged to contain personal details about ICE agents, citing privacy and safety concerns for federal agents.
- The US Defense Secretary announced that the Pentagon will deploy xAI’s chatbot Grok across both classified and unclassified military systems as part of a broader “AI acceleration” strategy to modernize defense technology and streamline innovation.
- The US Department of Justice revealed that members of the Department of Government Efficiency (DOGE) improperly accessed Social Security Administration (SSA) data that had been restricted under a court order and shared sensitive information via unauthorized third-party servers.
- The Office of Management and Budget released a memo rescinding previously required “burdensome software accounting processes” to allow agencies to maintain software and hardware security policies as they see fit for their risk level and mission goals. The retraction reflects a broader move toward risk-based cybersecurity governance, giving agencies greater flexibility to prioritize protections for high-impact systems.
- Trump administration officials disclosed that drafts of the administration’s National Cybersecurity Strategy include expanding the role of private cybersecurity firms in cyberwarfare to assist in offensive cyber operations against criminal and state-sponsored hackers. The proposal would reportedly broaden private sector involvement beyond defensive contracting to executing offensive online campaigns.
- The Department of Justice (DOJ) announced the creation of an AI task force to challenge “excessive” state AI rules that hinder innovation. The task force, created in response to the Trump administration’s December 2025 executive order seeking to limit state AI regulations, is slated to include representatives from the Offices of the Deputy and Associate Attorney General, the Justice Department’s Civil Division, and the Solicitor General’s office.
In Congress:
- Sens. Mark Warner (D-VA) and Tim Kaine (D-VA) sent a letter to the Inspector General of the Department of Homeland Security (DHS) citing concerns that DHS’s collection of sensitive personal data could be used to violate civil liberty protections and Fourth Amendment rights. The open letter listed DHS law violations and called for an internal audit of DHS’s data collection processes.
In civil society:
- The Consumer Federation of America, the Electronic Privacy Information Center, and Fairplay published model legislation to address harms caused by chatbots. The People-First Chatbot Bill aims to establish strong rules and regulations centered on company liability and data protection requirements, especially for minors.
- Data & Society published a report on federal AI policy development, arguing that the deregulation of the industry and the rapid ramp-up of use across the federal government will “prove disastrous to workers, communities, and the environment.”
- The ACLU published a report detailing the recent growth in AI legislation and explored the utility of using AI and computational approaches to analyze AI legislation at both the state and federal level. The report argued that inter- and intra-bill tracking and analysis using AI would help support future policymaking and yield stronger long-term analysis outcomes. It called for greater standardization and uniformity of legislative documents across jurisdictions to increase efficiency.
- The Leadership Conference on Civil and Human Rights released an open letter urging leadership at US tech companies to prioritize user experiences, safety, and civil rights in the development of their products, especially with respect to AI safeguards and combating mis- and disinformation.
- The Vanderbilt Policy Accelerator released an AI neutrality regulatory framework that called for foundation model providers to “adhere to neutrality rules among their customers and potential customers.” The framework aims to increase fairness and prevent unreasonable pricing, speed, or quality discrimination among foundation models and their customers.
- OpenAI and Common Sense Media announced a partnership on a joint ballot measure proposal in California aimed at improving protections for children interacting with AI chatbots and other online systems. The proposal would require that companies identify child and adult users via OS-level “age bracket signals” for apps, institute safeguards for minors, ban child-targeted advertising, and limit the collection and sharing of children’s data without parental consent.
In industry:
- TikTok announced the establishment of TikTok USDS Joint Venture LLC, a new US-based entity assuming ownership of TikTok in the United States. TikTok’s new ownership structure complies with President Trump’s executive order approving the sale of TikTok’s US operations to an American investor group, in response to an April 2024 federal law requiring divestiture of TikTok’s US operations from Chinese ownership. Under the agreement, the Chinese company ByteDance retains just under 20 percent of the US entity, while 45 percent of the company is owned by Oracle, Silver Lake, and MGX. Other investors, including non-Chinese ByteDance investors, will own the remaining 35 percent of the company. According to the announcement, TikTok USDS Joint Venture will be responsible for data security, algorithm security, content moderation, and software assurance in the US. In response to the deal, Rep. John Moolenaar (R-MI), Chair of the House Select Committee on China, released a statement saying that the committee would conduct rigorous oversight to ensure that TikTok remains independent under the new structure.
- Meta paused access to its AI-powered character features for any account with a teen birthday or identified as likely belonging to a teen through the company’s age prediction technology. The company announced plans to develop a version with stricter safety guardrails and parental controls; the new AI characters will have built-in parental controls and will aim to give age-appropriate responses.
- OpenAI launched new age prediction tools on ChatGPT to better determine whether an account is likely owned by a minor, applying sensitive content protections to those determined to be under 18.
- Meta blocked Facebook, Instagram, and Threads users from sharing a database containing personal information of Immigration and Customs Enforcement (ICE) officers. Meta cited privacy policies that prohibit the sharing or soliciting of personally identifiable information, as online activist efforts to track ICE operations increased following mass civil unrest in Minnesota over expanding ICE operations.
- A cryptocurrency super PAC group expanded its war chest to over $190 million, which is intended to push crypto-friendly legislation ahead of the midterm elections in the fall. The group includes the super PACs Fairshake, Protect Progress, and Defend American Jobs, as well as the companies Coinbase, Andreessen Horowitz, and Ripple.
- Amazon cut 16,000 corporate jobs in another round of layoffs, following a first round of 14,000 job cuts in October 2025. CEO Andy Jassy stated that the cuts were made in anticipation of generative AI filling more roles in the corporate workforce.
- The Information Technology Industry Council (ITI) released a memo calling for a uniform national AI regulatory framework, implementation of the Trump administration’s AI Action Plan and Genesis Mission, expanded AI procurement and workforce training across federal agencies, passage of a federal privacy standard, grid modernization to support data centers, expanded spectrum access, and renewed public-private information sharing on cybersecurity.
- Google released an updated “Mayors AI Playbook” at the winter meeting of the US Conference of Mayors in Washington. The playbook includes a blueprint for using AI to analyze cyberattack risks, automate zoning processes, provide real-time language translation, and perform a variety of other tasks.
In the courts:
- The Atlantic filed a federal antitrust lawsuit against Google and its parent company, Alphabet, accusing the companies of using their dominant digital advertising infrastructure to manipulate markets and siphon revenue from publishers and advertisers.
- The Federal Trade Commission appealed a federal court ruling that rejected its antitrust lawsuit against Meta Platforms. The FTC had accused Meta of illegally maintaining monopoly power through its acquisitions of Instagram and WhatsApp, arguing that Meta’s purchases of the two companies harmed competition.
- A federal judge heard arguments from the Department of Homeland Security asking that Meta share the identities of the people managing an anonymous Instagram account that has posted footage of ICE agents in Pennsylvania, arguing that such postings risk officer safety. The American Civil Liberties Union of Pennsylvania is representing the anonymous user and has argued that the request violates the First Amendment.
- Snap, parent company of the social media app Snapchat, and TikTok reached independent, undisclosed settlements in a case brought by an anonymous teenager, represented by the Social Media Victims Law Center, who alleged the companies’ social media apps were addictive and harmful to her mental health. Meta and YouTube, also named in the case, have not reached settlements and remain scheduled for trial in Los Angeles County Superior Court.
- A US judge ruled that Elon Musk’s lawsuit challenging OpenAI’s transition from a nonprofit to a for-profit structure can proceed to trial. Musk alleged that OpenAI violated its founding mission through its restructuring and is seeking unspecified monetary damages for his initial $38 million investment in the company.
- A group of job applicants filed a federal lawsuit against Eightfold AI, claiming that the company’s AI-driven hiring tools should be subject to the Fair Credit Reporting Act (FCRA). Plaintiffs seek unspecified financial damages and court orders compelling Eightfold to comply with state and federal consumer reporting laws.
- Google agreed to pay $68 million to settle claims that its voice assistant illegally recorded users’ private conversations and shared those communications with third parties. The class-action case accused Google of “unlawful and intentional interception and recording of individuals’ confidential communications without their consent.”
Legislation Updates
The following bills made progress in the Senate and House in January:
- DEFIANCE Act — S. 1837. Introduced by Sen. Dick Durbin (D-IL), the bill passed the Senate by unanimous consent.
- Children and Teens’ Online Privacy Protection Act — S. 836. Introduced by Sen. Edward Markey (D-MA). The bill was reported out of the Senate Committee on Commerce, Science, and Transportation.
- AI-WISE Act — H.R. 5784. Introduced by Rep. Hillary Scholten (D-MI). The bill passed the House and was referred to the Senate Committee on Small Business and Entrepreneurship.
- Combating Online Predators Act — H.R. 6719. Introduced by Rep. Laurel Lee (R-FL). The bill passed the House and was referred to the Senate Committee on the Judiciary.
The following bills were introduced in January:
- Eliminating Bias in Algorithmic Systems Act — S. 3680 / H.R. 7110. Introduced by Sen. Edward Markey (D-MA) in the Senate and Rep. Summer Lee (D-PA) in the House, the bill would “require agencies that use, fund, or oversee algorithms to have an office of civil rights focused on bias, discrimination, and other harms of algorithms, and for other purposes.”
- Leveraging Artificial Intelligence to Streamline the Code of Federal Regulations Act of 2026 — H.R. 7226. Introduced by Rep. Blake Moore (R-UT), the bill would “streamline the Code of Federal Regulations (CFR) by using an artificial intelligence (AI) tool to identify redundant and outdated rules.” The Senate companion bill (S. 1110) was previously introduced by Sen. Jon Husted (R-OH).
- Children Harmed by AI Technology (CHAT) Act — H.R. 7218. Introduced by Rep. Michael Lawler (R-NY) in the House, the bill would “require artificial intelligence chatbots to implement age verification measures and establish certain protections for minor users, and for other purposes.” The Senate companion bill (S. 2714) was previously introduced by Sen. Jon Husted (R-OH).
- AI Overwatch Act — H.R. 6875. Introduced by Rep. Brian Mast (R-FL), the bill would “require the Under Secretary of Commerce for Industry and Security to require a license for the export, reexport, or in-country transfer of certain integrated circuits, and for other purposes.”
- TRAIN Act — H.R. 7209. Introduced by Rep. Madeleine Dean (D-PA), the bill would “create an administrative subpoena process to assist copyright owners in determining which of their copyrighted works have been used in the training of artificial intelligence models.”
- Data Center Transparency Act — H.R. 6984. Introduced by Rep. Robert Menendez (D-NJ), the bill would “require reports on the effects of data centers on air quality and water quality, and on electricity consumption by data centers.”
- Expanding AI Voices Act — H.R. 7158. Introduced by Rep. Valerie Foushee (D-NC), the bill would “codify and expand the National Science Foundation (NSF)’s ExpandAI program.”
- AI in Health Care Efficiency and Study Act — H.R. 7064. Introduced by Resident Commissioner Pablo Hernandez (D-PR-At Large), the bill would “require the Secretary of Health and Human Services to conduct a study on strategies for the application of artificial intelligence technologies that can be used in the health care industry to improve administrative and clerical work and preserve the privacy and security of patient data, and for other purposes.”
- Realigning Mobile Phone Biometrics for American Privacy Protection Act — H.R. 7124. Introduced by Rep. Bennie Thompson (D-MS), the bill would “prohibit the use of facial recognition mobile phone applications outside ports of entry, and for other purposes.”
- Make Elections Great Again Act — H.R. 7300. Introduced by Rep. Bryan Steil (R-WI), the bill would “promote the integrity and improve the administration of elections for Federal office, and for other purposes.”
- To require the Secretary of Commerce to conduct public awareness… — H.R. 7151. Introduced by Rep. Nanette Barragan (D-CA), the bill would “require the Secretary of Commerce to conduct a public awareness and education campaign to provide information regarding the benefits of, risks relating to, and the prevalence of artificial intelligence in the daily lives of individuals in the United States, and for other purposes.”
- To require the Secretary of State to conduct assessments… — H.R. 7058. Introduced by Rep. Michael Baumgartner (R-WA), the bill would “require the Secretary of State to conduct assessments of risks posed to the United States by foreign adversaries who utilize generative artificial intelligence for malicious activities, and other purposes.”
- To facilitate the export of United States artificial intelligence… — H.R. 6996. Introduced by Rep. Randy Fine (R-FL), the bill would “facilitate the export of United States artificial intelligence systems, computing hardware, and standards globally.”
- To study the impacts of artificial intelligence technology… — H.R. 7294. Introduced by Rep. Robert Menendez (D-NJ), the bill would “study the impacts of artificial intelligence technology with respect to the security of telecommunications networks, and for other purposes.”
We welcome feedback on how this roundup could be most useful in your work – please contact [email protected] with your thoughts.