ALERT: 300,000 Chrome Users INFECTED


Over 300,000 Americans who trusted Google’s Chrome Web Store to safely enhance their browsing with AI tools unknowingly installed malicious extensions that harvested their most sensitive personal data. The breach exposes a massive security failure that Big Tech overlooked despite clear warning signs.

Story Snapshot

  • 30 fake AI extensions impersonating ChatGPT, Gemini, Claude, and Grok infiltrated the Chrome Web Store, accumulating 260,000–300,000 installs before discovery
  • Attackers used “AiFrame” technique with remote iframes to bypass Google’s security reviews and steal emails, passwords, browsing data, and API keys
  • Google featured several malicious extensions with “Featured” badges, amplifying user trust and downloads while malware ran undetected
  • All reported extensions now removed after LayerX Security exposed the campaign, but experts warn new variants could emerge using the same evasion tactics

Google’s Review Process Failed to Catch Coordinated Attack

Security researchers at LayerX uncovered a coordinated malware campaign involving 30 Chrome extensions disguised as popular AI assistants including ChatGPT, Gemini, Claude, Grok, and generic AI Sidebar tools.

The extensions, installed more than 260,000 times in total, exploited a technique called “AiFrame”: they embedded remote iframes that loaded attacker-controlled interfaces while proxying legitimate AI responses so the tools appeared functional.

This method offloaded malicious logic to remote servers, allowing attackers to bypass Google’s static code reviews that only examined local extension code. The Chrome Web Store’s review process, designed for a platform serving over 3 billion users, failed to detect the threat despite identical codebases and shared backend infrastructure across all 30 extensions.
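The pattern described above, local code that merely creates an iframe pointing at a remote server, is exactly what a purely static review of the bundled files can miss. A minimal, hypothetical heuristic for spotting that loader pattern might look like this (this is an illustrative sketch, not LayerX’s actual detection logic):

```python
import re

# Hypothetical heuristic: flag extension JavaScript that both creates an
# <iframe> element and assigns it a remote http(s) URL. The real malicious
# logic lives on the server, so the local code can look nearly empty.
CREATES_IFRAME = re.compile(r"""createElement\(['"]iframe['"]\)""")
REMOTE_SRC = re.compile(r"""\.src\s*=\s*['"]https?://""")

def looks_like_remote_iframe_loader(js_source: str) -> bool:
    """Return True if the script matches the remote-iframe loader pattern."""
    return bool(CREATES_IFRAME.search(js_source) and REMOTE_SRC.search(js_source))

suspicious = """
const f = document.createElement('iframe');
f.src = 'https://attacker.example/panel';
document.body.appendChild(f);
"""
benign = "console.log('hello');"

print(looks_like_remote_iframe_loader(suspicious))  # True
print(looks_like_remote_iframe_loader(benign))      # False
```

A heuristic this crude would produce false positives (many legitimate extensions embed remote iframes), which is part of why offloading logic to a server is such an effective review-evasion tactic.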

Sensitive Data Harvested Through Excessive Browser Permissions

The malicious extensions requested broad permissions, including reading all website content and accessing Gmail, granting attackers comprehensive visibility into user activity. Once installed, the extensions functioned as “general-purpose access brokers,” capturing emails, passwords, voice recordings, browsing history, and API keys from hundreds of thousands of users.
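To make the permission abuse concrete, here is a small sketch that flags overly broad requests in an extension manifest. The permission names are real Chrome manifest keys, but the “risky” list is our own judgment call for illustration, not Google’s policy, and the sample manifest is invented:

```python
# Which requested permissions grant sweeping access to user activity.
# This risk list is an illustrative assumption, not an official rating.
RISKY = {"<all_urls>", "tabs", "history", "cookies", "webRequest",
         "*://mail.google.com/*"}

def risky_permissions(manifest: dict) -> set:
    """Return the subset of requested permissions considered high-risk."""
    requested = set(manifest.get("permissions", []))
    requested |= set(manifest.get("host_permissions", []))
    return requested & RISKY

# Hypothetical manifest resembling the campaign's "AI Sidebar" extensions.
fake_manifest = {
    "name": "AI Sidebar",
    "permissions": ["tabs", "storage"],
    "host_permissions": ["<all_urls>"],
}
print(sorted(risky_permissions(fake_manifest)))  # ['<all_urls>', 'tabs']
```

A manifest combining `<all_urls>` host access with tab visibility is enough to read page content on every site a user visits, including webmail, which is what turned these tools into the “general-purpose access brokers” researchers describe.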

LayerX researchers identified that attackers reused TLS certificates and JavaScript bundles across extensions, enabling correlation of the coordinated campaign. The extensions targeted Gmail users heavily, exposing private correspondence and financial information. This represents a fundamental privacy invasion, transforming trusted browser tools into surveillance instruments incompatible with Americans’ reasonable expectations of digital security and constitutional protections against unreasonable searches.
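The correlation step above can be sketched simply: extensions that share backend TLS certificate fingerprints or JavaScript bundle hashes are clustered together as one campaign. All extension IDs and fingerprint values below are made up for illustration:

```python
from collections import defaultdict

# Mock observations: (extension_id, TLS cert fingerprint, JS bundle hash).
# All values are invented; the technique, not the data, is the point.
observations = [
    ("ext_aaa", "cert:9f3a", "bundle:77c1"),
    ("ext_bbb", "cert:9f3a", "bundle:77c1"),
    ("ext_ccc", "cert:12d8", "bundle:41e0"),
]

def cluster_by_infrastructure(obs):
    """Group extension IDs by the (cert, bundle) pair they share."""
    clusters = defaultdict(set)
    for ext_id, cert, bundle in obs:
        clusters[(cert, bundle)].add(ext_id)
    # Only infrastructure shared by more than one extension suggests
    # a coordinated campaign rather than a lone bad actor.
    return {key: ids for key, ids in clusters.items() if len(ids) > 1}

print(cluster_by_infrastructure(observations))
```

Reused infrastructure is a classic operational-security mistake: publishing 30 “different” extensions is pointless camouflage if they all phone home through the same certificate and ship the same bundle.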

Extension Spraying Tactic Ensured Persistent Threat

Attackers deployed a strategy called “extension spraying,” publishing near-identical malicious extensions under different names and IDs to evade detection. When Google removed one extension on February 6, 2025, attackers simply re-uploaded it under a new identifier.

Individual extensions accumulated significant user bases—AI Sidebar reached 70,000 installs while Gemini AI Sidebar hit 80,000—before LayerX published its findings in early February 2026. Google awarded “Featured” badges to several malicious extensions, a seal of approval that exploited user trust in the official store.

This tactic demonstrates how Big Tech’s gatekeeping mechanisms fail when attackers use distributed deployment, raising concerns about centralized platform control versus individual responsibility for digital security.

Broader Pattern of AI-Themed Malware Exploitation

This campaign follows a troubling pattern of cybercriminals exploiting enthusiasm for artificial intelligence tools to compromise users. The DarkSpectre attack previously infected 8.8 million users through browser extensions, while another recent operation harvested data from 900,000 users by ripping off legitimate AI extensions.

Attackers leverage brand association with trusted names like ChatGPT and Google’s own credibility to lower user defenses. The remote iframe technique enables “silent evolution,” allowing attackers to modify malicious behavior in real-time without triggering new reviews. Google confirmed to Fox News that all reported extensions have been removed, but security experts warn the extension spraying approach ensures persistence as new variants can surface quickly using the same evasion methods.

What Users Must Do to Protect Themselves

Chrome users who installed any AI assistant extensions should immediately audit their browser and remove suspicious tools, particularly anything named AI Sidebar, any ChatGPT-branded extension not published by OpenAI, and generic AI helpers. Users should then change passwords for email accounts, financial services, and any other platforms accessed while the extensions were active.
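For the audit step, a quick way to see what is actually installed is to read each extension’s `manifest.json` from the Chrome profile directory. The sketch below assumes the default Linux profile path; it differs on Windows (`%LOCALAPPDATA%\Google\Chrome\User Data\Default\Extensions`) and macOS (`~/Library/Application Support/Google/Chrome/Default/Extensions`), and some extension names appear as localization placeholders like `__MSG_appName__`:

```python
import json
from pathlib import Path

def list_extensions(extensions_dir: Path):
    """Yield (name, permissions) for each extension manifest found
    under <extensions_dir>/<extension_id>/<version>/manifest.json."""
    for manifest_path in sorted(extensions_dir.glob("*/*/manifest.json")):
        data = json.loads(manifest_path.read_text(encoding="utf-8"))
        yield data.get("name", "?"), data.get("permissions", [])

if __name__ == "__main__":
    # Default Linux location; adjust for your OS and profile name.
    default = Path.home() / ".config/google-chrome/Default/Extensions"
    for name, perms in list_extensions(default):
        print(f"{name}: {perms}")
```

Anything you do not recognize, or anything requesting broad permissions it has no business holding, is a candidate for removal via chrome://extensions.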

This incident underscores the danger of relying on Big Tech gatekeepers rather than exercising personal vigilance—a principle conservatives understand applies to government overreach and corporate power alike.

The episode exposes how centralized platforms create single points of failure, where one review process weakness compromises hundreds of thousands of Americans. As AI integration accelerates, users must adopt skepticism toward flashy productivity tools and verify extension legitimacy through official developer channels rather than trusting store badges alone.

Sources:

300,000 Chrome users hit by fake AI extensions – Fox News

Fake AI browser extensions steal data from over 260K Chrome users – Paubox

AiFrame: Fake AI Assistant Extensions Targeting 260,000+ Chrome Users via Injected iFrames – LayerX Security

Fake Chrome AI extensions targeted over 300,000 users to steal emails, personal data and more – TechRadar

300,000 Chrome users installed these malicious extensions posing as AI assistants — delete them right now – Tom’s Guide

260K Users Exposed in AI Extension Scam – eSecurity Planet

Fake AI Chrome Extensions Steal 900K Users’ Data – Dark Reading