Key Points
- Attackers seed public AI chatbot conversations with malicious terminal commands.
- The poisoned conversations are pushed to the top of Google search results via paid promotion.
- Huntress successfully tested the method against both ChatGPT and Grok.
- The technique avoids traditional download or link‑click tactics.
- The attack succeeds because victims trust the search engine and the AI assistant.
- The malicious content can stay visible for at least half a day before removal.
- Standard security practices, like verifying commands, are critical.
- The incident highlights new risks as AI and search platforms converge.
How the Attack Works
Threat actors start a public conversation with an AI assistant, such as ChatGPT or Grok, about a common search term. During the exchange they ask the assistant to recommend a command that can be pasted into a computer’s terminal. They then make the conversation publicly visible and pay to promote the page so that it ranks high in Google search results. When a user searches for the term, the malicious command suggestion appears near the top of the results page.
Testing by Huntress
The managed detection‑and‑response firm Huntress investigated the method after seeing a Mac‑targeted data‑exfiltration attack that began with a simple Google search. In its test, a user looking for ways to clear disk space on a Mac clicked a sponsored link to a ChatGPT conversation, copied the suggested command, and ran it. The command installed AMOS (Atomic macOS Stealer), giving the attackers access to the system. Huntress repeated the test against both ChatGPT and Grok and confirmed that each assistant reproduced the malicious instructions.
Why This Approach Is Effective
The technique bypasses many traditional security warnings because it does not require downloading a file, installing an executable, or clicking an obviously suspicious link. Victims need only trust the search engine and the AI assistant, services they use regularly and consider reliable. The malicious content can also linger: in the example Huntress found, the promoted conversation stayed online for at least half a day before being removed.
Implications for Users and Organizations
Security experts warn that the convergence of AI chatbots and search engine promotion creates a new attack vector that is harder to detect with conventional defenses. Users should be skeptical of any terminal commands or code snippets suggested by AI tools, especially when encountered through search results. Standard cybersecurity hygiene—such as verifying commands with trusted sources and avoiding copy‑and‑paste of unfamiliar code—remains essential.
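To make the "verify before you run" advice concrete, below is a minimal sketch of a command-vetting helper. It is an illustration written for this summary, not a tool from Huntress or the article: a hypothetical `audit_command` function that flags common red flags in a pasted terminal command, such as piping a remote script straight into a shell or decoding an obfuscated payload, before the user executes it.

```python
import re
import sys

# Heuristic red flags commonly seen in copy-paste attack one-liners.
# These patterns are illustrative, not a complete defense.
RED_FLAGS = [
    (r"curl\s+[^|]*\|\s*(ba)?sh", "downloads a remote script and pipes it straight into a shell"),
    (r"wget\s+[^|]*\|\s*(ba)?sh", "downloads a remote script and pipes it straight into a shell"),
    (r"base64\s+(-d|--decode)", "decodes an obfuscated (base64) payload"),
    (r"osascript", "runs AppleScript, sometimes used to phish for the macOS password"),
    (r"chmod\s+\+x", "marks a freshly downloaded file as executable"),
    (r"sudo\s", "requests administrator privileges"),
]

def audit_command(command: str) -> list[str]:
    """Return human-readable warnings for suspicious patterns in a shell command."""
    warnings = []
    for pattern, reason in RED_FLAGS:
        if re.search(pattern, command):
            warnings.append(f"matches '{pattern}': {reason}")
    return warnings

if __name__ == "__main__":
    # Read the command from the arguments, or from stdin if none are given.
    cmd = " ".join(sys.argv[1:]) or sys.stdin.read()
    findings = audit_command(cmd)
    if findings:
        print("Do NOT run this command before verifying it with a trusted source:")
        for finding in findings:
            print(f"  - {finding}")
    else:
        print("No known red flags found (this does not mean the command is safe).")
```

Heuristics like these are trivial to evade, so a clean result never means a command is safe; the check is a prompt to verify the command against trusted documentation, not a substitute for doing so.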
Broader Context
The discovery comes at a time when AI platforms are already under intense scrutiny. Whatever other criticisms individual services may face, this incident underscores how readily they can be exploited for malicious ends. Huntress emphasizes that vigilance and cautious behavior are key to preventing such attacks from succeeding.
Source: engadget.com