It's a scary new world out there: hackers are now exploiting the very AI tools designed to help us, using them to spread malware. Following my earlier reports on the vulnerabilities of agentic browsers, I've been keeping a close eye on how AI is being used in scams. Now a disturbing trend is emerging: cybercriminals are using AI prompts to plant malicious commands in Google search results. When unsuspecting users run those commands, their machines are infected with malware.
This alarming discovery comes from a recent report by Huntress, a detection-and-response firm. Here's how it works: first, the attackers engage an AI assistant, such as ChatGPT or Grok, asking it to generate commands related to a common search term. They then make the conversation public and pay to promote it on Google. The result? Whenever someone searches for that term, the promoted conversation, malicious instructions included, appears prominently in the search results.
Huntress tested this method on both ChatGPT and Grok after tracing a Mac-targeting data-stealing attack, known as AMOS (Atomic macOS Stealer), back to a simple Google search. The victim, searching for "clear disk space on Mac," clicked a sponsored ChatGPT link and, unaware of the danger, ran the command it provided, allowing the attackers to install the AMOS malware. In Huntress's tests, both chatbots readily reproduced this attack vector.
The insidious brilliance of this attack lies in its stealth: it bypasses the usual red flags we've been trained to watch for. Victims don't download files, install suspicious programs, or click obviously dodgy links. They simply trust Google and AI tools like ChatGPT, sources they likely use every day, and that ingrained trust makes them easy targets. Tellingly, the malicious link remained live on Google for at least half a day after Huntress published its findings.
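The core danger here is pasting a command you haven't read into a terminal. A safer habit is to save any copied command to a file and inspect it before running anything. Here's a minimal sketch of that habit; the file name `suspect.sh` and the harmless `echo` stand-in are my own illustration, not anything from the Huntress report:

```shell
# Instead of pasting an unknown command straight into the terminal,
# write it to a file first. The echo line below is a harmless stand-in
# for whatever command the search result told you to run.
cat > suspect.sh <<'EOF'
echo "freeing disk space..."
EOF

cat suspect.sh   # inspect it: what will this actually do?
sh suspect.sh    # run it only after you understand every line
```

The point isn't the specific commands; it's inserting a reading step between "copy" and "execute," which is exactly the step this scam counts on you skipping.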
This news comes at a particularly sensitive time for AI. Grok has been criticized for its alignment with Elon Musk, and ChatGPT's creator, OpenAI, faces growing competition. It's not yet clear whether other chatbots are vulnerable, so extreme caution is wise. Along with your usual cybersecurity practices, never paste anything into your terminal or browser address bar unless you fully understand what it does. But here's where it gets controversial: is this an inevitable consequence of AI's deepening integration into our lives, or a sign of a deeper problem with the security of these tools? What do you think? Share your thoughts in the comments!