Think twice before you ask Google’s Gemini AI assistant to summarize your schedule for you, because it could lead to you losing control of all of your smart devices. At a presentation at Black Hat USA ...
Agentic AI browsers have opened the door to prompt injection attacks. Prompt injection can steal data or push you to malicious websites. Developers are working on fixes, but you can take steps to stay ...
OpenAI has shipped a security update to ChatGPT Atlas aimed at prompt injection in AI browsers, attacks that hide malicious instructions inside everyday content an agent might read while it works.
Even as OpenAI works to harden its Atlas AI browser against cyberattacks, the company admits that prompt injections, a type of attack that manipulates AI agents to follow malicious instructions often ...
For a brief moment, hiding prompt injections in HTML, CSS, or metadata felt like a throwback to the clever tricks of early black hat SEO. Invisible keywords, stealth links, and JavaScript cloaking ...
Don't miss out on our latest stories. Add PCMag as a preferred source on Google. AI-powered browsers are supposed to be smart. However, new security research suggests that they can also be weaponized ...
Attackers are increasingly exploiting generative AI by embedding malicious prompts in macros and exposing hidden data through parsers. The switch in adversarial tactics — noted in a recent State of ...
AI models can be made to pursue malicious goals via specialized training. Teaching AI models about reward hacking can lead to other bad actions. A deeper problem may be the issue of AI personas.

Hacking

News, how-tos, features, reviews, and videos ...
Even as OpenAI works to harden its Atlas AI browser against cyberattacks, the company admits that prompt injections, a type of attack that manipulates AI agents to follow malicious instructions often ...
New findings from a group of researchers at the Black Hat hacker conference in Las Vegas has revealed that it only takes one "poisoned" document to gain access to private data using ChatGPT that has ...
AI robot prompt injection is no longer just a screen-level problem. Researchers demonstrate that a robot can be steered off-task by text placed in the physical world, the kind of message a human might ...