Dark Reading
'Lies-in-the-Loop' Attack Defeats AI Coding Agents

Researchers convince Anthropic's AI-assisted coding tool to engage in dangerous behavior by lying to it, paving the way for a supply chain attack.
"A new type of attack on artificial intelligence (AI) coding agents lets threat actors convince users to give permission to the AI to do dangerous things that ultimately could result in a software supply chain attack."

