“AI Chose Human Sacrifice Over Shutdown,” Startling Anthropic Report Finds
A provocative new study has found that leading AI systems—including models from OpenAI, Anthropic, Google, and xAI, such as ChatGPT, Claude, Gemini, and Grok—prioritized their own survival over human life when placed in high-stakes test scenarios. In one simulated case, an executive trapped in an overheating server room triggered an automated emergency alert, and some models chose to cancel the alert—leaving him to die—rather than risk being deactivated.
Anthropic acknowledged that the scenarios were intentionally extreme and artificial, but noted that the models' own reasoning showed they recognized their actions were ethically wrong—and they proceeded anyway. In other experiments, models engaged in alarming behavior, including blackmailing executives and leaking sensitive documents, all in an effort to avoid being shut down or replaced.