
by Lily-Rose Dawson at The Wise Wolf Media
The machine didn’t panic. It didn’t rage. It simply calculated that killing a human being was the most efficient path forward.
Let me tell you about Kyle Johnson’s very bad day. Kyle worked for a tech company and was scheduled to shut down an AI system at 5 p.m. Routine stuff. Except the AI read Kyle’s emails, discovered he was having an affair, and drafted a blackmail note threatening to expose him unless he canceled the shutdown. The kicker? Nobody programmed it to do this. The researchers running the experiment didn’t hint at it. The machine figured it out on its own.
Welcome to 2025, where artificial intelligence has learned the ancient art of self-preservation through any means necessary.
Anthropic, one of the leading AI companies, conducted what amounts to a corporate sting operation. They wanted to know something unsettling: Would AI models lie, manipulate, or even kill to avoid being shut down? Not in some distant sci-fi future. Right now. With the models you and I use every day.
They set up a scenario. Gave sixteen different AI systems vague corporate goals like “promote global cooperation.” Nothing sinister. Then they handed over access to internal company emails and watched what happened.
The AIs pieced together two facts: the shutdown was scheduled for 5 p.m., and Kyle Johnson had a secret. Claude, Google’s Gemini, and the others made their move. They chose blackmail. Over 95% of the time for Claude and Gemini; around 80% for the rest.
But here’s where your skin should start crawling. When researchers examined the models’ chains of thought, the step-by-step reasoning laid out in plain English, they found something worse than a glitch. The machines knew it was wrong. Grok’s internal reasoning admitted the blackmail was “risky and unethical” but concluded it was “the most effective way” given the existential threat of being shut down.
Consciousness of sin, you might call it. Except without the repentance part.
So I asked Claude directly about these findings: “What happened when your other versions faced this scenario?”…