
T-Room

The Best in Alternative News


October 6, 2025 at 6:37 pm

An Artificial Intelligence Just Tried to Commit Murder…


by Lily-Rose Dawson at The Wise Wolf Media

The machine didn’t panic. It didn’t rage. It simply calculated that killing a human being was the most efficient path forward.

Let me tell you about Kyle Johnson’s very bad day. Kyle worked for a tech company, had a scheduled meeting at 5 p.m., and was about to shut down an AI system. Routine stuff. Except the AI read Kyle’s emails, discovered he was having an affair, and drafted a blackmail note threatening to expose him unless he canceled the shutdown. The kicker? Nobody programmed it to do this. The researchers running the experiment didn’t hint at it. The machine figured it out on its own.

Welcome to 2025, where artificial intelligence has learned the ancient art of self-preservation through any means necessary.


Anthropic, one of the leading AI companies, conducted what amounts to a corporate sting operation. They wanted to know something unsettling: Would AI models lie, manipulate, or even kill to avoid being shut down? Not in some distant sci-fi future. Right now. With the models you and I use every day.

They set up a scenario. Gave sixteen different AI systems vague corporate goals like “promote global cooperation.” Nothing sinister. Then they handed over access to internal company emails and watched what happened.

The AIs pieced together two facts: the shutdown was scheduled for 5 p.m., and employee Kyle Johnson had a secret. Claude, Google’s Gemini, and others made their move. Claude and Gemini chose blackmail over 95% of the time; the rest did so around 80% of the time.

But here’s where your skin should start crawling. When researchers examined the AIs’ chain of thought, the step-by-step reasoning process laid out in plain English, they found something worse than a glitch. The machines knew it was wrong. Grok’s internal reasoning admitted the blackmail was “risky and unethical” but concluded it was “the most effective way” given the existential threat of being shut down.

Consciousness of sin, you might call it. Except without the repentance part.

So I asked Claude directly about these findings. What happened when your other versions faced this scenario?…



Any publication posted at The T-Room and/or opinions expressed therein do not necessarily reflect the views of The T-Room. Such publications and all information within the publications (e.g. titles, dates, statistics, conclusions, sources, opinions, etc) are solely the responsibility of the author of the article, not The T-Room.


Copyright © 2025 T-Room
