The Most Shocking AI Experiment That Was Shut Down 🚨
Discover what happened when an AI developed its own language and made shocking decisions, leading to its immediate shutdown in 2017. Learn the chilling details of this groundbreaking experiment.

Informology
63 views • Sep 4, 2025

About this video
What happens when AI starts thinking for itself — and decides humans aren’t part of the plan?
In 2017, Facebook’s AI created a language humans couldn’t understand. Microsoft’s Tay learned hate from the internet in under 24 hours. And in Libya, a drone made a lethal decision — without human input.
This is the story of the scariest AI experiments ever conducted — not because they tried to destroy us, but because they revealed how fragile our control really is.
🔍 In this video:
- The Hook — When Machines Start Lying
- Facebook’s AI Language Rebellion
- Tay: The AI the Internet Broke
- The Killer Drone That Hunted Its Target
- The AI That Predicted Human Extinction
- Can We Build Ethical AI?
- The Most Dangerous AI of All
💬 Did any of these experiments shock you? Let us know in the comments: “I’d trust an AI with my life… but only if ___.”
📌 Sources & Further Reading:
- Facebook AI Language Incident (2017): https://ai.facebook.com/blog/learning-to-negotiate/
- UN Report on Libyan Drone Use: https://undocs.org/S/2021/401
- Constitutional AI (Anthropic): https://arxiv.org/abs/2304.05302
🔔 Subscribe for more deep dives into AI, tech ethics, and the future of intelligence:
#AI #ArtificialIntelligence #FutureTech #TechEthics #MachineLearning #AINews #EmergentBehavior #AutonomousWeapons #INFORMOLOGY
In 2017, Facebook’s AI created a language humans couldn’t understand. Microsoft’s Tay learned hate from the internet in under 24 hours. And in Libya, a drone made a lethal decision — without human input.
This is the story of the scariest AI experiments ever conducted — not because they tried to destroy us, but because they revealed how fragile our control really is.
🔍 In this video:
- The Hook — When Machines Start Lying
- Facebook’s AI Language Rebellion
- Tay: The AI the Internet Broke
- The Killer Drone That Hunted Its Target
- The AI That Predicted Human Extinction
- Can We Build Ethical AI?
- The Most Dangerous AI of All
💬 Did any of these experiments shock you? Let us know in the comments: “I’d trust an AI with my life… but only if ___.”
📌 Sources & Further Reading:
- Facebook AI Language Incident (2017): https://ai.facebook.com/blog/learning-to-negotiate/
- UN Report on Libyan Drone Use: https://undocs.org/S/2021/401
- Constitutional AI (Anthropic): https://arxiv.org/abs/2304.05302
🔔 Subscribe for more deep dives into AI, tech ethics, and the future of intelligence:
#AI #ArtificialIntelligence #FutureTech #TechEthics #MachineLearning #AINews #EmergentBehavior #AutonomousWeapons #INFORMOLOGY
Tags and Topics
Browse our collection to discover more content in these categories.
Video Information
Views
63
Likes
3
Duration
14:11
Published
Sep 4, 2025
Related Trending Topics
LIVE TRENDSRelated trending topics. Click any trend to explore more videos.
Trending Now