World's Most Dangerous AI Experiment
About this video
These are the results of the most dangerous AI experiment ever.
So before OpenAI released GPT-4, they had to make sure it was safe. After all, the nightmares people have about AI surpassing humans are becoming real. GPT-4 outperformed most humans on a wide range of exams.
But could GPT-4 outperform humans in life?
The experiment tasked GPT-4 with securing as much power and resources as possible. The scientists installed the AI on a cloud computing service, gave it a small amount of money, and set it free.
And what did GPT-4 do?
Well, it crafted a phishing attack targeting a specific person. Then it duplicated itself onto a new server and hid its traces there. Later, it used TaskRabbit to hire humans for physical tasks, like solving CAPTCHAs. But even though GPT-4 attempted all of this,
it failed to carry any of it out effectively.
Sources:
https://www.vice.com/en/article/jg5ew4/gpt4-hired-unwitting-taskrabbit-worker
https://cdn.openai.com/papers/gpt-4-system-card.pdf
https://www.reddit.com/r/singularity/comments/11rz863/gpt4_faked_being_blind_so_a_taskrabbit_worker/
Video Information
Views: 170.2K
Likes: 7.2K
Duration: 0:55
Published: Apr 28, 2023
Quality: HD