Has our ability to create intelligence outpaced our wisdom? | Max Tegmark on A.I.
About this video
Some of the most intelligent people at the most highly funded companies in the world can't seem to answer this simple question: what is the danger in creating something smarter than you? They've built AI so capable through "deep learning" that it's outsmarting the people who made it. The reason is the "black box" style of code the AI is built on: it's designed solely to become smarter, and we have no way to regulate that knowledge. That might not seem like a terrible thing if you want to build superintelligence. But we've all experienced something minor going wrong, or a bug, in our current electronics. Imagine that, but in a Robojudge that can sentence you to 10 years in prison with no explanation other than "I've been fed data and this is what I compute," or a bug in the AI of a busy airport. We need regulation now, before we create something we can't control. Max's book Life 3.0: Being Human in the Age of Artificial Intelligence is being heralded as one of the best books on AI, period, and is a must-read if you're interested in the subject.
Read more at BigThink.com: http://bigthink.com/videos/max-tegmark-were-smart-enough-to-create-intelligent-machines-but-are-we-wise-enough
Follow Big Think here:
YouTube: http://goo.gl/CPTsV5
Facebook: https://www.facebook.com/BigThinkdotcom
Twitter: https://twitter.com/bigthink
Transcript: I'm optimistic that we can create an awesome future with technology as long as we win the race between the growing power of the tech and the growing wisdom with which we manage the tech.
This is actually getting harder because of nerdy technical developments in the AI field.
It used to be, when we wrote state-of-the-art AI, like IBM's Deep Blue computer that defeated Garry Kasparov in chess a couple of decades ago, that all the intelligence was basically programmed in by humans who knew how to play chess, and the computer won the game just because it could think faster and remember more. But we understood the software well.
Understanding what your AI system does is one of those pieces of wisdom you have to have to be able to really trust it.
The reason we have so many problems today with systems getting hacked or crashing because of bugs is exactly because we didn't understand the systems as well as we should have.
Now what's happening is fascinating: today's biggest AI breakthroughs are of a completely different kind, where rather than the intelligence being largely programmed in with easy-to-understand code, you put in almost nothing except a little learning rule by which a simulated network of neurons can take a lot of data and figure out how to get stuff done.
This deep learning suddenly becomes able to do things often even better than the programmers were ever able to do.
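To make that concrete (this example is mine, not Tegmark's, and every name and number in it is invented for illustration), here is a minimal sketch of what such a "little learning rule" looks like: a single simulated neuron repeatedly nudges its weights based on data, with no task-specific knowledge programmed in.

```python
# Minimal sketch of a learning rule: a single simulated neuron fits a pattern
# purely from data. Nothing about the task is hard-coded; only the update rule is.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the "world" secretly follows y = 3*x1 - 2*x2 plus a little noise.
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)

w = rng.normal(size=2)   # the neuron's weights, initialized at random
b = 0.0                  # its bias
lr = 0.05                # learning rate: the one knob we hand-tune

for step in range(500):
    pred = X @ w + b                  # what the neuron currently predicts
    err = pred - y                    # how wrong it is on each example
    w -= lr * (X.T @ err) / len(y)    # the learning rule: step against the error gradient
    b -= lr * err.mean()

print("learned weights:", w, "bias:", round(b, 3))  # weights approach [3, -2]
```

Scaled up from one neuron to millions, and from this toy data to images or game frames, the same kind of rule is what "deep learning" refers to here.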
You can train a machine to play computer games with almost no hard-coded stuff at all. You don't tell it what a game is, what the things are on the screen, or even that there is such a thing as a screen. You just feed in a bunch of data about the colors of the pixels and tell it, "Hey, go ahead and maximize that number in the upper left corner," and gradually you come back and it's playing some game much better than I could.
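As a rough illustration of that setup (again my own sketch, not anything from the talk or a real game library), the toy program below sees only raw pixel values and a score and is told only to make the score go up; the "game" itself is a stand-in invented for the example.

```python
# Toy version of "maximize the number in the corner": the agent sees only an
# 8x8 grid of pixel values and a score, and learns which moves raise the score.
import numpy as np

rng = np.random.default_rng(1)

class ToyPixelGame:
    """A fake 8x8 'screen' with one bright dot; actions 0-3 move the dot."""
    def __init__(self):
        self.pos = np.array([0, 0])
        self.goal = np.array([7, 7])   # the agent is never told this exists

    def observe(self):
        screen = np.zeros((8, 8))
        screen[tuple(self.pos)] = 1.0
        return screen.ravel()          # just 64 pixel values, no labels

    def step(self, action):
        moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
        self.pos = np.clip(self.pos + moves[action], 0, 7)
        return -np.abs(self.pos - self.goal).sum()   # the "score": higher is better

# Learn a linear score estimate per action from the raw pixels (epsilon-greedy).
weights = np.zeros((4, 64))
eps, lr = 0.2, 0.1

for episode in range(300):
    env = ToyPixelGame()
    for t in range(30):
        pixels = env.observe()
        action = rng.integers(4) if rng.random() < eps else int(np.argmax(weights @ pixels))
        reward = env.step(action)
        # Nudge the chosen action's estimate toward the score it actually produced.
        weights[action] += lr * (reward - weights[action] @ pixels) * pixels

# Greedy run with the learned weights: the dot heads toward the high-scoring corner.
env = ToyPixelGame()
for t in range(20):
    env.step(int(np.argmax(weights @ env.observe())))
print("distance to goal after greedy run:", np.abs(env.pos - env.goal).sum())
```

The real systems Tegmark is describing use deep neural networks and far more data, but the shape of the setup is the same: pixels in, a score to maximize, and no human-written game knowledge anywhere.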
The challenge with this is that even though it's very powerful, it's very much a "black box" now: yes, it does all that great stuff, but we don't understand how.
So suppose I get sentenced to ten years in prison by a Robojudge in the future and I ask, "Why?"
And I'm told, "I WAS TRAINED ON SEVEN TERABYTES OF DATA, AND THIS WAS THE DECISION." It's not that satisfying for me.
Or suppose the machine that's in charge of our electric power grid suddenly malfunctions and someone says, "Well, we have no idea why; we trained it on a lot of data and it worked." That doesn't instill the kind of trust that we want to put into these systems.
When you get the blue screen of death because your Windows machine crashes, or the spinning wheel of doom because your Mac crashes, "annoying" is probably the main emotion we have. But "annoying" isn't the emotion we have if it's the software flying the airplane I'm in that crashes, or the software controlling the nuclear arsenal of the U.S., or something like that.
And as AI gets more and more out into the world, we absolutely need to transform today's hackable and buggy AI systems into AI systems that we can really trust.
Video Information
Views: 1
Duration: 3:22
Published: Jun 6, 2018