
We all know that AI technology is constantly growing, but someday that technology could come back to bite us. You've probably seen movies about robots taking over the world, but the risk of something like that actually happening may be higher than most people expect.

Controlling AI once it reaches a certain point won't be as easy as it might sound. Simply shutting things off may not be an option. Earlier this month, research on the topic was published in the Journal of Artificial Intelligence Research, and it really puts things into perspective. The researchers essentially calculated the potential for AI to get 'out of control', and things aren't looking good.

Superintelligent machines, or AI in general, could end up becoming as advanced as what we see on the big screen, at least on some level, and that in itself is quite scary to think about. While we would love to use this kind of technology to our advantage, knowing when we've gone too far won't be clear-cut. I know this might all sound a bit out there, but it's well worth diving into.

The following write-up covering the research was posted on MPG.DE:

Suppose someone were to program an AI system with intelligence superior to that of humans, so it could learn independently. Connected to the Internet, the AI may have access to all the data of humanity. It could replace all existing programs and take control of all machines online worldwide. Would this produce a utopia or a dystopia? Would the AI cure cancer, bring about world peace, and prevent a climate disaster? Or would it destroy humanity and take over the Earth?

Computer scientists and philosophers have asked themselves whether we would even be able to control a superintelligent AI at all, to ensure it would not pose a threat to humanity. An international team of computer scientists used theoretical calculations to show that it would be fundamentally impossible to control a super-intelligent AI.

“A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity”, says study co-author Manuel Cebrian, Leader of the Digital Mobilization Group at the Center for Humans and Machines, Max Planck Institute for Human Development.

Scientists have explored two different ideas for how a superintelligent AI could be controlled. On one hand, the capabilities of superintelligent AI could be specifically limited, for example, by walling it off from the Internet and all other technical devices so it could have no contact with the outside world – yet this would render the superintelligent AI significantly less powerful, less able to answer humanity's quests. Lacking that option, the AI could be motivated from the outset to pursue only goals that are in the best interests of humanity, for example by programming ethical principles into it. However, the researchers also show that these and other contemporary and historical ideas for controlling super-intelligent AI have their limits.

In their study, the team conceived a theoretical containment algorithm that ensures a superintelligent AI cannot harm people under any circumstances, by simulating the behavior of the AI first and halting it if considered harmful. But careful analysis shows that in our current paradigm of computing, such an algorithm cannot be built.

Superintelligence seems to be coming, and that much we should all be able to agree on. It could end up being just as smart as, if not smarter than, us, and while that's intense and confusing, it's also quite mind-blowing and exciting to think about. We've already come a very long way, and in time things will only be pushed further and further.

The abstract of the study noted above reads as follows:

Superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. In light of recent advances in machine intelligence, a number of scientists, philosophers and technologists have revived the discussion about the potentially catastrophic risks entailed by such an entity. In this article, we trace the origins and development of the neo-fear of superintelligence, and some of the major proposals for its containment. We argue that total containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) impossible.
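The impossibility claim in that abstract rests on a classic computability argument: any program that claims to predict what another program will do can be fed a "paradoxical" program built to do the opposite of the prediction made about itself. As a rough sketch of the idea (the names `make_paradox` and the toy deciders are hypothetical illustrations, not the paper's actual construction), here is how any claimed all-purpose behavior-decider can be defeated:

```python
def make_paradox(claimed_decider):
    """Given any function claiming to decide whether a program halts
    (stand in 'is harmful' for 'halts' and the argument is the same),
    build a program that does the opposite of what is predicted about it."""
    def paradox():
        if claimed_decider(paradox):  # prediction: paradox halts...
            while True:               # ...so loop forever instead
                pass
        return "halted"               # prediction: paradox loops -> halt
    return paradox

# A decider that claims every program halts is refuted:
p = make_paradox(lambda prog: True)
# calling p() would loop forever, contradicting the prediction.

# A decider that claims every program loops is refuted too:
q = make_paradox(lambda prog: False)
assert q() == "halted"  # it halts, contradicting the prediction
```

Whatever answer the candidate decider gives, the constructed program behaves the other way, so no such decider can be correct on all inputs. The study applies this style of reasoning to a hypothetical containment algorithm that must simulate a superintelligent AI and predict whether it will cause harm.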

What do you think about all of this? Do you think there could somehow be a way to contain such a thing, and if so, what ideas do you have? I, for one, think it would be seemingly impossible as well if we were to advance to a stage as intense as we've seen played out in stories.