The idea of AI (Artificial Intelligence) overthrowing humankind has been discussed for decades, and scientists have now delivered their verdict on whether we would be able to control a high-level computer super-intelligence. The answer? Almost certainly not.
The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyse. But if we're unable to grasp it, it's impossible to create such a simulation.
Rules such as 'cause no harm to humans' cannot be set if we do not understand the kind of scenarios an AI could come up with, suggest the authors of the new paper. Once a computing system is working on a level above the scope of our programmers, we can no longer set limits.
“A super-intelligence poses a fundamentally different problem than those typically studied under the banner of robot ethics,” write the researchers.
“This is because a super-intelligence is multi-faceted and potentially capable of mobilizing a diversity of resources so as to realize objectives that are potentially incomprehensible to humans, let alone controllable.”
Part of the team's reasoning comes from the halting problem put forward by Alan Turing in 1936. The problem centres on knowing whether or not a computer program will reach a conclusion and an answer (so it halts), or simply loop forever trying to find one.
As Turing proved through some smart math, while we can know that for some specific programs, it's logically impossible to find a way that will tell us that for every potential program that could ever be written. That brings us back to AI, which in a super-intelligent state could feasibly hold every possible computer program in its memory at once.
Any program written to stop AI from harming humans and destroying the world, for example, may reach a conclusion (and halt) or not. It's mathematically impossible for us to be absolutely sure either way, which means it isn't containable.
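Turing's argument can be sketched in a few lines of Python. Here `halts` is a hypothetical, placeholder "oracle" (an assumption for illustration only; no correct version can be written), and `diagonal` is a program built to do the opposite of whatever the oracle predicts about it:

```python
def halts(f) -> bool:
    """Hypothetical halting oracle: claims to decide whether f() halts.
    This placeholder just answers True so the sketch runs; it is an
    assumption for illustration, not a real decider."""
    return True

def diagonal():
    """Does the opposite of whatever halts() predicts about diagonal itself."""
    if halts(diagonal):
        while True:   # oracle said "halts", so loop forever
            pass
    # oracle said "loops forever", so halt immediately

# Whatever halts(diagonal) returns, it is wrong about diagonal:
#   True  -> diagonal would loop forever (oracle wrong)
#   False -> diagonal would halt at once (oracle wrong)
print(halts(diagonal))  # prints True; yet diagonal would then never halt
```

Since every candidate `halts` is defeated by its own `diagonal`, no general halting checker can exist, and the same reasoning applies to a general "containment checker" for a super-intelligent AI.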
“In effect, this makes the containment algorithm unusable,” said scientist Iyad Rahwan, from the Max Planck Institute for Human Development in Germany.
The alternative to teaching AI some ethics and telling it not to destroy the world, something which no algorithm can be absolutely certain of doing, the researchers say, is to limit the capabilities of the super-intelligence. It could be cut off from parts of the internet or from certain networks, for example.
The new study rejects this idea too, because it would limit the reach of the artificial intelligence; the argument goes that if we're not going to use it to solve problems beyond the scope of humans, then why create it at all?
If we are going to push ahead with artificial intelligence, we might not even know when a super-intelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking some serious questions about the directions we're going in.
"A super-intelligent machine that controls the world sounds like science fiction," says scientist Manuel Cebrian, from the Max Planck Institute for Human Development. "But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it."
"The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity."
The research has been published in the Journal of Artificial Intelligence Research.