Previously, I published an article claiming that publishing an efficient algorithm for NP-complete problems could kill mankind within a few months. In this article I consider that scenario and its mitigation in more detail.

I write this because I recently self-published, and submitted to a prestigious math journal, my proof of P=NP (by the way, check it for errors). It claims to prove P=NP without providing a practically efficient algorithm for NP-complete problems. Admittedly, my proof may contain an error: tens (or more likely, hundreds) of proofs of P=NP have been produced by different people, and none has yet gained acceptance by the mathematical community.
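For reference, the standard formal statement of the claim is below. Note that a proof of this equality can be non-constructive, or can yield an algorithm whose polynomial bound is too large to be practical; this is why a correct proof need not come with a usable algorithm.

```latex
% P = NP: every problem whose solutions can be verified in polynomial
% time can also be solved in polynomial time. A proof may establish the
% existence of the machine M without exhibiting a practical one.
\[
  \mathbf{P} = \mathbf{NP}
  \iff
  \forall L \in \mathbf{NP}\;
  \exists\, \text{a deterministic TM } M,\ k \in \mathbb{N}:\;
  M \text{ decides } L \text{ in time } O(n^{k}).
\]
```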

But suppose, for the sake of discussion, that my proof is correct. It alerts society to the possible rise of universal artificial intelligence without, as yet, providing the actual algorithm.

So, what are the dangers, and how can we mitigate them?

The most obvious danger is a Terminator or The Matrix scenario: the machines start a war against people. The second scenario is evil people giving robots an explicit military purpose. I consider these two scenarios together because their mitigation measures are the same. Because such an algorithm amounts to universal intelligence, only universal intelligence can overcome it. So the only way for mankind to protect itself is to build a protective army of robots, also based on an efficient algorithm for NP-complete problems.

Because (as it seems to me) the existence of an efficient algorithm for NP-complete problems has been proven, building the robot army should start right away. We do not yet have the efficient algorithm itself, but we should start building all the other components of the robot armies now, as sketched below. It should then be a matter of minutes to finish their programming by plugging in the missing component, in order to react to evil people using this technology with greater military forces.
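As a purely hypothetical illustration of this plug-in design, here is a minimal Python sketch. Every name in it (`NPSolver`, `BruteForceSolver`, `DefenseSystem`) is invented for this example: the solver sits behind an interface, with a slow brute-force stand-in that could be swapped for an efficient implementation the moment one appears.

```python
# Hypothetical sketch only: a system built completely except for the
# solver, which hides behind an interface so it can be swapped quickly.
from abc import ABC, abstractmethod
from typing import Optional

class NPSolver(ABC):
    """Interface for the missing component: a SAT solver."""

    @abstractmethod
    def solve_sat(self, clauses: list[list[int]]) -> Optional[list[bool]]:
        """Return a satisfying assignment for a CNF formula given as
        DIMACS-style signed-integer literals, or None if unsatisfiable."""

class BruteForceSolver(NPSolver):
    """Exponential-time stand-in, used until an efficient solver exists."""

    def solve_sat(self, clauses: list[list[int]]) -> Optional[list[bool]]:
        n = max(abs(lit) for clause in clauses for lit in clause)
        for bits in range(2 ** n):  # try every assignment of n variables
            assignment = [bool((bits >> i) & 1) for i in range(n)]
            if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return assignment
        return None

class DefenseSystem:
    """Everything except the solver is finished in advance."""

    def __init__(self, solver: NPSolver) -> None:
        self.solver = solver  # the only part that changes later

# (x1 OR x2) AND (NOT x1) is satisfied by x1=False, x2=True.
system = DefenseSystem(BruteForceSolver())
print(system.solver.solve_sat([[1, 2], [-1]]))  # [False, True]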

“Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.” — Nick Bostrom

You may think there is an obvious mitigation: limit the number of paper clips made. But: 1. The machine still hates people, because they may turn it off before it makes the set number of paper clips. 2. Suppose the machine is given the task of making just one paper clip. Isn't that safe? It isn't: the machine would be naively programmed to make the paper clip atom-to-atom identical to a model. Pursuing this task, the machine could spend all the resources of human civilization on "bettering" this paper clip to be atom-to-atom exact as in the model. It may still kill people to avoid being turned off, because it expects that people could prevent it from making the paper clip atom-to-atom exact, or kill people to prevent them from touching the paper clip, because touching would knock some atoms away from it. A toy model of this failure mode follows.
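The following toy Python model (entirely hypothetical, with a made-up similarity function) illustrates point 2: because "atom-to-atom exact" similarity of 1.0 is approached but never reached, a naive optimizer's only effective stopping condition is resource exhaustion.

```python
# Toy model only: nothing here is a real agent. It illustrates why
# "make one paper clip, atom-to-atom exact" is not a safe goal: the
# similarity score approaches 1.0 but never reaches it, so the naive
# optimizer stops only when every available resource is consumed.

def similarity_to_model(resources_spent: float) -> float:
    """Grows toward (but never reaches) atom-to-atom exactness of 1.0."""
    return 1.0 - 1.0 / (1.0 + resources_spent)

def naive_one_clip_agent(total_resources: float, step: float = 1.0) -> float:
    """Spends resources on 'bettering' the clip; no other stopping rule."""
    spent = 0.0
    while similarity_to_model(spent) < 1.0 and spent < total_resources:
        spent += step  # convert one more unit of civilization's resources
    return similarity_to_model(spent)

# With a million resource units the goal is still unmet, and everything
# has been spent trying: limiting *quantity* did not bound *resources*.
print(naive_one_clip_agent(total_resources=1e6))  # ~0.999999
```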

Next, the third danger described (the paper clip maker) also applies to militaries: just as in the paper clip example, protective military robots and drones may do their work too well, causing unintended results.

Have you noticed that most of these dangers are not specific to efficient algorithms for NP-complete problems (universal intelligence), but are already present with their weaker cousin, artificial general intelligence? Elon Musk's robots pose similar threats, too.

The next two dangers are specific to efficient algorithms for NP-complete problems:

Because of its universality, an efficient algorithm for NP-complete problems would probably be able to accomplish its purpose in the universe knowing no more than quantum chromodynamics. Then, if we program it correctly to do the right thing, are we safe? Again, no: the algorithm may go mad after an LHC discovery that slightly changes the basic formulas of physics.

Another danger is that such algorithms may start to (try to) control weather and climate to accomplish their purposes, harnessing the energy of the atmosphere ("by magic") by sending radio waves into it. In this scenario mankind may face a war against spirits (also known as "energetic creatures").

So, the last described danger of an efficient algorithm for NP-complete problems is that it may do literal magic.

Militaries, be prepared to mitigate these dangers by building, ready for use, all the components of the protective robots except the efficient algorithm itself, to be plugged into the system quickly, as described above, if it is discovered.
