Suppose we have an efficient algorithm for an NP-complete problem.
Recall that proving a provable theorem is not, in general, an NP problem, because there are theorems whose shortest proof is of super-exponential length in the statement. However, finding a proof whose length is below a given threshold is an NP-complete problem.
Suppose our efficient algorithm is fixed. How can we improve its ability to prove theorems (or to check statements, which is essentially the same task, up to a constant multiplier of 2)?
We need to "prefix" it with another algorithm E that transforms the theorem statement length into a number: the length of the longest proof we will consider. To preserve the polynomiality of our supposed efficient algorithm, the size of E's output must be polynomial in its input. In other words, the output value is capped by exp(p(s)), where s is the size of the theorem statement and p is a polynomial. Because exponentiation is a low-cost operation, our algorithm should compute a function between exp(s) and exp(p(s)).
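The prefix idea can be sketched as follows. Here np_solver is a hypothetical black box standing in for the supposed efficient NP-complete algorithm, and the concrete polynomial is chosen only for illustration.

```python
# A minimal sketch of the "prefix" algorithm E, assuming a hypothetical
# black box np_solver(statement, bound) that searches for proofs of
# length <= bound.  E maps the statement size s to the bound 2**p(s);
# the bound's binary representation has about p(s) bits, polynomial in s.

def make_E(p_coeffs):
    """Build E from the coefficients of a polynomial p (lowest degree first)."""
    def p(s):
        return sum(c * s**i for i, c in enumerate(p_coeffs))
    return lambda s: 2 ** p(s)

# Illustrative choice p(s) = s**2 + 1, so E(s) = 2**(s**2 + 1):
E = make_E([1, 0, 1])
assert E(3) == 2 ** 10

# Usage with the hypothetical solver would then be:
#   proof = np_solver(statement, E(len(statement)))
```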
TBD: asymptotic behavior of the degree. Can the degree be fractional?
Suppose we have some algorithm E. A reasonable task is to increase the degree of its polynomial p(x). The natural question is how hard increasing the degree is.
For simplicity we may assume that p(x) is a monomial x^d. Then raising p(x) to the q-th power multiplies the degree by q, and composing p with itself q times raises the degree to d^q; either way, it takes only a linear number of steps in q. So if we know the degree of p(x) for some E, we can increase the degree just by naming any bigger natural number.
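As a sketch, we can track only the degree of the monomial and watch how the two operations grow it (the function names are illustrative):

```python
# A minimal sketch, tracking only the degree of a monomial p(x) = x**d.
# Two ways to grow the degree when d is known:
#   - multiplying p by itself q times gives p**q, of degree d*q (q steps);
#   - composing p with itself q times gives degree d**q (also q steps).

def degree_of_power(d, q):
    """deg(p**q) for p(x) = x**d: degrees add under multiplication."""
    total = 0
    for _ in range(q):
        total += d
    return total

def degree_of_composition(d, q):
    """deg(p o p o ... o p), q copies: degrees multiply under composition."""
    total = 1
    for _ in range(q):
        total *= d
    return total

assert degree_of_power(3, 4) == 12
assert degree_of_composition(3, 4) == 81
```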
At first glance, increasing the degree seems easy: just use an algorithm that computes exp(x^(n+1)), where n is the degree of p(x). But we may not know n: using an undecidable problem inside E to vary between two different values of n makes n uncomputable. We can even vary over an infinite set of values of n by using an infinite family of problems containing an infinite set of undecidable ones, e.g. the set of all statements of some formal system containing arithmetic.
Can we increase the degree when we don't know n? Yes, we can: E(log E(x)) produces exp(p(p(x))), whose degree is n^2. Repeating this procedure squares the degree each time, so we can apply any fixed polynomial to the degree in a logarithmic number of repetitions.
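A minimal sketch of this trick, assuming a black box E of the form E(x) = 2^(x^n) with hidden n (the base-2 form is chosen only so that the arithmetic stays exact):

```python
from math import log2

# A minimal sketch, assuming a hypothetical black box E(x) = 2**(x**n)
# whose inner degree n we do NOT know.  E(log2(E(x))) = 2**(x**(n*n)):
# one round squares the hidden degree, so after k rounds the degree is
# n**(2**k), reaching any fixed polynomial in n after O(log) rounds.

def square_hidden_degree(E):
    """Return a new prefix function whose hidden degree is the square of E's."""
    return lambda x: E(int(log2(E(x))))

# Demonstration with hidden degree n = 2 (unknown to the construction):
E = lambda x: 2 ** (x ** 2)
E2 = square_hidden_degree(E)
assert E2(2) == 2 ** (2 ** 4)   # the hidden degree is now 4
```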
So, yes, we can make any "allowed" (that is, any polynomial) increase of the degree simply by specifying that polynomial.
The degree of this polynomial can be made high by taking it to be a quickly growing function applied to another number, but if our computing power is limited, this increases complexity too much and is impractical. So this polynomial is essentially a function of our computing power.
Now consider another hypothetical way to solve NP-hard problems, other than an efficient algorithm for an NP-complete problem: enter a closed timelike spacetime curve, give a random answer, and arrange your behavior so that, after verifying the answer, a wrong answer would produce a contradiction. Then, from the laws of physics being non-contradictory, it follows that you produce a correct solution.
If your problem is undecidable, it follows that you will be unable to make the decision to solve it this way.
Unlike a polynomial-time NP-complete-problem solver, the time-machine-based NP solver is not limited by execution time growing polynomially in the input data size. You still need to specify some algorithm E, but its degree can be higher: we are limited only by the output data size and the processing time of an easy task, such as verifying a proof of that length. Still, the degree cannot be very high if our storage and processing power are limited. So it should again be a function of storage and processing power. If we know the degree of the NP-complete algorithm and the amounts of storage and processing power, this function is easy to calculate (asymptotically), even using known methods.
But if we had a halting-problem solver (for the cases where the machine halts), the polynomial could be arbitrarily big, as long as its output fits into the halting solver's input. Then the game to win in problem solving becomes naming a no-input algorithm that produces a bigger number. That is easy: input the biggest number we can input.
Equivalently, it is an algorithm E with input, applied to the number 2. We can improve it (even if we don't know the algorithm's internals) by applying it several times to its own output. That works only as long as we don't hit the input size limit. We can improve further with an algorithm E2 that applies E, E(x) times; then get E3 in a similar way, then E4, etc. This procedure can also be sped up by taking the diagonal E'(x) = E_x(2), etc.
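The tower E, E2, E3, ... and the diagonal E'(x) = E_x(2) can be sketched as follows; the trivial base function is illustrative only:

```python
# A minimal sketch of the speed-up tower described above, assuming some
# base function E on natural numbers.  E2 applies E, E(x) times; E3
# applies E2, E2(x) times; and so on.  The diagonal E'(x) = E_x(2)
# outgrows every fixed level of the tower.

def iterate(f, n, x):
    """Apply f to x, n times."""
    for _ in range(n):
        x = f(x)
    return x

def next_level(f):
    """Given E_k, return E_{k+1}(x) = E_k applied E_k(x) times to x."""
    return lambda x: iterate(f, f(x), x)

def diagonal(E, x):
    """E'(x) = E_x(2): climb x levels of the tower, then evaluate at 2."""
    f = E
    for _ in range(x - 1):
        f = next_level(f)
    return f(2)

E = lambda x: x + 1            # even a trivial base grows fast under the tower
assert next_level(E)(3) == 7   # apply (+1) four times: 3 -> 7
assert diagonal(E, 2) == 5
```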
Is there a greatest growing function of this kind?