Suppose you want to solve a combinatorially hard problem. One option is to think hard about the problem and design an algorithm that guarantees a solution. A second option is to try a “mystery” algorithm that comes without any guarantee of success but in practice usually outperforms any of the algorithms you have designed. Is there a point at which you should shift your focus of study to the mystery algorithm? Does the mystery algorithm know something that you do not?

This talk explores the questions above for the case of hard feasibility problems that are efficiently solved by iterated maps generated by projections onto non-convex sets. There is a strong analogy with mechanics. When applied to infeasible instances, these iterated maps sample a probability density in a high-dimensional space, analogous to the attractor of a strongly mixing dynamical system. It is in this sense that the iterated map performs an exhaustive search. Feasible instances differ only in the existence of an attractive fixed point at some unknown location on the attractor. To improve the iterated-map algorithm, one can dramatically increase the rate of discovery of the solution fixed point by reducing the Kolmogorov-Sinai entropy of the attractor. These ideas will be illustrated with three applications: bit retrieval (phase retrieval on 2-valued sequences), Latin-square completion and graph coloring.
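As a minimal sketch of the kind of iterated map described above, the following code applies a difference map (with relaxation parameter β = 1) to a toy feasibility problem: find a point in the intersection of a non-convex set (the hypercube vertices {-1, +1}^n) and an affine set ({x : Ax = b}). The problem instance, the planted solution, and all parameter choices are illustrative assumptions, not taken from the talk.

```python
import numpy as np

# Toy feasibility problem (illustrative assumption): find x in {-1,+1}^n
# satisfying the linear system A x = b, where b is built from a planted
# solution so the instance is known to be feasible.
rng = np.random.default_rng(1)
n, m = 10, 7
planted = rng.choice([-1.0, 1.0], size=n)   # known feasible point
A = rng.standard_normal((m, n))
b = A @ planted
A_pinv = np.linalg.pinv(A)

def p_affine(x):
    # Euclidean projection onto the affine (convex) set {x : A x = b}.
    return x - A_pinv @ (A @ x - b)

def p_signs(x):
    # Euclidean projection onto the non-convex set {-1, +1}^n:
    # round each entry to the nearest sign.
    s = np.sign(x)
    s[s == 0] = 1.0
    return s

def difference_map(x, n_iter=2000, tol=1e-10):
    # Difference map with beta = 1:
    #   x <- x + P_signs(2 P_affine(x) - x) - P_affine(x).
    # A fixed point (dx = 0) certifies a feasible point, which is then
    # read off as P_signs(2 P_affine(x) - x).
    for _ in range(n_iter):
        pb = p_affine(x)
        pa = p_signs(2.0 * pb - x)
        dx = pa - pb
        if np.linalg.norm(dx) < tol:
            break
        x = x + dx
    return p_signs(2.0 * p_affine(x) - x)

sol = difference_map(rng.standard_normal(n))
print("candidate:", sol)
print("feasible:", np.allclose(A @ sol, b))
```

On a feasible instance like this one, the iterate is typically drawn to the fixed point within a few hundred iterations (a fresh random start may occasionally be needed); on an infeasible instance dx never vanishes and the iterate wanders chaotically over the attractor, which is the exhaustive-search behavior described above.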