In the last two chapters, we talked about computational complexity up till the early 1970s. Here, we'll add a new ingredient to our already simmering stew – something that was thrown in around the mid-1970s, and that now pervades complexity to such an extent that it's hard to imagine doing anything without it. This new ingredient is randomness.
Certainly, if you want to study quantum computing, then you first have to understand randomized computing. I mean, quantum amplitudes only become interesting when they exhibit some behavior that classical probabilities don't: contextuality, interference, entanglement (as opposed to correlation), etc. So we can't even begin to discuss quantum mechanics without first knowing what it is that we're comparing against.
Alright, so what is randomness? Well, that’s a profound philosophical question, but I’m a simpleminded person. So, you’ve got some probability p, which is a real number in the unit interval [0, 1]. That’s randomness.
But wasn’t it a big achievement when Kolmogorov put probability on an axiomatic basis in the 1930s? Yes, it was! But in this chapter, we’ll only care about probability distributions over finitely many events, so all the subtle questions of integrability, measurability, and so on won’t arise. In my view, probability theory is yet another example where mathematicians immediately go to infinite-dimensional spaces, in order to solve the problem of having a nontrivial problem to solve! And that’s fine – whatever floats your boat. I’m not criticizing that. But in theoretical computer science, we’ve already got our hands full with 2^n choices. We need 2^ℵ₀ choices like we need a hole in the head.
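To make the finite setting concrete, here is a minimal sketch (in Python, with illustrative event names and a made-up helper called sample) of what a probability distribution over finitely many events amounts to: a list of events with nonnegative weights summing to 1, together with a way of drawing one event according to those weights. Nothing measure-theoretic is needed.

    import random

    # A finite probability distribution: (event, probability) pairs,
    # with each probability a real number in [0, 1] and the total equal to 1.
    # The events and weights here are purely illustrative.
    distribution = [("heads", 0.5), ("tails", 0.5)]

    def sample(dist):
        """Draw one event according to its probability."""
        r = random.random()          # uniform draw from [0, 1)
        cumulative = 0.0
        for event, p in dist:
            cumulative += p
            if r < cumulative:
                return event
        return dist[-1][0]           # guard against floating-point rounding

    print(sample(distribution))

That really is all the probability theory this chapter will lean on: finitely many outcomes, each with a number in [0, 1] attached, and the numbers adding up to 1.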