Thursday, February 15, 2007

Conservation of dimension and randomness

Fix two integers 0 < n < m and consider the following seemingly unrelated questions:


(a) is there a function f:{0,1}^n -> {0,1}^m such that if its input X is uniformly random over the domain then f(X) is uniformly random over the range?

(b) is there a continuous bijection g:R^n -> R^m ?


The answer to both is negative. I'll give hand-wavy proof sketches and invite readers to fill in gaps (or point out logical lapses), or give pointers to clear, self-contained proofs. If (a) were true, we could arbitrarily amplify the randomness of a fixed-length input, obtaining lossless compression of arbitrarily long strings -- impossible by a simple counting argument. If (b) were true, such a g could be uniformly approximated by differentiable functions, whose local Jacobian would have to be an invertible linear map from R^n to R^m -- but no invertible linear map exists between spaces of different dimensions.
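The counting argument for (a) can be made concrete: whatever f one picks, its image has at most 2^n < 2^m points, so f(X) must miss some outputs entirely and cannot be uniform on {0,1}^m. A quick Python sketch (the particular random f below is just an illustrative choice -- the bound holds for every f):

```python
from itertools import product
import random

n, m = 3, 5  # any 0 < n < m works

# An arbitrary f: {0,1}^n -> {0,1}^m; here a random one for illustration.
random.seed(0)
domain = list(product([0, 1], repeat=n))
f = {x: tuple(random.randint(0, 1) for _ in range(m)) for x in domain}

# The image of f has at most 2^n points, but a uniform distribution on
# {0,1}^m puts mass 2^-m on each of 2^m points -- so f(X) misses outputs.
image = set(f.values())
assert len(image) <= 2**n < 2**m
print(f"image size: {len(image)} of {2**m} possible outputs")
```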

Now, what do the two have in common? The first one violates "conservation of randomness" -- we get more random bits than we started out with, for free. The second one violates "conservation of dimension" -- we locally span a space of a higher dimension than what we started out with.

Update: If you take Cosma's class, you'll see an apparent violation of conservation of randomness. Define the logistic map F:[0,1] -> [0,1] by F(x) = 4x(1-x). Let N:[0,1]->{0,1} be the nearest-integer function. Draw x_0 uniformly at random from [0,1] and consider the sequence x_0, x_1, x_2,..., where x_{i+1} = F(x_i). It's easy to show that the sequence y_i = N(x_i) consists of iid Bernoulli variables. So it seems that with a single random draw of x_0, we got an infinite sequence of independent random variables for free. What's going on here?
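The construction is easy to try numerically. A small Python sketch (floating-point roundoff corrupts a chaotic orbit after roughly 50 iterates in double precision, so this only illustrates the first few bits):

```python
import random

def F(x):
    # the logistic map
    return 4 * x * (1 - x)

def N(x):
    # nearest-integer function on [0,1]
    return 1 if x >= 0.5 else 0

random.seed(1)
x = random.random()   # a single uniform draw x_0 from [0,1]
bits = []
for _ in range(30):   # double precision limits how far we can trust the orbit
    bits.append(N(x))
    x = F(x)
print(bits)
```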

9 comments:

Aryeh said...

(b) should be false in general topological spaces -- one shouldn't have to resort to differentiation and linear structure.

Suresh Venkatasubramanian said...

Something tells me that there's something fishy about using the floor (or nearest-integer) function, but I don't know what. Using the floor function can be dangerous from a complexity-theoretic point of view, and it would be fascinating if essentially the same thing were going on here.

Aryeh said...

Suresh,

well, the nearest-integer function is perfectly legitimate. Do you agree that given a single uniformly random draw from [0,1] I can generate an infinite sequence of iid Bernoulli variables?

Suresh Venkatasubramanian said...

I guess I'm not too sure about that either. The first step seems fine to me, and I understand intuitively that the chaotic behaviour of the logistic map should destroy correlations, but I don't know how to prove it.

Aryeh said...

Very well -- since this particular part of the claim is legitimate, I'll give a proof of it sometime (but I must attend to some other urgent business right now). I suggest that you look over Cosma's notes (linked above) -- I think he either proves it or gives it as an exercise...

dileffante said...

Suresh, just think about what you are picking at the beginning: a number in [0,1] (not in {0,1}!). Not done? Well, say that you choose to represent it in binary.

Aryeh said...

dileffante is right on. Define the "binary shift" operator G:[0,1]->[0,1] as follows: if 0.b_1 b_2 b_3 ... is the unique nonterminating binary expansion of x, then G(x) is the real number with binary expansion 0.b_2 b_3 b_4 ... . It should be clear -- and is an easy exercise in first-year analysis -- that the sequence of variables N(x_0), N(G(x_0)), N(G(G(x_0))), ... is a Bernoulli process if x_0 is drawn uniformly from [0,1].
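Concretely, G(x) = 2x mod 1, and the sequence N(x_0), N(G(x_0)), ... just reads off the binary digits of x_0. A small Python sketch using exact rationals to sidestep floating-point roundoff (the choice x_0 = 1/3 is purely illustrative):

```python
from fractions import Fraction

def G(x):
    # binary shift: drop the leading binary digit (i.e., 2x mod 1)
    y = 2 * x
    return y - 1 if y >= 1 else y

def N(x):
    # nearest integer on [0,1]: equals the leading binary digit of x
    return 1 if x >= Fraction(1, 2) else 0

x = Fraction(1, 3)   # 0.010101... in binary
digits = []
for _ in range(6):
    digits.append(N(x))
    x = G(x)
print(digits)        # -> [0, 1, 0, 1, 0, 1]
```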

Other comments: the "paradox" (or rather, my obfuscatory trick) is more about information theory than probability theory.

Additionally, it's easy to construct continuous surjections from [0,1] to [0,1]^2 (e.g., Hilbert curve) as well as discontinuous bijections between [0,1] and [0,1]^2 (easy exercise).
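For the discontinuous bijection, the standard trick is digit interleaving: map the pair (0.a_1 a_2 ..., 0.b_1 b_2 ...) to 0.a_1 b_1 a_2 b_2 ... . With the nonterminating-expansion convention this needs a little bookkeeping on a measure-zero set of dual expansions to become a genuine bijection (the "easy exercise"); a finite-precision Python sketch of the idea:

```python
def interleave(a_bits, b_bits):
    # (a_1, a_2, ...), (b_1, b_2, ...) -> (a_1, b_1, a_2, b_2, ...)
    out = []
    for a, b in zip(a_bits, b_bits):
        out.extend([a, b])
    return out

def deinterleave(bits):
    # inverse map: even-indexed bits recover a, odd-indexed recover b
    return bits[0::2], bits[1::2]

a = [1, 0, 1, 1]
b = [0, 1, 1, 0]
z = interleave(a, b)
assert deinterleave(z) == (a, b)
print(z)             # -> [1, 0, 0, 1, 1, 1, 1, 0]
```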

Suresh Venkatasubramanian said...

so what's the answer ?

Aryeh said...

As dileffante and I tried to hint, the answer is that to write down a number in [0,1] to infinite precision (in binary, say) requires infinitely many bits. Thus, drawing a number uniformly from [0,1] is equivalent, in an information-theoretic sense, to flipping infinitely many fair coins.

The logistic map and the shift operator both somewhat obscure the point. What if I defined y_i to be the i'th bit of x_0 in binary? Is it clear that the y_i are a Bernoulli sequence (assuming x_0 ~ uniform[0,1])?

Shall I elaborate further or have we beaten this one to death?