Sunday, December 13, 2009
Whenever someone claims to have a "silver-bullet" argument on a controversial issue (climate change, abortion, etc.), it should set off a red alert. A "silver-bullet" argument is something like a mathematical proof, only about a social issue. It's the kind of argument that any sane, rational person can agree with -- regardless of his values, preferences, etc. Presumably, having heard such an argument, any sane, rational person can't help but agree with the conclusion.
So I just heard such an argument -- all 9.5 minutes of it. And I'm still not agreeing with the conclusion that drastic taxation and regulation are needed to stop global warming NOW. So... am I insane? Irrational? Or... could it be that the argument has holes?
Let's play a game. I don't care what your stance is on global warming science and policy. Just for fun, point out as many logical fallacies as you can in this vampire-slaying, inexorably compelling super-argument.
Wednesday, July 8, 2009
New paper up
Check it out.
UPDATE:
Here's another one.
I suppose I should say a word or two to get you to click. Both involve automata and probability, but in rather different ways. I know the suspense must be killing you by now -- all the answers are only a click away!
UPDATE II: That first paper has a mistake, discovered by Dana Angluin and Lev Reyzin. It has been pulled from submission. The mistake seems fixable and possibly already fixed. Developing...
UPDATE III: (Dec. 13, 2009) A new paper is up by Dana Angluin, David Eisenstat, Lev Reyzin and yours truly. Among other results, it shows that learning DFAs in the way I suggested (by sampling random ones and invoking AdaBoost) is impossible since one can embed parity in random DFAs.
Monday, June 1, 2009
A reverse Jensen inequality for exponentials
Let x_1, ... x_n be real numbers and t_1, ... t_n be nonnegative numbers summing to 1.
A trivial consequence of Jensen's inequality is that
exp(t_1 x_1 + ... + t_n x_n) <= t_1 exp(x_1) + ... + t_n exp(x_n).
I claim that in the other direction, we have the following:
t_1 exp(x_1) + ... + t_n exp(x_n) <= exp( diam(x) + t_1 x_1 + ... + t_n x_n)
where diam(x) = max_i x_i - min_i x_i is the diameter of {x_1, ..., x_n}.
Any proof ideas? (Unlike my previous goof-ups, I can actually prove this one.)
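A quick numerical sanity check, for what it's worth -- the Python snippet below draws random weights and points and verifies the claimed bound on each draw (a sanity check only, of course, not a proof):

import math
import random

# Check the claimed reverse-Jensen bound
#   t_1 exp(x_1) + ... + t_n exp(x_n) <= exp(diam(x) + t_1 x_1 + ... + t_n x_n)
# on random instances: random reals x_i and random nonnegative weights t_i summing to 1.
for _ in range(100000):
    n = random.randint(1, 10)
    x = [random.uniform(-5.0, 5.0) for _ in range(n)]
    w = [random.uniform(0.01, 1.0) for _ in range(n)]
    t = [wi / sum(w) for wi in w]
    lhs = sum(ti * math.exp(xi) for ti, xi in zip(t, x))
    rhs = math.exp(max(x) - min(x) + sum(ti * xi for ti, xi in zip(t, x)))
    assert lhs <= rhs * (1 + 1e-12), (t, x)
print("no counterexamples found")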
Sunday, May 17, 2009
concentration of exchangeable processes
Having thoroughly embarrassed myself in the previous post, let me shoot for something half-way redeeming.
Let X_1, X_2, ... be a sequence of {0,1}-valued random variables. We say that the process {X_i} is exchangeable if every finite-dimensional joint distribution is invariant under the permutation of indices. The de Finetti theorem characterizes the exchangeable processes as mixtures (possibly uncountable) of Bernoulli processes. Note that this is only true for infinite sequences -- there are well-known examples of finite exchangeable sequences that cannot be extended even by 1 (see Diaconis for a fascinating discussion).
Now exchangeable processes are in general not concentrated. Indeed, the measure P on {0,1}^n with P(00...0)=P(11...1)=1/2 (i.e., a mixture of two degenerate Bernoulli processes) is exchangeable but not concentrated.
However, I claim the following:
1. If P is a mixture of Bernoulli processes whose biases all lie in [a, b] for some 0 <= a <= b <= 1 (the bias of a Bernoulli process being P(X=1)), and f:{0,1}^n -> R is 1-Lipschitz w.r.t. the normalized Hamming metric, then we have
P(f - Ef > t) <= exp(-2n(t-d)^2) for all t > d = b - a.
Note that for b = a (so d = 0) this trivially recovers the ordinary McDiarmid bound.
2. If P only assigns positive probability to sequences with at most k ones, then we have
P(f - Ef > t) <= exp(-2n(t/(2k))^2) for any f:{0,1}^n -> R that is 1-Lipschitz w.r.t. the normalized Hamming metric.
The proofs are standard and apparently not worth writing up -- but let me know if you try them and get stuck. Have these appeared in the literature?
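For claim 1, here is a small Monte Carlo sanity check in Python (again, a sketch and not a proof). It uses f(x) = (x_1 + ... + x_n)/n, which is 1-Lipschitz w.r.t. the normalized Hamming metric, and takes P to be a 50/50 mixture of Bernoulli(a)^n and Bernoulli(b)^n; the particular n, a, b are arbitrary choices for illustration.

import math
import random

n, a, b = 200, 0.3, 0.5      # arbitrary parameters; the biases lie in [a, b]
d = b - a
trials = 20000

def sample_f():
    # draw a mixture component, then a Bernoulli(p)^n sequence, and return its mean
    p = a if random.random() < 0.5 else b
    return sum(random.random() < p for _ in range(n)) / n

samples = [sample_f() for _ in range(trials)]
Ef = (a + b) / 2             # exact, since E[mean] equals the bias and the mixture is 50/50
for t in (0.25, 0.30, 0.35): # claim 1 needs t > d = 0.2
    empirical = sum(s - Ef > t for s in samples) / trials
    bound = math.exp(-2 * n * (t - d) ** 2)
    print("t=%.2f  empirical=%.5f  bound=%.5f" % (t, empirical, bound))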
Monday, May 4, 2009
Tighter Hamming ball volume?
Let C(n,k) be the binomial coefficient "n choose k". We want to bound the sum of these, with k ranging from 0 to d. This is equivalent to bounding the volume of the n-dimensional Hamming ball of radius d. Denote this number by B(n,d).
A well-known bound (appearing, for example, in the Sauer-Shelah-VC lemma) is
B(n,d) <= (en/d)^d .
We are pretty sure it can be sharpened, for example by putting a factor of 1/2 in front of the right-hand side. (We actually think it can be sharpened by more than a constant factor.)
This has to be known. Anyone have a reference? Thanks in advance!
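Here is a quick numerical comparison in Python, just to see how much slack the bound has (the particular (n, d) pairs are arbitrary):

from math import comb, e

# Exact Hamming-ball volume B(n,d) = sum_{k=0}^{d} C(n,k) versus the
# classical bound (en/d)^d; the ratio column shows how loose the bound is.
for n, d in [(20, 3), (50, 5), (100, 10), (1000, 30)]:
    B = sum(comb(n, k) for k in range(d + 1))
    bound = (e * n / d) ** d
    print("n=%4d  d=%3d  B=%.3e  (en/d)^d=%.3e  ratio=%.3e" % (n, d, B, bound, B / bound))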
Friday, April 24, 2009
2 inequalities
1.
Let mu and nu be two Bernoulli measures on {0,1}, with biases p and q, respectively. Thus,
mu({1})=p=1-mu({0})
nu({1})=q=1-nu({0})
Let P and Q be the n-fold products of mu and nu, respectively. Thus, P and Q are two probability measures on {0,1}^n.
Show that
||P-Q|| <= n|p-q|
where ||.|| is the total variation norm (and <= is a poor man's way of writing \leq in plain text).
2.
Now assume that q = 1/2; thus Q is the uniform measure on {0,1}^n.
Show that
||P-Q|| <= C n^(1/2) |p-1/2|
where C>0 is some fixed universal constant. (Hint: Pinsker's inequality might come in handy.)
Are these novel or already known?
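A numerical sanity check in Python, for what it's worth. It takes ||P-Q|| to be the total variation distance sup_A |P(A)-Q(A)| = (1/2) sum_x |P(x)-Q(x)| (that convention is my reading of the statement, since with the full l1 norm the first inequality already fails at n=1), and, purely as a guess, tries C = sqrt(2) in the second inequality:

from math import comb, sqrt

# Exact total variation distance between the n-fold products of Bernoulli(p)
# and Bernoulli(q): P(x) and Q(x) depend only on the number of ones in x,
# so the sum over {0,1}^n can be grouped by that count.
def tv_product_bernoulli(n, p, q):
    return 0.5 * sum(
        comb(n, k) * abs(p ** k * (1 - p) ** (n - k) - q ** k * (1 - q) ** (n - k))
        for k in range(n + 1)
    )

for n in (5, 20, 100):
    p, q = 0.40, 0.45        # inequality 1: ||P-Q|| <= n|p-q|
    print("n=%3d  TV=%.4f  n|p-q|=%.4f" % (n, tv_product_bernoulli(n, p, q), n * abs(p - q)))
    p = 0.55                 # inequality 2 with q = 1/2, trying C = sqrt(2)
    print("n=%3d  TV=%.4f  sqrt(2n)|p-1/2|=%.4f"
          % (n, tv_product_bernoulli(n, p, 0.5), sqrt(2 * n) * abs(p - 0.5)))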