Littlewood–Offord problem

In the mathematical field of combinatorial geometry, the Littlewood–Offord problem is the problem of determining the number of subsums of a set of vectors that fall in a given convex set. More formally, if V is a vector space of dimension d, the problem is to determine, given a finite subset of vectors S and a convex subset A, the number of subsets of S whose sum lies in A. The first upper bound for this problem was proven (for d = 1 and d = 2) in 1938 by John Edensor Littlewood and A. Cyril Offord.[1] This Littlewood–Offord lemma states that if S is a set of n real or complex numbers of absolute value at least one and A is any disc of radius one, then no more than $\frac{c \log n}{\sqrt{n}} 2^n$ of the $2^n$ possible subsums of S fall into the disc. In 1945, Paul Erdős improved the upper bound for d = 1 to

$$\binom{n}{\lfloor n/2 \rfloor} \approx 2^n \frac{1}{\sqrt{n}}$$

using Sperner's theorem.[2] This bound is sharp; equality is attained when all the vectors in S are equal, since the $\binom{n}{\lfloor n/2 \rfloor}$ subsets of size $\lfloor n/2 \rfloor$ then share a single sum. In 1966, Daniel Kleitman showed that the same bound holds for complex numbers, and in 1970 he extended it to the setting in which V is an arbitrary normed space.[2]
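The extremal case can be checked by direct enumeration. Below is a minimal brute-force sketch in Python (the function name and the choice n = 12 are ours, purely for illustration): for a set of n equal numbers, the most frequent subset sum occurs exactly $\binom{n}{\lfloor n/2 \rfloor}$ times.

```python
import math
from collections import Counter
from itertools import combinations

def max_subsum_multiplicity(values):
    """Largest number of subsets of `values` sharing one subset sum."""
    counts = Counter()
    for r in range(len(values) + 1):
        for subset in combinations(values, r):
            counts[sum(subset)] += 1
    return max(counts.values())

# Extremal configuration: all n numbers equal.  The binom(n, n//2) subsets
# of size n//2 share a single sum, so Erdos's bound is attained exactly.
n = 12
assert max_subsum_multiplicity([1] * n) == math.comb(n, n // 2)
```

For any values of absolute value at least one, Erdős's theorem guarantees that this multiplicity never exceeds $\binom{n}{\lfloor n/2 \rfloor}$; the all-equal configuration is what makes the bound tight.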

Suppose $S = \{v_1, \ldots, v_n\}$. By subtracting

$$\frac{1}{2} \sum_{i=1}^{n} v_i$$

from each possible subsum (that is, by changing the origin and then scaling by a factor of 2), the Littlewood–Offord problem is equivalent to the problem of determining the number of sums of the form

$$\sum_{i=1}^{n} \varepsilon_i v_i$$

that fall in the target set A, where $\varepsilon_i$ takes the value $1$ or $-1$. This makes the problem a probabilistic one, in which the question is how these random sums are distributed and what can be said knowing nothing more about the $v_i$.
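Both the change of variables and Erdős's bound, which in this signed form says that no open interval of length 2 contains more than $\binom{n}{\lfloor n/2 \rfloor}$ of the $2^n$ sums, can be spot-checked numerically. The following Python sketch (the instance, seed, and names are ours) verifies the correspondence on a random example:

```python
import math
import random
from itertools import product

def signed_sums(values):
    """All 2^n sums of the form sum(eps_i * v_i) with eps_i in {-1, +1}."""
    return [sum(e * x for e, x in zip(eps, values))
            for eps in product((-1, 1), repeat=len(values))]

random.seed(0)
n = 10
v = [random.uniform(1.0, 3.0) for _ in range(n)]  # arbitrary instance with |v_i| >= 1
total = sum(v)

# A subset T corresponds to the sign pattern eps_i = +1 iff i is in T;
# recentering by total/2 and doubling turns each subsum into a signed sum:
#   2 * (sum_{i in T} v_i - total/2) = sum_i eps_i * v_i
subsums = [sum(c * x for c, x in zip(chi, v)) for chi in product((0, 1), repeat=n)]
lhs = sorted(2 * (s - total / 2) for s in subsums)
rhs = sorted(signed_sums(v))
assert all(math.isclose(a, b) for a, b in zip(lhs, rhs))

# Erdos's bound in the signed formulation: no open interval of length 2
# contains more than binom(n, n//2) of the signed sums.  Testing intervals
# centered at each attained sum serves as a spot check.
sums = signed_sums(v)
worst = max(sum(t - 1 < s < t + 1 for s in sums) for t in sums)
assert worst <= math.comb(n, n // 2)
```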

References

  1. Littlewood, J. E.; Offord, A. C. (1943). "On the number of real roots of a random algebraic equation (III)". Rec. Math. (Mat. Sbornik), Nouvelle Série. 12 (54): 277–286.
  2. Bollobás, Béla (1986). Combinatorics. Cambridge: Cambridge University Press. ISBN 0-521-33703-8.