Tail Bounds for Occupancy Problems

Paul Spirakis

Computer Technology Institute, Patras University (Greece)

Algorithms Seminar

November 20, 2000

[summary by Stéphane Boucheron]

Abstract
The talk was based on [9] and consisted of a presentation of various tail bounds for occupancy problems, together with applications to the determination of the conjectured satisfiability threshold for the random k-sat problem.



1  Bins and Balls and Occupancy Problems

In bins and balls games, m balls are placed independently and uniformly at random among n bins. Henceforth, a generic allocation will be denoted by $\omega \in \{1,\dots,n\}^m$: $\omega_k = j$ if the $k$-th ball is located in the $j$-th bin. Let $X_n(\omega,m)$ denote the number of empty bins when m balls have been assigned a position. The piecewise constant interpolation is defined by $X_n(\omega,t) = X_n(\omega, \lfloor tn \rfloor)$. To alleviate notations, we omit $\omega$ when this is not a source of confusion. The behavior of the process $X_n(\cdot)$ as n becomes large has been the subject of many investigations in random combinatorics. The lecture is concerned with different derivations of tail bounds for $X_n(\cdot)$ and their application to the analysis of the threshold phenomenon for the (random) k-satisfiability problem.
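As a small illustration (our own sketch, not part of the talk), the law of $X_n$ can be explored by direct simulation; the helper name `empty_bins` is ours, and the exact mean $n(1-1/n)^m \approx n e^{-t}$ follows from linearity of expectation:

```python
import random

def empty_bins(n, m, rng):
    """Throw m balls uniformly at random into n bins; return the empty-bin count."""
    occupied = set(rng.randrange(n) for _ in range(m))
    return n - len(occupied)

rng = random.Random(0)
n, t = 1000, 1.0
m = int(t * n)
trials = 2000
mean = sum(empty_bins(n, m, rng) for _ in range(trials)) / trials
exact = n * (1 - 1 / n) ** m   # exact mean; close to n * exp(-t) for large n
print(mean, exact)
```

The empirical mean concentrates sharply around $n e^{-t}$, which is exactly the concentration phenomenon the tail bounds below quantify.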

1.1  Approaches to random allocations

There are many approaches to random allocation problems. Many early successes of analytic combinatorics have been reported in the monograph by Kolchin, Sevast'yanov and Chistyakov [11].

Probabilistic (martingale-theoretic) approaches have been successful as well. Let $\mathcal{F}_t$ denote the $\sigma$-algebra generated by the first $\lfloor nt \rfloor$ allocations (we do not mention n, to alleviate notations). Then it is straightforward to check the relation
\[
\mathbb{E}\left[ X_n\!\left(t+\frac{1}{n}\right) \,\Big|\, \mathcal{F}_t \right] = \left(1-\frac{1}{n}\right) X_n(t).
\]
From this, one immediately deduces that $(1-1/n)^{-\lfloor nt\rfloor} X_n(t)$ is an $\mathcal{F}_t$-martingale. Moreover it has bounded increments, and its quadratic variation process converges in probability towards $t \mapsto e^t - (1+t)$. Applying martingale limit theorems [8], one easily deduces a central limit theorem for $X_n(\cdot)$. Unfortunately, results on convergence in distribution tell little about the asymptotic probability of rare events: the convergence rate cannot be better than $O(1/\sqrt{n})$, and probabilities of rare events are especially relevant to the analysis of extreme values that often constitute the core of applications. Nevertheless, central limit theorems suggest that the tail probabilities of the empty-cell statistics might be Gaussian-like. In computer science, sharp upper bounds on tail probabilities are often desirable.
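The one-step relation behind this martingale can be checked numerically (our own sketch; variable names are ours): starting from any fixed allocation, the expected empty-bin count after one extra ball is exactly $(1-1/n)\,X_n(t)$, since the new ball kills an empty bin with probability $X_n(t)/n$.

```python
import random

rng = random.Random(1)
n = 500
# One fixed allocation of m = n balls (i.e., t = 1).
occupied = set(rng.randrange(n) for _ in range(n))
x_t = n - len(occupied)                 # X_n(t) for this allocation

# Throw one extra ball many times; average the resulting empty-bin count.
trials = 200000
total = 0
for _ in range(trials):
    ball = rng.randrange(n)
    total += x_t - (0 if ball in occupied else 1)
avg = total / trials

# The one-step relation predicts E[X_n(t + 1/n) | F_t] = (1 - 1/n) X_n(t).
predicted = (1 - 1 / n) * x_t
print(avg, predicted)
```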

If instead of throwing a fixed number $\lfloor nt \rfloor$ of balls into the n bins, one first draws N according to a Poisson distribution with parameter $nt$, and then throws N balls into the n bins, the indicators of bin emptiness become independent Bernoulli random variables with success probability $\exp(-t)$: $X_n(t)$ is now distributed as a binomial random variable with parameters n and $\exp(-t)$. Let P denote the original probability distribution on allocations and let Q denote this alternate probability distribution on N and allocations. Note that conditionally on $N=\lfloor nt\rfloor$, the distributions of $X_n(t)$ under P and Q are identical (the multinomial distribution is a conditioned Poisson process). Then
\[
P\{X_n(t)\in A\} = Q\{X_n(t)\in A \mid N=\lfloor nt\rfloor\}
\le \frac{Q\{X_n(t)\in A\}}{Q\{N=\lfloor nt\rfloor\}}
\le (2\pi nt)^{1/2}\, Q\{X_n(t)\in A\}. \quad (1)
\]
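As a sanity check on this Poissonization device (our own illustration, not from [9]), one can sample $N \sim \mathrm{Poisson}(nt)$, throw the balls, and compare the empty-bin count with the Binomial$(n, e^{-t})$ mean and variance; the helper names `poisson` and `poissonized_empty` are ours:

```python
import math, random

def poisson(lam, rng):
    """Sample Poisson(lam) by counting unit-rate exponential arrivals before lam."""
    k, acc = 0, rng.expovariate(1.0)
    while acc < lam:
        k += 1
        acc += rng.expovariate(1.0)
    return k

def poissonized_empty(n, t, rng):
    """Throw N ~ Poisson(nt) balls u.a.r. into n bins; return the empty-bin count."""
    occupied = set(rng.randrange(n) for _ in range(poisson(n * t, rng)))
    return n - len(occupied)

rng = random.Random(2)
n, t, trials = 300, 1.0, 4000
counts = [poissonized_empty(n, t, rng) for _ in range(trials)]
mean = sum(counts) / trials
var = sum((c - mean) ** 2 for c in counts) / trials
p = math.exp(-t)
print(mean, n * p)            # binomial mean n * e^{-t}
print(var, n * p * (1 - p))   # binomial variance
```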
Inequality (1) provides an easy tail upper bound for rare events under Q, i.e., for large deviations of $X_n(t)$ around its expectation. If $A=\{\,\omega \mid X_n(\omega,t) > n e^{-t} + n\varepsilon\,\}$, then
\[
P\{X_n(t)\in A\} \le (2\pi n)^{1/2} \exp\bigl( -n\, h( e^{-t}+\varepsilon,\, e^{-t}) \bigr),
\]
where $h(x,y) = x\log(x/y) + (1-x)\log\bigl((1-x)/(1-y)\bigr)$. This obviously raises two questions: Is the order of the exponent correct? Can we get rid of the $\sqrt{n}$ factor?
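For concreteness (our own numerical sketch), the relative entropy $h$ and the resulting bound can be evaluated directly; the function name `h` mirrors the definition above, and the parameters are arbitrary choices of ours:

```python
import math

def h(x, y):
    """Relative entropy between Bernoulli(x) and Bernoulli(y) distributions."""
    return x * math.log(x / y) + (1 - x) * math.log((1 - x) / (1 - y))

# Evaluate the Poissonization bound for a concrete (n, t, eps).
n, t, eps = 1000, 1.0, 0.05
p = math.exp(-t)
bound = math.sqrt(2 * math.pi * n) * math.exp(-n * h(p + eps, p))
print(bound)
```

Even with the spurious $\sqrt{2\pi n}$ prefactor, the exponential term dominates for moderate n, so the bound is already nontrivial.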

1.2  Known results

As allocations are performed independently, a very straightforward yet useful bound comes from the Azuma--McDiarmid inequality. Namely, note that if $\omega$ and $\omega'$ are two allocation schemes that differ in at most one position ($\omega_j=\omega'_j$ for all $j\le \lfloor tn\rfloor$ except for $j=i$), then $|X_n(\omega,t)-X_n(\omega',t)|\le 1$. As a matter of fact, if the space of allocations is equipped with the Hamming distance, the empty-bin statistics is 1-Lipschitz. This implies that
\[
P\bigl\{ \bigl| X_n(t) - \mathbb{E}[X_n(t)] \bigr| > n\varepsilon \bigr\}
\le 2\exp\left( -\frac{2 n \varepsilon^2}{t} \right). \quad (2)
\]
Inequality (2) is obtained by a martingale embedding argument. Namely, $X_n(t)=\mathbb{E}[X_n(t)\mid\mathcal{F}_t]$ and the process $M_n(s)=\mathbb{E}[X_n(t)\mid\mathcal{F}_s]$ is an $\mathcal{F}_s$-martingale, as
\[
\mathbb{E}[M_n(s+h)\mid \mathcal{F}_s]
= \mathbb{E}\bigl[\mathbb{E}[X_n(t)\mid \mathcal{F}_{s+h}]\mid \mathcal{F}_s\bigr]
= \mathbb{E}[X_n(t)\mid \mathcal{F}_s]
= M_n(s).
\]
One may wonder what the best way to apply Azuma's inequality is.

1.3  Painless tail bounds

The first bound presented in [9] is:
\[
P\bigl\{ \bigl| X_n(t) - \mathbb{E}[X_n(t)] \bigr| > n\varepsilon \bigr\}
\le 2\exp\left( -\frac{(n-1/2)\, n^2\varepsilon^2}{n^2 - \mathbb{E}[X_n(t)]^2} \right). \quad (3)
\]
When n becomes large, the exponent on the right-hand side is equivalent to
\[
-\frac{n\varepsilon^2}{1-e^{-2t}}.
\]

The trivial Poisson estimate (1) clearly shows that this exponent is rather poor as soon as t becomes non-negligible. This is not a denial of the merits of the martingale approach. Indeed, this method provides nearly optimal bounds for smooth Gaussian functionals and for many discrete problems. The apparent flaw in Inequality (3) comes from the fact that we did not use tight enough bounds on the quadratic variation process associated with $\mathbb{E}[X_n(t)\mid\mathcal{F}_s]$.

Next the authors of [9] proceed to establish what they call a Chernoff bound for the occupancy problem. It shows that the Poisson tail estimate (1) holds even without resorting to a conditioning argument, i.e., that the $\sqrt{n}$ factor is spurious.

2  The Large Deviation Approach

The large deviation approach (see [2, 5, 7] for recent presentations) aims at identifying the right exponents for tail probabilities. It provides the right touchstone for the occupancy problem. Rather than using the martingale structure of the occupancy problem, the large deviation approach relies on its Markovian structure: conditionally on $X_n(t)$, $X_n(t+1/n)$ does not depend on $\mathcal{F}_{t-1/n}$. The large deviation principle invoked in [9] comes from a contraction of a functional large deviation principle derived by Azencott and Ruget. The latter shows that asymptotically, the exponent in large deviation probabilities can be represented as the solution of a variational problem, namely
\[
\lim_{n\to\infty} \frac{1}{n} \log P\{ X_n(t) \le nx \}
= - \inf_{x(0)=1,\; x(t)=x} \int_0^t h\bigl( -\dot{x}(s),\, x(s) \bigr)\, ds. \quad (4)
\]
The article [9] solves the associated variational problem and provides a closed form for the exponent, confirming the intuition that the exponent obtained by Poissonization is not optimal.

3  Satisfiability Problems

The second part of the paper presents an application of tail bounds for occupancy problems to the analysis of the random 3-sat problem. An instance of the 3-sat problem is a boolean formula in conjunctive normal form where each clause has at most 3 literals. For each number n of variables and each number k of clauses, the set of instances of the 3-sat problem is endowed with the uniform probability over the k-tuples of 3-clauses over the n variables. At the time of writing [9], it was conjectured that as n goes to infinity while k/n remains constant, a phase transition occurs: for $k/n < c_3$, random 3-sat formulas are satisfiable with overwhelming probability, while for $k/n > c_3$ they are unsatisfiable with overwhelming probability.

The paper [9] proposes an upper bound on the conjectured satisfiability threshold: $c_3 \le 4.758$. This result came in a series of improvements starting from the straightforward $c_3 \le 5.19$, through $c_3 \le 5.08$ [6], $c_3 \le 4.64$ [3], $c_3 \le 4.601$ [10], and recently culminating with $c_3 \le 4.506$ [4].

In the sequel, n and k are supposed to be fixed. F denotes a random 3-sat formula, and $\#F$ denotes the number of assignments of the n boolean variables that satisfy F; F is satisfiable if $\#F \ge 1$. $T(F)$ equals 1 if F is satisfiable, 0 otherwise. Let s denote a generic truth assignment; $F(s)$ equals 1 if s satisfies F, 0 otherwise. Finally, $\mathbf{1}$ denotes the truth assignment where all variables are set to 1. Then, we have
\[
\mathbb{E}_F[T(F)]
= \mathbb{E}_F\left[ \sum_{s:\,F(s)=1} \frac{1}{\#F} \right]
= \sum_s \mathbb{E}_F\left[ \frac{F(s)}{\#F} \right]
= 2^n\, \mathbb{E}_F\left[ \frac{F(\mathbf{1})}{\#F} \right], \quad (5)
\]
where the second equality comes from the fact that the number of formulae that satisfy a particular truth assignment does not depend on the truth assignment. Hence, to get an upper bound on the probability of satisfiability, it is enough to get an upper bound on
\[
2^n \left( \frac{7}{8} \right)^{cn} \mathbb{E}_{F'}\left[ \frac{1}{\#F} \right],
\]
where F is now picked at random among the $\bigl(7\binom{n}{3}\bigr)^{cn}$ formulae that are satisfied by $\mathbf{1}$. This distribution on formulae is a product distribution where each clause is picked uniformly at random among the clauses in which at least one literal is not negated.

The main idea of the proof is to establish that conditionally on the fact that it is satisfiable, a 3-sat formula with sufficiently many clauses has exponentially many satisfying truth assignments with overwhelming probability.

What is proved in [9] is actually the following. Let $\#F_1$ denote the number of truth assignments s such that, for each clause in F, there exists a non-negated variable of the clause that evaluates to 1 under s. Obviously $1/\#F \le 1/\#F_1$. Now, to lower bound $\#F_1$, it is enough to determine a minimal family of variables $I(F)$ such that any truth assignment in which all variables of $I(F)$ evaluate to 1 satisfies the formula F ($I(F)$ is sometimes called a prime implicant of F). As a matter of fact, we have $\#F_1 \ge 2^{n-\#I}$, and hence
\[
P\{ F \text{ is satisfiable} \}
\le \left( \frac{7}{8} \right)^{cn} \mathbb{E}_{F'}\bigl[ 2^{\#I} \bigr]. \quad (6)
\]
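The counting argument above can be illustrated on a toy instance (our own sketch; the names `random_clause`, `satisfies`, and `implicant` are ours): draw a small random formula satisfied by $\mathbf{1}$, build an implicant I greedily by forcing one positive literal per clause to 1, and check by brute force that $\#F \ge 2^{n-\#I}$.

```python
import itertools, random

rng = random.Random(3)
n_vars, n_clauses = 10, 15

def random_clause(rng):
    """A 3-clause on distinct variables with at least one positive literal,
    so that the all-ones assignment satisfies it."""
    vars3 = rng.sample(range(1, n_vars + 1), 3)
    while True:
        signs = [rng.choice([1, -1]) for _ in range(3)]
        if 1 in signs:
            return [s * v for s, v in zip(signs, vars3)]

formula = [random_clause(rng) for _ in range(n_clauses)]

# Greedy implicant I: force one positive literal per clause to 1.
implicant = set()
for clause in formula:
    positives = [lit for lit in clause if lit > 0]
    if not any(lit in implicant for lit in positives):
        implicant.add(positives[0])

def satisfies(assign, formula):
    """assign[v-1] is the boolean value of variable v."""
    return all(any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
               for clause in formula)

# Every assignment setting all variables of I to 1 satisfies F,
# hence #F >= #F_1 >= 2^(n - #I).
count = sum(satisfies(bits, formula)
            for bits in itertools.product([False, True], repeat=n_vars))
lower = 2 ** (n_vars - len(implicant))
print(len(implicant), count, lower)
```

The greedy construction only certifies $\#I \le$ the number of clauses; the work in [9] is precisely to control the distribution of $\#I$ for random formulae.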

Since the publication of [9], improved upper bounds on $c_3$ have been derived by refining estimations of the fluctuations of $\#F$ for random formulae. Those estimations still rely on statistics of random allocations, but the empty-bin statistics are no longer sufficient. The best known upper bounds [4] rely on statistics that have sometimes been called empirical occupancy measures. As a matter of fact, an allocation $\omega$ defines a probability measure on $\mathbb{N}$: $\bar{X}_n(i,t)$ denotes the fraction of bins that contain i balls, for $i\in\mathbb{N}$. The large deviations of this measure-valued random variable may be studied in different ways: by resorting to Azencott--Ruget results and projective limit arguments [2], or directly as in [1].
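To make the empirical occupancy measure concrete (our own sketch, not part of the summarized results), one can tabulate the fractions $\bar{X}_n(i,t)$ of bins containing i balls and compare them with their Poisson(t) limits $e^{-t} t^i / i!$:

```python
import math, random
from collections import Counter

rng = random.Random(4)
n, t = 5000, 1.0
m = int(t * n)

bins = Counter(rng.randrange(n) for _ in range(m))   # balls per occupied bin
occupancy = Counter(bins.values())
occupancy[0] = n - len(bins)                         # bins receiving no ball

# Empirical occupancy measure vs. its Poisson(t) limit e^{-t} t^i / i!
for i in range(4):
    print(i, occupancy[i] / n, math.exp(-t) * t ** i / math.factorial(i))
```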

References

[1]
Boucheron (S.), Gamboa (F.), and Léonard (C.). -- Bins and balls: large deviations of the empirical occupancy process. -- Rapport de recherche du LRI n°1255, Université Paris-Sud, 2000.

[2]
Dembo (Amir) and Zeitouni (Ofer). -- Large deviations techniques and applications. -- Springer-Verlag, New York, 1998, second edition, xvi+396p.

[3]
Dubois (O.) and Boufkhad (Y.). -- A general upper bound for the satisfiability threshold of random r-SAT formulae. Journal of Algorithms, vol. 24, n°2, 1997, pp. 395--420.

[4]
Dubois (O.), Boufkhad (Y.), and Mandler (J.). -- Typical random 3-sat formulae and the satisfiability threshold. In Proceedings of SODA'2000. ACM, pp. 126--127. -- 2000.

[5]
Dupuis (Paul) and Ellis (Richard S.). -- A weak convergence approach to the theory of large deviations. -- John Wiley & Sons, New York, 1997, xviii+479p. A Wiley-Interscience Publication.

[6]
El Maftouhi (A.) and Fernandez de la Vega (W.). -- On random 3-sat. Combinatorics, Probability and Computing, vol. 4, n°3, 1995, pp. 189--195.

[7]
Feng (Jin) and Kurtz (Thomas G.). -- Large deviations for stochastic processes. -- 2000. 194 pages. Available from http://www.math.wisc.edu/~kurtz/feng/ldp.htm.

[8]
Hall (P.) and Heyde (C. C.). -- Martingale limit theory and its application. -- Academic Press, New York, 1980, xii+308p. Probability and Mathematical Statistics.

[9]
Kamath (Anil), Motwani (Rajeev), Palem (Krishna), and Spirakis (Paul). -- Tail bounds for occupancy and the satisfiability threshold conjecture. Random Structures & Algorithms, vol. 7, n°1, 1995, pp. 59--80.

[10]
Kirousis (Lefteris M.), Kranakis (Evangelos), Krizanc (Danny), and Stamatiou (Yannis C.). -- Approximating the unsatisfiability threshold of random formulas. Random Structures & Algorithms, vol. 12, n°3, 1998, pp. 253--269.

[11]
Kolchin (Valentin F.), Sevast'yanov (Boris A.), and Chistyakov (Vladimir P.). -- Random allocations. -- V. H. Winston & Sons, Washington, D.C., 1978, xi+262p. Translated from the Russian.
