Explicit Sufficient Invariants for an Interacting Particle System

Yoshiaki Itoh

Institute of Statistical Mathematics, Japan

Algorithms Seminar

May 17, 1999

[summary by Philippe Robert]



Abstract
We introduce a new class of interacting particle systems on a graph $G$. Suppose there are initially $N_i(0)$ particles at each vertex $i$ of $G$, and that the particles interact to form a Markov chain: at each instant two particles are chosen at random, and if these are at adjacent vertices of $G$, one particle jumps to the other particle's vertex, each with probability $1/2$. The process enters a death state after a finite time when all the particles are in some independent subset of the vertices of $G$, i.e., a set of vertices with no edge between any two of them. The problem is to find the distribution of the death state $\eta_i = N_i(\infty)$ as a function of the numbers $N_i(0)$.

We are able to obtain, for some special graphs, the limiting distribution of each $N_i$ if the total number of particles $N \to \infty$ in such a way that the fraction $N_i(0)/N = x_i$ at each vertex is held fixed as $N \to \infty$. In particular we can obtain the limit law for the graph $S_2$: ·---·---·, having 3 vertices and 2 edges.

This talk is based on a joint paper with Colin Mallows and Larry Shepp [2].

1   A Particle System

If $G$ is a connected graph, the following particle system is considered. Initially $n$ particles are distributed on the nodes of the graph; at each unit of time two particles are chosen at random, and if they are on neighboring nodes then one of the two particles jumps to the other one's vertex, each with probability $1/2$. This kind of model has various applications in genetics, voting models, and symbolic computation.

It is easily seen that if a vertex has no particles it remains empty and the process lives on the subgraph with this vertex removed. The process continues until all vertices are empty except those of an independent subset $J$ of $G$, i.e., a set of vertices no two of which are joined by an edge. At that time the process stops since no interaction can occur anymore. We denote by $N(t)=(N_i(t);\, i\in G)$ the vector of the numbers of particles on the vertices at time $t$, and by $\tau$ the first time the process reaches an independent set; $N(\tau)$ is the terminal state of the process.
Example 1  [The complete graph]   The singletons are obviously the only independent sets for this graph, and by symmetry the terminal distribution is easy to derive: $\Pr(N(\tau)=n\,\delta_i)=N_i(0)/n$, where $\delta_i$ denotes the vector with a $1$ at vertex $i$ and $0$ elsewhere.
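As an illustration only (not part of the talk), the following Python sketch simulates the discrete chain on an arbitrary graph given by an adjacency map; all the names are ad hoc. Running it repeatedly on the complete graph $K_3$ with initial counts $(3,2,1)$ gives empirical terminal frequencies close to $(1/2,1/3,1/6)$, as predicted by the formula above.

```python
import random

def vertex_of(counts, particle_index):
    """Map a particle index in 0..n-1 to the vertex currently holding it."""
    for vertex, c in enumerate(counts):
        if particle_index < c:
            return vertex
        particle_index -= c

def occupied_is_independent(counts, adjacency):
    """True when no two occupied vertices are adjacent (terminal condition)."""
    return all(counts[i] == 0 or counts[j] == 0
               for i in adjacency for j in adjacency[i])

def run_particle_system(adjacency, counts, rng=random):
    """Pick two particles uniformly at random; if they sit on adjacent vertices,
    one of them (chosen with probability 1/2) jumps to the other's vertex.
    Stops when the occupied vertices form an independent set."""
    counts = list(counts)
    n = sum(counts)
    while not occupied_is_independent(counts, adjacency):
        a, b = rng.sample(range(n), 2)
        u, v = vertex_of(counts, a), vertex_of(counts, b)
        if v in adjacency[u]:
            if rng.random() < 0.5:
                counts[u] -= 1; counts[v] += 1    # particle at u jumps to v
            else:
                counts[v] -= 1; counts[u] += 1    # particle at v jumps to u
    return counts

# complete graph K_3: every pair of distinct vertices is an edge
adjacency = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(run_particle_system(adjacency, [3, 2, 1]))  # all 6 particles end on one vertex
```
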
In general it is difficult to determine the distribution of the final state of the process. The method used here is to find functionals $f$ such that the relation
$$E\bigl( f(N(t)) \bigr) = E\bigl( f(N(0)) \bigr)$$
holds for all $t\ge 0$ and for the random time $t=\tau$. If there are sufficiently many such functions $f$ then the distribution of $N(\tau)$ may be derived. In a classical dynamical system such a function is an invariant of the motion. In a probabilistic context the corresponding notion is the martingale: a function $f$ is admissible if $(f(N(t)))$ is a martingale, i.e., for all $s\le t$,
$$E\bigl( f(N(t)) \mid \text{all events before time } s \bigr) = f(N(s)),$$
in particular $E(f(N(t)))=E(f(N(0)))$ for all $t\ge 0$.

In our case $\bigl(\sum_{i\in G} N_i(t)\bigr)$ is a trivial (constant!) martingale. Another example is the process $(N_i(t))$ for $i\in G$; it is also a martingale, hence $E(N_i(t))=N_i(0)$.
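For the discrete chain this can be checked in one step; the following computation is a sketch, with $\mathcal{F}_t$ the history of the chain up to time $t$ and $p_{ij}(t)$ the probability that the pair drawn at time $t+1$ consists of a particle at $i$ and one at an adjacent vertex $j$:
$$E\bigl( N_i(t+1)-N_i(t) \mid \mathcal{F}_t \bigr) = \sum_{j\sim i} p_{ij}(t)\Bigl( \tfrac12(+1) + \tfrac12(-1) \Bigr) = 0,$$
since on such a draw the jump goes towards $i$ or away from $i$ with probability $1/2$ each, and $N_i$ is unchanged on all other draws.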

For an admissible $f$, under mild integrability conditions we shall have $E\bigl(f(N(\tau))\bigr)=f(N(0))$; hence, if there is a rich class of such $f$, the distribution of $N(\tau)$ will be determined. It turns out that this discrete model does not seem to have sufficiently many admissible functions. The situation is somewhat simpler if one considers a continuous version $(X_i(t);\, i\in G)$ of the process; it is the solution of the stochastic differential equation
$$dX_i = \sum_{j\in \mathcal{N}_i} (X_i X_j)^{1/2}\, dB_{ij}, \qquad (1)$$
where $\mathcal{N}_i$ is the set of neighbors of $i\in G$. The $B_{ij}$, $i,j\in G$, are Brownian motions such that $B_{ij}=-B_{ji}$, so that if
$$\sum_{i\in G} X_i(t) = 1$$
for $t=0$ then this relation holds for all $t\ge 0$ (the ``number'' of particles is constant). Notice that if one of the coordinates is $0$ it remains $0$, as in the discrete model.
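Purely as an illustration, here is a minimal Euler–Maruyama discretization of equation (1) for the star $S_2$ (center $0$, leaves $1$ and $2$); the step size, the clipping at $0$, and the renormalization are ad hoc numerical devices, not part of the construction.

```python
import numpy as np

def simulate_star_sde(x0, dt=1e-4, t_max=10.0, seed=0):
    """Euler-Maruyama sketch of dX_i = sum_j sqrt(X_i X_j) dB_ij on the star S_2
    (edges 0-1 and 0-2, vertex 0 in the center)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)      # (X_0, X_1, X_2), nonnegative, summing to 1
    edges = [(0, 1), (0, 2)]
    for _ in range(int(t_max / dt)):
        for i, j in edges:
            db = rng.normal(0.0, np.sqrt(dt))           # increment of B_ij = -B_ji
            incr = np.sqrt(max(x[i] * x[j], 0.0)) * db
            x[i] += incr                                 # antisymmetric driving noise:
            x[j] -= incr                                 # the total mass is conserved
        x = np.clip(x, 0.0, None)      # crude guard against discretization overshoot
        x /= x.sum()
    return x

# start with one third of the mass on each vertex; near the terminal time the
# mass is either (almost) all at the center or split between the two leaves
print(simulate_star_sde([1/3, 1/3, 1/3]))
```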

2   Martingales of the Continuous Process

A star graph $S_r$ with $r+1$ vertices is considered; the vertex $0$ is the center.
Proposition 1   If $G=S_r$ and $a_1+\cdots+a_r=0$, the process $(P_n^a(X_1(t),\ldots,X_r(t)))$ is a martingale, where $P_n^a$ is the polynomial defined by
$$P_n^a(x_1,\ldots,x_r) = \sum_{\substack{i_1\ge 1,\ldots,\, i_r\ge 1 \\ i_1+\cdots+i_r=n}} \binom{n}{i} \binom{n-r}{i-1} \prod_{k=1}^{r} (a_k x_k)^{i_k},$$
with $i=(i_1,\ldots,i_r)$, $\binom{n}{i}=n!/(i_1!\cdots i_r!)$ and $\binom{n-r}{i-1}=(n-r)!/((i_1-1)!\cdots (i_r-1)!)$.
The proof is carried out with the help of Itô's formula. In this manner there is a sequence of martingales associated with the stochastic process $(X(t))$.
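For instance, for $r=2$ and $a_1=-a_2=a$ the first two polynomials are, written out explicitly (this small check is not in the summary):
$$P_2^a(x_1,x_2) = -2a^2\, x_1 x_2, \qquad P_3^a(x_1,x_2) = 3a^3\, x_1 x_2 (x_2 - x_1),$$
and for the first one Itô's formula gives $d(X_1 X_2) = X_1 \sqrt{X_2 X_0}\, dB_{20} + X_2 \sqrt{X_1 X_0}\, dB_{10}$, with no drift term since $B_{10}$ and $B_{20}$ are driven by different edges and have zero cross-variation.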

A similar situation occurs with the Brownian motion $(B(t))$: if $h_n$ is the $n$-th Hermite polynomial, then
$$\bigl(M_n(t)\bigr) = \Bigl( t^{n/2}\, h_n\bigl(B(t)/t^{1/2}\bigr) \Bigr)$$
is a martingale. The appropriate generating function of these processes gives the classical exponential martingale
$$\sum_{n=0}^{+\infty} \frac{c^n}{n!}\, M_n(t) = \exp\Bigl( c\, B(t) - \frac{c^2}{2}\, t \Bigr).$$
The martingales $(M_n(t))$ are not easy to use directly to obtain the distributions of functionals of Brownian motion. The above exponential martingale is simple, ``contains'' all the martingales $(M_n(t))$, and many distributions related to Brownian motion can be obtained directly from it (see Rogers and Williams [3]). In the following a similar method is used to get the terminal distribution of the process. As we shall see, the corresponding exponential martingale is not as simple as for Brownian motion, but it will give the desired distribution.
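As a sketch of where the exponential martingale comes from, recall the generating function of the (probabilists') Hermite polynomials:
$$\sum_{n\ge 0} \frac{c^n}{n!}\, h_n(x) = \exp\Bigl( cx - \frac{c^2}{2} \Bigr);$$
taking $x=B(t)/t^{1/2}$ and replacing $c$ by $c\,t^{1/2}$ turns $c^n h_n(x)/n!$ into $c^n M_n(t)/n!$ and the right-hand side into $\exp\bigl(cB(t)-c^2 t/2\bigr)$, which is the identity above. For example $h_2(x)=x^2-1$, so $M_2(t)=B(t)^2-t$.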

3   Star Network With Three Nodes

Since the expression of the polynomials $P_n^a$ is rather complicated for deriving results on the distribution of $(X(t))$, the simpler case $r=2$ is now considered. In this case the terminal states $(X_0(\tau),X_1(\tau),X_2(\tau))$ are $(1,0,0)$ or $(0,x,1-x)$, $x\in[0,1]$. Taking $a_1=-1$ and $a_2=1$ in the above proposition, we get that, for $n\ge 2$,
$$\bigl(Y_n(t)\bigr) = \Biggl( \sum_{i=1}^{n-1} \binom{n}{i} \binom{n-2}{i-1} (-1)^i\, X_1(t)^i\, X_2(t)^{n-i} \Biggr) \qquad (2)$$
is a martingale. The key result of this section is the following identity: for $|v|<1/4$ and $0\le x\le 1$,
$$\sum_{n\ge 2} \frac{v^n}{n} \sum_{i=1}^{n-1} \binom{n}{i} \binom{n-2}{i-1} (-1)^i\, x^i (1-x)^{n-i} = xv + \frac{1-v}{2} \Biggl( 1 - \Bigl( 1 + \frac{4xv}{(1-v)^2} \Bigr)^{1/2} \Biggr). \qquad (3)$$
This suggests summing the expressions (2) as follows:
$$Z_u(t) = \sum_{n\ge 2} \frac{u^n}{n}\, Y_n(t).$$
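A quick way to convince oneself of identity (3) is to compare power-series coefficients symbolically; the following sketch (using sympy, with an arbitrary truncation order) is only a sanity check, not a proof.

```python
import sympy as sp

x, v = sp.symbols('x v')
N = 6   # compare the coefficients of v^2, ..., v^N

# left-hand side of (3), truncated at order N in v
lhs = sum(v**n / n * sum(sp.binomial(n, i) * sp.binomial(n - 2, i - 1)
                         * (-1)**i * x**i * (1 - x)**(n - i)
                         for i in range(1, n))
          for n in range(2, N + 1))

# right-hand side of (3), expanded as a power series in v up to the same order
rhs = x*v + (1 - v)/2 * (1 - sp.sqrt(1 + 4*x*v/(1 - v)**2))
rhs_series = sp.series(rhs, v, 0, N + 1).removeO()

print(sp.simplify(sp.expand(lhs - rhs_series)))   # expected output: 0
```
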
The process $(Z_u(t))$ is also a martingale, the exponential martingale of $(X(t))$, and the representation (3) gives the following result. If $\mu$ is the terminal distribution of $X_1(\tau)$ with the initial state $X_i(0)=x_i$, $i=1,2$, then for $|u|<1/4$,
$$\int_0^1 \bigl( (1-u)^2 + 4ux \bigr)^{1/2}\, \mu(dx) = u(x_1+x_2-1) + \bigl( 1 + 2u(x_1-x_2) + u^2(x_1+x_2)^2 \bigr)^{1/2}.$$
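A sketch of how this identity arises (assuming that optional stopping may be applied to $(Z_u(t))$, which requires integrability conditions as above): at the terminal time either all the mass is at the center and $Z_u(\tau)=0$, or $(X_1,X_2)(\tau)=(x,1-x)$ and identity (3), with $v=u$, gives
$$Z_u(\tau) = xu + \frac{1-u}{2} - \frac12\bigl( (1-u)^2 + 4xu \bigr)^{1/2};$$
since each $Y_n$ is homogeneous of degree $n$ in $(X_1,X_2)$, identity (3) applied with $v=u(x_1+x_2)$ and $x$ replaced by $x_1/(x_1+x_2)$ also evaluates the initial value
$$Z_u(0) = u x_1 + \frac{1-u(x_1+x_2)}{2} - \frac12\bigl( (1-u(x_1+x_2))^2 + 4x_1 u \bigr)^{1/2},$$
and equating $E(Z_u(\tau))=Z_u(0)$, together with $E(X_1(\tau))=x_1$, rearranges into the displayed moment identity.
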
The problem is thus reduced to a kind of moment problem (by differentiating with respect to $u$ under the integral). Further analytic manipulations give the density $f$ of $\mu$ as
$$f(x) = \frac{d^2}{dx^2}\, \frac{2}{\pi} \int_0^x \frac{g(w)}{(x-w)^{1/2}}\, dw,$$
where $g$ is a complicated function with an explicit expression.

References

[1]
Itoh (Yoshiaki). -- Random collision models in oriented graphs. Journal of Applied Probability, vol. 16, n°1, 1979, pp. 36--44.

[2]
Itoh (Yoshiaki), Mallows (Colin), and Shepp (Larry). -- Explicit sufficient invariants for an interacting particle system. Journal of Applied Probability, vol. 35, n°3, 1998, pp. 633--641.

[3]
Rogers (L. C. G.) and Williams (David). -- Diffusions, Markov processes, and martingales. Vol. 2: Itô calculus. -- John Wiley & Sons Inc., New York, 1987, xiv+475p.
