Random Matrices and Queues in Series
Yuliy Baryshnikov
LAMA, Université de Versailles
Saint-Quentin-en-Yvelines (France)
Algorithms Seminar
December 11, 2000
[summary by Marianne Durand]
Abstract
Queues in series are defined as an infinite sequence of clients
queuing in front of an infinite sequence of servers, where each time a
client is served by a server, it immediately enters the next
queue. Simple questions about this model are very hard to solve
directly. This talk describes the centered and normalized law of
the departure of the kth client from the nth server, as n tends
to infinity while k remains bounded; this law is related to a
sequence of largest eigenvalues of random matrices. This relation
allows us to use the numerous asymptotic results known about the
spectra of random matrices to gain useful information about the
queuing processes.
1 Queues and Brownian Motions
Consider an infinite series of queues corresponding to servers, and an
infinity of jobs. At
first all the
jobs are in the first queue; when a job leaves server Q_{i}, it
immediately enters the queue of server Q_{i+1}. The
question asked is: when does the
ith job leave the jth server? This can be modeled by path weights
in an infinite matrix. Let w_{k,l} denote the time needed to process
the kth job on the lth server. The maximal
weight of a path from (0,0) to (i,j) in the matrix (w_{k,l}) is
denoted c(i,j); a path is
made of steps of size one in which only one coordinate increases. One then
observes that c(i,j) is equal to the time when the
ith job leaves the jth server. This equality reflects the
fact that server j can process job i only if it has already
processed job i-1 and if job i has left queue j-1.
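This equality amounts to the recurrence c(i,j) = max(c(i-1,j), c(i,j-1)) + w_{i,j}, which can be evaluated by dynamic programming. Below is a minimal sketch of this computation (the function name departure_times is ours, not from the talk):

```python
def departure_times(w):
    """Given an N x M matrix w of service times (w[i][j] = time to
    process job i on server j, 0-indexed), return the matrix c of
    departure times via the max-path recurrence
        c(i, j) = max(c(i-1, j), c(i, j-1)) + w(i, j)."""
    N, M = len(w), len(w[0])
    c = [[0.0] * M for _ in range(N)]
    for i in range(N):
        for j in range(M):
            prev = max(c[i - 1][j] if i > 0 else 0.0,
                       c[i][j - 1] if j > 0 else 0.0)
            c[i][j] = prev + w[i][j]
    return c

# Example: 3 jobs, 2 servers, unit service times; then c(i,j) = i + j + 1
# (0-indexed), i.e. jobs leave the last server at times 2, 3, 4.
w = [[1, 1], [1, 1], [1, 1]]
c = departure_times(w)  # c[2][1] == 4
```

The double loop is exactly the maximal-weight-path computation on the matrix (w_{k,l}).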
The problem of queues in series can thus be modeled by an infinite
matrix, where we assume from now on that the entries are independent
identically distributed random variables, with finite variance. For the
main theorem and for Section 3 the distribution is
assumed to be geometric with parameter q.
The aim of the talk [1] is to link the queue problem to the
distribution of the largest eigenvalues of a random Hermitian matrix
with an appropriate distribution.
An infinite matrix of weights is also a model for a physical problem, the
interacting particle process; see [7]. There we assume that
every integer corresponds to a site capable of containing
one particle,
and that initially
all the sites at negative positions are occupied. In this model the weight
w_{i,j} is the time taken by a particle to move from i to i+j.
A preliminary remark links this queue problem to Brownian
motion [3]. Given
(B_{k})_{k=1,2,...} independent standard Brownian motions, and
D_{k}^{(n)} = (c(k,n) - en)/(vn)^{1/2}, where e is the expectation of
w_{1,1} and v its variance, the following theorem holds:
Theorem 1
The processes D^{(n)}=(D_{k}^{(n)})_{k=1,2,...} converge in law as
n → ∞ to the stochastic process
D=(D_{k})_{k=1,2,...}
where
D_{k} = sup_{0=t_0<t_1<...<t_k=1} ∑_{i=1}^{k} (B_{i}(t_{i}) - B_{i}(t_{i-1})).
This can easily be seen by modeling a path from (0,0) to (k,N)
for large N by k long vertical lines, where the path uses the ith
vertical line from the time t_{i} to the time t_{i+1}.
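The discretized version of this variational formula is itself a maximal-path computation over a grid of Brownian increments, so one draw of D_k can be approximated with the same dynamic-programming idea. A sketch (the function name sample_D and the grid size n are our choices, not from the talk):

```python
import math
import random

def sample_D(k, n=500):
    """Approximate one sample of
        D_k = sup_{0=t_0<...<t_k=1} sum_{i=1}^{k} (B_i(t_i) - B_i(t_{i-1}))
    by discretizing the k Brownian motions on an n-step grid and
    maximizing over the switch times by dynamic programming."""
    dt = 1.0 / n
    # B[i][t] = value of the (i+1)st Brownian motion at grid time t/n.
    B = []
    for _ in range(k):
        path, s = [0.0], 0.0
        for _ in range(n):
            s += random.gauss(0.0, math.sqrt(dt))
            path.append(s)
        B.append(path)
    # f[t] = best partial sum achievable up to grid time t using the
    # motions seen so far, the path currently following the latest one.
    f = B[0][:]                        # one motion: f(t) = B_1(t)
    for i in range(1, k):
        run = -float("inf")
        new = []
        for t in range(n + 1):
            run = max(run, f[t] - B[i][t])   # best switch point s <= t
            new.append(run + B[i][t])
        f = new
    return f[n]
```

For k = 1 there is a single increment over all of [0,1], so sample_D(1, n) is just a standard Gaussian draw.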
We now introduce the Gaussian Unitary Ensemble (GUE) [8] as
the probability distribution on Hermitian matrices with
density ρ_{GUE}(H) = Z^{-1} e^{-tr(H²)/2}, where
Z is a normalizing constant equal to ∫ e^{-tr(H²)/2} dH. A useful property is that a
Hermitian matrix H is drawn from the GUE if its diagonal entries h_{ii} are
independent Gaussian random variables with mean 0 and variance 1, and the real
and imaginary parts Re(h_{ij}) and Im(h_{ij}), i < j, are independent Gaussian
random variables with mean 0 and variance 1/2. Given a matrix H, let
H_{k}=(h_{i,j})_{1≤ i,j≤ k} be the main minor of
size k of H and s_{k} the largest eigenvalue of H_{k}.
Theorem 2
The laws of the two processes s={s_{k}}, for H drawn from the
GUE, and D={D_{k}} coincide.
This theorem is proven in the next sections. We first exhibit a
bijection between a finite restriction of size M of the queue
problem and a subspace of N^{M(M+1)/2} via Young tableaux. The
second part of the proof is to relate this subspace of N^{M(M+1)/2} to the dominant eigenvalues of minors of the matrix H.
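Theorem 2 can be explored numerically: sample H from the GUE and read off the largest eigenvalues of its nested main minors. A NumPy sketch (the normalization H = (A + A*)/2, with A a standard complex Gaussian matrix, matches the density e^{-tr(H²)/2}; the function name is ours):

```python
import numpy as np

def minor_top_eigenvalues(M, rng=None):
    """Sample an M x M GUE matrix and return s_1, ..., s_M, where s_k is
    the largest eigenvalue of the k x k main minor H_k."""
    rng = rng or np.random.default_rng()
    A = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
    H = (A + A.conj().T) / 2        # Hermitian, density ~ exp(-tr(H^2)/2)
    # eigvalsh returns eigenvalues in ascending order; take the last one.
    return [float(np.linalg.eigvalsh(H[:k, :k])[-1]) for k in range(1, M + 1)]

s = minor_top_eigenvalues(5)
# By Cauchy interlacing, s_1 <= s_2 <= ... : the sequence is nondecreasing,
# matching the monotonicity of the departure times D_k.
```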
2 Combinatorics
The bijection between the weight matrix W and Young tableaux is a
generalization of the Robinson-Schensted-Knuth correspondence
(see [5]) between Young tableaux and permutations. The
matrix W of size N × M with coefficients the weights w_{i,j}
can be represented as a generalized permutation

a = ( i_1  i_2  ...  i_k )
    ( j_1  j_2  ...  j_k ),

where i_{l} ∈ N_{N}, the set of integers between 1 and N, j_{l} ∈ N_{M},
and j_{l} represents a(i_{l}). The integers i_{l} are not
necessarily distinct, which is why the permutation is said to be
generalized. The number of columns equal to (i, j) is
w_{i,j}. As the generalized permutation a is written in
lexicographically sorted order, the bijection is quite
obvious: given a matrix, the set of columns is well defined,
and sorting makes the image unique; conversely, given a
generalized permutation, one simply counts the number of
columns equal to (i, j) to reconstruct the matrix. Recall that
a Young diagram λ is a decreasing sequence
(λ_{1},λ_{2},...,λ_{r}) that can be represented as r
rows of boxes of lengths λ_{i}. A semistandard Young
tableau is a filling of the boxes by positive integers such that
the filling is weakly increasing along rows and strictly increasing
down columns. The Young diagram underlying a Young tableau P is called
the shape and is denoted by sh(P). The
Robinson-Schensted-Knuth (RSK) correspondence is a bijection between
the set of generalized permutations with k columns and the set of
pairs of semistandard Young tableaux (P,Q) having the same
shape λ with ∑_{i}λ_{i}=k. Combined with the previous
bijection between matrices and generalized permutations, we thus have
a bijection between matrices of size N × M and pairs of
semistandard Young tableaux (P,Q) of shape λ with
∑_{i}λ_{i}=k. We denote by (P(w),Q(w)) the Young
tableaux obtained by the RSK correspondence from a matrix w.
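The correspondence can be computed directly by row insertion. The sketch below (function name ours) computes the shape sh(P(w)) of a nonnegative integer matrix w by inserting, for each (i, j) in lexicographic order, w[i][j] copies of j into the insertion tableau:

```python
from bisect import bisect_right

def rsk_shape(w):
    """Shape of the tableau P(w) produced by the RSK correspondence,
    applied to the generalized permutation that has w[i][j] copies of
    the column (i+1, j+1)."""
    P = []                                    # insertion tableau, by rows
    for row_w in w:                           # i = 1, 2, ... in order
        for j, mult in enumerate(row_w):      # j = 1, 2, ... in order
            for _ in range(mult):
                x = j + 1
                r = 0
                while True:
                    if r == len(P):           # start a new row
                        P.append([x])
                        break
                    pos = bisect_right(P[r], x)    # first entry > x
                    if pos == len(P[r]):           # x goes at the end
                        P[r].append(x)
                        break
                    x, P[r][pos] = P[r][pos], x    # bump to the next row
                    r += 1
    return [len(row) for row in P]

# Example: w = [[1, 0], [0, 2]] has maximal path weight 3, and indeed
# rsk_shape(w) == [3].
```

The example illustrates the property stated below: the first part of the shape equals the maximal path weight in w.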
Some properties make this correspondence really
interesting.
We denote by W_{N,M,k} the set of matrices of size N × M whose
coefficients add up to k, in order to state a result on how the
distribution of the Young tableaux behaves under this correspondence.
Lemma 1
If the set W_{N,M,k} is given the uniform distribution, then the
distribution of P(w) for wÎ W_{N,M,k} given
sh(P(w))=l is
uniform on the semistandard Young tableaux of shape l.
Moreover the shape sh(P) encodes a few
characteristics of w: the length λ_{1} of the first row is the
maximal weight of the monotone paths from (0,0) to (M,N) in the
matrix w, because it is the length of the longest nondecreasing
subsequence of the second line of a. This is a direct
consequence of the RSK algorithm. It is then possible to consider the
Young tableau P as an embedding of several Young tableaux
P_{1},..., P_{M} where P_{M} equals P and P_{i} is obtained from
P_{i+1} by removing all boxes filled with i+1. A nice consequence
of the way the tableau P is built is that the sequence of the
lengths of the first rows of the embedded Young tableaux coincides
with the sequence of maximal path weights from (0,0) to (i,N) as i goes
from 1 to M. Thus we now focus on Young tableaux instead of the
weight matrix. We introduce a representation of the Young tableaux
that encodes the shape of the tableaux, and also the shape of all the
embedded tableaux inside. To describe a Young tableau P filled with
N_{M}, let x_{j}^{i} be the coordinate of the rightmost box filled
with a number at most i in the jth row of the
tableau. Equivalently, this is just the length of the jth row of the
tableau P_{i} defined by the embedding above. The elements
x_{j}^{i}, 1 ≤ i ≤ M, 1 ≤ j ≤ i, can be seen as a
triangular array of size M(M+1)/2. The image x of a
tableau P under this transformation has the property that its last line
is equal to λ=(λ_{1},...,λ_{M}), and that its first
column is equal to (c(k,N))_{k∈N_{M}}, the
maximal path weights from (0,0) to (k,N). This correspondence
is formalized in the following lemma.
Lemma 2
Let the Gelfand-Cetlin cone C_{GC} be the set of triangular arrays
(x_{j}^{i}) of size M(M+1)/2 such that x_{j-1}^{i} ≥
x_{j-1}^{i-1} ≥ x_{j}^{i} for 1 ≤ i ≤ M, 1 ≤ j ≤
i. Then the Young tableaux filled with N_{M} are in one-to-one
correspondence with the integer points of the Gelfand-Cetlin cone.
What we have now is a mapping from W_{N,M,k}, the set of
matrices of size
N × M whose coefficients add up to k, to the set
of integer points in the Gelfand-Cetlin cone. This mapping has the
property that
if W_{N,M,k} is given the uniform distribution, then the
distribution of x, given that its last line is equal to
λ=(λ_{1},...,λ_{M}), is uniform on the integer
points of the Gelfand-Cetlin cone.
3 Gaussian Unitary Ensemble
From now on we add the restriction that the
coefficients w_{i,j} are i.i.d. geometric with parameter q, that is,
P(w_{i,j}=k)=(1-q)q^{k}. All the results of the
previous section still hold in this context, as they were obtained in
full generality.
The aim of this section is to finish the proof of
Theorem 2. For
this we describe the distribution of x(w) by its distribution
conditioned upon values of its last line, and the distribution of its
last line.
This is then linked to the distribution of the limit processes D_{k}.
The probability that the RSK correspondence applied to a random matrix
w with i.i.d. geometric entries with parameter q yields a pair of
Young tableaux of shape λ=(λ_{1},...,λ_{M}) is

  P( sh(P(w)) = λ ) = Z^{-1} ∏_{1≤i<j≤M} (λ_{i}-λ_{j}-i+j)² ∏_{i=1}^{M} ( (λ_{i}+N-i)! / (λ_{i}+M-i)! ) q^{k}    (1)

where k=∑_{i}λ_{i} and Z=Z_{N,M,q} is a normalizing constant. The proof
of this formula can be found
in [4] and is based on the fact that there are ∏_{1≤i<j≤M}
(λ_{i}-λ_{j}-i+j)/(j-i) semistandard tableaux
of shape λ filled with N_{M}. The vector of centered and
normalized variables x_{i}=(λ_{i}-eN)/(vN)^{1/2} (with e
the mean and v the variance of w_{1,1}) is denoted x.
Plugging the Stirling approximation into Equation (1) yields
that for fixed q with 0<q<1, fixed M, and N →
∞, the distribution of x converges weakly to
Z^{-1} ∏_{i<j}(x_{i}-x_{j})² ∏_{i} e^{-x_{i}²/2} (Z is a
normalizing constant). This is the distribution of the vector of
ordered eigenvalues of a random matrix drawn from the GUE. This property
leads to the following theorem:
Theorem 3
The distribution of the sequence (D_{1},...,D_{M}) defined in
Theorem 1 is the distribution
of the first column of the random triangular array x of size M(M+1)/2
distributed
uniformly for a fixed
last line, and the distribution of its last line is the distribution of
the eigenvalues of matrices drawn from GUE [6].
The distribution of (D_{1},...,D_{M}) is the same as the distribution
of the first column of x(w), up to a proper normalization. If
the distribution of w is uniform, then the distribution of x,
given its last line, is uniform. Since the probability of getting a
Young tableau of shape λ, and thus an array x with last line
λ, is the same as the probability that the vector of
ordered eigenvalues of a random matrix drawn from the GUE equals
λ, Theorem 3 is proved.
The Gelfand-Cetlin polyhedron GC(λ) is defined as the subset of
C_{GC}
of arrays whose last line is equal to
λ=(λ_{1},...,λ_{M}).
Theorem 3 means that the distribution
of (D_{1},...,D_{M}) is uniform on GC(l).
This allows us to state the theorem below, which is a major step in
the proof of Theorem 2.
Theorem 4
Let H=(h_{ij}), 1 ≤ i, j ≤ M, be a random matrix drawn from the GUE
with eigenvalues (λ_{1},...,λ_{M}), and let

x(H) = ( x_{1}^{1}                                 )
       ( ...                                       )
       ( x_{1}^{M-1}  ...            x_{M-1}^{M-1} )
       ( x_{1}^{M}    x_{2}^{M}  ...  x_{M}^{M}    )

where x_{j}^{i} is the jth eigenvalue of the main minor of size i
of H.
Then the triangular array x(H) is uniformly distributed in the
polyhedron GC(λ).
The proof of this theorem is based on the fact that
the last line of the array x corresponds to the eigenvalues of the
matrix H, which is drawn from GUE, and that given this last line, the
distribution is uniform.
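The fact that x(H) lies in GC(λ), i.e. that the eigenvalues of successive main minors interlace (Cauchy's interlacing theorem), can be checked numerically. A NumPy sketch (the function name gc_array is ours):

```python
import numpy as np

def gc_array(H):
    """Triangular array x(H): its ith line lists, in decreasing order,
    the eigenvalues of the i x i main minor of the Hermitian matrix H."""
    M = H.shape[0]
    return [sorted(np.linalg.eigvalsh(H[:i, :i]).tolist(), reverse=True)
            for i in range(1, M + 1)]

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2           # a GUE sample
x = gc_array(H)
# Interlacing: each line of the array is squeezed between consecutive
# entries of the next line, so x(H) lies in the Gelfand-Cetlin cone.
eps = 1e-9
for i in range(1, len(x)):
    assert all(x[i][j] + eps >= x[i - 1][j] >= x[i][j + 1] - eps
               for j in range(i))
```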
The last line of x is equal to l
and its first column to
the vector (s_{k}) by definition. This together with
Theorems 3 and 4 proves Theorem 2.
Theorem 2 can be used to prove the conjecture of Peter
Glynn
and Ward Whitt [3] stating that D_{k}/k^{1/2} → 2 as k → ∞. The
proof uses the already known fact that the largest eigenvalue of a
k × k random Hermitian matrix drawn from the GUE, rescaled by k^{1/2},
converges in distribution to 2; see [2].
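This convergence is easy to observe numerically: for a k × k GUE matrix (sampled with the e^{-tr(H²)/2} normalization used above), the largest eigenvalue divided by k^{1/2} is already close to 2 for moderate k. A NumPy sketch:

```python
import numpy as np

# Largest eigenvalue of a k x k GUE matrix, rescaled by sqrt(k).
k = 200
rng = np.random.default_rng(1)
A = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
H = (A + A.conj().T) / 2
ratio = float(np.linalg.eigvalsh(H)[-1] / np.sqrt(k))
# ratio is close to 2; the fluctuations around 2 are of order k^(-2/3)
# (Tracy-Widom limit law).
```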
References
[1] Baryshnikov (Yu.). GUEs and queues. Probability Theory and Related Fields, vol. 119, n°2, 2001, pp. 256-274.
[2] Geman (Stuart). A limit theorem for the norm of random matrices. Annals of Probability, vol. 8, n°2, 1980, pp. 252-261.
[3] Glynn (Peter W.) and Whitt (Ward). Departures from many queues in series. Annals of Applied Probability, vol. 1, n°4, 1991, pp. 546-572.
[4] Johansson (Kurt). Shape fluctuations and random matrices. Communications in Mathematical Physics, vol. 209, n°2, 2000, pp. 437-476.
[5] Knuth (Donald E.). Permutations, matrices, and generalized Young tableaux. Pacific Journal of Mathematics, vol. 34, 1970, pp. 709-727.
[6] Kuperberg (Greg). Random words, quantum statistics, central limits, random matrices, September 2000. To appear in Methods and Applications of Analysis. Available from http://front.math.ucdavis.edu/math.PR/9909104.
[7] Liggett (Thomas M.). Interacting particle systems. Springer-Verlag, New York, 1985, xv+488p.
[8] Mehta (M. L.). Random matrices and the statistical theory of energy levels. Academic Press, New York, 1967, x+259p.