Fast Multivariate Power Series Multiplication in Characteristic Zero

Grégoire Lecerf

Gage, École polytechnique (France)

Algorithms Seminar

June 11, 2001

[summary by Ludovic Meunier]

Let S be a multivariate power series ring over a field of characteristic zero. The article [5] presents an asymptotically fast algorithm for multiplying two elements of S truncated according to total degree. Up to logarithmic factors, the complexity of the algorithm is optimal, in the sense that it is linear in the size of the output.

1  Introduction

Let k be a field of characteristic zero. We write S = k[[x_1, ..., x_n]] for the multivariate power series ring in the n variables x_1, ..., x_n. Let I be any ideal of S. By computing at precision I in S, we understand computing modulo the ideal I in S. In other words, power series in S are regarded as vectors in the k-algebra S/I. We denote by m the maximal ideal (x_1, ..., x_n) of S and by d any positive integer. The paper [5] addresses the problem of a fast algorithm for multiplying two power series in S truncated in total degree d, that is, computed at precision m^{d+1}.

The general question of a fast algorithm for multivariate multiplication in S modulo any ideal remains an open problem and has received very little attention in the literature. Previous works (e.g., [2]) investigated computation modulo the ideal (x_1^{d+1}, ..., x_n^{d+1}), that is, truncation according to partial degree with respect to each variable x_i. The method used is called Kronecker's substitution and is briefly discussed in Section 3.

The need for multiplication routines modulo m^{d+1} arises in various fields, such as polynomial system solving [7] and the treatment of systems of partial differential equations.

The efficiency of the algorithm is measured with respect to the model of nonscalar complexity. By nonscalar complexity, we understand the number of primitive operations in the field k needed to complete the algorithm, independently of the sizes of the numbers involved (see [3]). We now introduce some notation. We denote by D = deg(m^{d+1}) the degree of the ideal m^{d+1}; D is the number of monomials of S which are not in m^{d+1}, that is, the dimension of the k-algebra S/m^{d+1}. Simple combinatorial considerations give

D = deg(m^{d+1}) = \binom{n+d}{n}.

We set C := \binom{n+d-1}{n-1}, the number of monomials of total degree at most d in the n-1 variables x_2, ..., x_n; this is the number of evaluation points used by the algorithm below. We denote by Mu(d) the complexity of the multiplication of two univariate polynomials of degree d in k[t].
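As a sanity check, the count D can be verified numerically; this is a small Python sketch (the explicit enumeration is ours, purely for cross-checking the binomial formula):

```python
from math import comb
from itertools import product

n, d = 3, 5
# D = deg(m^{d+1}): number of monomials of total degree <= d in n variables.
D = comb(n + d, n)
# Cross-check by explicit enumeration of exponent vectors.
count = sum(1 for e in product(range(d + 1), repeat=n) if sum(e) <= d)
print(D, count)  # 56 56
```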

The next section presents the algorithm; its complexity belongs to
O( D log^3 D loglog D ).     (1)
Since D is the size of the output, the algorithm is optimal, up to the logarithmic factors.

2  The Algorithm

2.1  Description

The first step of the algorithm consists in translating the multivariate problem into a univariate one. This is motivated by the fact that fast algorithms for univariate power series multiplication are known (e.g., [6]).

Let t be a new variable. We consider the substitution

R̃_t : S/m^{d+1} → k[x_1, ..., x_n][[t]]/(t^{d+1}),    f(x_1, ..., x_n) ↦ f(x_1 t, ..., x_n t).

If f is an element of S/m^{d+1}, then R̃_t(f) is a univariate power series in the single variable t truncated at degree d. It can then be written R̃_t(f) = f̃_0 + f̃_1 t + ... + f̃_d t^d, where each coefficient f̃_i is a homogeneous multivariate polynomial in the variables x_1, ..., x_n of total degree i. This remark on the degrees suggests that:
  1. the substitution R̃_t is optimal, in the sense that it provides us with a representation of f that retains exactly the monomials forming a basis of S/m^{d+1}. In particular, the algorithm does not suffer from any overhead caused by unnecessary terms (see Section 3);
  2. in view of the homogeneity of the f̃_i, keeping all of the variables x_i is redundant. The substitution defined by

     R_t : S/m^{d+1} → k[x_2, ..., x_n][[t]]/(t^{d+1}) = ( k[[t]]/(t^{d+1}) )[x_2, ..., x_n],    f(x_1, ..., x_n) ↦ f(t, x_2 t, ..., x_n t)

     reduces the cost of the evaluation-interpolation step (see below): n-1 variables, instead of n, are actually needed.
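To make the substitution concrete, here is a minimal Python sketch of R_t, assuming a hypothetical sparse representation of truncated series as dictionaries mapping exponent tuples to coefficients (this data layout is ours, not the paper's):

```python
def R_t(f, d):
    """Sketch of the substitution R_t: a truncated series f, given as
    {(e1,...,en): coeff} with total degree <= d, is mapped to the list of
    its t-coefficients. The coefficient of t^j collects the homogeneous
    part of degree j with x1 replaced by 1, so only the exponents of
    x2,...,xn are kept; x1's exponent is recoverable as j minus the rest."""
    coeffs = [dict() for _ in range(d + 1)]
    for exps, c in f.items():
        j = sum(exps)          # total degree = power of t after substitution
        if j <= d:
            key = exps[1:]     # drop x1: it only carried the missing degree
            coeffs[j][key] = coeffs[j].get(key, 0) + c
    return coeffs

# f = 1 + x1 + x2 + x1*x2 in n = 2 variables, truncated at d = 2
f = {(0, 0): 1, (1, 0): 1, (0, 1): 1, (1, 1): 1}
print(R_t(f, 2))  # [{(0,): 1}, {(0,): 1, (1,): 1}, {(1,): 1}]
```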
The second step of the algorithm performs the multiplication. Let f and g be two power series in S/m^{d+1} and let h be the product fg in S/m^{d+1}. The equality h = fg turns into

R_t(h) = R_t(f) R_t(g).     (2)
Consequently, we concentrate on a fast way to compute R_t(h). We use an evaluation-interpolation scheme. We first consider the evaluation map at a point P = (p_2, ..., p_n) in k^{n-1}, defined by

E_P : ( k[[t]]/(t^{d+1}) )[x_2, ..., x_n] → k[[t]]/(t^{d+1}),    f(x_2, ..., x_n) ↦ f(P).
We then apply E_P to equation (2), which yields

E_P( R_t(h) ) = E_P( R_t(f) ) E_P( R_t(g) ) mod t^{d+1}.     (3)

Equation (3) holds for any point P and computes the value of R_t(h) at P by means of a univariate power series multiplication algorithm. Such an algorithm is described in [6].
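Each evaluated series is an ordinary truncated univariate series in t, so equation (3) is a truncated product. The following naive Python routine is a quadratic-time stand-in for the fast O(Mu(d)) multiplication of [6], shown only to fix ideas:

```python
def mul_trunc(a, b, d):
    """Truncated product of two univariate series given by coefficient
    lists a, b of length d+1: returns a*b mod t^(d+1). Naive O(d^2)
    stand-in for the fast multiplication of [6]."""
    h = [0] * (d + 1)
    for i, ai in enumerate(a):
        for j in range(d + 1 - i):
            h[i + j] += ai * b[j]
    return h

print(mul_trunc([1, 1, 0], [1, 2, 3], 2))  # (1+t)(1+2t+3t^2) mod t^3 -> [1, 3, 5]
```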

The last step of the algorithm consists in reconstructing h from a set of values of R_t(h). We regard R_t(h) as a multivariate polynomial in the variables x_2, ..., x_n. There exists an interpolation map

I : ( k[[t]]/(t^{d+1}) )^C → ( k[[t]]/(t^{d+1}) )[x_2, ..., x_n],    ( f(P_1), ..., f(P_C) ) ↦ f(x_2, ..., x_n),

which recovers R_t(h) from the set of C values { E_{P_1}( R_t(h) ), ..., E_{P_C}( R_t(h) ) } at pairwise distinct points. The evaluation points P_i, for i = 1, ..., C, are chosen to be powers of distinct prime numbers, namely P_i = (p_2^i, ..., p_n^i), where the p_j are distinct primes. The key point is that the ground field k has characteristic zero, so that distinct monomials in x_2, ..., x_n take pairwise distinct values p_2^{e_2} ... p_n^{e_n} at these points, by unique factorization of integers; this makes the interpolation problem well posed. An implementation of both maps E_P and I is described by J. Canny, E. Kaltofen, and Y. Lakshman in [4]. Their method relies on fast univariate multipoint evaluation and interpolation (e.g., [1]).
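Why prime-power points work can be seen on a small example. Each monomial x_2^{e_2} ... x_n^{e_n} contributes to the value at P_i through the integer node p_2^{e_2} ... p_n^{e_n} raised to the power i; distinct monomials give distinct nodes, so the resulting transposed-Vandermonde system is invertible. A quick illustration in Python (our own check, not part of the algorithm):

```python
from math import prod

# Monomials of total degree <= 2 in x2, x3, and the "node" p2^e2 * p3^e3
# attached to each one under evaluation at P_i = (2^i, 3^i).
primes = (2, 3)
monos = [(e2, e3) for e2 in range(3) for e3 in range(3) if e2 + e3 <= 2]
nodes = [prod(p ** e for p, e in zip(primes, m)) for m in monos]
print(sorted(nodes))  # [1, 2, 3, 4, 6, 9]: pairwise distinct by unique factorization

# Evaluating f = sum_m c_m x^m at P_i therefore reads
#   f(P_i) = sum_m c_m * node_m^i,
# a transposed Vandermonde system in the pairwise distinct nodes,
# hence uniquely solvable for the coefficients c_m.
assert len(set(nodes)) == len(nodes)
```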

Finally, we reconstruct h from R_t(h). If R_t(h) = h_0 + h_1 t + ... + h_d t^d is given, h is obtained by homogenizing each h_i in degree i with respect to the variable x_1 and then evaluating at t = 1.

We are now ready to unfold the algorithm.

MultivariatePS_Mult := proc(f, g)
  (1) F ← R_t(f); G ← R_t(g);                      // new representation
  (2) for i from 1 to C do                          // evaluation
        F_{P_i} ← E_{P_i}(F); G_{P_i} ← E_{P_i}(G);
  (3) for i from 1 to C do                          // univariate multiplication
        H_{P_i} ← F_{P_i} G_{P_i} mod t^{d+1};
  (4) R_t(h) ← I(H_{P_1}, ..., H_{P_C});            // interpolation
  (5) h ← homogenize each coefficient h_i of R_t(h) // reconstruction
        in degree i with respect to x_1; set t = 1;
  return h;
end proc
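The whole pipeline can be prototyped in a few dozen lines. The sketch below follows steps (1)-(5) over the rationals, but replaces the fast multipoint evaluation and interpolation of [4] by naive Gaussian elimination, so it illustrates correctness only, not the stated complexity; the dictionary representation and helper names are ours:

```python
from fractions import Fraction
from itertools import product
from math import prod

PRIMES = [2, 3, 5, 7, 11, 13]  # enough primes for small n

def solve(A, b):
    """Gaussian elimination over the rationals: a naive stand-in for the
    fast interpolation of [4]."""
    m = len(b)
    M = [row[:] + [v] for row, v in zip(A, b)]
    for c in range(m):
        pivot = next(r for r in range(c, m) if M[r][c] != 0)
        M[c], M[pivot] = M[pivot], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(m):
            if r != c and M[r][c] != 0:
                M[r] = [v - M[r][c] * w for v, w in zip(M[r], M[c])]
    return [M[r][m] for r in range(m)]

def mult(f, g, n, d):
    """Toy MultivariatePS_Mult: f and g map exponent tuples (e1,...,en)
    to coefficients, truncated at total degree d; n >= 2 assumed."""
    # Monomials in x2,...,xn of total degree <= d, and their integer nodes.
    monos = [e for e in product(range(d + 1), repeat=n - 1) if sum(e) <= d]
    C = len(monos)
    pts = [[p ** i for p in PRIMES[:n - 1]] for i in range(1, C + 1)]
    nodes = [prod(p ** e for p, e in zip(PRIMES, mo)) for mo in monos]
    V = [[Fraction(node) ** i for node in nodes] for i in range(1, C + 1)]

    def eval_series(s, P):  # steps (1) and (2): R_t followed by E_P
        out = [Fraction(0)] * (d + 1)
        for exps, c in s.items():
            j = sum(exps)  # power of t after the substitution R_t
            if j <= d:
                out[j] += c * prod(x ** e for x, e in zip(P, exps[1:]))
        return out

    F = [eval_series(f, P) for P in pts]
    G = [eval_series(g, P) for P in pts]
    # step (3): C truncated univariate products in t (naive here)
    H = [[sum(Fi[a] * Gi[j - a] for a in range(j + 1)) for j in range(d + 1)]
         for Fi, Gi in zip(F, G)]
    h = {}
    for j in range(d + 1):  # step (4): interpolate each t-coefficient
        cs = solve(V, [H[i][j] for i in range(C)])
        for mo, c in zip(monos, cs):
            if c != 0 and sum(mo) <= j:  # step (5): homogenize with x1, t = 1
                h[(j - sum(mo),) + mo] = c
    return h

# (1 + x1 + x2)(1 + x2) truncated at total degree 2
h = mult({(0, 0): 1, (1, 0): 1, (0, 1): 1}, {(0, 0): 1, (0, 1): 1}, 2, 2)
print({k: int(v) for k, v in h.items()})
# {(0, 0): 1, (1, 0): 1, (0, 1): 2, (1, 1): 1, (0, 2): 1}
```

Swapping `solve` and the quadratic loops for the fast routines of [1, 4, 6] is precisely what yields the bound (1).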

The next section derives the complexity result claimed by (1).

2.2  Complexity

Steps 1 and 5 can be performed in O(C) operations. Steps 2 and 4 reduce to d+1 fast multipoint evaluations (respectively, interpolations) at the C points P_1, ..., P_C, one for each power of t, for a total of O( d Mu(C) log C ) operations; Step 3 performs C truncated univariate multiplications, for O( C Mu(d) ) operations. The overall complexity of the algorithm is then derived by replacing Mu(C) by its estimate O( C log C loglog C ) obtained in [6] and noting that C < D log(D)/d. This yields

O( D log^3 D loglog D ).
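The accounting behind this bound can be retraced as follows, assuming the d+1 multipoint evaluations and interpolations at the C points dominate (a sketch; the detailed analysis is in [5]):

```latex
(d+1)\,\mathrm{M_u}(C)\,\log C
  \;=\; O\!\left(d\,C\,\log^2 C\,\log\log C\right)
  \;=\; O\!\left(D\,\log^3 D\,\log\log D\right),
```

using Mu(C) = O( C log C loglog C ) from [6], the bound d C < D log D (a restatement of C < D log(D)/d), and log C ≤ log D.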

2.3  Generalization

We mention that van der Hoeven generalized the algorithm to the case of ideals of the form

I = ( x_1^{d_1} ... x_n^{d_n} : a_1 d_1 + ... + a_n d_n > d ),

where the a_i are positive integers, by using the substitution

V_t : S/I → k[x_2, ..., x_n][[t]]/(t^{d+1}),    f(x_1, ..., x_n) ↦ f(t^{a_1}, x_2 t^{a_2}, ..., x_n t^{a_n})

instead of R_t. The rest of the algorithm remains unaltered.
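In the same spirit as R_t above, a small Python sketch of V_t, again with a dictionary representation of ours (the weights a_i are passed as a tuple):

```python
def V_t(f, a, d):
    """Sketch of van der Hoeven's substitution: f is {(e1,...,en): coeff};
    a term x^e is kept iff its weighted degree a.e = a_1 e_1 + ... + a_n e_n
    is at most d, and it lands in the coefficient of t^(a.e). As with R_t,
    x1 is dropped: since a_1 > 0, its exponent is recoverable from the
    weighted degree and the remaining exponents."""
    coeffs = [dict() for _ in range(d + 1)]
    for exps, c in f.items():
        w = sum(ai * ei for ai, ei in zip(a, exps))
        if w <= d:
            key = exps[1:]
            coeffs[w][key] = coeffs[w].get(key, 0) + c
    return coeffs

# f = 1 + x1 + x2 with weights a = (1, 2), truncated at weighted degree 2:
# x2 now lands in the coefficient of t^2.
print(V_t({(0, 0): 1, (1, 0): 1, (0, 1): 1}, (1, 2), 2))
# [{(0,): 1}, {(0,): 1}, {(1,): 1}]
```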

3  Appendix: Kronecker's Substitution

Kronecker's substitution is defined by the map

K_t : S/I → k[[t]]/( t^{(2d+1)^n} ),    f(x_1, ..., x_n) ↦ f(t, t^{2d+1}, ..., t^{(2d+1)^{n-1}}),

where I = (x_1^{d+1}, ..., x_n^{d+1}). This substitution truncates power series in partial degree d with respect to each variable x_i. If f is a power series in S/I, one recovers the coefficient of x_1^{e_1} ... x_n^{e_n} in f by simply reading off the coefficient of t^{e_1 + (2d+1) e_2 + ... + (2d+1)^{n-1} e_n} in K_t(f). The cost of this algorithm is the cost of the multiplication of two univariate polynomials of degree (2d)^n, that is, O( Mu( (2d)^n ) ). This is the lowest known complexity for multivariate power series multiplication modulo the ideal (x_1^{d+1}, ..., x_n^{d+1}). In particular, when addressed in this context, the algorithm presented above requires precision m^{nd+1} and yields a similar complexity.
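A compact Python illustration of Kronecker's substitution for truncation in partial degree (a naive univariate product stands in for the fast one, and the dictionary representation is ours):

```python
from itertools import product

def kronecker_mult(f, g, n, d):
    """Multiply two polynomials truncated in partial degree d in each of n
    variables via Kronecker's substitution x_i -> t^((2d+1)^(i-1)). A naive
    univariate product replaces the fast multiplication; polynomials are
    given as {exponent tuple: coefficient}."""
    B = 2 * d + 1  # base: a product of two factors has partial degrees <= 2d

    def pack(poly):  # K_t: one coefficient slot per mixed-radix index
        u = [0] * (B ** n)
        for exps, c in poly.items():
            u[sum(e * B ** i for i, e in enumerate(exps))] += c
        return u

    uf, ug = pack(f), pack(g)
    uh = [0] * (2 * B ** n - 1)  # naive univariate product of uf and ug
    for i, a in enumerate(uf):
        if a:
            for j, b in enumerate(ug):
                uh[i + j] += a * b
    # Read coefficients back, truncating to partial degree d in each variable.
    h = {}
    for exps in product(range(d + 1), repeat=n):
        c = uh[sum(e * B ** i for i, e in enumerate(exps))]
        if c:
            h[exps] = c
    return h

# (1 + x1 + x2)(1 + x1*x2) truncated in partial degree 1 in each variable
h = kronecker_mult({(0, 0): 1, (1, 0): 1, (0, 1): 1}, {(0, 0): 1, (1, 1): 1}, 2, 1)
print(h)  # {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 1}
```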

Kronecker's substitution may be used to compute modulo m^{d+1} as well. However, for fixed n and d ≫ n, it results in a significant overhead of O( 2^n n! ) with respect to the size of the power series.


Bibliography

[1] Aho (Alfred V.), Hopcroft (John E.), and Ullman (Jeffrey D.). -- The Design and Analysis of Computer Algorithms. -- Addison-Wesley, Reading, Mass.-London-Amsterdam, 1975, x+470p. Second printing, Addison-Wesley Series in Computer Science and Information Processing.

[2] Brent (R. P.) and Kung (H. T.). -- Fast algorithms for composition and reversion of multivariate power series (preliminary version). In Proceedings of a Conference on Theoretical Computer Science, Department of Computer Science, University of Waterloo, Waterloo, Ontario, August 1977, pp. 149-158.

[3] Bürgisser (Peter), Clausen (Michael), and Shokrollahi (M. Amin). -- Algebraic Complexity Theory. -- Springer-Verlag, Berlin, 1997, xxiv+618p. With the collaboration of Thomas Lickteig.

[4] Canny (John F.), Kaltofen (Erich), and Lakshman (Yagati). -- Solving systems of nonlinear polynomial equations faster. In Gonnet (Gaston) (editor), Symbolic and Algebraic Computation (International Symposium ISSAC'89, Portland, Oregon, USA, July 17-19, 1989), pp. 121-128. -- ACM Press, 1989.

[5] Lecerf (Grégoire) and Schost (Éric). -- Fast multivariate power series multiplication in characteristic zero. -- 2001.

[6] Schönhage (A.). -- Schnelle Multiplikation von Polynomen über Körpern der Charakteristik 2. Acta Informatica, vol. 7, n°4, 1976/77, pp. 395-398.

[7] Schost (Éric). -- Sur la résolution des systèmes polynomiaux à paramètres. -- PhD thesis, École polytechnique, Palaiseau, France, January 2001. Defended on December 7, 2000.
