
An important problem that arises in different areas of science and engineering is that of computing the limits of sequences of vectors $\{x_m\}$, where $x_m \in \mathbb{C}^N$, $N$ being very large. Such sequences arise, for example, in the solution of systems of linear or nonlinear equations by fixed-point iterative methods, and their limits are simply the required solutions. In most cases of interest, however, these sequences converge to their limits extremely slowly. One practical way to make them converge more quickly is to apply vector extrapolation methods to them. Two types of methods exist in the literature: polynomial type methods and epsilon algorithms. In most applications, the polynomial type methods have proved to be superior convergence accelerators. Three polynomial type methods are known: the minimal polynomial extrapolation (MPE), the reduced rank extrapolation (RRE), and the modified minimal polynomial extrapolation (MMPE). In this work, we develop yet another polynomial type method, based on the singular value decomposition as well as the ideas that lead to MPE. We denote this new method by SVD-MPE. We also design a numerically stable algorithm for its implementation, whose computational cost and storage requirements are minimal. Finally, we illustrate the use of SVD-MPE with numerical examples.

An important problem that arises in different areas of science and engineering is that of computing limits of sequences of vectors

In most cases of interest, however, the sequences

Nevertheless, we may ask whether we can do something with those

Of course, in case

Two different types of vector extrapolation methods exist in the literature:

1) Polynomial type methods: The minimal polynomial extrapolation (MPE) of Cabay and Jackson [

2) Epsilon algorithms: The scalar epsilon algorithm (SEA) of Wynn [

The paper by Smith, Ford, and Sidi [

Vector extrapolation methods are used effectively in various branches of science and engineering in accelerating the convergence of iterative methods for large sparse systems of equations.

All of these methods have the useful feature that their only input is the vector sequence

In this work, we develop yet another polynomial type method, which is based on the singular value decomposition (SVD), as well as some ideas that lead to MPE. We denote this new method by SVD-MPE. We also design a numerically stable algorithm for its implementation, whose computational cost and storage requirements are minimal. The new method is described in the next section. In Section 3, we show how the error in the approximation produced by SVD-MPE can be estimated at zero cost in terms of the quantities already used in the construction of the approximation. In Section 4, we give a very efficient algorithm for implementing SVD-MPE. In Section 5, we derive determinant representations for the approximations produced by SVD-MPE, while in Section 6, we show that this method is a Krylov subspace method when applied to vector sequences that result from the solution of linear systems via fixed-point iterative schemes. Finally, in Section 7, we illustrate its use with two numerical examples.

Before closing, we state the (reduced version of the) well-known singular value decomposition (SVD) theorem. For different proofs, we refer the reader to Golub and Van Loan [

Theorem 1.1 Let

Furthermore, if

In case

Remark: The
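As a concrete illustration of the reduced ("economy") SVD of Theorem 1.1, the following sketch (my own, using NumPy's `numpy.linalg.svd`; it is not part of the paper) verifies the factorization and the orthonormality of the computed factors:

```python
import numpy as np

# Reduced SVD of a tall N x (k+1) matrix A: A = P Sigma Q^H, with P of size
# N x (k+1) having orthonormal columns, Sigma diagonal of size (k+1) x (k+1)
# with nonnegative entries ordered decreasingly, and Q unitary.
rng = np.random.default_rng(0)
N, kp1 = 50, 4
A = rng.standard_normal((N, kp1))

P, sigma, QH = np.linalg.svd(A, full_matrices=False)  # reduced SVD

assert np.allclose(P.T @ P, np.eye(kp1))              # orthonormal columns
assert np.all(sigma[:-1] >= sigma[1:]) and np.all(sigma >= 0)
assert np.allclose(P @ np.diag(sigma) @ QH, A)        # reproduces A
```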

In what follows, we use boldface lower case letters for vectors and boldface upper case letters for matrices. In addition, we will be working with general inner products

• In

• In

Of course, the standard Euclidean inner product

We begin with a brief summary of minimal polynomial extrapolation (MPE). We use the ideas that follow to develop our new method.

Given the vector sequence

and, for some fixed n, define the matrices

Clearly, there is an integer

(Of course, this is the same as saying that

This minimization problem can also be expressed as in

and, as is easily seen,

provided

Finally, set

as the approximation to

What we have so far is only the definition (or the theoretical development) of MPE as a method. It should not be taken as an efficient computational procedure (algorithm), however. For this topic, see [
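As a hedged sketch of MPE as summarized above: one solves a least-squares problem for the coefficients $c_i$ with $c_k = 1$, normalizes to $\gamma_i = c_i/\sum_j c_j$, and forms the weighted sum of the iterates. The variable names and the toy linear iteration below are my own choices; this is the definition, not the paper's efficient algorithm.

```python
import numpy as np

def mpe(X):
    """Minimal Polynomial Extrapolation applied to X = [x_0, ..., x_{k+1}].

    With u_i = x_{i+1} - x_i, solves min ||sum_i c_i u_i|| subject to
    c_k = 1 in the least-squares sense, then returns
    s = sum_i gamma_i x_i with gamma_i = c_i / sum_j c_j.
    """
    X = np.asarray(X, dtype=float)
    U = np.diff(X, axis=0).T            # columns u_i = x_{i+1} - x_i
    k = U.shape[1] - 1
    cp, *_ = np.linalg.lstsq(U[:, :k], -U[:, k], rcond=None)
    c = np.append(cp, 1.0)              # enforce c_k = 1
    gamma = c / c.sum()
    return (gamma[:, None] * X[: k + 1]).sum(axis=0)

# Example: x_{m+1} = M x_m + b converges to the fixed point s = (I - M)^{-1} b.
M = np.array([[0.5, 0.2], [0.1, 0.4]])
b = np.array([1.0, 1.0])
s_exact = np.linalg.solve(np.eye(2) - M, b)
X = [np.zeros(2)]
for _ in range(3):
    X.append(M @ X[-1] + b)
# With k = 2 (at least the degree of the minimal polynomial), MPE is exact.
assert np.allclose(mpe(X), s_exact)
```

This illustrates the finite-termination property: for a linearly generated sequence, MPE with large enough k reproduces the limit exactly.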

We start by observing that the unconstrained minimization problem for MPE given in (2.7) can also be expressed as a superficially “constrained” minimization problem as in

For the SVD-MPE method, we replace this “constrained” minimization problem by the following actual constrained minimization problem:

With

provided

Finally, we set

as the SVD-MPE approximation to

Of course, the minimization problem in (2.12) has a solution for

Lemma 2.1 Let

ordered as in

and let

Assuming that

Proof. The proof is achieved by observing that, with

so that the problem in (2.12) becomes

The (optimal) solution to this problem is

In view of the nature of the solution for the (optimal)

Remarks:

1) Recall that there exists a positive integer

2) Of course,
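In the Euclidean case, the solution described in Lemma 2.1 can be checked directly: the minimizer of $\|Uc\|_2$ over unit vectors $c$ is the right singular vector belonging to the smallest singular value of $U$, and the minimum value is that singular value. A small numerical sketch (mine, not the paper's):

```python
import numpy as np

# min ||U c||_2 subject to ||c||_2 = 1 is attained at the right singular
# vector of U belonging to sigma_min, with minimum value sigma_min.
rng = np.random.default_rng(1)
U = rng.standard_normal((30, 5))
_, sigma, VH = np.linalg.svd(U, full_matrices=False)
c_opt = VH[-1]                      # right singular vector for sigma_min

assert np.isclose(np.linalg.norm(U @ c_opt), sigma[-1])
# No unit vector does better (random spot check):
for _ in range(100):
    c = rng.standard_normal(5)
    c /= np.linalg.norm(c)
    assert np.linalg.norm(U @ c) >= sigma[-1] - 1e-12
```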

Before we go on to the development of our algorithm in the next section, we state the following result concerning the finite termination property of SVD-MPE, whose proof is very similar to that pertaining to MPE and RRE given in [

Theorem 2.2 Let

We now turn to the problem of estimating at zero cost the error

and that the vector sequence

Now, if

This is justified since

1)

In this case, we have

and, therefore, by

and thus

2)

In this case, assuming that

where

from which, we conclude that the vectors

That is, for all large m, the sequence

where

Remark: That the vector

Now, we can compute

Theorem 3.1 Let

Proof. First, the solution to (2.12) is

Thus, by Lemma 2.1, we have

which is the required result. □
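Assuming the estimate in Theorem 3.1 is the quantity $\|U\gamma\| = \sigma_{\min}/\lvert\sum_i c_i\rvert$ with $\gamma = c/\sum_i c_i$ and $c$ the optimal unit-norm vector (which follows from Lemma 2.1 in the Euclidean case; this reading is mine), its zero-cost nature can be sketched as follows:

```python
import numpy as np

rng = np.random.default_rng(2)
U = rng.standard_normal((40, 6))

_, sigma, VH = np.linalg.svd(U, full_matrices=False)
c = VH[-1]                                  # optimal unit-norm c
gamma = c / c.sum()                         # normalized so sum(gamma) == 1

# ||U gamma|| = ||U c|| / |sum(c)| = sigma_min / |sum(c)|: the estimate uses
# only quantities already computed in constructing the approximation.
assert np.isclose(np.linalg.norm(U @ gamma), sigma[-1] / abs(c.sum()))
```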

We now turn to the design of a good algorithm for constructing numerically the approximation

In this section, we let

As we have seen in Section 2, to determine

1) We first compute the QR factorization of

where

Of course, we can carry out the QR factorizations in different ways. Here we do this by the modified Gram-Schmidt process (MGS) in a numerically stable way as follows:

1. Compute

2. For

Set

For

end do (i)

Compute

end do (j)

Note that the matrices

For MGS, see [
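In the Euclidean case, step 1 can be sketched as the following MGS routine (a minimal illustration with my own variable names, not the paper's weighted-inner-product implementation):

```python
import numpy as np

def mgs_qr(U):
    """QR factorization U = Q R by the modified Gram-Schmidt process.

    Q has orthonormal columns and R is upper triangular; MGS subtracts
    each projection from the working column immediately, which is what
    gives it better numerical stability than classical Gram-Schmidt.
    """
    N, k = U.shape
    Q = U.astype(float)
    R = np.zeros((k, k))
    for j in range(k):
        for i in range(j):
            R[i, j] = Q[:, i] @ Q[:, j]     # project against q_i ...
            Q[:, j] -= R[i, j] * Q[:, i]    # ... and subtract at once
        R[j, j] = np.linalg.norm(Q[:, j])
        Q[:, j] /= R[j, j]
    return Q, R

rng = np.random.default_rng(3)
U = rng.standard_normal((20, 4))
Q, R = mgs_qr(U)
assert np.allclose(Q @ R, U)
assert np.allclose(Q.T @ Q, np.eye(4))
assert np.allclose(R, np.triu(R))
```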

2) We next compute the SVD of

and a square diagonal matrix

such that

In addition, since

3) Substituting (4.7) in (4.1), we obtain the following true singular value decomposition of

Thus,
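The combination of steps 1-3 can be checked numerically: if $U = QR$ is the QR factorization and $R = H\Sigma W^*$ is the SVD of the small upper-triangular matrix $R$, then $U = (QH)\Sigma W^*$ is a true SVD of $U$, since $QH$ again has orthonormal columns. A sketch of my own:

```python
import numpy as np

# The expensive SVD of the tall N x (k+1) matrix U is replaced by the SVD
# of the tiny (k+1) x (k+1) matrix R from its QR factorization.
rng = np.random.default_rng(4)
U = rng.standard_normal((100, 5))

Q, R = np.linalg.qr(U, mode='reduced')
H, S, WT = np.linalg.svd(R)

P = Q @ H                                   # left singular vectors of U
assert np.allclose(P @ np.diag(S) @ WT, U)
assert np.allclose(P.T @ P, np.eye(5))
# Singular values agree with a direct SVD of U.
assert np.allclose(S, np.linalg.svd(U, compute_uv=False))
```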

Remark: Computing the SVD of a matrix

With

provided

Next, by the fact that

and by (2.14), we can re-express

where the

Making use of the fact that

where the

Thus, the computation of

Of course,

It is clear that, for the computation in (4.14) and (4.15), we need to save both

This completes the design of our algorithm for implementing SVD-MPE. For convenience, we provide a systematic description of this algorithm in

the exact or approximate residual vector

Note that the input vectors

Remark: If we were to compute the SVD of

1) If we have storage limitations, then we would have to overwrite

2) If we do not want to compute the vectors

Clearly, the indirect approach we have taken here for carrying out the singular value decomposition of

enables us to save extra computing and storage very conveniently when N is very large.

In [

The following lemma, whose proof can be found in [

Lemma 5.1 Let

Then, whether

where

provided

For convenience of notation, we will write

Then

Theorem 5.2 that follows gives our first determinant representation for

Theorem 5.2 Define

and assume that

(5.5)

Then, provided

where

Proof. With

Invoking here

Dividing both sides of this equality by

which, by the fact that

is the same as

where we have invoked (5.4). We will be able to apply Lemma 5.1 to prove the validity of (5.6) if we show that, in (5.10), the equations with

are linearly independent. By the fact that

we have

where

and

Invoking

Since

Remark: We note that the condition that

The determinant representation given in Theorem 5.3 that follows is based on the complete singular value decomposition of

Theorem 5.3 Let

be the singular value decomposition of

and

Define

Then,

where

Proof. By Theorem 1.1,

Therefore,

By (2.16) and by the fact that

But, by (5.11), (5.15) is the same as

Therefore, Lemma 5.1 applies with

In Sidi [

Since there is no room for confusion, we will use the notation of Theorem 5.3.

Theorem 6.1 Let

and let the vector sequence

Define the residual vector of

1)

2) The residual vector of

Thus,

Consequently, SVD-MPE is a Krylov subspace method for the linear system

Proof. With the

Therefore,

and

Upon substituting

To prove (6.2), we first recall that

Remark: We recall that (see [

We now provide two examples that show the performance of SVD-MPE and compare SVD-MPE with MPE. In both examples, SVD-MPE and MPE are implemented with the standard Euclidean inner product and the norm induced by it. Thus,

As we have already mentioned, a major application area of vector extrapolation methods is the numerical solution of large systems of linear or nonlinear equations

C0) Choose integers

C1) Compute the vectors

C2) Apply SVD-MPE (or MPE) to the vectors

C3) If

Otherwise, set

We will call each application of steps C1 - C3 a cycle, and denote by the sequence

Note that the remark at the end of Section 4 is relevant to the implementation of SVD-MPE in the cycling mode when N, the dimension of the
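A minimal sketch of the cycling mode C0-C3, using plain MPE with the Euclidean inner product as the extrapolation step (the function names, the stopping test on successive extrapolated vectors, and the toy linear map are my own assumptions, not the paper's):

```python
import numpy as np

def extrapolation_cycle(F, x0, k, tol=1e-10, max_cycles=100):
    """Cycling mode: generate iterates, extrapolate, restart (steps C1-C3)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_cycles):
        X = [x]
        for _ in range(k + 1):                       # C1: generate iterates
            X.append(F(X[-1]))
        U = np.diff(np.asarray(X), axis=0).T         # difference vectors
        cp, *_ = np.linalg.lstsq(U[:, :k], -U[:, k], rcond=None)
        c = np.append(cp, 1.0)
        gamma = c / c.sum()
        s = (gamma[:, None] * np.asarray(X[: k + 1])).sum(axis=0)  # C2
        if np.linalg.norm(s - x) < tol:              # C3: stop test
            return s
        x = s                                        # otherwise restart
    return x

# Linear fixed-point iteration x -> M x + b; cycling converges to the
# solution of (I - M) x = b.
M = np.array([[0.6, 0.3], [0.2, 0.5]])
b = np.array([1.0, -1.0])
s = extrapolation_cycle(lambda x: M @ x + b, np.zeros(2), k=2)
assert np.allclose(s, np.linalg.solve(np.eye(2) - M, b))
```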

Example 7.1 Consider the vector sequence

and is symmetric with respect to both main diagonals, and

Example 7.2 We now apply SVD-MPE and MPE to the nonlinear system that arises from a finite-difference approximation of the two-dimensional convection-diffusion equation considered in Kelley ([

where

as the exact solution.

The equation is discretized on a square grid by approximating

and

and

we replace the differential equation by the finite difference equations

with

Here

We first write the finite difference equations in a way that is analogous to the PDE written in the form

and split the matrix representing

Jacobi iteration. For both cycling computations, SVD-MPE and MPE seem to perform very similarly in this example.

Note that the Jacobi and Gauss-Seidel iterations converge extremely slowly. In view of this slow convergence, the acceleration produced by SVD-MPE and MPE in the cycling mode is remarkable.

The author would like to thank Boaz Ophir for carrying out the computations reported in Section 7 of this work.

Avram Sidi (2016) SVD-MPE: An SVD-Based Vector Extrapolation Method of Polynomial Type. Applied Mathematics, 7, 1260-1278. doi: 10.4236/am.2016.711111