NUMERICAL ANALYSIS TITLES -- Volume 1, Number 1
August, 1983
==================================================

[1] "Solving the linear least squares problem on a linear array of
processors" by Ahmed Sameh (Sept. 1982) [no abstract]

[2] "A fast Poisson solver for multiprocessors" by Ahmed Sameh
(Jan. 1983) [no abstract]

[3] "An overview of parallel algorithms in numerical linear algebra"
by Ahmed Sameh (March 1983) [no abstract]

[4] "The computation and communication complexity of a parallel banded
system solver" by Ahmed Sameh (March 1983) [no abstract]

Communicated by: Ahmed Sameh
Obtainable from: Ahmed Sameh (Request in writing)

-----

[5] Alan George and Esmond Ng, "Solution of sparse underdetermined
systems of linear equations", Research report CS-82-39, Dept. of
Computer Science, University of Waterloo, September 1982. (Presented
at the Sparse Matrix Symposium 1982.)

In this paper we consider the problem of computing the minimal
l2-norm solution to a consistent underdetermined linear system Ax=b,
where A is m by n with m<=n. The method of solution is to reduce A to
lower trapezoidal form [L O] using orthogonal transformations, where L
is m by m and lower triangular. The method can be implemented
efficiently if the matrix AA' is sparse. However, if A contains some
dense columns, AA' may be unacceptably dense. We present a method for
handling these dense columns. The problem of solving a rank-deficient
underdetermined system is also considered.

[6] Esmond Ng, "Row elimination in sparse matrices using rotations",
Research report CS-83-01, Dept. of Computer Science, University of
Waterloo, January 1983.

One way of solving a system of linear equations Ax=b, where A is an m
by n matrix, is to use a QR-decomposition of A (or of A' if m < n).
Assume m >= n, and let Pr and Pc be permutation matrices of order m
and n respectively. Suppose PrAPc is reduced to upper trapezoidal form
[R' O]' using Givens rotations, where R is n by n and upper
triangular. The sparsity structure of R depends only on Pc.
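The minimum-norm computation described in [5] can be sketched densely as
follows (an illustrative NumPy sketch, not the report's sparse orthogonal
code: here the lower trapezoidal form [L O] is obtained from a QR
factorization of A').

```python
# Illustrative sketch of computing the minimal l2-norm solution of a
# consistent underdetermined system A x = b (A is m-by-n, m <= n) by an
# orthogonal reduction of A to lower trapezoidal form.
import numpy as np

def min_norm_solution(A, b):
    """Return the minimum 2-norm x satisfying A x = b."""
    # QR of A' gives A' = Q R, i.e. A = R' Q' with R' an m-by-m lower
    # triangular block: the [L O] trapezoidal form of the abstract.
    Q, R = np.linalg.qr(A.T)       # Q: n-by-m, R: m-by-m upper triangular
    y = np.linalg.solve(R.T, b)    # forward solve L y = b with L = R'
    return Q @ y                   # x = Q y lies in the row space of A

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
b = A @ rng.standard_normal(5)     # a consistent right-hand side
x = min_norm_solution(A, b)
```

Among all solutions of A x = b, the one in the row space of A has minimal
2-norm, which is why x = Q y with L y = b suffices.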
For a fixed Pc, the number of arithmetic operations required to
compute R depends on Pr. In this paper, we consider row ordering
strategies which are appropriate when Pc is obtained from nested
dissection orderings of A'A. Recently, it was shown that so-called
"width-2" nested dissection orderings of A'A could be used to obtain
good row and column orderings for A simultaneously. In this paper, we
show that the conventional (width-1) nested dissection orderings can
also be used to induce good row orderings. In Part I of this paper, we
analyze the application of Givens rotations to a sparse matrix A using
a bipartite graph model.

[9] Alan George and Esmond Ng, "A partial pivoting implementation of
Gaussian elimination for sparse systems", Research report CS-83-15,
Dept. of Computer Science, University of Waterloo, May 1983.
(Submitted to SIAM J. Sci. Stat. Comput.)

In this paper, we consider the problem of solving a sparse nonsingular
system of linear equations. We show that the structures of the
triangular matrices obtained in the LU-decomposition of a sparse
nonsingular matrix A using Gaussian elimination with partial pivoting
are contained in those of the Cholesky factors of A'A, provided that
the diagonal elements of A are nonzero. Based on this result, a method
for solving sparse linear systems is then described. The main
advantage of this method is that the numerical computation can be done
using a static data structure. Numerical experiments comparing this
method with other implementations of Gaussian elimination for solving
sparse linear systems are presented, and the results indicate that the
method proposed in this paper is quite competitive with other
approaches.

Communicated by: Esmond Ng
Obtainable from: Department of Computer Science
                 University of Waterloo
                 Waterloo, Ontario
                 Canada N2L 3G1

-----

[10] Richard H.
Bartels and Nezam Mahdavi-Amiri, "On Generating Test Problems for
Nonlinear Programming Algorithms"

We present an approach to test-problem generation which provides a way
of constructing example nonlinear programming problems from a wide
variety of given functions. The approach permits one to specify an
arbitrary number of points at each of which the problem should satisfy
some "interesting" conditions (i.e. be optimal or stationary or
degenerate) and to determine the characteristics of the functions and
derivatives at these points (i.e. choose predetermined values,
gradients, Hessians and Lagrange multipliers). We give a sample of
results obtained by using our approach to generate test problems for
two algorithms that minimize nonlinear-least-squares objectives
subject to nonlinear constraints.

Communicated by: Richard Bartels
Obtainable from: Department of Computer Science
                 University of Waterloo
                 Waterloo, Ontario

-----

[11] V. S. Manoranjan, A. R. Mitchell, and J. Ll. Morris, "Numerical
Solutions of the 'Good' Boussinesq Equation", Report NA/59, October,
1982.

The 'good' Boussinesq equation is studied numerically using Galerkin
methods, and soliton solutions are shown to exist. An analytic formula
for the 2-soliton interaction is presented and verified by numerical
experiment.

[12] D. F. Griffiths, A. R. Mitchell, "A Numerical Study of the
Nonlinear Schroedinger Equation", Report NA/52, May, 1982.

This paper presents a numerical study of the nonlinear (cubic)
Schroedinger (NLS) equation. Of particular interest are soliton
solutions. The paper presents a one-parameter family of numerical
schemes for solving the equation and advances a number of arguments,
based on energy conservation, which isolate an optimum value of the
parameter. The numerical algorithm employs piecewise linear basis
functions for the space discretization, and this is compared with a
class of finite difference methods based upon a Crank-Nicolson scheme.
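A Crank-Nicolson-type step for the cubic NLS equation can be sketched as
follows (our own illustrative conservative scheme on a periodic grid; it
is not the scheme of the report, and all grid parameters are assumptions).

```python
# Conservative Crank-Nicolson sketch for i u_t + u_xx + 2|u|^2 u = 0 on a
# periodic grid; the averaged nonlinearity keeps the discrete L2 norm
# (mass) conserved, mirroring the energy-conservation arguments above.
import numpy as np

def nls_cn_step(u, dx, dt, n_inner=10):
    """Advance u by one time step, resolving the implicit nonlinearity
    with a simple fixed-point iteration."""
    n = len(u)
    D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1))
    D2[0, -1] = D2[-1, 0] = 1.0        # periodic boundary conditions
    D2 /= dx**2
    I = np.eye(n)
    v = u.copy()
    for _ in range(n_inner):           # fixed-point iteration on v
        N = np.abs(u)**2 + np.abs(v)**2    # averaged nonlinearity
        M = (1j / dt) * I + 0.5 * D2 + 0.5 * np.diag(N)
        rhs = (1j / dt) * u - 0.5 * (D2 @ u) - 0.5 * N * u
        v = np.linalg.solve(M, rhs)
    return v

x = np.linspace(-20, 20, 256, endpoint=False)
u = (1.0 / np.cosh(x)).astype(complex)     # single-soliton initial data
mass0 = np.linalg.norm(u)                  # discrete L2 norm to conserve
for _ in range(20):
    u = nls_cn_step(u, x[1] - x[0], 0.01)
```

Because the system matrix is Hermitian apart from the (i/dt) term, the
scheme conserves the discrete L2 norm exactly once the inner iteration
has converged.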
Numerical results for the evolution of a single soliton and for the
interaction of two solitons are provided.

Communicated by: J. Ll. Morris, University of Waterloo.
Obtainable from: D. F. Griffiths
                 Department of Mathematics
                 University of Dundee
                 Dundee DD1 4HN
                 Scotland, U.K.

-----

[13] Thomas F. Coleman and Andrew R. Conn, "On the Local Convergence
of a Quasi-Newton Method for the Nonlinear Programming Problem".

In this paper we propose a new local quasi-Newton method to solve the
equality constrained nonlinear programming problem. The pivotal
feature of the algorithm is that a projection of the Hessian of the
Lagrangian is approximated by a sequence of symmetric positive
definite matrices. The matrix approximation is updated at every
iteration by a projected version of the DFP or BFGS formula. We
establish that the method is locally convergent and that the sequence
of x-values converges to the solution at a 2-step Q-superlinear rate.

[14] Andrew R. Conn and Nicholas I.M. Gould, "On the Location of
Directions of Infinite Descent for Non-Linear Programming Algorithms".

There is much current interest in general equality constrained
programming problems, both for their own sake and for their
applicability to active set methods for nonlinear programming. In the
former case, typically, the issues are the existence of solutions and
their determination. In the latter instance, non-existence of
solutions gives rise to directions of infinite descent. Such
directions may subsequently be used to determine a more desirable
active set. The generalised Cholesky decomposition of the relevant
matrices enables us to answer the question of existence and to
determine directions of infinite descent (when applicable) in an
efficient and stable manner. The methods examined are related to
implementations that are suitable for null, range and Lagrangian
methods.

Communicated by: Andrew R. Conn
Obtainable from: Andrew R.
Conn
Computer Science Department
University of Waterloo
Waterloo, Ontario
CANADA N2L 3G1

-----

"Bibliography of Stanford Computer Science Reports, 1962-1983" by
Kathryn A. Berg, Report STAN-CS-83-962, March 1983. 62 pages.
(Hard copy $5.00, microfiche $2.00.)

A chronological listing of all technical reports and theses published
by the department. The availability and cost of each report is noted.
Also included are the AI Memos and a number of the Heuristic
Programming Project publications.

Communicated by: Gene Golub
Obtainable from: Computer Science Department
                 Stanford University
                 Stanford, California USA 94305

-----

[15] "Performance of Various Computers Using Standard Linear Algebra
Software in a Fortran Environment" by Jack J. Dongarra, Mathematics
and Computer Science Division, Argonne National Laboratory, Argonne,
Illinois 60439. Tele: 312-972-7246. Arpa: Dongarra@anl-mcs.

In this note we compare a number of different computer systems for
solving dense systems of linear equations using the LINPACK software
in a Fortran environment. About 50 computers are compared, ranging
from a Cray X-MP to 68000-based systems such as the Apollo and
Masscomp.

[16] "Squeezing the Most out of an Algorithm in Cray Fortran" by Jack
J. Dongarra, Mathematics and Computer Science Division, Argonne
National Laboratory, and Stanley C. Eisenstat, Department of Computer
Science and Research Center for Scientific Computation, Yale
University.

Abstract - This paper discusses a technique for achieving super-vector
performance on a Cray in a purely Fortran environment (i.e., without
resorting to assembly language). The technique can be applied to a
wide variety of algorithms in linear algebra, and is beneficial in
other architectural settings.

Communicated by: Jack Dongarra
Obtainable from: Jack Dongarra
                 Mathematics and Computer Science Division
                 Argonne National Laboratory
                 Argonne, Illinois 60439

-----

[17] "Block Preconditioning for the Conjugate Gradient Method" by P.
Concus, G.H. Golub and G. Meurant

Block preconditionings for the conjugate gradient method are
investigated for solving positive definite block tridiagonal systems
of linear equations arising from the discretization of boundary value
problems for elliptic partial differential equations. The
preconditionings rest on the use of sparse approximate matrix inverses
to generate incomplete block Cholesky factorizations. Carrying out the
factorizations can be guaranteed under suitable conditions. Numerical
experiments on test problems in two dimensions indicate that a
particularly attractive preconditioning, which uses special properties
of tridiagonal matrix inverses, can be computationally more efficient
for the same computer storage than other preconditionings, including
the popular point incomplete Cholesky factorization.

Key words: conjugate gradient method, elliptic partial differential
equations, incomplete factorization, iterative methods,
preconditioning, sparse matrices.

Communicated by: Gene Golub
                 Computer Science Department
                 Stanford University
                 Stanford, California USA 94305
Available from: The authors

-----

[18] "Inaccuracy in Quasi-Newton Methods: Local Improvement Theorems"
(Rice University MASC Technical Report 83-11, March 1983) by J.E.
Dennis, Jr., Mathematical Sciences Department, Rice University,
Houston, Texas 77251, and Homer F. Walker, Department of Mathematics,
University of Houston, Houston, Texas 77004.

In this paper, we consider the use of bounded-deterioration
quasi-Newton methods implemented in floating-point arithmetic to find
solutions to F(x) = 0 when only inaccurate F-values are available. Our
analysis is for the case where the relative error in F is less than
one.
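A familiar example of a bounded-deterioration quasi-Newton method of the
kind analyzed in [18] is Broyden's method. The sketch below is our own
illustration, not the report's algorithm: the noise_level parameter is
an assumption added to mimic inaccurate F-values.

```python
# Broyden's quasi-Newton method with optionally perturbed F-values.
import numpy as np

def broyden(F, x0, noise_level=0.0, tol=1e-10, max_iter=50, rng=None):
    """Solve F(x) = 0; noise_level > 0 simulates inaccurate F-values."""
    rng = rng or np.random.default_rng(0)
    Fnoisy = lambda x: F(x) + noise_level * rng.standard_normal(len(x))
    x = np.asarray(x0, dtype=float)
    n = len(x)
    # initial Jacobian approximation by forward differences
    h = 1e-6
    B = np.column_stack([(F(x + h * e) - F(x)) / h for e in np.eye(n)])
    fx = Fnoisy(x)
    for _ in range(max_iter):
        s = np.linalg.solve(B, -fx)            # quasi-Newton step
        x = x + s
        fnew = Fnoisy(x)
        if np.linalg.norm(fnew) < tol:
            break
        B += np.outer(fnew - fx - B @ s, s) / (s @ s)  # Broyden update
        fx = fnew
    return x

# a small test system with solution (1, 1)
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
xstar = broyden(F, [1.2, 0.8])
```

With noise_level > 0 the iteration stalls at a limiting accuracy set by
the size of the errors in F, which is the regime the theorems above
quantify.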
We obtain theorems specifying local rates of improvement and limiting
accuracies depending on the nearness of the basic algorithm to
Newton's method, the accuracy of its implementation, the relative
errors in the function values, the accuracy of the solutions of the
linear systems for the Newton steps, and the unit-rounding errors in
the addition of the Newton steps.

Communicated by: John Dennis
Available from: The authors

---------------

[19] "Sharp Estimates for Multigrid Rates of Convergence" by Randolph
E. Bank [$] and Craig C. Douglas [%]

[$] Department of Mathematics, University of California at San Diego
[%] Department of Computer Science, Yale University

Yale University, Department of Computer Science, Research Report #277

Available on request from: Craig C. Douglas
                           Yale University
                           Department of Computer Science
                           10 Hillhouse Ave.
                           P.O. Box 2158 Yale Station
                           New Haven, CT 06520
Phone: 203/436-3761
ARPAnet: DOUGLAS-CRAIG@YALE

Summary: In this paper, we prove the convergence of the multilevel
iterative method for solving linear equations that arise from elliptic
partial differential equations. Our theory is presented entirely in
terms of the generalized condition number K of the matrix A and the
smoothing matrix B. This leads to a completely algebraic analysis of
the method as an iterative technique for solving linear equations; the
properties of the elliptic equation and the discretization procedure
enter only when we seek to estimate K, just as in the case of most
standard iterative methods. Here we consider the fundamental two-level
iteration, and the V and W cycles of the j-level iteration (j > 2). We
prove that the V and W cycles converge even when only one smoothing
iteration is used. We present several examples of the computation of K
using both Fourier analysis and standard finite element techniques. We
compare the predictions of our theorems with the actual rate of
convergence.
Our analysis also shows that iterative methods, both fixed (Chebyshev)
and adaptive (conjugate gradients and conjugate residuals), are
effective as smoothing procedures.

-------
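The fundamental two-level iteration discussed above can be sketched for
the 1-D model problem as follows (an illustration using our own standard
choices of damped-Jacobi smoother and transfer operators, not taken from
the report).

```python
# Two-grid iteration for the 1-D Poisson model problem -u'' = f,
# with damped-Jacobi smoothing and a Galerkin coarse-grid operator.
import numpy as np

def poisson_matrix(n):
    """Standard 3-point stencil on n interior points, h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def two_grid(A, b, x, n_smooth=1):
    n = A.shape[0]                 # n is assumed odd here
    D = np.diag(A)
    for _ in range(n_smooth):      # pre-smoothing: damped Jacobi (w = 2/3)
        x = x + (2.0 / 3.0) * (b - A @ x) / D
    r = b - A @ x                  # fine-grid residual
    nc = (n - 1) // 2              # coarse grid: every other interior point
    P = np.zeros((n, nc))          # linear interpolation
    for j in range(nc):
        P[2 * j, j] = 0.5
        P[2 * j + 1, j] = 1.0
        P[2 * j + 2, j] = 0.5
    R = 0.5 * P.T                  # full-weighting restriction
    Ac = R @ A @ P                 # Galerkin coarse operator
    x = x + P @ np.linalg.solve(Ac, R @ r)   # coarse-grid correction
    for _ in range(n_smooth):      # post-smoothing
        x = x + (2.0 / 3.0) * (b - A @ x) / D
    return x

n = 31
A = poisson_matrix(n)
b = np.ones(n)
x = np.zeros(n)
for _ in range(20):
    x = two_grid(A, b, x)
```

Even with a single smoothing sweep before and after the coarse-grid
correction, the residual is reduced by a grid-independent factor per
cycle, consistent with the V- and W-cycle results summarized above.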