GENERATING INDEX FOR SINGLE PRECISION

# ---------------------------------
# Available SIMPLE DRIVER routines:
# ---------------------------------

file lapack/single/sgbsv.f
prec single
SGBSV computes the solution to a real system of linear equations A * X = B, where A is a band matrix of order N with KL subdiagonals and KU superdiagonals, and X and B are N-by-NRHS matrices. The LU decomposition with partial pivoting and row interchanges is used to factor A as A = L * U, where L is a product of permutation and unit lower triangular matrices with KL subdiagonals, and U is upper triangular with KL+KU superdiagonals. The factored form of A is then used to solve the system of equations A * X = B.

file lapack/single/sgees.f
prec single
SGEES computes for an N-by-N real nonsymmetric matrix A, the eigenvalues, the real Schur form T, and, optionally, the matrix of Schur vectors Z. This gives the Schur factorization A = Z*T*(Z**T). Optionally, it also orders the eigenvalues on the diagonal of the real Schur form so that selected eigenvalues are at the top left. The leading columns of Z then form an orthonormal basis for the invariant subspace corresponding to the selected eigenvalues. A matrix is in real Schur form if it is upper quasi-triangular with 1-by-1 and 2-by-2 blocks. 2-by-2 blocks will be standardized in the form
    [ a b ]
    [ c a ]
where b*c < 0. The eigenvalues of such a block are a +- sqrt(bc).

file lapack/single/sgeev.f
prec single
SGEEV computes for an N-by-N real nonsymmetric matrix A, the eigenvalues and, optionally, the left and/or right eigenvectors. The right eigenvector v(j) of A satisfies A * v(j) = lambda(j) * v(j), where lambda(j) is its eigenvalue. The left eigenvector u(j) of A satisfies u(j)**H * A = lambda(j) * u(j)**H, where u(j)**H denotes the conjugate transpose of u(j). The computed eigenvectors are normalized to have Euclidean norm equal to 1 and largest component real.

file lapack/single/sgegs.f
prec single
This routine is deprecated and has been replaced by routine SGGES. SGEGS computes the eigenvalues, real Schur form, and, optionally, left and/or right Schur vectors of a real matrix pair (A,B). Given two square matrices A and B, the generalized real Schur factorization has the form A = Q*S*Z**T, B = Q*T*Z**T, where Q and Z are orthogonal matrices, T is upper triangular, and S is an upper quasi-triangular matrix with 1-by-1 and 2-by-2 diagonal blocks, the 2-by-2 blocks corresponding to complex conjugate pairs of eigenvalues of (A,B). The columns of Q are the left Schur vectors and the columns of Z are the right Schur vectors. If only the eigenvalues of (A,B) are needed, the driver routine SGEGV should be used instead. See SGEGV for a description of the eigenvalues of the generalized nonsymmetric eigenvalue problem (GNEP).

file lapack/single/sgegv.f
prec single
This routine is deprecated and has been replaced by routine SGGEV. SGEGV computes the eigenvalues and, optionally, the left and/or right eigenvectors of a real matrix pair (A,B). Given two square matrices A and B, the generalized nonsymmetric eigenvalue problem (GNEP) is to find the eigenvalues lambda and corresponding (non-zero) eigenvectors x such that A*x = lambda*B*x. An alternate form is to find the eigenvalues mu and corresponding eigenvectors y such that mu*A*y = B*y. These two forms are equivalent with mu = 1/lambda and x = y if neither lambda nor mu is zero.
In order to deal with the case that lambda or mu is zero or small, two values alpha and beta are returned for each eigenvalue, such that lambda = alpha/beta and mu = beta/alpha. The vectors x and y in the above equations are right eigenvectors of the matrix pair (A,B). Vectors u and v satisfying u**H*A = lambda*u**H*B or mu*v**H*A = v**H*B are left eigenvectors of (A,B). Note: this routine performs "full balancing" on A and B -- see "Further Details", below.

file lapack/single/sgels.f
prec single
SGELS solves overdetermined or underdetermined real linear systems involving an M-by-N matrix A, or its transpose, using a QR or LQ factorization of A. It is assumed that A has full rank. The following options are provided:
1. If TRANS = 'N' and m >= n: find the least squares solution of an overdetermined system, i.e., solve the least squares problem minimize || B - A*X ||.
2. If TRANS = 'N' and m < n: find the minimum norm solution of an underdetermined system A * X = B.
3. If TRANS = 'T' and m >= n: find the minimum norm solution of an underdetermined system A**T * X = B.
4. If TRANS = 'T' and m < n: find the least squares solution of an overdetermined system, i.e., solve the least squares problem minimize || B - A**T * X ||.
Several right hand side vectors b and solution vectors x can be handled in a single call; they are stored as the columns of the M-by-NRHS right hand side matrix B and the N-by-NRHS solution matrix X.

file lapack/single/sgelsd.f
prec single
SGELSD computes the minimum-norm solution to a real linear least squares problem: minimize 2-norm(| b - A*x |) using the singular value decomposition (SVD) of A. A is an M-by-N matrix which may be rank-deficient. Several right hand side vectors b and solution vectors x can be handled in a single call; they are stored as the columns of the M-by-NRHS right hand side matrix B and the N-by-NRHS solution matrix X. The problem is solved in three steps:
(1) Reduce the coefficient matrix A to bidiagonal form with Householder transformations, reducing the original problem into a "bidiagonal least squares problem" (BLS).
(2) Solve the BLS using a divide and conquer approach.
(3) Apply back all the Householder transformations to solve the original least squares problem.
The effective rank of A is determined by treating as zero those singular values which are less than RCOND times the largest singular value. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none.

file lapack/single/sgelss.f
prec single
SGELSS computes the minimum norm solution to a real linear least squares problem: minimize 2-norm(| b - A*x |), using the singular value decomposition (SVD) of A. A is an M-by-N matrix which may be rank-deficient. Several right hand side vectors b and solution vectors x can be handled in a single call; they are stored as the columns of the M-by-NRHS right hand side matrix B and the N-by-NRHS solution matrix X. The effective rank of A is determined by treating as zero those singular values which are less than RCOND times the largest singular value.

file lapack/single/sgelsy.f
prec single
SGELSY computes the minimum-norm solution to a real linear least squares problem: minimize || A * X - B || using a complete orthogonal factorization of A.
A is an M-by-N matrix which may be rank-deficient. Several right hand side vectors b and solution vectors x can be handled in a single call; they are stored as the columns of the M-by-NRHS right hand side matrix B and the N-by-NRHS solution matrix X. The routine first computes a QR factorization with column pivoting:
    A * P = Q * [ R11 R12 ]
                [  0  R22 ]
with R11 defined as the largest leading submatrix whose estimated condition number is less than 1/RCOND. The order of R11, RANK, is the effective rank of A. Then, R22 is considered to be negligible, and R12 is annihilated by orthogonal transformations from the right, arriving at the complete orthogonal factorization:
    A * P = Q * [ T11 0 ] * Z
                [  0  0 ]
The minimum-norm solution is then
    X = P * Z' [ inv(T11)*Q1'*B ]
               [       0        ]
where Q1 consists of the first RANK columns of Q. This routine is basically identical to the original xGELSX except for three differences:
  o The call to the subroutine xGEQPF has been replaced by a call to the subroutine xGEQP3. This subroutine is a Blas-3 version of the QR factorization with column pivoting.
  o Matrix B (the right hand side) is updated with Blas-3.
  o The permutation of matrix B (the right hand side) is faster and simpler.

file lapack/single/sgesdd.f
prec single
SGESDD computes the singular value decomposition (SVD) of a real M-by-N matrix A, optionally computing the left and right singular vectors. If singular vectors are desired, it uses a divide-and-conquer algorithm. The SVD is written
    A = U * SIGMA * transpose(V)
where SIGMA is an M-by-N matrix which is zero except for its min(m,n) diagonal elements, U is an M-by-M orthogonal matrix, and V is an N-by-N orthogonal matrix. The diagonal elements of SIGMA are the singular values of A; they are real and non-negative, and are returned in descending order. The first min(m,n) columns of U and V are the left and right singular vectors of A. Note that the routine returns VT = V**T, not V. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none.

file lapack/single/sgesv.f
prec single
SGESV computes the solution to a real system of linear equations A * X = B, where A is an N-by-N matrix and X and B are N-by-NRHS matrices. The LU decomposition with partial pivoting and row interchanges is used to factor A as A = P * L * U, where P is a permutation matrix, L is unit lower triangular, and U is upper triangular. The factored form of A is then used to solve the system of equations A * X = B.

file lapack/single/sgesvd.f
prec single
SGESVD computes the singular value decomposition (SVD) of a real M-by-N matrix A, optionally computing the left and/or right singular vectors. The SVD is written
    A = U * SIGMA * transpose(V)
where SIGMA is an M-by-N matrix which is zero except for its min(m,n) diagonal elements, U is an M-by-M orthogonal matrix, and V is an N-by-N orthogonal matrix. The diagonal elements of SIGMA are the singular values of A; they are real and non-negative, and are returned in descending order. The first min(m,n) columns of U and V are the left and right singular vectors of A. Note that the routine returns V**T, not V.
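
As an illustration of how the SVD drivers above are typically called (this example is editorial and not part of the LAPACK sources), the following free-form Fortran sketch computes the full SVD of a small matrix with SGESVD. The test data, the dimensions, and the workspace size (the documented lower bound) are assumptions of the example; a production code would normally query the optimal LWORK first by calling the routine with LWORK = -1.

    ! Minimal sketch: full SVD of a random 4-by-3 single precision matrix.
    program svd_demo
      implicit none
      integer, parameter :: m = 4, n = 3
      integer :: info, lwork
      real :: a(m,n), s(n), u(m,m), vt(n,n)
      real, allocatable :: work(:)
      call random_number(a)                            ! any M-by-N data
      lwork = max(3*min(m,n) + max(m,n), 5*min(m,n))   ! documented minimum
      allocate(work(lwork))
      ! 'A','A': return all M columns of U and all N rows of V**T
      call sgesvd('A', 'A', m, n, a, m, s, u, m, vt, n, work, lwork, info)
      if (info == 0) print *, 'singular values:', s
    end program svd_demo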
file lapack/single/sgges.f
prec single
SGGES computes for a pair of N-by-N real nonsymmetric matrices (A,B), the generalized eigenvalues, the generalized real Schur form (S,T), and, optionally, the left and/or right matrices of Schur vectors (VSL and VSR). This gives the generalized Schur factorization
    (A,B) = ( (VSL)*S*(VSR)**T, (VSL)*T*(VSR)**T )
Optionally, it also orders the eigenvalues so that a selected cluster of eigenvalues appears in the leading diagonal blocks of the upper quasi-triangular matrix S and the upper triangular matrix T. The leading columns of VSL and VSR then form an orthonormal basis for the corresponding left and right eigenspaces (deflating subspaces). (If only the generalized eigenvalues are needed, use the driver SGGEV instead, which is faster.) A generalized eigenvalue for a pair of matrices (A,B) is a scalar w or a ratio alpha/beta = w, such that A - w*B is singular. It is usually represented as the pair (alpha,beta), as there is a reasonable interpretation for beta=0 or both being zero. A pair of matrices (S,T) is in generalized real Schur form if T is upper triangular with non-negative diagonal and S is block upper triangular with 1-by-1 and 2-by-2 blocks. 1-by-1 blocks correspond to real generalized eigenvalues, while 2-by-2 blocks of S will be "standardized" by making the corresponding elements of T have the form
    [ a 0 ]
    [ 0 b ]
and the pair of corresponding 2-by-2 blocks in S and T will have a complex conjugate pair of generalized eigenvalues.

file lapack/single/sggev.f
prec single
SGGEV computes for a pair of N-by-N real nonsymmetric matrices (A,B) the generalized eigenvalues, and optionally, the left and/or right generalized eigenvectors. A generalized eigenvalue for a pair of matrices (A,B) is a scalar lambda or a ratio alpha/beta = lambda, such that A - lambda*B is singular. It is usually represented as the pair (alpha,beta), as there is a reasonable interpretation for beta=0, and even for both being zero. The right eigenvector v(j) corresponding to the eigenvalue lambda(j) of (A,B) satisfies
    A * v(j) = lambda(j) * B * v(j).
The left eigenvector u(j) corresponding to the eigenvalue lambda(j) of (A,B) satisfies
    u(j)**H * A = lambda(j) * u(j)**H * B,
where u(j)**H is the conjugate-transpose of u(j).

file lapack/single/sggglm.f
prec single
SGGGLM solves a general Gauss-Markov linear model (GLM) problem:
    minimize || y ||_2   subject to   d = A*x + B*y
       x
where A is an N-by-M matrix, B is an N-by-P matrix, and d is a given N-vector. It is assumed that M <= N <= M+P, and rank(A) = M and rank( A B ) = N. Under these assumptions, the constrained equation is always consistent, and there is a unique solution x and a minimal 2-norm solution y, which is obtained using a generalized QR factorization of the matrices (A, B) given by
    A = Q*( R ),   B = Q*T*Z.
          ( 0 )
In particular, if matrix B is square nonsingular, then the problem GLM is equivalent to the following weighted linear least squares problem
    minimize || inv(B)*(d-A*x) ||_2
       x
where inv(B) denotes the inverse of B.

file lapack/single/sgglse.f
prec single
SGGLSE solves the linear equality-constrained least squares (LSE) problem:
    minimize || c - A*x ||_2   subject to   B*x = d
where A is an M-by-N matrix, B is a P-by-N matrix, c is a given M-vector, and d is a given P-vector. It is assumed that P <= N <= M+P, and
    rank(B) = P and rank( (A) ) = N.
                        ( (B) )
These conditions ensure that the LSE problem has a unique solution, which is obtained using a generalized RQ factorization of the matrices (B, A) given by
    B = (0 R)*Q,   A = Z*T*Q.
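
To make the equality-constrained driver concrete, here is a small free-form Fortran sketch (editorial, not part of this index) that minimizes || c - A*x ||_2 subject to a single constraint B*x = d using SGGLSE. The dimensions satisfy P <= N <= M+P as required above; the random data, the sum-to-one constraint, and LWORK = M+N+P (the documented minimum) are assumptions of the example.

    ! Minimal sketch: least squares with the constraint x(1)+x(2)+x(3) = 1.
    program lse_demo
      implicit none
      integer, parameter :: m = 5, n = 3, p = 1
      integer :: info, lwork
      real :: a(m,n), b(p,n), c(m), d(p), x(n)
      real, allocatable :: work(:)
      call random_number(a)
      call random_number(c)
      b = 1.0                       ! single constraint row: sum of x
      d(1) = 1.0
      lwork = m + n + p
      allocate(work(lwork))
      call sgglse(m, n, p, a, m, b, p, c, d, x, work, lwork, info)
      if (info == 0) print *, 'constrained solution x =', x
    end program lse_demo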
file lapack/single/sggsvd.f
prec single
SGGSVD computes the generalized singular value decomposition (GSVD) of an M-by-N real matrix A and P-by-N real matrix B:
    U'*A*Q = D1*( 0 R ),    V'*B*Q = D2*( 0 R )
where U, V and Q are orthogonal matrices, and Z' is the transpose of Z. Let K+L = the effective numerical rank of the matrix (A',B')', then R is a K+L-by-K+L nonsingular upper triangular matrix, D1 and D2 are M-by-(K+L) and P-by-(K+L) "diagonal" matrices and of the following structures, respectively:

If M-K-L >= 0,

                    K  L
       D1 =     K ( I  0 )
                L ( 0  C )
            M-K-L ( 0  0 )

                  K  L
       D2 =   L ( 0  S )
            P-L ( 0  0 )

                N-K-L  K    L
  ( 0 R ) = K (  0   R11  R12 )
            L (  0    0   R22 )

where

  C = diag( ALPHA(K+1), ... , ALPHA(K+L) ),
  S = diag( BETA(K+1),  ... , BETA(K+L) ),
  C**2 + S**2 = I.

R is stored in A(1:K+L,N-K-L+1:N) on exit.

If M-K-L < 0,

                  K M-K K+L-M
       D1 =   K ( I  0    0  )
            M-K ( 0  C    0  )

                    K M-K K+L-M
       D2 =   M-K ( 0  S    0  )
            K+L-M ( 0  0    I  )
              P-L ( 0  0    0  )

                   N-K-L  K   M-K  K+L-M
  ( 0 R ) =     K ( 0    R11  R12  R13  )
              M-K ( 0     0   R22  R23  )
            K+L-M ( 0     0    0   R33  )

where

  C = diag( ALPHA(K+1), ... , ALPHA(M) ),
  S = diag( BETA(K+1),  ... , BETA(M) ),
  C**2 + S**2 = I.

  ( R11 R12 R13 ) is stored in A(1:M, N-K-L+1:N), and R33 is stored
  (  0  R22 R23 )
  in B(M-K+1:L,N+M-K-L+1:N) on exit.

The routine computes C, S, R, and optionally the orthogonal transformation matrices U, V and Q. In particular, if B is an N-by-N nonsingular matrix, then the GSVD of A and B implicitly gives the SVD of A*inv(B):
    A*inv(B) = U*(D1*inv(D2))*V'.
If ( A',B')' has orthonormal columns, then the GSVD of A and B is also equal to the CS decomposition of A and B. Furthermore, the GSVD can be used to derive the solution of the eigenvalue problem:
    A'*A x = lambda* B'*B x.
In some literature, the GSVD of A and B is presented in the form
    U'*A*X = ( 0 D1 ),   V'*B*X = ( 0 D2 )
where U and V are orthogonal and X is nonsingular, D1 and D2 are ``diagonal''. The former GSVD form can be converted to the latter form by taking the nonsingular matrix X as
    X = Q*( I    0    )
          ( 0 inv(R) ).

file lapack/single/spbsv.f
prec single
SPBSV computes the solution to a real system of linear equations A * X = B, where A is an N-by-N symmetric positive definite band matrix and X and B are N-by-NRHS matrices. The Cholesky decomposition is used to factor A as
    A = U**T * U, if UPLO = 'U', or
    A = L * L**T, if UPLO = 'L',
where U is an upper triangular band matrix, and L is a lower triangular band matrix, with the same number of superdiagonals or subdiagonals as A. The factored form of A is then used to solve the system of equations A * X = B.

file lapack/single/sposv.f
prec single
SPOSV computes the solution to a real system of linear equations A * X = B, where A is an N-by-N symmetric positive definite matrix and X and B are N-by-NRHS matrices. The Cholesky decomposition is used to factor A as
    A = U**T * U, if UPLO = 'U', or
    A = L * L**T, if UPLO = 'L',
where U is an upper triangular matrix and L is a lower triangular matrix. The factored form of A is then used to solve the system of equations A * X = B.

file lapack/single/sppsv.f
prec single
SPPSV computes the solution to a real system of linear equations A * X = B, where A is an N-by-N symmetric positive definite matrix stored in packed format and X and B are N-by-NRHS matrices. The Cholesky decomposition is used to factor A as
    A = U**T * U, if UPLO = 'U', or
    A = L * L**T, if UPLO = 'L',
where U is an upper triangular matrix and L is a lower triangular matrix. The factored form of A is then used to solve the system of equations A * X = B.
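
The three Cholesky-based drivers above (SPBSV, SPOSV, SPPSV) share the same calling pattern; as a minimal free-form Fortran sketch (editorial, with a deliberately diagonally dominant and hence positive definite test matrix), SPOSV can be used as follows.

    ! Minimal sketch: solve A*X = B with A symmetric positive definite.
    program posv_demo
      implicit none
      integer, parameter :: n = 3, nrhs = 1
      integer :: info, i
      real :: a(n,n), b(n,nrhs)
      a = 1.0
      do i = 1, n
        a(i,i) = real(n) + 1.0      ! strictly diagonally dominant => SPD
      end do
      b = 1.0
      ! On exit, B holds the solution X and A holds the Cholesky factor U.
      call sposv('U', n, nrhs, a, n, b, n, info)
      if (info == 0) print *, 'solution:', b(:,1)
    end program posv_demo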
file lapack/single/ssbev.f prec single SSBEV computes all the eigenvalues and, optionally, eigenvectors of a real symmetric band matrix A. file lapack/single/ssbevd.f prec single SSBEVD computes all the eigenvalues and, optionally, eigenvectors of a real symmetric band matrix A. If eigenvectors are desired, it uses a divide and conquer algorithm. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. file lapack/single/ssbgv.f prec single SSBGV computes all the eigenvalues, and optionally, the eigenvectors of a real generalized symmetric-definite banded eigenproblem, of the form A*x=(lambda)*B*x. Here A and B are assumed to be symmetric and banded, and B is also positive definite. file lapack/single/ssbgvd.f prec single SSBGVD computes all the eigenvalues, and optionally, the eigenvectors of a real generalized symmetric-definite banded eigenproblem, of the form A*x=(lambda)*B*x. Here A and B are assumed to be symmetric and banded, and B is also positive definite. If eigenvectors are desired, it uses a divide and conquer algorithm. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. file lapack/single/sspev.f prec single SSPEV computes all the eigenvalues and, optionally, eigenvectors of a real symmetric matrix A in packed storage. file lapack/single/sspevd.f prec single SSPEVD computes all the eigenvalues and, optionally, eigenvectors of a real symmetric matrix A in packed storage. If eigenvectors are desired, it uses a divide and conquer algorithm. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. file lapack/single/sspgv.f prec single SSPGV computes all the eigenvalues and, optionally, the eigenvectors of a real generalized symmetric-definite eigenproblem, of the form A*x=(lambda)*B*x, A*Bx=(lambda)*x, or B*A*x=(lambda)*x. Here A and B are assumed to be symmetric, stored in packed format, and B is also positive definite. file lapack/single/sspgvd.f prec single SSPGVD computes all the eigenvalues, and optionally, the eigenvectors of a real generalized symmetric-definite eigenproblem, of the form A*x=(lambda)*B*x, A*Bx=(lambda)*x, or B*A*x=(lambda)*x. Here A and B are assumed to be symmetric, stored in packed format, and B is also positive definite. If eigenvectors are desired, it uses a divide and conquer algorithm. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. 
It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. file lapack/single/sspsv.f prec single SSPSV computes the solution to a real system of linear equations A * X = B, where A is an N-by-N symmetric matrix stored in packed format and X and B are N-by-NRHS matrices. The diagonal pivoting method is used to factor A as A = U * D * U**T, if UPLO = 'U', or A = L * D * L**T, if UPLO = 'L', where U (or L) is a product of permutation and unit upper (lower) triangular matrices, D is symmetric and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. The factored form of A is then used to solve the system of equations A * X = B. file lapack/single/sstedc.f prec single SSTEDC computes all eigenvalues and, optionally, eigenvectors of a symmetric tridiagonal matrix using the divide and conquer method. The eigenvectors of a full or band real symmetric matrix can also be found if SSYTRD or SSPTRD or SSBTRD has been used to reduce this matrix to tridiagonal form. This code makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. See SLAED3 for details. file lapack/single/sstev.f prec single SSTEV computes all eigenvalues and, optionally, eigenvectors of a real symmetric tridiagonal matrix A. file lapack/single/sstevd.f prec single SSTEVD computes all eigenvalues and, optionally, eigenvectors of a real symmetric tridiagonal matrix. If eigenvectors are desired, it uses a divide and conquer algorithm. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. file lapack/single/sstevr.f prec single SSTEVR computes selected eigenvalues and, optionally, eigenvectors of a real symmetric tridiagonal matrix T. Eigenvalues and eigenvectors can be selected by specifying either a range of values or a range of indices for the desired eigenvalues. Whenever possible, SSTEVR calls SSTEMR to compute the eigenspectrum using Relatively Robust Representations. SSTEMR computes eigenvalues by the dqds algorithm, while orthogonal eigenvectors are computed from various "good" L D L^T representations (also known as Relatively Robust Representations). Gram-Schmidt orthogonalization is avoided as far as possible. More specifically, the various steps of the algorithm are as follows. For the i-th unreduced block of T, (a) Compute T - sigma_i = L_i D_i L_i^T, such that L_i D_i L_i^T is a relatively robust representation, (b) Compute the eigenvalues, lambda_j, of L_i D_i L_i^T to high relative accuracy by the dqds algorithm, (c) If there is a cluster of close eigenvalues, "choose" sigma_i close to the cluster, and go to step (a), (d) Given the approximate eigenvalue lambda_j of L_i D_i L_i^T, compute the corresponding eigenvector by forming a rank-revealing twisted factorization. The desired accuracy of the output can be specified by the input parameter ABSTOL. 
For more details, see "A new O(n^2) algorithm for the symmetric tridiagonal eigenvalue/eigenvector problem", by Inderjit Dhillon, Computer Science Division Technical Report No. UCB//CSD-97-971, UC Berkeley, May 1997. Note 1 : SSTEVR calls SSTEMR when the full spectrum is requested on machines which conform to the ieee-754 floating point standard. SSTEVR calls SSTEBZ and SSTEIN on non-ieee machines and when partial spectrum requests are made. Normal execution of SSTEMR may create NaNs and infinities and hence may abort due to a floating point exception in environments which do not handle NaNs and infinities in the ieee standard default manner. file lapack/single/ssyev.f prec single SSYEV computes all eigenvalues and, optionally, eigenvectors of a real symmetric matrix A. file lapack/single/ssyevd.f prec single SSYEVD computes all eigenvalues and, optionally, eigenvectors of a real symmetric matrix A. If eigenvectors are desired, it uses a divide and conquer algorithm. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. Because of large use of BLAS of level 3, SSYEVD needs N**2 more workspace than SSYEVX. file lapack/single/ssyevr.f prec single SSYEVR computes selected eigenvalues and, optionally, eigenvectors of a real symmetric matrix A. Eigenvalues and eigenvectors can be selected by specifying either a range of values or a range of indices for the desired eigenvalues. SSYEVR first reduces the matrix A to tridiagonal form T with a call to SSYTRD. Then, whenever possible, SSYEVR calls SSTEMR to compute the eigenspectrum using Relatively Robust Representations. SSTEMR computes eigenvalues by the dqds algorithm, while orthogonal eigenvectors are computed from various "good" L D L^T representations (also known as Relatively Robust Representations). Gram-Schmidt orthogonalization is avoided as far as possible. More specifically, the various steps of the algorithm are as follows. For each unreduced block (submatrix) of T, (a) Compute T - sigma I = L D L^T, so that L and D define all the wanted eigenvalues to high relative accuracy. This means that small relative changes in the entries of D and L cause only small relative changes in the eigenvalues and eigenvectors. The standard (unfactored) representation of the tridiagonal matrix T does not have this property in general. (b) Compute the eigenvalues to suitable accuracy. If the eigenvectors are desired, the algorithm attains full accuracy of the computed eigenvalues only right before the corresponding vectors have to be computed, see steps c) and d). (c) For each cluster of close eigenvalues, select a new shift close to the cluster, find a new factorization, and refine the shifted eigenvalues to suitable accuracy. (d) For each eigenvalue with a large enough relative separation compute the corresponding eigenvector by forming a rank revealing twisted factorization. Go back to (c) for any clusters that remain. The desired accuracy of the output can be specified by the input parameter ABSTOL. For more details, see SSTEMR's documentation and: - Inderjit S. Dhillon and Beresford N. 
Parlett: "Multiple representations to compute orthogonal eigenvectors of symmetric tridiagonal matrices," Linear Algebra and its Applications, 387(1), pp. 1-28, August 2004. - Inderjit Dhillon and Beresford Parlett: "Orthogonal Eigenvectors and Relative Gaps," SIAM Journal on Matrix Analysis and Applications, Vol. 25, 2004. Also LAPACK Working Note 154. - Inderjit Dhillon: "A new O(n^2) algorithm for the symmetric tridiagonal eigenvalue/eigenvector problem", Computer Science Division Technical Report No. UCB/CSD-97-971, UC Berkeley, May 1997. Note 1 : SSYEVR calls SSTEMR when the full spectrum is requested on machines which conform to the ieee-754 floating point standard. SSYEVR calls SSTEBZ and SSTEIN on non-ieee machines and when partial spectrum requests are made. Normal execution of SSTEMR may create NaNs and infinities and hence may abort due to a floating point exception in environments which do not handle NaNs and infinities in the ieee standard default manner. file lapack/single/ssygv.f prec single SSYGV computes all the eigenvalues, and optionally, the eigenvectors of a real generalized symmetric-definite eigenproblem, of the form A*x=(lambda)*B*x, A*Bx=(lambda)*x, or B*A*x=(lambda)*x. Here A and B are assumed to be symmetric and B is also positive definite. file lapack/single/ssygvd.f prec single SSYGVD computes all the eigenvalues, and optionally, the eigenvectors of a real generalized symmetric-definite eigenproblem, of the form A*x=(lambda)*B*x, A*Bx=(lambda)*x, or B*A*x=(lambda)*x. Here A and B are assumed to be symmetric and B is also positive definite. If eigenvectors are desired, it uses a divide and conquer algorithm. The divide and conquer algorithm makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray X-MP, Cray Y-MP, Cray C-90, or Cray-2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. file lapack/single/ssysv.f prec single SSYSV computes the solution to a real system of linear equations A * X = B, where A is an N-by-N symmetric matrix and X and B are N-by-NRHS matrices. The diagonal pivoting method is used to factor A as A = U * D * U**T, if UPLO = 'U', or A = L * D * L**T, if UPLO = 'L', where U (or L) is a product of permutation and unit upper (lower) triangular matrices, and D is symmetric and block diagonal with 1-by-1 and 2-by-2 diagonal blocks. The factored form of A is then used to solve the system of equations A * X = B. 
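
As a usage sketch for the symmetric indefinite driver just described (editorial free-form Fortran, not part of the library), the code below first performs the workspace query supported by the LAPACK drivers (LWORK = -1) and then solves the system with SSYSV. The random test matrix is an assumption of the example; only its upper triangle is referenced because UPLO = 'U'.

    ! Minimal sketch: workspace query followed by the actual SSYSV solve.
    program sysv_demo
      implicit none
      integer, parameter :: n = 4, nrhs = 1
      integer :: info, lwork, ipiv(n)
      real :: a(n,n), b(n,nrhs), wkopt(1)
      real, allocatable :: work(:)
      call random_number(a)           ! only the upper triangle is used
      b = 1.0
      call ssysv('U', n, nrhs, a, n, ipiv, b, n, wkopt, -1, info)   ! query
      lwork = int(wkopt(1))
      allocate(work(lwork))
      call ssysv('U', n, nrhs, a, n, ipiv, b, n, work, lwork, info)
      if (info == 0) print *, 'solution:', b(:,1)
    end program sysv_demo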
# --------------------------------- # Available EXPERT DRIVER routines: # --------------------------------- file lapack/single/sgbsvx.f prec single file lapack/single/sgbsvxx.f prec single file lapack/single/sgeesx.f prec single file lapack/single/sgeevx.f prec single file lapack/single/sgelsx.f prec single file lapack/single/sgesvx.f prec single file lapack/single/sgesvxx.f prec single file lapack/single/sggesx.f prec single file lapack/single/sggevx.f prec single file lapack/single/spbsvx.f prec single file lapack/single/sposvx.f prec single file lapack/single/sposvxx.f prec single file lapack/single/sppsvx.f prec single file lapack/single/ssbevx.f prec single file lapack/single/ssbgvx.f prec single file lapack/single/sspevx.f prec single file lapack/single/sspgvx.f prec single file lapack/single/sspsvx.f prec single file lapack/single/sstevx.f prec single file lapack/single/ssyevx.f prec single file lapack/single/ssygvx.f prec single file lapack/single/ssysvx.f prec single # ----------------------------- # Available AUXILIARY routines: # ----------------------------- file lapack/single/scsum1.f prec single SCSUM1 takes the sum of the absolute values of a complex vector and returns a single precision result. Based on SCASUM from the Level 1 BLAS. The change is to use the 'genuine' absolute value. Contributed by Nick Higham for use with CLACON. file lapack/single/sgesc2.f prec single SGESC2 solves a system of linear equations A * X = scale* RHS with a general N-by-N matrix A using the LU factorization with complete pivoting computed by SGETC2. file lapack/single/sgetc2.f prec single SGETC2 computes an LU factorization with complete pivoting of the n-by-n matrix A. The factorization has the form A = P * L * U * Q, where P and Q are permutation matrices, L is lower triangular with unit diagonal elements and U is upper triangular. This is the Level 2 BLAS algorithm. file lapack/single/sgtts2.f prec single SGTTS2 solves one of the systems of equations A*X = B or A'*X = B, with a tridiagonal matrix A using the LU factorization computed by SGTTRF. file lapack/single/sisnan.f prec single SISNAN returns .TRUE. if its argument is NaN, and .FALSE. otherwise. To be replaced by the Fortran 2003 intrinsic in the future. file lapack/single/slabad.f prec single SLABAD takes as input the values computed by SLAMCH for underflow and overflow, and returns the square root of each of these values if the log of LARGE is sufficiently large. This subroutine is intended to identify machines with a large exponent range, such as the Crays, and redefine the underflow and overflow limits to be the square roots of the values computed by SLAMCH. This subroutine is needed because SLAMCH does not compensate for poor arithmetic in the upper half of the exponent range, as is found on a Cray. file lapack/single/slabrd.f prec single SLABRD reduces the first NB rows and columns of a real general m by n matrix A to upper or lower bidiagonal form by an orthogonal transformation Q' * A * P, and returns the matrices X and Y which are needed to apply the transformation to the unreduced part of A. If m >= n, A is reduced to upper bidiagonal form; if m < n, to lower bidiagonal form. This is an auxiliary routine called by SGEBRD file lapack/single/slacn2.f prec single SLACN2 estimates the 1-norm of a square, real matrix A. Reverse communication is used for evaluating matrix-vector products. file lapack/single/slacon.f prec single SLACON estimates the 1-norm of a square, real matrix A. 
Reverse communication is used for evaluating matrix-vector products.

file lapack/single/slacpy.f
prec single
SLACPY copies all or part of a two-dimensional matrix A to another matrix B.

file lapack/single/sladiv.f
prec single
SLADIV performs complex division in real arithmetic:
               a + i*b
    p + i*q = ---------
               c + i*d
The algorithm is due to Robert L. Smith and can be found in D. Knuth, The Art of Computer Programming, Vol. 2, p. 195.

file lapack/single/slae2.f
prec single
SLAE2 computes the eigenvalues of a 2-by-2 symmetric matrix
    [ A B ]
    [ B C ].
On return, RT1 is the eigenvalue of larger absolute value, and RT2 is the eigenvalue of smaller absolute value.

file lapack/single/slaebz.f
prec single
SLAEBZ contains the iteration loops which compute and use the function N(w), which is the count of eigenvalues of a symmetric tridiagonal matrix T less than or equal to its argument w. It performs a choice of two types of loops:
IJOB=1, followed by IJOB=2: It takes as input a list of intervals and returns a list of sufficiently small intervals whose union contains the same eigenvalues as the union of the original intervals. The input intervals are (AB(j,1),AB(j,2)], j=1,...,MINP. The output interval (AB(j,1),AB(j,2)] will contain eigenvalues NAB(j,1)+1,...,NAB(j,2), where 1 <= j <= MOUT.
IJOB=3: It performs a binary search in each input interval (AB(j,1),AB(j,2)] for a point w(j) such that N(w(j))=NVAL(j), and uses C(j) as the starting point of the search. If such a w(j) is found, then on output AB(j,1)=AB(j,2)=w. If no such w(j) is found, then on output (AB(j,1),AB(j,2)] will be a small interval containing the point where N(w) jumps through NVAL(j), unless that point lies outside the initial interval.
Note that the intervals are in all cases half-open intervals, i.e., of the form (a,b], which includes b but not a. To avoid underflow, the matrix should be scaled so that its largest element is no greater than overflow**(1/2) * underflow**(1/4) in absolute value. To assure the most accurate computation of small eigenvalues, the matrix should be scaled to be not much smaller than that, either. See W. Kahan, "Accurate Eigenvalues of a Symmetric Tridiagonal Matrix", Report CS41, Computer Science Dept., Stanford University, July 21, 1966. Note: the arguments are, in general, *not* checked for unreasonable values.

file lapack/single/slaein.f
prec single
SLAEIN uses inverse iteration to find a right or left eigenvector corresponding to the eigenvalue (WR,WI) of a real upper Hessenberg matrix H.

file lapack/single/slaev2.f
prec single
SLAEV2 computes the eigendecomposition of a 2-by-2 symmetric matrix
    [ A B ]
    [ B C ].
On return, RT1 is the eigenvalue of larger absolute value, RT2 is the eigenvalue of smaller absolute value, and (CS1,SN1) is the unit right eigenvector for RT1, giving the decomposition
    [ CS1  SN1 ] [ A B ] [ CS1 -SN1 ] = [ RT1  0  ]
    [-SN1  CS1 ] [ B C ] [ SN1  CS1 ]   [  0  RT2 ].

file lapack/single/slaexc.f
prec single
SLAEXC swaps adjacent diagonal blocks T11 and T22 of order 1 or 2 in an upper quasi-triangular matrix T by an orthogonal similarity transformation. T must be in Schur canonical form, that is, block upper triangular with 1-by-1 and 2-by-2 diagonal blocks; each 2-by-2 diagonal block has its diagonal elements equal and its off-diagonal elements of opposite sign.

file lapack/single/slag2.f
prec single
SLAG2 computes the eigenvalues of a 2 x 2 generalized eigenvalue problem A - w B, with scaling as necessary to avoid over-/underflow.
The scaling factor "s" results in a modified eigenvalue equation
    s A - w B,
where s is a non-negative scaling factor chosen so that w, w B, and s A do not overflow and, if possible, do not underflow, either.

file lapack/single/slag2d.f
prec single
SLAG2D converts a SINGLE PRECISION matrix, SA, to a DOUBLE PRECISION matrix, A. Note that while it is possible to overflow while converting from double to single, it is not possible to overflow when converting from single to double. This is an auxiliary routine so there is no argument checking.

file lapack/single/slags2.f
prec single
SLAGS2 computes 2-by-2 orthogonal matrices U, V and Q, such that if ( UPPER ) then
    U'*A*Q = U'*( A1 A2 )*Q = ( x  0 )
                ( 0  A3 )     ( x  x )
and
    V'*B*Q = V'*( B1 B2 )*Q = ( x  0 )
                ( 0  B3 )     ( x  x )
or if ( .NOT.UPPER ) then
    U'*A*Q = U'*( A1 0  )*Q = ( x  x )
                ( A2 A3 )     ( 0  x )
and
    V'*B*Q = V'*( B1 0  )*Q = ( x  x )
                ( B2 B3 )     ( 0  x )
The rows of the transformed A and B are parallel, where
    U = (  CSU  SNU ),   V = (  CSV  SNV ),   Q = (  CSQ  SNQ )
        ( -SNU  CSU )        ( -SNV  CSV )        ( -SNQ  CSQ )
Z' denotes the transpose of Z.

file lapack/single/slagtm.f
prec single
SLAGTM performs a matrix-vector product of the form
    B := alpha * A * X + beta * B
where A is a tridiagonal matrix of order N, B and X are N by NRHS matrices, and alpha and beta are real scalars, each of which may be 0., 1., or -1.

file lapack/single/slagts.f
prec single
SLAGTS may be used to solve one of the systems of equations
    (T - lambda*I)*x = y   or   (T - lambda*I)'*x = y,
where T is an n by n tridiagonal matrix, for x, following the factorization of (T - lambda*I) as
    (T - lambda*I) = P*L*U,
by routine SLAGTF. The choice of equation to be solved is controlled by the argument JOB, and in each case there is an option to perturb zero or very small diagonal elements of U, this option being intended for use in applications such as inverse iteration.

file lapack/single/slagv2.f
prec single
SLAGV2 computes the Generalized Schur factorization of a real 2-by-2 matrix pencil (A,B) where B is upper triangular. This routine computes orthogonal (rotation) matrices given by CSL, SNL and CSR, SNR such that
1) if the pencil (A,B) has two real eigenvalues (include 0/0 or 1/0 types), then
    [ a11 a12 ] := [  CSL  SNL ] [ a11 a12 ] [  CSR -SNR ]
    [  0  a22 ]    [ -SNL  CSL ] [ a21 a22 ] [  SNR  CSR ]

    [ b11 b12 ] := [  CSL  SNL ] [ b11 b12 ] [  CSR -SNR ]
    [  0  b22 ]    [ -SNL  CSL ] [  0  b22 ] [  SNR  CSR ],
2) if the pencil (A,B) has a pair of complex conjugate eigenvalues, then
    [ a11 a12 ] := [  CSL  SNL ] [ a11 a12 ] [  CSR -SNR ]
    [ a21 a22 ]    [ -SNL  CSL ] [ a21 a22 ] [  SNR  CSR ]

    [ b11  0  ] := [  CSL  SNL ] [ b11 b12 ] [  CSR -SNR ]
    [  0  b22 ]    [ -SNL  CSL ] [  0  b22 ] [  SNR  CSR ]
   where b11 >= b22 > 0.

file lapack/single/slahqr.f
prec single

file lapack/single/slahr2.f
prec single
SLAHR2 reduces the first NB columns of a real general n-by-(n-k+1) matrix A so that elements below the k-th subdiagonal are zero. The reduction is performed by an orthogonal similarity transformation Q' * A * Q. The routine returns the matrices V and T which determine Q as a block reflector I - V*T*V', and also the matrix Y = A * V * T. This is an auxiliary routine called by SGEHRD.

file lapack/single/slahrd.f
prec single
SLAHRD reduces the first NB columns of a real general n-by-(n-k+1) matrix A so that elements below the k-th subdiagonal are zero. The reduction is performed by an orthogonal similarity transformation Q' * A * Q. The routine returns the matrices V and T which determine Q as a block reflector I - V*T*V', and also the matrix Y = A * V * T.
This is an OBSOLETE auxiliary routine. This routine will be 'deprecated' in a future release. Please use the new routine SLAHR2 instead.

file lapack/single/slaic1.f
prec single
SLAIC1 applies one step of incremental condition estimation in its simplest version: Let x, twonorm(x) = 1, be an approximate singular vector of a j-by-j lower triangular matrix L, such that twonorm(L*x) = sest. Then SLAIC1 computes sestpr, s, c such that the vector
            [ s*x ]
    xhat =  [  c  ]
is an approximate singular vector of
            [ L     0   ]
    Lhat =  [ w'  gamma ]
in the sense that twonorm(Lhat*xhat) = sestpr. Depending on JOB, an estimate for the largest or smallest singular value is computed. Note that [s c]' and sestpr**2 is an eigenpair of the system
    diag(sest*sest, 0) + [alpha gamma] * [ alpha ]
                                         [ gamma ]
where alpha = x'*w.

file lapack/single/slaisnan.f
prec single
This routine is not for general use. It exists solely to avoid over-optimization in SISNAN. SLAISNAN checks for NaNs by comparing its two arguments for inequality. NaN is the only floating-point value where NaN != NaN returns .TRUE. To check for NaNs, pass the same variable as both arguments. A compiler must assume that the two arguments are not the same variable, and the test will not be optimized away. Interprocedural or whole-program optimization may delete this test. The ISNAN functions will be replaced by the correct Fortran 2003 intrinsic once the intrinsic is widely available.

file lapack/single/slaln2.f
prec single
SLALN2 solves a system of the form
    (ca A - w D) X = s B   or   (ca A' - w D) X = s B
with possible scaling ("s") and perturbation of A. (A' means A-transpose.) A is an NA x NA real matrix, ca is a real scalar, D is an NA x NA real diagonal matrix, w is a real or complex value, and X and B are NA x 1 matrices -- real if w is real, complex if w is complex. NA may be 1 or 2. If w is complex, X and B are represented as NA x 2 matrices, the first column of each being the real part and the second being the imaginary part. "s" is a scaling factor (.LE. 1), computed by SLALN2, which is so chosen that X can be computed without overflow. X is further scaled if necessary to assure that norm(ca A - w D)*norm(X) is less than overflow. If both singular values of (ca A - w D) are less than SMIN, SMIN*identity will be used instead of (ca A - w D). If only one singular value is less than SMIN, one element of (ca A - w D) will be perturbed enough to make the smallest singular value roughly SMIN. If both singular values are at least SMIN, (ca A - w D) will not be perturbed. In any case, the perturbation will be at most some small multiple of max( SMIN, ulp*norm(ca A - w D) ). The singular values are computed by infinity-norm approximations, and thus will only be correct to a factor of 2 or so. Note: all input quantities are assumed to be smaller than overflow by a reasonable factor. (See BIGNUM.)

file lapack/single/slaneg.f
prec single
SLANEG computes the Sturm count, the number of negative pivots encountered while factoring tridiagonal T - sigma I = L D L^T. This implementation works directly on the factors without forming the tridiagonal matrix T. The Sturm count is also the number of eigenvalues of T less than sigma. This routine is called from SLARRB. The current routine does not use the PIVMIN parameter but rather requires IEEE-754 propagation of Infinities and NaNs. This routine also has no input range restrictions but does require default exception handling such that x/0 produces Inf when x is non-zero, and Inf/Inf produces NaN.
For more information, see: Marques, Riedy, and Voemel, "Benefits of IEEE-754 Features in Modern Symmetric Tridiagonal Eigensolvers," SIAM Journal on Scientific Computing, v28, n5, 2006. DOI 10.1137/050641624 (Tech report version in LAWN 172 with the same title.) file lapack/single/slangb.f prec single SLANGB returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of an n by n band matrix A, with kl sub-diagonals and ku super-diagonals. Description =========== SLANGB returns the value SLANGB = ( max(abs(A(i,j))), NORM = 'M' or 'm' ( ( norm1(A), NORM = '1', 'O' or 'o' ( ( normI(A), NORM = 'I' or 'i' ( ( normF(A), NORM = 'F', 'f', 'E' or 'e' where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a consistent matrix norm. file lapack/single/slange.f prec single SLANGE returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a real matrix A. Description =========== SLANGE returns the value SLANGE = ( max(abs(A(i,j))), NORM = 'M' or 'm' ( ( norm1(A), NORM = '1', 'O' or 'o' ( ( normI(A), NORM = 'I' or 'i' ( ( normF(A), NORM = 'F', 'f', 'E' or 'e' where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a consistent matrix norm. file lapack/single/slangt.f prec single SLANGT returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a real tridiagonal matrix A. Description =========== SLANGT returns the value SLANGT = ( max(abs(A(i,j))), NORM = 'M' or 'm' ( ( norm1(A), NORM = '1', 'O' or 'o' ( ( normI(A), NORM = 'I' or 'i' ( ( normF(A), NORM = 'F', 'f', 'E' or 'e' where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a consistent matrix norm. file lapack/single/slanhs.f prec single SLANHS returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a Hessenberg matrix A. Description =========== SLANHS returns the value SLANHS = ( max(abs(A(i,j))), NORM = 'M' or 'm' ( ( norm1(A), NORM = '1', 'O' or 'o' ( ( normI(A), NORM = 'I' or 'i' ( ( normF(A), NORM = 'F', 'f', 'E' or 'e' where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a consistent matrix norm. file lapack/single/slansb.f prec single SLANSB returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of an n by n symmetric band matrix A, with k super-diagonals. 
Description =========== SLANSB returns the value SLANSB = ( max(abs(A(i,j))), NORM = 'M' or 'm' ( ( norm1(A), NORM = '1', 'O' or 'o' ( ( normI(A), NORM = 'I' or 'i' ( ( normF(A), NORM = 'F', 'f', 'E' or 'e' where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a consistent matrix norm. file lapack/single/slansp.f prec single SLANSP returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a real symmetric matrix A, supplied in packed form. Description =========== SLANSP returns the value SLANSP = ( max(abs(A(i,j))), NORM = 'M' or 'm' ( ( norm1(A), NORM = '1', 'O' or 'o' ( ( normI(A), NORM = 'I' or 'i' ( ( normF(A), NORM = 'F', 'f', 'E' or 'e' where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a consistent matrix norm. file lapack/single/slanst.f prec single SLANST returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a real symmetric tridiagonal matrix A. Description =========== SLANST returns the value SLANST = ( max(abs(A(i,j))), NORM = 'M' or 'm' ( ( norm1(A), NORM = '1', 'O' or 'o' ( ( normI(A), NORM = 'I' or 'i' ( ( normF(A), NORM = 'F', 'f', 'E' or 'e' where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a consistent matrix norm. file lapack/single/slansy.f prec single SLANSY returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a real symmetric matrix A. Description =========== SLANSY returns the value SLANSY = ( max(abs(A(i,j))), NORM = 'M' or 'm' ( ( norm1(A), NORM = '1', 'O' or 'o' ( ( normI(A), NORM = 'I' or 'i' ( ( normF(A), NORM = 'F', 'f', 'E' or 'e' where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a consistent matrix norm. file lapack/single/slantb.f prec single SLANTB returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of an n by n triangular band matrix A, with ( k + 1 ) diagonals. Description =========== SLANTB returns the value SLANTB = ( max(abs(A(i,j))), NORM = 'M' or 'm' ( ( norm1(A), NORM = '1', 'O' or 'o' ( ( normI(A), NORM = 'I' or 'i' ( ( normF(A), NORM = 'F', 'f', 'E' or 'e' where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a consistent matrix norm. file lapack/single/slantp.f prec single SLANTP returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a triangular matrix A, supplied in packed form. 
Description =========== SLANTP returns the value SLANTP = ( max(abs(A(i,j))), NORM = 'M' or 'm' ( ( norm1(A), NORM = '1', 'O' or 'o' ( ( normI(A), NORM = 'I' or 'i' ( ( normF(A), NORM = 'F', 'f', 'E' or 'e' where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a consistent matrix norm. file lapack/single/slantr.f prec single SLANTR returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a trapezoidal or triangular matrix A. Description =========== SLANTR returns the value SLANTR = ( max(abs(A(i,j))), NORM = 'M' or 'm' ( ( norm1(A), NORM = '1', 'O' or 'o' ( ( normI(A), NORM = 'I' or 'i' ( ( normF(A), NORM = 'F', 'f', 'E' or 'e' where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a consistent matrix norm. file lapack/single/slanv2.f prec single SLANV2 computes the Schur factorization of a real 2-by-2 nonsymmetric matrix in standard form: [ A B ] = [ CS -SN ] [ AA BB ] [ CS SN ] [ C D ] [ SN CS ] [ CC DD ] [-SN CS ] where either 1) CC = 0 so that AA and DD are real eigenvalues of the matrix, or 2) AA = DD and BB*CC < 0, so that AA + or - sqrt(BB*CC) are complex conjugate eigenvalues. file lapack/single/slapll.f prec single Given two column vectors X and Y, let A = ( X Y ). The subroutine first computes the QR factorization of A = Q*R, and then computes the SVD of the 2-by-2 upper triangular matrix R. The smaller singular value of R is returned in SSMIN, which is used as the measurement of the linear dependency of the vectors X and Y. file lapack/single/slapmt.f prec single SLAPMT rearranges the columns of the M by N matrix X as specified by the permutation K(1),K(2),...,K(N) of the integers 1,...,N. If FORWRD = .TRUE., forward permutation: X(*,K(J)) is moved X(*,J) for J = 1,2,...,N. If FORWRD = .FALSE., backward permutation: X(*,J) is moved to X(*,K(J)) for J = 1,2,...,N. file lapack/single/slapy2.f prec single SLAPY2 returns sqrt(x**2+y**2), taking care not to cause unnecessary overflow. file lapack/single/slapy3.f prec single SLAPY3 returns sqrt(x**2+y**2+z**2), taking care not to cause unnecessary overflow. file lapack/single/slaqgb.f prec single SLAQGB equilibrates a general M by N band matrix A with KL subdiagonals and KU superdiagonals using the row and scaling factors in the vectors R and C. file lapack/single/slaqge.f prec single SLAQGE equilibrates a general M by N matrix A using the row and column scaling factors in the vectors R and C. file lapack/single/slaqp2.f prec single SLAQP2 computes a QR factorization with column pivoting of the block A(OFFSET+1:M,1:N). The block A(1:OFFSET,1:N) is accordingly pivoted, but not factorized. file lapack/single/slaqps.f prec single SLAQPS computes a step of QR factorization with column pivoting of a real M-by-N matrix A by using Blas-3. It tries to factorize NB columns from A starting from the row OFFSET+1, and updates all of the matrix with Blas-3 xGEMM. In some cases, due to catastrophic cancellations, it cannot factorize NB columns. Hence, the actual number of factorized columns is returned in KB. Block A(1:OFFSET,1:N) is accordingly pivoted, but not factorized. 
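
A short free-form Fortran sketch (editorial) of why SLAPY2 above is preferred over the textbook formula: with single precision inputs of order 1e30 the intermediate squares overflow, while SLAPY2 returns the correct result.

    ! Minimal sketch: sqrt(x**2+y**2) with and without SLAPY2.
    program py2_demo
      implicit none
      real, external :: slapy2
      real :: x, y
      x = 3.0e30
      y = 4.0e30
      print *, 'naive sqrt(x*x+y*y) :', sqrt(x*x + y*y)  ! overflows to Inf
      print *, 'slapy2(x,y)         :', slapy2(x, y)     ! 5.0e30
    end program py2_demo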
file lapack/single/slaqr0.f
prec single

file lapack/single/slaqr1.f
prec single

file lapack/single/slaqr2.f
prec single

file lapack/single/slaqr3.f
prec single

file lapack/single/slaqr4.f
prec single

file lapack/single/slaqr5.f
prec single

file lapack/single/slaqsb.f
prec single
SLAQSB equilibrates a symmetric band matrix A using the scaling factors in the vector S.

file lapack/single/slaqsp.f
prec single
SLAQSP equilibrates a symmetric matrix A using the scaling factors in the vector S.

file lapack/single/slaqsy.f
prec single
SLAQSY equilibrates a symmetric matrix A using the scaling factors in the vector S.

file lapack/single/slaqtr.f
prec single
SLAQTR solves the real quasi-triangular system
    op(T)*p = scale*c,                 if LREAL = .TRUE.
or the complex quasi-triangular systems
    op(T + iB)*(p+iq) = scale*(c+id),  if LREAL = .FALSE.
in real arithmetic, where T is upper quasi-triangular. If LREAL = .FALSE., then the first diagonal block of T must be 1 by 1, and B is the specially structured matrix
    B = [ b(1) b(2) ... b(n) ]
        [       w            ]
        [           w        ]
        [              .     ]
        [                 w  ]
op(A) = A or A', where A' denotes the conjugate transpose of matrix A. On input,
    X = [ c ].    On output,  X = [ p ].
        [ d ]                     [ q ]
This subroutine is designed for the condition number estimation in routine STRSNA.

file lapack/single/slar1v.f
prec single
SLAR1V computes the (scaled) r-th column of the inverse of the submatrix in rows B1 through BN of the tridiagonal matrix L D L^T - sigma I. When sigma is close to an eigenvalue, the computed vector is an accurate eigenvector. Usually, r corresponds to the index where the eigenvector is largest in magnitude. The following steps accomplish this computation:
(a) Stationary qd transform, L D L^T - sigma I = L(+) D(+) L(+)^T,
(b) Progressive qd transform, L D L^T - sigma I = U(-) D(-) U(-)^T,
(c) Computation of the diagonal elements of the inverse of L D L^T - sigma I by combining the above transforms, and choosing r as the index where the diagonal of the inverse is (one of the) largest in magnitude.
(d) Computation of the (scaled) r-th column of the inverse using the twisted factorization obtained by combining the top part of the stationary and the bottom part of the progressive transform.

file lapack/single/slar2v.f
prec single
SLAR2V applies a vector of real plane rotations from both sides to a sequence of 2-by-2 real symmetric matrices, defined by the elements of the vectors x, y and z. For i = 1,2,...,n
    ( x(i) z(i) ) := (  c(i)  s(i) ) ( x(i) z(i) ) ( c(i) -s(i) )
    ( z(i) y(i) )    ( -s(i)  c(i) ) ( z(i) y(i) ) ( s(i)  c(i) )

file lapack/single/slarf.f
prec single
SLARF applies a real elementary reflector H to a real m by n matrix C, from either the left or the right. H is represented in the form
    H = I - tau * v * v'
where tau is a real scalar and v is a real vector. If tau = 0, then H is taken to be the unit matrix.

file lapack/single/slarfb.f
prec single
SLARFB applies a real block reflector H or its transpose H' to a real m by n matrix C, from either the left or the right.

file lapack/single/slarfg.f
prec single
SLARFG generates a real elementary reflector H of order n, such that
    H * ( alpha ) = ( beta ),   H' * H = I.
        (   x   )   (   0  )
where alpha and beta are scalars, and x is an (n-1)-element real vector. H is represented in the form
    H = I - tau * ( 1 ) * ( 1 v' ),
                  ( v )
where tau is a real scalar and v is a real (n-1)-element vector. If the elements of x are all zero, then tau = 0 and H is taken to be the unit matrix. Otherwise 1 <= tau <= 2.
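
To illustrate the reflector generator just described (an editorial sketch in free-form Fortran; the particular test vector is an assumption of the example), SLARFG is called with the first component of the vector in ALPHA and the remaining components in X. On return ALPHA holds beta, whose magnitude is the 2-norm of the input vector, and X holds v(2:n).

    ! Minimal sketch: build a Householder reflector that zeros v(2:n).
    program larfg_demo
      implicit none
      integer, parameter :: n = 4
      real :: v(n), alpha, tau
      v = (/ 3.0, 0.0, 4.0, 0.0 /)
      alpha = v(1)
      call slarfg(n, alpha, v(2), 1, tau)
      print *, 'beta =', alpha        ! +-norm2(v), here +-5.0
      print *, 'tau  =', tau
      print *, 'v(2:n) of the reflector:', v(2:n)
    end program larfg_demo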
file lapack/single/slarfgp.f prec single
SLARFGP generates a real elementary reflector H of order n, such that H * ( alpha ) = ( beta ), H' * H = I. ( x ) ( 0 ) where alpha and beta are scalars, beta is non-negative, and x is an (n-1)-element real vector. H is represented in the form H = I - tau * ( 1 ) * ( 1 v' ) , ( v ) where tau is a real scalar and v is a real (n-1)-element vector. If the elements of x are all zero, then tau = 0 and H is taken to be the unit matrix.
file lapack/single/slarfp.f prec single
SLARFP generates a real elementary reflector H of order n, such that H * ( alpha ) = ( beta ), H' * H = I. ( x ) ( 0 ) where alpha and beta are scalars, beta is non-negative, and x is an (n-1)-element real vector. H is represented in the form H = I - tau * ( 1 ) * ( 1 v' ) , ( v ) where tau is a real scalar and v is a real (n-1)-element vector. If the elements of x are all zero, then tau = 0 and H is taken to be the unit matrix. Otherwise 1 <= tau <= 2.
file lapack/single/slarft.f prec single
SLARFT forms the triangular factor T of a real block reflector H of order n, which is defined as a product of k elementary reflectors. If DIRECT = 'F', H = H(1) H(2) . . . H(k) and T is upper triangular; If DIRECT = 'B', H = H(k) . . . H(2) H(1) and T is lower triangular. If STOREV = 'C', the vector which defines the elementary reflector H(i) is stored in the i-th column of the array V, and H = I - V * T * V' If STOREV = 'R', the vector which defines the elementary reflector H(i) is stored in the i-th row of the array V, and H = I - V' * T * V
file lapack/single/slarfx.f prec single
SLARFX applies a real elementary reflector H to a real m by n matrix C, from either the left or the right. H is represented in the form H = I - tau * v * v' where tau is a real scalar and v is a real vector. If tau = 0, then H is taken to be the unit matrix. This version uses inline code if H has order < 11.
file lapack/single/slargv.f prec single
SLARGV generates a vector of real plane rotations, determined by elements of the real vectors x and y. For i = 1,2,...,n ( c(i) s(i) ) ( x(i) ) = ( a(i) ) ( -s(i) c(i) ) ( y(i) ) = ( 0 )
file lapack/single/slarnv.f prec single
SLARNV returns a vector of n random real numbers from a uniform or normal distribution.
file lapack/single/slarra.f prec single
Compute the splitting points with threshold SPLTOL. SLARRA sets any "small" off-diagonal elements to zero.
file lapack/single/slarrb.f prec single
Given the relatively robust representation (RRR) L D L^T, SLARRB does "limited" bisection to refine the eigenvalues of L D L^T, W( IFIRST-OFFSET ) through W( ILAST-OFFSET ), to more accuracy. Initial guesses for these eigenvalues are input in W, the corresponding estimate of the error in these guesses and their gaps are input in WERR and WGAP, respectively. During bisection, intervals [left, right] are maintained by storing their mid-points and semi-widths in the arrays W and WERR respectively.
file lapack/single/slarrc.f prec single
Find the number of eigenvalues of the symmetric tridiagonal matrix T that are in the interval (VL,VU] if JOBT = 'T', and of L D L^T if JOBT = 'L'.
file lapack/single/slarrd.f prec single
SLARRD computes the eigenvalues of a symmetric tridiagonal matrix T to suitable accuracy. This is an auxiliary code to be called from SSTEMR. The user may ask for all eigenvalues, all eigenvalues in the half-open interval (VL, VU], or the IL-th through IU-th eigenvalues.
To avoid overflow, the matrix must be scaled so that its largest element is no greater than overflow**(1/2) * underflow**(1/4) in absolute value, and for greatest accuracy, it should not be much smaller than that. See W. Kahan "Accurate Eigenvalues of a Symmetric Tridiagonal Matrix", Report CS41, Computer Science Dept., Stanford University, July 21, 1966.
file lapack/single/slarre.f prec single
To find the desired eigenvalues of a given real symmetric tridiagonal matrix T, SLARRE sets any "small" off-diagonal elements to zero, and for each unreduced block T_i, it finds (a) a suitable shift at one end of the block's spectrum, (b) the base representation, T_i - sigma_i I = L_i D_i L_i^T, and (c) eigenvalues of each L_i D_i L_i^T. The representations and eigenvalues found are then used by SSTEMR to compute the eigenvectors of T. The accuracy varies depending on whether bisection is used to find a few eigenvalues or the dqds algorithm (subroutine SLASQ2) to compute all and then discard any unwanted ones. As an added benefit, SLARRE also outputs the n Gerschgorin intervals for the matrices L_i D_i L_i^T.
file lapack/single/slarrf.f prec single
Given the initial representation L D L^T and its cluster of close eigenvalues (in a relative measure), W( CLSTRT ), W( CLSTRT+1 ), ... W( CLEND ), SLARRF finds a new relatively robust representation L D L^T - SIGMA I = L(+) D(+) L(+)^T such that at least one of the eigenvalues of L(+) D(+) L(+)^T is relatively isolated.
file lapack/single/slarrj.f prec single
Given the initial eigenvalue approximations of T, SLARRJ does bisection to refine the eigenvalues of T, W( IFIRST-OFFSET ) through W( ILAST-OFFSET ), to more accuracy. Initial guesses for these eigenvalues are input in W, and the corresponding estimate of the error in these guesses is input in WERR. During bisection, intervals [left, right] are maintained by storing their mid-points and semi-widths in the arrays W and WERR respectively.
file lapack/single/slarrk.f prec single
SLARRK computes one eigenvalue of a symmetric tridiagonal matrix T to suitable accuracy. This is an auxiliary code to be called from SSTEMR. To avoid overflow, the matrix must be scaled so that its largest element is no greater than overflow**(1/2) * underflow**(1/4) in absolute value, and for greatest accuracy, it should not be much smaller than that. See W. Kahan "Accurate Eigenvalues of a Symmetric Tridiagonal Matrix", Report CS41, Computer Science Dept., Stanford University, July 21, 1966.
file lapack/single/slarrr.f prec single
Perform tests to decide whether the symmetric tridiagonal matrix T warrants expensive computations which guarantee high relative accuracy in the eigenvalues.
file lapack/single/slarrv.f prec single
SLARRV computes the eigenvectors of the tridiagonal matrix T = L D L^T given L, D and APPROXIMATIONS to the eigenvalues of L D L^T. The input eigenvalues should have been computed by SLARRE.
file lapack/single/slartg.f prec single
SLARTG generates a plane rotation so that [ CS SN ] . [ F ] = [ R ] where CS**2 + SN**2 = 1. [ -SN CS ] [ G ] [ 0 ] This is a slower, more accurate version of the BLAS1 routine SROTG, with the following other differences: F and G are unchanged on return. If G=0, then CS=1 and SN=0. If F=0 and (G .ne. 0), then CS=0 and SN=1 without doing any floating point operations (saves work in SBDSQR when there are zeros on the diagonal). If F exceeds G in magnitude, CS will be positive.
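To make the SLARTG entry above concrete, here is a small NumPy sketch of a Givens rotation that reproduces the special cases listed (G = 0, F = 0, and CS positive when |F| exceeds |G|). The name givens is illustrative; the library routine additionally scales F and G so that forming F**2 + G**2 cannot overflow:

```python
import numpy as np

def givens(f, g):
    """Return (cs, sn, r) with [[cs, sn], [-sn, cs]] @ [f, g] = [r, 0],
    following the special cases listed for SLARTG.  Plain sketch only."""
    if g == 0.0:
        return 1.0, 0.0, f
    if f == 0.0:
        return 0.0, 1.0, g
    r = np.hypot(f, g)
    if abs(f) > abs(g) and f < 0.0:
        r = -r                      # keep cs positive when |f| > |g|
    return f / r, g / r, r

cs, sn, r = givens(-3.0, 4.0)
print(np.allclose(np.array([[cs, sn], [-sn, cs]]) @ [-3.0, 4.0], [r, 0.0]))
print(np.isclose(cs * cs + sn * sn, 1.0))
```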
file lapack/single/slartv.f prec single
SLARTV applies a vector of real plane rotations to elements of the real vectors x and y. For i = 1,2,...,n ( x(i) ) := ( c(i) s(i) ) ( x(i) ) ( y(i) ) ( -s(i) c(i) ) ( y(i) )
file lapack/single/slaruv.f prec single
SLARUV returns a vector of n random real numbers from a uniform (0,1) distribution (n <= 128). This is an auxiliary routine called by SLARNV and CLARNV.
file lapack/single/slas2.f prec single
SLAS2 computes the singular values of the 2-by-2 matrix [ F G ] [ 0 H ]. On return, SSMIN is the smaller singular value and SSMAX is the larger singular value.
file lapack/single/slascl.f prec single
SLASCL multiplies the M by N real matrix A by the real scalar CTO/CFROM. This is done without over/underflow as long as the final result CTO*A(I,J)/CFROM does not over/underflow. TYPE specifies that A may be full, upper triangular, lower triangular, upper Hessenberg, or banded.
file lapack/single/slasd0.f prec single
Using a divide and conquer approach, SLASD0 computes the singular value decomposition (SVD) of a real upper bidiagonal N-by-M matrix B with diagonal D and offdiagonal E, where M = N + SQRE. The algorithm computes orthogonal matrices U and VT such that B = U * S * VT. The singular values S are overwritten on D. A related subroutine, SLASDA, computes only the singular values, and optionally, the singular vectors in compact form.
file lapack/single/slasd1.f prec single
SLASD1 computes the SVD of an upper bidiagonal N-by-M matrix B, where N = NL + NR + 1 and M = N + SQRE. SLASD1 is called from SLASD0. A related subroutine SLASD7 handles the case in which the singular values (and the singular vectors in factored form) are desired. SLASD1 computes the SVD as follows: ( D1(in) 0 0 0 ) B = U(in) * ( Z1' a Z2' b ) * VT(in) ( 0 0 D2(in) 0 ) = U(out) * ( D(out) 0) * VT(out) where Z' = (Z1' a Z2' b) = u' VT', and u is a vector of dimension M with ALPHA and BETA in the (NL+1)-th and (NL+2)-th entries and zeros elsewhere; and the entry b is empty if SQRE = 0. The left singular vectors of the original matrix are stored in U, and the transpose of the right singular vectors are stored in VT, and the singular values are in D. The algorithm consists of three stages: The first stage consists of deflating the size of the problem when there are multiple singular values or when there are zeros in the Z vector. For each such occurrence the dimension of the secular equation problem is reduced by one. This stage is performed by the routine SLASD2. The second stage consists of calculating the updated singular values. This is done by finding the square roots of the roots of the secular equation via the routine SLASD4 (as called by SLASD3). This routine also calculates the singular vectors of the current problem. The final stage consists of computing the updated singular vectors directly using the updated singular values. The singular vectors for the current problem are multiplied with the singular vectors from the overall problem.
file lapack/single/slasd2.f prec single
SLASD2 merges the two sets of singular values together into a single sorted set. Then it tries to deflate the size of the problem. There are two ways in which deflation can occur: when two or more singular values are close together or if there is a tiny entry in the Z vector. For each such occurrence the order of the related secular equation problem is reduced by one. SLASD2 is called from SLASD1.
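The SLARTV entry above applies an independent 2-by-2 rotation to each pair (x(i), y(i)). The update is easy to state as vectorized NumPy, which also makes the length-preserving property of each rotation visible; the snippet below is purely illustrative and does not call the LAPACK routine:

```python
import numpy as np

# Apply one rotation (c(i), s(i)) independently to each pair (x(i), y(i)),
# exactly as the SLARTV formula states.
rng = np.random.default_rng(1)
n = 5
x, y = rng.standard_normal(n), rng.standard_normal(n)
theta = rng.uniform(0.0, 2.0 * np.pi, n)
c, s = np.cos(theta), np.sin(theta)

x_new = c * x + s * y
y_new = -s * x + c * y

# each rotation preserves the length of its (x(i), y(i)) pair
print(np.allclose(x_new**2 + y_new**2, x**2 + y**2))
```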
file lapack/single/slasd3.f prec single
SLASD3 finds all the square roots of the roots of the secular equation, as defined by the values in D and Z. It makes the appropriate calls to SLASD4 and then updates the singular vectors by matrix multiplication. This code makes very mild assumptions about floating point arithmetic. It will work on machines with a guard digit in add/subtract, or on those binary machines without guard digits which subtract like the Cray XMP, Cray YMP, Cray C 90, or Cray 2. It could conceivably fail on hexadecimal or decimal machines without guard digits, but we know of none. SLASD3 is called from SLASD1.
file lapack/single/slasd4.f prec single
This subroutine computes the square root of the I-th updated eigenvalue of a positive symmetric rank-one modification to a positive diagonal matrix whose entries are given as the squares of the corresponding entries in the array D. It is assumed that 0 <= D(i) < D(j) for i < j and that RHO > 0. This is arranged by the calling routine, and is no loss in generality. The rank-one modified system is thus diag( D ) * diag( D ) + RHO * Z * Z_transpose, where we assume the Euclidean norm of Z is 1. The method consists of approximating the rational functions in the secular equation by simpler interpolating rational functions.
file lapack/single/slasd5.f prec single
This subroutine computes the square root of the I-th eigenvalue of a positive symmetric rank-one modification of a 2-by-2 diagonal matrix diag( D ) * diag( D ) + RHO * Z * transpose(Z). The diagonal entries in the array D are assumed to satisfy 0 <= D(i) < D(j) for i < j. We also assume RHO > 0 and that the Euclidean norm of the vector Z is one.
file lapack/single/slasd6.f prec single
SLASD6 computes the SVD of an updated upper bidiagonal matrix B obtained by merging two smaller ones by appending a row. This routine is used only for the problem which requires all singular values and optionally singular vector matrices in factored form. B is an N-by-M matrix with N = NL + NR + 1 and M = N + SQRE. A related subroutine, SLASD1, handles the case in which all singular values and singular vectors of the bidiagonal matrix are desired. SLASD6 computes the SVD as follows: ( D1(in) 0 0 0 ) B = U(in) * ( Z1' a Z2' b ) * VT(in) ( 0 0 D2(in) 0 ) = U(out) * ( D(out) 0) * VT(out) where Z' = (Z1' a Z2' b) = u' VT', and u is a vector of dimension M with ALPHA and BETA in the (NL+1)-th and (NL+2)-th entries and zeros elsewhere; and the entry b is empty if SQRE = 0. The singular values of B can be computed using D1, D2, the first components of all the right singular vectors of the lower block, and the last components of all the right singular vectors of the upper block. These components are stored and updated in VF and VL, respectively, in SLASD6. Hence U and VT are not explicitly referenced. The singular values are stored in D. The algorithm consists of two stages: The first stage consists of deflating the size of the problem when there are multiple singular values or if there is a zero in the Z vector. For each such occurrence the dimension of the secular equation problem is reduced by one. This stage is performed by the routine SLASD7. The second stage consists of calculating the updated singular values. This is done by finding the roots of the secular equation via the routine SLASD4 (as called by SLASD8). This routine also updates VF and VL and computes the distances between the updated singular values and the old singular values. SLASD6 is called from SLASDA.
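The SLASD4/SLASD8 entries above revolve around the secular equation of a rank-one modified diagonal matrix: the updated eigenvalues are the roots of f(lam) = 1 + RHO * sum_j Z(j)**2 / (D(j)**2 - lam), with exactly one root between each pair of consecutive poles. As a hedged illustration (plain bisection on made-up data, not SLASD4's rational interpolation scheme), the sketch below recovers the square-rooted eigenvalues and checks them against a dense eigensolver:

```python
import numpy as np

def secular_sqrt_roots(d, z, rho, iters=100):
    """Square roots of the eigenvalues of diag(d)**2 + rho * z z^T, found by
    bisecting the secular equation between consecutive poles.  Assumes
    0 <= d[0] < d[1] < ... and z[j] != 0, as the SLASD4 entry describes."""
    d2 = d * d

    def f(lam):
        return 1.0 + rho * np.sum(z * z / (d2 - lam))

    # the k-th root lies in (d2[k], d2[k+1]); the last in (d2[-1], d2[-1] + rho*||z||**2)
    uppers = np.append(d2[1:], d2[-1] + rho * float(z @ z))
    roots = []
    for lo, hi in zip(d2, uppers):
        a, b = lo + 1e-12 * (hi - lo), hi - 1e-12 * (hi - lo)
        for _ in range(iters):       # f increases monotonically between the poles
            mid = 0.5 * (a + b)
            if f(mid) < 0.0:
                a = mid
            else:
                b = mid
        roots.append(0.5 * (a + b))
    return np.sqrt(roots)

d = np.array([0.5, 1.0, 2.0, 3.0])
z = np.array([0.3, 0.4, 0.2, 0.1])
z /= np.linalg.norm(z)               # SLASD4 assumes the norm of Z is 1
rho = 1.5
exact = np.sqrt(np.linalg.eigvalsh(np.diag(d**2) + rho * np.outer(z, z)))
print(np.allclose(secular_sqrt_roots(d, z, rho), exact, atol=1e-8))
```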
file lapack/single/slasd7.f prec single
SLASD7 merges the two sets of singular values together into a single sorted set. Then it tries to deflate the size of the problem. There are two ways in which deflation can occur: when two or more singular values are close together or if there is a tiny entry in the Z vector. For each such occurrence the order of the related secular equation problem is reduced by one. SLASD7 is called from SLASD6.
file lapack/single/slasd8.f prec single
SLASD8 finds the square roots of the roots of the secular equation, as defined by the values in DSIGMA and Z. It makes the appropriate calls to SLASD4, and stores, for each element in D, the distance to its two nearest poles (elements in DSIGMA). It also updates the arrays VF and VL, the first and last components of all the right singular vectors of the original bidiagonal matrix. SLASD8 is called from SLASD6.
file lapack/single/slasda.f prec single
Using a divide and conquer approach, SLASDA computes the singular value decomposition (SVD) of a real upper bidiagonal N-by-M matrix B with diagonal D and offdiagonal E, where M = N + SQRE. The algorithm computes the singular values in the SVD B = U * S * VT. The orthogonal matrices U and VT are optionally computed in compact form. A related subroutine, SLASD0, computes the singular values and the singular vectors in explicit form.
file lapack/single/slasdq.f prec single
SLASDQ computes the singular value decomposition (SVD) of a real (upper or lower) bidiagonal matrix with diagonal D and offdiagonal E, accumulating the transformations if desired. Letting B denote the input bidiagonal matrix, the algorithm computes orthogonal matrices Q and P such that B = Q * S * P' (P' denotes the transpose of P). The singular values S are overwritten on D. The input matrix U is changed to U * Q if desired. The input matrix VT is changed to P' * VT if desired. The input matrix C is changed to Q' * C if desired. See "Computing Small Singular Values of Bidiagonal Matrices With Guaranteed High Relative Accuracy," by J. Demmel and W. Kahan, LAPACK Working Note #3, for a detailed description of the algorithm.
file lapack/single/slasdt.f prec single
SLASDT creates a tree of subproblems for bidiagonal divide and conquer.
file lapack/single/slaset.f prec single
SLASET initializes an m-by-n matrix A to BETA on the diagonal and ALPHA on the offdiagonals.
file lapack/single/slasr.f prec single
SLASR applies a sequence of plane rotations to a real matrix A, from either the left or the right. When SIDE = 'L', the transformation takes the form A := P*A and when SIDE = 'R', the transformation takes the form A := A*P**T where P is an orthogonal matrix consisting of a sequence of z plane rotations, with z = M when SIDE = 'L' and z = N when SIDE = 'R', and P**T is the transpose of P. When DIRECT = 'F' (Forward sequence), then P = P(z-1) * ... * P(2) * P(1) and when DIRECT = 'B' (Backward sequence), then P = P(1) * P(2) * ... * P(z-1) where P(k) is a plane rotation matrix defined by the 2-by-2 rotation R(k) = ( c(k) s(k) ) = ( -s(k) c(k) ). When PIVOT = 'V' (Variable pivot), the rotation is performed for the plane (k,k+1), i.e., P(k) has the form P(k) = ( 1 ) ( ... ) ( 1 ) ( c(k) s(k) ) ( -s(k) c(k) ) ( 1 ) ( ... ) ( 1 ) where R(k) appears as a rank-2 modification to the identity matrix in rows and columns k and k+1. When PIVOT = 'T' (Top pivot), the rotation is performed for the plane (1,k+1), so P(k) has the form P(k) = ( c(k) s(k) ) ( 1 ) ( ... ) ( 1 ) ( -s(k) c(k) ) ( 1 ) ( ...
) ( 1 ) where R(k) appears in rows and columns 1 and k+1. Similarly, when PIVOT = 'B' (Bottom pivot), the rotation is performed for the plane (k,z), giving P(k) the form P(k) = ( 1 ) ( ... ) ( 1 ) ( c(k) s(k) ) ( 1 ) ( ... ) ( 1 ) ( -s(k) c(k) ) where R(k) appears in rows and columns k and z. The rotations are performed without ever forming P(k) explicitly.
file lapack/single/slassq.f prec single
SLASSQ returns the values scl and smsq such that ( scl**2 )*smsq = x( 1 )**2 +...+ x( n )**2 + ( scale**2 )*sumsq, where x( i ) = X( 1 + ( i - 1 )*INCX ). The value of sumsq is assumed to be non-negative and scl returns the value scl = max( scale, abs( x( i ) ) ). scale and sumsq must be supplied in SCALE and SUMSQ and scl and smsq are overwritten on SCALE and SUMSQ respectively. The routine makes only one pass through the vector x.
file lapack/single/slasv2.f prec single
SLASV2 computes the singular value decomposition of a 2-by-2 triangular matrix [ F G ] [ 0 H ]. On return, abs(SSMAX) is the larger singular value, abs(SSMIN) is the smaller singular value, and (CSL,SNL) and (CSR,SNR) are the left and right singular vectors for abs(SSMAX), giving the decomposition [ CSL SNL ] [ F G ] [ CSR -SNR ] = [ SSMAX 0 ] [-SNL CSL ] [ 0 H ] [ SNR CSR ] [ 0 SSMIN ].
file lapack/single/slaswp.f prec single
SLASWP performs a series of row interchanges on the matrix A. One row interchange is initiated for each of rows K1 through K2 of A.
file lapack/single/slasy2.f prec single
SLASY2 solves for the N1 by N2 matrix X, 1 <= N1,N2 <= 2, in op(TL)*X + ISGN*X*op(TR) = SCALE*B, where TL is N1 by N1, TR is N2 by N2, B is N1 by N2, and ISGN = 1 or -1. op(T) = T or T', where T' denotes the transpose of T.
file lapack/single/slasyf.f prec single
SLASYF computes a partial factorization of a real symmetric matrix A using the Bunch-Kaufman diagonal pivoting method. The partial factorization has the form: A = ( I U12 ) ( A11 0 ) ( I 0 ) if UPLO = 'U', or: ( 0 U22 ) ( 0 D ) ( U12' U22' ) A = ( L11 0 ) ( D 0 ) ( L11' L21' ) if UPLO = 'L' ( L21 I ) ( 0 A22 ) ( 0 I ) where the order of D is at most NB. The actual order is returned in the argument KB, and is either NB or NB-1, or N if N <= NB. SLASYF is an auxiliary routine called by SSYTRF. It uses blocked code (calling Level 3 BLAS) to update the submatrix A11 (if UPLO = 'U') or A22 (if UPLO = 'L').
file lapack/single/slatbs.f prec single
SLATBS solves one of the triangular systems A *x = s*b or A'*x = s*b with scaling to prevent overflow, where A is an upper or lower triangular band matrix. Here A' denotes the transpose of A, x and b are n-element vectors, and s is a scaling factor, usually less than or equal to 1, chosen so that the components of x will be less than the overflow threshold. If the unscaled problem will not cause overflow, the Level 2 BLAS routine STBSV is called. If the matrix A is singular (A(j,j) = 0 for some j), then s is set to 0 and a non-trivial solution to A*x = 0 is returned.
file lapack/single/slatdf.f prec single
SLATDF uses the LU factorization of the n-by-n matrix Z computed by SGETC2 and computes a contribution to the reciprocal Dif-estimate by solving Z * x = b for x, and choosing the r.h.s. b such that the norm of x is as large as possible. On entry RHS = b holds the contribution from earlier solved sub-systems, and on return RHS = x. The factorization of Z returned by SGETC2 has the form Z = P*L*U*Q, where P and Q are permutation matrices. L is lower triangular with unit diagonal elements and U is upper triangular.
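The SLASSQ entry above states an invariant, scl**2 * smsq = sum of squares plus the incoming scale**2 * sumsq, rather than a formula. The NumPy sketch below (slassq_ref is an illustrative name, not the LAPACK source) shows the standard one-pass update that maintains this invariant while only ever squaring ratios no larger than one, which is what keeps the running sum from overflowing in single precision:

```python
import numpy as np

def slassq_ref(x, scale=0.0, sumsq=1.0):
    """One-pass update of (scale, sumsq) so that on return
    scale**2 * sumsq == sum(x_i**2) + scale_in**2 * sumsq_in."""
    for xi in x:
        axi = abs(float(xi))
        if axi == 0.0:
            continue
        if scale < axi:
            sumsq = 1.0 + sumsq * (scale / axi) ** 2   # re-base the running sum
            scale = axi
        else:
            sumsq += (axi / scale) ** 2
    return scale, sumsq

# The raw square (3e30)**2 would overflow in single precision, yet the
# 2-norm 5e30 is representable; only ratios <= 1 are squared here.
x = np.array([3.0e30, 4.0e30, 0.0])
scale, sumsq = slassq_ref(x)
print(np.isclose(scale * np.sqrt(sumsq), 5.0e30))
```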
file lapack/single/slatps.f prec single
SLATPS solves one of the triangular systems A *x = s*b or A'*x = s*b with scaling to prevent overflow, where A is an upper or lower triangular matrix stored in packed form. Here A' denotes the transpose of A, x and b are n-element vectors, and s is a scaling factor, usually less than or equal to 1, chosen so that the components of x will be less than the overflow threshold. If the unscaled problem will not cause overflow, the Level 2 BLAS routine STPSV is called. If the matrix A is singular (A(j,j) = 0 for some j), then s is set to 0 and a non-trivial solution to A*x = 0 is returned.
file lapack/single/slatrd.f prec single
SLATRD reduces NB rows and columns of a real symmetric matrix A to symmetric tridiagonal form by an orthogonal similarity transformation Q' * A * Q, and returns the matrices V and W which are needed to apply the transformation to the unreduced part of A. If UPLO = 'U', SLATRD reduces the last NB rows and columns of a matrix, of which the upper triangle is supplied; if UPLO = 'L', SLATRD reduces the first NB rows and columns of a matrix, of which the lower triangle is supplied. This is an auxiliary routine called by SSYTRD.
file lapack/single/slatrs.f prec single
SLATRS solves one of the triangular systems A *x = s*b or A'*x = s*b with scaling to prevent overflow, where A is an upper or lower triangular matrix. Here A' denotes the transpose of A, x and b are n-element vectors, and s is a scaling factor, usually less than or equal to 1, chosen so that the components of x will be less than the overflow threshold. If the unscaled problem will not cause overflow, the Level 2 BLAS routine STRSV is called. If the matrix A is singular (A(j,j) = 0 for some j), then s is set to 0 and a non-trivial solution to A*x = 0 is returned.
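The SLATBS/SLATPS/SLATRS entries share one idea: perform a triangular solve, but fold an extra scale factor s into the right-hand side whenever a solution component is about to exceed the overflow threshold. The toy NumPy back substitution below (scaled_upper_solve and the threshold big are made up for illustration) shows why rescaling the whole current vector keeps A*x = s*b consistent; the library routines instead use carefully derived growth bounds and handle the singular case by returning s = 0 with a non-trivial null vector:

```python
import numpy as np

def scaled_upper_solve(A, b, big=1e30):
    """Back substitution solving A @ x = s * b for upper-triangular A,
    shrinking the scale s whenever a component of x would exceed `big`.
    A toy version of the safeguard the SLATRS-family entries describe."""
    n = len(b)
    x = np.array(b, dtype=float)
    s = 1.0
    for j in range(n - 1, -1, -1):
        if A[j, j] == 0.0:
            raise ValueError("singular; SLATRS instead returns s = 0 and a "
                             "non-trivial solution of A @ x = 0")
        if abs(x[j]) > abs(A[j, j]) * big:
            shrink = abs(A[j, j]) * big / abs(x[j])
            x *= shrink          # rescaling everything keeps A @ x = s * b consistent
            s *= shrink
        x[j] /= A[j, j]
        x[:j] -= A[:j, j] * x[j]   # update the still-pending right-hand side
    return s, x

rng = np.random.default_rng(2)
A = np.triu(rng.standard_normal((4, 4))) + 4.0 * np.eye(4)
b = rng.standard_normal(4)
s, x = scaled_upper_solve(A, b)
print(s, np.allclose(A @ x, s * b))
```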