https://nhigham.com/2022/05/18/the-big-six-matrix-factorizations/

The Big Six Matrix Factorizations

Six matrix factorizations dominate in numerical linear algebra and matrix analysis: for most purposes one of them is sufficient for the task at hand. We summarize them here. For each factorization we give the cost in flops for the standard method of computation, stating only the highest order terms. We also state the main uses of each factorization.

For full generality we state the factorizations for complex matrices. Everything translates to the real case with "Hermitian" and "unitary" replaced by "symmetric" and "orthogonal", respectively. The terms "factorization" and "decomposition" are synonymous, and it is a matter of convention which is used. Our list comprises three factorizations and three decompositions.

Recall that an upper triangular matrix is a matrix of the form

\notag
R = \begin{bmatrix}
r_{11} & r_{12} & \dots  & r_{1n}\\
       & r_{22} & \dots  & r_{2n}\\
       &        & \ddots & \vdots\\
       &        &        & r_{nn}
\end{bmatrix},

and a lower triangular matrix is the transpose of an upper triangular one.
Cholesky Factorization

Every Hermitian positive definite matrix A\in\mathbb{C}^{n\times n} has a unique Cholesky factorization A = R^*R, where R\in\mathbb{C}^{n\times n} is upper triangular with positive diagonal elements.

Cost: n^3/3 flops. Use: solving positive definite linear systems.

LU Factorization

Any matrix A\in\mathbb{C}^{n\times n} has an LU factorization PA = LU, where P is a permutation matrix, L is unit lower triangular (lower triangular with 1s on the diagonal), and U is upper triangular. We can take P = I if the leading principal submatrices A(1\colon k, 1\colon k), k = 1\colon n-1, of A are nonsingular, but to guarantee that the factorization is numerically stable we need A to have particular properties, such as diagonal dominance. In practical computation we normally choose P by the partial pivoting strategy, which almost always ensures numerical stability.

Cost: 2n^3/3 flops. Use: solving general linear systems.

QR Factorization

Any matrix A\in\mathbb{C}^{m\times n} with m\ge n has a QR factorization A = QR, where Q\in\mathbb{C}^{m\times m} is unitary and R is upper trapezoidal, that is, R = \left[\begin{smallmatrix} R_1 \\ 0\end{smallmatrix}\right], where R_1\in\mathbb{C}^{n\times n} is upper triangular. Partitioning Q = [Q_1~Q_2], where Q_1\in\mathbb{C}^{m\times n} has orthonormal columns, gives A = Q_1R_1, which is the reduced, economy size, or thin QR factorization.

Cost: 2(n^2m-n^3/3) flops for Householder QR factorization. The explicit formation of Q (which is not usually necessary) requires a further 4(m^2n-mn^2+n^3/3) flops. Use: solving least squares problems, computing an orthonormal basis for the range space of A, orthogonalization.

Schur Decomposition

Any matrix A\in\mathbb{C}^{n\times n} has a Schur decomposition A = QTQ^*, where Q is unitary and T is upper triangular. The eigenvalues of A appear on the diagonal of T. For each k, the leading k columns of Q span an invariant subspace of A.
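The post itself contains no code, but the three factorizations above map directly onto standard library routines. Here is a minimal sketch using NumPy and SciPy (an assumption; any linear algebra library exposing LAPACK would do). Note the conventions: NumPy's Cholesky returns the lower triangular factor L, so the upper triangular R of this post is L^T, and SciPy's LU returns P with A = PLU, i.e. P^TA = LU in the notation above.

```python
import numpy as np
from scipy.linalg import lu

rng = np.random.default_rng(0)
n = 5

# Symmetric positive definite test matrix (real case of Hermitian p.d.)
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)

# Cholesky: NumPy returns lower triangular L with A = L L^T,
# so the post's upper triangular factor is R = L^T
L = np.linalg.cholesky(A)
R = L.T
print(np.allclose(R.T @ R, A))        # True

# LU with partial pivoting: SciPy returns P, L, U with A = P @ L @ U
P, Lu, U = lu(A)
print(np.allclose(P @ Lu @ U, A))     # True

# Reduced (thin) QR: Q has orthonormal columns, Rq is upper triangular
Q, Rq = np.linalg.qr(A)
print(np.allclose(Q @ Rq, A))         # True
```

For solving a positive definite linear system in practice one would call a dedicated solver (e.g. `scipy.linalg.cho_factor`/`cho_solve`) rather than form the factors explicitly.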
For real matrices, a special form of this decomposition exists in which all the factors are real. An upper quasi-triangular matrix R is a block upper triangular matrix whose diagonal blocks R_{ii} are either 1\times1 or 2\times2. Any A\in\mathbb{R}^{n\times n} has a real Schur decomposition A = QRQ^T, where Q is real orthogonal and R is real upper quasi-triangular with any 2\times2 diagonal blocks having complex conjugate eigenvalues.

Cost: 25n^3 flops for Q and T (or R) by the QR algorithm; 10n^3 flops for T (or R) only. Use: computing eigenvalues and eigenvectors, computing invariant subspaces, evaluating matrix functions.

Spectral Decomposition

Every Hermitian matrix A\in\mathbb{C}^{n\times n} has a spectral decomposition A = Q\Lambda Q^*, where Q is unitary and \Lambda = \mathrm{diag}(\lambda_i). The \lambda_i are the eigenvalues of A, and they are real. The spectral decomposition is a special case of the Schur decomposition but is of interest in its own right.

Cost: 9n^3 flops for Q and \Lambda by the QR algorithm, or 4n^3\!/3 flops for \Lambda only. Use: any problem involving eigenvalues of Hermitian matrices.

Singular Value Decomposition

Any matrix A\in\mathbb{C}^{m\times n} has a singular value decomposition (SVD)

\notag
A = U\Sigma V^*, \quad
\Sigma = \mathrm{diag}(\sigma_1,\sigma_2,\dots,\sigma_p) \in \mathbb{R}^{m\times n}, \quad
p = \min(m,n),

where U\in\mathbb{C}^{m\times m} and V\in\mathbb{C}^{n\times n} are unitary and \sigma_1\ge\sigma_2\ge\cdots\ge\sigma_p\ge0. The \sigma_i are the singular values of A, and they are the nonnegative square roots of the p largest eigenvalues of A^*A. The columns of U and V are the left and right singular vectors of A, respectively. The rank of A is equal to the number of nonzero singular values. If A is real, U and V can be taken to be real.
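The real Schur and spectral decompositions above can be sketched in the same way (again assuming NumPy/SciPy; `scipy.linalg.schur` computes the real Schur form by default for real input, and `numpy.linalg.eigh` computes the spectral decomposition of a Hermitian matrix):

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))

# Real Schur decomposition: A = Z T Z^T with Z orthogonal and
# T upper quasi-triangular (2x2 blocks for complex conjugate eigenvalues)
T, Z = schur(A, output='real')
print(np.allclose(Z @ T @ Z.T, A))    # True
print(np.allclose(Z.T @ Z, np.eye(n)))  # True: Z is orthogonal

# Spectral decomposition of a symmetric matrix: A_sym = Q diag(lam) Q^T,
# with real eigenvalues lam returned in ascending order
A_sym = A + A.T
lam, Q = np.linalg.eigh(A_sym)
print(np.allclose(Q @ np.diag(lam) @ Q.T, A_sym))  # True
```

A quick way to see the quasi-triangular structure is to print T and look for 2x2 bumps below the diagonal wherever A has a complex conjugate pair of eigenvalues.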
The essential SVD information is contained in the compact or economy size SVD A = U\Sigma V^*, where U\in\mathbb{C}^{m\times r}, \Sigma = \mathrm{diag}(\sigma_1,\dots,\sigma_r), V\in\mathbb{C}^{n\times r}, and r = \mathrm{rank}(A).

Cost: 14mn^2+8n^3 flops for U(\colon,1\colon n), \Sigma, and V by the Golub-Reinsch algorithm, or 6mn^2+20n^3 flops with a preliminary QR factorization. Use: determining matrix rank, solving rank-deficient least squares problems, computing all kinds of subspace information.

Discussion

Pivoting can be incorporated into both Cholesky factorization and QR factorization, giving \Pi^T A \Pi = R^*R (complete pivoting) and A\Pi = QR (column pivoting), respectively, where \Pi is a permutation matrix. These pivoting strategies are useful for problems that are (nearly) rank deficient, as they force R to have a zero (or small) (2,2) block.

The big six factorizations can all be computed by numerically stable algorithms. Another important factorization is that provided by the Jordan canonical form, but while it is a useful theoretical tool it cannot in general be computed in a numerically stable way.

For further details of these factorizations see the articles below. These factorizations are precisely those discussed by Stewart (2000) in his article The Decompositional Approach to Matrix Computation, which explains the benefits of matrix factorizations in numerical linear algebra.

Related Blog Posts

* What Is a Cholesky Factorization? (2020)
* What Is a QR Factorization? (2020)
* What Is a Schur Decomposition? (2022)
* What Is an LU Factorization? (2021)
* What Is the Singular Value Decomposition? (2020)

Posted on May 18, 2022 by Nick Higham in matrix computations.
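The use of the SVD for rank determination can be illustrated concretely (a sketch assuming NumPy; the tolerance `1e-10 * s[0]` is an illustrative choice, not a recommendation from the post). The thin SVD of a matrix built to have rank 2 reveals exactly two singular values above the noise level:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 6, 4

# Build an m-by-n matrix of rank 2 as an outer product of two factors
A = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))

# Thin (economy size) SVD: U is m-by-n, s holds the singular values
# in decreasing order, Vh is V^T
U, s, Vh = np.linalg.svd(A, full_matrices=False)
print(np.allclose(U @ np.diag(s) @ Vh, A))  # True

# Numerical rank: count singular values above a relative tolerance
rank = int(np.sum(s > 1e-10 * s[0]))
print(rank)                          # 2
print(np.linalg.matrix_rank(A))      # 2 (uses its own default tolerance)
```

In floating point arithmetic the "zero" singular values come out as tiny nonzero numbers, which is why rank determination always involves a tolerance rather than an exact test for zero.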