T>T: When Double Precision is Not Enough

Posted Dec 2, 2019 by Adam Luke Baskerville

Try the code yourself! Click the following button to launch an IPython notebook on Google Colab which implements the code developed in this post: Open In Colab

The Problem:

We want to solve the following standard eigenvalue problem

\[ \mathbf{A}\Psi = \lambda\Psi, \]

where \(\mathbf{A}\) is the lesp matrix, \(\Psi\) is the eigenvector and \(\lambda\) the eigenvalue. The lesp matrix is a tridiagonal matrix with real, sensitive eigenvalues; a \(5 \times 5\) example is shown in equation (1).

\[
\mathbf{M} = \begin{pmatrix}
-5 & 2 & 0 & 0 & 0 \\
\frac{1}{2} & -7 & 3 & 0 & 0 \\
0 & \frac{1}{3} & -9 & 4 & 0 \\
0 & 0 & \frac{1}{4} & -11 & 5 \\
0 & 0 & 0 & \frac{1}{5} & -13 \\
\end{pmatrix} \tag{1}
\]

We are going to show numerically that the eigenvalues of a matrix \(\mathbf{A}\) are equal to the eigenvalues of its transpose, \(\mathbf{A}^T\), which is easily proved as follows

\[\text{det}(\mathbf{A}^T - \lambda \mathbf{I}) = \text{det}((\mathbf{A} - \lambda \mathbf{I})^T) = \text{det}(\mathbf{A} - \lambda \mathbf{I}),\]

where \(\mathbf{I}\) is the identity matrix. If you do not want to write your own function to construct the lesp matrix then you are in luck: there is a Python library, rogues, which has one built in. The following Python code calculates the eigenvalues of \(\mathbf{A}\) and \(\mathbf{A}^T\) and displays them using seaborn.
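If you do prefer to build the matrix yourself, the pattern in equation (1) extends naturally to any dimension. Here is a minimal sketch; the function name `lesp_matrix` is my own, and the construction is inferred from the \(5 \times 5\) example above rather than taken from the rogues source:

```python
import numpy as np

def lesp_matrix(n):
    """Tridiagonal lesp matrix of dimension n, following the pattern of equation (1)."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = -(2 * i + 5)        # main diagonal: -5, -7, -9, ...
    for i in range(n - 1):
        A[i, i + 1] = i + 2           # superdiagonal: 2, 3, 4, ...
        A[i + 1, i] = 1.0 / (i + 2)   # subdiagonal: 1/2, 1/3, 1/4, ...
    return A

# The 5x5 case reproduces the matrix in equation (1)
print(lesp_matrix(5))
```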
```python
from rogues import lesp
from matplotlib import pyplot
import seaborn as sns
from scipy.linalg import eigvals

sns.set()
palette = sns.color_palette("bright")

# Dimension of matrix
dim = 100
# Get lesp matrix
A = lesp(dim)
# Transpose matrix A
AT = A.T
# Calculate eigenvalues of A
Aev = eigvals(A)
# Calculate eigenvalues of A^T
ATev = eigvals(AT)
# Extract real and imaginary parts of the eigenvalues of A
A_X = [x.real for x in Aev]
A_Y = [x.imag for x in Aev]
# Extract real and imaginary parts of the eigenvalues of A^T
AT_X = [x.real for x in ATev]
AT_Y = [x.imag for x in ATev]

# Plot
ax = sns.scatterplot(x=A_X, y=A_Y, color='gray', marker='o', label=r'$\mathbf{A}$')
ax = sns.scatterplot(x=AT_X, y=AT_Y, color='blue', marker='x', label=r'$\mathbf{A}^T$')
# Give axis labels
ax.set(xlabel=r'real', ylabel=r'imag')
# Draw legend
ax.legend()

pyplot.show()
```

This produces the following plot of the eigenvalues for a \(100 \times 100\) matrix.

[Figure: scatter plot of the eigenvalues of \(\mathbf{A}\) and \(\mathbf{A}^T\) in the complex plane.]

Something is wrong: most of the eigenvalues of matrix \(\mathbf{A}\) are not equal to the eigenvalues of \(\mathbf{A}^T\), even though they should be identical. Some of them even have complex components! There is no error in the program; the discrepancy is caused by a loss of numerical accuracy in the eigenvalue calculation due to the limitation of hardware double precision (roughly 16 significant digits).

The Solution:

The scipy eigvals function calls the LAPACK routine DGEEV, which first reduces the input matrix to upper Hessenberg form by means of orthogonal similarity transformations. The QR algorithm is then used to further reduce the matrix to upper quasi-triangular Schur form, \(\mathbf{T}\), with 1-by-1 and 2-by-2 blocks on the main diagonal. The eigenvalues are read off from \(\mathbf{T}\).
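The two reduction stages DGEEV performs can be reproduced step by step, since scipy exposes both of them directly. This is a sketch of the pipeline on a small random matrix, not the lesp calculation itself:

```python
import numpy as np
from scipy.linalg import hessenberg, schur, eigvals

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))

# Stage 1: orthogonal reduction to upper Hessenberg form, A = Q H Q^T
# (H has zeros below its first subdiagonal)
H, Q = hessenberg(A, calc_q=True)
assert np.allclose(Q @ H @ Q.T, A)
assert np.allclose(np.tril(H, -2), 0)

# Stage 2: the QR algorithm takes H to real Schur form, A = Z T Z^T,
# where T is quasi-triangular with 1x1 and 2x2 diagonal blocks
T, Z = schur(A, output='real')
assert np.allclose(Z @ T @ Z.T, A)

# The eigenvalues read off T's diagonal blocks agree with those of A
for lam in eigvals(A):
    assert min(abs(lam - mu) for mu in eigvals(T)) < 1e-8
```

Both stages use only orthogonal similarity transformations, which preserve eigenvalues exactly in exact arithmetic; the rounding incurred at each step in finite precision is what an ill-conditioned matrix like lesp amplifies.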
For most applications this will produce very accurate eigenvalues, but when a problem is ill-conditioned, i.e. when a small change to the input matrix results in a significant change to the eigenvalues, more numerically stable methods are required. Failing that, higher working precision is needed, such as quadruple (32-digit), octuple (64-digit) or even arbitrary precision. If you are losing precision in a calculation it is always advisable to first analyse the methods and algorithms you are using to see if they can be improved; only after doing this is it desirable to increase the working precision, since extended precision calculations take significantly longer than double precision ones, being run in software rather than on hardware.

Increasing precision is simple in Python using the mpmath library. Note that mpmath is extremely slow for large matrices, so it is best avoided where performance matters. The following code calculates the same eigenvalues as before, this time at quadruple (32-digit) precision.
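As a brief aside, the `mp.dps` setting controls the number of significant digits used by every subsequent mpmath operation. A minimal sketch (the constant \(2^{-60}\) is chosen purely for illustration) of a cancellation that double precision loses entirely but 32-digit precision captures exactly:

```python
from mpmath import mp, mpf

# delta = 2**-60 is exactly representable in binary floating point
delta = 2.0 ** -60

# A hardware double has a 53-bit significand, so 1.0 + 2**-60
# rounds to exactly 1.0 and the subtraction returns 0.0
double_result = (1.0 + delta) - 1.0
print(double_result)  # 0.0

# At 32 significant digits (~112 bits) mpmath keeps the small term
mp.dps = 32
quad_result = (mpf(1) + mpf(2) ** -60) - mpf(1)
print(quad_result == mpf(2) ** -60)  # True
```

Because \(2^{-60}\) fits within the ~112-bit significand implied by `mp.dps = 32` but not within the 53 bits of a double, the subtraction is exact in mpmath and catastrophic in hardware arithmetic.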
```python
from rogues import lesp
from matplotlib import pyplot
import seaborn as sns
from mpmath import mp

# Set precision to 32 digits
mp.dps = 32

sns.set()
palette = sns.color_palette("bright")

# Dimension of matrix
dim = 100
# Lesp matrix
A = lesp(dim)
# Transpose matrix A
AT = A.T
# Calculate eigenvalues (and eigenvectors) of A
Aev, Aevec = mp.eig(mp.matrix(A))
# Calculate eigenvalues (and eigenvectors) of A^T
ATev, ATevec = mp.eig(mp.matrix(AT))
# Extract real and imaginary parts, cast back to float for plotting
A_X = [float(x.real) for x in Aev]
A_Y = [float(x.imag) for x in Aev]
AT_X = [float(x.real) for x in ATev]
AT_Y = [float(x.imag) for x in ATev]

# Plot
ax = sns.scatterplot(x=A_X, y=A_Y, color='gray', marker='o', label=r'$\mathbf{A}$')
ax = sns.scatterplot(x=AT_X, y=AT_Y, color='blue', marker='x', label=r'$\mathbf{A}^T$')
# Give axis labels
ax.set(xlabel=r'real', ylabel=r'imag')
# Draw legend
ax.legend()

pyplot.show()
```

This produces the following plot of the eigenvalues for a \(100 \times 100\) matrix.

[Figure: scatter plot of the eigenvalues of \(\mathbf{A}\) and \(\mathbf{A}^T\), now coincident and purely real.]

Success! We have shown that the eigenvalues of \(\mathbf{A}\) are equal to the eigenvalues of \(\mathbf{A}^T\), and that all the eigenvalues are real. This example highlights a recurring theme in computational studies of quantum systems: simulations of atoms and molecules regularly involve ill-conditioned standard and generalized eigenvalue problems. Numerically stable algorithms exist for such problems, but these occasionally fail, leaving us to brute-force the calculation with higher precision to minimize floating point rounding errors. The next post will discuss better ways to implement higher precision in numerical calculations.

This post is licensed under CC BY 4.0 by the author.
(c) 2022 Adam Luke Baskerville. Some rights reserved.