Lecture notes on matrix analysis

Mark W. Meckes

April 27, 2019

Contents

1 Linear algebra background
  1.1 Fundamentals
  1.2 Matrices and linear maps
  1.3 Rank and eigenvalues
2 Matrix factorizations
  2.1 SVD: The fundamental theorem of matrix analysis
  2.2 A first application: the Moore–Penrose inverse
  2.3 The spectral theorems and polar decomposition
  2.4 Factorizations involving triangular matrices
  2.5 Simultaneous factorizations
3 Eigenvalues of Hermitian matrices
  3.1 Variational formulas
  3.2 Inequalities for eigenvalues of two Hermitian matrices
  3.3 Majorization
4 Norms
  4.1 Vector norms
  4.2 Special classes of norms
  4.3 Duality
  4.4 Matrix norms
  4.5 The spectral radius
  4.6 Unitarily invariant norms
  4.7 Duality for matrix norms
5 Some topics in solving linear systems
  5.1 Condition numbers
  5.2 Sparse signal recovery
6 Positive (semi)definite matrices
  6.1 Characterizations
  6.2 Kronecker and Hadamard products
  6.3 Inequalities for positive (semi)definite matrices
7 Locations and perturbations of eigenvalues
  7.1 The Geršgorin circle theorem
  7.2 Eigenvalue perturbations for non-Hermitian matrices
8 Nonnegative matrices
  8.1 Inequalities for the spectral radius
  8.2 Perron's theorem
  8.3 Irreducible nonnegative matrices
  8.4 Stochastic matrices and Markov chains
  8.5 Reversible Markov chains
  8.6 Convergence rates for Markov chains
9 Spectral graph theory
  9.1 Eigenvalues of the adjacency matrix
  9.2 The graph Laplacian

1 Linear algebra background

If you need to brush up on linear algebra background, the best source is, of course, Linear Algebra, by Elizabeth S. Meckes and Mark W. Meckes, Cambridge University Press, 2018.

1.1 Fundamentals

We will use F to stand for either the set of real numbers R or the set of complex numbers C. In this class we will deal only with finite-dimensional vector spaces over R or C.
Basic terms which you should be comfortable with:

• vector space over F
• subspace
• span
• linearly (in)dependent
• basis
• standard basis of F^n
• dimension
• linear transformation/map
• matrix
• identity matrix
• identity map
• invertible matrix
• invertible linear map
• singular matrix
• singular linear map
• inverse matrix
• inverse map
• kernel/null space
• image/range
• determinant
• eigenvector
• eigenvalue
• characteristic polynomial
• inner product
• norm (associated with an inner product)
• standard inner product on F^n
• orthogonal
• orthonormal basis
• unitary map
• unitary matrix
• orthogonal matrix

1.2 Matrices and linear maps

Given a vector u ∈ V and a basis B = (v_1, \dots, v_n) of V, the matrix representing u with respect to B is the column matrix x ∈ M_{n,1} such that

    u = \sum_{i=1}^n x_i v_i.

Given a linear map T : V → W and bases B_1 = (v_1, \dots, v_n) of V and B_2 = (w_1, \dots, w_m) of W, the matrix representing T with respect to B_1 and B_2 is the unique matrix A ∈ M_{m,n} such that

    T(v_j) = \sum_{i=1}^m a_{ij} w_i

for each j. Equivalently, the jth column of A is the matrix of T(v_j) with respect to B_2. If V = W and we consider the same basis B = B_1 = B_2 in both cases, we speak simply of the matrix representing T with respect to B.

There are no universally agreed-upon notations for the above notions, and we will not introduce any, since they inevitably lead to a lot of notational clutter. Writing out "the matrix of T with respect to B_1 and B_2" is pretty cumbersome, too, but we won't need to write it that often after this section.

Proposition 1.1. Let B_1 be a basis of V and B_2 be a basis of W. Let T : V → W be a linear map, v ∈ V, and write w = T(v). Let A be the matrix of T with respect to B_1 and B_2, x the matrix of v with respect to B_1, and y the matrix of w with respect to B_2. Then y = Ax.
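Proposition 1.1 is easy to check numerically. The following sketch (an illustrative example using NumPy, not part of the notes; the basis, the map, and the vector chosen are arbitrary) picks a basis B of R^2 and a linear map T, builds the matrix A of T with respect to B, and verifies that the coordinates satisfy y = Ax.

```python
import numpy as np

# A basis B = (v_1, v_2) of R^2; the columns of P are the basis vectors.
P = np.array([[1.0,  1.0],
              [1.0, -1.0]])

# A linear map T on R^2, given by its action in standard coordinates.
D = np.array([[2.0, 0.0],
              [0.0, 3.0]])

# B-coordinates of a vector u solve P @ x = u.
def coords(u):
    return np.linalg.solve(P, u)

# Matrix A of T with respect to B (same basis on both sides):
# column j holds the B-coordinates of T(v_j), i.e. A = P^{-1} D P.
A = np.linalg.solve(P, D @ P)

v = np.array([4.0, 1.0])
x = coords(v)        # matrix of v with respect to B
y = coords(D @ v)    # matrix of w = T(v) with respect to B

assert np.allclose(y, A @ x)   # Proposition 1.1: y = Ax
```

The same check works with different bases B_1, B_2 on the two sides: one solves against the matrix whose columns are B_2 to get y, and builds A column by column from the B_2-coordinates of T(v_j).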