Lecture notes on matrix analysis
Mark W. Meckes
April 27, 2019
Contents

1 Linear algebra background
  1.1 Fundamentals
  1.2 Matrices and linear maps
  1.3 Rank and eigenvalues
2 Matrix factorizations
  2.1 SVD: The fundamental theorem of matrix analysis
  2.2 A first application: the Moore–Penrose inverse
  2.3 The spectral theorems and polar decomposition
  2.4 Factorizations involving triangular matrices
  2.5 Simultaneous factorizations
3 Eigenvalues of Hermitian matrices
  3.1 Variational formulas
  3.2 Inequalities for eigenvalues of two Hermitian matrices
  3.3 Majorization
4 Norms
  4.1 Vector norms
  4.2 Special classes of norms
  4.3 Duality
  4.4 Matrix norms
  4.5 The spectral radius
  4.6 Unitarily invariant norms
  4.7 Duality for matrix norms
5 Some topics in solving linear systems
  5.1 Condition numbers
  5.2 Sparse signal recovery
6 Positive (semi)definite matrices
  6.1 Characterizations
  6.2 Kronecker and Hadamard products
  6.3 Inequalities for positive (semi)definite matrices
7 Locations and perturbations of eigenvalues
  7.1 The Geršgorin circle theorem
  7.2 Eigenvalue perturbations for non-Hermitian matrices
8 Nonnegative matrices
  8.1 Inequalities for the spectral radius
  8.2 Perron's theorem
  8.3 Irreducible nonnegative matrices
  8.4 Stochastic matrices and Markov chains
  8.5 Reversible Markov chains
  8.6 Convergence rates for Markov chains
9 Spectral graph theory
  9.1 Eigenvalues of the adjacency matrix
  9.2 The graph Laplacian
                1   Linear algebra background
                If you need to brush up on linear algebra background, the best source is, of course,
                     Linear Algebra, by Elizabeth S. Meckes and Mark W. Meckes, Cambridge Uni-
                     versity Press, 2018.
                1.1   Fundamentals
We will use F to stand for either the set of real numbers R or the set of complex numbers C. In this class we will deal only with finite-dimensional vector spaces over R or C.
   Basic terms which you should be comfortable with:
                   • vector space over F
                   • subspace
                   • span
                   • linearly (in)dependent
                   • basis
                   • standard basis of Fn
                   • dimension
                   • linear transformation/map
                   • matrix
                   • identity matrix
                   • identity map
                   • invertible matrix
                   • invertible linear map
                   • singular matrix
                   • singular linear map
                   • inverse matrix
                   • inverse map
                   • kernel/null space
                   • image/range
                   • determinant
                                  • eigenvector
                                  • eigenvalue
                                  • characteristic polynomial
                                  • inner product
                                  • norm (associated with an inner product)
                                  • standard inner product on Fn
                                  • orthogonal
                                  • orthonormal basis
                                  • unitary map
                                  • unitary matrix
                                  • orthogonal matrix
1.2  Matrices and linear maps

Given a vector u ∈ V and a basis B = (v_1, ..., v_n) of V, the matrix representing u with respect to B is the column matrix x ∈ M_{n,1} such that

    u = ∑_{i=1}^n x_i v_i.
Given a linear map T : V → W and bases B_1 = (v_1, ..., v_n) of V and B_2 = (w_1, ..., w_m) of W, the matrix representing T with respect to B_1 and B_2 is the unique matrix A ∈ M_{m,n} such that

    T(v_j) = ∑_{i=1}^m a_{ij} w_i

for each j. Equivalently, the jth column of A is the matrix of T(v_j) with respect to B_2.
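To make the column-by-column description concrete, here is a small NumPy sketch (my own illustration, not part of the notes): take T to be differentiation on polynomials of degree at most 2, with B_1 = B_2 = (1, x, x^2); each column of A holds the coordinates of T applied to one basis vector.

```python
import numpy as np

# Illustrative sketch: T = d/dx on polynomials of degree <= 2,
# with basis B = (1, x, x^2).  A polynomial a_0 + a_1 x + a_2 x^2
# is identified with its coordinate vector (a_0, a_1, a_2).
# The j-th column of A is the B-coordinate vector of T(v_j):
#   T(1) = 0,  T(x) = 1,  T(x^2) = 2x.
A = np.column_stack([
    [0.0, 0.0, 0.0],   # coordinates of T(1)   = 0
    [1.0, 0.0, 0.0],   # coordinates of T(x)   = 1
    [0.0, 2.0, 0.0],   # coordinates of T(x^2) = 2x
])

# p(x) = 3 + 5x + 7x^2 has coordinates (3, 5, 7); p'(x) = 5 + 14x.
x_p = np.array([3.0, 5.0, 7.0])
print(A @ x_p)   # [ 5. 14.  0.]
```

Note that the only data needed to build A are the images of the basis vectors, exactly as the definition says.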
If V = W and we consider the same basis B = B_1 = B_2 in both cases, we speak simply of the matrix representing T with respect to B.
    There are no universally agreed-upon notations for the above notions, and we will not introduce any, since they inevitably lead to a lot of notational clutter. Writing out "the matrix of T with respect to B_1 and B_2" is pretty cumbersome, too, but we won't need to write it that often after this section.
                             Proposition 1.1. Let B1 be a basis of V and B2 be a basis of W. Let T : V → W be a
                             linear map, v ∈ V, and write w = T(v). Let A be the matrix of T with respect to B1 and
                             B2, x the matrix of v with respect to B1, and y the matrix of w with respect to B2. Then
y = Ax.
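Proposition 1.1 is easy to check numerically. In the sketch below (my own addition, with an arbitrarily chosen map M and basis matrix P), the map is T(v) = Mv on R^2, the columns of P are the basis B = B_1 = B_2, and the matrix of T with respect to B is A = P^{-1} M P; the coordinates x and y are obtained by solving against P.

```python
import numpy as np

# Numerical check of Proposition 1.1 for V = W = R^2.
# M and P are arbitrary illustrative choices (P must be invertible).
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])        # T(v) = M v in the standard basis
P = np.array([[1.0,  1.0],
              [1.0, -1.0]])       # columns of P: the basis B = B1 = B2

A = np.linalg.inv(P) @ M @ P      # matrix of T with respect to B

v = np.array([4.0, 2.0])
x = np.linalg.solve(P, v)         # coordinates of v with respect to B
y = np.linalg.solve(P, M @ v)     # coordinates of w = T(v) w.r.t. B

print(np.allclose(y, A @ x))      # True: y = A x
```

The same check works for any invertible P, which is a useful sanity test when implementing change-of-basis code.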