
PREFACE
Linear algebra forms the basis for much of modern mathematics—theoretical,
applied, and computational. The purpose of this book is to provide a broad
and solid foundation for the study of advanced mathematics. A secondary
aim is to introduce the reader to many of the interesting applications of linear
algebra.
Detailed outline of the book
Chapter 1 is optional reading; it provides a concise exposition of three main emphases of linear algebra: linear equations, best approximation, and diagonalization (that is, decoupling variables). No attempt is made to give precise definitions or results; rather, the intent is to give the reader a preview of some of the questions addressed by linear algebra before the abstract development begins.
Most students studying a book like this will already know how to solve systems of linear algebraic equations, and this knowledge is a prerequisite for the first three chapters. Gaussian elimination with back substitution is not presented until Section 3.7, where it is used to illustrate the theory of linear operator equations developed in the first six sections of Chapter 3. The discussion of Gaussian elimination was deliberately delayed; this arrangement reflects the nature of the book, which presents the theory of linear algebra rather than emphasizing mechanical calculations. However, if this arrangement is not suitable for a given class of students, there is no reason that Section 3.7 cannot be presented early in the course.
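For readers who want a concrete reference point, the following is a minimal Python sketch of Gaussian elimination with back substitution. It is not taken from the text (partial pivoting is added here for numerical stability; Section 3.7 may present the algorithm differently):

    import numpy as np

    def solve_gaussian(A, b):
        """Solve Ax = b by Gaussian elimination with back substitution,
        using partial pivoting for stability (an added detail, not
        necessarily how the book presents it)."""
        A = np.array(A, dtype=float)
        b = np.array(b, dtype=float)
        n = len(b)
        # Forward elimination: reduce A to upper triangular form.
        for k in range(n - 1):
            p = k + np.argmax(np.abs(A[k:, k]))        # pivot row
            A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
        # Back substitution: solve the triangular system bottom-up.
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        return x

    print(solve_gaussian([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # [0.8, 1.4]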
Many of the examples in the text involve spaces of functions and elementary calculus, and therefore a course in calculus is needed to appreciate much of the material.
The core of the book is formed by Chapters 2, 3, 4, and 6. They present an
axiomatic development of the most important elements of finite-dimensional
linear algebra: vector spaces, linear operators, norms and inner products, and
determinants and eigenvalues. Chapter 2 begins with the concept of a field, of
which the primary examples are R (the field of real numbers) and C (the field
of complex numbers). Other examples are finite fields, particularly Zp, the
field of integers modulo p (where p is a prime number). As much as possible,
the results in the core part of the book (particularly Chapters 2–4) are phrased
in terms of an arbitrary field, and examples are given that involve finite fields
as well as the more standard fields of real and complex numbers.
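As a small illustration of arithmetic in Zp (an example of mine, not from the text), note that every nonzero element has a multiplicative inverse, which can be computed via Fermat's little theorem:

    p = 7  # a prime; Z_7 = {0, 1, ..., 6} with arithmetic modulo 7

    def add(a, b): return (a + b) % p
    def mul(a, b): return (a * b) % p

    def inv(a):
        # Fermat's little theorem: a^(p-1) = 1 (mod p) for a != 0,
        # so a^(p-2) is the multiplicative inverse of a.
        return pow(a, p - 2, p)

    assert add(5, 4) == 2        # 5 + 4 = 9 = 2 (mod 7)
    assert mul(3, inv(3)) == 1   # every nonzero element is invertible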
Once fields are in hand, the concept of a vector space is introduced, along with the primary examples that will be studied in the text: Euclidean n-space and various spaces of functions. This is followed by the basic ideas necessary to describe vector spaces, particularly finite-dimensional vector spaces: subspaces, spanning sets, linear independence, and bases. Chapter 2 ends with two optional application sections, Lagrange polynomials (which form a special basis for the space of polynomials) and piecewise polynomials (which are useful in many computational problems, particularly in solving differential equations). These topics are intended to illustrate why we study the common properties of vector spaces and bases: in a variety of applications, common issues arise, so it is convenient to study them abstractly. In addition, Section 2.8.1 presents an application to discrete mathematics: Shamir's scheme for secret sharing, which requires interpolation over a finite field.
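To hint at how that application works, here is a hypothetical sketch of Shamir's scheme over Zp (the details in Section 2.8.1 may differ): the secret is the constant term of a random polynomial, the shares are point evaluations, and Lagrange interpolation at x = 0 recovers the secret.

    import random

    p = 2_147_483_647  # a large prime (2^31 - 1); all arithmetic is in Z_p

    def make_shares(secret, k, n):
        """Split `secret` into n shares; any k of them recover it."""
        coeffs = [secret] + [random.randrange(p) for _ in range(k - 1)]
        poly = lambda x: sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
        return [(x, poly(x)) for x in range(1, n + 1)]

    def recover(shares):
        """Lagrange interpolation at x = 0 over Z_p yields the constant term."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if j != i:
                    num = num * (-xj) % p
                    den = den * (xi - xj) % p
            secret = (secret + yi * num * pow(den, p - 2, p)) % p
        return secret

    shares = make_shares(secret=42, k=3, n=5)
    assert recover(shares[:3]) == 42   # any 3 of the 5 shares suffice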
Chapter 3 discusses linear operators, linear operator equations, and inverses of linear operators. Central is the fact that every linear operator between finite-dimensional spaces can be represented by a matrix, which means that there is a close connection between linear operator equations and systems of linear algebraic equations. As mentioned above, it is assumed in Chapter 2 that the reader is familiar with Gaussian elimination for solving linear systems, but the algorithm is carefully presented in Section 3.7, where it is used to illustrate the abstract results on linear operator equations. Applications for Chapter 3 include linear ordinary differential equations (viewed as linear operator equations), Newton's method for solving systems of nonlinear equations (which illustrates the idea of linearization), the use of matrices to represent graphs, binary linear block codes, and linear programming.
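The linearization idea behind Newton's method fits in a few lines; the following sketch (my own example, not the book's) solves a small 2 x 2 nonlinear system, where F and its Jacobian J are supplied by the user:

    import numpy as np

    def newton(F, J, x0, tol=1e-10, max_iter=50):
        """Newton's method for F(x) = 0: at each step, solve the
        linearized system J(x) s = -F(x) and update x <- x + s."""
        x = np.array(x0, dtype=float)
        for _ in range(max_iter):
            s = np.linalg.solve(J(x), -F(x))
            x += s
            if np.linalg.norm(s) < tol:
                break
        return x

    # Example: x^2 + y^2 = 1, x - y = 0 (solution: x = y = 1/sqrt(2)).
    F = lambda v: np.array([v[0]**2 + v[1]**2 - 1, v[0] - v[1]])
    J = lambda v: np.array([[2*v[0], 2*v[1]], [1.0, -1.0]])
    print(newton(F, J, [1.0, 0.5]))   # approximately [0.7071, 0.7071]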
Eigenvalues and eigenvectors are introduced in Chapter 4, where the emphasis
is on diagonalization, a technique for decoupling the variables in a
system so that it can be more easily understood or solved. As a tool for
studying eigenvalues, the determinant function is first developed. Elementary
facts about permutations are needed; these are developed in Appendix
B for the reader who has not seen them before. Results about polynomials
form further background for Chapter 4, and these are derived in Appendix C.
Chapter 4 closes with two interesting applications in which linear algebra is
key: systems of constant coefficient linear ordinary differential equations and
integer programming.
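As a concrete illustration of diagonalization (a numpy check of mine, not part of the text): a diagonalizable matrix A factors as A = PDP^(-1) with D diagonal, and the variables decouple in the sense that powers of A reduce to powers of the diagonal entries.

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])    # eigenvalues 5 and 2

    # Eigendecomposition: columns of P are eigenvectors, D holds eigenvalues.
    eigvals, P = np.linalg.eig(A)
    D = np.diag(eigvals)

    assert np.allclose(A, P @ D @ np.linalg.inv(P))   # A = P D P^(-1)
    # Decoupling: A^5 is computed entrywise on the diagonal of D.
    assert np.allclose(np.linalg.matrix_power(A, 5),
                       P @ np.diag(eigvals**5) @ np.linalg.inv(P))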
Chapter 4 shows that some matrices can be diagonalized, but others cannot. After this, there are two natural directions to pursue, given in Chapters 5 and 8. One is to try to make a nondiagonalizable matrix as close to diagonal form as possible; this is the subject of Chapter 5, and the result is the Jordan canonical form. As an application, the matrix exponential is presented, which completes the discussion of systems of ordinary differential equations that was begun in Chapter 4. A brief introduction to the spectral theory of graphs is also presented in Chapter 5. The remainder of the text does not depend on Chapter 5.
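For the ODE application, the solution of x'(t) = Ax(t) with x(0) = x0 is x(t) = e^(tA) x0; a quick numerical check (my example, using scipy, not code from the text):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])   # x'' = -x written as a first-order system
    x0 = np.array([1.0, 0.0])

    t = np.pi / 2
    x = expm(t * A) @ x0          # x(t) = e^(tA) x0
    # Here e^(tA) is a rotation, so x(t) = (cos t, -sin t).
    assert np.allclose(x, [np.cos(t), -np.sin(t)], atol=1e-12)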
Chapter 6 is about orthogonality and its most important application, best approximation. These concepts are based on the notion of an inner product and the norm it defines. The central result is the projection theorem, which shows how to find the best approximation to a given vector from a finite-dimensional subspace (an infinite-dimensional version appears in Chapter 10). This is applied to problems such as solving overdetermined systems of linear equations and approximating functions by polynomials. Orthogonality is also useful for representing vector spaces in terms of orthogonal subspaces; in particular, this gives a detailed understanding of the four fundamental subspaces defined by a linear operator. Application sections address weighted polynomial approximation, the Galerkin method for approximating solutions to differential equations, Gaussian quadrature (that is, numerical integration), and the Helmholtz decomposition for vector fields.
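As a small illustration of the projection theorem in action (an assumption-laden sketch of mine, not the book's treatment): the least-squares solution of an overdetermined system Ax ≈ b solves the normal equations A^T A x = A^T b, so that Ax is the orthogonal projection of b onto the column space of A.

    import numpy as np

    # Overdetermined system: fit a line c0 + c1*t to four data points.
    t = np.array([0.0, 1.0, 2.0, 3.0])
    b = np.array([1.1, 1.9, 3.2, 3.9])
    A = np.column_stack([np.ones_like(t), t])   # columns span the subspace

    # Normal equations A^T A x = A^T b give the best approximation A x to b.
    x = np.linalg.solve(A.T @ A, A.T @ b)

    residual = b - A @ x
    # Projection theorem: the residual is orthogonal to the subspace.
    assert np.allclose(A.T @ residual, 0)
    print(x)   # approximately [1.07, 0.97]: intercept and slope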
Title: Finite-Dimensional Linear Algebra
Author: Mark S. Gockenbach