Linear Algebra, Part One: Matrices

George E. Hrabovsky

MAST

Introduction

The language of quantum mechanics is largely that of linear algebra. There are two main views of linear algebra (okay, three, but modules are too advanced for now). One is based on abstract vector spaces and the other on matrices. There is an important principle that tells us the two are equivalent. We will start with matrices, as they allow us to make concrete calculations.

Matrices

A matrix is a rectangular array of real or complex numbers. We say that a matrix has M rows and N columns. The number of rows and columns forms the order of the matrix; we can also call it an M × N matrix. If the matrix is labeled A, then its elements are labeled by row and column as indices, a_ij, where i = 1, ..., M and j = 1, ..., N.

    | a_11  a_12  ...  a_1N |
A = | a_21  a_22  ...  a_2N |
    |  .     .          .   |
    | a_M1  a_M2  ...  a_MN |

You might also see the elements of a matrix written with other index placements, such as a^ij, a^i_j, or a_i^j. For us this is not important; we will use subscripts. There are times when this placement is very important, so be sure you know what context you are working in.

A matrix having one row and N columns is a row matrix,

( a_1  a_2  ...  a_N )

A matrix having M rows and a single column is a column matrix,

| a_1 |
| a_2 |
|  .  |
| a_M |

A matrix having the same number of rows and columns is called an N × N square matrix.

Basic Matrix Arithmetic

For the exercises that follow we will use the following matrices:

retreat1-2_8.png

retreat1-2_9.png

retreat1-2_10.png

retreat1-2_11.png

If two matrices of the same order, say O and P, have equal corresponding elements, then they are equal and we write O = P.

We can add two matrices by adding their elements,

(O + P)_ij = o_ij + p_ij

LA1-1: Add A to the matrices B through F.

We can also subtract two matrices by subtracting their elements,

(O - P)_ij = o_ij - p_ij

In order to add or subtract matrices they must be of the same order; such matrices are called conformable.
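Since the example matrices A through F are not reproduced here, the sketch below uses its own small matrices. It is a minimal plain-Python illustration of elementwise addition and subtraction; the helper names `order`, `add`, and `sub` are my own, and the conformability check mirrors the rule just stated.

```python
def order(M):
    """Return the order (rows, columns) of a matrix stored as a list of lists."""
    return len(M), len(M[0])

def add(O, P):
    # Adding elementwise requires the matrices to be conformable (same order).
    if order(O) != order(P):
        raise ValueError("matrices must be conformable (same order)")
    return [[O[i][j] + P[i][j] for j in range(len(O[0]))]
            for i in range(len(O))]

def sub(O, P):
    if order(O) != order(P):
        raise ValueError("matrices must be conformable (same order)")
    return [[O[i][j] - P[i][j] for j in range(len(O[0]))]
            for i in range(len(O))]

O = [[1, 2], [3, 4]]
P = [[5, 6], [7, 8]]
print(add(O, P))  # [[6, 8], [10, 12]]
print(sub(O, P))  # [[-4, -4], [-4, -4]]
```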

We can multiply a matrix by a number, say a, by multiplying each element by a,

(a O)_ij = a o_ij

In this way we can change the sign of a matrix,

-O = (-1) O, so (-O)_ij = -o_ij

The following rules apply, first addition is commutative,

O + P = P + O

Addition is associative,

(O + P) + Q = O + (P + Q)

There is an additive identity; in this case it is a matrix all of whose elements are 0, which we will label 0,

    | 0  ...  0 |
0 = | .       . |
    | 0  ...  0 |

so

O + 0 = O

There is an additive inverse,

O + (-O) = 0

Multiplication by a number distributes over the addition of matrices,

a (O + P) = a O + a P

and over the addition of numbers,

(a + b) O = a O + b O

Matrix Multiplication

Given two matrices, R and S, where R is an M × N matrix and S is an N × T matrix, the matrix product of the two is

(R S)_ik = r_i1 s_1k + r_i2 s_2k + ... + r_iN s_Nk

where i = 1, ..., M, j = 1, ..., N, and k = 1, ..., T. Written out, the elements are sums of products.

For example, in the 3 × 3 case (M = N = T = 3):

(R S)_11 = r_11 s_11 + r_12 s_21 + r_13 s_31
(R S)_12 = r_11 s_12 + r_12 s_22 + r_13 s_32
(R S)_13 = r_11 s_13 + r_12 s_23 + r_13 s_33
(R S)_21 = r_21 s_11 + r_22 s_21 + r_23 s_31
(R S)_22 = r_21 s_12 + r_22 s_22 + r_23 s_32
(R S)_23 = r_21 s_13 + r_22 s_23 + r_23 s_33
(R S)_31 = r_31 s_11 + r_32 s_21 + r_33 s_31
(R S)_32 = r_31 s_12 + r_32 s_22 + r_33 s_32
(R S)_33 = r_31 s_13 + r_32 s_23 + r_33 s_33
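The definition above translates directly into a triple loop (or nested comprehension): entry (i, k) of R S is the sum over j of r_ij s_jk. A sketch in plain Python, with `matmul` as an illustrative helper name:

```python
def matmul(R, S):
    """Product of an M x N matrix R and an N x T matrix S, as lists of lists."""
    M, N, T = len(R), len(S), len(S[0])
    assert all(len(row) == N for row in R), "inner orders must agree"
    # (R S)_ik = sum over j of r_ij * s_jk
    return [[sum(R[i][j] * S[j][k] for j in range(N)) for k in range(T)]
            for i in range(M)]

R = [[1, 2, 3],
     [4, 5, 6]]          # 2 x 3
S = [[7, 8],
     [9, 10],
     [11, 12]]           # 3 x 2
print(matmul(R, S))      # [[58, 64], [139, 154]]  (a 2 x 2 result)
```

Note that the result is M × T: the inner dimension N is summed away.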

Assuming all matrices are conformable, the matrix product is left- and right-distributive over addition, and associative,

R (S + Q) = R S + R Q

(S + Q) R = S R + Q R

R (S Q) = (R S) Q

In general, the matrix product is not commutative.

In general, O P = 0 does not imply that either O = 0 or P = 0.

In general, A B = A C does not imply that B = C.
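These cautions are easy to verify with small concrete matrices. A quick plain-Python check (reusing an illustrative `matmul` helper): for these 2 × 2 matrices O P differs from P O, and O O = 0 even though O ≠ 0.

```python
def matmul(R, S):
    # (R S)_ik = sum over j of r_ij * s_jk
    return [[sum(R[i][j] * S[j][k] for j in range(len(S)))
             for k in range(len(S[0]))] for i in range(len(R))]

O = [[0, 1], [0, 0]]
P = [[0, 0], [1, 0]]
print(matmul(O, P))  # [[1, 0], [0, 0]]
print(matmul(P, O))  # [[0, 0], [0, 1]]  -- not equal to O P
print(matmul(O, O))  # [[0, 0], [0, 0]]  -- O O = 0 although O != 0
```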

LA1-2: Multiply B by each of the matrices B through F, and then reverse the order of multiplication.

Special Matrices I

We now introduce four important kinds of matrices.

A square matrix with all off-diagonal elements zero is called a diagonal matrix.

A diagonal matrix whose diagonal elements are all 1, is called the identity matrix, and is denoted I.

A square matrix whose elements satisfy the condition a_ij = 0 for i > j is called an upper triangular matrix.

A square matrix whose elements satisfy the condition a_ij = 0 for i < j is called a lower triangular matrix. Another way of defining a diagonal matrix is that it is both upper and lower triangular.
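These definitions are simple index conditions, so they make short predicates in code. A plain-Python sketch (the function names are my own); note that `is_diagonal` is literally "both upper and lower triangular", matching the remark above.

```python
def identity(n):
    """The n x n identity matrix: 1 on the diagonal, 0 elsewhere."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def is_upper_triangular(A):
    # every element below the diagonal (i > j) is zero
    return all(A[i][j] == 0 for i in range(len(A)) for j in range(len(A)) if i > j)

def is_lower_triangular(A):
    # every element above the diagonal (i < j) is zero
    return all(A[i][j] == 0 for i in range(len(A)) for j in range(len(A)) if i < j)

def is_diagonal(A):
    return is_upper_triangular(A) and is_lower_triangular(A)

print(identity(3))                            # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(is_diagonal([[2, 0], [0, 5]]))          # True
print(is_upper_triangular([[1, 2], [0, 3]]))  # True
```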

Matrix Inverses

If O P = I = P O, then P is the inverse matrix of O, written P = O^(-1).

We also note that (O^(-1))^(-1) = O.

Similarly (O P)^(-1) = P^(-1) O^(-1).

In general, for a 2 × 2 matrix

A = | a  b |
    | c  d |

the inverse is

A^(-1) = 1/(a d - b c) |  d  -b |
                       | -c   a |

provided that a d - b c ≠ 0.
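The 2 × 2 formula can be checked mechanically. A sketch using exact rational arithmetic (`fractions.Fraction`) so the verification O O^(-1) = I is exact rather than approximate; `inverse2` is an illustrative name for this 2 × 2-only helper.

```python
from fractions import Fraction

def inverse2(A):
    """Inverse of a 2 x 2 matrix via the formula 1/(ad - bc) * [[d, -b], [-c, a]]."""
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular; no inverse exists")
    f = Fraction(1, det)
    return [[ f * d, -f * b],
            [-f * c,  f * a]]

A = [[4, 7], [2, 6]]   # det = 4*6 - 7*2 = 10
Ainv = inverse2(A)
# multiply back: A times Ainv should be the identity
check = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
print(check == [[1, 0], [0, 1]])  # True
```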

LA1-3: Find the inverses of the matrices A through F.

Matrix Transpose

A matrix formed by interchanging the rows and columns of another matrix is the transpose of that matrix. We denote this with a T superscript,

(O^T)_ij = o_ji

For example, for a 2 × 2 matrix,

( a  b )^T = ( a  c )
( c  d )     ( b  d )

The following rules hold.

(O^T)^T = O

(O + P)^T = O^T + P^T

(a O)^T = a O^T

(O P)^T = P^T O^T

(O^(-1))^T = (O^T)^(-1)
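The reversal rule for the transpose of a product is the easiest of these to get wrong, so it is worth checking numerically. A plain-Python sketch (helper names `transpose` and `matmul` are my own) confirming (R S)^T = S^T R^T on a small example:

```python
def transpose(A):
    # (A^T)_ij = a_ji : swap the roles of the two indices
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

def matmul(R, S):
    return [[sum(R[i][j] * S[j][k] for j in range(len(S)))
             for k in range(len(S[0]))] for i in range(len(R))]

R = [[1, 2], [3, 4]]
S = [[5, 6], [7, 8]]
print(transpose(R))  # [[1, 3], [2, 4]]
# the transpose of a product reverses the order of the factors
print(transpose(matmul(R, S)) == matmul(transpose(S), transpose(R)))  # True
```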

Special Matrices II

A matrix equal to its transpose is called symmetric, O^T = O.

A matrix equal to its negative transpose is called skew-symmetric, O^T = -O.

Determinants

A determinant is a special kind of function that takes a square matrix and converts it into a scalar. We will begin with notation: the determinant of a matrix A is denoted either det(A) or |A|. So, what is a determinant? We will begin with the simplest matrix,

A = ( a_11 )

then the determinant is said to be of first order and is written,

det(A) = a_11

Interesting result, but not very revealing.

Let's look at an arbitrary 2×2 matrix,

A = | a_11  a_12 |
    | a_21  a_22 |

this determinant is said to be of second order,

det(A) = a_11 a_22 - a_12 a_21

We can see how this structure comes about by examining the array,

[diagram: the 2 × 2 array with the main diagonal (a_11, a_22) marked + and the anti-diagonal (a_12, a_21) marked -]

here you can see that the first term in the expansion is given by multiplying the diagonal elements of the array from the upper + side. The second term is given by multiplying the diagonal elements from the upper - side and keeping the negative sign, thus we subtract the second term.

This gives us a better clue about how to proceed. We can extend this to an arbitrary 3×3 matrix,

A = | a_11  a_12  a_13 |
    | a_21  a_22  a_23 |
    | a_31  a_32  a_33 |

then the determinant is,

det(A) = a_11 a_22 a_33 + a_12 a_23 a_31 + a_13 a_21 a_32
         - a_13 a_22 a_31 - a_11 a_23 a_32 - a_12 a_21 a_33

We can look at the array and see geometrically how we combined the elements,

[diagram: the 3 × 3 array with the main diagonal a_11, a_22, a_33 highlighted]

here we have the first positive term, a_11 a_22 a_33. We can also construct the next positive term,

[diagram: the wrapped diagonal a_12, a_23, a_31 highlighted]

a_12 a_23 a_31. We can then construct the final positive term,

[diagram: the wrapped diagonal a_13, a_21, a_32 highlighted]

a_13 a_21 a_32. We can also construct the first negative term,

[diagram: the anti-diagonal a_13, a_22, a_31 highlighted]

a_13 a_22 a_31. Then we have the second negative term,

[diagram: the wrapped anti-diagonal a_11, a_23, a_32 highlighted]

a_11 a_23 a_32. Then we have the last negative term,

[diagram: the wrapped anti-diagonal a_12, a_21, a_33 highlighted]

a_12 a_21 a_33. If we look at this expansion long enough we will notice that we can factor it a bit,

det(A) = a_11 (a_22 a_33 - a_23 a_32) - a_12 (a_21 a_33 - a_23 a_31) + a_13 (a_21 a_32 - a_22 a_31)

Something interesting is happening here. If we convert the differences into lower-order determinants,

det(A) = a_11 | a_22  a_23 | - a_12 | a_21  a_23 | + a_13 | a_21  a_22 |
              | a_32  a_33 |        | a_31  a_33 |        | a_31  a_32 |

The first determinant is gained by eliminating row 1 and column 1, the indices of the coefficient of the determinant,

| a_22  a_23 |
| a_32  a_33 |

the second determinant is gained by eliminating row 1 and column 2,

| a_21  a_23 |
| a_31  a_33 |

the final determinant is gained by eliminating row 1 and column 3,

| a_21  a_22 |
| a_31  a_32 |

Here we note that we have made three submatrices whose determinants are of order one less than the order of the original matrix. Such a determinant is called a minor of the element a_ij, where we have eliminated row i and column j. We can denote the minor as M_ij. So given the 3 × 3 matrix above we can find the minor of an element, say a_31. We first eliminate the 3rd row and then the first column,

[diagram: the 3 × 3 array with row 3 and column 1 struck out]

this gives us,

M_31 = | a_12  a_13 | = a_12 a_23 - a_13 a_22
       | a_22  a_23 |

In this way we can actually calculate the terms of the determinant. If we consider the sign attached to the minor, then we can define the cofactor of the element a_ij as,

C_ij = (-1)^(i+j) M_ij

Using our example above we find the cofactor for a_31,

C_31 = (-1)^(3+1) M_31 = M_31 = a_12 a_23 - a_13 a_22

We can now write the determinant as the sum of the products of the elements of any row or column with their respective cofactors. For the ith row we have,

det(A) = a_i1 C_i1 + a_i2 C_i2 + ... + a_in C_in

for the jth column,

det(A) = a_1j C_1j + a_2j C_2j + ... + a_nj C_nj

These are called the Laplace expansions of the determinant. Here is a numerical example,

retreat1-2_87.png

retreat1-2_88.png

Step 1: Choose an element, say retreat1-2_89.png. In our example retreat1-2_90.png.

Step 2: Eliminate the rows and columns associated with our choice. In our case we remove the first row and the first column,

retreat1-2_91.gif

giving us the minor for retreat1-2_92.png,

retreat1-2_93.gif

retreat1-2_94.png

retreat1-2_95.png

retreat1-2_96.png

retreat1-2_97.png

retreat1-2_98.png

retreat1-2_99.png

retreat1-2_100.png

Step 3: Determine the relevant cofactor,

retreat1-2_101.png

retreat1-2_102.png

retreat1-2_103.png

retreat1-2_104.png

retreat1-2_105.png

retreat1-2_106.png

retreat1-2_107.png

retreat1-2_108.png

retreat1-2_109.png

Step 4: Expand through each element using the row or column expansion above,

retreat1-2_110.png

retreat1-2_111.png

retreat1-2_112.png
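The four steps above can be turned into a short recursive routine: expand along the first row, where each cofactor is a signed minor, and each minor is itself a smaller determinant. A plain-Python sketch (helper names `minor` and `det` are my own; this is not tied to the numerical example above):

```python
def minor(A, i, j):
    """The submatrix of A with row i and column j removed (0-indexed)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant by Laplace expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    # cofactor sign (-1)^(1+j) in 1-based indices is (-1)^j in 0-based indices
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det(A))  # -3
```

This is fine for small matrices, but the recursion does n! multiplications, which is why larger determinants are computed by row reduction instead.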

Cramer’s Rule

One application of determinants is finding the solution of a matrix equation. If we have a square matrix A, and two column matrices x and b,

A x = b

we can solve for the ith component of x,

x_i = det(A_i) / det(A)

where A_i is the matrix formed by replacing the ith column of A with b.
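For a 2 × 2 system the rule needs only three determinants. A plain-Python sketch (the helpers `det2` and `cramer2` are illustrative names for this 2 × 2-only case):

```python
def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def cramer2(A, b):
    """Solve the 2 x 2 system A x = b by Cramer's rule."""
    d = det2(A)
    if d == 0:
        raise ValueError("det(A) = 0; Cramer's rule does not apply")
    # A_i replaces the ith column of A with b
    A0 = [[b[0], A[0][1]], [b[1], A[1][1]]]
    A1 = [[A[0][0], b[0]], [A[1][0], b[1]]]
    return [det2(A0) / d, det2(A1) / d]

A = [[2, 1], [1, 3]]
b = [5, 10]
print(cramer2(A, b))  # [1.0, 3.0]   i.e. x_1 = 1, x_2 = 3
```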

Adjoints

The transpose of the complex conjugate of a matrix is called the adjoint of the matrix and is written A^†. A matrix that is equal to its adjoint is called Hermitian. The adjoint is sometimes called the Hermitian conjugate.

If a matrix has a non-zero determinant, then its inverse is

A^(-1) = (1/det(A)) adj(A)

where adj(A), the classical adjoint (also called the adjugate), is the transpose of the matrix of cofactors. Note that this is a different sense of the word "adjoint" than the Hermitian conjugate above.

Eigenvalues and Eigenvectors

Let’s say we have a square matrix O, a column matrix x, and a number λ. Consider the matrix equation

O x = λ x

we can rewrite this

(O - λ I) x = 0

We can then solve this equation. The most obvious solution occurs when x is the zero column matrix, this is the trivial solution. Nontrivial solutions exist when

det(O - λ I) = 0

This is the characteristic equation of O.

We can expand this into a characteristic polynomial in λ of degree n,

λ^n + c_(n-1) λ^(n-1) + ... + c_1 λ + c_0 = 0

thus having n roots. The roots of the characteristic polynomial are the set of eigenvalues of the matrix, the values of λ that satisfy the characteristic equation. A solution of the original matrix equation in the form of a value for the column matrix x will exist for every eigenvalue and is called an eigenvector.
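For a 2 × 2 matrix the characteristic equation det(O - λ I) = 0 is just a quadratic, λ^2 - (trace) λ + det = 0, so the eigenvalues follow from the quadratic formula. A plain-Python sketch (`eigenvalues2` is my own 2 × 2-only helper; `cmath.sqrt` is used so complex eigenvalues also come out correctly):

```python
import cmath

def eigenvalues2(O):
    """Eigenvalues of a 2 x 2 matrix from its characteristic polynomial."""
    tr = O[0][0] + O[1][1]                        # trace
    det = O[0][0] * O[1][1] - O[0][1] * O[1][0]   # determinant
    # roots of lambda^2 - tr*lambda + det = 0
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

O = [[2, 1], [1, 2]]
lam1, lam2 = eigenvalues2(O)
print(lam1, lam2)  # (3+0j) (1+0j)
```

For each eigenvalue, substituting back into (O - λ I) x = 0 and solving the resulting linear system gives the corresponding eigenvector.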

LA1-4: Find the eigenvalues of the matrices A through F.

LA1-5: Find the eigenvectors of the matrices A through F.

Created with the Wolfram Language
