Matrix and Vector Operations

George E. Hrabovsky

MAST

Introduction

We have already seen how to use matrices in data analysis. Here we will review the basics of using Mathematica to solve problems involving both matrices and vectors.

Constructing Matrices

How do we enter a matrix into Mathematica? The simplest way is to go to the Insert menu above. Choose Table/Matrix. Then choose New. Click Matrix. Then specify the number of rows and columns. I chose 2 each. Then click on OK. This is what you get.

l6_1.png

You can enter what you like into the spaces.

The limitation of this method becomes apparent in two cases: when the matrix is large, and when its entries are determined by some formula or process.

For these situations we use the Table[] command. For example, if we have a 3×3 matrix whose elements are given by a formula, then we might write what follows.

l6_2.png

l6_3.png

We can see this in matrix form.

l6_4.png

l6_5.png

We can also use the Array command.

l6_6.png

l6_7.png
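Since the inputs above appear only as images, here is an illustrative sketch of building matrices with Table and Array; the formula i + j and the names m1 and m2 are assumptions, not the lesson's originals.

```wolfram
(* Build a 3x3 matrix whose entries follow a formula. *)
m1 = Table[i + j, {i, 3}, {j, 3}]
(* {{2, 3, 4}, {3, 4, 5}, {4, 5, 6}} *)

MatrixForm[m1]  (* display in traditional matrix layout *)

(* Array applies a function to each index pair. *)
m2 = Array[#1 - #2 &, {3, 3}]
(* {{0, -1, -2}, {1, 0, -1}, {2, 1, 0}} *)
```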

We can construct an array of the coefficients of a list of polynomials using the CoefficientArrays command.

l6_8.png

l6_9.png

l6_10.png

l6_11.gif

l6_12.png

l6_13.png

l6_14.png

l6_15.png
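As a hedged sketch of CoefficientArrays (the polynomials here are illustrative assumptions): it returns SparseArray objects {s0, s1, ...} such that the system equals s0 + s1.{x, y} + ... .

```wolfram
(* Two assumed linear polynomials in x and y. *)
{s0, s1} = CoefficientArrays[{x + 2 y - 3, 4 x - y}, {x, y}];
Normal[s0]  (* constant terms:      {-3, 0} *)
Normal[s1]  (* linear coefficients: {{1, 2}, {4, -1}} *)
```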

We can combine two matrices by using the Join command.

l6_16.png

l6_17.png

l6_18.png

l6_19.png
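A minimal sketch of Join on two assumed 2×2 matrices; the third argument selects the level at which the lists are joined.

```wolfram
a = {{1, 2}, {3, 4}};
b = {{5, 6}, {7, 8}};
Join[a, b]     (* stack rows:   {{1, 2}, {3, 4}, {5, 6}, {7, 8}} *)
Join[a, b, 2]  (* join columns: {{1, 2, 5, 6}, {3, 4, 7, 8}} *)
```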

Elements and Shapes of Matrices

We can use all of the list commands to pull out the elements of a matrix. Here we take the third row of our first matrix.

l6_20.png

l6_21.png

Here is the second column.

l6_22.png

l6_23.png

We can also get a submatrix using the Take command.

l6_24.png

l6_25.png

l6_26.png

l6_27.png
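Since the examples above are images, here is an illustrative version with an assumed matrix, using Part (double brackets) and Take.

```wolfram
m = Table[10 i + j, {i, 3}, {j, 3}]
(* {{11, 12, 13}, {21, 22, 23}, {31, 32, 33}} *)
m[[3]]         (* third row:     {31, 32, 33} *)
m[[All, 2]]    (* second column: {12, 22, 32} *)
Take[m, 2, 2]  (* upper-left 2x2 submatrix: {{11, 12}, {21, 22}} *)
```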

We can also take pieces away from a matrix to make a new matrix by using the Drop command.

l6_28.png

l6_29.png

Here we remove the first row from exp1.

l6_30.png

l6_31.png

Here we remove the first column.

l6_32.png

l6_33.png
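A sketch of Drop on an assumed matrix (not the lesson's exp1): the second and third arguments control how many rows and columns are removed.

```wolfram
m = {{11, 12, 13}, {21, 22, 23}, {31, 32, 33}};
Drop[m, 1]     (* drop the first row:    {{21, 22, 23}, {31, 32, 33}} *)
Drop[m, 0, 1]  (* drop the first column: {{12, 13}, {22, 23}, {32, 33}} *)
```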

If we want only particular parts of a matrix we use the Extract command.

l6_34.png

l6_35.png

Here we extract the first row from exp1.

l6_36.png

l6_37.png

Here we extract the first column.

l6_38.png

l6_39.png
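An illustrative sketch of Extract with an assumed matrix: it pulls out whole parts or individual elements by position.

```wolfram
m = {{11, 12, 13}, {21, 22, 23}, {31, 32, 33}};
Extract[m, {1}]               (* first row: {11, 12, 13} *)
Extract[m, {{1, 1}, {3, 3}}]  (* elements at {1,1} and {3,3}: {11, 33} *)
```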

Given a full array, one whose rows and columns form a rectangular array, we can get a list of the dimensions of the array using the Dimensions command.

l6_40.png

l6_41.png

l6_42.png

l6_43.png

l6_44.png

l6_45.png

An array that is not full is sometimes termed ragged.

l6_46.png

l6_47.png

In this last case the rows do not all have the same number of elements, so the array is ragged.
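To illustrate with assumed arrays: Dimensions reports the shape of a full array, while a ragged array only has a well-defined length at the first level.

```wolfram
Dimensions[{{1, 2, 3}, {4, 5, 6}}]  (* full 2x3 array: {2, 3} *)
ragged = {{1, 2, 3}, {4, 5}};
Dimensions[ragged]  (* only the first level is rectangular: {2} *)
MatrixQ[ragged]     (* False *)
```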

We can extract the diagonal of a matrix using the Diagonal command.

l6_48.png

l6_49.png

We can transpose a matrix.

l6_50.png

l6_51.png

Transpose can also be accomplished by typing [ESC]tr[ESC].

l6_52.png

l6_53.png
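A minimal sketch of Diagonal and Transpose on an assumed matrix:

```wolfram
m = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
Diagonal[m]     (* {1, 5, 9} *)
Diagonal[m, 1]  (* first superdiagonal: {2, 6} *)
Transpose[m]    (* {{1, 4, 7}, {2, 5, 8}, {3, 6, 9}} *)
```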

As an exercise read through the documentation on Transpose and work several examples.

What about the Hermitian (or Conjugate) Transpose? We can use [ESC]ct[ESC], or the command ConjugateTranspose[].

l6_54.png

l6_55.png

This is not a good test as there were no complex numbers.

l6_56.png

l6_57.png

l6_58.png

l6_59.png

As an exercise read through the documentation on ConjugateTranspose and work several examples.

If we want to keep the upper triangular elements as they are and set all other elements to zero, we use the UpperTriangularize command.

l6_60.png

l6_61.png

l6_62.png

l6_63.png

We can do a similar thing with the LowerTriangularize command.

l6_64.png

l6_65.png

l6_66.png

l6_67.png
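Both commands can be sketched on an assumed matrix:

```wolfram
m = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
UpperTriangularize[m]  (* {{1, 2, 3}, {0, 5, 6}, {0, 0, 9}} *)
LowerTriangularize[m]  (* {{1, 0, 0}, {4, 5, 0}, {7, 8, 9}} *)
```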

It is also possible to change an element of a matrix using the ReplacePart command.

l6_68.png

l6_69.png

For example, we can change exp14 to have a 4 in the {1,3} position.

l6_70.png

l6_71.png
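An illustrative version of the ReplacePart example, with an assumed matrix in place of exp14:

```wolfram
m = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
ReplacePart[m, {1, 3} -> 4]  (* {{1, 2, 4}, {4, 5, 6}, {7, 8, 9}} *)
```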

Vectors

From the perspective of lists and matrices, a vector can be represented as a one-dimensional list. By default Mathematica displays a vector as a row, but in matrix operations it is treated as a column vector. Any method that allows you to make a matrix can, therefore, be used to construct a vector.

l6_72.png

l6_73.png

This seems to be a row matrix, but if we apply MatrixForm we see that it is a column matrix.

l6_74.png

l6_75.png

A row matrix is constructed this way.

l6_76.png

l6_77.png

l6_78.png

l6_79.png

We can also create a vector field from a scalar field by taking its gradient.

l6_80.png

l6_81.png

We can also use other coordinate systems.

l6_82.png

l6_83.png

We can do the same thing with the keyboard combination [ESC]del[ESC] followed by [CTRL]_ to list the variables.

l6_84.png

l6_85.png
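As a sketch of Grad with an assumed scalar field (the field x^2 y + z is illustrative, not from the lesson):

```wolfram
Grad[x^2 y + z, {x, y, z}]  (* {2 x y, x^2, 1} *)

(* The same command with a coordinate chart, here spherical {r, θ, φ}: *)
Grad[r^2 Sin[θ], {r, θ, φ}, "Spherical"]  (* {2 r Sin[θ], r Cos[θ], 0} *)
```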

Matrix-Vector Multiplication

In lesson two we examined how to multiply a vector by a matrix. We can use the same notation for matrices in general. For example

l6_86.png

l6_87.png

We can see that the order of multiplication is important.

l6_88.png

l6_89.png
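The order dependence can be sketched with an assumed matrix and vector: the same flat list acts as a column on the right of the dot and as a row on the left.

```wolfram
m = {{1, 2}, {3, 4}};
v = {5, 6};
m . v  (* v as a column: {17, 39} *)
v . m  (* v as a row:    {23, 34} *)
```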

Vector Operations

Let’s say that we define another vector.

l6_90.png

l6_91.png

We can add vectors.

l6_92.png

l6_93.png

Scalar multiplication is accomplished in a completely intuitive way.

l6_94.png

l6_95.png

The scalar product is accomplished using the . symbol.

l6_96.png

l6_97.png

You can also use the Dot command.

l6_98.png

l6_99.png

The vector product is accomplished using the combination [ESC]cross[ESC].

l6_100.png

l6_101.png

You can also use the Cross command.

l6_102.png

l6_103.png
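The operations above can be sketched with assumed vectors (not the lesson's definitions of vec1 and vec2):

```wolfram
vec1 = {1, 2, 3};
vec2 = {4, 5, 6};
vec1 + vec2        (* {5, 7, 9} *)
3 vec1             (* scalar multiplication: {3, 6, 9} *)
vec1 . vec2        (* scalar (dot) product: 32 *)
Cross[vec1, vec2]  (* vector (cross) product: {-3, 6, -3} *)
```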

We can find the norm of a vector using the Norm command.

l6_104.png

l6_105.png

We can also evaluate the norm of a numerical matrix.

l6_106.png

l6_107.png

If we have a symbolic matrix

l6_108.png

l6_109.png

We can write its norm.

l6_110.png

l6_111.png

We can restrict this to the reals to eliminate the conjugation.

l6_112.png

l6_113.png

We can get the Jacobian matrix by taking the gradient of a vector field; in this case the result is the Hessian, because we are taking the gradient of a gradient.

l6_114.png

l6_115.png

We can also make this specific.

l6_116.png

l6_117.png

We can find the divergence of a vector field by using the Div command. If our generic vector field is composed of three functions of the coordinates we write the field.

l6_118.png

l6_119.png

The divergence of this field in Cartesian coordinates is then calculated.

l6_120.png

l6_121.png

We can also produce the divergence in spherical coordinates.

l6_122.png

l6_123.png

We can find the Curl of a vector field via the Curl command.

l6_124.png

l6_125.png

Here we have spherical coordinates.

l6_126.png

l6_127.png
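Hedged sketches of Div and Curl (the fields and function names f1, f2, f3 are assumptions):

```wolfram
(* A generic field of three unspecified functions of the coordinates: *)
field = {f1[x, y, z], f2[x, y, z], f3[x, y, z]};
Div[field, {x, y, z}]  (* the sum of the three partial derivatives *)

Curl[{-y, x, 0}, {x, y, z}]             (* {0, 0, 2} *)
Div[{r, 0, 0}, {r, θ, φ}, "Spherical"]  (* 3 *)
```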

Say we have another vector.

l6_128.png

l6_129.png

We can find the angle between vectors by using the VectorAngle command.

l6_130.png

l6_131.png

We can use Normalize[] to find the unit vector in the direction of vec1.

l6_132.png

l6_133.png

How do we project vec1 onto vec2?

l6_134.png

l6_135.png

How do we find an orthonormal set of vectors for these two vectors?

l6_136.png

l6_137.png

This takes a little bit of time.
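A minimal sketch of these four commands with assumed vectors:

```wolfram
vec1 = {1, 0, 0};
vec2 = {1, 1, 0};
VectorAngle[vec1, vec2]      (* Pi/4 *)
Normalize[vec2]              (* {1/Sqrt[2], 1/Sqrt[2], 0} *)
Projection[vec1, vec2]       (* {1/2, 1/2, 0} *)
Orthogonalize[{vec1, vec2}]  (* {{1, 0, 0}, {0, 1, 0}} *)
```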

As an exercise look up the documentation on Normalize,  Projection, and Orthogonalize and work several examples of each.

Matrix Operations

In lesson 2 we introduced both matrix multiplication and the Inverse command for inverting a matrix. Here we visualize the results of these commands with MatrixPlot.

l6_138.png

l6_139.gif

l6_140.png

l6_141.gif

As another example, we make a 4th-order square Hilbert matrix.

l6_142.png

l6_143.gif

l6_144.png

l6_145.gif

As an exercise read through the documentation on Inverse and MatrixPlot (and also ArrayPlot), and work several examples.

In lesson 2 we saw how to calculate a determinant. Here is a program from the Wolfram documentation on how to calculate the cofactor of a square matrix.

l6_146.png

We use this to find the cofactor of exp11, removing the third row and second column.

l6_147.png

l6_148.png
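Since the documentation program above appears only as an image, here is a reconstruction along the same lines; the function name and test matrix are assumptions.

```wolfram
(* Cofactor: delete row i and column j, take the determinant, apply the sign. *)
cofactor[m_?MatrixQ, {i_, j_}] := (-1)^(i + j) Det[Drop[m, {i}, {j}]]

cofactor[{{1, 2, 3}, {4, 5, 6}, {7, 8, 10}}, {3, 2}]
(* (-1)^5 Det[{{1, 3}, {4, 6}}] = 6 *)
```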

As an exercise read through the documentation of Det and work several examples.

We can also find the minors of a matrix.

l6_149.png

l6_150.png

We can also use this on symbolic matrices.

l6_151.png

l6_152.png

Here we compare it to the original matrix.

l6_153.png

l6_154.png

As an exercise read the documentation for Minors and work several examples.

We can use this program from the documentation to find the adjoint of a matrix.

l6_155.png

Here we find the adjoint of the two examples.

l6_156.png

l6_157.png

l6_158.png

l6_159.png

In the new version of Mathematica (version 13) there is a new command, Adjugate.

l6_160.png

l6_161.png

Adjugate is another name for the classical adjoint. We can compare the results with the ones from above.

l6_162.png

l6_163.png

l6_164.png

l6_165.png
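A sketch of Adjugate on an assumed matrix; it satisfies the defining property of the classical adjoint, m.Adjugate[m] = Det[m] IdentityMatrix.

```wolfram
m = {{1, 2}, {3, 4}};
Adjugate[m]      (* {{4, -2}, {-3, 1}} *)
m . Adjugate[m]  (* Det[m] times the identity: {{-2, 0}, {0, -2}} *)
```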

We can find the trace of a matrix.

l6_166.png

l6_167.png

l6_168.png

l6_169.png

As an exercise read the documentation for Tr, note particularly the program for calculating the inner product of a cone of positive definite matrices. Work several examples of your own.

We can calculate powers of matrices.

l6_170.png

l6_171.png

As an exercise read the documentation for MatrixPower, note particularly the application for composing a set of infinitesimal rotations to determine a finite rotation matrix. Work several examples of your own.

We can also calculate matrix exponentials.

l6_172.png

l6_173.png

As an exercise read the documentation for MatrixExp, note particularly the application for determining the basis for the general solution of a system of ODEs. Work several examples of your own.
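The two commands above can be sketched on small assumed matrices:

```wolfram
MatrixPower[{{1, 1}, {0, 1}}, 3]  (* {{1, 3}, {0, 1}} *)

(* The exponential of a nilpotent matrix terminates after the linear term. *)
MatrixExp[{{0, 1}, {0, 0}}]       (* {{1, 1}, {0, 1}} *)
```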

Linear Systems

A common problem is how to solve a system of linear equations. You can use the LinearSolve command to solve a matrix equation for a solution vector. There are different methods that can be used.

Say we use our matrix in exp31.

l6_174.png

l6_175.png

We construct a matrix equation.

l6_176.png

l6_177.png

We can use Solve on this system.

l6_178.png

l6_179.png

Let’s see what happens when we use LinearSolve.

l6_180.png

l6_181.png
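An illustrative comparison on an assumed 2×2 system (not the matrix in exp31): LinearSolve returns the solution vector directly, while Solve returns replacement rules.

```wolfram
m = {{1, 2}, {3, 5}};
b = {7, 11};
LinearSolve[m, b]  (* {-13, 10} *)

Solve[{x + 2 y == 7, 3 x + 5 y == 11}, {x, y}]
(* {{x -> -13, y -> 10}} *)
```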

As an exercise read through the documentation on LinearSolve, note (and try out) the neat examples of 100,000 equations (taking a little more than a tenth of a second) and then 1,000,000 equations (taking about 1.4 seconds).

As another exercise read through the section Solving Linear Systems in the tutorial Linear Algebra. Work through some examples.

Eigenvalues and Eigenvectors

We can find the eigenvalues of a matrix.

l6_182.png

l6_183.png

As an exercise read through the write-up of Eigenvalues and work up some examples.

We can also find the eigenvectors.

l6_184.png

l6_185.png

We can use this to diagonalize the matrix.

l6_186.png

l6_187.png
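A sketch of the eigensystem commands and the diagonalization step on an assumed symmetric matrix:

```wolfram
m = {{2, 1}, {1, 2}};
Eigenvalues[m]   (* {3, 1} *)
Eigenvectors[m]  (* {{1, 1}, {-1, 1}} *)

(* Diagonalize using the matrix whose columns are the eigenvectors. *)
p = Transpose[Eigenvectors[m]];
Inverse[p] . m . p  (* {{3, 0}, {0, 1}} *)
```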

As an exercise work through the documentation for Eigenvectors and work several examples.

We can find the characteristic polynomial for a matrix by using the CharacteristicPolynomial command.

l6_188.png

l6_189.png

l6_190.png

l6_191.png

Singular Values and Matrix Norms

Want to find a list of singular values for a matrix?

l6_192.png

l6_193.png
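As an illustrative call on an assumed numerical matrix:

```wolfram
SingularValueList[{{1., 2.}, {3., 4.}}]  (* {5.46499, 0.365966} *)
```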

As an exercise read through the write up for SingularValueList and work some examples.

Related to this is the matrix norm. The theory of matrix norms is extensive, and beyond the scope of this lesson. Assuming that you have a need to find the p-norm of a matrix, we use the Norm command. This is the same Norm command we used for vectors.

l6_194.png

0.833494

0.800711

We can also find the Frobenius norm; this is the square root of the trace of Aᴴ A, where Aᴴ is the conjugate transpose of A.

l6_195.png

l6_196.png
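The matrix norms can be sketched on an assumed numerical matrix:

```wolfram
m = {{1., 2.}, {3., 4.}};
Norm[m]               (* 2-norm, the largest singular value: about 5.46499 *)
Norm[m, 1]            (* maximum absolute column sum: 6. *)
Norm[m, "Frobenius"]  (* Sqrt[1 + 4 + 9 + 16], about 5.47723 *)
```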

As an exercise read through the write up for Norm and work some examples.

Other Decompositions

In lesson 2 we discussed several schemes for decomposing matrices. We will now consider additional methods. The Cholesky decomposition of a matrix A returns an upper triangular matrix U such that Uᴴ U = A, where Uᴴ is the conjugate transpose of U.

l6_197.png

l6_198.png
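A sketch on an assumed positive definite matrix, verifying the defining property:

```wolfram
m = {{2, 1}, {1, 2}};
u = CholeskyDecomposition[m]
(* {{Sqrt[2], 1/Sqrt[2]}, {0, Sqrt[3/2]}} *)
ConjugateTranspose[u] . u == m  (* True *)
```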

The Jordan decomposition takes a square matrix A and returns a similarity matrix S and the Jordan canonical form J such that A = S J S⁻¹.

l6_200.png

l6_201.png

We can place this into matrix form.

l6_202.png

l6_203.png

The first matrix is S; the second is J.
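A sketch of JordanDecomposition on an assumed matrix with a repeated eigenvalue:

```wolfram
m = {{3, 1, 0}, {0, 3, 0}, {0, 0, 2}};
{s, j} = JordanDecomposition[m];
j                        (* Jordan form with a 2x2 block for eigenvalue 3 *)
s . j . Inverse[s] == m  (* True *)
```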

The Schur decomposition takes the matrix A and returns the orthonormal matrix O and the upper triangular matrix U, such that A = O U Oᴴ.

l6_204.png

l6_205.png

l6_206.png

l6_207.png

l6_208.png
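A numerical sketch on an assumed matrix, checking that the factors reproduce A:

```wolfram
m = N[{{1, 2}, {3, 4}}];
{o, u} = SchurDecomposition[m];
Max[Abs[o . u . ConjugateTranspose[o] - m]]  (* effectively zero *)
```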

The Hessenberg decomposition takes the matrix A and returns the unitary matrix P and the upper Hessenberg matrix H such that A = P H Pᴴ.

l6_209.png

l6_210.png

l6_211.png

l6_212.png
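Similarly, on an assumed numerical matrix:

```wolfram
m = N[{{1, 2, 3}, {4, 5, 6}, {7, 8, 10}}];
{p, h} = HessenbergDecomposition[m];
Chop[h]  (* upper Hessenberg: zero below the first subdiagonal *)
Max[Abs[p . h . ConjugateTranspose[p] - m]]  (* effectively zero *)
```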

Abstract Vectors and Matrices

In lesson 3 we introduced the abstract Vectors command. There is a similar command for Matrices.

l6_213.png

l6_214.png

We can define a class of matrices symbolically.

l6_215.png

l6_216.png

We can check to see if a matrix is a member of a set of matrices.

l6_217.png

l6_218.png

l6_219.png

l6_220.png

We can also use this to simplify conditions.

l6_221.png

l6_222.png

Created with the Wolfram Language