Monitoring Numerical Solutions and Building Your Own Numerical Solutions

George E. Hrabovsky

MAST

Introduction

So far we have been using what amount to a set of black-box tools in Mathematica. It is, of course, possible to learn what algorithms are being implemented, but that does not mean you will understand how those algorithms are implemented. In this lesson we begin by exploring some tools that make using these black boxes more effective. We then learn how to construct our own finite difference schemes, and explore several methods of solving differential equations in detail: Verlet, Euler, modified Euler, Runge-Kutta, multistep, Lax-Wendroff, and Crank-Nicolson. We end the lesson with a discussion of how to add methods to the existing NDSolve framework.

Reap and Sow in Numerical Integration

Two associated commands are Reap and Sow.

l12_1.png

l12_2.png

l12_3.png

l12_4.png

For example, we can plot the behavior of an interesting function.

l12_5.png

l12_6.png

l12_7.gif

We can numerically integrate this and visualize the points used to evaluate the numerical quadrature. For this we use the option EvaluationMonitor.

l12_8.png

l12_9.png

To put this another way, it will return a result any time a numerical evaluation is triggered.

l12_10.png

l12_11.gif

On its own this plot is a bit mysterious; the key to reading it is that the increasing density of points demonstrates convergence. Every time a numerical evaluation occurs, it sows a result. These sown results are collected into a list by the reap and then displayed. Here is a plot that uses the same information but displays it along the function curve, so we can better understand what is happening.

l12_12.png

l12_13.gif

In the places where the curve is harder to integrate, more points are considered; in other words, the method adapts to the difficulty of the integration.
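The Sow/Reap pattern is easy to mimic in any language. As a sketch (in Python rather than the lesson's Mathematica, and with function names of my own invention), here is a hand-rolled adaptive Simpson integrator that appends every evaluation point to a list, playing the role of Sow inside EvaluationMonitor; collecting the list afterward is the Reap.

```python
import math

def adaptive_simpson(f, a, b, tol=1e-8, log=None):
    """Adaptive Simpson quadrature; every evaluation point is appended
    to `log`, mimicking Sow inside an EvaluationMonitor."""
    def eval_f(x):
        if log is not None:
            log.append(x)          # "Sow" the evaluation point
        return f(x)

    def simpson(fa, fm, fb, a, b):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = eval_f(lm), eval_f(rm)
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        if abs(left + right - whole) < 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, m, fa, flm, fm, left, tol / 2) +
                recurse(m, b, fm, frm, fb, right, tol / 2))

    fa, fb = eval_f(a), eval_f(b)
    m = 0.5 * (a + b)
    fm = eval_f(m)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol)

# "Reap": collect the points used while integrating an oscillatory function.
points = []
value = adaptive_simpson(lambda x: math.sin(1.0 / x), 0.1, 2.0, 1e-8, points)
```

Plotting the collected points reproduces the behavior seen above: the log is densest where the integrand oscillates fastest, near the left end of the interval.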

Using StepMonitor in NDSolve

So, how do we know that a solution is progressing during an NDSolve evaluation?  We will use StepMonitor for this.

l12_14.png

l12_15.png

We begin by creating a plot that updates automatically. This will be our monitor.

l12_16.png

l12_17.png

Right now this is empty. We will fill it shortly. Watch the plot change as the solution converges to a result.

l12_18.png

l12_19.gif

Using Monitor and Progress Indicator

We can make an additional monitor using the Monitor and ProgressIndicator commands.

l12_20.png

l12_21.png

l12_22.png

l12_23.png

Here we make another plot that updates, to track the progress of our NDSolve.

l12_24.png

l12_25.png

Then we have our solution with the new monitor.

l12_26.png

l12_27.gif

Using NumericQ

We can make our own function to solve the previous equation.

l12_28.png

We can run this.

l12_29.png

l12_30.gif

There may be times when such a function complains that it needs a numerical value even though you are passing one. To avoid this we add the following statement (and this pattern can be used for any function requiring numerical input).

l12_31.png

We test this.

l12_32.png

l12_33.gif

Now we can see where it becomes useful.

l12_34.png

l12_35.png

l12_36.png

l12_37.png

l12_38.png

Using RSolve for Differences

The traditional way of solving ODEs and PDEs on a computer is to convert the differential equation into a difference equation. This effectively turns the analytic task of integrating the ODE or PDE into the task of solving a matrix-valued difference equation. So how do we set up such a difference equation in Mathematica? Just as there are commands for solving equations and differential equations, there is one for difference equations: RSolve.

Let’s take a look at a linear difference equation with constant coefficients.

l12_39.png

(12.1)

How do we solve this?

l12_40.png

l12_41.png

We can also use this to solve an IVP. Say we have the equation above, but with the initial conditions y(0)=1, y(1)=5, and y(2)=-1.

l12_42.png

l12_43.png

We can plot the real part.

l12_44.png

l12_45.gif

The imaginary part is very small.

l12_46.png

l12_47.gif

We can investigate what happens if time continues.

l12_48.png

l12_49.gif

l12_50.png

l12_51.gif

Let’s take a case that we know the form and see how to convert it to the finite-difference scheme. Take the ODE

l12_52.png

(12.2)

with the initial condition that y(0)=1.

l12_53.png

l12_54.png

We try DSolve.

l12_55.png

l12_56.png

We can plot this.

l12_57.png

l12_58.gif

The derivatives become difference quotients. For a forward difference representing a derivative we have.

l12_59.png

l12_60.png

For a second derivative, we write this.

l12_61.png

l12_62.png

We can write the backward differences, too.

l12_63.png

l12_64.png

and the second derivative is then.

l12_65.png

l12_66.png

We can also use central differences (this is good for spatial derivatives).

l12_67.png

l12_68.png

and the second derivative is then.

l12_69.png

l12_70.png
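The difference quotients above translate directly into code. Here is a sketch in Python (my own names, not the lesson's code) of the forward, backward, and central quotients; for a smooth function the central quotient is second-order accurate, which is why it wins for spatial derivatives.

```python
import math

def forward_diff(f, x, h):
    # (f(x+h) - f(x)) / h  ~  f'(x) + O(h)
    return (f(x + h) - f(x)) / h

def backward_diff(f, x, h):
    # (f(x) - f(x-h)) / h  ~  f'(x) + O(h)
    return (f(x) - f(x - h)) / h

def central_diff(f, x, h):
    # (f(x+h) - f(x-h)) / (2h)  ~  f'(x) + O(h^2)
    return (f(x + h) - f(x - h)) / (2.0 * h)

def central_second_diff(f, x, h):
    # (f(x+h) - 2 f(x) + f(x-h)) / h^2  ~  f''(x) + O(h^2)
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

h, x = 1e-4, 1.0
d1 = central_diff(math.sin, x, h)        # should be close to cos(1)
d2 = central_second_diff(math.sin, x, h) # should be close to -sin(1)
```

Comparing the errors against the exact derivatives of sin shows the central quotient beating the one-sided ones by several orders of magnitude at the same h.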

To discretize our differential equation we write,

l12_71.png

l12_72.png

We solve this for a value of h determined by the user.

l12_73.png

l12_74.png

We can turn this into a function of h.

l12_75.png

We can plot this.

l12_76.png

l12_77.gif

We can compare them.

l12_78.png

l12_79.gif

Our Own Explorations

Those of you who have programmed in FORTRAN, C++, Java, and the like are familiar with procedural programming, as that is the style those languages encourage: procedures are executed in looping structures.

We begin by converting the diffusion equation to its finite-difference equivalent,

l12_80.png

(12.3)

Here we have a ring divided into X cells for N time steps. Then we specify a given cell’s density by considering the ith cell for the nth time step. So we say that i extends from 0 to X, and the time extends from 0 to N. For any specific time step we will have,

l12_81.png

(12.4)

We can determine the stability condition to be,

l12_82.png

(12.5)

If this ratio is more than 1/2 the solution will become unstable in time. We write,

l12_83.png

(12.6)

So how do we implement this in Mathematica?

If we have some set of initial conditions ρ0 for ρ, then we can use the above formula to get the values of ρ at the next time step; we can then relabel these as the new ρ0 values and continue the calculation. We have to establish the initial conditions, and I will use the procedural iterator Table[function, {iterator}] to produce this result.

l12_84.png

Let’s say we choose the option for positions 0 to 6.

l12_85.png

We then develop the current time step as the initial conditions. Here I specify the part of the list considered by the convention list[[part of list]].

l12_86.png

Here we have the periodic boundary conditions.

l12_87.png

l12_88.png

This completes the initialization step: we have established the initial condition.

We now establish a single time step by rewriting ρ[x] to include the boundary conditions. I use If[condition, result if met, result if not met] for this.

l12_89.png

We test this.

l12_90.png

l12_91.png

l12_92.png

l12_93.png

l12_94.png

l12_95.png

Then we can establish a time step; here again I use Table to generate the list.

l12_96.png

We check this.

l12_97.png

l12_98.png

We now construct the solution in time, using Table.

l12_99.png

This completes the model development.
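The model just built can be summarized in a short procedural sketch. The following Python (my own translation of the scheme, not the lesson's Mathematica code) implements the explicit update (12.6) on a periodic ring with nested loops, exactly in the looping style described above.

```python
def diffusion_ring(rho0, D, n_steps):
    """Explicit (FTCS) diffusion on a periodic ring, procedural style.
    Stable only when the ratio D = d*dt/dx^2 satisfies the condition (12.5)."""
    rho = list(rho0)
    X = len(rho)
    for _ in range(n_steps):
        new = [0.0] * X
        for i in range(X):
            left = rho[(i - 1) % X]    # periodic boundary conditions
            right = rho[(i + 1) % X]
            new[i] = rho[i] + D * (right - 2.0 * rho[i] + left)
        rho = new
    return rho

# A spike on a 7-cell ring, matching the text's positions 0 to 6.
rho0 = [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
out = diffusion_ring(rho0, 0.25, 50)
```

The scheme conserves the total density exactly (the neighbor terms cancel in the sum around the ring), and with a stable D the spike relaxes toward the uniform state.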

Now we need to display the results. I make a test run.

l12_100.png

l12_101.png

Here is the plot of these results.

l12_102.png

l12_103.gif

This distorts and gets noisy very fast. Of course, we violated the stability condition with our choice of D, so let's try it again; we know from (12.5) that D ≤ 1/4. We need to reinitialize ρ0.

l12_104.png

Then we construct the run using the correct value for D.

l12_105.png

l12_106.png

This takes a long time to run and produces the result.

l12_107.png

l12_108.gif

Compare this with the solution to the diffusion equation from lesson 10.

l12_109.gif

That looks better: it resembles the result we got from NDSolve, with some additional noise. We could use a finer grid and a better difference scheme if we wanted greater precision and accuracy. That is, however, the subject for another talk.

Another style of programming uses f(x) as the application of f to x. So we start by establishing a function in the traditional way, using the Function[body][parameter replacement] command. Here we use the notation # to represent a formal parameter.

l12_110.png

l12_111.png

We could write this without the Function command, but then we need an ampersand following the function body so that Mathematica knows where the function ends.

l12_112.png

l12_113.png

We can also write these for multiple variables.

l12_114.png

l12_115.png

Or we can write it without the Function[] command.

l12_116.png

l12_117.png

We can apply a function to a list by the use of the Map[function,{list}] command.

l12_118.png

l12_119.png

We could also use the command line f/@.

l12_120.png

l12_121.png

Say we want to program the diffusion equation in a functional style. We begin by establishing the list of cells in our ring. We employ Range[max element] to produce a list from 1 to X.

l12_122.png

We decide we want to go from 1 to 6.

l12_123.png

l12_124.png

We now establish our initial conditions.

l12_125.png

For our choice of initial conditions we have 1 everywhere, so we just tell it to divide each element by itself.

l12_126.png

l12_127.png

Now we specify the diffusion equation. Since we have periodic boundary conditions and a list we can just rotate the list right and left.

l12_128.png

We can test this.

l12_129.png

l12_130.png

We now need to impose our boundary conditions. Here we are replacing the end parts with the boundary conditions using ReplacePart[list,{parts and replacements}].

l12_131.png

We will test these.

l12_132.png

l12_133.png

So we construct a single step.

l12_134.png

Let’s try it.

l12_135.png

l12_136.png

We can construct a table to produce our result.

l12_137.png

We can try this.

l12_138.png

l12_139.png

We can plot this to see how we did.

l12_140.png

l12_141.gif

We have gotten the same result as for procedural programming. It also took almost no time.
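The trick that makes the functional version so short is replacing indexed neighbors with whole-list rotations. Here is a sketch of the same idea in Python (my own code; rotate plays the role of Mathematica's RotateLeft/RotateRight):

```python
def rotate(lst, k):
    """Rotate a list left by k places (k may be negative),
    like Mathematica's RotateLeft."""
    k %= len(lst)
    return lst[k:] + lst[:k]

def diffusion_step(rho, D):
    # Periodic neighbours come from whole-list rotations, no index loops.
    left = rotate(rho, -1)   # each cell's left neighbour
    right = rotate(rho, 1)   # each cell's right neighbour
    return [r + D * (rr - 2.0 * r + ll)
            for r, ll, rr in zip(rho, left, right)]

def diffusion_run(rho0, D, n_steps):
    rho = list(rho0)
    for _ in range(n_steps):
        rho = diffusion_step(rho, D)
    return rho
```

Because the rotations handle the periodic boundary automatically, no If statements or ReplacePart-style boundary patches are needed, which is much of why this style runs so quickly.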

Another style of programming uses the symbolic nature of Mathematica to transform expressions. This is called rule-based programming, and it uses the notion of rewrite rules, so called because you rewrite an expression. This is a very different style of programming. Here we take an expression,

l12_142.png

l12_143.png

l12_144.png

and we apply a transformation rule to it,

l12_145.png

l12_146.png

l12_147.png

l12_148.png

Using our example above, we begin with the initial conditions.

l12_149.png

Then we execute it.

l12_150.png

l12_151.png

Then we have the diffusion equation. Here the double underscore __ following the ρ0 signifies a sequence of one or more arguments (BlankSequence).

l12_152.png

For our example

l12_153.png

l12_154.png

we can include the boundary conditions.

l12_155.png

l12_156.png

l12_157.png

We can test this again,

l12_158.png

l12_159.png

We can then produce our model.

l12_160.png

l12_161.png

l12_162.png

l12_163.gif

The Verlet Method

Here we implement the Verlet method for solving the Newtonian equations of motion. If we have

l12_164.png

(12.7)

we can assume that the solution will be l12_165.png. We can expand this as a Taylor series for some small time interval Δt.

l12_166.png

(12.8)

and

l12_167.png

(12.9)

If we know l12_168.png and l12_169.png, and if we calculate l12_170.png, then we can add (12.8) and (12.9) to get,

l12_171.png

(12.10)

The velocity is then the difference.

l12_172.png

(12.11)

So we get a set of positions, each identified by (12.10), assuming we know the force. This allows us to supply the acceleration as an input. If we couple that with the initial position and initial velocity we are almost there. One subtle problem remains: since we want l12_173.png, this is the same thing as finding l12_174.png. Rewriting (12.11) we get,

l12_175.png

(12.12)

All we need to do is calculate l12_176.png. This allows us to find l12_177.png for each subsequent time step. If we supply the acceleration function (the right-hand side divided by mass, as a function of position and time), the initial position and velocity, the size of the time step, and the number of steps, we can solve the equations of motion in general.

l12_178.png

This program was presented (with some alterations) in the book by Joel Franklin (2013), Computational Methods for Physics, Cambridge University Press. I want to like this book, and it covers a lot of ground. The biggest problem with it is that the author is obviously a FORTRAN/C programmer who thinks in terms of procedural programming. There is nothing wrong with that, except that it is almost never the best way to program in Mathematica. Mathematica (and the Wolfram Language in general) allows you to program in any paradigm you like (you can even shift from one to another in the same line of code without trouble). As an exercise, write this again using either functional or rule-based style programming.
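As a language-neutral reference point for that exercise, here is a sketch of position Verlet in Python (my own code, following (12.10) and the starter step (12.12), not Franklin's program):

```python
import math

def verlet(accel, x0, v0, dt, n_steps):
    """Position Verlet: x[n+1] = 2 x[n] - x[n-1] + a(x[n], t[n]) dt^2.
    The first step uses x[1] = x[0] + v0 dt + a dt^2 / 2, as in (12.12)."""
    xs = [x0, x0 + v0 * dt + 0.5 * accel(x0, 0.0) * dt**2]
    for n in range(1, n_steps):
        t = n * dt
        xs.append(2.0 * xs[-1] - xs[-2] + accel(xs[-1], t) * dt**2)
    return xs

# Harmonic oscillator a = -x with x(0) = 1, v(0) = 0; exact solution cos(t).
xs = verlet(lambda x, t: -x, 1.0, 0.0, 0.01, 1000)
```

Note that Verlet never uses the velocity inside the loop; it can be recovered afterward from the centered difference (12.11), and the method's good long-time energy behavior is why it is the workhorse of molecular dynamics.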

Euler’s Method

If we have a differential equation,

l12_179.png

(12.13)

with the initial condition

l12_180.png

(12.14)

then we can approximate the solution by,

l12_181.png

(12.15)

and

l12_182.png

(12.16)

We can write this in Mathematica.

l12_183.png

We can test this for a simple quadratic function.

l12_184.png

We can then make a set of data based on applying the Euler method to this function.

l12_185.png

We can plot this.

l12_186.png

l12_187.gif

We can compare this to the result of DSolve.

l12_188.png

l12_189.png

We can plot this.

l12_190.png

l12_191.gif

This code is based on the web page

http://homepage.cem.itesm.mx/jose.luis.gomez/data/mathematica/metodos.pdf .
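The update (12.15)-(12.16) is only a few lines in any language. Here is a sketch in Python (the test problem is my choice, not the lesson's):

```python
import math

def euler(f, t0, y0, h, n_steps):
    """Euler's method, (12.15)-(12.16): y[n+1] = y[n] + h f(t[n], y[n])."""
    ts, ys = [t0], [y0]
    for _ in range(n_steps):
        ys.append(ys[-1] + h * f(ts[-1], ys[-1]))
        ts.append(ts[-1] + h)
    return ts, ys

# Test problem: y' = -2 t y, y(0) = 1, with exact solution exp(-t^2).
ts, ys = euler(lambda t, y: -2.0 * t * y, 0.0, 1.0, 0.001, 1000)
euler_err = abs(ys[-1] - math.exp(-1.0))
```

The global error shrinks only linearly with h, which is why Euler is a teaching method rather than a production one.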

Improved Euler Method

Euler’s method is quite crude. There is a more accurate version, sometimes called the improved Euler method and sometimes Heun’s method. We can borrow some notation from the last section and write,

l12_192.png

(12.17)

We can write this in Mathematica.

l12_193.png

This was taken from the website https://engcourses-uofa.ca/books/numericalanalysis/ordinary-differential-equations/solution-methods-for-ivps/heuns-method/ and suffers from the same procedural bias. Make no mistake, procedural programs work in Mathematica; they just tend to run much more slowly than the other styles of programming.

As an exercise, write this in a more efficient form.
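For comparison while you work the exercise, here is a compact sketch of Heun's step (12.17) in Python (again my own code, on my own test problem):

```python
import math

def heun(f, t0, y0, h, n_steps):
    """Improved Euler (Heun): average the slope at the start of the step
    with the slope at the Euler-predicted endpoint, as in (12.17)."""
    ts, ys = [t0], [y0]
    for _ in range(n_steps):
        t, y = ts[-1], ys[-1]
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)     # slope at the predicted point
        ys.append(y + 0.5 * h * (k1 + k2))
        ts.append(t + h)
    return ts, ys

# Test problem: y' = -2 t y, exact solution exp(-t^2).
_, ys = heun(lambda t, y: -2.0 * t * y, 0.0, 1.0, 0.01, 100)
heun_err = abs(ys[-1] - math.exp(-1.0))
```

With ten times fewer steps than the Euler example, the averaged slope still gives a noticeably smaller error, reflecting the jump from first- to second-order accuracy.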

Runge-Kutta

A more advanced, but also demanding, solution method is the Runge-Kutta method. There are many different RK methods. We will look at the classic RK, or RK4 method. The method uses the equations,

l12_194.png

(12.18)

l12_195.png

(12.19)

l12_196.png

(12.20)

l12_197.png

(12.21)

l12_198.png

(12.22)

l12_199.png

(12.23)

We can make a single RK step.

l12_200.png

We can then apply the NestList command to perform the solution starting at l12_201.png and then working through the solution.

l12_202.png

This code is adapted from a discussion on the Mathematica Stack Exchange ( https://mathematica.stackexchange.com/questions/220165/runge-kutta-implemented-on-mathematica ). This is a great site for uncovering good style and addressing problems.
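The stages (12.18)-(12.23) translate almost verbatim into any language. Here is a sketch in Python (my own code, not the Stack Exchange version):

```python
import math

def rk4_step(f, t, y, h):
    """One classical RK4 step, following (12.18)-(12.23)."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

def rk4(f, t0, y0, h, n_steps):
    """Iterate rk4_step, the analogue of NestList in the lesson's code."""
    ys = [y0]
    t = t0
    for _ in range(n_steps):
        ys.append(rk4_step(f, t, ys[-1], h))
        t += h
    return ys

# y' = y, y(0) = 1 over [0, 1]; the exact answer is e.
ys = rk4(lambda t, y: y, 0.0, 1.0, 0.1, 10)
rk4_err = abs(ys[-1] - math.e)
```

Even with the coarse step h = 0.1, the fourth-order accuracy puts the error far below anything Euler or Heun could manage at that step size.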

Multistep Methods

Adams Method

There are many other methods that require multiple steps in a manner similar to the RK4 method. One that is quite common in such courses is the Adams method, named for the English mathematician John Couch Adams (1819-1892). Using this method we rewrite our differential equation,

l12_203.png

(12.24)

we then approximate f(t, y(t)) by the k-degree polynomial l12_204.png. So, if l12_205.png approximates our slope function, then we require that for l12_206.png

l12_207.png

(12.25)

and for l12_208.png

l12_209.png

(12.26)

We solve for a and b.

l12_210.png

(12.27)

l12_211.png

(12.28)

So we have developed a polynomial to approximate our derivative. We can think of this as a predictor step. If we replace our slope function with this polynomial and evaluate the integral directly, we get our second step,

l12_212.png

(12.29)

This result is called the first-order Adams-Bashforth formula, named for Adams and the English mathematician Francis Bashforth (1819-1912).

Here is an implementation from the Mathematica Stack Exchange due to a collaboration between user MarcoB and Alex Trounev.

l12_213.png
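The first-order formula (12.29) coincides with Euler's method, so to show the family doing real work, here is a sketch (in Python, my own code) of the common two-step, second-order member, in which the slope polynomial passes through the last two slope values; a single Heun step supplies the extra starting value a multistep method needs.

```python
import math

def adams_bashforth2(f, t0, y0, h, n_steps):
    """Two-step Adams-Bashforth: fit the slope polynomial through the last
    two slopes and integrate it, giving
        y[n+1] = y[n] + h/2 * (3 f[n] - f[n-1])."""
    k1 = f(t0, y0)
    y1 = y0 + 0.5 * h * (k1 + f(t0 + h, y0 + h * k1))  # Heun starter step
    ts, ys, fs = [t0, t0 + h], [y0, y1], [k1, f(t0 + h, y1)]
    for _ in range(n_steps - 1):
        ys.append(ys[-1] + 0.5 * h * (3.0 * fs[-1] - fs[-2]))
        ts.append(ts[-1] + h)
        fs.append(f(ts[-1], ys[-1]))
    return ts, ys

# y' = y on [0, 1]; the exact answer is e.
_, ys = adams_bashforth2(lambda t, y: y, 0.0, 1.0, 0.01, 100)
ab2_err = abs(ys[-1] - math.e)
```

Unlike RK4, each step here costs only one new slope evaluation; the price is the bookkeeping of stored slopes and the need for a starter method.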

The Backward Differentiation Method

In this method we also use a polynomial, but now to approximate the solution of the IVP instead of the derivative. Our predictor step proceeds along the same lines as (12.29),

l12_214.png

(12.30)

Here the difference is that the derivative l12_215.png, so we make an additional requirement,

l12_216.png

(12.31)

We can then find (by subtracting (12.29) from (12.31)) that

l12_217.png

(12.32)

We then get the first-order backward differentiation formula.

l12_218.png

(12.33)

This is due to the work of the English-American mathematician C. William Gear (born 1935); the work was done in the 1970s, when Gear was interested in stiff equations.
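Stiffness is exactly where the implicit formula (12.33) earns its keep. For the linear test problem y' = λy it can be solved for y[n+1] in closed form, which makes a clean comparison possible; here is a sketch in Python (my own code):

```python
def backward_euler_linear(lam, y0, h, n_steps):
    """First-order backward differentiation (backward Euler) for y' = lam*y:
    y[n+1] = y[n] + h*lam*y[n+1]  =>  y[n+1] = y[n] / (1 - h*lam)."""
    ys = [y0]
    for _ in range(n_steps):
        ys.append(ys[-1] / (1.0 - h * lam))
    return ys

def forward_euler_linear(lam, y0, h, n_steps):
    """Explicit Euler on the same problem: y[n+1] = (1 + h*lam) y[n]."""
    ys = [y0]
    for _ in range(n_steps):
        ys.append(ys[-1] * (1.0 + h * lam))
    return ys

# A stiff test: y' = -50 y with step h = 0.1, so h*lam = -5.
implicit = backward_euler_linear(-50.0, 1.0, 0.1, 20)
explicit = forward_euler_linear(-50.0, 1.0, 0.1, 20)
```

At this step size the explicit factor is 1 + hλ = -4, so forward Euler oscillates and blows up, while the implicit factor 1/(1 - hλ) = 1/6 damps the solution toward zero as the true solution does.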

The Predictor-Corrector Method

We can use the Adams method as a predictor and then apply a backward formula, arriving at the predictor-corrector formulas,

l12_219.png

(12.34)

l12_220.png

(12.35)

This method has been attributed to the Finnish mathematician Evert Johannes Nyström (1895-1960).
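The two formulas combine into a simple loop. The sketch below (in Python, my own code) is one plausible reading of (12.34)-(12.35), not a transcription: predict with the first-order Adams-Bashforth (Euler) step, then correct with the backward formula evaluated at the predicted point.

```python
import math

def predictor_corrector(f, t0, y0, h, n_steps):
    """Predict with the first-order Adams-Bashforth (Euler) step, then
    correct with the backward formula evaluated at the predicted point."""
    ts, ys = [t0], [y0]
    for _ in range(n_steps):
        t, y = ts[-1], ys[-1]
        y_pred = y + h * f(t, y)              # predictor (Adams-Bashforth)
        ys.append(y + h * f(t + h, y_pred))   # corrector (backward formula)
        ts.append(t + h)
    return ts, ys

# Test problem: y' = -2 t y, exact solution exp(-t^2).
_, ys = predictor_corrector(lambda t, y: -2.0 * t * y, 0.0, 1.0, 0.001, 1000)
pc_err = abs(ys[-1] - math.exp(-1.0))
```

The appeal of the pattern is that the predictor turns the implicit corrector into an explicit evaluation, so no equation solving is needed at each step.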

Lax-Wendroff

What follows is due to the Hungarian-born American mathematician Peter Lax (b. 1926) and the American mathematician Burton Wendroff (b. 1930); their work was done in 1960. Suppose we have the advection equation,

l12_221.png

(12.36)

Lax and Wendroff observed that we can apply a difference formula to the right-hand side. We can rewrite the right-hand-side partial derivative,

l12_222.png

(12.37)

We can then add a term,

l12_223.png

(12.38)

We apply differences to each term and get the Lax-Wendroff scheme for the advection equation,

l12_224.png

(12.39)

I have found no good implementations of this method, nor have I ever used it in my own work. I did find a very old and clunky implementation in a very good book by Victor G. Ganzha and Evgenii V. Vorozhtsov (1996), Numerical Solutions for Partial Differential Equations: Problem Solving Using Mathematica, CRC Press (a truly great book for people who want to get down and dirty in the numerical methods). That implementation is step by step and very pedagogical. Rather than reproduce it, I assign it as a project idea. I guarantee that if you complete such a task, people will use it.
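To make the project concrete, here is a minimal sketch of the scheme in Python (my own code, on a periodic grid with a Gaussian initial profile; the standard form of the Lax-Wendroff update for constant-coefficient advection, with Courant number c = a Δt/Δx):

```python
import math

def lax_wendroff_step(u, c):
    """One Lax-Wendroff step for u_t + a u_x = 0 on a periodic grid:
    u[i] <- u[i] - c/2 (u[i+1]-u[i-1]) + c^2/2 (u[i+1]-2u[i]+u[i-1]),
    with Courant number c = a*dt/dx."""
    X = len(u)
    return [u[i]
            - 0.5 * c * (u[(i + 1) % X] - u[(i - 1) % X])
            + 0.5 * c * c * (u[(i + 1) % X] - 2.0 * u[i] + u[(i - 1) % X])
            for i in range(X)]

# Advect a smooth bump once around the periodic domain with c = 0.5.
X = 100
u = [math.exp(-0.5 * ((i - 50) / 5.0) ** 2) for i in range(X)]
u_end = list(u)
for _ in range(2 * X):   # c = 0.5, so 2X steps carry the bump X cells
    u_end = lax_wendroff_step(u_end, 0.5)
```

After one full circuit the bump should return essentially to its starting position and shape; the small trailing wiggles and slight peak loss are the scheme's characteristic dispersion and dissipation.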

Crank-Nicolson

The Crank-Nicolson method was developed in 1947 by the English mathematical physicist John Crank (1916-2006) and the English mathematician and physicist Phyllis Nicolson (1917-1968). If we can write a PDE in the form,

l12_225.png

(12.40)

then we can rewrite it

l12_226.png

(12.41)

Here we have a solution of the one-dimensional time-dependent Schrödinger equation for a position-dependent potential.

l12_227.png

We can see the results of this.

l12_228.gif

l12_229.png

l12_230.gif

For a time-dependent potential we have this.

l12_231.png

Here is the result of this calculation.

l12_232.gif

l12_233.png

l12_234.gif

This implementation was found on the Mathematica Stack Exchange as a result of a collaboration between the users xslittlegrass, Leonid Shifrin, and Jens.
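The Schrödinger code above lives in the figures, so as a simpler illustration of the same machinery, here is a Crank-Nicolson sketch in Python (my own code) for the heat equation u_t = D u_xx with zero boundary values, using the Thomas algorithm for the tridiagonal solve each step.

```python
import math

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def crank_nicolson_heat(u, r, n_steps):
    """Crank-Nicolson for u_t = D u_xx with u = 0 at both ends; r = D*dt/dx^2.
    Solves (I - r/2 A) u_new = (I + r/2 A) u each step, A = tridiag(1,-2,1).
    Unconditionally stable, so r may exceed the explicit limit of 1/2."""
    n = len(u)
    a = [-0.5 * r] * n      # sub-diagonal of (I - r/2 A)
    b = [1.0 + r] * n       # main diagonal
    c = [-0.5 * r] * n      # super-diagonal
    for _ in range(n_steps):
        rhs = [u[i] + 0.5 * r * ((u[i - 1] if i > 0 else 0.0)
                                 - 2.0 * u[i]
                                 + (u[i + 1] if i < n - 1 else 0.0))
               for i in range(n)]
        u = thomas_solve(a, b, c, rhs)
    return u

# A sine mode on 49 interior points; it should decay smoothly, not blow up,
# even with r = 2, four times the explicit stability limit.
n = 49
u0 = [math.sin(math.pi * (i + 1) / (n + 1)) for i in range(n)]
u_end = crank_nicolson_heat(u0, 2.0, 50)
```

Because the sine mode is an exact eigenvector of the discrete operator, the profile simply shrinks by a constant factor each step, which is easy to check numerically.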

Incorporating Methods into NDSolve

One nice feature is that you can write a method and incorporate it into the existing NDSolve structure. If we adapt our RK4 method given above, we can write our steps in this way.

l12_235.png

We can specify all of the inputs and outputs for our step function.

l12_236.gif

Of course, we can also use the actual names for these instead.

l12_237.gif

We can now specify the method's difference order, so that NDSolve knows the order of the method.

l12_238.png

We then define the step mode; in this case we make it fixed, since the method has no step control.

l12_239.png

So that is all we need. We try it out on the oscillator equation.

l12_240.png

l12_241.gif

We can plot this.

l12_242.png

l12_243.gif

Here is a second example. This time we couple our new method with a step control method called DoubleStep.

l12_244.png

l12_245.gif

Here is the resulting plot.

l12_246.png

l12_247.gif

We can compare the error.

l12_248.gif

l12_249.gif

These examples are drawn, with minor variations, from the section on NDSolve Plugins in the tutorial Advanced Numerical Differential Equation Solving in Mathematica.
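The idea behind the DoubleStep wrapper is worth seeing in isolation: compare one step of size h against two steps of size h/2, and use the difference (scaled by the Richardson factor 2^p - 1 for a method of order p) as a local error estimate. Here is a sketch in Python (my own code, wrapped around RK4):

```python
import math

def rk4_step(f, t, y, h):
    """One classical RK4 step."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

def double_step_error(f, t, y, h, order=4):
    """Estimate the local error of a step of size h by comparing it with
    two steps of size h/2 -- the idea behind the DoubleStep wrapper."""
    big = rk4_step(f, t, y, h)
    half = rk4_step(f, t + 0.5 * h, rk4_step(f, t, y, 0.5 * h), 0.5 * h)
    err = abs(half - big) / (2**order - 1)   # Richardson error estimate
    return half, err                         # keep the more accurate value

# Estimate the error of one RK4 step on y' = y starting at y(0) = 1.
y_new, err = double_step_error(lambda t, y: y, 0.0, 1.0, 0.1)
true_err = abs(y_new - math.exp(0.1))
```

A step controller then accepts or shrinks h by comparing err to a tolerance; this is how a fixed-step method like our RK4 acquires adaptive behavior when wrapped in DoubleStep.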

Created with the Wolfram Language