Project 1 Solution


Numerical Methods for ODEs

These are simple methods for approximating the solution of ordinary differential equations.

They all have drawbacks in practice.

Do not use these methods for real work.

Use whatever ODE solver your software has built in, such as ode_solver in Sage or ode45 in MATLAB.
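As a quick illustration, here is a minimal sketch of calling Sage's ode_solver on the toy problem y' = -y, y(0) = 1 (the variable names are just for illustration; the solver is used in full, with comments, near the end of this worksheet):

solver = ode_solver()
# The solver expects a system, so wrap the scalar right-hand side in a 1-d list
solver.function = lambda t, y: [-y[0]]
# Solve from t = 0 to t = 5, starting from y(0) = 1, returning 50 points
solver.ode_solve(y_0 = [1], t_span = [0, 5], num_points = 50)
# Convert the solution back to a list of (t, y) pairs
approx = [(t, y[0]) for (t, y) in solver.solution]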

Don't change the following!

Euler's Method

def Euler(ODE, y0, tStart, tEnd, tStep):
    """Euler's method for the ODE right-hand side ODE(t, y),
    from time tStart to tEnd, with steps of size tStep, starting at y0."""
    y = y0
    solution = [(tStart, y)]
    for t in srange(tStart, tEnd, tStep):
        # N() forces Sage to use a decimal approximation
        # instead of pushing around algebraic objects like exp(0.2)
        y = N(y + tStep * ODE(t, y))
        solution.append((N(t + tStep), y))
    return solution
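Writing h for tStep and f(t, y) for the right-hand side ODE(t, y), each pass through the loop above performs one Euler step:

y_{n+1} = y_n + h \, f(t_n, y_n), \qquad t_{n+1} = t_n + h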
       

Improved Euler's Method

def ImprovedEuler(ODE, y0, tStart, tEnd, tStep):
    """Improved Euler's method for the ODE right-hand side ODE(t, y),
    from time tStart to tEnd, with steps of size tStep, starting at y0."""
    y = y0
    solution = [(tStart, y)]
    for t in srange(tStart, tEnd, tStep):
        # N() forces Sage to use a decimal approximation
        # instead of pushing around algebraic objects like exp(0.2)
        k1 = N(ODE(t, y))
        k2 = N(ODE(t + tStep, y + tStep * k1))
        y = y + tStep * (k1 + k2) / 2
        solution.append((N(t + tStep), y))
    return solution
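In the same notation (h for tStep, f for the right-hand side), each improved Euler (Heun) step above averages the slopes at the two ends of the interval:

k_1 = f(t_n, y_n), \qquad k_2 = f(t_n + h, \; y_n + h \, k_1), \qquad y_{n+1} = y_n + \frac{h}{2} \left( k_1 + k_2 \right)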
       

Runge-Kutta (4th Order) Method

def RungeKutta(ODE, y0, tStart, tEnd, tStep):
    """Runge-Kutta method (4th order) for the ODE right-hand side ODE(t, y),
    from time tStart to tEnd, with steps of size tStep, starting at y0."""
    y = y0
    solution = [(tStart, y)]
    for t in srange(tStart, tEnd, tStep):
        # N() forces Sage to use a decimal approximation
        # instead of pushing around algebraic objects like exp(0.2)
        k1 = N(ODE(t, y))
        k2 = N(ODE(t + tStep / 2, y + tStep / 2 * k1))
        k3 = N(ODE(t + tStep / 2, y + tStep / 2 * k2))
        k4 = N(ODE(t + tStep, y + tStep * k3))
        y = y + tStep * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        solution.append((N(t + tStep), y))
    return solution
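Again writing h for tStep and f for the right-hand side, each step above is the classical fourth-order Runge-Kutta update:

k_1 = f(t_n, y_n), \qquad k_2 = f\left(t_n + \tfrac{h}{2}, \; y_n + \tfrac{h}{2} k_1\right), \qquad k_3 = f\left(t_n + \tfrac{h}{2}, \; y_n + \tfrac{h}{2} k_2\right), \qquad k_4 = f(t_n + h, \; y_n + h \, k_3)

y_{n+1} = y_n + \frac{h}{6} \left( k_1 + 2 k_2 + 2 k_3 + k_4 \right)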
       

A function to find the best linear fit

def findBestLine(x, y):
    """Find the best linear fit to data using least squares."""
    import numpy
    import numpy.linalg
    # Design matrix: a column of ones (intercept) and a column of x values
    A = numpy.asarray(list(zip([1] * len(x), x)), dtype=float)
    b = numpy.asarray(y, dtype=float)
    # coeffs[0] is the intercept, coeffs[1] is the slope
    coeffs = numpy.linalg.lstsq(A, b, rcond=None)[0]
    print("Slope = %g, Intercept = %g" % (coeffs[1], coeffs[0]))
    return lambda z: coeffs[0] + coeffs[1] * z
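A small usage sketch (the data points here are invented purely for illustration): findBestLine prints the fitted coefficients and returns the fitted line as a callable that can be evaluated or plotted.

bestFit = findBestLine([0, 1, 2, 3], [0.1, 2.1, 3.9, 6.2])   # prints the slope and intercept
bestFit(1.5)                                                 # evaluate the fitted line at z = 1.5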
       

Start changing here...

A single, first-order ODE

Setup

# Right-hand side of the differential equation
#     y' = ODE(t, y)
ODE(t, y) = 1 + sin(t) * sin(5 * t) - y

# Initial condition (at t = 0)
y0 = 0

# Set end time for the solutions below
tEnd = 20
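In standard notation, this cell sets up the initial value problem

y' = 1 + \sin(t) \, \sin(5 \, t) - y, \qquad y(0) = 0, \qquad 0 \le t \le 20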
       

Exact solution

# Set up to find exact solution, phi(t)
phi = function('phi')(t)

# Find exact solution.  ics specifies initial condition y(0) = y0
exactSolution = desolve(diff(phi, t) == ODE(t, phi), phi, ics=[0, y0])
show(y == expand(exactSolution(t = t)))

ESPlot = plot(exactSolution, 0, tEnd, rgbcolor = 'blue')
ESPlot.axes_labels(('t', 'y'))
show(ESPlot)
       
y = -\frac{639}{629} \, e^{-t} + \frac{2}{17} \, \sin\left(4 \, t\right) - \frac{3}{37} \, \sin\left(6 \, t\right) + \frac{1}{34} \, \cos\left(4 \, t\right) - \frac{1}{74} \, \cos\left(6 \, t\right) + 1

Numerical approximations

# Lists to store the error in for different values of the step size
EMaxAbsError = []
IEMaxAbsError = []
RKMaxAbsError = []

# Loop over different values of the step size
for tStep in (0.5, 0.25, 0.1, 0.05, 0.025):
    # Find the Euler approximation and compute its error
    EApprox = Euler(ODE, y0, 0, tEnd, tStep)
    EPlot = line(EApprox, rgbcolor = 'red')
    EAbsError = [abs(exactSolution(t = T).n() - Y) for (T, Y) in EApprox]
    EMaxAbsError.append((tStep, max(EAbsError)))

    # Find the improved Euler approximation and compute its error
    IEApprox = ImprovedEuler(ODE, y0, 0, tEnd, tStep)
    IEPlot = line(IEApprox, rgbcolor = 'green')
    IEAbsError = [abs(exactSolution(t = T).n() - Y) for (T, Y) in IEApprox]
    IEMaxAbsError.append((tStep, max(IEAbsError)))

    # Find the Runge-Kutta approximation and compute its error
    RKApprox = RungeKutta(ODE, y0, 0, tEnd, tStep)
    RKPlot = line(RKApprox, rgbcolor = 'orange')
    RKAbsError = [abs(exactSolution(t = T).n() - Y) for (T, Y) in RKApprox]
    RKMaxAbsError.append((tStep, max(RKAbsError)))

    label = text('h = %g' % tStep, (17, 0.2))
    P = ESPlot + EPlot + IEPlot + RKPlot + label
    P.axes_labels(('t', 'y'))
    show(P)
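For each method and each step size h, the loop records the maximum absolute error over the time grid, measured against the exact solution \phi found above:

E(h) = \max_{n} \left| \phi(t_n) - y_n \right|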
[For each step size h, the worksheet displays a plot of the exact solution (blue) together with the Euler (red), improved Euler (green), and Runge-Kutta (orange) approximations.]
# Use Sage's built-in solver to find a numerical approximation.
# This is the way to go in practice!
# (Note that the step size is chosen automatically.)

# Set up the solver
BBSolver = ode_solver()

# The solver is built for systems of ODEs, so we have to do a bit of
# trickery to turn our scalar ODE into a 1-d system.
# This converts the right-hand side.
BBSolver.function = lambda t, y: [ODE(t, y[0])]

# This sets the initial condition, the time interval to solve over,
# and the number of points in the solution, and solves the ODE.
BBSolver.ode_solve(y_0 = [y0], t_span = [0, tEnd], num_points = 100)

# This converts the solution back to a scalar solution.
BBApprox = [(t, y[0]) for (t, y) in BBSolver.solution]

# Find the error for the built-in solver.
BBAbsError = [abs(N(exactSolution(t = T)) - Y) for (T, Y) in BBApprox]
BBMaxAbsError = max(BBAbsError)

BBPlot = line(BBApprox, rgbcolor = 'purple')
P = ESPlot + BBPlot
P.axes_labels(('t', 'y'))
show(P)
       
# List to store the plots in
plots = []

# Loop over the 3 different approximations with a color for each
for (error, color) in ((EMaxAbsError, 'red'),
                       (IEMaxAbsError, 'green'),
                       (RKMaxAbsError, 'orange')):
    (tStep, err) = zip(*error)
    # Get the log of the step sizes
    logTStep = [log(s) for s in tStep]
    # For the plot, find the width of the domain of the step sizes
    logTWidth = max(logTStep) - min(logTStep)
    # Find the log of the error
    logError = [log(v) for v in err]
    # Add a plot of the log error vs. log step size
    plots.append(line(list(zip(logTStep, logError)), marker = 'o', rgbcolor = color))
    # Add a plot of the best linear fit to the log error vs. log step size
    plots.append(plot(findBestLine(logTStep, logError),
                      min(logTStep) - 0.10 * logTWidth,
                      max(logTStep) + 0.10 * logTWidth,
                      rgbcolor = color))

# Add a plot of the error in the built-in solver.
logTStep = [min(logTStep) - 0.10 * logTWidth, max(logTStep) + 0.10 * logTWidth]
plots.append(line(list(zip(logTStep, [log(BBMaxAbsError)] * len(logTStep))), rgbcolor = 'purple'))

# Show all these plots
P = sum(plots)
P.axes_labels(('log h', 'log E'))
show(P)
       
Slope = 1.07078, Intercept = -0.349312
Slope = 2.00972, Intercept = -0.679199
Slope = 4.04944, Intercept = -2.83046
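If the maximum error of a method of order p behaves like E(h) \approx C \, h^{p} for some constant C, then

\log E \approx p \, \log h + \log C

so the slope of each fitted line estimates the order of the method. The slopes printed above (roughly 1, 2, and 4) are consistent with Euler's method being first order, improved Euler second order, and Runge-Kutta fourth order; the horizontal purple line marks the error of the built-in solver for comparison.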