SM-2-ElementaryMathematics


Sage Mathfest II

Elementary Mathematics with Sage

Neil Calkin and Dan Warner

WWW.SAGEMATH.ORG


This session will provide a hands-on introduction to the basic capabilities of Sage. 

This will cover the bulk of the Maple and Mathematica capabilities that are used in Calculus and other undergraduate mathematics courses.

 
       
# A nonlinear example
var('x y')
eq6 = (x-1)^2 + (y-1)^2 == 4; show(eq6)
eq7 = 4*y - 3*x == -1; show(eq7)
sol3 = solve([eq6,eq7],x,y,solution_dict=True)
show(sol3)
# Use the dictionary result and display each solution to a different precision
for s in sol3:
    print(s[x].n(digits=5),s[y].n(digits=25))
       
# Display the nonlinear example and the solutions
x0 = sol3[0][x]
y0 = sol3[0][y]
s0 = '(%5.2f,%5.2f)' % (x0,y0)
p0 = (x0-0.47,y0)
x1 = sol3[1][x]
y1 = sol3[1][y]
s1 = '(%5.2f,%5.2f)' % (x1,y1)
p1 = (x1-0.47,y1)
g = Graphics()
g += circle((1,1),2)
g += line([(-1,-1),(3,2)])
g += points([(s[x],s[y]) for s in sol3],rgbcolor=(1,0,0))
g += text(s0,p0)
g += text(s1,p1)
show(g,aspect_ratio=1)
       
# The following example by Jason Grout uses Sage to solve a system of three non-linear equations.
# First, we solve the system symbolically:
var('x y u v')
eq8 = u+v==9; show(eq8)
eq9 = v*y+u*x==-6; show(eq9)
eq10 = v*y^2+u*x^2==24; show(eq10)
show(solve([eq8,eq9,eq10,u==1],u,v,x,y))
       
# Second, we can solve it numerically
solns = solve([eq8,eq9,eq10,u==1],u,v,x,y, solution_dict=True)
for s in solns:
    print(s[u].n(digits=10), s[v].n(digits=10), s[x].n(digits=10), s[y].n(digits=10))
       

Matrix Calculations

Try a random 200 by 200 matrix with big integer entries.  Then calculate its determinant.  It could be a big number.

random_matrix(ZZ,200) 
       
a = random_matrix(ZZ,200, x=2^128)
print a
print a[0,0]
print a[199,199]
       
type(a) 
       
time a.determinant() 
       
 
       

Calculus

Sage does Calculus:

p = (x^5-1)
diff(p^3,x) 
       
integral(p^3,x) 
       
# Slightly more complicated formula with trigonometric functions with parameters
var('a b c')
q = (x-a)*(cos(b*x)+sin(c*x))
show(q) 
       
# Differentiate
# Illustrates the product rule, the chain rule, and the derivatives of sin and cos.
show(diff(q,x)) 
       
# Integrate using integration by parts
r = integral(q,x)
show(r)
show(expand(r)) 
       
# Define and work with functions
f(x) = sin(x)^2*cos(x)*exp(x)
show(f) 
       
plot(f, 0, 3) 
       
show(integrate(f, x)) 
       
x = 4
print f(x)
plot(f,0,3) 
       
 
       

Two Experimental Opportunities

Let's start with a result by Archimedes, namely that

$\pi < \frac{22}{7}$.

Archimedes used a very geometric approach involving inscribed and circumscribed polygons.  

But today we might simply ask Sage, or any other modern Computer Algebra System (CAS), to evaluate the integral

$\int_0^1 \frac{(1-x)^4 x^4}{1+x^2} \,dx$

show(integral((1-x)^4*x^4/(1+x^2),x,0,1)) 
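
As a quick numerical aside (not part of the argument, just a one-line illustration), we can ask Sage how big the gap between $\frac{22}{7}$ and $\pi$ actually is:

# How big is the gap between 22/7 and pi?  (illustrative one-liner)
(22/7 - pi).n(digits=20) 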
       

Since the integrand is positive on the interval $(0,1)$, the integral is positive; and since Sage tells us its value is $\frac{22}{7} - \pi$, the inequality immediately follows.  However, this may not be completely satisfying until we look at the antiderivative.

$\int \frac{(1-x)^4 x^4}{1+x^2} \,dx$

show(integral((1-x)^4*x^4/(1+x^2),x)) 
       

Of course this approach depends not only on modern Computer Algebra Systems.  It also depends on the mathematical edifice of Trigonometry and Calculus that led to the wonderful relation

$\int \frac{1}{1+x^2} \,dx = \arctan(x)$
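
Sage is happy to confirm this relation directly (a quick illustrative check; as usual, Sage omits the constant of integration):

# Check the arctangent antiderivative
var('x')   # make sure x is a symbolic variable
show(integral(1/(1+x^2), x)) 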

We can push further by taking the geometric series

$\frac{1}{1-x} = 1 + x + x^2 + x^3 + \cdots$

which only converges for $|x| < 1$,  and replace $x$ by $-x^2$ to get the alternating series

 $\frac{1}{1+x^2} = 1 - x^2 + x^4 - x^6 + \cdots$

Then, following James Gregory's lead (1668), integrate term by term to get the series

$\arctan(x) = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots$

Finally, we can conclude, since we have an alternating series with decreasing terms, that

$\frac{\pi}{4} = \arctan(1) =  1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots = \sum\limits_{k=0}^\infty {\frac{(-1)^k}{2k+1}}$
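
As an illustrative aside, Sage reproduces Gregory's series as the Taylor expansion of $\arctan(x)$, and it knows $\arctan(1)$ exactly:

# Taylor expansion of arctan(x) reproduces Gregory's series (illustrative check)
var('x')   # make sure x is a symbolic variable
show(taylor(arctan(x), x, 0, 11))
# and arctan(1) is exactly pi/4
show(arctan(1)) 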

Gregory's series can be used to calculate $\pi$, but its convergence is notoriously slow; nevertheless, studying it with the aid of today's computers has led to some surprising connections.

# Sage supports calculations involving arbitrary precision floating point arithmetic.
# This is now an essential tool for gleaning mathematical insights experimentally.
# We can specify the precision that we want to use by the declaration RealField(n)
# where n is the number of bits to be used
R1 = RealField(100)
p1 = R1(pi)
print type(p1)
print p1
R2 = RealField(200)
p2 = R2(pi)
print p2
print p2.n(digits=50)
print p1+p2
print p2+p1
print p2-p1 
       

A Curious Anomaly with Gregory's Series

As noted above, Gregory's series for $\pi$ is over 300 years old and has notoriously slow convergence.

$\pi = 4 \sum\limits_{k=0}^\infty  {\frac{{( - 1)^k }}{{2k + 1}}}$ 


Nonetheless, let's take an experimental approach and compute some partial sums with a little more accuracy than pencil and paper or even 12-digit calculators. 

 

R = RealField(200)
s = R(0)
s10 = 4*sum([(-1)^k/(2*k+1) for k in range(10)],s); print s10.n(digits=50)
s100 = 4*sum([(-1)^k/(2*k+1) for k in range(100)],s); print s100.n(digits=50)
s1000 = 4*sum([(-1)^k/(2*k+1) for k in range(1000)],s); print s1000.n(digits=50)
s10000 = 4*sum([(-1)^k/(2*k+1) for k in range(10000)],s); print s10000.n(digits=50)
s500000 = 4*sum([(-1)^k/(2*k+1) for k in range(500000)],s); print s500000.n(digits=50)
print R(pi).n(digits=50) 
       

Note:

With 10 terms the second digit is incorrect, but the third and fourth happen to be correct -- probably just a curious coincidence.

With 100 terms the second digit is now correct, but the third digit is wrong.  On the other hand the next 4 digits, 1592, happen to be OK.

With 1000 terms the third digit is now correct, but the fourth digit is wrong.  On the other hand the next 6 digits, 592653, happen to be OK.

With 10,000 terms the fourth digit is now correct, but the fifth digit is wrong.  On the other hand the next 6 digits, 926535, happen to be OK.

With half a million terms we get the fifth and sixth digits correct, but the seventh digit is wrong.  On the other hand, the next 10 digits, 6535897932, are correct.  The next 2 digits are wrong but the next 10 digits, 4626422832, are correct.  After that, we have one wrong digit followed by another 10 correct digits, 9502884198. Then 3 wrong digits, and, wrapping up, the last 7 digits are correct.

Something beyond coincidence is certainly going on here!
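
These digit-by-digit observations can also be made programmatically.  The following sketch (added here for illustration; it assumes R and s500000 from the computation above are still defined) lists the positions in the 50-digit printouts where the partial sum and $\pi$ disagree:

# Positions where the 50-digit strings of pi and the 500,000-term partial sum differ
# (assumes R and s500000 from the cell above)
pi_str = str(R(pi).n(digits=50))
s_str = str(s500000.n(digits=50))
print(pi_str)
print(s_str)
print([i for i, (a, b) in enumerate(zip(pi_str, s_str)) if a != b]) 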

Although it took the sophisticated expertise of three mathematicians to explain the phenomenon, the experimental observation that got the ball rolling was made by Joseph Roy North of Colorado Springs, a person of commendable curiosity. 

Jonathan Borwein, Peter Borwein, and Karl Dilcher, "Pi, Euler Numbers, and Asymptotic Expansions", American Mathematical Monthly, 1989.

# Since the first correction for N=500,000 is to add 2e-6, and
# since the second correction is to subtract 2e-18, it might be
# helpful to consider the same calculation for pi/2 instead of pi.
# Also, armed with a little hindsight we will choose n = (10^m)/2
R = RealField(200)
s = R(0)
s50 = 2*sum([(-1)^k/(2*k+1) for k in range(50)],s); print s50.n(digits=50)
s500 = 2*sum([(-1)^k/(2*k+1) for k in range(500)],s); print s500.n(digits=50)
s5000 = 2*sum([(-1)^k/(2*k+1) for k in range(5000)],s); print s5000.n(digits=50)
s50000 = 2*sum([(-1)^k/(2*k+1) for k in range(50000)],s); print s50000.n(digits=50)
s500000 = 2*sum([(-1)^k/(2*k+1) for k in range(500000)],s); print s500000.n(digits=50)
print R(pi/2).n(digits=50) 
       
# Now let's look at the necessary corrections
# pi/2    = 1.5707963267948966192313216916397514420985846996876
# S50     = 1.5607973262955052392565914871547462164608848226857
#             +1  -1  +5
# pi/2    = 1.5707963267948966192313216916397514420985846996876
# S500    = 1.5697963277948916192923203066902697395329262915799
#             +1  -1  +5  -61
# pi/2    = 1.5707963267948966192313216916397514420985846996876
# S5000   = 1.5706963267958966191813216977397500570990899094173
#             +1  -1  +5  -61  +1385
# pi/2    = 1.5707963267948966192313216916397514420985846996876
# S50000  = 1.5707863267948976192313211916397520520985833146876
#             +1  -1  +5  -61  +1385
# pi/2    = 1.5707963267948966192313216916397514420985846996876
# S500000 = 1.5707953267948966202313216916347514420986456996876
#             +1  -1  +5  -61
#
# Check the OEIS to see if this is the start of a known sequence
#
# Aha!  There are 10 known sequences that start with the first 4 numbers
# (if we ignore the signs), and two of them have 1385 as the fifth term.
#
# Let's increase the precision and try again.
R = RealField(300)
s = R(0)
s50 = 2*sum([(-1)^k/(2*k+1) for k in range(50)],s); print s50.n(digits=70)
s500 = 2*sum([(-1)^k/(2*k+1) for k in range(500)],s); print s500.n(digits=70)
s5000 = 2*sum([(-1)^k/(2*k+1) for k in range(5000)],s); print s5000.n(digits=70)
s50000 = 2*sum([(-1)^k/(2*k+1) for k in range(50000)],s); print s50000.n(digits=70)
s500000 = 2*sum([(-1)^k/(2*k+1) for k in range(500000)],s); print s500000.n(digits=70)
print R(pi/2).n(digits=70) 
       
# pi/2    = 1.570796326794896619231321691639751442098584699687552910487472296153908
# S50     = 1.560797326295505239256591487154746216460884822685658023243553968744347
#             +1  -1  +5
# pi/2    = 1.570796326794896619231321691639751442098584699687552910487472296153908
# S500    = 1.569796327794891619292320306690269739532926291579917284073598715948943
#             +1  -1  +5  -61
# pi/2    = 1.570796326794896619231321691639751442098584699687552910487472296153908
# S5000   = 1.570696326795896619181321697739750057099089909417276609848259381272946
#             +1  -1  +5  -61  +1385
# pi/2    = 1.570796326794896619231321691639751442098584699687552910487472296153908
# S50000  = 1.570786326794897619231321191639752052098583314687557962587445268504108
#             +1  -1  +5  -61  +1385  -50521
# pi/2    = 1.570796326794896619231321691639751442098584699687552910487472296153908
# S500000 = 1.570795326794896620231321691634751442098645699687551525487472346674908
#             +1  -1  +5  -61  +1385  -50521
#
# Notice that the correction digits remain the same, but the gaps between them get
# larger as n increases. 
       

The OEIS confirms that these first 6 corrections are indeed the first 6 nonzero Euler numbers, signs and all.

The OEIS also gives us a number of connections and references -- including the paper that first examined this phenomenon.
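
In current versions of Sage the OEIS can even be queried directly from a worksheet.  This requires internet access, so treat the following as an optional illustrative aside:

# Search the OEIS for the absolute values of the observed corrections
# (requires internet access)
oeis([1, 1, 5, 61, 1385, 50521], max_results=3) 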

The bottom line is that we have strong experimental evidence that the error in the Gregory series has an asymptotic expansion whose coefficients involve the Euler numbers.

In fact, armed with this insight, which they obtained in a very similar fashion, Dilcher and the brothers Borwein showed that the first 6 terms of the asymptotic expansion for even $n$ are:

$\frac{E_0}{2n}$, $\frac{E_2}{(2n)^3}$, $\frac{E_4}{(2n)^5}$, $\frac{E_6}{(2n)^7}$, $\frac{E_8}{(2n)^9}$, and $\frac{E_{10}}{(2n)^{11}}$

where $E_0 = 1$, $E_2 = -1$, $E_4 = 5$, $E_6 = -61$, $E_8 = 1385$, and $E_{10} = -50521$. 
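
We can check this expansion numerically with Sage's built-in euler_number function.  The following sketch (added here for illustration, not part of the original worksheet) uses $N = 500000$ terms, as above, and compares the true error $\frac{\pi}{2} - S_N$ with the sum of these six asymptotic terms:

# Compare the true error pi/2 - S_N with the first six asymptotic terms
# E_(2m)/(2N)^(2m+1), using Sage's built-in euler_number function
R = RealField(300)
N = 500000
SN = 2*sum([(-1)^k/(2*k+1) for k in range(N)], R(0))
true_error = R(pi/2) - SN
asymptotic = sum([R(euler_number(2*m))/(2*N)^(2*m+1) for m in range(6)])
print(true_error.n(digits=60))
print(asymptotic.n(digits=60)) 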

 
       

Illustrating Partial Derivatives

var('x y u v') 
       
(x, y, u, v)
plot3d((x^2 - y^2) / (x^2 + y^2), (x, -1, 1), (y, -1, 1)) 
       
f(x, y) = x^3 + x^2 * y^3 - 2 * y^2
P = plot3d(f(x, y), (x, 1, 3), (y, 0, 2), opacity = 0.2)
P += parametric_plot3d([2 + u, 1, f(2, 1) + diff(f(x, y), x)(x = 2, y = 1) * u], (u, -1/2, 1/2), color = "red", thickness = 2)
P += parametric_plot3d([2, 1 + u, f(2, 1) + diff(f(x, y), y)(x = 2, y = 1) * u], (u, -1/2, 1/2), color = "green", thickness = 2)
P += point((2, 1, f(2, 1)), color = "black", size = 10)
show(P) 
       
f(x, y) = 4 - x^2 - 2 * y^2
P = parametric_plot3d([u * cos(v), 1 / sqrt(2) * u * sin(v), 4 - u^2], (u, 0, 2), (v, 0, 2 * pi), aspect_ratio = [1, 1, 1], opacity = 0.2)
P += parametric_plot3d([1 + u, 1, f(1, 1) + diff(f(x, y), x)(x = 1, y = 1) * u], (u, -1/2, 1/2), color = "red", thickness = 2)
P += parametric_plot3d([1, 1 + u, f(1, 1) + diff(f(x, y), y)(x = 1, y = 1) * u], (u, -1/2, 1/2), color = "green", thickness = 2)
P += point((1, 1, f(1, 1)), color = "black", size = 10)
show(P) 
       
 
       

3-Dimensional Plots

Sage can draw 3d plots:

# Surface plot example
var('x,y')
plot3d(cos(x^2 + y^2), (x, -2, 2), (y, -2, 2)) 
       
# Two intersecting surfaces
var('x,y')
b = 2.2
(plot3d(sin(x^2-y^2),(x,-b,b),(y,-b,b), opacity=.9)
 + plot3d(0, (x,-b,b), (y,-b,b), color='red')) 
       
# Implicit 3D plot
var('x,y,z')
T = golden_ratio
p = 2 - (cos(x + T*y) + cos(x - T*y) + cos(y + T*z) + cos(y - T*z) + cos(z - T*x) + cos(z + T*x))
r = 4.78
implicit_plot3d(p, (x, -r, r), (y, -r, r), (z, -r, r), plot_points=50) 
       
 
       

Sage can plot Yoda:

from scipy import io
x = io.loadmat(DATA + 'yodapose.mat', struct_as_record=True)
from sage.plot.plot3d.index_face_set import IndexFaceSet
V = x['V']; F3 = x['F3']-1; F4 = x['F4']-1
Y = (IndexFaceSet(F3, V, color = Color('#00aa00'))
     + IndexFaceSet(F4, V, color = Color('#00aa00')))
Y = Y.rotateX(-1)
Y.show(aspect_ratio = [1,1,1], frame = False, figsize = 4) 
       
 
       
var('t') 
       
t
parametric_plot3d([t^3, ln(3 - t), sqrt(t)], (t, 0.01, 2.99)) 
       
parametric_plot3d([cos(t), sin(t), t], (t, 0, 6 * pi)) 
       
var('s')
parametric_plot3d([cos(t), sin(t), s], (t, 0, 2 * pi), (s, 0, 6 * pi)) \
    + parametric_plot3d([cos(t), sin(t), t], (t, 0, 6 * pi), color = 'red') 
       
parametric_plot3d([cos(t), sin(t), s], (t, 0, 2 * pi), (s, 0, 4)) \
    + parametric_plot3d([s, t, 2 - t], (s, -2, 2), (t, -2, 2), color = 'red') \
    + parametric_plot3d([cos(t), sin(t), 2 - sin(t)], (t, 0, 2 * pi), color = 'green') 
       
parametric_plot3d([t^2, 7 * t - 12, t^2], (t, 0, 5)) \
    + parametric_plot3d([4 * t - 3, t^2, 5 * t - 6], (t, 0, 5), color = 'red') 
       
 
       

Dynamical Systems Modeling

var('t')
y = function('y', t)
show(y)
type(y) 
       
$y\left(t\right)$
<type 'sage.symbolic.expression.Expression'>

A Simple Growth Model

var('alpha beta')
DE1 = diff(y,t) == alpha*y - beta
show(DE1)
u1 = desolve(DE1, y, ivar=t)
show(u1) 
       
$D[0]\left(y\right)\left(t\right) = \alpha y\left(t\right) - \beta$
${\left(\frac{\beta e^{\left(-\alpha t\right)}}{\alpha} + c\right)} e^{\left(\alpha t\right)}$
f(t) = expand(u1)
show(f) 
       
$t \ {\mapsto}\ c e^{\left(\alpha t\right)} + \frac{\beta}{\alpha}$
f(0);f(1);f(1/alpha);f(2) 
       
c + beta/alpha
c*e^alpha + beta/alpha
c*e + beta/alpha
c*e^(2*alpha) + beta/alpha
g(t) = f(c=20,alpha=1/2,beta=40)
g 
       
t |--> 20*e^(1/2*t) + 80
plot(g,0,5) 
       

A Spring-Mass-Damper Model

var('t k b')
x = function('x',t)
DE2 = diff(x,t,t) == -2*x - 1/2*diff(x,t)
u2 = desolve(DE2, x, ivar=t)
u2 
       
(k1*sin(1/4*sqrt(31)*t) + k2*cos(1/4*sqrt(31)*t))*e^(-1/4*t)
f(t) = u2(k1=1/2,k2=1)
f 
       
t |--> 1/2*(sin(1/4*sqrt(31)*t) + 2*cos(1/4*sqrt(31)*t))*e^(-1/4*t)
plot(f,0,10) 
       

Rumor Model

var('t alpha P')
r = function('r',t)
model = diff(r,t) == alpha*r*(P-r)
u3 = desolve(model,r,ivar=t)
u3 
       
-(log(-P + r(t)) - log(r(t)))/(P*alpha) == c + t
var('K')
eq = (r-P)/r == K*exp(-alpha*P*t)
eq 
       
-(P - r(t))/r(t) == K*e^(-P*alpha*t)
solve(eq,r) 
       
[r(t) == -P*e^(P*alpha*t)/(K - e^(P*alpha*t))]
r = function('r',t)
rp = desolve_rk4(diff(r,t) == (1/100)*r*(100-r), r, ics=[0,1], step=0.05, end_points=10) 
       
list_plot(rp) 
       
 
       
 
       

Graph Theory

Sage can do graph theory:

g = graphs.FlowerSnark(); g 
       
graph_editor(g) 
       

Sage contains many unique and deep algorithms:

g.automorphism_group()