# Experimental Opportunities

## Neil Calkin, Holly Hirst, and Dan Warner

### November, 2011

$\pi < \frac{22}{7}$.

Archimedes used a very geometric approach involving inscribed and circumscribed polygons.
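Archimedes' doubling scheme is easy to replay in plain Python (a sketch of the recurrences, not his actual arithmetic): if $a$ and $b$ are the semiperimeters of the circumscribed and inscribed regular $n$-gons of the unit circle, then $a' = 2ab/(a+b)$ and $b' = \sqrt{a'b}$ are the semiperimeters for the $2n$-gon.

```python
import math

# Semiperimeters of the circumscribed (a) and inscribed (b) regular
# hexagons of the unit circle; they bracket pi from above and below.
a = 2 * math.sqrt(3)
b = 3.0

for _ in range(4):                # 6 -> 12 -> 24 -> 48 -> 96 sides
    a = 2 * a * b / (a + b)       # harmonic mean: circumscribed 2n-gon
    b = math.sqrt(a * b)          # geometric mean: inscribed 2n-gon

print(b, "< pi <", a)             # the 96-gon bracket, as Archimedes had it
```

Four doublings reach the 96-gon and reproduce Archimedes' bracket $3.1410\ldots < \pi < 3.1427\ldots$, consistent with his bounds $\frac{223}{71} < \pi < \frac{22}{7}$.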

But today we might simply ask Sage, or any other modern Computer Algebra System (CAS), to evaluate the integral

$\int_0^1 \frac{(1-x)^4 x^4}{1+x^2} \,dx$

```
show(integral((1-x)^4*x^4/(1+x^2), x, 0, 1))
```

Since the integrand is positive on the interval (0,1), the inequality immediately follows.  However, this may not be completely satisfactory until we look at the antiderivative.

$\int \frac{(1-x)^4 x^4}{1+x^2} \,dx$

```
show(integral((1-x)^4*x^4/(1+x^2), x))
```
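Readers without a CAS can still check the key fact numerically: the definite integral has the exact value $\frac{22}{7} - \pi$, and a plain-Python Simpson's-rule estimate (standard library only) agrees to about ten decimal places.

```python
import math

def f(x):
    return (1 - x) ** 4 * x ** 4 / (1 + x ** 2)

# Composite Simpson's rule on [0, 1] with n subintervals (n even)
n = 1000
h = 1.0 / n
total = f(0) + f(1)
for i in range(1, n):
    total += (4 if i % 2 else 2) * f(i * h)
total *= h / 3

exact = 22 / 7 - math.pi          # the closed-form value of the integral
print(total, exact)
```

Since the integral is visibly positive, $\frac{22}{7} - \pi > 0$ follows at once.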

Of course this approach depends not only on modern Computer Algebra Systems, but also on the mathematical edifice of Trigonometry and Calculus that led to the wonderful relation

$\int \frac{1}{1+x^2} \,dx = \arctan(x)$

We can push further by taking the geometric series

$\frac{1}{1-x} = 1 + x + x^2 + x^3 + \cdots$

which converges only for $|x| < 1$, and replacing $x$ by $-x^2$ to get the alternating series

$\frac{1}{1+x^2} = 1 - x^2 + x^4 - x^6 + \cdots$

Then, following James Gregory's lead (1668), integrate term by term to get the series

$\arctan(x) = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots$

Finally, we can conclude, since we have an alternating series with decreasing terms, that

$\frac{\pi}{4} = \arctan(1) = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots = \sum\limits_{k=0}^\infty {\frac{(-1)^k}{2k+1}}$

Gregory's series can be used to calculate $\pi$, but its convergence is notoriously slow. Nonetheless, studying it with the aid of today's computers has led to some surprising connections.
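A quick plain-Python check (standard library only; the helper `gregory` is ours) makes the slowness concrete: the error after $n$ terms is roughly $1/n$.

```python
import math

def gregory(n):
    """Partial sum of Gregory's series for pi: 4 * sum of n terms."""
    return 4 * math.fsum((-1) ** k / (2 * k + 1) for k in range(n))

for n in (10, 100, 1000, 10000):
    err = math.pi - gregory(n)
    print(n, err, n * err)   # n * err stays close to 1
```

So each additional correct decimal digit costs roughly ten times as many terms.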

```
# Sage supports calculations involving arbitrary precision floating point arithmetic.
# This is an essential new tool for gleaning mathematical insights experimentally.
# We can specify the precision that we want to use by the declaration RealField(n)
# where n is the number of bits to be used
R1 = RealField(100)
p1 = R1(pi)
print type(p1)
print p1
R2 = RealField(200)
p2 = R2(pi)
print p2
print p2.n(digits=50)
print p1+p2
print p2+p1
print p2-p1
```
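For readers without Sage, Python's standard `decimal` module offers a similar kind of user-selected working precision, measured in decimal digits rather than bits. A minimal sketch; note the design difference that `decimal` rounds every result to the current context precision, whereas Sage coerces a mixed-precision sum to the lower of the two operand precisions.

```python
from decimal import Decimal, getcontext

getcontext().prec = 50            # work with 50 significant decimal digits
root2_hi = Decimal(2).sqrt()
print(root2_hi)

getcontext().prec = 15            # now only 15 digits
root2_lo = Decimal(2).sqrt()
print(root2_lo)

# Back at high precision, subtraction exposes the low-precision error
getcontext().prec = 50
print(root2_hi - root2_lo)
```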

## A Curious Anomaly with Gregory's Series

As noted above, Gregory's series for $\pi$ is over 300 years old and has notoriously slow convergence.

$\pi = 4 \sum\limits_{k=0}^\infty {\frac{{( - 1)^k }}{{2k + 1}}}$


Nonetheless, let's take an experimental approach and compute some partial sums with a little more accuracy than pencil and paper or even 12-digit calculators.

```
R = RealField(200)
s = R(0)
s10 = 4*sum([(-1)^k/(2*k+1) for k in range(10)], s); print s10.n(digits=50)
s100 = 4*sum([(-1)^k/(2*k+1) for k in range(100)], s); print s100.n(digits=50)
s1000 = 4*sum([(-1)^k/(2*k+1) for k in range(1000)], s); print s1000.n(digits=50)
s10000 = 4*sum([(-1)^k/(2*k+1) for k in range(10000)], s); print s10000.n(digits=50)
s500000 = 4*sum([(-1)^k/(2*k+1) for k in range(500000)], s); print s500000.n(digits=50)
print R(pi).n(digits=50)
```

#### Note:

With 10 terms the second digit is incorrect, but the third and fourth happen to be correct -- probably just a curious coincidence.

With 100 terms the second digit is now correct, but the third digit is wrong.  On the other hand the next 4 digits, 1592, happen to be OK.

With 1000 terms the third digit is now correct, but the fourth digit is wrong.  On the other hand the next 6 digits, 592653, happen to be OK.

With 10,000 terms the fourth digit is now correct, but the fifth digit is wrong.  On the other hand the next 6 digits, 926535, happen to be OK.

With a half a million terms we get the fifth and sixth digits correct, but the seventh digit is wrong.  On the other hand, the next 10 digits, 6535897932, are correct.  The next 2 digits are wrong but the next 10 digits, 4626422832, are correct.  After that, we have one wrong digit followed by another 10 correct digits, 9502884198. Then 3 wrong digits, and, wrapping up, the 7 last digits are correct.

Something beyond coincidence is certainly going on here!

Although it took the sophisticated expertise of three mathematicians to explain the phenomenon, the experimental observation that got the ball rolling was made by Joseph Roy North of Colorado Springs, a person of commendable curiosity.

Jonathan Borwein, Peter Borwein, and Karl Dilcher, "Pi, Euler Numbers, and Asymptotic Expansions," American Mathematical Monthly, 1989.

```
# Since the first correction for N=500,000 is to add 2e-6, and
# since the second correction is to subtract 2e-18, it might be
# helpful to consider the same calculation for pi/2 instead of pi.
# Also, armed with a little hindsight we will choose n = (10^m)/2
R = RealField(200)
s = R(0)
s50 = 2*sum([(-1)^k/(2*k+1) for k in range(50)], s); print s50.n(digits=50)
s500 = 2*sum([(-1)^k/(2*k+1) for k in range(500)], s); print s500.n(digits=50)
s5000 = 2*sum([(-1)^k/(2*k+1) for k in range(5000)], s); print s5000.n(digits=50)
s50000 = 2*sum([(-1)^k/(2*k+1) for k in range(50000)], s); print s50000.n(digits=50)
s500000 = 2*sum([(-1)^k/(2*k+1) for k in range(500000)], s); print s500000.n(digits=50)
print R(pi/2).n(digits=50)
```

Now let's look at the necessary corrections.

```
# pi/2    = 1.5707963267948966192313216916397514420985846996876
# S50     = 1.5607973262955052392565914871547462164608848226857
#           +1 -1 +5
# pi/2    = 1.5707963267948966192313216916397514420985846996876
# S500    = 1.5697963277948916192923203066902697395329262915799
#           +1 -1 +5 -61
# pi/2    = 1.5707963267948966192313216916397514420985846996876
# S5000   = 1.5706963267958966191813216977397500570990899094173
#           +1 -1 +5 -61 +1385
# pi/2    = 1.5707963267948966192313216916397514420985846996876
# S50000  = 1.5707863267948976192313211916397520520985833146876
#           +1 -1 +5 -61 +1385
# pi/2    = 1.5707963267948966192313216916397514420985846996876
# S500000 = 1.5707953267948966202313216916347514420986456996876
#           +1 -1 +5 -61
```

Check the OEIS to see if this is the start of a known sequence.

Aha! There are 10 known sequences that start with the first 4 numbers
(if we ignore the signs), and two of them have 1385 as the fifth term.

Let's increase the precision and try again.

```
R = RealField(300)
s = R(0)
s50 = 2*sum([(-1)^k/(2*k+1) for k in range(50)], s); print s50.n(digits=70)
s500 = 2*sum([(-1)^k/(2*k+1) for k in range(500)], s); print s500.n(digits=70)
s5000 = 2*sum([(-1)^k/(2*k+1) for k in range(5000)], s); print s5000.n(digits=70)
s50000 = 2*sum([(-1)^k/(2*k+1) for k in range(50000)], s); print s50000.n(digits=70)
s500000 = 2*sum([(-1)^k/(2*k+1) for k in range(500000)], s); print s500000.n(digits=70)
print R(pi/2).n(digits=70)
```
```
# pi/2    = 1.570796326794896619231321691639751442098584699687552910487472296153908
# S50     = 1.560797326295505239256591487154746216460884822685658023243553968744347
#           +1 -1 +5
# pi/2    = 1.570796326794896619231321691639751442098584699687552910487472296153908
# S500    = 1.569796327794891619292320306690269739532926291579917284073598715948943
#           +1 -1 +5 -61
# pi/2    = 1.570796326794896619231321691639751442098584699687552910487472296153908
# S5000   = 1.570696326795896619181321697739750057099089909417276609848259381272946
#           +1 -1 +5 -61 +1385
# pi/2    = 1.570796326794896619231321691639751442098584699687552910487472296153908
# S50000  = 1.570786326794897619231321191639752052098583314687557962587445268504108
#           +1 -1 +5 -61 +1385 -50521
# pi/2    = 1.570796326794896619231321691639751442098584699687552910487472296153908
# S500000 = 1.570795326794896620231321691634751442098645699687551525487472346674908
#           +1 -1 +5 -61 +1385 -50521
```

Notice that the correction digits remain the same, but the gaps between them get larger as n increases.

The OEIS confirms that these first 6 corrections are indeed the first 6 Euler numbers with alternating signs.

The OEIS also gives us a number of connections and references -- including the paper that first examined this phenomenon.

The bottom line is that we have strong experimental evidence that the error in the Gregory series has an asymptotic expansion whose coefficients involve the Euler numbers.

In fact, armed with this insight, which they obtained in a very similar fashion, Dilcher and the brothers Borwein showed that the first 6 terms of the asymptotic expansion for even $n$ are:

$\frac{E_0}{2n}$, $\frac{E_2}{(2n)^3}$, $\frac{E_4}{(2n)^5}$, $\frac{E_6}{(2n)^7}$, $\frac{E_8}{(2n)^9}$, and $\frac{E_{10}}{(2n)^{11}}$

where $E_0 = 1$, $E_2 = -1$, $E_4 = 5$, $E_6 = -61$, $E_8 = 1385$, and $E_{10} = -50521$.
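Even ordinary double precision is enough to see the first two terms of this expansion. In plain Python (standard library only), take $n = 5000$ terms, so $2n = 10^4$: the raw error is about $10^{-4}$, and subtracting the $E_0/(2n)$ term leaves a residue of about $10^{-12}$, exactly the scale of the $E_2/(2n)^3$ term.

```python
import math

n = 5000                                  # terms used, so 2n = 10**4
S = 2 * math.fsum((-1) ** k / (2 * k + 1) for k in range(n))
err = math.pi / 2 - S                     # true error of the partial sum

first = 1 / (2 * n)                       # E0 / (2n)
second = first - 1 / (2 * n) ** 3         # ... + E2 / (2n)^3 with E2 = -1

print(err)             # about 1e-4
print(err - first)     # about -1e-12: the E2 correction scale
print(err - second)    # down at machine-precision noise
```

The next term, $E_4/(2n)^5 = 5 \times 10^{-21}$, is below what doubles can resolve, which is exactly why the high-precision `RealField` experiments above were needed.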

## A Curious Question about $e^x$

Here's a question that anyone who has completed the second semester of Calculus and who is endowed with ample curiosity might ask.

The Taylor polynomials for $e^x$ converge for all values of $x$ (in fact, for all complex values of $x$).  That is, for

$p_n(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots + \frac{x^n}{n!}$

we have the limit

$\mathop {\lim }\limits_{n \to \infty } p_n (x) = e^x$

for every $x$.
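The convergence at, say, $x = 1$ is easy to witness with exact rational arithmetic. A plain-Python sketch (the helper `taylor_exp` is ours):

```python
from fractions import Fraction
import math

def taylor_exp(x, n):
    """Degree-n Taylor polynomial of e^x, evaluated exactly at rational x."""
    term = Fraction(1)
    total = Fraction(0)
    for k in range(n + 1):
        total += term
        term = term * x / (k + 1)   # x^(k+1) / (k+1)!
    return total

for n in (5, 10, 15):
    print(n, float(taylor_exp(Fraction(1), n)) - math.e)
```

The error shrinks factorially fast, as the remainder term $\frac{x^{n+1}}{(n+1)!}e^{\xi}$ predicts.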

Now a polynomial of degree $n$ has $n$ zeros in the complex plane (counted with multiplicity), but $e^x$ has no zeros.

Where do the zeros go?

Let's look at a few!

```
p1 = CDF['x']([1,1])
show(p1)
r1 = p1.roots(multiplicities=False)
show(r1)
```
```
p2 = CDF['x']([1,1,1/2])
show(p2)
r2 = p2.roots(multiplicities=False)
show(r2)
```
```
p3 = CDF['x']([1,1,1/2,1/6])
show(p3)
r3 = p3.roots(multiplicities=False)
show(r3)
```
```
# Maybe a plot would be helpful
g = Graphics()
pp1 = [(real(z),imag(z)) for z in r1]
g += point(pp1, pointsize=20, rgbcolor=(0,0,0))
pp2 = [(real(z),imag(z)) for z in r2]
g += point(pp2, pointsize=20, rgbcolor=(1,0,0))
pp3 = [(real(z),imag(z)) for z in r3]
g += point(pp3, pointsize=20, rgbcolor=(0,1,0))
show(g)
```
```
p4 = CDF['x']([1,1,1/2,1/6,1/24])
show(p4)
r4 = p4.roots(multiplicities=False)
show(r4)
pp4 = [(real(z),imag(z)) for z in r4]
g += point(pp4, pointsize=20, rgbcolor=(0,0,1))
show(g)
```
```
p5 = CDF['x']([1,1,1/2,1/6,1/24,1/120])
show(p5)
r5 = p5.roots(multiplicities=False)
show(r5)
pp5 = [(real(z),imag(z)) for z in r5]
g += point(pp5, pointsize=20, rgbcolor=(0,0,0))
show(g)
```
```
g = Graphics()
coef = [1]
for n in range(1,26):
    c = coef[-1]
    coef.append(c/n)
    pn = CDF['x'](coef)
    # show(pn)
    r = pn.roots(multiplicities=False)
    points = [(real(z),imag(z)) for z in r]
    if mod(n,2)==0:
        g += point(points, pointsize=20, rgbcolor=(1,0,0))
    else:
        g += point(points, pointsize=20, rgbcolor=(0,0,0))
show(g, aspect_ratio=1)
```

As the figure suggests, the zeros move away from the origin as $n$ increases.  Moreover, there appears to be a parabolic region that remains free of zeros.

In fact, Saff and Varga showed that there are no zeros inside the parabolic region $y^2 \le 4(x+1)$ for $-1 < x$.

E. B. Saff and R. S. Varga, "Zero-free Parabolic Regions for Sequences of Polynomials," SIAM J. Math. Anal., 1976.
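This claim can be probed without a CAS. Below is a plain-Python sketch using the Durand-Kerner iteration (a standard simultaneous root-finder, not the method Saff and Varga used) to locate the ten zeros of $p_{10}$ and check that none lies in the region $y^2 \le 4(x+1)$.

```python
import cmath
from math import factorial

n = 10
# Coefficients of the monic polynomial n! * p_n(x): n!/k! for x^k
coeffs = [factorial(n) // factorial(k) for k in range(n + 1)]

def p(z):
    """Horner evaluation of the monic polynomial."""
    acc = 0j
    for c in reversed(coeffs):
        acc = acc * z + c
    return acc

# Durand-Kerner: refine all 10 roots simultaneously, starting from
# distinct points scattered on a circle of radius 6.
roots = [6 * cmath.exp(1j * (2 * cmath.pi * k / n + 0.5)) for k in range(n)]
for _ in range(300):
    updated = []
    for i, z in enumerate(roots):
        denom = 1 + 0j
        for j, w in enumerate(roots):
            if j != i:
                denom *= z - w
        updated.append(z - p(z) / denom)
    roots = updated

# Zeros lying inside the parabolic region y^2 <= 4(x+1), x > -1
inside = [z for z in roots if z.real > -1 and z.imag ** 2 <= 4 * (z.real + 1)]
print(inside)   # should be empty
```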

```
P1 = plot(sqrt(4*(x+1)), x, -1, 15)
P2 = plot(-sqrt(4*(x+1)), x, -1, 15)
g += P1 + P2
show(g, aspect_ratio=1)
```
```
g = Graphics()
coef = [1]
for n in range(1,52):
    c = coef[-1]
    coef.append(c/n)
    pn = CDF['x'](coef)
    # show(pn)
    r = pn.roots(multiplicities=False)
    points = [(real(z),imag(z)) for z in r]
    if mod(n,2)==0:
        g += point(points, pointsize=20, rgbcolor=(1,0,0))
    else:
        g += point(points, pointsize=20, rgbcolor=(0,0,0))
P1 = plot(sqrt(4*(x+1)), x, -1, 37)
P2 = plot(-sqrt(4*(x+1)), x, -1, 37)
g += P1 + P2
show(g, aspect_ratio=1)
```

## The Euclidean Algorithm on Real Numbers

The Euclidean Algorithm is commonly used to find the gcd of two integers. For example, consider 105 and 91. We first divide the smaller into the larger to get

$105 = 1 \cdot 91 + 14$

If a number divides 105 and 91, then it must also divide 14. So we repeat the process with 91 and 14 to get

$91 = 6 \cdot 14 + 7$

One more step gives us

$14 = 2 \cdot 7 + 0$

So 7 is the gcd of 105 and 91.

Another way to record this process would be as follows

$\frac{105}{91} = 1 + \frac{14}{91} = 1 + \frac{1}{\frac{91}{14}} = 1 + \frac{1}{6 + \frac{7}{14}} = 1 + \frac{1}{6 + \frac{1}{\frac{14}{7}}} = 1 + \frac{1}{6 + \frac{1}{2}}$
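The same bookkeeping is easy to automate with exact rational arithmetic; here is a plain-Python sketch (the helper `continued_fraction` is ours).

```python
from fractions import Fraction
from math import gcd

def continued_fraction(x, max_terms=20):
    """Partial quotients of a Fraction via the Euclidean algorithm."""
    terms = []
    for _ in range(max_terms):
        q = x.numerator // x.denominator    # the integer part
        terms.append(q)
        rem = x - q                         # the fractional part
        if rem == 0:
            break
        x = 1 / rem                         # invert and repeat
    return terms

print(continued_fraction(Fraction(105, 91)))   # [1, 6, 2]
print(gcd(105, 91))                            # 7
```

For a rational input the expansion terminates, mirroring the zero remainder that ends the Euclidean algorithm.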

The same process can be applied to decimal numbers.

```
# For example, consider the number
x = 1.15384615384615
q = [int(x)]
r = x.frac()
print q, r
f = 1/r
q.append(int(f))
r = f.frac()
print q, r
f = 1/r
q.append(int(f))
r = f.frac()
print q, r
f = 1/r
q.append(int(f))
r = f.frac()
print q, r
```
```
# The last remainder is not exactly zero, but it is close enough
# to stop and see what we have.
cf = 1 + 1/(6 + (1/(1 + 1/(1 + 0))))
print cf, cf.n()
```
```
# What about a number like phi?
phi = (1+sqrt(5.0))/2
print 'phi =', phi
q = [int(phi)]
r = phi.frac()
print q, r
n = 5
for k in range(n):
    f = 1/r
    q.append(int(f))
    r = f.frac()
    print q, r
```
```
# What are these fractions?
print 1, 1+1/1, 1+1/(1+1/1), 1+1/(1+1/(1+1/1)), 1+1/(1+1/(1+1/(1+1/1))), 1+1/(1+1/(1+1/(1+1/(1+1/1))))
```
```
# How about a few more terms for phi
q = [int(phi)]
r = phi.frac()
print q, r
n = 10
for k in range(n):
    f = 1/r
    q.append(int(f))
    r = f.frac()
    print q, r
```
```
# Notice that the remainder is identical for the first 3 steps and then degrades slowly.
# What if we carried more digits
R = RealField(200)
s = R(5)
bigphi = (1+sqrt(s))/2
print bigphi
q = [int(bigphi)]
r = bigphi.frac()
print q, r
n = 50
for k in range(n):
    f = 1/r
    q.append(int(f))
    r = f.frac()
    print q, r.n(digits=10)
```
```
# Not surprisingly, Sage has some built-in tools for dealing with continued fractions
print continued_fraction_list(phi)
```
```
# You can also get the list of partial convergents (the fractions for each stage)
cf, pc = continued_fraction_list(phi, partial_convergents=True)
print cf
print pc
print [(fn/fd).n() for (fn,fd) in pc]
print 'phi =', phi
```
```
# What about the sqrt(2)?
s = R(2)
sq = sqrt(s)
print sq
q = [int(sq)]
r = sq.frac()
print q, r
n = 20
for k in range(n):
    f = 1/r
    q.append(int(f))
    r = f.frac()
    print q, r.n(digits=10)
```
```
# What about the sqrt(3)?
s = R(3)
sq = sqrt(s)
print sq
q = [int(sq)]
r = sq.frac()
print q, r
n = 20
for k in range(n):
    f = 1/r
    q.append(int(f))
    r = f.frac()
    print q, r.n(digits=10)
```
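The floating-point remainders above eventually degrade, but for a square root $\sqrt{d}$ there is a classical all-integer recurrence that produces the expansion exactly (a plain-Python sketch; `sqrt_cf` is our name for the helper):

```python
from math import isqrt

def sqrt_cf(d, n_terms):
    """First n_terms partial quotients of sqrt(d), computed exactly by the
    classical recurrence for quadratic surds (d must not be a perfect square)."""
    a0 = isqrt(d)
    m, den, a = 0, 1, a0
    terms = [a0]
    while len(terms) < n_terms:
        m = den * a - m
        den = (d - m * m) // den
        a = (a0 + m) // den
        terms.append(a)
    return terms

print(sqrt_cf(2, 10))   # [1, 2, 2, 2, ...]: the 2s repeat forever
print(sqrt_cf(3, 10))   # [1, 1, 2, 1, 2, ...]: the block 1, 2 repeats
```

The eventual periodicity seen here is no accident: by Lagrange's theorem, every quadratic irrational has an eventually periodic continued fraction.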

## Conjecture?

```
e15 = RDF(e)
print e15
q = [int(e15)]
r = e15.frac()
print q, r
n = 5
# n = 8
# n = 11
# n = 14
# n = 17
# n = 20
for k in range(n):
    f = 1/r
    q.append(int(f))
    r = f.frac()
    print q, r
```
```
e100 = R(e)
q = [int(e100)]
r = e100.frac()
print q, r
n = 26
for k in range(n):
    f = 1/r
    q.append(int(f))
    r = f.frac()
    print q, r.n(digits=10)
```
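To test a conjectured pattern exactly, we can feed the Euclidean algorithm an exact rational approximation of $e$: the factorial series summed with Python fractions is accurate to far more places than the first dozen partial quotients require (a plain-Python sketch, standard library only).

```python
from fractions import Fraction

# An exact rational approximation of e: 50 terms of sum 1/k!
# (the truncation error, below 1/50!, cannot disturb the early quotients)
approx_e = Fraction(0)
term = Fraction(1)
for k in range(50):
    approx_e += term
    term /= k + 1

# Euclidean algorithm, run exactly on the rational approximation
terms = []
x = approx_e
for _ in range(15):
    q = x.numerator // x.denominator
    terms.append(q)
    x = 1 / (x - q)

print(terms)   # the pattern 2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, ... emerges
```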

## Conjecture?

```
var('x, a0, a1, a2, a3, b1, b2, b3')
t2 = taylor(exp(x), x, 0, 2)
show(t2)
eqn = a0 + a1*x - expand((1 + b1*x)*t2)
show(eqn)
eqn.collect(x)
eq1 = eqn.coefficient(x,0) == 0
show(eq1)
eq2 = eqn.coefficient(x,1) == 0
show(eq2)
eq3 = eqn.coefficient(x,2) == 0
show(eq3)
cf_list = solve((eq1,eq2,eq3), a0, a1, b1, solution_dict=True)
cf = cf_list[0]
print cf
Pade_table = [[1]]
Pade_table.append([1+x])
Pade_table[0].append(1/(1-x))
Pade_table[1].append((cf[a0]+cf[a1]*x)/(1+cf[b1]*x))
print Pade_table
print 'Pade Table'
Pade = matrix(Pade_table)
print Pade
```
```
eqn = a0 - expand((1 + b1*x + b2*x^2)*t2)
show(eqn)
eqn.collect(x)
eq1 = eqn.coefficient(x,0) == 0
show(eq1)
eq2 = eqn.coefficient(x,1) == 0
show(eq2)
eq3 = eqn.coefficient(x,2) == 0
show(eq3)
cf_list = solve((eq1,eq2,eq3), a0, b1, b2, solution_dict=True)
cf = cf_list[0]
print cf
Pade_table[0].append(cf[a0]/(1+cf[b1]*x+cf[b2]*x^2))
```
```
t3 = taylor(exp(x), x, 0, 3)
print t3
eqn = a0 + a1*x + a2*x^2 - expand((1 + b1*x)*t3)
print eqn
eqn.collect(x)
eq1 = eqn.coefficient(x,0) == 0
print eq1
eq2 = eqn.coefficient(x,1) == 0
print eq2
eq3 = eqn.coefficient(x,2) == 0
print eq3
eq4 = eqn.coefficient(x,3) == 0
print eq4
cf_list = solve((eq1,eq2,eq3,eq4), a0, a1, a2, b1, solution_dict=True)
cf = cf_list[0]
print cf
Pade_table.append([t2])   # start row 2 with the degree-2 Taylor polynomial
Pade_table[2].append((cf[a0]+cf[a1]*x+cf[a2]*x^2)/(1+cf[b1]*x))
print Pade_table
#Pade = matrix(Pade_table)
#print Pade
```
```
eqn = a0 + a1*x - expand((1 + b1*x + b2*x^2)*t3)
print eqn
eqn.collect(x)
eq1 = eqn.coefficient(x,0) == 0
print eq1
eq2 = eqn.coefficient(x,1) == 0
print eq2
eq3 = eqn.coefficient(x,2) == 0
print eq3
eq4 = eqn.coefficient(x,3) == 0
print eq4
cf_list = solve((eq1,eq2,eq3,eq4), a0, a1, b1, b2, solution_dict=True)
cf = cf_list[0]
print cf
Pade_table[1].append((cf[a0]+cf[a1]*x)/(1+cf[b1]*x+cf[b2]*x^2))
print Pade_table
```
```
t4 = taylor(exp(x), x, 0, 4)
eqn = a0 + a1*x + a2*x^2 - expand((1 + b1*x + b2*x^2)*t4)
print eqn
eqn.collect(x)
eq1 = eqn.coefficient(x,0) == 0
print eq1
eq2 = eqn.coefficient(x,1) == 0
print eq2
eq3 = eqn.coefficient(x,2) == 0
print eq3
eq4 = eqn.coefficient(x,3) == 0
print eq4
eq5 = eqn.coefficient(x,4) == 0
print eq5
cf_list = solve((eq1,eq2,eq3,eq4,eq5), a0, a1, a2, b1, b2, solution_dict=True)
cf = cf_list[0]
print cf
Pade_table[2].append((cf[a0]+cf[a1]*x+cf[a2]*x^2)/(1+cf[b1]*x+cf[b2]*x^2))
print 'Pade Table'
Pade = matrix(Pade_table)
print Pade
```

The computer has its own view of what constitutes an attractive formula, but if we rewrite these Padé fractions to our own liking, we see the following.

The entries in the first column are just the Taylor polynomials.

The entries in the first row are the reciprocals of the Taylor polynomials with $x$ replaced by $-x$, which mirrors the property that $e^{-x} = 1/e^x$.

The entries on the main diagonal are

- $1$
- $(1 + \frac{x}{2})/(1 - \frac{x}{2})$
- $(1 + \frac{x}{2} + \frac{x^2}{12})/(1 - \frac{x}{2} + \frac{x^2}{12})$

```
print expand(t3*(1-(x/2))-(1+(x/2)))
print expand(t4*(1-(x/2)+(x^2/12))-(1+(x/2)+(x^2/12)))
```
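The diagonal entries can be cross-checked without a CAS by solving the same Padé equations in exact rational arithmetic. A plain-Python sketch (the helper `pade_exp` is ours): build the linear system for the denominator coefficients from the Taylor coefficients $c_k = 1/k!$, eliminate, and read off the numerator.

```python
from fractions import Fraction

def pade_exp(L, M):
    """Exact [L/M] Pade approximant of e^x from its Taylor coefficients.
    Returns (num, den) as coefficient lists, constant term first, den[0] = 1."""
    c = [Fraction(1)]
    for k in range(1, L + M + 1):
        c.append(c[-1] / k)                      # c[k] = 1/k!

    # Denominator conditions: sum_{j=0}^{M} b_j c_{L+i-j} = 0 for i = 1..M
    A = [[(c[L + i - j] if L + i - j >= 0 else Fraction(0))
          for j in range(1, M + 1)] for i in range(1, M + 1)]
    rhs = [-c[L + i] for i in range(1, M + 1)]

    # Gauss-Jordan elimination in exact rational arithmetic
    for col in range(M):
        piv = next(r for r in range(col, M) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(M):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [u - f * v for u, v in zip(A[r], A[col])]
                rhs[r] -= f * rhs[col]
    b = [Fraction(1)] + [rhs[r] / A[r][r] for r in range(M)]

    # Numerator coefficients fall out directly: a_i = sum_j b_j c_{i-j}
    a = [sum(b[j] * c[i - j] for j in range(min(i, M) + 1)) for i in range(L + 1)]
    return a, b

# [1/1] should give (1 + x/2)/(1 - x/2);
# [2/2] should give (1 + x/2 + x^2/12)/(1 - x/2 + x^2/12)
print(pade_exp(1, 1))
print(pade_exp(2, 2))
```

Both diagonal entries listed above are reproduced coefficient for coefficient, and the same routine extends to any $[L/M]$ entry of the table.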