Copyright (C) 2020 Andreas Kloeckner
import numpy as np

C = 1/2
e0 = 0.1*np.random.rand()
rate = 1

e = e0
for i in range(20):
    print(e)
    # error model: e_new = C * e_old**rate
    e = C*e**rate
What do you observe about the number of iterations it takes to decrease the error by a factor of 10 for rate=1, 2, and 3?
Is there a point to faster than cubic convergence?
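One way to make the observation concrete is to count the iterations each rate needs to push the error below a fixed tolerance. This is a sketch, not part of the original demo; the starting error and tolerance are arbitrary illustrative choices.

```python
# Count iterations until the error drops below tol, for each rate.
# Starting error 0.1 and tol 1e-12 are arbitrary illustrative choices.
C = 1/2
tol = 1e-12

for rate in [1, 2, 3]:
    e = 0.1
    niter = 0
    while e > tol:
        e = C*e**rate
        niter += 1
    print(f"rate={rate}: {niter} iterations to reach {tol}")
```

Linear convergence (rate=1) gains a fixed number of digits per iteration, so it needs many steps, while rate=2 and rate=3 roughly double or triple the number of correct digits each step and finish in a handful of iterations.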
Now let's see if we can figure out the convergence order from the data.
Here's a function that spits out some fake errors of a process that converges to $q$th order:
def make_rate_q_errors(q, e0, n=10, C=0.7):
    errors = []
    e = e0
    for i in range(n):
        errors.append(e)
        e = C*e**q
    return errors
errors = make_rate_q_errors(1, e0)
for e in errors:
    print(e)
for i in range(len(errors)-1):
    print(errors[i+1]/errors[i])
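The plain ratios above only settle to the constant $C$ when $q=1$. For higher orders, taking logarithms isolates $q$: from $e_{k+1} \approx C e_k^q$, the ratio of successive log-decreases, $\log(e_{k+1}/e_k)/\log(e_k/e_{k-1})$, equals $q$ exactly under this model. A possible sketch (using the same fake-error generator with $q=2$, and a short run so the errors do not underflow):

```python
import numpy as np

def make_rate_q_errors(q, e0, n=10, C=0.7):
    errors = []
    e = e0
    for i in range(n):
        errors.append(e)
        e = C*e**q
    return errors

# Estimate q from three consecutive errors:
#   q approx log(e_{k+1}/e_k) / log(e_k/e_{k-1})
errors = make_rate_q_errors(2, 0.1, n=6)
for i in range(1, len(errors)-1):
    q_est = np.log(errors[i+1]/errors[i]) / np.log(errors[i]/errors[i-1])
    print(q_est)
```

Each printed estimate should sit very close to 2, confirming the order can be recovered from the data alone.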