import numpy as np
import numpy.linalg as la
Let's prepare a matrix with some random or deliberately chosen eigenvalues:
n = 6
if 1:
    np.random.seed(70)
    eigvecs = np.random.randn(n, n)
    eigvals = np.sort(np.random.randn(n))
    # Uncomment for near-duplicate largest-magnitude eigenvalue
    # eigvals[1] = eigvals[0] + 1e-3
    A = eigvecs.dot(np.diag(eigvals)).dot(la.inv(eigvecs))
    print(eigvals)
else:
    # Complex eigenvalues
    np.random.seed(40)
    A = np.random.randn(n, n)
    print(la.eig(A)[0])
Let's also pick an initial vector:
x0 = np.random.randn(n)
x0
x = x0
Now implement plain power iteration.
Run the cell below in-place (Ctrl-Enter) many times.
x = np.dot(A, x)
x
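Equivalently, instead of re-running the cell by hand, the repeated multiplication can be wrapped in a loop. Here is a sketch using the same matrix recipe as above (seed 70, first branch); the iteration count of 300 is an arbitrary choice:

```python
import numpy as np
import numpy.linalg as la

# Same matrix recipe as above (seed 70, n = 6).
np.random.seed(70)
n = 6
eigvecs = np.random.randn(n, n)
eigvals = np.sort(np.random.randn(n))
A = eigvecs.dot(np.diag(eigvals)).dot(la.inv(eigvecs))

x = np.random.randn(n)  # plays the role of x0
for i in range(300):
    x = A.dot(x)

# x now points (up to sign) along the eigenvector belonging to the
# largest-magnitude eigenvalue, but its length grows or shrinks
# geometrically with each step.
xh = x/la.norm(x)
print(xh)
```

Note that only the normalized direction `xh` is meaningful here: the raw iterate's norm behaves like |lambda_max|^k, which is exactly why the normalized variant below is preferable in practice.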
Back to the beginning: Reset to the initial vector.
x = x0/la.norm(x0)
Implement normalized power iteration.
Run this cell in-place (Ctrl-Enter) many times.
x = np.dot(A, x)
nrm = la.norm(x)
x = x/nrm
print(nrm)
print(x)
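The same cell, wrapped in a loop, makes the convergence of the norm estimate visible. A sketch, again assuming the seed-70 matrix from above; `nrm` should approach the magnitude of the dominant eigenvalue:

```python
import numpy as np
import numpy.linalg as la

np.random.seed(70)
n = 6
eigvecs = np.random.randn(n, n)
eigvals = np.sort(np.random.randn(n))
A = eigvecs.dot(np.diag(eigvals)).dot(la.inv(eigvecs))

x = np.random.randn(n)
x = x/la.norm(x)
for i in range(300):
    x = A.dot(x)
    nrm = la.norm(x)
    x = x/nrm

# nrm approximates |lambda_max|; x the corresponding eigenvector.
print(nrm, np.max(np.abs(eigvals)))
```

Normalizing each step keeps the iterate at unit length, so no overflow or underflow can occur no matter how many steps are taken.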
Extensions:
What if we want the smallest eigenvalue (by magnitude)?
Once again, reset to the beginning.
x = x0/la.norm(x0)
Run the cell below in-place many times.
x = la.solve(A, x)
nrm = la.norm(x)
x = x/nrm
print(1/nrm)
print(x)
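Replacing the matrix-vector product by a linear solve amounts to power iteration with A^{-1}, whose eigenvalues are 1/lambda. Its dominant eigenvalue therefore corresponds to the smallest-magnitude eigenvalue of A. A loop sketch under the same assumptions as before:

```python
import numpy as np
import numpy.linalg as la

np.random.seed(70)
n = 6
eigvecs = np.random.randn(n, n)
eigvals = np.sort(np.random.randn(n))
A = eigvecs.dot(np.diag(eigvals)).dot(la.inv(eigvecs))

x = np.random.randn(n)
x = x/la.norm(x)
for i in range(300):
    x = la.solve(A, x)   # one step of power iteration with A^{-1}
    nrm = la.norm(x)
    x = x/nrm

# nrm estimates 1/|lambda_min|, so 1/nrm estimates |lambda_min|.
print(1/nrm, np.min(np.abs(eigvals)))
```

In a production code one would factor A once (e.g. an LU factorization) and reuse the factors in every iteration, rather than calling `la.solve` repeatedly.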
Can we feed an estimate of the current approximate eigenvalue back into the calculation? (Hint: Rayleigh quotient)
Reset once more.
x = x0/la.norm(x0)
Run this cell in-place (Ctrl-Enter) many times.
sigma = np.dot(x, np.dot(A, x))/np.dot(x, x)
x = la.solve(A-sigma*np.eye(n), x)
x = x/la.norm(x)
print(sigma)
print(x)
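This is Rayleigh quotient iteration: the shift sigma = (x, Ax)/(x, x) is the best available eigenvalue estimate for the current iterate, and re-solving with the shifted matrix A - sigma*I sharpens both the eigenvector and the shift at once, giving very fast (locally at least quadratic) convergence. A loop sketch, again with the seed-70 matrix; which eigenvalue it converges to depends on the initial vector, and the `try/except` guard is an assumption added here for when the shifted matrix becomes numerically singular:

```python
import numpy as np
import numpy.linalg as la

np.random.seed(70)
n = 6
eigvecs = np.random.randn(n, n)
eigvals = np.sort(np.random.randn(n))
A = eigvecs.dot(np.diag(eigvals)).dot(la.inv(eigvecs))

x = np.random.randn(n)
x = x/la.norm(x)
for i in range(15):
    # Rayleigh quotient: eigenvalue estimate for the current x
    sigma = x.dot(A.dot(x))/x.dot(x)
    try:
        y = la.solve(A - sigma*np.eye(n), x)
    except la.LinAlgError:
        break   # shifted matrix numerically singular: sigma has converged
    x = y/la.norm(y)

print(sigma)
```

The price of the faster convergence is that the shift changes every iteration, so the shifted matrix must be re-factored at each step, unlike plain inverse iteration with a fixed shift.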