# Orthogonal Iteration

In [62]:
import numpy as np
import numpy.linalg as la


Let's make a matrix with given eigenvalues:

In [3]:
n = 5

np.random.seed(70)
eigvecs = np.random.randn(n, n)
eigvals = np.sort(np.random.randn(n))

A = np.dot(la.solve(eigvecs, np.diag(eigvals)), eigvecs)
print(eigvals)

[-1.3657822  -0.78460489 -0.08829521  0.30824369  0.52110266]
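
As a quick sanity check (not in the original notes): $A = V^{-1} D V$ is similar to $D$, so it should have exactly the prescribed eigenvalues. A minimal, self-contained sketch reproducing the setup above:

```python
import numpy as np
import numpy.linalg as la

# Rebuild A exactly as above
n = 5
np.random.seed(70)
eigvecs = np.random.randn(n, n)
eigvals = np.sort(np.random.randn(n))
A = np.dot(la.solve(eigvecs, np.diag(eigvals)), eigvecs)

# A is similar to diag(eigvals), so its spectrum should match
print(np.sort(la.eigvals(A).real))
```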



Let's make an array of iteration vectors:

In [4]:
X = np.random.randn(n, n)


Next, implement orthogonal iteration:

• Orthogonalize.
• Apply $A$.
• Repeat.

Run this cell in-place (Ctrl-Enter) many times.

In [64]:
Q, R = la.qr(X)   # orthogonalize the iteration vectors
X = np.dot(A, Q)  # apply A
print(Q)

[[-0.36366705 -0.24462159  0.39052837  0.06435026 -0.8070026 ]
[-0.42943052 -0.63956476  0.08336455 -0.49910706  0.3879289 ]
[-0.040959   -0.42224061 -0.83670032  0.25124759 -0.23841653]
[ 0.80079022 -0.35639657  0.08781549 -0.40630442 -0.24273785]
[-0.20098031  0.47519633 -0.36436102 -0.72009898 -0.28721745]]
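
Re-running the cell by hand is instructive, but the same process can be written as a loop. A self-contained sketch, reproducing the setup above; the iteration count of 200 is an arbitrary choice that is more than enough for this well-separated spectrum:

```python
import numpy as np
import numpy.linalg as la

# Same setup as above
n = 5
np.random.seed(70)
eigvecs = np.random.randn(n, n)
eigvals = np.sort(np.random.randn(n))
A = np.dot(la.solve(eigvecs, np.diag(eigvals)), eigvecs)

X = np.random.randn(n, n)

# Orthogonal iteration: orthogonalize, apply A, repeat
for i in range(200):
    Q, R = la.qr(X)
    X = np.dot(A, Q)

# Residual of the (approximate) Schur factorization A = Q R Q^T
print(la.norm(np.dot(np.dot(Q, R), Q.T) - A))
```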



Now check that the (hopefully) converged $Q$ actually leads to Schur form, i.e. that $A \approx Q R Q^T$ with $R$ upper triangular:

In [67]:
la.norm(np.dot(np.dot(Q, R), Q.T) - A)

Out[67]:
5.2990176465002692e-10
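
An equivalent check, stated more directly: $Q^T A Q$ should be (nearly) upper triangular, so its strictly lower triangle should be tiny. A sketch that re-runs the setup and iteration so the cell stands on its own:

```python
import numpy as np
import numpy.linalg as la

n = 5
np.random.seed(70)
eigvecs = np.random.randn(n, n)
eigvals = np.sort(np.random.randn(n))
A = np.dot(la.solve(eigvecs, np.diag(eigvals)), eigvecs)

X = np.random.randn(n, n)
for i in range(200):
    Q, R = la.qr(X)
    X = np.dot(A, Q)

# At convergence Q^T A Q ~ R, so its strictly lower triangle vanishes
T = np.dot(Q.T, np.dot(A, Q))
print(la.norm(np.tril(T, -1)))
```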


Do the eigenvalues match? They should show up on the diagonal of $R$, ordered by decreasing magnitude:

In [68]:
R

Out[68]:
array([[-1.3657822 ,  0.08740819,  3.037871  , -0.26244477, -0.29760524],
[ 0.        , -0.78460489, -0.45833238, -1.38868012,  0.24515348],
[ 0.        ,  0.        ,  0.52110266, -0.15202445,  0.07378812],
[-0.        ,  0.        , -0.        ,  0.30824369,  0.17685701],
[ 0.        , -0.        ,  0.        ,  0.        , -0.08829521]])
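
To make the comparison explicit, the diagonal of $R$ can be checked against the prescribed eigenvalues. A self-contained sketch, again rebuilding the setup and iterating:

```python
import numpy as np
import numpy.linalg as la

n = 5
np.random.seed(70)
eigvecs = np.random.randn(n, n)
eigvals = np.sort(np.random.randn(n))
A = np.dot(la.solve(eigvecs, np.diag(eigvals)), eigvecs)

X = np.random.randn(n, n)
for i in range(200):
    Q, R = la.qr(X)
    X = np.dot(A, Q)

# diag(R) holds the eigenvalues; sort both to compare
print(np.sort(np.diag(R)))
print(eigvals)
```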


What are possible flaws in this plan?

• Will this always converge?
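
Not necessarily. As a hint (this example goes beyond the notes above): a real matrix with a complex-conjugate eigenvalue pair of equal magnitude, such as a plane rotation, cannot be triangularized by a real orthogonal $Q$; the subdiagonal entry of $Q^T A Q$ stalls at $\sin\theta$ instead of decaying. The angle $\theta = 0.3$ and the iteration count are illustrative choices:

```python
import numpy as np
import numpy.linalg as la

# A rotation by theta has eigenvalues exp(+-i*theta): a complex pair of
# equal magnitude, so no real triangular Schur form exists (only a 2x2 block).
theta = 0.3
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

np.random.seed(1)
X = np.random.randn(2, 2)
for i in range(500):
    Q, R = la.qr(X)
    X = np.dot(A, Q)

# The subdiagonal entry never goes to zero
T = np.dot(Q.T, np.dot(A, Q))
print(abs(T[1, 0]))  # stays at sin(theta) ~ 0.2955
```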