Keeping track of coefficients in Gram-Schmidt

In [38]:
import numpy as np
import numpy.linalg as la
In [39]:
A = np.random.randn(3, 3)

Let's start from regular old (modified) Gram-Schmidt:

In [57]:
Q = np.zeros(A.shape)

q = A[:, 0]
Q[:, 0] = q/la.norm(q)

# -----------

q = A[:, 1]
coeff = np.dot(Q[:, 0], q)
q = q - coeff*Q[:, 0]
Q[:, 1] = q/la.norm(q)

# -----------

q = A[:, 2]
coeff = np.dot(Q[:, 0], q)
q = q - coeff*Q[:, 0]
coeff = np.dot(Q[:, 1], q)
q = q - coeff*Q[:, 1]
Q[:, 2] = q/la.norm(q)
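
Unrolled like this, the pattern is easy to see. As a loop over columns, the same computation might look like the sketch below (assuming, as here, that the columns of A are linearly independent, so none of the norms vanish):

In [ ]:
# Same modified Gram-Schmidt, written as a loop over columns.
Q = np.zeros(A.shape)
for j in range(A.shape[1]):
    q = A[:, j]
    for i in range(j):
        # subtract the component of q along the already-orthonormalized column i
        q = q - np.dot(Q[:, i], q) * Q[:, i]
    Q[:, j] = q/la.norm(q)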
In [52]:
Q.dot(Q.T)
Out[52]:
array([[  1.00000000e+00,   8.88178420e-16,   3.05311332e-16],
       [  8.88178420e-16,   1.00000000e+00,   0.00000000e+00],
       [  3.05311332e-16,   0.00000000e+00,   1.00000000e+00]])
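
Up to rounding, $QQ^T$ is the identity, i.e. the columns of $Q$ are orthonormal. A quick way to check that programmatically:

In [ ]:
np.allclose(Q.dot(Q.T), np.eye(A.shape[0]))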

Now we want to keep track of which multiple of which vector got subtracted from which column, in the style of an elimination matrix.

Let's call that matrix $R$.

  • Would it be $A=QR$ or $A=RQ$? Why?
  • Where are $R$'s nonzeros?
In [56]:
R = np.zeros((A.shape[0], A.shape[0]))
In [54]:
Q = np.zeros(A.shape)

q = A[:, 0]
Q[:, 0] = q/la.norm(q)

R[0,0] = la.norm(q)

# -----------

q = A[:, 1]
coeff = np.dot(Q[:, 0], q)
R[0,1] = coeff
q = q - coeff*Q[:, 0]
Q[:, 1] = q/la.norm(q)

R[1,1] = la.norm(q)

# -----------

q = A[:, 2]
coeff = np.dot(Q[:, 0], q)
R[0,2] = coeff
q = q - coeff*Q[:, 0]
coeff = np.dot(Q[:, 1], q)
R[1,2] = coeff
q = q - coeff*Q[:, 1]
Q[:, 2] = q/la.norm(q)

R[2,2] = la.norm(q)
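
Extending the loop version from above so that it also records the coefficients (again a sketch, under the same assumption of linearly independent columns):

In [ ]:
# Modified Gram-Schmidt that also records the coefficients in R.
Q = np.zeros(A.shape)
R = np.zeros((A.shape[1], A.shape[1]))
for j in range(A.shape[1]):
    q = A[:, j]
    for i in range(j):
        R[i, j] = np.dot(Q[:, i], q)   # coefficient of Q[:, i] in column j of A
        q = q - R[i, j]*Q[:, i]
    R[j, j] = la.norm(q)               # normalization factor goes on the diagonal
    Q[:, j] = q/R[j, j]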
In [55]:
R
Out[55]:
array([[ 0.77134813,  0.22468956, -2.09753371],
       [ 0.        ,  1.92179208,  1.24545523],
       [ 0.        ,  0.        ,  0.56553203]])
In [48]:
la.norm(Q.dot(R) - A)
Out[48]:
2.7755575615628914e-17

This is called QR factorization.
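
numpy has a built-in QR factorization, np.linalg.qr, that we can compare against. (The library's factors may differ from ours by sign flips of whole columns of $Q$ and rows of $R$, since the signs on the diagonal of $R$ are not uniquely determined.)

In [ ]:
Q2, R2 = la.qr(A)
print(la.norm(Q2.dot(R2) - A))          # the library's factors also satisfy A = Q2 R2
print(la.norm(np.abs(R2) - np.abs(R)))  # up to row signs, R2 matches the R we built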


  • When does it break?
  • Does it need something like pivoting?
  • Can we use it for something? (One possibility is sketched below.)
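
One possible use, as a sketch: once $A = QR$, solving $Ax = b$ reduces to $QRx = b$, i.e. $Rx = Q^T b$, and a triangular system can be solved by back substitution. (The right-hand side b below is just made up for illustration.)

In [ ]:
b = np.random.randn(3)

# Solve R x = Q^T b by back substitution, working from the last row up.
y = Q.T.dot(b)
x = np.zeros(3)
for i in range(2, -1, -1):
    x[i] = (y[i] - np.dot(R[i, i+1:], x[i+1:])) / R[i, i]

la.norm(A.dot(x) - b)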