import numpy as np
import numpy.linalg as la
import scipy.linalg as spla
# Build a tall-and-skinny matrix with a nullspace: the row assignments
# below make the rows of A linearly dependent, so A is rank-deficient.
np.random.seed(12)
A = np.random.randn(6, 4)
b = np.random.randn(6)
A[3] = A[4] + A[5]
A[1] = A[5] + A[1]
A[2] = A[3] + A[1]
A[0] = A[3] + A[1]
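As a quick sanity check (an addition to the demo), we can confirm that A does not have full column rank:
la.matrix_rank(A)  # should come out less than 4, the number of columns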
Let's see how successfully we can solve the least-squares problem using QR when the matrix has a nullspace:
Q, R = la.qr(A)  # reduced QR: Q is 6x4, R is 4x4
R.round(3)
We can choose `x_qr[3]` as we please:
x_qr = np.zeros(A.shape[1])
x_qr[3] = 0
# Solve the leading 3x3 triangular system, moving the x_qr[3] column of R
# to the right-hand side.
QTbnew = Q.T.dot(b)[:3] - R[:3, 3] * x_qr[3]
x_qr[:3] = spla.solve_triangular(R[:3, :3], QTbnew, lower=False)
Let's take a look at the residual norm and the norm of `x_qr`:
R.dot(x_qr)-Q.T.dot(b)[:4]
la.norm(A.dot(x_qr)-b, 2)
la.norm(x_qr, 2)
Choose a different `x_qr[3]` and compare the residual and the norm of `x_qr`.
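For example (an illustrative choice; the value 10 is arbitrary and not part of the original demo), the residual norm should stay essentially the same while the norm of the solution grows:
x_qr2 = np.zeros(A.shape[1])
x_qr2[3] = 10  # arbitrary value for the free component
x_qr2[:3] = spla.solve_triangular(
    R[:3, :3], Q.T.dot(b)[:3] - R[:3, 3] * x_qr2[3], lower=False)
la.norm(A.dot(x_qr2) - b, 2), la.norm(x_qr2, 2)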
Now compute the SVD of $A$:
U, sigma, VT = la.svd(A)
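The singular values themselves show the rank deficiency (an added peek, not part of the original demo): the smallest one should be essentially zero.
sigma.round(6)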
Make a matrix `Sigma` of the correct size:
Sigma = np.zeros(A.shape)
Sigma[:4,:4] = np.diag(sigma)
And check that we've actually factorized `A`:
(U.dot(Sigma).dot(VT) - A).round(4)
Now define `Sigma_pinv` as the "pseudo-"inverse of `Sigma`, where "pseudo" means "don't divide by zero":
Sigma_pinv = np.zeros(A.shape).T
Sigma_pinv[:3, :3] = np.diag(1/sigma[:3])  # invert only the three nonzero singular values
Sigma_pinv.round(3)
Now compute the SVD-based solution for the least-squares problem:
x_svd = VT.T.dot(Sigma_pinv).dot(U.T).dot(b)
la.norm(A.dot(x_svd)-b, 2)
la.norm(x_svd)
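As a final sanity check (an addition, not part of the original demo), this minimum-norm solution should agree with numpy's built-in least-squares solver, which also returns the minimum-norm solution for rank-deficient problems:
x_np, res, rank, sv = la.lstsq(A, b, rcond=None)
la.norm(x_svd - x_np, 2)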