Lecture 21 Math 221 

Orthogonally Similar Matrices.   

We have discussed this topic at different times when we looked at the eigenvalues and eigenvectors of a certain type of matrix.  Do you recall the type of matrix?  If you came up with a symmetric matrix, you would be correct.  If the matrix is not symmetric it may still be diagonalizable, but do not try to find a set of orthonormal eigenvectors, i.e. do not try to build a Q.  You must live with a P matrix.  You may ask why worry about it.  One good reason is that we do not have to go to the work of finding an inverse of P.  The computer can always find the inverse, but if the matrix is very large then you will have roundoff errors in finding it.  If you find an orthogonal matrix Q, then the inverse is just the transpose: Q^(-1) = Q^%T.  There are many cases when you can build a symmetric Q, and in this case Q is its own inverse.  You must remember that an orthogonal matrix is a matrix whose columns are orthonormal, not just orthogonal.   
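The claim that the transpose of an orthogonal matrix is its inverse is easy to check numerically.  A minimal Python sketch (illustrative only; the worksheet itself is in Maple) builds a 2 x 2 matrix with orthonormal columns and verifies that Q^%T.Q gives the identity:

```python
from math import sqrt

def transpose(M):
    """Return the transpose of a matrix stored as a list of rows."""
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# An orthogonal matrix: columns are orthonormal (unit length AND mutually
# perpendicular).  For such a Q, Q^T * Q is the identity, so Q^T = Q^(-1).
s = 1 / sqrt(2)
Q = [[s,  s],
     [s, -s]]

QtQ = matmul(transpose(Q), Q)
print(QtQ)  # the 2 x 2 identity, up to floating-point roundoff
```

Note that this particular Q is also symmetric, so it is its own inverse, just as described above.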

 

We will look at several different symmetric matrices at this time. 

Example 1 

> with(LinearAlgebra):
 

 

 

> A:=<<2,1>|<1,2>>;
 

A := Matrix([[2, 1], [1, 2]])
 

> I2:=IdentityMatrix(2);
 

I2 := Matrix([[1, 0], [0, 1]])
 

> AlI:=A-lambda*I2;
 

AlI := Matrix([[2-lambda, 1], [1, 2-lambda]])
 

> p:=Determinant(AlI);
 

p := 3 - 4*lambda + lambda^2
 

> factor(p);
 

(lambda - 1)*(lambda - 3)
 

> A1:=subs(lambda=1,AlI);
 

A1 := Matrix([[1, 1], [1, 1]])
 

The nullspace of A1 is then spanned by the vector [-1, 1]. 

> p1:=NullSpace(A1)[1];
 

p1 := Vector([-1, 1])   (1.1)
 

> A2:=subs(lambda=3,AlI);
 

A2 := Matrix([[-1, 1], [1, -1]])
 

We can see that the null space of A2 is spanned by the vector [1, 1]. 


> p2:=NullSpace(A2)[1];
 

p2 := Vector([1, 1])
 

> q1:=Normalize(p1,2);
 

q1 := Vector([-sqrt(2)/2, sqrt(2)/2])
 

The 2 in the above command selects the 2-norm, i.e. the Euclidean norm.   

> q2:=Normalize(p2,2);
 

q2 := Vector([sqrt(2)/2, sqrt(2)/2])
 

> Q:=<q1|q2>;
 

Q := Matrix([[-sqrt(2)/2, sqrt(2)/2], [sqrt(2)/2, sqrt(2)/2]])
 

> Q^%T;
 

Matrix([[-sqrt(2)/2, sqrt(2)/2], [sqrt(2)/2, sqrt(2)/2]])
 

Q is symmetric, so we do not need to take the transpose of Q.  If Q were not symmetric, you would need the transpose as the first matrix in the product.   

> Q.A.Q;
 

Matrix([[1, 0], [0, 3]])
 

> Q^%T.A.Q;
 

Matrix([[1, 0], [0, 3]])
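The whole of Example 1 can be replayed outside Maple.  The Python sketch below (illustrative; it reuses the nullspace generators found above rather than computing them) normalizes the eigenvectors, assembles Q, and confirms that Q^%T.A.Q is diagonal with the eigenvalues 1 and 3 on the diagonal:

```python
from math import sqrt

# Mirror of the Maple steps for A = [[2, 1], [1, 2]]: eigenvalues 1 and 3
# (roots of lambda^2 - 4*lambda + 3), eigenvectors taken from the
# nullspaces of A - 1*I and A - 3*I as computed in the worksheet.
A = [[2, 1], [1, 2]]
p1 = [-1, 1]   # nullspace generator of A - 1*I = [[1, 1], [1, 1]]
p2 = [1, 1]    # nullspace generator of A - 3*I = [[-1, 1], [1, -1]]

def normalize(v):
    """Scale v to unit length in the Euclidean (2-) norm."""
    n = sqrt(sum(x * x for x in v))
    return [x / n for x in v]

q1, q2 = normalize(p1), normalize(p2)
# Q has the orthonormal eigenvectors as its columns.
Q = [[q1[0], q2[0]], [q1[1], q2[1]]]
Qt = [list(c) for c in zip(*Q)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

D = matmul(matmul(Qt, A), Q)
print(D)  # approximately [[1, 0], [0, 3]], up to roundoff
```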
 

Example 2 

 

> B:=<<6,1>|<1,4>>;
 

B := Matrix([[6, 1], [1, 4]])
 

> pp:=Determinant(B-lambda*I2);
 

pp := 23 - 10*lambda + lambda^2
 

> eg:=solve(pp=0);
 

5 + sqrt(2), 5 - sqrt(2)
 

> subs(lambda=eg[1],B-lambda*I2);
 

Matrix([[1-sqrt(2), 1], [1, -1-sqrt(2)]])
 

> GaussianElimination(%);
 

Matrix([[1-sqrt(2), 1], [0, 0]])
 

> subs(lambda=eg[2],B-lambda*I2);
 

Matrix([[1+sqrt(2), 1], [1, -1+sqrt(2)]])
 

> GaussianElimination(%);
 

Matrix([[1+sqrt(2), 1], [0, 0]])
 

> Bev:=Eigenvectors(B,output='list');
 

Bev := [[5+sqrt(2), 1, {Vector([1+sqrt(2), 1])}], [5-sqrt(2), 1, {Vector([1-sqrt(2), 1])}]]
 

> p1:=Bev[1][3][1];
 

p1 := Vector([1+sqrt(2), 1])
 

> p2:=Bev[2][3][1];
 

p2 := Vector([1-sqrt(2), 1])
 

> simplify(p1^%T.p2);
 

0
 

> q1:=evalf(Normalize(p1,2));
 

q1 := Vector([0.9238795325, 0.3826834324])
 

> q2:=evalf(Normalize(p2,2));
 

q2 := Vector([-0.3826834324, 0.9238795325])
 

The evalf commands were used here because the symbolic representation is really messy.   

> Q:=<q1|q2>;
 

Q := Matrix([[0.9238795325, -0.3826834324], [0.3826834324, 0.9238795325]])
 

> Q^%T.B.Q;
 

Matrix([[6.414213562, ~0], [~0, 3.585786438]])
 

You need to realize that this is a diagonal matrix: the off-diagonal entries are to be considered zero.  They are nonzero only because of round-off error when working on the computer. 

> evalf(5+sqrt(2));
 

6.414213562
 

> evalf(5-sqrt(2));
 

3.585786438
 

The last two commands show that the diagonal entries of the matrix Q^%T.B.Q are in fact the eigenvalues of the matrix B.   
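The roundoff effect is easy to see in any floating-point system, not just Maple.  The Python sketch below (illustrative; the exact eigenvectors [1+sqrt(2), 1] and [1-sqrt(2), 1] are taken from the work above) diagonalizes B numerically and checks that the off-diagonal entries are only roundoff noise:

```python
from math import sqrt

# Numeric version of Example 2: B = [[6, 1], [1, 4]] has eigenvalues
# 5 + sqrt(2) and 5 - sqrt(2); the exact eigenvectors are orthogonal
# because B is symmetric.
B = [[6, 1], [1, 4]]
p1 = [1 + sqrt(2), 1]
p2 = [1 - sqrt(2), 1]

def normalize(v):
    n = sqrt(sum(x * x for x in v))
    return [x / n for x in v]

q1, q2 = normalize(p1), normalize(p2)
Q = [[q1[0], q2[0]], [q1[1], q2[1]]]
Qt = [list(c) for c in zip(*Q)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

D = matmul(matmul(Qt, B), Q)
# The diagonal recovers the eigenvalues; the off-diagonal entries are not
# exactly zero in floating point, only zero up to roundoff.
print(D[0][0], D[1][1])      # approximately 6.4142... and 3.5857...
print(abs(D[0][1]) < 1e-9)   # True: the off-diagonal entry is roundoff noise
```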

Example 3 

> E:=<<2,-1,-1>|<-1,2,-1>|<-1,-1,2>>;
 

E := Matrix([[2, -1, -1], [-1, 2, -1], [-1, -1, 2]])
 

> I3:=IdentityMatrix(3);
 

I3 := Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
 

> EI3:=E-lambda*I3;
 

EI3 := Matrix([[2-lambda, -1, -1], [-1, 2-lambda, -1], [-1, -1, 2-lambda]])
 

> p:=Determinant(EI3);
 

p := -9*lambda + 6*lambda^2 - lambda^3
 

> p1:=factor(p);
 

p1 := -lambda*(lambda - 3)^2
 

> EigE:=solve(p1=0);
 

0, 3, 3
 

> Eig3:=subs(lambda=3,EI3);
 

Eig3 := Matrix([[-1, -1, -1], [-1, -1, -1], [-1, -1, -1]])
 

> N1:=NullSpace(Eig3);
 

N1 := {Vector([-1, 1, 0]), Vector([-1, 0, 1])}
 

> Eig4:=subs(lambda=0,EI3);
 

Eig4 := Matrix([[2, -1, -1], [-1, 2, -1], [-1, -1, 2]])
 

> ReducedRowEchelonForm(Eig4);
 

Matrix([[1, 0, -1], [0, 1, -1], [0, 0, 0]])
 

From the reduced row echelon form we can see that the components of a nullspace vector must all be equal, so we choose the generator of the null space to be the vector of all ones.  We check the result with the NullSpace command. 

> n3:=NullSpace(Eig4);
 

n3 := {Vector([1, 1, 1])}
 

We will check our work using the Eigenvectors command. 

> Eigenvectors(E, output='list');
 

[[0, 1, {Vector([1, 1, 1])}], [3, 2, {Vector([-1, 1, 0]), Vector([-1, 0, 1])}]]
 

Notice that the first vector is orthogonal to the last two, but the last two are not orthogonal to each other.  This does not mean that we cannot get a set of orthogonal vectors; we just have to do a little more work.  We will put the last two vectors through the Gram-Schmidt algorithm.  Look back at the entry N1; we will use that one. 

> ES:=GramSchmidt([N1[1],N1[2]],normalized);
 

ES := [Vector([-sqrt(2)/2, sqrt(2)/2, 0]), Vector([-sqrt(6)/6, -sqrt(6)/6, sqrt(6)/3])]
 

> q1:=Normalize(n3[1],2);
 

q1 := Vector([sqrt(3)/3, sqrt(3)/3, sqrt(3)/3])
 

> q2:=ES[1];
 

q2 := Vector([-sqrt(2)/2, sqrt(2)/2, 0])
 

> q3:=ES[2];
 

q3 := Vector([-sqrt(6)/6, -sqrt(6)/6, sqrt(6)/3])
 

> Q:=<q1|q2|q3>;
 

Q := Matrix([[sqrt(3)/3, -sqrt(2)/2, -sqrt(6)/6], [sqrt(3)/3, sqrt(2)/2, -sqrt(6)/6], [sqrt(3)/3, 0, sqrt(6)/3]])
 

> Q^%T.Q;
 

Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
 

> Q^%T.E.Q;
 

Matrix([[0, 0, 0], [0, 3, 0], [0, 0, 3]])
 

This example illustrates that you must always check that the eigenvectors you come up with are orthogonal.  You can always obtain a set of orthogonal eigenvectors if you have a symmetric matrix.  If the matrix is not symmetric, do not try to get an orthogonal set of eigenvectors and do not normalize the vectors.  The only time to normalize a set of eigenvectors is when you have a symmetric matrix and you have obtained an orthogonal set of eigenvectors.   
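The Gram-Schmidt step that fixed the repeated eigenvalue 3 is a single projection.  A small Python sketch of it (illustrative, using the basis vectors from N1 above):

```python
# Example 3 in Python: for E below, the eigenvalue 3 has the two
# non-orthogonal eigenvectors [-1, 1, 0] and [-1, 0, 1]; one Gram-Schmidt
# step makes them orthogonal, and [1, 1, 1] spans the nullspace for
# eigenvalue 0.
E = [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]
v1, v2 = [-1, 1, 0], [-1, 0, 1]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Gram-Schmidt: subtract from v2 its projection onto v1.
coef = dot(v2, v1) / dot(v1, v1)
v2o = [b - coef * a for a, b in zip(v1, v2)]

print(dot(v1, v2))          # 1 -> the raw eigenvectors are NOT orthogonal
print(dot(v1, v2o))         # 0.0 -> orthogonal after Gram-Schmidt
print(dot(v2o, [1, 1, 1]))  # 0.0 -> still orthogonal to the 0-eigenvector
```

Since v2o is a combination of eigenvectors for the eigenvalue 3, it is still an eigenvector for 3; Gram-Schmidt does not leave the eigenspace.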

 

Example 4  

 

 

> A:=<<1,-2,2>|<-2,1,2>|<2,2,1>>;
 

A := Matrix([[1, -2, 2], [-2, 1, 2], [2, 2, 1]]) (4.1)
 

> EVA1:=Eigenvectors(A,output='list');
 

EVA1 := [[3, 2, {Vector([-1, 1, 0]), Vector([1, 0, 1])}], [-3, 1, {Vector([1, 1, -1])}]] (4.2)
 

> p1:=EVA1[1][3][1];
 

p1 := Vector([-1, 1, 0]) (4.3)
 

> p2:=EVA1[1][3][2];
 

p2 := Vector([1, 0, 1]) (4.4)
 

> p3:=EVA1[2][3][1];
 

p3 := Vector([1, 1, -1]) (4.5)
 

Notice that p1 and p2 are not orthogonal to each other even though they are eigenvectors of a symmetric matrix; they belong to the same repeated eigenvalue.  What do we do?  The magic words are Gram-Schmidt, applied to these two vectors.  Why not to all three?  That would be too much work: you only need an orthogonal set for the repeated eigenvalue, since p3 is already orthogonal to both p1 and p2.  We will do it out the long way here rather than use the GramSchmidt command.   

 

 

> p2a:=p2-DotProduct(p2,p1)/DotProduct(p1,p1)*p1;
 

p2a := Vector([1/2, 1/2, 1]) (4.6)
 

> q1:=Normalize(p1,2); q2:=Normalize(p2a,2); q3:=Normalize(p3,2);
 

 

 

q1 := Vector([-sqrt(2)/2, sqrt(2)/2, 0])
q2 := Vector([sqrt(6)/6, sqrt(6)/6, sqrt(6)/3])
q3 := Vector([sqrt(3)/3, sqrt(3)/3, -sqrt(3)/3]) (4.7)
 

> Q:=<q1|q2|q3>;
 

Q := Matrix([[-sqrt(2)/2, sqrt(6)/6, sqrt(3)/3], [sqrt(2)/2, sqrt(6)/6, sqrt(3)/3], [0, sqrt(6)/3, -sqrt(3)/3]]) (4.8)
 

> Da:=Q^%T.A.Q;
 

Da := Matrix([[3, 0, 0], [0, 3, 0], [0, 0, -3]]) (4.9)
 

The above problem is Example 3.38 of your text, but it was included to make sure you look at the case where a set of eigenvectors for a given eigenvalue is not orthogonal, even though you know an orthogonal set must exist because the matrix is symmetric.  Again, only try to find a set of orthogonal eigenvectors to diagonalize a matrix when it is symmetric.  A little later we will look at what happens if we take a set of eigenvectors for a non-symmetric matrix and make them into a set of orthogonal vectors.  As you might expect, we will not get a diagonal matrix, but there is interest in this technique when there is not a full set of eigenvectors for a given eigenvalue.  This happens when the algebraic multiplicity of an eigenvalue is not equal to its geometric multiplicity.   
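Example 4 can be checked end to end in exact arithmetic.  The Python sketch below (illustrative, using the eigenvectors found above) performs the single projection step from the worksheet with exact fractions, then verifies that Q^%T.A.Q is diagonal:

```python
from fractions import Fraction
from math import sqrt

# Example 4: orthogonalize p2 against p1 (both eigenvectors for the
# repeated eigenvalue 3 of the symmetric matrix A) with one projection
# step, then verify Q^T * A * Q is diagonal.
A = [[1, -2, 2], [-2, 1, 2], [2, 2, 1]]
p1 = [-1, 1, 0]   # eigenvector for lambda = 3
p2 = [1, 0, 1]    # second eigenvector for lambda = 3, not orthogonal to p1
p3 = [1, 1, -1]   # eigenvector for lambda = -3

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# p2a := p2 - (p2 . p1)/(p1 . p1) * p1, exactly as in the worksheet.
c = Fraction(dot(p2, p1), dot(p1, p1))
p2a = [x - c * y for x, y in zip(p2, p1)]
print(p2a)  # [Fraction(1, 2), Fraction(1, 2), Fraction(1, 1)]

def normalize(v):
    n = sqrt(sum(float(x) ** 2 for x in v))
    return [float(x) / n for x in v]

cols = [normalize(p1), normalize(p2a), normalize(p3)]
Q = [[cols[j][i] for j in range(3)] for i in range(3)]
Qt = [list(r) for r in zip(*Q)]

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

D = matmul(matmul(Qt, A), Q)
print([round(D[i][i], 10) for i in range(3)])  # diagonal: [3.0, 3.0, -3.0]
```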


Normal Matrices  

A real matrix A is said to be normal if A.A^%T = A^%T.A.  If the entries are complex, you replace the transpose by the conjugate transpose: take the complex conjugate of each entry of the matrix and then take the transpose.    
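The definition is a direct computation.  A minimal Python sketch (illustrative; the three test matrices are my own choices, not from the text) checks A.A^%T = A^%T.A for a few real matrices:

```python
# A real matrix A is normal when A * A^T == A^T * A.  Every symmetric
# matrix is normal (A^T = A makes the two products identical), but some
# non-symmetric matrices are normal too, and a generic one is not.
def transpose(M):
    return [list(c) for c in zip(*M)]

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def is_normal(A):
    return matmul(A, transpose(A)) == matmul(transpose(A), A)

S = [[6, 1], [1, 4]]    # symmetric, hence normal
N = [[0, -1], [1, 0]]   # a rotation: not symmetric, but still normal
T = [[1, 1], [0, 1]]    # a shear: not normal

print(is_normal(S), is_normal(N), is_normal(T))  # True True False
```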

Read the comments on page 389 and 390 of your text.   

 

Exercises 3.4 

Exercise 3.4.1 (a), (e), (f), (k); Exercise 3.4.3; Exercise 3.4.6 (carry out the multiplication, but also look at the eigenvalues and eigenvectors of the matrix: are the eigenvectors for distinct eigenvalues orthogonal?).  Exercises are due on November 17, 2008 at the end of the class period.  E-mail me your worksheet.