Eigenvalues and Eigenvectors
Several physical applications require the analysis of a system of equations where the amounts at time k+1 depend on some combination of the amounts at time k. These can be represented as a vector-matrix equation of the form x(k+1) = A x(k). In these models the amount at some starting time is known, and we want to know the amount present at some future time, or what happens to the solutions as time goes to infinity. The solution as time goes to infinity is known as the steady state solution.
The relationship is
x(k+1) = A x(k) = A(A x(k-1)) = A(A(A x(k-2))) = ... = A(A(...(A x(0))...)).
We can simplify the above relationship as
x(k+1) = A x(k) = A^2 x(k-1) = A^3 x(k-2) = A^4 x(k-3) = ... = A^(k+1) x(0).
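As a concrete check (the matrix and starting vector here are made up for illustration), iterating x(k+1) = A x(k) five times agrees with multiplying x(0) by A^5:
> with(LinearAlgebra):
> A:=<<0.9,0.1>|<0.2,0.8>>:            # hypothetical 2x2 matrix
> x0:=<100,50>:                        # hypothetical starting amounts x(0)
> x:=x0:
> for k from 1 to 5 do x:=A.x end do:  # iterate x(k+1)=A x(k)
> x, (A^5).x0;                         # the two results agree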
We would like to have a method that does not require us to take powers of matrices. This is not an easy thing to do accurately, even in the case of 2x2 matrices, and many of the problems that arise in the real world are very large systems. The above type of system is called a finite dimensional dynamical system. There are two different approaches to solving the above problems that we will study, and they are both related to one topic. The first thing we want to look at is what happens if Ax = λx. This means that
A^2 x = A(Ax) = A(λx) = λ Ax = λ(λx) = λ^2 x.
So if A is nxn, i.e. we have a system of n equations, and if any vector can be written as a linear combination of n independent vectors x1, ..., xn with the property that A x1 = λ1 x1 (and likewise for each xi), then
x(0) = c1 x1 + c2 x2 + ... + cn xn
and
x(k) = A^k x(0) = c1 λ1^k x1 + c2 λ2^k x2 + ... + cn λn^k xn.
This means that we have turned the problem of taking powers of a matrix into taking powers of n scalar numbers.
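As a quick sketch of this idea (the matrix, eigenvectors, and coefficients below are all hypothetical, chosen so the eigenpairs are easy to see):
> with(LinearAlgebra):
> A:=<<1,2>|<2,1>>:                    # hypothetical matrix with eigenpairs (3,<1,1>) and (-1,<1,-1>)
> x0:=<5,1>:                           # x0 = 3*<1,1> + 2*<1,-1>, so c1=3 and c2=2
> (A^4).x0;                            # powers of the matrix directly: <245,241>
> 3*(3^4)*<1,1> + 2*((-1)^4)*<1,-1>;   # c1*lambda1^4*x1 + c2*lambda2^4*x2: the same vector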
The next thing that we would like to look at is what happens when we take powers of a diagonal matrix.
> A:=DiagonalMatrix(<a,b,c,d>,4,4);
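Taking a power of this matrix simply takes powers of each diagonal entry:
> A^3;   # DiagonalMatrix(<a^3,b^3,c^3,d^3>)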
With this in mind, now consider what happens if the matrix A can be written as A = P D P^(-1), where D is a diagonal matrix. Then
A^2 = (P D P^(-1))(P D P^(-1)) = P D (P^(-1) P) D P^(-1) = P D^2 P^(-1),
or, since P^(-1) P = I, we have in general
A^k = P D^k P^(-1),
so now we have shifted the problem of taking powers of A to finding powers of the diagonal elements of D. Returning to our original dynamical system, we see that if we can write the matrix A as A = P D P^(-1), then
x(k+1) = A^(k+1) x(0) = P D^(k+1) P^(-1) x(0).
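A quick check of this factorization property, reusing the hypothetical matrix from the sketch above:
> with(LinearAlgebra):
> A:=<<1,2>|<2,1>>:                       # hypothetical matrix, A = P.DD.P^(-1)
> P:=<<1,1>|<1,-1>>:                      # columns are its eigenvectors
> DD:=DiagonalMatrix(<3,-1>):             # the eigenvalues, in the matching order
> Equal(A^5, P.(DD^5).MatrixInverse(P));  # true: A^5 = P.D^5.P^(-1)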
The remainder of the semester we will be studying the properties of finding the scalars λ and the vectors x associated with the matrix A, i.e. we want to look at Ax = λx, or (A - λI)x = 0. For this to work we will need to have a non-trivial vector x in the null space of the matrix A - λI. This means the matrix A - λI must be a singular matrix. We are then looking for the values of λ that will make the matrix singular. To do this we need
det(A - λI) = 0.
This will give us a polynomial of degree n to solve. We will now consider a few problems to see how this works.
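For example, with a small hypothetical 2x2 matrix:
> with(LinearAlgebra):
> M:=<<2,1>|<1,2>>:                              # hypothetical matrix
> p:=Determinant(M-lambda*IdentityMatrix(2));    # lambda^2-4*lambda+3, a degree 2 polynomial
> solve(p=0,lambda);                             # the roots 3, 1 are the eigenvalues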
Example 1.
> p:=Determinant(A-lambda*I2);
> Da:=MatrixInverse(P).A.P;
Since the first column of P is an eigenvector associated with the eigenvalue -1, -1 will appear in the 1,1 position of the diagonal matrix. Next, the second column of P is an eigenvector associated with the eigenvalue 3, so 3 appears in the 2,2 position. Always be aware of how you build the P matrix and the placement of the eigenvalues on the diagonal. Later you may not get the desired results if you do not place the eigenvectors in the P matrix in exactly the correct order.
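Since the commands defining A and P for this example are not reproduced above, here is a self-contained sketch of the same steps using a hypothetical matrix that also has eigenvalues -1 and 3:
> with(LinearAlgebra):
> A:=<<0,3>|<1,2>>:              # hypothetical matrix with eigenvalues -1 and 3
> I2:=IdentityMatrix(2):
> p:=Determinant(A-lambda*I2);   # lambda^2-2*lambda-3 = (lambda-3)*(lambda+1)
> solve(p=0,lambda);             # -1, 3
> v1:=NullSpace(A+I2)[1]:        # eigenvector for -1
> v2:=NullSpace(A-3*I2)[1]:      # eigenvector for 3
> P:=<v1|v2>:                    # -1's eigenvector first, 3's second
> MatrixInverse(P).A.P;          # the diagonal matrix with -1 and 3 on the diagonal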
Example 2.
> p1:=Determinant(B-lambda*I2);
> DD:=MatrixInverse(P).B.P;
An eigenvalue combined with an associated eigenvector is called an eigenpair: (5, x1) is one eigenpair and (-2, x2) is another eigenpair for this matrix. The spectrum for the matrix is {5, -2}, and the spectral radius is
ρ(B) = max{|5|, |-2|} = 5.
Since these two eigenvectors are linearly independent, we can write any vector in two space as a linear combination of these two eigenvectors. We need to look at bigger matrices.
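Before moving on, here is a sketch of that claim; since the entries of B are not reproduced above, a hypothetical matrix with the same eigenvalues 5 and -2 is used:
> with(LinearAlgebra):
> B:=<<4,3>|<2,-1>>:                   # hypothetical matrix with eigenpairs (5,<2,1>) and (-2,<1,-3>)
> v1:=<2,1>: v2:=<1,-3>:
> c:=LinearSolve(<v1|v2>, <7,0>);      # coefficients writing <7,0> = c[1]*v1 + c[2]*v2
> c[1]*5*v1 + c[2]*(-2)*v2, B.<7,0>;   # both give the same vector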
Example 3.
> AA:=<<1,1,1>|<0,2,-1>|<0,-3,0>>;
> p3:=factor(Determinant(AA-lambda*I3));
> eigvalues:=solve(p3=0,lambda);
The problem with x1, x2, x3 above is that they are lists and not vectors; the {{ in the output is the problem. The following commands are the way to get to the vectors, which we need to use.
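The commands themselves are not reproduced above; presumably they indexed the NullSpace output with [1], along these lines (using eigvalues from the solve above and the identity matrix I3):
> x1:=NullSpace(AA-eigvalues[1]*I3)[1];   # [1] picks the vector out of the set
> x2:=NullSpace(AA-eigvalues[2]*I3)[1];
> x3:=NullSpace(AA-eigvalues[3]*I3)[1];
> P3:=<x1|x2|x3>:                         # the eigenvector matrix used below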
> D3:=MatrixInverse(P3).AA.P3;
Eigenvectors for distinct eigenvalues are always linearly independent. If we have three distinct eigenvalues then we can find three linearly independent eigenvectors.
The eigenvalues, and hence the eigenvectors, do not have to be real, as the following example will illustrate. What is the spectrum and the spectral radius of this matrix? Also, list the eigenpairs.
Example 4.
> p:=Determinant(E-lambda*I2);
> lambda1:=(2 + sqrt(4-4*9*1))/2;
> lambda2:=(2 - sqrt(4-4*9*1))/2;
We need to look for the null space of E - λ1 I2.
Facts about complex numbers
1. (a+bi)(c+di) = (ac - bd) + (ad + cb)i
2. (a+bi)(a-bi) = a^2 + b^2
3. |a+bi| = sqrt(a^2 + b^2)
4. 1/(a+bi) = (a-bi)/(a^2+b^2) = a/(a^2+b^2) + (-b/(a^2+b^2))i
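Facts 1 and 4 can be checked in Maple with evalc, which treats the unknowns as real:
> evalc((a+b*I)*(c+d*I));   # a*c-b*d + (a*d+b*c)*I
> evalc(1/(a+b*I));         # a/(a^2+b^2) - b/(a^2+b^2)*I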
Let us return to Gauss elimination of the matrix EI2 = E - λ1 I2. We will have to use the with(linalg): package here, since I want to use the addrow command.
> EI21:=addrow(EI2,1,2,1/(sqrt(2)*I));
This shows that the two rows are proportional even though they do not look like it.
> EI22:=mulrow(EI21,1,-1/2);
We see that an eigenvector for λ1 is therefore of the form (1, v), with the second component v read off from the reduced row. For the eigenvalue λ2 the eigenvector will obey the conjugate form (1, conj(v)), since λ2 is the complex conjugate of λ1 and E is real. You should check these results.
List all the eigenpairs and also list the spectrum and spectral radius for the matrix E.
What is meant by |λ|? Here λ1 = 1 + 2√2 i, so |λ1| = sqrt(1^2 + (2√2)^2) = sqrt(9) = 3. Look at the spectral radius in the complex plane. Look at the figure.
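A quick check of this modulus in Maple, using lambda1 as computed above:
> abs(lambda1);   # returns 3, the spectral radius of E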
Example 5.
> F:=<<1/2,1/2>|<1/4,3/4>>;
> EiF:=Determinant(F-lambda*I2);
> lambda:=solve(EiF=0,lambda);   # the two eigenvalues; lambda[1] and lambda[2] are used below
> p1:=NullSpace(F-lambda[1]*I2)[1];
> p2:=NullSpace(F-lambda[2]*I2)[1];
The [1] after the NullSpace command above will give you a vector, not a list.
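The line that builds P is not reproduced above; presumably the two eigenvectors were placed in its columns:
> P:=<p1|p2>;   # column order matches DiagonalMatrix(<lambda[1],lambda[2]>) below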
> DF:=DiagonalMatrix(<lambda[1],lambda[2]>);
If we want to find the steady state, we write x(n) = F^n x(0) = P.DF^n.MatrixInverse(P).x(0) and take the limit as n goes to infinity. Let DF be the matrix diag(1, 1/4) found above. Notice that DF^n = diag(1, (1/4)^n). As we can see, the entry 1/4 on the diagonal goes to zero very quickly, so the steady state will be the following product.
> DFS:=DiagonalMatrix(<1,0>);
> Steadystate:=P.DFS.MatrixInverse(P).<30,20>;
Let us look at the matrix before it was multiplied by the vector [30,20].
> P.DFS.MatrixInverse(P);
The last operation was not necessary but does allow you to see what happens before you multiply by the initial value.
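As a numerical check, the powers of F themselves converge to this same matrix, since (1/4)^n is negligible for large n:
> evalf(F^50);            # essentially P.DFS.MatrixInverse(P)
> evalf((F^50).<30,20>);  # essentially the Steadystate vector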