
The eigenvector associated with $\lambda_1 = 1$ can be found similarly:
$$P - 1I = \begin{pmatrix} .9925 - 1 & .0125 \\ .0075 & .9875 - 1 \end{pmatrix} = \begin{pmatrix} -.0075 & .0125 \\ .0075 & -.0125 \end{pmatrix},$$
so we must solve
\begin{align*}
-.0075x + .0125y &= 0,\\
.0075x - .0125y &= 0.
\end{align*}
Because the equations are multiples of each other, we solve $-.0075x + .0125y = 0$ to get $y = \frac{.0075}{.0125}\,x = \frac{3}{5}\,x$, so $\mathbf{v} = \bigl(x, \frac{3}{5}x\bigr) = x\bigl(1, \frac{3}{5}\bigr)$. Choosing $x = 5$ (since it makes the vector have the simplest form), we find $\mathbf{v} = (5, 3)$.
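As a quick check, a few lines of MATLAB (using the $P$ of this example) confirm that $P\mathbf{v} = 1\mathbf{v}$:

    P = [.9925 .0125; .0075 .9875];  % the matrix from this example
    v = [5; 3];                      % the eigenvector we found
    P*v                              % returns [5; 3], so P*v = 1*v as claimed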
Although this was only one example of calculating an eigenvector for a
particular matrix $P$, the procedure works the same way for any $2 \times 2$
matrix. Though we will not prove it here, one of the two equations will always
be a multiple of the other, so you can solve for $y$ in terms of $x$ (or $x$ in
terms of $y$) to find all the eigenvectors.
As with eigenvalues, calculating eigenvectors for 3 × 3 or larger matrices
is done analogously to the 2 × 2 case, although some additional complications
come in. We’ll leave a discussion of those to a full course in linear algebra
and instead suggest that you let MATLAB do the computations for you.
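For instance, assuming the matrix $P$ of the last example, MATLAB's built-in eig command produces eigenvalues and eigenvectors together:

    P = [.9925 .0125; .0075 .9875];  % the matrix from the last example
    [V, D] = eig(P)                  % columns of V are eigenvectors; the
                                     % diagonal entries of D are the eigenvalues

Note that MATLAB scales each eigenvector to have length 1, so in the column of $V$ paired with the eigenvalue 1 in $D$ you will see not $(5, 3)$ but a multiple of it, approximately $(.8575, .5145)$.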
Computer methods of calculation. Actually, MATLAB and other computer packages do not really calculate eigenvectors and eigenvalues in the
way described previously. Because the computation of these is so important,
not only for biological models but for a host of problems throughout science
and engineering, quite clever and sophisticated methods have been developed
and incorporated into many standard software packages.
Although we will not really explain any methods these packages use, we
will give a hint at one type of approach by discussing the power method.
Given $A$, pick any initial vector $\mathbf{x}_0$ and compute $\mathbf{x}_1 = A\mathbf{x}_0$. According
to the Strong Ergodic Theorem, if $\lambda_1$ is the dominant eigenvalue of $A$ with
corresponding eigenvector $\mathbf{v}_1$, then we should expect $\frac{1}{\lambda_1}\mathbf{x}_1$ to be closer to $\mathbf{v}_1$
than $\mathbf{x}_0$ was. Because we do not yet know what $\lambda_1$ is, we have to somehow
adjust $\mathbf{x}_1$ to account for the growth factor. One way of doing this is to simply
divide each entry of $\mathbf{x}_1$ by its largest entry to get a new vector we call $\tilde{\mathbf{x}}_1$.
This means $\tilde{\mathbf{x}}_1$ will have one entry that is a 1 and will be "closer" to being an
eigenvector than $\mathbf{x}_0$ was.
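Putting the idea into MATLAB makes it concrete. The sketch below uses the $P$ of the earlier example in place of $A$; the starting vector and the number of iterations are arbitrary choices, and the multiply-and-rescale step is simply repeated, as described next:

    A = [.9925 .0125; .0075 .9875];  % the matrix whose eigendata we want
    x = [1; 1];                      % any initial vector x0
    for k = 1:500                    % repeat the multiply-and-rescale step
        x = A*x;                     % compute the next iterate
        m = max(x);                  % its largest entry approaches lambda1
        x = x/m;                     % rescale so the largest entry is 1
    end
    m                                % approximately 1, the dominant eigenvalue
    x                                % approximately (1, .6), a multiple of (5, 3)

Because the second eigenvalue of this matrix, $.98$, is so close to the dominant eigenvalue $1$, the iterates converge slowly; that is why so many repetitions are used here.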
We can then repeat the process using $\tilde{\mathbf{x}}_1$ in place of $\mathbf{x}_0$ to get an even better
approximate eigenvector. Of course, we should then repeat the process again,