108 Chapter 6 Higher Dimensional Linear Systems
is a solution of $Y' = (T^{-1}AT)Y$ that satisfies the initial condition $Y(0) = (c_1, \ldots, c_n)$. As in Chapter 3, this is the only such solution, because if
$$
W(t) = \begin{pmatrix} w_1(t) \\ \vdots \\ w_n(t) \end{pmatrix}
$$
is another solution, then differentiating each expression $w_j(t)\exp(-\lambda_j t)$, we find
$$
\frac{d}{dt}\left( w_j(t)\, e^{-\lambda_j t} \right) = \left( w_j' - \lambda_j w_j \right) e^{-\lambda_j t} = 0.
$$
Hence $w_j(t) = c_j \exp(\lambda_j t)$ for each $j$. Therefore the collection of solutions $Y(t)$ yields the general solution of $Y' = (T^{-1}AT)Y$.
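The product-rule step above is easy to double-check symbolically. The sketch below uses SymPy (our choice of tool; the text of course uses none) to verify both the differentiation identity and the fact that $w_j(t) = c_j e^{\lambda_j t}$ makes the derivative vanish:

```python
import sympy as sp

t, lam, c = sp.symbols('t lam c')
w = sp.Function('w')

# d/dt [ w(t) e^{-lam t} ] = (w'(t) - lam w(t)) e^{-lam t}
lhs = sp.diff(w(t) * sp.exp(-lam * t), t)
rhs = (sp.diff(w(t), t) - lam * w(t)) * sp.exp(-lam * t)
assert sp.simplify(lhs - rhs) == 0

# With w(t) = c e^{lam t}, the derivative is identically zero,
# so w(t) e^{-lam t} is the constant c.
assert sp.simplify(sp.diff(c * sp.exp(lam * t) * sp.exp(-lam * t), t)) == 0
```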
It then follows that $X(t) = TY(t)$ is the general solution of $X' = AX$, so this general solution may be written in the form
$$
X(t) = \sum_{j=1}^{n} c_j e^{\lambda_j t} V_j.
$$
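As a numerical sketch of this formula, the snippet below (using NumPy, and a small diagonalizable matrix of our own choosing rather than one from the text) assembles $X(t) = \sum_j c_j e^{\lambda_j t} V_j$ from the eigendata and confirms by a centered difference that it satisfies $X' = AX$:

```python
import numpy as np

# Hypothetical diagonalizable matrix, chosen only for illustration.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
lams, V = np.linalg.eig(A)      # eigenvalues lambda_j; eigenvectors V_j as columns
c = np.array([2.0, -1.0])       # arbitrary constants c_j fixing the initial condition

def X(t):
    # X(t) = sum_j c_j * exp(lambda_j * t) * V_j
    return (V * (c * np.exp(lams * t))).sum(axis=1)

# X(0) = sum_j c_j V_j = V c.
assert np.allclose(X(0.0), V @ c)

# Check X'(t) = A X(t) at t = 0.5 via a centered difference.
t, h = 0.5, 1e-6
Xdot = (X(t + h) - X(t - h)) / (2 * h)
assert np.allclose(Xdot, A @ X(t))
```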
Now suppose that the eigenvalues $\lambda_1, \ldots, \lambda_k$ of $A$ are negative, while the eigenvalues $\lambda_{k+1}, \ldots, \lambda_n$ are positive. Since there are no zero eigenvalues, the system is hyperbolic. Then any solution that starts in the subspace spanned by the vectors $V_1, \ldots, V_k$ must first of all stay in that subspace for all time, since $c_{k+1} = \cdots = c_n = 0$. Secondly, each such solution tends to the origin as $t \to \infty$. In analogy with the terminology introduced for planar systems, we call this subspace the stable subspace. Similarly, the subspace spanned by $V_{k+1}, \ldots, V_n$ contains solutions that move away from the origin. This subspace is the unstable subspace. All other solutions tend toward the stable subspace as time goes backward and toward the unstable subspace as time increases. Therefore this system is a higher dimensional analog of a saddle.
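The saddle behavior just described can be seen numerically. In the sketch below (NumPy again, with a hypothetical diagonal matrix of our own, not the text's example), a solution starting in the stable subspace stays there and tends to the origin, while a solution with an unstable component moves away:

```python
import numpy as np

# Hypothetical hyperbolic system: eigenvalues -1 and -2 are negative, 3 is
# positive, so the stable subspace is span{e1, e2}, the unstable span{e3}.
A = np.diag([-1.0, -2.0, 3.0])
lams, V = np.linalg.eig(A)

def flow(x0, t):
    # X(t) = sum_j c_j exp(lambda_j t) V_j, with c = V^{-1} x0.
    c = np.linalg.solve(V, x0)
    return (V * (c * np.exp(lams * t))).sum(axis=1)

x_stable = np.array([1.0, -4.0, 0.0])   # initial condition in the stable subspace
x_t = flow(x_stable, 10.0)
assert np.linalg.norm(x_t) < 1e-3       # tends to the origin as t -> infinity
assert abs(x_t[2]) < 1e-9               # never leaves the stable subspace

x_generic = np.array([1.0, -4.0, 0.5])  # nonzero unstable component
assert np.linalg.norm(flow(x_generic, 10.0)) > 1e3   # driven away from the origin
```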
Example. Consider
$$
X' = \begin{pmatrix} 1 & 2 & -1 \\ 0 & 3 & -2 \\ 0 & 2 & -2 \end{pmatrix} X.
$$
In Section 5.2 in Chapter 5, we showed that this matrix has eigenvalues
2, 1, and −1 with associated eigenvectors (3, 2, 1), (1, 0, 0), and (0, 1, 2),
respectively. Therefore the matrix
$$
T = \begin{pmatrix} 3 & 1 & 0 \\ 2 & 0 & 1 \\ 1 & 0 & 2 \end{pmatrix}
$$