For two vertices $u$ and $v$, we say that $v$ is reachable from $u$ on $G$, which we denote as $u \xrightarrow{*} v$, if there exists a directed path from $u$ to $v$ on $G$. Based on the reachability we define an equivalence relation $\sim$ on $V$ by: $u \sim v \iff [u \xrightarrow{*} v$ and $v \xrightarrow{*} u]$. In fact it is straightforward to verify (i) [reflexivity] $v \sim v$, (ii) [symmetry] $u \sim v \Rightarrow v \sim u$, and (iii) [transitivity] $u \sim v,\ v \sim w \Rightarrow u \sim w$. Accordingly, the vertex set $V$ is partitioned into equivalence classes $\{V_k\}_k$, called strongly connected components (or strong components, in short). Namely, two vertices $u$ and $v$ belong, by definition, to the same strong component if and only if $u \xrightarrow{*} v$ and $v \xrightarrow{*} u$. A partial order $\preceq$ can be defined on the family $\{V_k\}_k$ of strong components by
$$V_k \preceq V_l \iff v_l \xrightarrow{*} v_k \ \text{on } G \ \text{for some } v_k \in V_k \ \text{and } v_l \in V_l .$$
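To make the definitions concrete, here is a minimal sketch (my own illustration, not taken from the text) that tests reachability $u \xrightarrow{*} v$ by a depth-first search and then tests the equivalence $u \sim v$; the graph is assumed to be given as a Python dict mapping each vertex to a list of its out-neighbours.
\begin{verbatim}
def reachable(adj, u, v):
    """True if there is a directed path from u to v (u ->* v); by
    convention every vertex is reachable from itself."""
    seen, stack = {u}, [u]
    while stack:
        w = stack.pop()
        if w == v:
            return True
        for x in adj.get(w, ()):
            if x not in seen:
                seen.add(x)
                stack.append(x)
    return False

def equivalent(adj, u, v):
    """u ~ v  <=>  u ->* v and v ->* u."""
    return reachable(adj, u, v) and reachable(adj, v, u)
\end{verbatim}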
Each strong component $V_k$ determines a vertex-induced subgraph $G_k = (V_k, A_k)$ of $G$, also called a strong component of $G$. The partial order $\preceq$ is induced naturally on the family of strong components $\{G_k\}_k$ by: $G_k \preceq G_l \iff V_k \preceq V_l$. The decomposition of $G$ into partially ordered strong components $\{G_k\}_k$ is referred to as the strong component decomposition of $G$. An efficient algorithm of complexity $O(|A|)$ is known for the strong component decomposition (see Aho–Hopcroft–Ullman [1, 2], Tarjan [310]).
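As an illustration of how such a decomposition might be computed, the sketch below (my own, using Kosaraju's two-pass method rather than the particular algorithms of the references above) returns the strong components as a list of vertex sets, output in a topological order of the condensation (sources first), together with the vertex-to-component map; the second routine collects the arcs between distinct components, from which the partial order $\preceq$ is obtained as a reachability order.
\begin{verbatim}
def strong_components(adj):
    """Strong component decomposition by Kosaraju's two-pass DFS, O(|V|+|A|).
    adj: dict mapping each vertex to an iterable of its out-neighbours."""
    vertices = set(adj)
    for nbrs in adj.values():
        vertices.update(nbrs)
    radj = {v: [] for v in vertices}          # transpose graph
    for u in adj:
        for v in adj[u]:
            radj[v].append(u)

    order, seen = [], set()                   # first pass: DFS completion order
    for s in vertices:
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(adj.get(s, ())))]
        while stack:
            v, it = stack[-1]
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(adj.get(w, ()))))
                    break
            else:
                order.append(v)
                stack.pop()

    comp, components = {}, []                 # second pass: DFS on the transpose,
    for s in reversed(order):                 # in reverse completion order
        if s in comp:
            continue
        comp[s] = len(components)
        components.append({s})
        stack = [s]
        while stack:
            v = stack.pop()
            for w in radj[v]:
                if w not in comp:
                    comp[w] = comp[s]
                    components[-1].add(w)
                    stack.append(w)
    return components, comp

def condensation_arcs(adj, comp):
    """Arcs between distinct strong components (as pairs of component indices)."""
    return {(comp[u], comp[v]) for u in adj for v in adj[u] if comp[u] != comp[v]}
\end{verbatim}
Kosaraju's method is chosen here only for brevity of exposition; the one-pass algorithm of Tarjan cited in the text achieves the same linear bound.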
For an $n \times n$ matrix $A = (A_{ij} \mid i, j = 1, \cdots, n)$ we can represent the zero/nonzero pattern of the matrix in terms of a directed graph$^2$ $G = (V, \tilde{A})$ with $V = \{1, \cdots, n\}$ and $\tilde{A} = \{(j, i) \mid A_{ij} \neq 0\}$. The strong component decomposition of the graph $G$ corresponds to a (finest possible) block-triangular decomposition of the matrix $A$ by means of a simultaneous permutation of the rows and the columns.
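The passage from the nonzero pattern to a block-triangular form might be realized as in the following sketch (again my own illustration, reusing the strong_components routine sketched above and assuming NumPy and 0-based indices): arcs $(j, i)$ are created for the nonzero entries $A_{ij}$, and the rows and columns are then permuted simultaneously, component by component.
\begin{verbatim}
import numpy as np

def pattern_graph(A):
    """Directed graph of the zero/nonzero pattern: arc (j, i) whenever
    A[i, j] != 0. Vertices are 0, ..., n-1 (the text uses 1, ..., n)."""
    n = A.shape[0]
    return {j: [i for i in range(n) if A[i, j] != 0] for j in range(n)}

def block_triangular_permutation(A):
    """Simultaneous row/column permutation putting A into block-triangular
    form, with one diagonal block per strong component of the pattern graph."""
    components, _ = strong_components(pattern_graph(A))
    perm = [v for V_k in components for v in sorted(V_k)]
    return perm, [sorted(V_k) for V_k in components]

# Hypothetical use: B = A[np.ix_(perm, perm)] is block lower-triangular in
# this ordering; reversing the component order gives the upper-triangular
# variant.
\end{verbatim}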
Example 2.2.1. For the matrix $A$ of (2.11) the zero/nonzero structure can be represented by a directed graph $G$ of Fig. 2.1. The graph has three strong components, $V_1 = \{4, 6\}$, $V_2 = \{1, 2, 5\}$, $V_3 = \{3\}$, with $V_1 \preceq V_2 \preceq V_3$. The strong component decomposition of $G$ yields a block-triangular form given in (2.12), where $V_k$ corresponds to $R_k = C_k$ for $k = 1, 2, 3$. $\Box$
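As a quick usage check of the routines sketched above, the following adjacency structure is my own and is chosen only so that its strong components agree with those stated in the example; it is not necessarily the graph of Fig. 2.1.
\begin{verbatim}
adj = {1: [2], 2: [4, 5], 3: [1], 4: [6], 5: [1], 6: [4]}
components, comp = strong_components(adj)
print(components)   # e.g. [{3}, {1, 2, 5}, {4, 6}], i.e. V_3, V_2, V_1
\end{verbatim}
The components appear in a topological order of the condensation (sources first), which is the reverse of the $\preceq$-increasing indexing $V_1 \preceq V_2 \preceq V_3$ used in the example.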
In applications of linear algebra it is often crucial to recognize the relevant transformations associated with a matrix. For example, we can talk of the Jordan canonical form of $A$ only if $A$ is subject to similarity transformations, $S A S^{-1}$ with nonsingular $S$. This is the case when $A$ corresponds to a mapping in a single vector space. If a matrix $A$ corresponds to a mapping between a pair of different vector spaces, it is subject to equivalence transformations, $S_r A S_c$ with nonsingular matrices $S_r$ and $S_c$. In this case, it is meaningless to consider the Jordan canonical form of $A$, whereas it is still sound to talk about the rank of $A$. Consider, for instance, a state-space equation $\dot{x} = Ax + Bu$ for a dynamical system. The matrix $A$ here is subject to similarity transformations, and $B$ to equivalence transformations. It should be clear
$^2$ This graph is called the Coates graph in Chen [34].