a value attained only with Reed-Solomon codes.
If the coderate r = k/n is fixed, then d/n has a fixed upper bound (1 - r + 1/n). In fact the Plotkin bound (Chapter 2) shows that d/n has a more severe upper limit, namely, (1 - r)/2.
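The two limits on relative distance can be tabulated directly. The sketch below evaluates both bounds at a fixed coderate; the (n, k) pairs are illustrative examples, not codes from the text.

```python
# Sketch: upper limits on the relative distance d/n of a block code at
# fixed coderate r = k/n, as quoted above.  The (n, k) pairs below are
# illustrative only.

def singleton_limit(n, k):
    """Singleton bound: d <= n - k + 1, so d/n <= 1 - r + 1/n."""
    return 1 - k / n + 1 / n

def plotkin_limit(n, k):
    """Plotkin limit on d/n (asymptotic form): roughly (1 - r)/2."""
    return (1 - k / n) / 2

for n, k in [(15, 7), (255, 123), (1023, 511)]:
    r = k / n
    print(f"n={n:5d} r={r:.3f}  Singleton d/n <= {singleton_limit(n, k):.3f}"
          f"  Plotkin d/n < {plotkin_limit(n, k):.3f}")
```

Note how the Plotkin figure is roughly half the Singleton one at every length, which is why Reed-Solomon codes, which meet the Singleton bound, are exceptional.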
In the case of cyclic codes, if m is the degree of the minimal polynomial of the basic root, we have (very approximately) md/2 < (n - k), since the number of polynomial factors required to produce a distance d, with d consecutive roots, is d/2. Therefore d/n < 2(1 - r)/m.
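This shrinkage of d/n with length can be made concrete. The sketch below applies the bound with m taken as log₂ n (using primitive-length codes n = 2^m - 1 purely as an illustration) at a fixed coderate of 1/2.

```python
import math

# Sketch of the cyclic-code limit quoted above: with m the degree of the
# minimal polynomial of the basic root, roughly m*d/2 < n - k, hence
# d/n < 2(1 - r)/m.  Taking m ~ log2 n (illustrated with n = 2^m - 1)
# shows d/n shrinking as n grows at constant coderate.

def cyclic_relative_distance_limit(n, r):
    m = math.log2(n + 1)          # m ~ log2 n when n = 2^m - 1
    return 2 * (1 - r) / m

r = 0.5
for m in (4, 8, 12):
    n = 2**m - 1
    print(f"n={n:5d}  d/n < {cyclic_relative_distance_limit(n, r):.4f}")
```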
But m = log₂n approximately; so that as n increases, keeping the coderate constant, d/n decreases in general. The block codes we have looked at are severely constrained as to their distance, not in absolute terms but in proportion to the code length; and the ratio d/n is the critical parameter when we consider the objective of correcting random bit errors of rate p. We want d/2 = t > pn.
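The condition t > pn makes the role of d/n explicit: at a fixed distance, lengthening the block eventually defeats the code. A minimal sketch, with illustrative parameters:

```python
# Sketch: for random bit errors at rate p, a block code needs a correction
# radius t = d/2 exceeding the expected error count p*n, i.e. d/n > 2p.
# The parameters below are illustrative.

def can_handle_error_rate(d, n, p):
    """True when the guaranteed radius t = d/2 exceeds p*n."""
    return d / 2 > p * n

print(can_handle_error_rate(d=21, n=255, p=0.03))   # t = 10.5 vs p*n = 7.65
print(can_handle_error_rate(d=21, n=1023, p=0.03))  # same d, longer block
```

The same distance that suffices at n = 255 fails at n = 1023, precisely because d/n has fallen below 2p.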
The situation with convolutional codes is very different. At first sight they have small distances and low coderates. However, the distance is a local property. It is true that a convolutional code cannot correct more than (d - 1)/2 errors if they all occur in a short interval, but if they are scattered over a longer period they are correctable. If the codevector of least weight d represents a digression from the all-zero path of length L, then (roughly speaking) any number of error bursts of less than (d - 1)/2 bits can be corrected, provided they are spaced at intervals greater than L.
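The "local distance" property lends itself to a simple sliding-window test: an error pattern is taken as correctable when no window of L consecutive positions holds more than (d - 1)/2 errors. This is an illustrative check of the rule of thumb stated above, not a decoding algorithm from the text, and the parameters are invented.

```python
# Sketch of the local-distance rule: bursts are taken as correctable
# provided every window of L consecutive positions contains at most
# (d - 1)/2 errors.  Illustrative parameters, not the book's algorithm.

def bursts_correctable(error_positions, d, L):
    """Check that no window of length L holds more than (d-1)/2 errors."""
    limit = (d - 1) // 2
    positions = sorted(error_positions)
    # A maximally loaded window can always be shifted to start at an
    # error, so it suffices to test windows anchored at each error.
    for start in positions:
        if len([p for p in positions if start <= p < start + L]) > limit:
            return False
    return True

# Two 2-bit bursts, d = 7 (at most 3 errors per window), L = 10:
print(bursts_correctable([5, 6, 40, 41], d=7, L=10))  # well separated
print(bursts_correctable([5, 6, 8, 9], d=7, L=10))    # clustered
```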
Although there are digressions longer than L, these tend to have proportionally greater distances, so the statement still holds true. More important, however, the coderate r has only a tenuous connection with the distance. The determining factor for the distance is the constraint length, to which there is essentially no upper limit except that imposed by the complexity of the associated operations, in particular the decoding algorithms.

The overall error-correcting capability of a convolutional code therefore depends principally on the constraint length and on the nature of the convolutional polynomials g_i(T); but it cannot usually be expressed as a simple function of these items, only bounded by inequalities on the postdecoding error rate, such as that given by the expression of (5.7).
Finally, with regard to the low coderates of convolutional codes as commonly used, this is not an inherent property of such codes but again the result of wishing to avoid excessive complexity. To obtain higher coderates than r = 1/2 we require b > 1. But there are 2^(b(k-1)) states (for a binary code), each one of which has 2^b outgoing connections to other states, and also 2^b incoming ones from other states.
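The growth in trellis size as b increases can be seen at a glance by evaluating these counts; the values of b below are illustrative.

```python
# Sketch: trellis size for a binary convolutional code with b input bits
# per step and constraint length k, as stated above: 2^(b(k-1)) states,
# each with 2^b outgoing and 2^b incoming branches.

def trellis_size(b, k):
    states = 2 ** (b * (k - 1))
    branches_per_state = 2 ** b
    return states, branches_per_state

for b in (1, 2, 3):
    s, br = trellis_size(b, k=3)
    print(f"b={b}: {s} states, {br} branches in and out of each")
```

At k = 3, moving from b = 1 to b = 2 quadruples the state count (4 to 16) and doubles the branching, which is the complexity growth the next paragraph illustrates.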
Figure 5.12 illustrates an extension of our b = 1, ν = 2, k = 3 1/2-rate binary code of Figure 5.3 to a 2/3-rate b = 2, ν = 3, k = 3 code. An attempt to draw the state diagram of this simple scheme will convince the reader how quickly the complexity of the code grows as b increases above unity. Moreover, the code has a poor distance, d = 2 (consider the input ...0011000000...), which is essentially due to the fact that it is systematic, so there is only one parity bit out of the ν = 3 output bits