values it produces, and this additional information can be used to improve further
the error-correcting procedures.
For example, some bits could be flagged as "uncertain", that is, either 0 or 1, and could be omitted in the search for the nearest codeword and filled in only after the codeword was found. A channel that gives this 3-valued output (0, 1, X = uncertain) is called a Binary Erasure Channel.
To see how an Erasure Channel permits more accurate decoding compared with a "hard-decision" channel, consider the previous 3-bit example and suppose we receive 0X1. If we had been told that this was 001, we would have said 101 was transmitted; if we had been told that it was 011, we would have said 010 was transmitted. But if we work with 0X1, we compare with 0X0 and 1X1, from which 0X1 is equidistant, and conclude that we do not know what was transmitted. This is certainly more accurate (if less conclusive) than "hard-decoding" based on a prior, possibly arbitrary decision by a preprocessor, of which we are ignorant.
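The erasure-based comparison above can be sketched as follows (a minimal illustration, not from the text; the two-codeword code {010, 101} and the symbol X for an erasure are taken from the example):

```python
# Decode a received word over a Binary Erasure Channel by comparing it
# with each codeword while ignoring erased ('X') positions.
CODEWORDS = ["010", "101"]  # the 3-bit code implied by the example

def erasure_distance(received, codeword):
    """Hamming distance counted over the non-erased positions only."""
    return sum(1 for r, c in zip(received, codeword) if r != "X" and r != c)

def decode(received):
    dists = {c: erasure_distance(received, c) for c in CODEWORDS}
    best = min(dists.values())
    nearest = [c for c, d in dists.items() if d == best]
    # A unique nearest codeword decodes; a tie means "we do not know".
    return nearest[0] if len(nearest) == 1 else None

print(decode("0X1"))  # equidistant from 0X0 and 1X1 -> None ("don't know")
print(decode("0X0"))  # nearest codeword is 010
```

Note that 0X1 is rejected rather than forced to a possibly wrong answer, which is exactly the advantage over hard-decision preprocessing described above.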
The Erasure Channel is a special example of a more general channel that delivers qualifying information for each bit, for example, some sort of reliability factor. These reliability factors can be used to influence the concept and evaluation of "nearness". Thus, each bit might have an accompanying factor of value 0.5 to 1.0, which would be the estimated probability that it is what it claims to be. If we receive 0(0.5), 1(0.6), 1(0.9), where the values in parentheses are the probabilities, then we could say the "distance" to 010 is 0.5 + 0.4 + 0.9 = 1.8, whereas the "distance" to 101 is 0.5 + 0.6 + 0.1 = 1.2, and we would decode the received 011 to 101, not to 010. (Alternatively, we could say that the probability of 010 given the received values is 0.5 · 0.6 · 0.1 = 0.03, whereas the probability of 101 is 0.5 · 0.4 · 0.9 = 0.18, and again choose 101 in preference to 010.) This sort of decoding technique, in which reliability factors are taken into account, is called "soft-decision decoding".
It has useful applications, particularly in delicate systems, but in many cases it is simply not practicable, because the receiver of the data is presented with an uncompromising and unqualified bit stream by other equipment over which he has little or no control.
The other assumption that has been implicit in this introduction is that bit errors are random and uniform. In practice this is seldom so. Errors have their causes, and if that cause is an electrical "spike", a momentary short-circuit, or a scratch on a magnetic medium, it is common that the corruption affects more than one bit. This will happen if the "noise" lasts for longer than a bit-time or spreads wider than a bit-space on the medium. Bursts of errors are created, usually at widely spaced, irregular intervals. Error-correction techniques take account of burst-errors in various ways, such as
• Rearranging the sequence of the data so that the burst of errors is scattered randomly throughout it;
• Using specifically designed burst-error-detecting and -correcting codes (e.g., Fire codes);