Although these divergences look very natural, it is usually difficult to compute them, unless the operators $\rho$ and $\sigma$ commute, in which case they can both be written as
\[
\mathcal{D}_r(\omega;\varphi) = (\varphi-\omega)(1) - \omega\bigl(l_r(\sigma/\rho)\bigr), \qquad l_{1/2}(q) = 2\bigl(q^{1/2}-1\bigr) \ \text{for } r = \tfrac12, \tag{14}
\]
and, for $r = 1$,
\[
l_1(q) = \tfrac12\bigl(q - 1 - |q-1|\bigr) = (q-1)\wedge 0, \qquad \forall\, q \ge 0. \tag{15}
\]
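As an illustration (reading the argument of $l_r$ in (14) as the relative density $\sigma/\rho$, as assumed above), take commuting densities $\rho = \sum_i p_i E_i$ and $\sigma = \sum_i s_i E_i$ with a common spectral resolution $\{E_i\}$. Then (14) evaluates for $r = 1/2$ and $r = 1$ to
\[
\mathcal{D}_{1/2}(\omega;\varphi) = \sum_i \bigl(\sqrt{s_i} - \sqrt{p_i}\bigr)^2, \qquad \mathcal{D}_{1}(\omega;\varphi) = \sum_i (s_i - p_i)_+ ,
\]
a Hellinger-type and a total-variation-type divergence respectively; for normalized states the latter equals half the trace-norm distance $\tfrac12\operatorname{tr}|\rho - \sigma|$.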
Note that the operator function $l_{1/2}$ is the square-root case of the Rényi logarithm
\[
l_r(q) = \frac{1}{r}\bigl(e^{r\ln q} - 1\bigr) = \frac{1}{r}\bigl(q^r - 1\bigr), \qquad l_0(q) = \ln q, \tag{16}
\]
or $r$-logarithm, which is well defined for any $0 \le r < 1$ as a smooth, strictly monotone and concave operator function of $q > 0$, including the limiting case $r \to 0$, when $l_0 = \lim_{r\to 0} l_r$ is the natural logarithm $l_0 = \ln$.
It can be naturally extended to a proper concave function on $\mathbb{R}$ by setting $l_r(q) = -\infty$ for $q \le 0$; it has finite, strictly negative values for $0 < q < 1$, with $l_r(1) = 0$ and the normalized derivative $l_r'(1) = 1$ at $q = 1$, and it is strictly positive for $q > 1$.
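For example, these properties can be read off directly from (16): for $0 < r < 1$ and $q > 0$,
\[
l_r'(q) = q^{r-1} > 0, \qquad l_r''(q) = (r-1)\,q^{r-2} < 0, \qquad l_r'(1) = 1,
\]
so $l_r$ is strictly increasing and strictly concave with the normalized derivative at $q = 1$, while $\lim_{r\to 0}\tfrac{1}{r}(q^r - 1) = \ln q$ recovers the natural logarithm.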
However, in the case $r = 1$ the Rényi "logarithm" $l(q) = q - 1$ is not concave but only affine, corresponding to the trivial divergence $d(\omega,\varphi) = 0$ in (14) if $l_1$ is replaced by this $l$. This is why in the case $r = 1$ we redefine the divergence by another monotone concave function (15), which is, however, not strictly monotone and concave, and is not smooth at $q = 1$.
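Indeed, with the affine $l(q) = q - 1$ the second term of (14) is linear in its argument: in the commuting case $\omega\bigl(l(\sigma/\rho)\bigr) = \omega(\sigma/\rho) - \omega(1) = \varphi(1) - \omega(1)$, which exactly cancels the first term $(\varphi - \omega)(1)$, so this choice of $l$ gives zero for every pair of states.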
The information divergence $\mathcal{D}(\omega;\varphi)$ of $\omega$ from $\varphi$ is usually defined as a positive negaentropy $\mathcal{D} = -\mathcal{S}$ by the semifinite relative entropy
\[
\mathcal{D}(\omega;\varphi) = -\mathcal{S}(\omega;\varphi) = (\varphi-\omega)(1) - \omega\bigl(l(\sigma/\rho)\bigr). \tag{17}
\]
Here $\rho$ and $\sigma$ are the density operators of $\omega$ and $\varphi$, and $l$ is usually taken to be the Rényi logarithm (16) with $r \in [0,1[$, for which $\ln(\rho/\sigma)$ is usually understood as $\ln\rho - \ln\sigma$.
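To make this reading explicit, assume the standard identification $\omega(\mathrm{m}) = \operatorname{tr}[\rho\,\mathrm{m}]$ and $\varphi(\mathrm{m}) = \operatorname{tr}[\sigma\,\mathrm{m}]$ (an assumption about the notation used here). The limiting case $r \to 0$ of (17) then takes the familiar semifinite form
\[
\mathcal{D}(\omega;\varphi) = \operatorname{tr}\bigl[\rho\,(\ln\rho - \ln\sigma)\bigr] + \operatorname{tr}\sigma - \operatorname{tr}\rho \;\ge\; 0,
\]
which is finite only when the support of $\rho$ is contained in the support of $\sigma$.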
3.2. The general information divergences
The general information divergence $\mathcal{D}(\omega;\varphi)$ of states on a matrix algebra $\mathcal{M}$ can be defined, like a distance, to have only positive values; unlike a distance, however, it is not assumed to be symmetric or to satisfy the triangle inequality, and it is usually allowed to take also the infinite value $+\infty$, say, for some central states $\omega \ne \varphi$ on an infinite-dimensional $\mathcal{M} \subseteq \mathcal{B}(\mathcal{H})$.
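A simple illustration of both the asymmetry and the infinite value, in the commuting diagonal case assumed for concreteness: with normalized densities $\rho = \sum_i p_i E_i$ and $\sigma = \sum_i s_i E_i$ on an infinite-dimensional $\mathcal{H}$, the relative entropy $\sum_i p_i \ln(p_i/s_i)$ may diverge, for instance when $s_i = 0 < p_i$ for some $i$, while the divergence of $\varphi$ from $\omega$ can remain finite.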
The