logic, belief functions, upper and lower probabilities, or any other alterna-
tive to probability, can better be done with probability.
These extreme claims are often justified by referring to a theorem by Cox [1946,
1961], which supposedly established that the only coherent way of representing
and dealing with uncertainty is to use rules of probability calculus. However, it
has been shown that either the theorem does not hold under assumptions stated
explicitly by Cox [Halpern, 1999], or it holds under additional, hidden assump-
tions that are too strong to capture all aspects of uncertainty [Colyvan, 2004] (see
Colyvan’s paper for additional references on this topic).
10.5. The distinction between the broad concept of uncertainty and the narrower
concept of probability has been obscured in the literature on classical, probabil-
ity-based information theory. In this extensive literature, the Hartley measure is
routinely portrayed as a special case of the Shannon entropy that emerges from
the uniform probability distribution. This view is ill-conceived since the Hartley
measure is totally independent of any probabilistic assumptions, as correctly rec-
ognized by Kolmogorov [1965] and Rényi [1970b]. Strictly speaking, the Hartley
measure is based on one concept only: a finite set of possible alternatives, which
can be interpreted as experimental outcomes, states of a system, events, messages,
and the like, or as sequences of these. In order to use this measure, possible alter-
natives must be distinguished, within a given universal set, from those that are
not possible. It is thus the possibility of each relevant alternative that matters in
the Hartley measure. Hence, the Hartley measure can be meaningfully general-
ized only through broadening the notion of possibility. This avenue is now avail-
able in terms of the theory of graded possibilities and other nonclassical
uncertainty theories that are the subject of GIT.
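The point can be made concrete with a short Python sketch. This is an illustration only: the set of states and its size are assumptions for the example, not taken from the text. It shows that the Hartley measure is computed from the set of possible alternatives alone, even though it numerically coincides with the Shannon entropy of the uniform distribution over that set:

```python
from math import log2

def hartley_measure(alternatives):
    """Hartley measure of nonspecificity: log2 of the number of possible
    alternatives. It depends only on which alternatives are distinguished
    as possible, not on any probability distribution."""
    return log2(len(alternatives))

def shannon_entropy(probs):
    """Shannon entropy of a probability distribution, in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Four possible states of a system (state names are illustrative).
states = {"s1", "s2", "s3", "s4"}
h = hartley_measure(states)          # log2(4) = 2.0 bits

# The Shannon entropy of the uniform distribution over the same four
# states yields the same number ...
u = shannon_entropy([0.25] * 4)      # also 2.0 bits

# ... but the Hartley measure is defined with no reference to any
# distribution, which is the independence noted in the text above.
print(h, u)
```

The numerical coincidence is what invites the "special case" reading criticized in the note; the code makes clear that only `shannon_entropy` ever touches a probability distribution.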
10.6. As shown by Dretske [1981, 1983], a study of semantic aspects of information
requires a well-founded underlying theory of uncertainty-based information.
While in his study Dretske relies only on information expressed in terms of the
Shannon entropy, the broader view of GIT allows us now to approach the same
study in a more flexible and, consequently, more meaningful fashion. Studies
regarding pragmatic aspects of information, such as those by Whittemore and
Yovits [1973, 1974] and Yovits et al. [1981], should be affected likewise. Hence,
the use of the various novel uncertainty theories, uncertainty measures, and
uncertainty principles in the study of semantic and pragmatic aspects of infor-
mation will likely be another main, long-term direction of research.
10.7. The aims of GIT are very similar to those of the generalized theory of uncertainty
(GTU), which was recently proposed by Lotfi Zadeh [2005]. However, the
two theories are formulated quite differently. In GTU, information is viewed in
terms of generalized constraints on values of given variables. These constraints
are expressed, in general, in terms of the granular structure of linguistic variables.
In the absence of any constraint regarding the variables of concern, we are totally
ignorant about their actual state. In this situation, our uncertainty about the
actual state of the variables is maximal. Any known constraint regarding the
variables reduces this uncertainty, and may thus be viewed as a source of
information. The concept of a generalized constraint, which is central to GTU, has many
distinct modalities. Choosing any of them reduces the level of generality. Making
further choices will eventually result in one of the classical theories of uncer-