4.6 A microscopic view of entropy
state (discussed in Chapters 8 and 9) for which it is useful to have an understanding of the
microscopic underpinnings of entropy. In addition to this practical reason, there is a more
fundamental one. We saw in Section 1.14 that internal energy, the macroscopic concept that
forms the basis for the First Law, has a well-defined microscopic interpretation. It would be
deeply unsatisfactory if the same were not true of entropy, which is at the core of the Second
Law. There is also much misuse and abuse of the word entropy by non-scientists and pseudo-
scientists, and for this reason too it is important to have a deeper understanding of what it is.
The microscopic interpretation of entropy begins with the concepts of the microstate of
a system vs. its macrostate. The macrostate of a system is defined solely on the basis of
macroscopic variables and does not require knowledge of the underlying microscopic con-
figuration of the system, which is what we call its microstate. For example, the macrostate
of an ideal gas made up of a single chemical species is fully specified by its temperature
and its pressure, from which we can calculate all of its other macroscopic parameters, such
as molar volume, internal energy, enthalpy and entropy (we shall see how). Although this
macroscopic specification of the state of a system is independent of any knowledge of its
underlying microscopic structure, in Section 1.14 we saw that we can also define the internal
energy of an ideal gas in terms of a microscopic model called the kinetic theory of gases. In
order to describe the microstate of a system made up of an ideal gas we would need to specify
the position and velocity of each of the individual molecules. Here lies the crucial differ-
ence between macrostates and microstates: a given macrostate may arise from many
different microstates. For example, we know that the distribution of molecular speeds
is as given by Fig. 1.12, but there are many ways in which 10²³ individual molecules may
arrange themselves to give this distribution (see Box 4.1). Each of these different ways is
a microstate. As far as the macrostate is concerned we don’t care what the velocity of each
individual molecule is, i.e. which specific equivalent microstate we have, as long as the
velocity distribution is the one that corresponds to that specific macrostate.
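The distinction can be made concrete with a toy system small enough to enumerate exhaustively. The following sketch (illustrative only; the choice of 4 coins is an assumption, not from the text) lists every microstate of a set of coins and groups them by macrostate, here defined by the number of heads:

```python
# Enumerate all microstates of N = 4 coins and group them by macrostate.
# A microstate records the face of every individual coin; the macrostate
# keeps only the total number of heads.
from collections import Counter
from itertools import product

N = 4
# Each microstate is an ordered tuple of faces, e.g. ('H', 'T', 'H', 'H').
microstates = list(product("HT", repeat=N))

# Tally how many microstates correspond to each macrostate (head count).
macrostates = Counter(state.count("H") for state in microstates)

print(len(microstates))              # 16 microstates in total (2^4)
print(sorted(macrostates.items()))   # [(0, 1), (1, 4), (2, 6), (3, 4), (4, 1)]
```

The tally shows exactly the point made above: a single macrostate (say, 2 heads) corresponds to several distinct microstates (here 6), and the macrostate does not care which one of them the system happens to be in.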
Box 4.1
Counting microstates
The microscopic interpretation of entropy, and indeed much of statistical mechanics, relies on being able to
count the number of equivalent microstates. We define all microstates that give rise to the same macrostate
as being equivalent. Consider a situation in which there are N objects of the same kind, each of which can
exist in i different states, with N ≥ i. For example, it could be a group of N coins, each of which can be in one
of two states (heads or tails), or a group of N octahedral sites in a crystal of olivine, each of which can be in
one of two states (filled with Mg or Fe²⁺), or a group of N particles in a gas, each of which can be in one of i
different energy levels. We want to know how many possible microstates these systems have. A microstate
is defined by the number of objects that are in each possible state. All microstates with the same number of
objects in each state are equivalent and correspond to the same macrostate. For example, if we have 5 coins,
then all microstates with 3 heads and 2 tails are equivalent, and correspond to a single macrostate that is
different from the one that arises from all microstates in which there are 4 heads and 1 tail. We wish to
calculate the number of equivalent microstates that underlie a given macrostate.
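Before deriving the general counting formula, the 5-coin example can be checked by brute force. This sketch (an illustration, not part of the text's derivation) enumerates all 2⁵ microstates and counts those belonging to each of the two macrostates mentioned above, comparing the result with the binomial coefficient:

```python
# Brute-force count of equivalent microstates for the 5-coin example:
# how many of the 32 possible microstates have 3 heads, and how many have 4?
from itertools import product
from math import comb

coins = list(product("HT", repeat=5))   # all 2^5 = 32 microstates

three_heads = sum(1 for s in coins if s.count("H") == 3)
four_heads = sum(1 for s in coins if s.count("H") == 4)

print(three_heads, comb(5, 3))   # 10 10 -> "3 heads, 2 tails" macrostate
print(four_heads, comb(5, 4))    # 5 5   -> "4 heads, 1 tail" macrostate
```

Direct enumeration agrees with the combinatorial count: the "3 heads, 2 tails" macrostate is underlain by 10 equivalent microstates, while "4 heads, 1 tail" is underlain by only 5.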
The total number of arrangements of the N objects, which you can think of as the number of ways in
which we can choose them one at a time, is N!: we can choose the first in N ways, the next in (N − 1)
ways, the next one in (N − 2) ways, and so on. You can visualize this process as arranging the objects in a
row, but this is just a help in visualization and has nothing to do with any putative spatial distribution of the
objects. We thus have N! possible arrangements of objects (e.g. coins), and each object can be in one of i
different states (e.g. heads or tails). Let nᵢ be the number of objects that have the property i, i.e. that are in