1118 Part E Modeling and Simulation Methods
and magnetic polarization of a magnet, upon various
external conditions such as temperature, pressure, and
magnetic field, instead of the detailed trajectories of
individual particles. Statistical mechanics is the
discipline that reveals the simple rules governing
systems of huge numbers of components. In many inter-
esting cases, unfortunately, statistical mechanics cannot
provide a final compact expression for the desired phys-
ical quantities because of the many-body effects of the
interactions among the components, even though the bare
interaction is only two-body. Many approximate theories
have been developed, which make the field of condensed
matter physics very rich. With the huge leap in the
power of computers, the Monte Carlo approach based
on the principles of statistical mechanics has been de-
veloped over recent decades. This chapter is devoted
to introducing the basic notions of the Monte Carlo
method [22.1], with several examples of its application.
22.1.1 Boltzmann Weight
According to the principle of statistical mechanics, the
probability for a particular microscopic state appearing
in an equilibrium system attached to a heat reservoir
with temperature T is given by the Boltzmann weight
p_i = \frac{e^{-\beta E_i}}{\sum_i e^{-\beta E_i}} \equiv \frac{e^{-\beta E_i}}{Z} \, ,  (22.1)
where β = 1/k_B T, E_i is the energy of the system in the
particular state, and Z is the partition function. The
expectation value of any physical quantity O, such as the
magnetic polarization of a magnet, can be evaluated by
\langle O \rangle = \frac{1}{Z} \sum_i O_i \, e^{-\beta E_i} \, .  (22.2)
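As a concrete illustration (not part of the original text), the weights of (22.1) and the average of (22.2) can be evaluated directly for a small hypothetical system with only a few energy levels; the function names and the three-level spectrum below are invented for this sketch, written in Python:

```python
import math

def boltzmann_probabilities(energies, beta):
    """Eq. (22.1): p_i = exp(-beta * E_i) / Z, with Z the partition function."""
    weights = [math.exp(-beta * e) for e in energies]
    Z = sum(weights)  # the partition function
    return [w / Z for w in weights], Z

def expectation(observable, energies, beta):
    """Eq. (22.2): <O> = (1/Z) * sum_i O_i * exp(-beta * E_i)."""
    probs, _ = boltzmann_probabilities(energies, beta)
    return sum(o * p for o, p in zip(observable, probs))

# hypothetical three-level system; the observable O_i is the energy itself
energies = [0.0, 1.0, 2.0]
mean_energy = expectation(energies, energies, beta=1.0)
```

Summing the unnormalized weights gives Z; dividing by it turns the weights into the probabilities of (22.1).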
The difficulty one meets in applications is that there
are so many possible states that it is practically
impossible to compute the partition function and the
expectation values of the desired physical quantities.
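The scale of the problem is easy to quantify: for two-state (Ising-type) components the number of states doubles with every added component, as the hypothetical count below illustrates.

```python
def n_states(n_spins):
    """Number of microscopic states of n two-state (Ising) spins: 2**n."""
    return 2 ** n_spins

small = n_states(20)    # about a million states: still enumerable
large = n_states(1000)  # astronomically many: direct enumeration is hopeless
```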
22.1.2 Monte Carlo Technique
Monte Carlo techniques overcome this difficulty by
choosing a subset of the possible states and approximating
the expectation value of a physical quantity by the av-
erage over this subset of limited size. The way the
subset of states is picked can be specified by ourselves
and is crucial to the accuracy of the resulting estimate.
A successful Monte Carlo simulation thus relies heavily
on the sampling scheme.
Importance Sampling
One can regard the statistical expectation value as a time
average over the states a system passes through during the
course of a measurement. This suggests that we
take a sample of the states of the system such that
the probability of any particular state being chosen is
proportional to its Boltzmann weight. This importance
sampling method is the simplest and most widely used
in Monte Carlo approaches. It is easy to see that the
estimate of the desired expectation value is then given
simply by
O_M = \frac{1}{M} \sum_{i=1}^{M} O_i \, ,  (22.3)
since the Boltzmann weight is already built into the
sampling process. This estimate is much more accurate
than one obtained by simple sampling, especially at low
temperatures, where the system spends a large portion
of its time in a few low-energy states.
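This can be checked numerically. The sketch below (hypothetical names and a made-up three-level system) draws states with probability proportional to their Boltzmann weights using Python's `random.choices` and averages the observable as in (22.3):

```python
import math
import random

def importance_sample_average(energies, observable, beta, M, seed=1):
    """Draw M states with probability proportional to exp(-beta * E_i),
    then return the plain average of O over the sample, as in eq. (22.3)."""
    rng = random.Random(seed)
    weights = [math.exp(-beta * e) for e in energies]
    picks = rng.choices(range(len(energies)), weights=weights, k=M)
    return sum(observable[i] for i in picks) / M

# hypothetical three-level system; the observable is the energy itself,
# whose exact expectation value (1/Z) * sum_i E_i exp(-E_i) is about 0.425
energies = [0.0, 1.0, 2.0]
estimate = importance_sample_average(energies, energies, beta=1.0, M=100_000)
```

No reweighting appears in the average: the Boltzmann factor entered when the sample was drawn, which is the whole point of importance sampling.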
The importance sampling described above is implemented
by means of a Markov process. A Markov
process for Monte Carlo simulation generates a Markov
chain of states: starting from a state a, it produces
a new state b; fed with b, it produces a third state c;
and so on. A certain number of states at the head
of the Markov chain are dropped, since they still depend
on the initial state a. After running for a sufficiently
long time, however, the states picked should obey the
Boltzmann distribution. This goal is guaranteed by the
ergodicity property and the condition of detailed balance.
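A minimal sketch of such a Markov process, assuming the standard single-spin-flip Metropolis rule on a small Ising ring (the model, observable, and parameters are illustrative, not from the text):

```python
import math
import random

def metropolis_chain(N, beta, n_steps, n_burn, J=1.0, seed=0):
    """Generate a Markov chain of N-spin Ising-ring states with the
    Metropolis rule and average |M| over the kept states, as in eq. (22.3).
    The first n_burn states are dropped since they depend on the start."""
    rng = random.Random(seed)
    spins = [rng.choice((-1, 1)) for _ in range(N)]
    total, kept = 0.0, 0
    for step in range(n_steps):
        i = rng.randrange(N)
        # energy change from flipping spin i (its two neighbors on the ring)
        dE = 2.0 * J * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i] = -spins[i]
        if step >= n_burn:
            total += abs(sum(spins))
            kept += 1
    return total / kept
```

Each step proposes a small change to the current state and accepts it with a probability chosen so that, after the burn-in, the visited states follow the Boltzmann distribution.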
Ergodicity
This property requires that any state of the system can
be reached from any other state via the Markov process,
provided we run it sufficiently long. The importance
of this condition is clear: otherwise the missing
states would carry zero weight in our simulation, which
is not the case in reality. Note that one is still free to
set the direct transition probability from one state to
many others to zero, as is usually done in many
algorithms.
Detailed Balance
This condition is slightly subtle and requires some con-
sideration. For the desired distribution vector p, whose
components are the Boltzmann weights of the individual
states, suppose we find an appropriate Markov
process, characterized by a set of transition probabilities
among the possible states, such that
\sum_j p_i P(i \to j) = \sum_j p_j P(j \to i) \, ,  (22.4)
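The per-pair (detailed balance) condition p_i P(i→j) = p_j P(j→i), which implies (22.4) after summing over j, is easy to verify numerically for the widely used Metropolis acceptance rule; the sketch below assumes a symmetric proposal step, so only the acceptance factor matters:

```python
import math

def metropolis_acceptance(Ei, Ej, beta):
    """Metropolis acceptance probability for a move from energy Ei to Ej
    (the symmetric proposal factor cancels): min(1, exp(-beta*(Ej - Ei)))."""
    return min(1.0, math.exp(-beta * (Ej - Ei)))

# check p_i P(i->j) == p_j P(j->i) for a pair of hypothetical energies
beta, Ei, Ej = 0.7, 0.2, 1.5
pi, pj = math.exp(-beta * Ei), math.exp(-beta * Ej)  # unnormalized weights
lhs = pi * metropolis_acceptance(Ei, Ej, beta)
rhs = pj * metropolis_acceptance(Ej, Ei, beta)
```

The normalization Z cancels from both sides, which is why unnormalized Boltzmann weights suffice for the check.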