BEYOND THE GILLESPIE ALGORITHM
on it being greater than t). Again, a combination of the memoryless property and the rescaling property (Proposition 3.20) ensures that this is okay.
The next thing worth noting is that it is assumed that the algorithm "knows" which hazards are affected by each reaction. Gibson & Bruck (2000) suggest that this is done by creating a "dependency graph" for the system. The dependency graph has nodes corresponding to each reaction in the system. A directed arc joins node i to node j if a reaction event of type i induces a change of state that affects the hazard for the reaction of type j. These can be determined (automatically) from the forms of the associated reactions. Using this graph, if a reaction of type i occurs, the set of all children of node i in the graph gives the set of hazards that need to be updated.
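As a concrete illustration, such a dependency graph can be built automatically from the stoichiometries. The sketch below assumes reactions are represented as (reactants, products) pairs of dictionaries mapping species names to counts; this representation is an assumption for illustration, not a notation from the text.

```python
# Sketch: building a dependency graph automatically from reaction forms.
# A reaction is assumed to be a (reactants, products) pair of dicts
# mapping species name -> count (an illustrative representation).

def net_change(reaction):
    """Species whose counts are altered by one occurrence of the reaction."""
    reactants, products = reaction
    species = set(reactants) | set(products)
    return {s for s in species
            if products.get(s, 0) - reactants.get(s, 0) != 0}

def build_dependency_graph(reactions):
    """Directed arcs i -> j wherever firing reaction i changes a species
    appearing in the hazard (i.e. the reactant list) of reaction j."""
    graph = {}
    for i, reaction in enumerate(reactions):
        changed = net_change(reaction)
        graph[i] = {j for j, (reactants_j, _) in enumerate(reactions)
                    if changed & set(reactants_j)}
    return graph

# Example: the Lotka-Volterra reactions
# R0: X -> 2X,  R1: X + Y -> 2Y,  R2: Y -> (nothing)
reactions = [({"X": 1}, {"X": 2}),
             ({"X": 1, "Y": 1}, {"Y": 2}),
             ({"Y": 1}, {})]
print(build_dependency_graph(reactions))
# -> {0: {0, 1}, 1: {0, 1, 2}, 2: {1, 2}}
```

Note that a reaction may be its own child (as R0 above is), since firing it changes species appearing in its own hazard.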
An interesting alternative to the dependency graph is to work directly on the Petri net representation of the system. Then, for a given reaction node, the set of all "neighbours" (species nodes connected to that reaction node), X, is the set of all species that can be altered. Then the set of all reaction nodes that are "children" of a node in X is the set of all reaction nodes whose hazards may need updating. This approach is slightly conservative in that the resulting set of reaction nodes is a superset of the set which absolutely must be updated, but nevertheless represents a satisfactory alternative.
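This neighbour-based rule can be sketched in the same assumed (reactants, products) representation: collect the species nodes adjacent to the fired reaction node, then every reaction node adjacent to one of those species.

```python
# Sketch of the conservative Petri-net update rule, assuming reactions are
# (reactants, products) dicts mapping species name -> count (illustrative).

def update_set_via_petri_net(reactions, fired):
    """Species nodes adjacent to the fired reaction node (the set X),
    then all reaction nodes adjacent to any species in X."""
    reactants, products = reactions[fired]
    x_set = set(reactants) | set(products)        # neighbouring species
    return {j for j, (rj, pj) in enumerate(reactions)
            if x_set & (set(rj) | set(pj))}

# Lotka-Volterra again: R0: X -> 2X, R1: X + Y -> 2Y, R2: Y -> (nothing)
reactions = [({"X": 1}, {"X": 2}),
             ({"X": 1, "Y": 1}, {"Y": 2}),
             ({"Y": 1}, {})]
```

The conservatism arises because a reaction node can be adjacent to a species in X through a product arc only, in which case its hazard does not actually depend on that species.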
This algorithm is now "local" in the sense that all computations (bar one) involve only adjacent nodes on the associated Petri net representation of the problem. The only remaining "global" computation is the location of the index of the smallest reaction time. Gibson and Bruck's clever solution to this problem is to keep all reaction times (and their associated indices) in an "indexed priority queue." This is another graph, allowing searches and changes to be made using only fast and local operations; see the original paper for further details of exactly how this is done. The advantage of having local operations on the associated Petri net is that the algorithm becomes straightforward to implement in an event-driven object-oriented programming style, consistent with the ethos behind the Petri net approach. Further, such an implementation could be multi-threaded on an SMP or hyper-threading machine, and would also lend itself to a full message-passing implementation on a parallel computing cluster. For further information on parallel stochastic simulation, see Wilkinson (2005).
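One plausible realisation of such a structure is a binary min-heap of reaction indices ordered by putative time, together with a position map so that any reaction's time can be updated in O(log n) and the minimum read off in O(1). This is an illustrative sketch, not Gibson and Bruck's exact data structure; see their paper for the original.

```python
class IndexedPriorityQueue:
    """Sketch of an indexed priority queue: a binary min-heap of reaction
    indices ordered by putative reaction time, plus a position map so the
    time of any reaction can be changed with only local swap operations."""

    def __init__(self, times):
        self.times = dict(times)          # reaction index -> putative time
        self.heap = list(self.times)      # reaction indices, heap-ordered
        self.pos = {i: p for p, i in enumerate(self.heap)}
        for p in range(len(self.heap) // 2 - 1, -1, -1):
            self._sift_down(p)

    def min(self):
        """The (index, time) pair of the next reaction: an O(1) lookup."""
        i = self.heap[0]
        return i, self.times[i]

    def update(self, i, new_time):
        """Change reaction i's putative time and restore heap order."""
        old = self.times[i]
        self.times[i] = new_time
        if new_time < old:
            self._sift_up(self.pos[i])
        else:
            self._sift_down(self.pos[i])

    def _swap(self, p, q):
        self.heap[p], self.heap[q] = self.heap[q], self.heap[p]
        self.pos[self.heap[p]], self.pos[self.heap[q]] = p, q

    def _sift_up(self, p):
        while p > 0 and self.times[self.heap[p]] < self.times[self.heap[(p - 1) // 2]]:
            self._swap(p, (p - 1) // 2)
            p = (p - 1) // 2

    def _sift_down(self, p):
        n = len(self.heap)
        while 2 * p + 1 < n:
            c = 2 * p + 1
            if c + 1 < n and self.times[self.heap[c + 1]] < self.times[self.heap[c]]:
                c += 1
            if self.times[self.heap[c]] >= self.times[self.heap[p]]:
                break
            self._swap(p, c)
            p = c

q = IndexedPriorityQueue({0: 5.0, 1: 2.0, 2: 9.0})
q.update(2, 0.5)   # reaction 2's time recomputed after an event
print(q.min())     # -> (2, 0.5)
```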
This algorithm is more efficient than Gillespie's direct method in the sense that only one new random number needs to be simulated for each reaction event which takes place, as opposed to the two that are required for the Gillespie algorithm. Note, however, that selective recalculation of the hazards, hi(x, ci) (and the cumulative hazard h0(x, c)), is also possible (and highly desirable) for the Gillespie algorithm, and could speed up that algorithm enormously for large systems. Given that the Gillespie algorithm otherwise requires fewer operations than the next reaction method, and does not rely on the ability to efficiently store putative reaction times in an indexed priority queue, the relative efficiency of a cleverly implemented direct method and the next reaction method is likely to depend on the precise structure of the model and the speed of the random number generator used.
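To illustrate the selective recalculation idea for the direct method: after a reaction fires, only the hazards of its children in the dependency graph are recomputed, and h0 is adjusted incrementally by the sum of the changes rather than re-summed over all reactions. The mass-action hazard and the (reactants, products) representation below are illustrative assumptions.

```python
from math import comb

# Illustrative sketch: selective hazard recalculation for the direct method.
# Reactions are (reactants, products) dicts; mass-action hazards assumed.

def mass_action_hazard(j, x, reactions, c):
    """h_j(x, c_j) = c_j times the number of ways of choosing the reactants."""
    reactants, _ = reactions[j]
    h = c[j]
    for s, k in reactants.items():
        h *= comb(x[s], k)
    return h

def apply_reaction(fired, x, h, h0, reactions, c, dep_graph):
    """Update the state, recompute only the affected hazards, and adjust
    the cumulative hazard h0 incrementally. Returns the new h0."""
    reactants, products = reactions[fired]
    for s, k in reactants.items():
        x[s] -= k
    for s, k in products.items():
        x[s] += k
    for j in dep_graph[fired]:
        new = mass_action_hazard(j, x, reactions, c)
        h0 += new - h[j]       # only the changed terms touch h0
        h[j] = new
    return h0

# Lotka-Volterra example: R0: X -> 2X, R1: X + Y -> 2Y, R2: Y -> (nothing)
reactions = [({"X": 1}, {"X": 2}),
             ({"X": 1, "Y": 1}, {"Y": 2}),
             ({"Y": 1}, {})]
c = [1.0, 0.1, 0.5]
dep_graph = {0: {0, 1}, 1: {0, 1, 2}, 2: {1, 2}}
x = {"X": 10, "Y": 5}
h = [mass_action_hazard(j, x, reactions, c) for j in range(3)]
h0 = sum(h)
h0 = apply_reaction(1, x, h, h0, reactions, c, dep_graph)
```

For a system with many reactions but a sparse dependency graph, each event then costs work proportional to the node's out-degree rather than to the total number of reactions.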