to evict without knowledge of any future requests. If the algorithm knew which pages would not be referenced for a long time, it could discard those. However,
this information is not available as a processor issues requests to pages in an
online fashion. The following algorithm works very well in practice.
Online strategy Least-Recently-Used (LRU): On a page fault,
if the main memory is full, evict the page that has not been requested
for the longest time, considering the string of past memory requests.
Then load the missing page.
In our example LRU would evict page B because it has not been requested
for the longest time. The intuition behind LRU is that pages whose last request occurred long ago are not relevant at the moment and hopefully will not be needed in the near future. One can show that LRU is k-competitive, where k is the number of pages that can simultaneously reside in main memory. This is a high competitive ratio, as k takes large values in practice. On the other hand, the result gives a provable performance guarantee for LRU, which does not necessarily hold for other online paging algorithms.
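To make the rule concrete, here is a small sketch of LRU page replacement in Python; the function name, the page identifiers, the memory size k, and the request sequence are made-up examples rather than part of the text.

def lru_faults(requests, k):
    """Simulate LRU paging and return the number of page faults when
    main memory holds at most k pages."""
    memory = []              # resident pages, least recently used first
    faults = 0
    for page in requests:
        if page in memory:
            memory.remove(page)          # hit: refresh the page's recency
        else:
            faults += 1                  # fault: the page must be loaded
            if len(memory) == k:
                memory.pop(0)            # evict the least recently used page
        memory.append(page)              # requested page is now most recent
    return faults

# Hypothetical request sequence with a main memory of k = 3 pages.
print(lru_faults(["A", "B", "C", "A", "D", "B"], k=3))   # prints 5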
Besides LRU, there are other well-known strategies for the paging problem:
Online strategy First-In First-Out (FIFO): On a page fault, if
the main memory is full, evict the page that was loaded into main
memory first. Then load the missing page.
Online strategy Most-Recently-Used (MRU): On a page fault,
if the main memory is full, evict the page that was requested last.
Then load the missing page.
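FIFO and MRU differ from LRU only in which page they select for eviction. The following sketch mirrors the LRU simulation above; the function names and the structure are again illustrative, not taken from the text.

from collections import deque

def fifo_faults(requests, k):
    """FIFO paging: evict the page that was loaded into memory first."""
    memory = deque()
    faults = 0
    for page in requests:
        if page not in memory:
            faults += 1
            if len(memory) == k:
                memory.popleft()         # evict the oldest resident page
            memory.append(page)          # hits leave the queue order unchanged
    return faults

def mru_faults(requests, k):
    """MRU paging: evict the page that was requested last."""
    memory = []
    last_requested = None                # most recently requested page
    faults = 0
    for page in requests:
        if page not in memory:
            faults += 1
            if len(memory) == k:
                memory.remove(last_requested)   # evict the most recently used page
            memory.append(page)
        last_requested = page
    return faults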
One can prove that FIFO is also k-competitive. However, in practice, LRU
is superior to FIFO. For MRU, one can easily construct a request sequence on which MRU, given a full main memory, incurs a page fault on every request. Consider two additional pages A and B, not residing in main memory, that are requested alternately. On the first request to A, this page is loaded into main memory after some other page has been evicted. On
page is loaded into main memory after some other page has been evicted. On
the following request to B, page A is evicted because it was requested last.
The next request to A incurs another page fault as A was just evicted from
main memory. MRU evicts page B, which was referenced last, so that the
following request to B incurs yet another fault. This process goes on as long
as A and B are requested. MRU incurs a page fault on each of these requests.
On the other hand, an optimal offline algorithm loads A and B into main memory on their first requests by evicting two other pages. Both A and B then remain in main memory while the alternating requests to them are processed. Hence the optimal offline algorithm incurs only two page faults, and MRU performs very poorly on this request sequence. Such cyclic request sequences are quite common because programs typically contain loops that access a few different memory pages. Consequently, MRU should not be used in practice.
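Reusing the sketches above, the alternating request sequence from this argument can be replayed; the page names and the memory size k = 3 are again only an illustration.

# Main memory of k = 3 pages is first filled by requests to C, D, and E;
# afterwards the two new pages A and B are requested alternately.
requests = ["C", "D", "E"] + ["A", "B"] * 5

print(mru_faults(requests, k=3))   # 13 faults: 3 initial ones plus one per A/B request
print(lru_faults(requests, k=3))   # 5 faults: 3 initial ones plus one each for A and B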