On-Line File Caching
Abstract
Consider the following file caching problem: in response to a sequence of requests for files, where each file has a specified size and retrieval cost, maintain a cache of files of total size at most some specified k so as to minimize the total retrieval cost. Specifically, when a requested file is not in the cache, bring it into the cache and pay the retrieval cost, and remove other files from the cache so that the total size of the files remaining in the cache is at most k. This problem generalizes previous paging and caching problems by allowing objects of arbitrary size and cost, both important attributes when caching files for world-wide-web browsers, servers, and proxies. We give a simple deterministic on-line algorithm that generalizes many well-known paging and weighted-caching strategies, including least-recently-used, first-in-first-out, flush-when-full, and the balance algorithm. On any request sequence, the total retrieval cost incurred by the algorithm is at most k/(k - h + 1) times the minimum possible using a cache of size h ≤ k. We further show that, for any algorithm satisfying the latter bound, the retrieval cost for most choices of k is either insignificant or at most a constant (independent of k) times the optimum. This helps explain why the competitive ratios of many on-line paging algorithms have typically been observed to be constant in practice.
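The abstract does not reproduce the algorithm itself; the sketch below is only an illustration of a credit-based caching scheme consistent with the description above (a strategy of this kind is known in the literature as Landlord). The File record, the function names, and the choice to reset a file's credit to its full cost on a hit are illustrative assumptions, not details taken from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class File:
    name: str
    size: int     # size of the file
    cost: float   # retrieval cost paid whenever the file is brought in

def serve(requests, k):
    """Serve a sequence of File requests with a cache of total size at most k.

    Returns the total retrieval cost paid. This is a sketch, not the paper's
    pseudocode: on a hit it simply resets the file's credit to its full cost.
    """
    cached = {}   # name -> File currently in the cache
    credit = {}   # name -> remaining credit of that cached file
    total_cost = 0.0

    for g in requests:
        assert g.size <= k, "each file is assumed to fit in the cache on its own"

        if g.name in cached:
            # Hit: refresh the file's credit (one of several valid choices).
            credit[g.name] = g.cost
            continue

        # Miss: evict until g fits, then retrieve g and pay its cost.
        while sum(f.size for f in cached.values()) + g.size > k:
            # Cached file with the least credit per unit of size.
            victim = min(cached, key=lambda name: credit[name] / cached[name].size)
            delta = credit[victim] / cached[victim].size
            # Charge every cached file "rent" proportional to its size ...
            for name, f in cached.items():
                credit[name] -= delta * f.size
            # ... which drives the victim's credit to zero; evict it.
            del cached[victim]
            del credit[victim]

        cached[g.name] = g
        credit[g.name] = g.cost
        total_cost += g.cost

    return total_cost

if __name__ == "__main__":
    a = File("a", size=2, cost=3.0)
    b = File("b", size=3, cost=1.0)
    c = File("c", size=4, cost=5.0)
    print(serve([a, b, c, a, b], k=6))   # prints 10.0 for this toy sequence
```

In variants of such credit-based schemes, the credit on a hit may be reset to any value between its current value and the file's cost; in the uniform-size, uniform-cost case, different choices recover familiar paging strategies such as least-recently-used, first-in-first-out, and flush-when-full, in line with the generalization claimed in the abstract.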