Subj: Re: best data structure for LRU cache?
To  : comp.programming
From: Jon Harrop
Date: Fri Aug 12 2005 08:34 pm

Alex Fraser wrote:
> "Jon Harrop" wrote in message
> news:42fb8f31$0$17477$ed2e19e4@ptn-nntp-reader04.plus.net...
>> I think that is quite a small discrepancy. If you use C++ and the STL
>> then the list allocator is likely to make efficient use of an array for
>> you (avoiding big allocation and deallocation costs)
>
> I'd like to think so but I wouldn't bet on it.

IIRC, the STL bundled with gcc does a good job here.

>> but if you use an FPL like OCaml then the cost of cons may well outweigh
>> the cost of looping over a small array (sequential access).
>
> I'm not sure what you mean here.
>
> The sequential access of the array is the advantage compared to a list;

The cost of allocating a cons cell for the list may well be greater than
the cost of non-sequential memory access, depending upon the method of
allocation.

> the disadvantage is the need to move chunks of the array around.

Yes.

> The catch with using an array is that while you can mitigate that
> disadvantage by storing pointers to items in the array, in doing so you
> lose the advantage of sequential access.

Very true. Hadn't thought of that. It isn't necessarily true, but it is
likely to be true in general, or for a generic data structure.

So arrays will be faster for tiny caches and lists will be asymptotically
faster, but it isn't clear where the cross-over is, i.e. lists and arrays
perform equivalently for "n"-entry caches, but what is the value of "n"?
The answer will be very platform-, architecture-, language- and
implementation-specific...

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
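
For what it's worth, the list-based approach being discussed is usually
realised as a doubly-linked list of entries plus a hash map of iterators
into it, so that a hit can be promoted to the front in O(1) with splice()
rather than by a linear scan or by shifting array elements. A minimal C++
sketch of that idea (the class and names here are mine, for illustration
only, not anyone's posted code):

```cpp
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

// Minimal list-based LRU cache sketch. The std::list holds (key, value)
// pairs in recency order (front = most recently used); the hash map
// stores iterators into the list so hits avoid any linear scan.
class LruCache {
    std::size_t capacity_;
    std::list<std::pair<int, std::string>> items_;
    std::unordered_map<int,
        std::list<std::pair<int, std::string>>::iterator> index_;

public:
    explicit LruCache(std::size_t capacity) : capacity_(capacity) {}

    void put(int key, const std::string& value) {
        auto it = index_.find(key);
        if (it != index_.end()) {
            it->second->second = value;
            // Move the touched node to the front without reallocation.
            items_.splice(items_.begin(), items_, it->second);
            return;
        }
        if (items_.size() == capacity_) {
            // Evict the least recently used entry (back of the list).
            index_.erase(items_.back().first);
            items_.pop_back();
        }
        items_.emplace_front(key, value);
        index_[key] = items_.begin();
    }

    // Returns a pointer to the value, or nullptr on a miss.
    const std::string* get(int key) {
        auto it = index_.find(key);
        if (it == index_.end()) return nullptr;
        items_.splice(items_.begin(), items_, it->second);
        return &it->second->second;
    }
};
```

An array-based version would instead keep entries contiguous and shift
them on each hit; for tiny caches that memmove-style shuffling can beat
the pointer chasing and per-node allocation above, which is exactly the
cross-over question raised in the thread.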