Newsgroups: comp.lang.c++
Path: utzoo!utgpu!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!sdd.hp.com!elroy.jpl.nasa.gov!decwrl!parc!boehm
From: boehm@parc.xerox.com (Hans Boehm)
Subject: Re: reference counting vs. scanning GC
Message-ID: <boehm.674163589@siria>
Keywords: locality, large address spaces, reference counting, garbage collection
Sender: news@parc.xerox.com
Organization: Xerox PARC
References: <1991Apr24.192424.23745@kestrel.edu> <72149@microsoft.UUCP> <boehm.673720634@siria> <1991May11.054648.2483@kestrel.edu>
Date: 13 May 91 19:39:49 GMT

gyro@kestrel.edu (Scott Layson Burson) writes:

>In article <boehm.673720634@siria> boehm@parc.xerox.com (Hans Boehm) writes:
>>Fixed placement allocators typically allocate successively allocated
>>objects close to each other.  (If they don't, they should.)

>They don't, at least not when used simplistically.  When objects are
>freed, allocators attempt to reuse the space they took up.  The
>strategies for doing this vary, of course, but the bottom line is that
>after a lot of allocations and freeings, successively allocated
>objects get less and less likely to be close to each other.

>One factor that exacerbates the problem greatly is allocating
>short-lived and long-lived objects in the same space.  So it would
>seem that the first thing to do would be to try to segregate objects
>by expected lifetime.

Granted, things will get a little worse as more objects are reallocated, but
not terribly so.  Let's be a bit more precise.  This issue is of interest 
primarily for small (< 1 page) objects.  As an example, let's assume we're
allocating 8 byte cons cells on 4K pages.  Let's further assume that pages
contain only objects of one size.  (This is likely to be a good choice for
a nonmoving allocator.)  Let's further assume the heap is 3/4 full.

When I garbage collect, I will find that an average page contains 512/4 = 128
free objects.  These will end up next to each other on the size-8 free list.
Thus the probability of two subsequent allocations residing on the same
page is about 127/128, assuming all of memory is equally densely populated.
This is worse than what I would get for newly allocated memory from a
copying collector (which would get 511/512).  But it is probably better than
what I would get by considering those same list nodes after they've been
breadth-first copied a few times.

Reality is of course more complicated.  Different object sizes interact,
though this is probably not a major effect, since the number of different
object sizes in a typical data structure is probably small.  In our
experience, pages tend to be very nonuniformly populated, even after
a system has been up for quite a while.  This helps
the nonmoving allocator, since it can choose not to allocate on nearly
full pages.
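One way to exploit that nonuniformity, sketched in C++: when refilling a free
list, skip pages that are nearly full.  The `Page` struct, the 1/8 threshold,
and `pick_page` are hypothetical illustrations of the idea, not a description
of any actual collector:

```cpp
#include <vector>

// Per-page bookkeeping for one size class: how many slots are free.
struct Page { int free_slots; };

// When refilling the free list, prefer the emptiest page, and ignore
// pages with fewer than objs_per_page/8 free slots (i.e. more than
// 7/8 full).  Returns -1 if every page is nearly full, meaning the
// allocator should grow the heap instead.
int pick_page(const std::vector<Page>& pages, int objs_per_page) {
    int best = -1;
    int best_free = objs_per_page / 8;  // threshold: skip nearly full pages
    for (int i = 0; i < (int)pages.size(); ++i) {
        if (pages[i].free_slots > best_free) {
            best = i;
            best_free = pages[i].free_slots;
        }
    }
    return best;
}
```

Allocating from sparsely populated pages keeps long runs of the free list on
one page, which is exactly what makes the 127/128 estimate above plausible.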

Breadth-first copying is of course not optimal.  But other strategies seem
to make it harder to build concurrent collectors on conventional hardware.
The whole point is still that the tradeoffs are nontrivial.

Hans
(boehm@xerox.com)

