Newsgroups: comp.graphics
Path: utzoo!utgpu!news-server.csri.toronto.edu!rpi!uupsi!eye!erich
From: erich@eye.com (Eric Haines)
Subject: Re: Ray Tracing Optimization Idea
Message-ID: <1991Mar26.202601.9830@eye.com>
Keywords: Exploit coherence
Reply-To: erich@eye.com ()
Organization: 3D/Eye Inc., Ithaca, NY
References: <1991Mar21.183340.19319@dartvax.dartmouth.edu>
Date: Tue, 26 Mar 91 20:26:01 GMT

In article <1991Mar21.183340.19319@dartvax.dartmouth.edu> npw@eleazar.dartmouth.edu (Nicholas Wilt) writes:
>I had a simple, interesting idea to make tracing primary rays more
>efficient.  Cache the last object intersected; intersect with it
>first and if hit, determine whether it is still the nearest object as
>follows:
>	1. Fire a ray from the intersection back at the eye
>	2. If no object is intersected, the cached object is still nearest
>	3. If any objects are intersected, find the farthest intersection
>	   that's not beyond the eye.  This is the new nearest intersection
>	   to the eye.

There's really no need to shoot a ray back from the intersection:  simply
shoot the ray from the eye, but first attempt to intersect the object you hit
last time.  If you hit it, then use its intersection distance as a maximum
bound on the intersection tests with the rest of the environment.  If the ray
hits some other object, but that object's distance is beyond your cutoff
distance, then you know it can be ignored.  This is equivalent to your
approach:  objects _beyond_ the cached object as seen from the eye correspond
to objects _behind_ your new ray (shot from the intersection back to the eye).

The idea of using the distance to the nearest intersection is an old one
(which in computer graphics terms means probably 8 years old), and I don't
think anyone claims to have invented it.  I independently came up with the
trick of first intersecting each new eye ray with the object hit by the last
eye ray to get an initial distance bound, and I'm sure others thought of it
independently, too (great
minds think alike, as do mediocre ones like mine).

Doing this caching trick for all rays (eye, reflection, refraction) is a win.
In fact, I found that doing caching along with Kay/Kajiya sorting would
sometimes _lose_ me a little speed compared to doing just the cache
intersection followed by a straight, unsorted traversal of the hierarchy, as
the caching alone often provided much of the Kay/Kajiya speedup.  The gain
from testing potentially closer objects first was often nullified by the time
spent sorting.  Kay/Kajiya saves time by trying to intersect potentially
closer objects first:  when the nearest intersection distance found so far is
closer than the closest potential distance of the next object on the sorted
list of objects to be tested (you got all that?), testing ends.
So, if we know a close hit (or any hit, even if not the closest) early on, we
partly do what Kay/Kajiya does, since we can eliminate all objects beyond this
distance.
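A sketch of that early-termination test, again with illustrative names of my
own choosing:  each candidate carries the nearest possible distance at which
its bounding volume could be hit (t_min), and in this toy also its actual hit
distance (t_hit, or infinity on a miss).

```c
#include <math.h>
#include <stdlib.h>

typedef struct { double t_min; double t_hit; } Candidate;

static int by_t_min(const void *a, const void *b)
{
    double d = ((const Candidate *)a)->t_min - ((const Candidate *)b)->t_min;
    return (d > 0) - (d < 0);
}

/* Process candidates nearest-first; once the best hit so far is closer
 * than the next candidate's minimum possible distance, no later candidate
 * can win, so testing ends. */
static double nearest_hit(Candidate *c, size_t n, size_t *tested)
{
    qsort(c, n, sizeof *c, by_t_min);
    double best = INFINITY;
    *tested = 0;
    for (size_t i = 0; i < n; i++) {
        if (c[i].t_min > best) break;    /* Kay/Kajiya early termination */
        (*tested)++;
        if (c[i].t_hit < best) best = c[i].t_hit;
    }
    return best;
}
```

Seeding `best` with a cached hit distance before the loop, instead of
infinity, is exactly how the object cache buys back part of this benefit
without the sort.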

Of course, caching works only when there's a fair bit of image coherence - the
"field of grass" image they did is a possible example of Kay/Kajiya paying off
over the single object cache (though I've never done a comparison).

The first time I know of these ideas being in print is the "Tracing Tricks"
article in the "Intro to Ray Tracing" course notes in SIGGRAPH '89 (pretty
obscure, eh?).  The whole article is on-line as "RTfull9" (I think) via
anonymous FTP at weedeater.math.yale.edu.  It's Volume 2, Number 8 of the Ray
Tracing News.  Get a copy to see some of the other optimizations people have
thought up (but rarely turned into articles) over the years.  Some of these
will be in "Graphics Gems II", by the way.

Eric
