I think he's just saying that it could be done.
>Stereographic
>viewing is NOT strictly 3d transformed to 2d, and may be what many
>people envision when they think of VR.
This is the central problem. Assuming a stereographic view, it
probably doesn't work: the two eye views carry real depth information,
which would expose the contradictions that a single flat projection
can hide.
>Also, the problem with doing
>the Escher transform, is that the actual physical representation of
>the space would change based on the 2d results as the viewer moved,
>basically a reverse of how the transform normally works. The point is
>developing a 3d model to represent Escherian space and have a standard
>viewing transform for it, which is most likely not possible. What you're
>suggesting would involve more than one viewing engine, basically one
>for every view produced, as there is no standard algorithm for
>producing Escherian effects.
If you assume a monoscopic (single-viewpoint) rendering instead, then
it might be possible. What it would mean is that instead of generating
a 3D world up front and just navigating it in real time, you would
need some kind of transform that generates the world on the fly as you
navigate it. This isn't how we normally think about a virtual world,
but there's no real reason you couldn't build one this way (technical
limitations notwithstanding).
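Just to make that concrete, here is a toy sketch of the kind of thing
I'm imagining. None of the names come from any real engine; it's only
meant to show where the on-the-fly generation would sit:

# A toy sketch, not any existing engine's API: in a conventional
# renderer the world is a fixed polygon database and only the camera
# moves; here the geometry handed to the renderer is a function of
# where the viewer is and how they got there, so each view can be
# locally consistent even though no single global 3D model exists.
# Every name below (Viewer, world_around, render) is made up.

from dataclasses import dataclass

@dataclass
class Viewer:
    pos: tuple                  # (x, y, z) position
    flights_climbed: int = 0    # navigation history the viewer can't see

def world_around(viewer: Viewer) -> list:
    """Generate geometry on the fly for the current viewpoint.

    The staircase always appears to climb, yet after four flights the
    bookkeeping wraps around and the viewer is back where they started;
    the contradiction never shows up in any single generated view.
    """
    flight = viewer.flights_climbed % 4
    return [
        {"type": "stair", "flight": flight, "base": viewer.pos},
        {"type": "landing", "flight": (flight + 1) % 4},
    ]

def render(viewer: Viewer) -> None:
    for polygon in world_around(viewer):
        print("draw", polygon)      # stand-in for a real rasterizer

v = Viewer(pos=(0.0, 0.0, 0.0))
for _ in range(5):                  # walk up five flights of stairs
    render(v)
    v = Viewer(pos=v.pos, flights_climbed=v.flights_climbed + 1)
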
>What would be more interesting, I think, is to take the 3d space and
>play games with representing 4d space... i.e. extrapolate what Escher
>might do with the same tools.
Tell me what you're thinking of with 4D worlds. It's interesting
conceptually, but I'm not sure what it means in practical--or, for
that matter, even visual--terms.
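The only concrete picture I can form is projecting four dimensions
down to three the same way a renderer projects three down to two, and
then viewing the result normally. Something like this toy sketch,
which is only my guess at what you mean (the names are made up):

# Perspective-project 4D points into 3D by dividing by distance from
# the eye along the extra axis, the same trick as the usual z-divide,
# then feed the result to an ordinary viewing transform.

from itertools import product

def project_4d_to_3d(point, eye_w=3.0):
    """Perspective-project a 4D point (x, y, z, w) into 3D space."""
    x, y, z, w = point
    scale = 1.0 / (eye_w - w)
    return (x * scale, y * scale, z * scale)

# The 16 vertices of a tesseract (4D hypercube) with corners at +/-1.
tesseract = list(product((-1.0, 1.0), repeat=4))
for vertex in tesseract:
    print(vertex, "->", project_4d_to_3d(vertex))
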
--Andy