Submitted by James Gregson, posted on 26 June 2003



Image Description, by James Gregson


This image shows my latest terrain engine. I've stolen the landscape and texture images from here. In this shot there are 80,000 triangles within the frustum, or approximately 2.4M triangles/second (roughly 30 frames per second at that triangle count), which is not terrific as terrain rendering algorithms go.

However, this engine features smooth LOD. There are no discrete LOD levels to switch between and stitch together; instead a constant level of detail (pixels per polygon) is defined and maintained at all depths. There is no spatial partitioning system, and there is very little overdraw. A highly accurate and very fast occlusion culling algorithm culls invisible terrain regions in negligible time over and above the tessellation of the landscape, and performs terrain-object occlusion culling in about 10 operations per object bounding-volume vertex. System overhead is nearly constant (~6% variation) and memory usage is minimal. The algorithm scales to all terrain sizes (I've tested it with 256x256 up to 16384x16384 heightmaps) with no performance penalty. And since the detail is constant at all depths, the triangles that are rendered are used very efficiently, which probably makes up for the lower overall triangle throughput.
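To illustrate the idea of a constant pixels-per-polygon detail level, here is a minimal C++ sketch; it is my illustration, not the engine's actual code, and the names and the exact error metric are assumptions. A terrain patch is refined whenever its world-space geometric error, projected to the screen at the patch's distance, exceeds a pixel threshold, so detail stays roughly constant at all depths.

```cpp
// Sketch only: refine a patch while its projected (screen-space) error
// exceeds a pixel budget. Patch/camera parameters are made up for the example.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Distance from the camera to the patch centre.
static float distanceTo(const Vec3& eye, const Vec3& p) {
    float dx = p.x - eye.x, dy = p.y - eye.y, dz = p.z - eye.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Project a patch's world-space geometric error (in metres) into pixels at
// the given distance, and compare against the allowed screen-space error.
static bool needsRefinement(float geometricError, float distance,
                            float verticalFovRadians, float screenHeight,
                            float maxPixelError) {
    // At this distance the frustum spans 2*d*tan(fov/2) metres vertically,
    // which maps to screenHeight pixels.
    float pixelsPerMetre =
        screenHeight / (2.0f * distance * std::tan(verticalFovRadians * 0.5f));
    return geometricError * pixelsPerMetre > maxPixelError;
}

int main() {
    Vec3 eye{0.0f, 50.0f, 0.0f};
    Vec3 patchCentre{400.0f, 10.0f, 300.0f};
    float dist = distanceTo(eye, patchCentre);
    bool refine = needsRefinement(/*geometricError=*/2.0f, dist,
                                  /*verticalFovRadians=*/1.0f,
                                  /*screenHeight=*/768.0f,
                                  /*maxPixelError=*/2.0f);
    std::printf("distance = %.1f, refine = %s\n", dist, refine ? "yes" : "no");
    return 0;
}
```

With a test like this driving refinement, distant patches naturally coarsen and nearby ones split, so the "pixels per poly" target is held without discrete LOD states.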

The above scene took 27 ms to update the landscape. With all the portions of the process that can be moved to graphics hardware omitted, the update time drops to 16 ms. With SIMD and standard optimizations, that could likely be halved again. Next-generation cards that can do texture lookups in the vertex transformation stage (i.e. compliant with the glSlang spec) would be able to implement everything except an initialization phase and 3 matrix multiplies per frame as dynamic vertex buffers, giving all the advantages above with essentially zero system overhead.
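To give a feel for what "an initialization phase and 3 matrix multiplies per frame" would leave on the CPU, here is a minimal C++ sketch; the matrix choices and names are my assumptions, not the engine's code. Once heights are fetched in the vertex stage, the per-frame CPU work reduces to composing a few 4x4 matrices that get handed to the vertex program.

```cpp
// Sketch only: the per-frame CPU side shrinks to a handful of matrix products.
// The specific three matrices the engine would compose are an assumption.
#include <cstdio>

struct Mat4 { float m[16]; };  // column-major 4x4 matrix

// result = a * b for column-major 4x4 matrices.
static Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row) {
            float sum = 0.0f;
            for (int k = 0; k < 4; ++k)
                sum += a.m[k * 4 + row] * b.m[col * 4 + k];
            r.m[col * 4 + row] = sum;
        }
    return r;
}

static Mat4 identity() {
    Mat4 r{};
    r.m[0] = r.m[5] = r.m[10] = r.m[15] = 1.0f;
    return r;
}

int main() {
    // Stand-ins for the projection, view and terrain model matrices.
    Mat4 projection = identity(), view = identity(), model = identity();

    // The three per-frame multiplies; the combined matrices would be uploaded
    // to the vertex program, which does the heightmap lookup and displacement.
    Mat4 viewProj      = multiply(projection, view);
    Mat4 modelViewProj = multiply(viewProj, model);
    Mat4 modelView     = multiply(view, model);  // e.g. for lighting

    std::printf("modelViewProj[0][0] = %f\n", modelViewProj.m[0]);
    (void)modelView;
    return 0;
}
```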

James Gregson




