Structures For Physics
Question submitted by (06 February 2000)




I'm currently working on a 3D game project using OpenGL (and soon, DirectX). I'm putting together a realtime physics engine for the dynamic objects in the game, and my question is this: when dealing with character animation, unless I do something like a fully-featured skeletal animation system with bones, etc., how can I correctly have dynamic models with partially static geometry react to other objects? I should explain what I mean by partially static geometry... I think it would be easiest to store keyframes of animation (prebuilt movement data) along with textures, etc., because it will cut down some calculation time on the models... but the way the basic physics works is that if objects collide, they react and adjust accordingly. That means if my model is a man walking along a flat surface and the "foot" of the model hits a rock, it'll move the entire model and look terribly unrealistic.

The next problem is that I want to use display lists in OpenGL, as they are much faster than standard storage, but that means you can't change them after they are created. The last problem is that since I can't change the display lists, I can't even begin to have a static keyframe react realistically to the other objects. I'm looking for speed along with the solution. Would the best way be to create each individual body part in the modeler, attach them at joints and animate keyframes there, but export each part as a model and create a partially functional skeletal animation system in which joints would have ranges of motion, etc., but new positions based upon the skeleton wouldn't be calculated unless part of the static geometry was deformed? And even if that is a viable solution (which I think it is), how can I modify the display lists? Performance is key.
 
 

 
If I understand you correctly, what you want is a hierarchical, skeleton-based animation system with inverse kinematics thrown in to deal with character/world interaction (check out the Halo movies). Getting good performance out of a system like this is tricky, but not impossible.

First: forget display lists. They're not going to help you here, since they take a lot of cycles to compile. They're great if you have an object that you're going to draw often and whose vertices don't change relative to each other (no morphing/blending). Display lists should be reserved for something you allocate once when you load the level and never change. You can still place the object in the scene using the matrices, but that's about it.
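For reference, that intended use looks roughly like this. This is a minimal sketch, assuming a GL context already exists; DrawLevelGeometry() is a placeholder of my own, not something from the question.

    #include <GL/gl.h>

    void DrawLevelGeometry();   // placeholder: issues the immediate-mode calls for the level

    GLuint levelList = 0;

    // Compile once at level load. GL_COMPILE records the calls without executing them.
    void CompileLevelList()
    {
        levelList = glGenLists(1);
        glNewList(levelList, GL_COMPILE);
        DrawLevelGeometry();
        glEndList();
    }

    // Every frame: position the object with the matrix stack, then replay the list.
    void DrawLevel(float x, float y, float z)
    {
        glPushMatrix();
        glTranslatef(x, y, z);
        glCallList(levelList);
        glPopMatrix();
    }

    // At level unload.
    void FreeLevelList()
    {
        glDeleteLists(levelList, 1);
    }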

With a skeletal animation system you have your bones (defined as matrices or quaternions) and you have your vertices. Each vertex is influenced by N bones, and the influence is weighted (for example, 20% of one bone, 30% of another, and 50% of a third). When drawing this type of object you transform each vertex by each of its associated bones, scale each result by its weight, and add the results together (there are more efficient ways, but that's the basic idea). Many of the vertices in a human model will only be influenced by one bone, but vertices near joints will have at least two, and a face can need more than 16 pretty quickly. In both OpenGL (there's an extension from NVIDIA) and Direct3D (as of DX7) there are APIs designed for weighted vertex blending, but there are some catches. The first is that you are limited to 2 bones on current hardware (GeForce 256), and the second is that you cannot change matrices in the middle of a triangle: all vertices of a triangle must use the same matrices, although each vertex gets its own set of weights. If you can live with those limitations, then stick each segment of the model into a vertex buffer and draw away with full hardware acceleration (see the specific API documentation for details). This will get you the best performance on a GeForce.
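To make the blending math concrete, here is a minimal CPU-side sketch of that weighted sum. The structures and the four-bone limit are illustrative assumptions, not part of either API; as noted above, the hardware paths only give you two bones per vertex.

    struct Vec3 { float x, y, z; };
    struct Mat4 { float m[16]; };                 // column-major, OpenGL-style

    // Transform a point (w = 1) by a 4x4 matrix.
    Vec3 Transform(const Mat4& M, const Vec3& v)
    {
        Vec3 r;
        r.x = M.m[0] * v.x + M.m[4] * v.y + M.m[8]  * v.z + M.m[12];
        r.y = M.m[1] * v.x + M.m[5] * v.y + M.m[9]  * v.z + M.m[13];
        r.z = M.m[2] * v.x + M.m[6] * v.y + M.m[10] * v.z + M.m[14];
        return r;
    }

    const int MAX_BONES_PER_VERTEX = 4;           // hardware blending above allows only 2

    struct SkinnedVertex
    {
        Vec3  position;                           // bind-pose (rest) position
        int   bone[MAX_BONES_PER_VERTEX];         // indices into the bone palette
        float weight[MAX_BONES_PER_VERTEX];       // weights sum to 1.0
    };

    // Transform the vertex by each bone, scale by the weight, and sum the results.
    Vec3 BlendVertex(const SkinnedVertex& v, const Mat4* bonePalette)
    {
        Vec3 result = { 0.0f, 0.0f, 0.0f };
        for (int i = 0; i < MAX_BONES_PER_VERTEX; ++i)
        {
            if (v.weight[i] <= 0.0f)
                continue;
            Vec3 p = Transform(bonePalette[v.bone[i]], v.position);
            result.x += p.x * v.weight[i];
            result.y += p.y * v.weight[i];
            result.z += p.z * v.weight[i];
        }
        return result;
    }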

For cards without vertex blending (everything else besides the GeForce) you must generate the keyframes yourself with your own transformation routines. The added benefit is that you can now build in whatever limits you like (16 or more bones per vertex for facial animation... whatever you need). You take the resulting vertices and place them into a vertex buffer, then use the graphics API to send them to the card for rendering. One thing I've heard of others doing is generating the keyframes at a lower rate than the display rate (say 10 Hz) and then using the much less costly method of linearly interpolating the vertices from one keyframe to the next. I have heard that 3D hardware is coming that will take two vertex buffers and linearly interpolate between them for you, and an extension to D3D is being proposed to Microsoft for this purpose.
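A sketch of that interpolation step, reusing the Vec3 struct from the previous sketch (the function and parameter names are made up for illustration): the skinned keyframes are produced at the lower rate, and each display frame just lerps between the two that bracket the current time.

    #include <cstddef>

    // t is in [0, 1]: how far the current display frame sits between keyframes A and B.
    void LerpKeyframes(const Vec3* keyA, const Vec3* keyB,
                       Vec3* out, std::size_t vertexCount, float t)
    {
        for (std::size_t i = 0; i < vertexCount; ++i)
        {
            out[i].x = keyA[i].x + (keyB[i].x - keyA[i].x) * t;
            out[i].y = keyA[i].y + (keyB[i].y - keyA[i].y) * t;
            out[i].z = keyA[i].z + (keyB[i].z - keyA[i].z) * t;
        }
        // 'out' then goes into a vertex buffer and is handed to the graphics API.
    }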

This is not a topic that I've had a lot of personal experience with and things are changing pretty quickly on this front, but that should be the basics. I expect that this will become a hot topic with the second generation of cards with 3D geometry acceleration.



Response provided by Tom Hubina
 
 

This article was originally an entry in flipCode's Fountain of Knowledge, an open Question and Answer column that no longer exists.


 
