Main Loop with Fixed Time Steps
Submitted by Javier Arevalo
The following description came up during a thread in the SWEng mailing list,
and I thought it would make an interesting Tip Of The Day. Thanks to Neil
Stewart for his input, and Dave Smith for bringing it up in the first place.
Most games want to keep a consistent gameplay speed regardless of the
framerate achieved by the visuals. Given the wildly varying performance of
PC hardware, how to do this is not immediately obvious. The key idea here is the
assumption that rendering takes MUCH more time than game logic.
In our current project, we're using a fixed timestep scheme. In the past I
have worked with variable timesteps, and somehow always keep coming back to
fixed. :-) What we do in our main game loop is, essentially:
time0 = getTickCount();
do
{
    time1 = getTickCount();
    frameTime = 0;
    int numLoops = 0;

    while ((time1 - time0) > TICK_TIME && numLoops < MAX_LOOPS)
    {
        GameTickRun();
        time0 += TICK_TIME;
        frameTime += TICK_TIME;
        numLoops++;
        // Could this be a good idea? We're not doing it, anyway.
        // time1 = getTickCount();
    }
    IndependentTickRun(frameTime);

    // If playing solo and game logic takes way too long, discard pending time.
    if (!bNetworkGame && (time1 - time0) > TICK_TIME)
        time0 = time1 - TICK_TIME;

    if (canRender)
    {
        // Account for numLoops overflow causing percent > 1.
        float percentWithinTick = Min(1.f, float(time1 - time0)/TICK_TIME);
        GameDrawWithInterpolation(percentWithinTick);
    }
}
while (!bGameDone);
Let's examine the main parts:
while ((time1 - time0) > TICK_TIME && numLoops < MAX_LOOPS)
{
    GameTickRun();
    time0 += TICK_TIME;
    frameTime += TICK_TIME;
    numLoops++;
    // time1 = getTickCount();
}
The concept behind this is forcing every game logic tick to represent a
fixed amount of real time (real time being the time shown on your wristwatch).
Of course, a tick had better take less CPU time to execute than the real time
it represents. For the record, a variable timestep model would look more like
this:

GameTickRun(time1 - time0); // Each tick represents a varying amount of time.
time0 = time1;
I won't go into the details & issues of a variable timestep model, although it
would make another great TOTD (hint, hint).
Imagine the following situation: you want to have a ball rolling at a speed
of 10 pixels per second (real-time seconds). You have a TICK_TIME of 50
milliseconds (20 ticks per second), which means that every tick, or call to
GameTickRun(), the ball should move 0.5 pixels. That's the internal speed the
game should use each time it updates the ball's position inside the
GameTickRun() function.
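As a quick sketch of that calculation (the ball variable, constant names and
update function body below are made up for illustration, not taken from any
actual game code):

// Hypothetical sketch: converting a real-time speed into a per-tick speed.
const int   TICK_TIME       = 50;    // Milliseconds per game logic tick (20 per second).
const float BALL_SPEED      = 10.f;  // Desired speed in pixels per real-time second.
const float BALL_SPEED_TICK = BALL_SPEED * TICK_TIME / 1000.f;  // = 0.5 pixels per tick.

float ballX = 0.f;

void GameTickRun()
{
    // Each tick moves the ball a fixed amount; the real-time speed comes from
    // the fixed tick rate, not from the framerate.
    ballX += BALL_SPEED_TICK;
}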
Now, let's say that rendering each frame takes 100 milliseconds, and that
updating the ball position takes no time. This means that every time you
enter the loop, time1 will be 100 more than time0, therefore the loop will
execute twice for each frame rendered (at 10 fps). If each render takes 50
milliseconds (say you got a better videocard), the loop will be done just
once and you'll be achieving 20 fps. In both cases, you'll get your ball
moving at 10 pixels per second, the second one with smoother framerate than
the first. Remember, we're not fixing a framerate, we're rendering as fast
as our hardware permits, and running the game logic whenever we have to.
Things get more interesting if we say that we're achieving a framerate of 30,
so each render takes 33.33 milliseconds to complete. What do we get? The
first time we run the loop after rendering, we find that only 33.33
milliseconds have elapsed, not enough to fulfill a single game logic tick
(TICK_TIME == 50), so we just skip the game logic and render a frame. Next
time the elapsed time is 66.67 milliseconds, so we run one logic tick and
account for the 50 milliseconds a tick represents, leaving 16.67 milliseconds,
and render. Next frame we have 16.67 + 33.33 = 50 milliseconds, so we run a
tick, get a remainder of 0 milliseconds, render, and so on...
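If you want to check that arithmetic, here is a small stand-alone simulation
of the bookkeeping described above (an illustration only, not code from the
game):

#include <cstdio>

int main()
{
    const float TICK_TIME  = 50.f;        // One logic tick represents 50 ms.
    const float FRAME_TIME = 1000.f/30;   // Rendering at 30 fps: ~33.33 ms per frame.

    float pending = 0.f;                  // Plays the role of (time1 - time0).
    for (int frame = 1; frame <= 6; frame++)
    {
        pending += FRAME_TIME;            // Real time that passed while rendering this frame.
        int ticks = 0;
        while (pending >= TICK_TIME)      // Run as many fixed ticks as fit in the pending time.
        {
            pending -= TICK_TIME;
            ticks++;
        }
        printf("frame %d: %d tick(s), %.2f ms left over\n", frame, ticks, pending);
    }
    return 0;
}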
The real-world situation is that your framerate varies anywhere from 30 to
60 fps for a fast PC game (console games often try to achieve a constant
framerate).
As for the concern that a series of game logic ticks might take more time to
execute than the real time they represent, that's what MAX_LOOPS is for: it
limits the number of ticks executed before rendering. Yes, I have seen this
happen many times; that's why I added it. If you're playing a network game,
you usually can't just slow down the "game time" unless all machines do the
same, so you can't use the MAX_LOOPS guard. (I'm assuming a lockstep network
model.)
I love the separation between frames (visual updates) and ticks (game logic
updates); it makes everything so simple.
IndependentTickRun(frameTime);
This is where we gather user input, update the whole user interface, and
perform general housekeeping. :) These are things that don't need to run at a
precise real-time speed. Since we want our game logic to be perfectly
deterministic, the things executed here (or during rendering) must not affect
the fixed-step game logic in any way. In fact, we have a run mode where we
record the user commands to a file, and we can play back these commands
entirely within the timestep loop above. This is an invaluable aid for
debugging, and is the basis for the lockstep network model we employ.
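A minimal sketch of what such command recording could look like; the
PlayerCommand structure and the function names below are invented for
illustration, since the article doesn't show this part of the code:

#include <cstdio>
#include <vector>

// Hypothetical sketch: store per-tick user commands so they can later be fed
// back into the fixed-step loop, reproducing the exact same simulation.
struct PlayerCommand { int tick; int buttons; float moveX, moveY; };

static std::vector<PlayerCommand> g_recorded;

void RecordCommand(const PlayerCommand& cmd)
{
    // Called while gathering input, stamped with the tick the command applies to.
    g_recorded.push_back(cmd);
}

void SaveRecording(const char* path)
{
    if (FILE* f = fopen(path, "wb"))
    {
        fwrite(g_recorded.data(), sizeof(PlayerCommand), g_recorded.size(), f);
        fclose(f);
    }
}

During playback, the stored commands are applied at their recorded tick
numbers inside the fixed-step loop, which is what keeps the replayed
simulation identical to the original run.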
float percentWithinTick = Min(1.f, float(time1 - time0)/TICK_TIME);
GameDrawWithInterpolation(percentWithinTick);
This little trick allows us to interpolate the positions, orientations, etc.
of the visuals in between game ticks. It smooths the visual aliasing caused
by the framerate not being locked to the tick rate, which is our case: we run
a game logic tick 15 times per second (TICK_TIME = 1000/15), while displaying
at framerates anywhere in the 20-60 frames per second range (fingers crossed).
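For illustration, here's the kind of blend GameDrawWithInterpolation() might
perform per object; the Ball structure, its fields and DrawSprite() are
invented for this sketch:

// Hypothetical sketch: draw an object somewhere between its previous-tick and
// current-tick positions, using the fraction of a tick that has elapsed.
struct Ball { float prevX, prevY; float currX, currY; };

void DrawSprite(float x, float y);  // Hypothetical rendering call, defined elsewhere.

void DrawBallInterpolated(const Ball& b, float percentWithinTick)
{
    float x = b.prevX + (b.currX - b.prevX) * percentWithinTick;
    float y = b.prevY + (b.currY - b.prevY) * percentWithinTick;
    DrawSprite(x, y);
}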
Javier Arevalo
Pyro Studios