Game Engines and Determinism
August 18, 2021
· Written by
Allie O'Brien
Brad Kriel

More than once I have heard this idea, set in stone, that “you cannot use game engines for physically accurate simulations”. Since I am not a physicist, I took this rule for granted and never argued against it. Until now 😉

Falcon, Duality’s digital twin simulator, leverages the Unreal Engine as a 3D operating system. So the dynamics of our digital twin world are PhysX-based, and we are currently evaluating Chaos for future releases. Both NVIDIA PhysX and Epic Chaos are ‘game-friendly’ physics engines.

What is a Physics Engine?

A physics engine is a pretty elaborate system for reproducing how forces interact with objects. I will focus on rigid body dynamics (RBD) in this post.

As a simple example, let’s define an object by its mass (we are not interested in its shape right now):

float MassOfObjectInKilos = 10;

And apply a force from top to bottom of 5 newtons:

float3 ForceToApply = (0, 0, -5);

Forces are represented as vectors because they have both a magnitude and a direction.

We want to know the position of the object after the force has been applied.

Thanks to Newton’s Second Law of motion, we know that:

F = m * a (F is our ForceToApply and m is our MassOfObjectInKilos)

Let’s solve it for ‘a’ (the acceleration we want to compute)

a = F / m

float3 Acceleration = ForceToApply / MassOfObjectInKilos;

// (0, 0, -5) / 10 = (0, 0, -0.5)

Ok, we now know that by applying a downward force of 5 newtons to the object we generate an acceleration of -0.5 meters per second squared (m/s²). Cool, but whenever ‘time’ (seconds in this case) becomes a meaningful variable in your software, things quickly get complicated.
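For the curious, the computation above can be sketched in a few lines of Python (the names just mirror the pseudocode, they are not from any real engine API):

```python
# Hedged sketch of the force/mass computation above (names are illustrative).
def acceleration(force_newtons, mass_kg):
    """Newton's second law solved for a: a = F / m, applied per component."""
    return tuple(f / mass_kg for f in force_newtons)

ForceToApply = (0.0, 0.0, -5.0)      # 5 newtons, from top to bottom
MassOfObjectInKilos = 10.0

Acceleration = acceleration(ForceToApply, MassOfObjectInKilos)
print(Acceleration)                   # (0.0, 0.0, -0.5)
```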

The hardest of all choices: Sampling

When you move from an analog world (like the one we live in) to a digital one (like software simulations), you need to decide at which rate you want to ‘sample’ the analog world.

Any kind of physical phenomenon in the real world (sound waves, moving objects, etc.) must be ‘sampled’ when reproduced in the digital world. Sampling is like taking a picture: you are getting information about a specific subject at a specific time. By taking more pictures of the same object at different times you will be able to ‘reconstruct’ its ‘behavior over time’.

Imagine a bouncing ball and taking pictures of it with your camera (something like 16 pictures per second, so a 16hz sampling).

What happens if we reduce the sampling from 16 to 4?

Yes, a lower sample rate means losing information.

So, which is the right sampling frequency? The answer (unfortunately) depends on the domain we are working with and on the so-called ‘Nyquist Limit/Frequency’.

In the world of sound waves (especially those related to the music industry) we have a pretty solid comprehension of, and experience with, sampling. When sampling an analog sound wave (like the one generated by a guitar), we know that sampling it 44,100 times per second results in very good quality for the common ear; in fact 44,100 is the default sample rate for compact discs (if you still use/buy them…). But why 44,100? The Nyquist theorem says that to sample a signal without losing information we need to sample it at double its maximum frequency. As the maximum human-audible frequency is around 20kHz, we end up with that value (the double of 22,050, which leaves a little headroom above 20kHz).

What about the image/video world?

Again the Nyquist theorem helps here: which is the maximum frequency our eyes can manage (remember that we need to double it!)? The psychophysics of vision says that the flicker fusion threshold (the frequency at which an intermittent light appears steady to humans) is about 15hz for rods (cones have a higher threshold). In fact 30hz is the minimum required for a good image signal (and what TV has used for decades).

The funniest example of ‘sampling’ in movies is the wagon-wheel effect. If you do not know about it, be prepared to have your mind blown: have you ever seen the wheels of a moving car appear to rotate in reverse? Well, I am pretty sure the answer is yes.

The reason is pretty simple: as the movie camera misses a lot of frames with respect to the speed of the wheel, the sampled positions of the spokes fool your brain into thinking they are moving in reverse. Wikipedia has a great page about it here.
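The effect is easy to reproduce numerically. A quick sketch with made-up numbers (a wheel spinning at 22 revolutions per second, filmed at 24 fps):

```python
def apparent_step_degrees(revs_per_second, frames_per_second):
    """Rotation the camera 'sees' between two frames, folded into (-180, 180]."""
    true_step = (revs_per_second / frames_per_second) * 360.0
    return ((true_step + 180.0) % 360.0) - 180.0

# 22 rev/s sampled at 24 fps: ~330 degrees per frame, which the brain reads
# as a small *negative* step, i.e. the wheel seems to spin backwards.
print(apparent_step_degrees(22, 24))

# A slow wheel (6 rev/s) is sampled well enough and looks correct:
print(apparent_step_degrees(6, 24))    # 90.0
```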

As a side (and funny) note, In the movie industry, 24 ‘camera shots’ per second are the standard sampling rate. The historical reason is about limits imposed by hardware like cameras and films, but nowadays is almost a ‘cultural’ one: for lot of people, me included, seeing a movie at 60fps or more, will result in the ‘soap opera’ effect (something we generally, wrongly, mark as a ‘cheap’ production). Cool fact: my kids prefer 60hz 😉

Now, sound waves and images are related to the human brain's ability to process signals at a certain rate, but when dealing with other physical phenomena, like the movement of a projectile, things start to become quite complex…

The main issue is that we are trying to sample something with no bounded maximum frequency (an object moving in space), so the math would be 2 * infinity. Definitely something we cannot manage with our machines 🙁

The classical example is a projectile vs a paper sheet (the projectile is so fast and the paper so thin that without a super high sampling rate we will lose the exact frame in which the projectile hits the paper). Or, back to our bouncing ball example, detecting when it hits the ground:

In the second example we have basically lost the information about the ball hitting the ground. No good.

The solution here is to increase the sample rate, but what happens if the ball moves faster? We need to increase it again, and so on and so on...

In addition, more samples mean more hardware resources, and simulations are the kind of software that can easily melt down your machine.

While theoretically higher sampling gives a smaller error, in complex systems, and taking implementation constraints into consideration, arriving at the optimal sampling value is often a tradeoff.

So, at which sample rate does the physics engine work? The correct answer is that there is always a compromise, and for specific situations (like collision detection of rigid bodies) we end up cheating 🙂 Continuous Collision Detection (CCD) is a common, though expensive, approach to the collision-of-fast-moving-objects problem: we basically reverse the problem by computing a potential intersection between objects from the available information (like the velocity vector), obtaining the ‘instant’ of the intersection.
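The core CCD idea can be sketched for the simplest case, a ball falling toward a ground plane at constant velocity (a hypothetical helper, not the PhysX API):

```python
def time_of_impact(height_m, velocity_z, ground_z=0.0):
    """Solve analytically for the instant the ball crosses the ground plane,
    instead of hoping a discrete sample lands on it. None if it never hits."""
    if velocity_z >= 0.0:
        return None                      # moving up or parallel: no impact
    return (ground_z - height_m) / velocity_z

# A ball 1 m above the ground, falling at 100 m/s, hits after 0.01 s --
# an event a 50hz sampler (one sample every 0.02 s) would step right over.
print(time_of_impact(1.0, -100.0))       # 0.01
```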

Moving Objects in the Digital World

Let’s assume that we have decided to sample the movement of our object 50 times per second; from a programming point of view, we are updating its position 50 times per second. Remember we know the acceleration:

float3 Acceleration = ForceToApply / MassOfObjectInKilos;

// (0, 0, -5) / 10 = (0, 0, -0.5)

float3 Speed = (0, 0, 0);

// This will be called every 1/50 seconds
void UpdateObjectPosition()
{
   const float SamplingRate = 1.0 / 50;

   Speed += Acceleration * SamplingRate;

   ObjectPosition += Speed * SamplingRate;
}
After having called UpdateObjectPosition() 50 times, we will get the position of the object at T1 (1 second) after having applied the force.

Now, this code will probably work if we simulate an object in outer space; if we want to simulate an object in our world, we need to take into account friction, drag, gravity and so on…
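To see why the sample rate matters, here is a hedged Python re-creation of the update loop above, compared against the analytic answer (½·a·t²). The function name and structure are just illustrative:

```python
def simulate_fall(acceleration, seconds, rate_hz):
    """Same update as the UpdateObjectPosition() sketch, run rate_hz times/s."""
    dt = 1.0 / rate_hz
    speed, position = 0.0, 0.0
    for _ in range(round(seconds * rate_hz)):
        speed += acceleration * dt
        position += speed * dt
    return position

analytic = 0.5 * -0.5 * 1.0 ** 2         # closed form: -0.25 m after 1 s
error_50hz = abs(simulate_fall(-0.5, 1.0, 50) - analytic)
error_500hz = abs(simulate_fall(-0.5, 1.0, 500) - analytic)
print(error_500hz < error_50hz)          # True: higher sampling, smaller error
```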

PhysX and SampleRate

The main cause of the ‘lack of trust’ in game engines (about determinism) is a pretty strict rule when simulating physics on a computer: you must always use the same sample rate for the whole simulation! The reasons are multiple, the simplest one being the intrinsic ‘lack of precision’ of real-number arithmetic (floating point cannot represent most real numbers exactly) on current machines.

The most common standard in use in current CPUs is IEEE 754. Without going into details, let’s see a classic example (from a Python shell):

>>> 0.022211 + 0.01



Yes, because of the way floating point numbers work under the hood, we can easily lose precision and introduce errors at a pretty fast pace. There are ways (well, tricks) to reduce that impact, but if you do not have a stable numeric source (like a fixed sample rate), fighting it quickly becomes a lost battle.
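To make the precision problem concrete, a short Python demo (the exact digits are a consequence of IEEE 754 double rounding):

```python
# The classic IEEE 754 surprise:
print(0.1 + 0.2 == 0.3)               # False

# And the part that hurts determinism: the *same* total time, accumulated
# in different frame deltas, produces different bits.
one_second_a = sum([0.1] * 10)        # ten 100 ms frames
one_second_b = sum([0.5] * 2)         # two 500 ms frames
print(one_second_a == one_second_b)   # False
```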

So, the main rule if you want an accurate, stable and deterministic physics simulation: use a fixed sample rate!

Adding the wall clock, rendering & humans in the loop

The physics simulation is only a part of the stack: very often we want a visual representation of what we are simulating, and we want a human to be able to recognize what is happening. This means that we need to burden our hardware with a real-time graphics rendering load and to introduce ‘true/human time’ management in the code (as we need to deal with how fast humans process signals, audio and video in our case). In the Falcon simulator we even allow the user to manually ‘pilot’ the simulated machines, and this introduces other (heavily time-based) logic too.

We know that 60hz is the current de-facto standard in terms of ‘responsiveness’ of input as well as graphics rendering. But there are systems where we can reach higher frame rates (as an example, my machine generally runs the basic Falcon simulation at 120hz).

In an ideal world, that 60hz or 120hz value would be fixed and solid, but unfortunately hardware power is a limited resource and we will often get unstable frame rates. A classic example is moving from a room to an open area: the amount of graphical objects to render (and physically compute) increases, dramatically slowing down the whole simulation. The opposite is true too: when the number of objects to manage in the 3D scene decreases, we get higher frame rates.

This constant, intrinsic instability leads to a common approach in games/simulations development: instead of using a fixed sample rate (like in the example code shown above) we constantly compute (at each tick) how much time the hardware required to complete the current frame, and we use that value (the famous DeltaTime) as the current sample rate.

So, if you want to move an object along the X axis at a speed of 10 units per second (assuming DeltaTime is expressed in fractions of a second):

Object.X += 10 * DeltaTime;

This is one of the first things you learn in game development courses, but if you remember the previous part of the article, it should set off alarms from the deterministic-physics point of view...
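Here is why DeltaTime-based stepping breaks determinism: two runs covering the same simulated second, but with different frame deltas, land in different places. A sketch (illustrative names, constant gravity):

```python
def integrate(frame_deltas, acceleration=-9.8):
    """The 'Object.X += Speed * DeltaTime' pattern, one update per frame."""
    speed, position = 0.0, 0.0
    for dt in frame_deltas:
        speed += acceleration * dt
        position += speed * dt
    return position

steady = integrate([0.01] * 100)                   # a rock-solid 100hz second
choppy = integrate([0.02] * 25 + [0.005] * 100)    # the same second, uneven frames
print(abs(steady - choppy) > 1e-3)                 # True: different trajectories
```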

Unreal and Physics

How does the Unreal Engine deal with physics?

First of all, we can remove the rendering from the equation: on the vast majority of non-mobile platforms, the rendering engine runs on a dedicated thread (that opens to a completely different set of problems, but unrelated to our determinism analysis).

The PhysX simulation (again, on the vast majority of non-mobile platforms) runs on a threadpool (4 worker threads by default) synchronized with the game thread. ‘Synchronized’ here is the key: the Unreal game thread triggers the PhysX simulation step at each tick, passing it an ‘AveragedFrameTime’ as the deltaTime. What is this value, and how much does it affect the determinism of the simulation?

The relevant code is in the /Engine/Source/Runtime/Engine/Private/PhysicsEngine/PhysScene_PhysX.cpp file, in the function FPhysScene_PhysX::TickPhysScene():

float UseDelta = FMath::Min(DeltaSeconds, MaxPhysicsDeltaTime);

// Only simulate a positive time step.
if (UseDelta <= 0.f)
{
   if (UseDelta < 0.f)
   {
      // only do this if negative. Otherwise, whenever we pause, this will come up
      UE_LOG(LogPhysics, Warning, TEXT("TickPhysScene: Negative timestep (%f) - aborting."), UseDelta);
   }
   return;
}

/**
 * Weight frame time according to PhysScene settings.
 */
AveragedFrameTime *= FrameTimeSmoothingFactor;
AveragedFrameTime += (1.0f - FrameTimeSmoothingFactor) * UseDelta;

PScene->simulate(AveragedFrameTime, Task, SimScratchBuffer.Buffer, SimScratchBuffer.BufferSize);

The ‘UseDelta’ value is computed by choosing the minimum value between the current DeltaTime and a global value (configurable in project settings, default 1.0/30) called ‘MaxPhysicsDeltaTime’.

Let’s stop here for a moment, as we have our first issue :) our DeltaTime is variable by definition (again, in the default Unreal setup): it can be 120hz for 90% of the simulation, but 15hz for the other 10%, leading to unstable physics stepping. There is an additional ‘issue’: when the frame rate drops to 15hz, the physics step is clamped to 1/30 of a second (as ‘MaxPhysicsDeltaTime’ is selected), so the simulated world advances less than the wall clock and effectively runs in slow motion.

Back to the code: we have the AveragedFrameTime computation. By default the FrameTimeSmoothingFactor is 0, which results in:

AveragedFrameTime *= 0;

AveragedFrameTime += (1-0) * UseDelta;

It should be pretty easy to see that AveragedFrameTime is equal to UseDelta in the default setup. But what happens if we tune the FrameTimeSmoothingFactor?

By doing the math you will realize that it basically slows down how fast the physics timestep moves to the new value. As an example, if in the previous frame you were running at 20hz and in the new one at 30hz, a smoothing factor of 0.5 gives you a frame time halfway between the two (about 24hz, since the code averages frame times rather than rates), and if the game remains stable at 30hz, the physics frame time slowly stabilizes there too.
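The smoothing update is easy to sketch and check numerically (the function below re-implements the same formula as the engine snippet, nothing more):

```python
def smooth(averaged_dt, use_delta, factor):
    """AveragedFrameTime update from TickPhysScene, as a pure function."""
    return averaged_dt * factor + (1.0 - factor) * use_delta

# factor 0 (the default): the physics step just follows the frame time.
assert smooth(0.05, 1 / 30, 0.0) == 1 / 30

# factor 0.5: jumping from 20hz (0.05 s) toward 30hz glides instead of snapping.
averaged = 1 / 20
for _ in range(10):                    # ten stable 30hz frames in a row
    averaged = smooth(averaged, 1 / 30, 0.5)
print(abs(averaged - 1 / 30) < 1e-4)   # True: it has almost converged to 1/30
```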

In the first part, I told you how important a stable sample rate is in a physics simulation. The default Unreal behavior is definitely good at giving a ‘smooth’ user experience, but it is not suited to scenarios where physics accuracy needs to be high.

Unreal FrameRate Management

In addition to the default behavior where Unreal ticks as fast as possible and synchronizes itself with the vsync (and remember that you can disable vsync too), there are two additional modes that you can set (that will obviously impact physics): Fixed Frame Rate and Smooth Frame Rate.

Fixed Frame Rate basically acts as a ‘speed limiter’: if you set it to 60hz and your engine is ticking at 90hz, a CPU sleep is added after each frame to maintain a stable 60hz rate (and this is good for physics accuracy).

But what happens when you are slower than the configured 60hz? Well, you are back to instability (and this is bad for physics too). Obviously, if you build your simulations in a way that always guarantees a minimum frame rate (like 30hz), this setup is the best bet for stable simulations.
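A ‘speed limiter’ of this kind can be sketched as follows (a hypothetical helper, not the engine’s actual code): it sleeps off whatever budget a fast frame left unused, and simply cannot help when the frame overruns.

```python
import time

def limit_frame(frame_start, target_hz=60):
    """Sleep away the remaining frame budget so the frame lasts ~1/target_hz."""
    budget = 1.0 / target_hz
    elapsed = time.perf_counter() - frame_start
    if elapsed < budget:
        time.sleep(budget - elapsed)   # fast frame: pad it out
    # slow frame: nothing to sleep away -- the rate (and physics stability) drops

start = time.perf_counter()
limit_frame(start, target_hz=50)       # pretend the frame's work finished instantly
print(time.perf_counter() - start >= 0.015)   # True: close to a full 1/50 s frame
```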

Smooth Frame Rate is heavily related to VSync. With the default behavior, if your frame rate drops from 60 to 40, VSync will force it to 30, generating annoying stuttering. This option allows you to define a range in which the frame rate is slowly adapted instead of jumping around. Definitely something that will impact physics (in)stability 🙂

Unity vs Unreal

I think it is worth talking about how the Unity Game Engine deals with Physics to better understand the next Unreal Engine configuration parameters.

The Unity developers decided to ‘honor’ the fixed deltaTime required by the physics engine in a pretty simple way (note: Unity’s source is not public, so this is plain speculation):

float physicsAccumulator = 0;

const float physicsFixedDeltaTime = 0.02;

void RunOneFrame(float deltaTime)
{
    Time.deltaTime = deltaTime;

    physicsAccumulator += deltaTime;

    while (physicsAccumulator >= physicsFixedDeltaTime)
    {
        physicsAccumulator -= physicsFixedDeltaTime;

        Time.fixedDeltaTime = physicsFixedDeltaTime;

        CallFixedUpdateOfAllGameObjects();
    }
}

If your game runs at 60fps (a frame every ~0.0167 seconds against a 0.02 fixed step), roughly one frame out of six gets no call to CallFixedUpdateOfAllGameObjects() at all. If it runs at 100fps, you get a call to CallFixedUpdateOfAllGameObjects() every other frame, while if it runs at 1 fps (!!!) it will call CallFixedUpdateOfAllGameObjects() 50 times in a single frame...
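The accumulator behavior is easy to verify. A sketch of one frame’s worth of the loop above, using a power-of-two fixed step (1/64 s) so the float arithmetic below stays exact:

```python
def steps_this_frame(accumulator, delta_time, fixed_dt):
    """One RunOneFrame() worth of the accumulator loop: returns how many
    fixed physics steps fire and the leftover time carried to the next frame."""
    accumulator += delta_time
    steps = 0
    while accumulator >= fixed_dt:
        accumulator -= fixed_dt
        steps += 1
    return steps, accumulator

FIXED = 1.0 / 64                      # power of two: exactly representable
print(steps_this_frame(0.0, 1.0, FIXED))        # (64, 0.0): a 1 fps frame catches up
print(steps_this_frame(0.0, 1.0 / 128, FIXED))  # (0, 0.0078125): too early to step
```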

Unreal SubStepping

As we have seen, the default behavior in Unreal is the opposite of Unity’s: the physics engine’s tick happens at the same time as the game tick. This means that the physics engine runs with a variable deltaTime.

Substepping is an Unreal feature that lets the developer tell the engine how much time a physics ‘step’ should take (e.g. 0.02 seconds); based on the current DeltaTime (e.g. 0.08), the engine will then call the physics step multiple times (4 in this case). This is pretty much the same approach as Unity’s, giving a more solid (and deterministic) physics simulation at the cost of increased CPU usage.
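The substep count follows directly from the ratio of DeltaTime to the substep length. A sketch (the `max_substeps` parameter is an assumption here, mirroring the engine’s configurable cap on substeps per frame):

```python
import math

def substep_count(delta_time, substep_dt=0.02, max_substeps=6):
    """How many fixed-size physics substeps fit into this frame's DeltaTime,
    clamped so a very slow frame cannot trigger unbounded CPU work."""
    return min(max_substeps, max(1, math.ceil(delta_time / substep_dt)))

print(substep_count(0.08))   # 4: a 12.5 fps frame runs four 0.02 s substeps
print(substep_count(0.01))   # 1: fast frames still take (at least) one step
print(substep_count(0.50))   # 6: capped by max_substeps
```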

Compared to the fixed frame rate approach, it allows us to reach higher frame rates (important for VR experiences), but currently, in Falcon, we use it only for very specific cases where it is difficult to choose a good fixed frame rate and the increased CPU usage is not an issue (in the vast majority of cases our simulations run machine learning algorithms that blast the CPU constantly).

Unreal Frame Stepping: When your Wall Clock is irrelevant

All of the previous approaches assume a ‘human’ wanting to see what is happening in the simulation. But often we want to run our simulations in ‘headless’ mode to gather data and run expensive algorithms. In such a case the ‘wall clock’ (or the analog time, if you prefer) is no longer meaningful, and we can run the simulation as fast as possible (without rendering or any kind of human input) with a hardcoded DeltaTime. From a human point of view, the result is the world running faster 🙂

This is exposed in Unreal via the FixedFrameStep: you can just force a DeltaTime and let the engine run independently of wall-clock time.

About PhysX (In)Determinism

You can spend literally hours (like I did) on PhysX-related forums trying to understand whether you can achieve determinism. NVIDIA’s official answer is that PhysX is fully deterministic: as long as you run the same simulation with the same actors and the same timestep, you will always get the same results. This is definitely deterministic.

The problem (in games and the related engines) is that you need to give priority to the smoothness of the experience over accuracy, so the lack of determinism is just a consequence of the compromises developers make to give gamers the best experience.

But if you want solid simulations, you can definitely build them with a top-class engine like Unreal by tuning it for your specific non-gaming use case.

Do not forget about random numbers!

Just a quick note about random numbers. Sometimes you may want to introduce some kind of entropy into your simulations. Random numbers are good for spotting edge cases, but bad for repeatability. Always remember that pseudo-random generators (including Unreal’s RandomStreams) can be seeded: just set the seed to a fixed number and you will always get the same sequence of numbers.
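In Python the same idea looks like this (`random.Random` standing in for Unreal’s RandomStream):

```python
import random

# Two generators with the same seed replay exactly the same 'entropy'.
run_a = random.Random(1234)
run_b = random.Random(1234)

sequence_a = [run_a.random() for _ in range(5)]
sequence_b = [run_b.random() for _ in range(5)]
print(sequence_a == sequence_b)   # True: seeded randomness is repeatable
```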

Next Episode: Physics over Network, AKA Synchronization of Different Worlds

Let’s conclude with a cliffhanger: Falcon offers a multi-machine/multiplayer experience that forces us to manage Physics at the network level. How do we manage that in Falcon and our enterprise metaverse? Stay tuned 😉

Our work at Duality is supported by the Unreal Engine team and Epic MegaGrants. Thanks to Epic Games for seeding a vibrant and inclusive creator+developer ecosystem!