
Engine: Compression & Blending

Animation Compression

Processing complex animations in real time can leave enormous amounts of data resident in memory. This is due primarily to the animation clips themselves, which, depending on the sampling rate, may contain hundreds or even thousands of keyframes. In turn, each keyframe contains a unique transform for every bone in a skeletal hierarchy. In the case of the Azul Engine, this information is stored in the form of “Bone” structures, each of which contains a translation vector, rotation quaternion, and scale vector. Couple this with a complex skeletal hierarchy, and the data grows rapidly. While there are methods to improve performance at runtime (see Engine: Skinning & Animation), compression provides a viable way to reduce this data considerably. The following is a demo of two models at varying compression ratios running different animation clips side by side:
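
For a rough sense of scale, a minimal sketch of the per-keyframe storage described above might look like the following; the type names, layout, and sizes are illustrative assumptions, not the engine's actual declarations:

// Illustrative sketch only -- names and layout are assumptions,
// not the Azul Engine's actual declarations.
struct Vec3 { float x, y, z; };        // 12 bytes
struct Quat { float x, y, z, w; };     // 16 bytes

struct Bone                            // one bone, one keyframe
{
    Vec3 T;   // translation
    Quat Q;   // rotation
    Vec3 S;   // scale
};                                     // roughly 40 bytes per bone

// A single clip then costs approximately:
//   numKeyframes * numBones * sizeof(Bone)
// e.g. 1000 keyframes * 80 bones * 40 bytes is roughly 3.2 MB,
// before any additional clips or instances are considered.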

As can be seen in the video, one model is rendered using compressed data, while the other is entirely uncompressed. The compression ratio indicates how many frames were removed to achieve the resulting fidelity. At a 5:1 ratio, for example, the compressed animation clip contains 1/5 of the original frames. This compression is performed entirely within the Model Converter executable, via an optional command-line parameter (see Engine: Model Viewer & Converter for more details). To perform the compression, the complete animation data is examined to determine which keyframes are non-essential to achieving the desired fidelity of the animation. This fidelity is controlled by the degree to which the animation is compressed, a value the user may override if the default is unacceptable. The following is another example, this time with three varying degrees of fidelity:

The degree is supplied as an input value that essentially serves as an epsilon defining the acceptable error between keyframe estimates. These estimates are made by iteratively traversing the animation clip to determine whether certain keyframes can be accurately interpolated at runtime rather than stored. The acceptability of an interpolated result is determined by checking the angle of each bone in the keyframe against the angle of the root; the interpolated bone is then compared against the original reference, where the final determination is made. Once the desired level of fidelity has been achieved, the engine reconstructs these missing keyframes within the animation-mixing Compute Shader (see Engine: Skinning & Animation for more information on Compute Shaders within the engine).
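
The following is a minimal sketch of that keyframe-culling idea, simplified to compare each interpolated rotation directly against the original reference (the engine's actual check, as noted above, also factors in the bone's angle relative to the root); the types and helpers are assumptions made for this example:

#include <cmath>
#include <cstddef>
#include <vector>

// Sketch only -- these types stand in for the converter's real data.
struct Quat { float x, y, z, w; };
struct Bone { Quat rotation; /* translation and scale omitted for brevity */ };

struct Keyframe
{
    float             time;
    std::vector<Bone> bones;   // one entry per bone in the skeleton
};

// Normalized linear interpolation: a cheap stand-in for a proper slerp.
static Quat Nlerp(const Quat& a, const Quat& b, float t)
{
    Quat r{ a.x + (b.x - a.x) * t,
            a.y + (b.y - a.y) * t,
            a.z + (b.z - a.z) * t,
            a.w + (b.w - a.w) * t };
    const float len = std::sqrt(r.x * r.x + r.y * r.y + r.z * r.z + r.w * r.w);
    r.x /= len; r.y /= len; r.z /= len; r.w /= len;
    return r;
}

// Angle between two rotations, in radians.
static float AngularError(const Quat& a, const Quat& b)
{
    float dot = a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
    dot = std::fmin(std::fabs(dot), 1.0f);
    return 2.0f * std::acos(dot);
}

// True if 'mid' can be dropped: every bone interpolated from its
// neighbors stays within 'epsilon' of the original reference pose.
static bool IsRedundant(const Keyframe& prev, const Keyframe& mid,
                        const Keyframe& next, float epsilon)
{
    const float t = (mid.time - prev.time) / (next.time - prev.time);

    for (std::size_t i = 0; i < mid.bones.size(); ++i)
    {
        const Quat estimate = Nlerp(prev.bones[i].rotation,
                                    next.bones[i].rotation, t);
        if (AngularError(estimate, mid.bones[i].rotation) > epsilon)
        {
            return false;   // this keyframe still carries real information
        }
    }
    return true;            // safe to reconstruct at runtime instead
}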

Blending/Transitioning

In most games, a single skeletal hierarchy will support many different animation clips, all with varying degrees of complexity and duration. In any large real-time application, these clips will undoubtedly require transitioning and blending at runtime. This feature is fully built into the Azul Engine and is controlled through an instance of the Playback class held within each Game Object. The Playback class utilizes a state pattern to handle all of the information pertaining to the timing and execution of the animation, including the duration of the transition/blending between clips. These values may be set uniquely for each animated Game Object, allowing each object to behave differently.
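
A minimal sketch of that per-object bookkeeping is shown below, condensed to a simple state switch rather than the full state-object pattern the engine uses; the class, enum, and member names are illustrative and not the actual Playback interface:

// Sketch only -- condensed state handling; names are assumptions.
class Playback
{
public:
    enum class State { Play, Blend, Transition, Pause };

    void SetTransitionDuration(float seconds) { transitionDuration = seconds; }
    void SetPlaybackSpeed(float speed)        { playbackSpeed = speed; }

    // Called once per frame by the owning Game Object.
    void Update(float deltaTime)
    {
        if (state != State::Pause)
        {
            clipTime += deltaTime * playbackSpeed;
        }

        switch (state)
        {
        case State::Play:
        case State::Pause:
            break;                          // single clip, nothing to mix

        case State::Blend:
        case State::Transition:
            blendTime += deltaTime;
            if (blendTime >= transitionDuration)
            {
                // Transition complete: the target clip becomes the active clip.
                blendTime = 0.0f;
                state     = State::Play;
            }
            break;
        }
    }

private:
    State state              = State::Play;
    float clipTime           = 0.0f;
    float blendTime          = 0.0f;
    float transitionDuration = 1.0f;   // set per Game Object
    float playbackSpeed      = 1.0f;   // e.g. 0.3f in the final demo below
};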

These animation clips, which are instantiated only once and stored on the GPU (see Engine: Skinning & Animation), may also be utilized by any Game Object sharing the same skeleton. A pointer to each user-specified clip is stored in a linked-list structure managed by the Playback class. When signaled to enter a “blend” state, the Playback uses the animation-mixing Compute Shader to interpolate the skeletal bone positions. However, whereas this Compute Shader is typically run only once per frame to process the animation, in a blending state it is run three times: twice to retrieve the interpolated results of the separate clips, and once more to interpolate between those two results. The following is a code excerpt of the processing of this “blend of a blend” functionality:

[Image: AnimationMixer1.png, code excerpt of the "blend of a blend" processing]
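
Since the original excerpt is shown as an image, the following is only a hedged sketch of the three-dispatch structure it describes; all names here are illustrative stand-ins rather than the engine's API:

// Sketch only -- the real excerpt is not reproduced here.
struct GpuBoneBuffer { /* handle to a GPU-resident bone buffer */ };

// Stand-in for the engine's animation-mixing compute dispatch:
// interpolate between two bone buffers by 't' and write into 'out'.
static void DispatchMix(const GpuBoneBuffer& a, const GpuBoneBuffer& b,
                        float t, GpuBoneBuffer& out)
{
    (void)a; (void)b; (void)t; (void)out;
    // In the engine, this is where the compute shader would be dispatched.
}

// Normal playback issues one dispatch per frame; a blend state issues three.
static void ProcessBlendFrame(const GpuBoneBuffer& aPrev, const GpuBoneBuffer& aNext, float tA,
                              const GpuBoneBuffer& bPrev, const GpuBoneBuffer& bNext, float tB,
                              float blendT, GpuBoneBuffer& finalPose)
{
    GpuBoneBuffer poseA;
    GpuBoneBuffer poseB;

    DispatchMix(aPrev, aNext, tA, poseA);         // pass 1: interpolate clip A
    DispatchMix(bPrev, bNext, tB, poseB);         // pass 2: interpolate clip B
    DispatchMix(poseA, poseB, blendT, finalPose); // pass 3: blend the two results
}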

The final demo showcases more transitioning functionality, this time utilizing a greater variety of animations. The examples include more action-oriented animations, better demonstrating the necessity of proper blending and transitioning within game development. The animations have also been slowed to 30% of their original speed, with an applied transition delta of 3 seconds, to allow for easier visualization of the blending and transitioning functionality:
