Envato Tuts+ Game Development
Gamma Correction and Why It Matters<p>If you're a game developer, you've probably heard the terms <i>gamma</i> and <i>gamma correction</i>. You may or may not know what they mean, but they should not be lightly dismissed.</p><p>Game developers tend to ignore gamma because its effects are subtle enough to be approximately compensated for by adjusting light intensities, specular intensities, and the like. But to achieve true image quality with realistic-looking lighting, it's important to understand gamma and the steps needed to work around its presence in digital imaging. Applying proper gamma correction is one of the most effortless ways to radically improve the look of your real-time 3D graphics.</p>
<h2>Introduction: How Monitors Work</h2>
<p>The CRT monitors that were originally used for computer displays have a curious property: the color response of their screens is <em>non-linear</em> with respect to the raw values passed from the graphics card. </p><p>Non-linear, in this sense, means that increasing one of your color components by a constant ratio (say, doubling the red component of a color) will not increase the monitor-emitted light intensity by that same ratio (the red light emitted from the screen will <i>not</i> be twice as high).</p><p>The color response of a CRT monitor is actually a <i>power</i> function. (As in all physics, this is way more complex than we're describing, but for simplicity's sake, we'll stick to this assumption.) That is, the function <code class="inline">EmittedLight(C)</code>, where <code class="inline">C</code> is a color component value (red, green, or blue) ranging from <code class="inline">0</code> (no light) to <code class="inline">1</code> (full light intensity), is <code class="inline">C</code> raised to some power <code class="inline">γ</code>. </p><p>This number, <code class="inline">γ</code>, is called the <em>gamma exponent</em> or just <em>gamma</em>. Typical gamma values range from 2.0 to 2.4; when dealing with gamma in the general sense, the value is agreed upon to be 2.2 as a compromise, and a lot of newer monitors are <em>designed</em> to have a gamma value of precisely 2.2.</p><figure class="post_image"><img alt="" data-src="https://cms-assets.tutsplus.com/uploads/users/218/posts/19377/image/graph2.png" class=" lazy-load-image__no-display-style"><figcaption>In a common scenario of gamma=2.2, this is how the monitor actually displays color intensities from your game (green curve). 
The dotted red line shows how a linear monitor would display the same intensities.</figcaption></figure><p>In practice, this means that black and white will be shown undistorted on the screen (because zero raised to any power is zero, and one raised to any power is one), but all values in between will be skewed, with no reliable way to perceive this happening just by looking. </p><p>For example, if you're displaying a color that is supposed to be two times darker than white—that is, <code class="inline">RGB(0.5, 0.5, 0.5)</code>—it will actually be shown more than <i>four</i> times darker, given the common gamma value of 2.2, since 0.5 raised to 2.2 is around 0.22. Clearly, this is not what you intend, and this is not the case just with CRT monitors: LCDs, while not <em>inherently</em> having this property, are designed to be compatible with their older counterparts, and thus display your color values stretched like this.</p><p>Moreover, as red, green and blue components are treated independently, the intended color tones of images can easily be mangled, as the intensities of the three color components will not scale uniformly. What happens when you display the color value <code class="inline">RGB(1, 0.5, 0.5)</code>? The red component will stay at 1, but the other two will drop to around 0.22 instead of the intended 0.5, completely changing the tone of your color.</p><figure class="post_image"><img alt="" data-src="https://cms-assets.tutsplus.com/uploads/users/218/posts/19377/image/twocolors.png" class=" lazy-load-image__no-display-style"><figcaption>The second color was obtained from the first one by applying the nonlinear scale that monitors employ. Notice how not only the brightness of the color, but also its saturation, was affected by this transformation.</figcaption></figure><p>Now that we've seen what effects this monitor property has on the color data given to the monitor, we can look at the steps needed to combat them.<br></p>
<h2>What is Gamma Correction?</h2>
<p>Gamma correction is the act of undoing the monitor's unfortunate work. Gamma-correcting an image is essentially raising its color intensities to the power of <code class="inline">1/gamma</code>, so that when the monitor in turn raises the value to <code class="inline">gamma</code>, these cancel out, and the result is the color we originally intended to be shown. </p><p>(Remember that <code class="inline">A</code> raised to <code class="inline">B</code>, and then raised to <code class="inline">C</code>, is the same as <code class="inline">A</code> raised to <code class="inline">B×C</code>, and this is why these operations will cancel out, as <code class="inline">gamma × (1/gamma)</code> is <code class="inline">1</code>.) </p><p>Since the average user does not calibrate their monitor to have a linear response, many images they encounter are corrected so that they never feel the difference. As a convention, most image files on the Internet are distributed in what is called the <i>sRGB color space</i>—this means that original, intended color values are <em>roughly</em> raised to the power of <code class="inline">1/2.2</code> before putting them into files (although more complex equations take place in reality). This ensures that all users with conventional displays see the real colors. Scanners, cameras and a lot of digital imaging devices all take this into account, and correct their output for you when saving in conventional image formats.</p><figure class="post_image"><img alt="" data-src="https://cms-assets.tutsplus.com/uploads/users/218/posts/19377/image/graph.png" class=" lazy-load-image__no-display-style"><figcaption>This image shows the mapping of color intensities as sent to the monitor by the graphics card, and intensities that are displayed by the monitor.</figcaption></figure><p>Take a look at the above image. If we do not account for gamma, the response follows the monitor's power curve (lower green curve). 
If we perform gamma correction, the actual response will be linear, as it ought to be. For comparison, the image also shows how the graph looks when we perform gamma correction but the monitor actually has a linear response: the intensities are distorted in the opposite fashion. When a nonlinear monitor distorts them in turn, the two distortions cancel out, and we end up with a straight line.<br></p>
<h2>When Do I Need to Worry?</h2>
<p>So far, we have explained the theory behind these phenomena—sure, monitors are non-linear and most images are corrected so they look right on these monitors, but what seems to be the problem? Why should I, an aspiring 3D game developer, concern myself with gamma correction and do anything besides just <em>knowing</em> about it?</p>
<p>The answer is simple: as long as images are created just to be displayed, the problem actually doesn't even exist. However, as soon as you want a program to do something to these images (scale them, use them as textures, you name it), you have to take care that the program knows that the values are <i>not real</i> and are just corrected so that they<em> look real</em> on a monitor.</p><p>Particularly, this happens in a renderer when it takes texture maps, such as diffuse surfaces, as input. It does operations on them assuming their color values accurately represent light intensities; that is, assuming a <em>linear correspondence </em>with real-life phenomena they are representing.</p><p>But this is a fundamental error: if you want to sum color values and they are gamma-corrected (raised to <code class="inline">1/gamma</code>) you get the wrong values. It doesn't take a math genius to realize that <code class="inline">A</code> raised to <code class="inline">1/gamma</code> plus <code class="inline">B</code> raised to <code class="inline">1/gamma</code> does not equal <code class="inline">(A+B)</code> raised to <code class="inline">1/gamma</code>. The problem also happens when a renderer outputs some values, such as when it outputs light contributions: if it <i>sums </i>two light contributions but doesn't know the result will be raised to gamma when displayed on the screen, it has produced <em>wrong </em>values.</p>
<p>And this is precisely where the problem occurs: whenever a renderer assumes that the colors it gets linearly correspond to real-life phenomena when they don't, or assumes that colors it outputs will linearly correspond to light intensities on the screen when they won't, it's made quite a serious error which can affect the look and feel of images it produces.</p>
<p>If you make neither correction—that is, you don't ensure that the input texture colors fed into the renderer are linear, and you don't ensure that the renderer's output image is linear with respect to the screen—the two mistakes will cancel each other out to some degree, much like they cancel each other out when a web browser shows precorrected JPEG files. However, as soon as you include some intermediate calculations that assume linear correspondences, your math will be wrong.</p>
<figure class="post_image"><img alt="" data-src="https://cms-assets.tutsplus.com/uploads/users/218/posts/19377/image/figures.png" class=" lazy-load-image__no-display-style"><br><figcaption><strong>(a)</strong> Not correcting textures and not correcting the final image, <strong>(b)</strong> not correcting textures but correcting the final image, <strong>(c)</strong> correcting textures but not correcting the final image, <strong>(d)</strong> correcting both the textures and the final image.<br></figcaption></figure>
<p>Recall what we said about changing color tones earlier—that fact can (sometimes) help you spot non-linearity. A rule of thumb is: if, when you apply linear tweaks to parameters (such as doubling the brightness of lights in the scene), the resulting image changes not only in brightness but also in color tones (for example, an area going from a reddish-orange hue towards yellow), this means that some nonlinear intermediate process is most likely taking place.</p>
<p>This can happen with texture maps retrieved from various sources—the Internet, a digital camera that saves to sRGB JPEG, a scanner, or a texture painted on a monitor that was not explicitly calibrated to have a linear response nor explicitly corrected afterwards. Any math done on these texture maps will be wrong, and will deviate from the theoretically correct values. This is visible with texture filtering and mipmaps: since filtering assumes linear responses when averaging color values, you will see pronounced errors. Smaller (more distant) textures will appear noticeably darker than larger (nearer) ones, because when they're distant, the filtering algorithm averages more samples, and their non-linearity affects the result more.</p><p>Illumination will also suffer from improper gamma: light contributions to surfaces <em>sum</em> in the real world, and consequently in a renderer, but summing is not a faithful operation if the result is non-linearly skewed. If you have complex fragment shaders doing sophisticated lighting, such as subsurface scattering or HDR, the errors become more and more pronounced, up to the point that you <em>actually</em> wonder what's wrong with the image, as opposed to having an uneasy feeling of "maybe the lighting is kind of wrong, but it's probably just me". Darkening the textures or brightening the final images by a constant or linear factor won't eliminate the effect, because these are also linear operations, and you need a non-linear one to combat the monitor's inherent power-law response curve.</p>
<h2>How Do I Fix It?</h2>
<p>Now, hopefully, you're fully aware of what gamma and gamma correction are, and why this is such a big deal when doing real-time 3D graphics. But, of course, there must be some way to fix these problems. </p><p>The answer is yes: fixing gamma is a rather simple operation that requires little more than adding a few lines of code (not counting the additional parameter, intensity, and color tweaks you will need to perform to get the lighting right, if you've been setting up your scenes to look good on non-linear monitors without correcting them). </p><p>There are three basic steps to ensure you stay linear as long as possible and do the correction at the right point:</p>
<h3>1. Make Sure Your Texture Colors Are Right</h3>
<p>You should not normally alter the source images so that they contain linear colors; having colors gamma-corrected for the typical monitor in eight-bit color fields affords you necessary added resolution in darker areas where the human eye is more sensitive to intensity variations. However, you can make sure that color values are linear before they reach your shaders. </p><p>Normally, in OpenGL, you can do this by passing <code>GL_SRGB8</code> instead of <code>GL_RGB8</code>, and <code class="inline">GL_SRGB8_ALPHA8</code> instead of <code class="inline">GL_RGBA8</code>, to <code>glTexImage2D()</code>, when specifying a texture. This will ensure that <i>all values read from this texture through a shader sampler will be corrected back from sRGB color space to a linear one</i>, which is precisely what we need! If you're using a rendering or game engine which does texture loading for you, it might take this into account or you might need to specify it manually; consult the documentation of the library or ask someone for help if you're unsure.</p>
<p>However, be sure not to erroneously do this to images that, by definition, do not represent color information, and were explicitly painted with this in mind. Examples include <a href="https://gamedevelopment.tutsplus.com/articles/gamedev-glossary-what-is-a-normal-map--gamedev-3893" target="_self">normal maps</a>, bump maps, or height maps, which all encode some data other than color in the color channels of a texture and hence are not likely to need this kind of precorrection.</p><p>From the demo included in this article (some parameters swapped with their actual values for clarity):<br></p><pre class="brush: cpp noskimlinks noskimwords">glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);
</pre><p>This will load the texture in an uncorrected color space. However, if the data in the texture file is in the sRGB color space, we ought to change the third parameter to <code class="inline">GL_SRGB8</code>, yielding:</p><pre class="brush: cpp noskimlinks noskimwords">glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);</pre><p>This ensures that OpenGL corrects the texture data whenever we sample it.</p>
<h3>2. Make Sure Your Output Image Colors Are Right</h3>
<p>Now you have to apply color correction to the final output images of your renderer. Be sure not to apply correction to anything but the <em>final</em> framebuffer that is to be displayed on the screen. (Don't touch the intermediate buffers that are input to other post-processing shaders, as these will still expect to work with linear values.) </p><p>This can be done in OpenGL by specifying the renderbuffer (the final, non-sampleable framebuffer) to have an sRGB color encoding by passing <code>GL_SRGB8_ALPHA8</code> instead of <code>GL_RGBA8</code> as a parameter to <code>glRenderbufferStorage()</code>. After that, you have to raise the <code>GL_FRAMEBUFFER_SRGB</code> flag by calling <code>glEnable</code>. This way, shader writes to sRGB buffers will be corrected so they are displayed right on a typical monitor. </p><p>If you're using an engine or a framework, it probably includes some kind of option to create an sRGB framebuffer for you and set it up properly. Again, you can consult the documentation of the library or ask someone to clarify this for you.</p><p>In the demo, we use the GLFW library, which offers us a painless way to request an sRGB framebuffer. In particular, we set a window hint and then, later, tell OpenGL to enable the framebuffer operations to be in the sRGB space:</p><pre class="brush: cpp noskimlinks noskimwords">glfwWindowHint(GLFW_SRGB_CAPABLE, GL_TRUE);
...
glEnable(GL_FRAMEBUFFER_SRGB);</pre>
<h3>3. Fix Your Tweaked Light Intensities and Color Parameters</h3>
<p>If this is not the start of a new project, chances are that gamma-incorrect illumination and filtering have taken their toll on you. Maybe you've tweaked your diffuse reflectance colors, light intensities and whatnot in an attempt to make up for the subtle nuisances that neglecting gamma has brought you. </p><p>You need to go through these values once again and tweak them so they look right again—however, this time, your scenes will look more natural due to illumination more accurately representing real-world circumstances. Corners will not look too dark, so you won't need to add more intensity to lights (thereby wrecking the illumination of brighter objects, which would then look artificially bright for that amount of light in the scene). </p><p>This will pay off: revisiting your parameters to create a natural ambience with gamma correction will go a long way towards providing your users with a brightness distribution that looks just right to their eyes, which are so accustomed and sensitive to how light works in real life.</p>
<h2>Demo</h2>
<p><a href="https://github.com/tutsplus/gamma-correction-and-why-it-matters" target="_self">Included with this article</a> is a small OpenGL 3.3 demo which shows a simple scene with some textures lit by two moving light sources. It allows you to switch between several scenarios: not correcting textures but correcting the final image; correcting textures but neglecting to correct the final image; correcting both (that is, doing everything right); and failing to correct either (effectively making a double mistake). </p><p>The demo is written in C++ (with two GLSL shaders) and uses the portable GLFW and GLEW libraries, so it should run on a broad variety of platforms. The source code is full of comments, so you can explore every aspect of this short application.</p><figure class="post_image"><img alt="" data-src="https://cms-assets.tutsplus.com/uploads/users/218/posts/19377/image/demoscreen.png" class=" lazy-load-image__no-display-style"><figcaption>The demo in action.</figcaption></figure><p>Use the <b>1</b> key on your keyboard to cycle between correcting textures and not correcting textures, and the <b>2</b> key to cycle between correcting the framebuffer and not correcting the framebuffer. To cycle both of these at the same time, press <b>3</b>—useful to see the difference between neglecting gamma completely (two errors that cancel each other out for the most part) and doing everything right. When the demo starts, none of these corrections are being performed, so hit <b>3</b> to see the benefits of proper gamma correction.<br></p><p>I have included a Microsoft Visual C++ 2013 project, compatible 64-bit versions of the GLFW and GLEW libraries, and a 64-bit Windows executable. However, you can compile this rather easily on any platform with GLFW and GLEW support: just compile <code class="inline">main.cpp</code> and <code class="inline">loader.cpp</code> together and link them against those two libraries. 
On Linux, installing these libraries via your package manager and passing <code class="inline">-lGLEW -lglfw -lGL</code> to <code class="inline">g++</code> should do the trick. (Please note that this was not tested on operating systems other than Windows, but it is supposed to work—if you encounter any problems, please let me know in the comments and I'll fix them as soon as possible.)</p><p>As you can see when running the demo, the effects are quite noticeable even with a simple model and simple scene like this. Of course, in this simple case, you could maybe get away with tweaking the shader parameters so the image looks good when uncorrected. However, as soon as you start building up complexity in your scenes, the difference will simply be too visible to ever compensate in this way.<br></p><h2>Conclusion</h2>
<p>In this article we've covered terms such as gamma, gamma correction, non-linear inputs and outputs, and non-linear math. Hopefully, I've managed to convince you that you should start worrying about gamma correction right now if you've neglected it so far; and if you've been careful with gamma before encountering this article, I hope it's given you some tiny new piece of information to tackle the issue with. </p><p>We have, most importantly, learned how to fix the problems that arise when you manipulate color values incorrectly by assuming they're linear, and we've reviewed common pitfalls and symptoms that occur when you neglect this important aspect of computer graphics.<br></p>
<p>I hope you've had fun and learned something new while reading this article. Until next time!</p><p><em>Published 2014-04-01 by David Davidović.</em></p>
How to Create a Custom 2D Physics Engine: Oriented Rigid Bodies<p>So far, we've covered <a href="https://gamedev.tutsplus.com/tutorials/implementation/create-custom-2d-physics-engine-aabb-circle-impulse-resolution/">impulse resolution</a>, <a href="https://gamedev.tutsplus.com/tutorials/implementation/how-to-create-a-custom-2d-physics-engine-the-core-engine/">the core architecture</a>, and <a href="https://gamedev.tutsplus.com/tutorials/implementation/how-to-create-a-custom-2d-physics-engine-friction-scene-and-jump-table/">friction</a>. In this, the final tutorial in this series, we'll go over a very interesting topic: orientation.</p>
<p>In this article we will discuss the following topics:</p>
<ul>
<li>Rotation math</li>
<li>Oriented shapes</li>
<li>Collision detection</li>
<li>Collision resolution</li>
</ul>
<p>I highly recommend reading up on <a href="https://gamedev.tutsplus.com/series/custom-game-physics-engine/">the previous three articles in the series</a> before attempting to tackle this one. Much of the key information in the previous articles is a prerequisite to the rest of this article.<script type="text/javascript" src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script></p>
<hr>
<h2>Sample Code</h2>
<p>I've created a small sample engine in C++, and I recommend that you browse and refer to the source code throughout the reading of this article, as many practical implementation details could not fit into the article itself.</p>
<figure class="tutorial_image">
<iframe width="600" height="338" data-src="https://www.youtube.com/embed/AzA_owsZU04" frameborder="0" allowfullscreen></iframe><br>
</figure>
<p><a href="https://github.com/tutsplus/ImpulseEngine">This GitHub repo</a> contains the sample engine itself, along with a Visual Studio 2010 project. For convenience, GitHub lets you browse the source without downloading it.</p>
<div class="related-shortcode">
<strong>Related Posts</strong>
<ul>
<li>
<a href="http://magnos.org/">Philip Diffenderfer</a> has forked the repo to create a <a href="https://github.com/ClickerMonkey/ImpulseEngine">Java version</a> of the engine as well!</li>
</ul>
</div>
<hr>
<h2>Orientation Math</h2>
<p>The math involving rotations in 2D is quite simple, although a mastery of the subject will be required to create anything of value in a physics engine. Newton's second law states:</p>
<p>\[ Equation \: 1:\\<br>
F = ma\]</p>
<p>There is a similar equation that relates angular force (torque) and angular acceleration. However, before these equations can be shown, a quick description of the cross product in 2D is required.</p>
<h3>Cross Product</h3>
<p>The cross product in 3D is a <a href="http://en.wikipedia.org/wiki/Cross_product">well known operation</a>. However, the cross product in 2D can be quite confusing, as there isn't really a solid geometric interpretation.</p>
<p>The 2D cross product, unlike the 3D version, does not return a vector but a scalar. This scalar value actually represents the magnitude of the orthogonal vector along the z-axis, if the cross product were to actually be performed in 3D. In a way, the 2D cross product is just a simplified version of the 3D cross product, inherited from 3D vector math.</p>
<p>If this is confusing, do not worry: a thorough understanding of the 2D cross product is not all that necessary. Just know exactly how to perform the operation, and know that the order of operations is important: \(a \times b\) is not the same as \(b \times a\). This article will make heavy use of the cross product in order to transform angular velocity into linear velocity.</p>
<p>Knowing <em>how</em> to perform the cross product in 2D is very important, however. Two vectors can be crossed, a scalar can be crossed with a vector, and a vector can be crossed with a scalar. Here are the operations:</p>
<pre class="brush: cpp noskimlinks noskimwords">
// Two crossed vectors return a scalar
float CrossProduct( const Vec2& a, const Vec2& b )
{
return a.x * b.y - a.y * b.x;
}
// More exotic (but necessary) forms of the cross product
// with a vector a and scalar s, both returning a vector
Vec2 CrossProduct( const Vec2& a, float s )
{
return Vec2( s * a.y, -s * a.x );
}
Vec2 CrossProduct( float s, const Vec2& a )
{
return Vec2( -s * a.y, s * a.x );
}</pre>
<h3>Torque and Angular Velocity</h3>
<p>As we all should know from the previous articles, Equation 1 relates the force acting upon a body to that body's mass and acceleration. The rotational counterpart of force is <em>torque</em>, which is produced by applying a force at a point away from the center of mass:</p>
<p>\[ Equation \: 2:\\<br>
T = r \: \times \: F\]</p>
<p>\(T\) stands for <em>torque</em>. Torque is rotational force.</p>
<p>\(r\) is a vector from the center of mass (COM) to a particular point on an object. \(r\) can be thought of as a "radius" from the COM to that point. Every single unique point on an object will require a different \(r\) value to be represented in Equation 2.</p>
<p>\(F\) is the force applied at that point. The direct rotational analog of Equation 1 is then \(T = I\alpha\), where \(I\) is the moment of inertia (covered below) and \(\alpha\) is angular acceleration; this is the relationship used to integrate the angular velocity of a rigid body. Angular velocity itself is written \(\omega\) ("omega").</p>
<p>It is important to understand that linear velocity is the velocity of the COM of a rigid body. In the previous article, all objects had no rotational components, so the linear velocity of the COM was the same velocity for all points on a body. When orientation is introduced, points farther away from the COM rotate faster than those near the COM. This means we need a new equation to find the velocity of a point on a body, since bodies can now spin and translate at the same time.</p>
<p>Use the following equation to understand the relationship between a point on a body and the velocity of that point:</p>
<p>\[ Equation \: 3:\\<br>
\omega = r \: \times v \]</p>
<p>\(v\) represents linear velocity. To transform linear velocity into angular velocity, cross the \(r\) radius with \(v\). (Strictly speaking, the angular velocity of a point in pure rotation is \((r \times v) / \|r\|^2\); in a physics engine, the bare \(r \times v\) form shows up when applying impulses, where the moment of inertia accounts for the distance scaling.)</p>
<p>Similarly, we can rearrange Equation 3 to form another version:</p>
<p>\[ Equation \: 4:\\<br>
v = \omega \: \times r \]</p>
<p>The equations from the last section are only this simple if rigid bodies have uniform density. Non-uniform density makes the math involved in calculating the rotation and behavior of a rigid body much too complicated. Furthermore, if the point representing a rigid body is not at the COM, then the calculations involving \(r\) are going to be entirely wonky.</p>
<h3>Inertia</h3>
<p>In two dimensions, an object rotates about the imaginary z-axis. This rotation can be quite difficult to achieve, depending on how much mass an object has and how far away from the COM that mass lies. A circle with mass equal to that of a long thin rod will be easier to rotate than the rod, since more of its mass sits close to the COM. This "difficulty to rotate" factor can be thought of as the moment of inertia of an object.</p>
<p>In a sense, inertia is the rotational mass of an object. The more inertia something has, the harder it is to get it spinning.</p>
<p>Knowing this, one could store the inertia of an object within the body in the same format as mass. It would be wise to also store the inverse of this inertia value, being careful not to perform a division by zero. Please see the previous articles in this series for more information on mass and inverse mass.</p>
<h3>Integration</h3>
<p>Each rigid body will require a few more fields to store rotational information. Here's a quick example of a structure to hold some additional data:</p>
<pre class="brush: cpp noskimlinks noskimwords">
struct RigidBody
{
  Shape *shape;

  // Linear components
  Vec2 position;
  Vec2 velocity;
  Vec2 force;

  // Angular components
  float orientation; // radians
  float angularVelocity;
  float torque;
};</pre>
<p>Integrating the angular velocity and orientation of a body is very similar to the integration of velocity and acceleration. Here is a quick code sample to show how it's done (note: details about integration were covered in a <a href="https://gamedev.tutsplus.com/tutorials/implementation/how-to-create-a-custom-2d-physics-engine-the-core-engine/">previous article</a>):</p>
<pre class="brush: cpp noskimlinks noskimwords">
const Vec2 gravity( 0, -10.0f );

// Integrate velocities first, then positions (semi-implicit Euler)
velocity += (force * (1.0f / mass) + gravity) * dt;
angularVelocity += torque * (1.0f / momentOfInertia) * dt;
position += velocity * dt;
orient += angularVelocity * dt;</pre>
<p>With the small amount of information presented so far, you should be able to start rotating various things on the screen without any trouble. With just a few lines of code, something rather impressive can be constructed, perhaps by tossing a shape into the air while it rotates about the COM as gravity pulls it downward to form an arced path of travel.</p>
<h3><code>Mat22</code></h3>
<p>Orientation should be stored as a single radian value, as seen above, though oftentimes a small rotation matrix is a much better choice for certain shapes.</p>
<p>A great example is the Oriented Bounding Box (OBB). The OBB consists of a width and height extent, both of which can be represented by vectors. These two extent vectors can then be rotated by a two-by-two rotation matrix to represent the axes of an OBB.</p>
<p>I suggest the creation of a <code>Mat22</code> matrix class to be added to whatever math library you are using. I myself use a small custom math library which is packaged in the open source demo. Here is an example of what such an object may look like:</p>
<pre class="brush: cpp noskimlinks noskimwords">
struct Mat22
{
union
{
struct
{
float m00, m01;
float m10, m11;
};
struct
{
Vec2 xCol;
Vec2 yCol;
};
};
};</pre>
<p>Some useful operations include: construction from an angle, construction from column vectors, transpose, multiplication with a <code>Vec2</code>, multiplication with another <code>Mat22</code>, and absolute value.</p>
<p>The last useful operation is to be able to retrieve either the <code>x</code> or <code>y</code> column from the matrix. The column function would look something like:</p>
<pre class="brush: cpp noskimlinks noskimwords">
Mat22 m( PI / 2.0f );
Vec2 r = m.ColX( ); // retrieve the x axis column</pre>
<p>This technique is useful for retrieving a unit vector along an axis of rotation, either the <code>x</code> or <code>y</code> axis. Additionally, a two-by-two matrix can be constructed from two orthogonal unit vectors, as each vector can be inserted directly into the matrix. Although this construction method is a bit uncommon for 2D physics engines, it can still be very useful for understanding how rotations and matrices work in general.</p>
<p>This constructor might look something like:</p>
<pre class="brush: cpp noskimlinks noskimwords">
Mat22::Mat22( const Vec2& x, const Vec2& y )
{
m00 = x.x;
m01 = x.y;
m10 = y.x;
m11 = y.y;
}
// or
Mat22::Mat22( const Vec2& x, const Vec2& y )
{
xCol = x;
yCol = y;
}</pre>
<p>Since the most important operation of a rotation matrix is to perform rotations based off of an angle, it's important to be able to construct a matrix from an angle and multiply a vector by this matrix (to rotate the vector counter-clockwise by the angle the matrix was constructed with):</p>
<pre class="brush: cpp noskimlinks noskimwords">
Mat22( real radians )
{
real c = std::cos( radians );
real s = std::sin( radians );
m00 = c; m01 = -s;
m10 = s; m11 = c;
}
// Rotate a vector
const Vec2 operator*( const Vec2& rhs ) const
{
return Vec2( m00 * rhs.x + m01 * rhs.y, m10 * rhs.x + m11 * rhs.y );
}</pre>
<p>For the sake of brevity I will not derive why the counter-clockwise rotation matrix is of the form:</p>
<pre class="brush: cpp noskimlinks noskimwords">
a = angle
cos( a ), -sin( a )
sin( a ), cos( a )</pre>
<p>However, it is important to at least know that this is the form of the rotation matrix. For more information about rotation matrices, please see <a href="http://en.wikipedia.org/wiki/Rotation_matrix">the Wikipedia page</a>.</p>
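<p>A quick sanity check of this form (a standalone sketch, not engine code): rotating the x axis (1, 0) counter-clockwise by 90 degrees should produce the y axis (0, 1).</p>

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Apply the counter-clockwise rotation matrix to a vector:
// [ cos(a)  -sin(a) ] [ v.x ]
// [ sin(a)   cos(a) ] [ v.y ]
Vec2 Rotate( const Vec2& v, float radians )
{
    float c = std::cos( radians );
    float s = std::sin( radians );
    return { c * v.x - s * v.y, s * v.x + c * v.y };
}
```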
<div class="related-shortcode">
<strong>Related Posts</strong>
<ul>
<li><a href="https://gamedev.tutsplus.com/tutorials/implementation/lets-build-a-3d-graphics-engine-linear-transformations/">Let's Build a 3D Graphics Engine: Linear Transformations</a></li>
</ul>
</div>
<hr>
<h2>Transforming to a Basis</h2>
<p>It is important to understand the difference between model and world space. Model space is the coordinate system local to a physics shape. The origin is at the COM, and the orientation of the coordinate system is aligned with the axes of the shape itself.</p>
<p>In order to transform a shape into world space it must be rotated and translated. Rotation must occur first, as rotation is always performed about the origin. Since the object is in model space (origin at COM), rotation will rotate about the COM of the shape. Rotation is performed with a <code>Mat22</code> matrix. In the sample code, orientation matrices are named <code>u</code>.</p>
<p>After rotation is performed, the object can then be translated to its position in the world by vector addition.</p>
<p>Once an object is in world space, it can then be transformed into the model space of an entirely different object by using inverse transformations: the inverse translation is applied first, followed by the inverse rotation. This is how much of the math is simplified during collision detection!</p>
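<p>The two directions can be sketched as a pair of free functions (illustrative names, assuming minimal <code>Vec2</code> and <code>Mat22</code> types of the kind shown earlier):</p>

```cpp
#include <cmath>

struct Vec2
{
    float x, y;
    Vec2 operator+( const Vec2& r ) const { return { x + r.x, y + r.y }; }
    Vec2 operator-( const Vec2& r ) const { return { x - r.x, y - r.y }; }
};

struct Mat22
{
    float m00, m01, m10, m11;

    Vec2 operator*( const Vec2& v ) const
    {
        return { m00 * v.x + m01 * v.y, m10 * v.x + m11 * v.y };
    }

    // For a pure rotation matrix the inverse is just the transpose.
    Mat22 Transpose( ) const { return { m00, m10, m01, m11 }; }
};

// Model space -> world space: rotate about the COM, then translate.
Vec2 ToWorld( const Vec2& v, const Mat22& u, const Vec2& position )
{
    return u * v + position;
}

// World space -> model space: undo the translation, then the rotation.
Vec2 ToModel( const Vec2& v, const Mat22& u, const Vec2& position )
{
    return u.Transpose( ) * (v - position);
}
```

<p>Note that going back to model space undoes the operations in reverse order: subtract the position first, then apply the transposed (inverse) rotation.</p>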
<figure class="tutorial_image"><img class="aligncenter size-full wp-image-8689 lazy-load-image__no-display-style" alt="Inverse transformation from world space to model space of the red polygon." data-src="https://cdn.tutsplus.com/gamedev/uploads/2013/06/InverseTransform.png" data-original-url="http://cdn.tutsplus.com/gamedev.tutsplus.com/uploads/2013/06/InverseTransform.png" width="600" height="400">
<figcaption>Inverse transformation (left to right) from world space to model space of the red polygon.</figcaption>
</figure>
<p>As seen in the above image, if the inverse transformation of the red object is applied to both the red and blue polygons, then a collision detection test can be reduced to the form of an AABB vs OBB test, instead of computing complex math between two oriented shapes.</p>
<p>In much of the sample source code, vertices are constantly transformed from model to world and back to model, for all sorts of reasons. You should have a clear understanding of what this means in order to comprehend the sample collision detection code.</p>
<hr>
<h2>Collision Detection and Manifold Generation</h2>
<p>In this section, I'll present quick outlines of polygon and circle collisions. Please see the sample source code for more in-depth implementation details.</p>
<h3>Polygon to Polygon</h3>
<p>Let's start with the most complex collision detection routine in this entire article series. The idea of checking for collision between two polygons is best done (in my opinion) with the <a href="https://gamedev.tutsplus.com/tutorials/implementation/collision-detection-with-the-separating-axis-theorem/">Separating Axis Theorem</a> (SAT).</p>
<p>However, instead of projecting each polygon's extents onto each other, there is a slightly newer and more efficient method, as outlined by <a href="http://gdcvault.com/play/1017646/Physics-for-Game-Programmers-The">Dirk Gregorius in his 2013 GDC Lecture</a> (slides <a href="https://code.google.com/p/box2d/downloads/detail?name=DGregorius_GDC2013.zip&can=2&q=">available here</a> for free).</p>
<p>The first thing that must be learned is the concept of support points.</p>
<h3>Support Points</h3>
<p>The support point of a polygon is the vertex that is the farthest along a given direction. If two vertices have equal distances along the given direction, either one is acceptable.</p>
<p>In order to compute a supporting point, the dot product must be used to find a signed distance along a given direction. Since this is very simple, I'll show a quick example within this article:</p>
<pre class="brush: cpp noskimlinks noskimwords">
// The extreme point along a direction within a polygon
Vec2 GetSupport( const Vec2& dir )
{
real bestProjection = -FLT_MAX;
Vec2 bestVertex;
for(uint32 i = 0; i < m_vertexCount; ++i)
{
Vec2 v = m_vertices[i];
real projection = Dot( v, dir );
if(projection > bestProjection)
{
bestVertex = v;
bestProjection = projection;
}
}
return bestVertex;
}</pre>
<p>The dot product is used on each vertex. The dot product represents a signed distance in a given direction, so the vertex with the greatest projected distance would be the vertex to return. This operation is performed in model space of the given polygon within the sample engine.</p>
<h3>Finding Axis of Separation</h3>
<p>By using the concept of support points, a search for the axis of separation can be performed between two polygons (polygon A and polygon B). The idea of this search is to loop along all faces of polygon A and find the support point in the negative normal to that face.</p>
<figure class="tutorial_image"><img class="aligncenter size-full wp-image-8696 lazy-load-image__no-display-style" alt="SupportPoints" data-src="https://cdn.tutsplus.com/gamedev/uploads/2013/06/SupportPoints.png" data-original-url="http://cdn.tutsplus.com/gamedev.tutsplus.com/uploads/2013/06/SupportPoints.png" width="253" height="195"></figure>
<p>In the above image, two support points are shown: one on each object. The blue normal corresponds to the support point on the other polygon: the vertex farthest along the direction opposite to the blue normal. Similarly, the red normal is used to find the support point located at the end of the red arrow.</p>
<p>The signed distance from each support point to the corresponding face is the penetration along that face's normal. By storing the greatest of these signed distances, the axis of minimum penetration can be recorded.</p>
<p>Here is an example function from the sample source code that finds the possible axis of minimum penetration using the <code>GetSupport</code> function:</p>
<pre class="brush: cpp noskimlinks noskimwords">
real FindAxisLeastPenetration( uint32 *faceIndex, PolygonShape *A, PolygonShape *B )
{
real bestDistance = -FLT_MAX;
uint32 bestIndex;
for(uint32 i = 0; i < A->m_vertexCount; ++i)
{
// Retrieve a face normal from A
Vec2 n = A->m_normals[i];
// Retrieve support point from B along -n
Vec2 s = B->GetSupport( -n );
// Retrieve vertex on face from A, transform into
// B's model space
Vec2 v = A->m_vertices[i];
// Compute penetration distance (in B's model space)
real d = Dot( n, s - v );
// Store greatest distance
if(d > bestDistance)
{
bestDistance = d;
bestIndex = i;
}
}
*faceIndex = bestIndex;
return bestDistance;
}</pre>
<p>Since this function returns the greatest signed distance, a positive value means a separating axis has been found and the two shapes are not overlapping; only when the result is negative do the shapes penetrate along every face normal of A.</p>
<p>This function will need to be called twice, flipping A and B objects each call.</p>
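<p>The two calls might be driven like this - a deliberately simplified, transform-free sketch (both polygons are assumed to already be in a common space, and the container types are illustrative), not the sample engine's exact routine:</p>

```cpp
#include <cfloat>
#include <cstdint>
#include <vector>

struct Vec2 { float x, y; };

static float Dot( const Vec2& a, const Vec2& b ) { return a.x * b.x + a.y * b.y; }

struct PolygonShape
{
    std::vector<Vec2> m_vertices;
    std::vector<Vec2> m_normals; // normal i belongs to face starting at vertex i

    // Extreme point along a direction.
    Vec2 GetSupport( const Vec2& dir ) const
    {
        float bestProjection = -FLT_MAX;
        Vec2 bestVertex{ };
        for(const Vec2& v : m_vertices)
        {
            float projection = Dot( v, dir );
            if(projection > bestProjection)
            {
                bestVertex = v;
                bestProjection = projection;
            }
        }
        return bestVertex;
    }
};

float FindAxisLeastPenetration( uint32_t *faceIndex, const PolygonShape *A, const PolygonShape *B )
{
    float bestDistance = -FLT_MAX;
    uint32_t bestIndex = 0;
    for(uint32_t i = 0; i < (uint32_t)A->m_vertices.size( ); ++i)
    {
        Vec2 n = A->m_normals[i];
        Vec2 s = B->GetSupport( Vec2{ -n.x, -n.y } );
        Vec2 v = A->m_vertices[i];
        float d = Dot( n, Vec2{ s.x - v.x, s.y - v.y } );
        if(d > bestDistance)
        {
            bestDistance = d;
            bestIndex = i;
        }
    }
    *faceIndex = bestIndex;
    return bestDistance;
}

// Both directions must report penetration for the shapes to collide.
bool PolygonsOverlap( const PolygonShape& A, const PolygonShape& B )
{
    uint32_t faceA, faceB;
    if(FindAxisLeastPenetration( &faceA, &A, &B ) >= 0.0f)
        return false;
    if(FindAxisLeastPenetration( &faceB, &B, &A ) >= 0.0f)
        return false;
    return true;
}
```

<p>A positive result from either call is an early out: a separating axis exists and no further work is needed.</p>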
<h3>Clipping Incident and Reference Face</h3>
<p>From here, the incident and reference face need to be identified, and the incident face needs to be clipped against the side planes of the reference face. This is a rather non-trivial operation, although Erin Catto (creator of Box2D, and all physics currently used by Blizzard) has created <a href="https://code.google.com/p/box2d/downloads/list">some great slides covering this topic in detail</a>.</p>
<p>This clipping will generate two potential contact points; any that lie behind the reference face are kept as contact points.</p>
<p>Beyond Erin Catto's slides, the sample engine also has the clipping routines implemented as an example.</p>
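<p>The heart of the clipping step can be sketched as clipping the two incident-face vertices against a single plane (given by normal <code>n</code> and offset <code>c</code>, so the plane is <code>Dot(n, x) = c</code>). This is an illustrative sketch in the spirit of those slides, not the sample engine's exact routine:</p>

```cpp
#include <vector>

struct Vec2 { float x, y; };

static float Dot( const Vec2& a, const Vec2& b ) { return a.x * b.x + a.y * b.y; }

// Clip the segment (v1, v2) against the plane Dot(n, x) = c.
// Returns the points behind the plane (0, 1 or 2 of them).
std::vector<Vec2> Clip( Vec2 n, float c, Vec2 v1, Vec2 v2 )
{
    std::vector<Vec2> out;

    // Signed distances of each vertex to the plane.
    float d1 = Dot( n, v1 ) - c;
    float d2 = Dot( n, v2 ) - c;

    // Keep points behind (or on) the plane.
    if(d1 <= 0.0f) out.push_back( v1 );
    if(d2 <= 0.0f) out.push_back( v2 );

    // If the points straddle the plane, add the intersection point.
    if(d1 * d2 < 0.0f)
    {
        float alpha = d1 / (d1 - d2);
        out.push_back( Vec2{ v1.x + alpha * (v2.x - v1.x),
                             v1.y + alpha * (v2.y - v1.y) } );
    }
    return out;
}
```

<p>Running this once for each side plane of the reference face yields the clipped incident face.</p>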
<h3>Circle to Polygon</h3>
<p>The circle vs. polygon collision routine is quite a bit simpler than polygon vs. polygon collision detection. First, the closest face on the polygon to the center of the circle is computed in a similar way to using support points from the previous section: by looping over each face normal of the polygon and finding the distance from the circle's center to the face.</p>
<p>If the center of the circle is behind this closest face, specific contact information can be generated and the routine can immediately end.</p>
<p>After the closest face is identified, the test devolves into a line segment vs. circle test. A line segment has three interesting regions called <em>Voronoi regions</em>. Examine the following diagram:</p>
<figure class="tutorial_image"><img class="aligncenter size-full wp-image-8710 lazy-load-image__no-display-style" alt="Voronoi regions of a line segment." data-src="https://cdn.tutsplus.com/gamedev/uploads/2013/06/custom-physics-edge-voronoi.png" data-original-url="http://cdn.tutsplus.com/gamedev.tutsplus.com/uploads/2013/06/custom-physics-edge-voronoi.png" width="600" height="400">
<figcaption>Voronoi regions of a line segment.</figcaption>
</figure>
<p>Intuitively, depending on where the center of the circle is located, different contact information can be derived. Imagine the center of the circle is located in either vertex region. This means that the closest point to the circle's center will be an edge vertex, and the proper collision normal will be a vector from this vertex to the circle's center.</p>
<p>If the circle is within the face region then the closest point of the segment to the circle's center will be the circle's center projected onto the segment. The collision normal will just be the face normal.</p>
<p>To compute which Voronoi region the circle lies within, we use the dot product between a couple of vertices. The idea is to create an imaginary triangle and test to see whether the angle of the corner constructed with the segment's vertex is above or below 90 degrees. One triangle is created for each vertex of the line segment.</p>
<figure class="tutorial_image"><img class="aligncenter size-full wp-image-8711 lazy-load-image__no-display-style" alt="Projecting vector from edge vertex to circle center onto the edge." data-src="https://cdn.tutsplus.com/gamedev/uploads/2013/06/custom-voronoi-dot.png" data-original-url="http://cdn.tutsplus.com/gamedev.tutsplus.com/uploads/2013/06/custom-voronoi-dot.png" width="600" height="400">
<figcaption>Projecting vector from edge vertex to circle center onto the edge.</figcaption>
</figure>
<p>A value above 90 degrees means a vertex region has been identified. If neither corner angle is above 90 degrees, then the circle's center needs to be projected onto the segment itself to generate manifold information. As seen in the image above, if the vector from an edge vertex to the circle's center, dotted with the edge vector itself, is negative, then the Voronoi region the circle lies within is known.</p>
<p>Luckily, the dot product can be used to compute a signed projection, and this sign will be negative if above 90 degrees and positive if below.</p>
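<p>Putting the two dot product tests together, region selection might be sketched like this (illustrative names, not engine code):</p>

```cpp
struct Vec2 { float x, y; };

static float Dot( const Vec2& a, const Vec2& b ) { return a.x * b.x + a.y * b.y; }
static Vec2 Sub( const Vec2& a, const Vec2& b ) { return { a.x - b.x, a.y - b.y }; }

enum Region { Vertex1, Vertex2, Face };

// Which Voronoi region of segment (v1, v2) contains point c?
Region VoronoiRegion( const Vec2& c, const Vec2& v1, const Vec2& v2 )
{
    Vec2 edge = Sub( v2, v1 );

    // Negative projection: the angle at v1 is above 90 degrees.
    if(Dot( Sub( c, v1 ), edge ) < 0.0f)
        return Vertex1;

    // Negative projection onto the reversed edge: angle at v2 is above 90 degrees.
    if(Dot( Sub( c, v2 ), Sub( v1, v2 ) ) < 0.0f)
        return Vertex2;

    return Face;
}
```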
<hr>
<h2>Collision Resolution</h2>
<p>It is that time again: we'll return to our impulse resolution code for a third and final time. By now, you should be fully comfortable writing your own resolution code that computes resolution impulses along with friction impulses, and can also perform linear projection to resolve leftover penetration.</p>
<p>Rotational components need to be added to both the friction and penetration resolution. Some energy will be placed into angular velocity.</p>
<p>Here is our impulse resolution as we left it from the previous article on friction:</p>
<p>\[ Equation 5: \\<br>
j = \frac{-(1 + e)((V^{B} - V^{A}) \cdot t)}{\frac{1}{mass^{A}} + \frac{1}{mass^{B}}}<br>
\]</p>
<p>If we throw in rotational components, the final equation looks like this:</p>
<p>\[ Equation 6: \\<br>
j = \frac{-(1 + e)((V^{B} - V^{A}) \cdot t)}{\frac{1}{mass^{A}} + \frac{1}{mass^{B}} + \frac{(r^{A} \times t)^{2}}{I^{A}} + \frac{(r^{B} \times t)^{2}}{I^{B}}}<br>
\]</p>
<p>In the above equation, \(r\) is again a "radius", as in a vector from the COM of an object to the point of contact. A more in-depth derivation of this equation can be found <a href="http://chrishecker.com/images/e/e7/Gdmphys3.pdf">on Chris Hecker's site</a>.</p>
<p>It is important to realize that the velocity of a given point on an object is:</p>
<p>\[ Equation 7: \\<br>
V' = V + \omega \times r<br>
\]</p>
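<p>In 2D the cross product of a scalar angular velocity \(\omega\) with a vector \(r\) expands to \((-\omega r_y, \omega r_x)\), so Equation 7 can be sketched directly (illustrative, minimal types):</p>

```cpp
struct Vec2 { float x, y; };

// Velocity of a point on a rigid body: V' = V + omega x r,
// where the 2D scalar cross expands to (-omega * r.y, omega * r.x).
Vec2 PointVelocity( const Vec2& v, float omega, const Vec2& r )
{
    return { v.x - omega * r.y, v.y + omega * r.x };
}
```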
<p>The application of impulses changes slightly in order to account for the rotational terms:</p>
<pre class="brush: cpp noskimlinks noskimwords">
void Body::ApplyImpulse( const Vec2& impulse, const Vec2& contactVector )
{
velocity += 1.0f / mass * impulse;
angularVelocity += 1.0f / inertia * Cross( contactVector, impulse );
}</pre>
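<p>For reference, the denominator of Equation 6 might be computed like this (a sketch with illustrative names; <code>rA</code> and <code>rB</code> are the vectors from each COM to the contact point):</p>

```cpp
struct Vec2 { float x, y; };

// 2D scalar cross product of two vectors.
static float CrossV( const Vec2& a, const Vec2& b ) { return a.x * b.y - a.y * b.x; }

// Denominator of Equation 6: inverse mass sum plus the rotational terms.
float InverseMassSum( float invMassA, float invMassB,
                      float invInertiaA, float invInertiaB,
                      const Vec2& rA, const Vec2& rB, const Vec2& n )
{
    float rAcrossN = CrossV( rA, n );
    float rBcrossN = CrossV( rB, n );
    return invMassA + invMassB
         + invInertiaA * rAcrossN * rAcrossN
         + invInertiaB * rBcrossN * rBcrossN;
}
```

<p>Dividing the numerator of Equation 6 by this value yields <code>j</code>, which is then fed to <code>ApplyImpulse</code> on both bodies (negated for one of them).</p>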
<hr>
<h2>Conclusion</h2>
<p>This concludes the final article of this series. By now, quite a few topics have been covered, including impulse based resolution, manifold generation, friction, and orientation, all in two dimensions.</p>
<p>If you've made it this far, I must congratulate you! Physics engine programming for games is an extremely difficult area of study. I wish all readers luck, and again please feel free to comment or ask questions below.</p>
2013-06-17T15:00:21.000Z2013-06-17T15:00:21.000ZRandy Gaultag:gamedevelopment.tutsplus.com,2005:PostPresenter/gamedev-7756How to Create a Custom 2D Physics Engine: Friction, Scene and Jump Table<p>In the first two tutorials in this series, I covered the topics of <a href="https://gamedev.tutsplus.com/tutorials/implementation/create-custom-2d-physics-engine-aabb-circle-impulse-resolution/">Impulse Resolution</a> and <a href="https://gamedev.tutsplus.com/tutorials/implementation/how-to-create-a-custom-2d-physics-engine-the-core-engine/">Core Architecture</a>. Now it's time to add on some of the final touches to our 2D, impulse-based physics engine.<!--more--></p>
<p>The topics we'll look at in this article are:</p>
<ul>
<li>Friction</li>
<li>Scene</li>
<li>Collision Jump Table</li>
</ul>
<p>I highly recommended reading up on <a href="https://gamedev.tutsplus.com/series/custom-game-physics-engine/">the previous two articles in the series</a> before attempting to tackle this one. Some key information in the previous articles is built upon within this article.</p>
<p><em><strong>Note:</strong> Although this tutorial is written using C++, you should be able to use the same techniques and concepts in almost any game development environment.</em><script type="text/javascript" src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script></p>
<hr>
<h2>Video Demo</h2>
<p>Here's a quick demo of what we're working towards in this part:</p>
<figure class="tutorial_image"><iframe width="600" height="338" data-src="https://www.youtube.com/embed/AHGcs-m-vhI" frameborder="0" allowfullscreen></iframe></figure>
<hr>
<h2>Friction</h2>
<p>Friction is a part of collision resolution. Friction always applies a force upon objects in the direction opposite to the motion in which they are to travel.</p>
<p>In real life, friction is an incredibly complex interaction between different substances, and in order to model it, vast assumptions and approximations are made. These assumptions are implied within the math, and are usually something like "the friction can be approximated by a single vector" - similarly to how rigid body dynamics simulates real life interactions by assuming bodies with uniform density that cannot deform.</p>
<p>Take a quick look at the video demo from the first article in this series:</p>
<figure class="tutorial_image"><iframe data-src="https://www.youtube.com/embed/bjmbh_0FzZ0" height="338" width="600" allowfullscreen="" frameborder="0"></iframe></figure>
<p>The interactions between the bodies are quite interesting, and the bouncing during collisions feels realistic. However, once the objects land on the solid platform, they just sort of all press away and drift off the edges of the screen. This is due to a lack of friction simulation.</p>
<h3>Impulses, Again?</h3>
<p>As you should recall from <a href="https://gamedev.tutsplus.com/tutorials/implementation/create-custom-2d-physics-engine-aabb-circle-impulse-resolution/">the first article in this series</a>, a particular value, <code>j</code>, represented the magnitude of an impulse required to separate two penetrating objects during a collision. This magnitude can be referred to as <code>jnormal</code> or <code>jN</code>, as it is used to modify velocity along the collision normal.</p>
<p>Incorporating a friction response involves calculating another magnitude, referred to as <code>jtangent</code> or <code>jT</code>. Friction will be modeled as an impulse. This magnitude will modify the velocity of an object along the negative tangent vector of the collision - in other words, along the friction vector. In two dimensions, solving for this friction vector is straightforward, but in 3D the problem becomes much more complex.</p>
<p>Friction is quite simple, and we can make use of our previous equation for <code>j</code>, except we will replace all instances of the normal <code>n</code> with a tangent vector <code>t</code>.</p>
<p>\[ Equation 1:\\<br>
j = \frac{-(1 + e)((V^{B}-V^{A})\cdot n)}<br>
{\frac{1}{mass^A} + \frac{1}{mass^B}}\]</p>
<p>Replace <code>n</code> with <code>t</code>:</p>
<p>\[ Equation 2:\\<br>
j = \frac{-(1 + e)((V^{B}-V^{A})\cdot t)}<br>
{\frac{1}{mass^A} + \frac{1}{mass^B}}\]</p>
<p>Although only a single instance of <code>n</code> was replaced with <code>t</code> in this equation, once rotations are introduced a few more instances must be replaced besides the single one in the numerator of Equation 2.</p>
<p>Now the matter of how to calculate <code>t</code> arises. The tangent vector is a vector perpendicular to the collision normal that is facing more towards the relative velocity. This might sound confusing - don't worry, I have a diagram!</p>
<p>Below you can see the tangent vector perpendicular to the normal. The tangent vector can either point to the left or the right. To the left would be "more away" from the relative velocity. However, it is defined as the perpendicular to the normal that is pointing "more towards" the relative velocity.</p>
<figure class="tutorial_image"><img class="aligncenter size-full wp-image-7879 lazy-load-image__no-display-style" alt="Vectors of various types within the timeframe of a collision of rigid bodies." data-src="https://cdn.tutsplus.com/gamedev/uploads/2013/05/custom-physics-tangent-normal-rv.png" data-original-url="http://cdn.tutsplus.com/gamedev.tutsplus.com/uploads/2013/05/custom-physics-tangent-normal-rv.png" width="600" height="400"><br>
<figcaption>Vectors of various types within the timeframe of a collision of rigid bodies.</figcaption>
</figure>
<p>As stated briefly earlier, friction will be a vector facing opposite to the tangent vector. This means that the direction in which to apply friction can be directly computed, since the normal vector was found during the collision detection.</p>
<p>Knowing this, the tangent vector is (where <code>n</code> is the collision normal):</p>
<p>\[ V^R = V^{B}-V^{A} \\<br>
t = V^R - (V^R \cdot n) * n \]</p>
<p>All that is left to solve for <code>jt</code>, the magnitude of the friction, is to compute the value directly using the equations above. There are some very tricky pieces after this value is computed that will be covered shortly, so this isn't the last thing needed in our collision resolver:</p>
<pre class="brush: cpp noskimlinks noskimwords">
// Re-calculate relative velocity after normal impulse
// is applied (impulse from first article, this code comes
// directly thereafter in the same resolve function)
Vec2 rv = VB - VA;
// Solve for the tangent vector
Vec2 tangent = rv - Dot( rv, normal ) * normal;
tangent.Normalize( );
// Solve for magnitude to apply along the friction vector
float jt = -Dot( rv, tangent );
jt = jt / (1 / MassA + 1 / MassB);</pre>
<p>The above code follows Equation 2 directly. Again, it's important to realize that the friction vector points in the opposite direction of our tangent vector, and as such we must apply a negative sign when we dot the relative velocity along the tangent. This negative sign flips the relative tangent velocity so that it points in the direction in which friction should be applied.</p>
<h3>Coulomb's Law</h3>
<p><a href="http://en.wikipedia.org/wiki/Friction#Dry_friction">Coulomb's law</a> is the portion of friction simulation that most programmers have trouble with. I myself had to do quite a bit of studying to figure out the correct way of modeling it. The trick is that Coulomb's law is an inequality.</p>
<p>Coulomb friction states:</p>
<p>\[ Equation 3: \\<br>
F_f \leq \mu F_n \]</p>
<p>In other words, the force of friction is always less than or equal to the normal force multiplied by some constant <code>μ</code> (whose value depends on the materials of the objects).</p>
<p>The normal force is just our old <code>j</code> magnitude multiplied by the collision normal. So if our solved <code>jt</code> (representing the force of friction) is less than <code>μ</code> times the normal force, then we can use our <code>jt</code> magnitude as friction. If not, then we must use our normal force times <code>μ</code> instead. This "else" case is a form of clamping our friction below some maximum value, the max being the normal force times <code>μ</code>.</p>
<p>The whole point of Coulomb's law is to perform this clamping procedure. This clamping turns out to be the most difficult portion of friction simulation for impulse-based resolution to find documentation on anywhere - until now, at least! Most white papers I could find on the subject either skipped friction entirely, or stopped short and implemented improper (or non-existent) clamping procedures. Hopefully by now you have an appreciation for understanding that getting this part right is important.</p>
<p>Let's just dish out the clamping all in one go before explaining anything. This next code block is the previous code example with the finished clamping procedure and friction impulse application all together:</p>
<pre class="brush: cpp noskimlinks noskimwords">
// Re-calculate relative velocity after normal impulse
// is applied (impulse from first article, this code comes
// directly thereafter in the same resolve function)
Vec2 rv = VB - VA;
// Solve for the tangent vector
Vec2 tangent = rv - Dot( rv, normal ) * normal;
tangent.Normalize( );
// Solve for magnitude to apply along the friction vector
float jt = -Dot( rv, tangent );
jt = jt / (1 / MassA + 1 / MassB);
// PythagoreanSolve = A^2 + B^2 = C^2, solving for C given A and B
// Used to approximate mu given friction coefficients of each body
float mu = PythagoreanSolve( A->staticFriction, B->staticFriction );
// Clamp magnitude of friction and create impulse vector
Vec2 frictionImpulse;
if(abs( jt ) < j * mu)
  frictionImpulse = jt * tangent;
else
{
  float dynamicFriction = PythagoreanSolve( A->dynamicFriction, B->dynamicFriction );
  frictionImpulse = -j * tangent * dynamicFriction;
}
// Apply
A->velocity -= (1 / A->mass) * frictionImpulse;
B->velocity += (1 / B->mass) * frictionImpulse;</pre>
<p>I decided to use this formula to solve for the friction coefficients between two bodies, given a coefficient for each body:</p>
<p>\[ Equation 4: \\<br>
Friction = \sqrt[]{Friction^2_A + Friction^2_B} \]</p>
<p>I actually saw someone else do this in their own physics engine, and I liked the result. An average of the two values would work perfectly fine to get rid of the use of square root. Really, any form of picking the friction coefficient will work; this is just what I prefer. Another option would be to use a lookup table where the type of each body is used as an index into a 2D table.</p>
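<p>Both mixing strategies are tiny functions; here is a sketch of the two mentioned above (<code>PythagoreanSolve</code> following Equation 4, plus the averaging alternative):</p>

```cpp
#include <cmath>

// Equation 4: mix two per-body friction coefficients as sqrt(a^2 + b^2).
float PythagoreanSolve( float a, float b )
{
    return std::sqrt( a * a + b * b );
}

// Cheaper alternative: a plain average, avoiding the square root.
float AverageSolve( float a, float b )
{
    return (a + b) * 0.5f;
}
```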
<p>It is important that the absolute value of <code>jt</code> is used in the comparison, since the comparison is theoretically clamping raw magnitudes below some threshold. Since <code>j</code> is always positive, it must be flipped in order to represent a proper friction vector in the case where dynamic friction is used.</p>
<h3>Static and Dynamic Friction</h3>
<p>In the last code snippet static and dynamic friction were introduced without any explanation! I'll dedicate this whole section to explaining the difference between and necessity of these two types of values.</p>
<p>Something interesting happens with friction: it requires an "energy of activation" in order for objects to start moving when at complete rest. When two objects are resting upon one another in real life, it takes a fair amount of energy to push on one and get it moving. However once you get something sliding it is often easier to keep it sliding from then on.</p>
<p>This is due to how friction works on a microscopic level. Another picture helps here:</p>
<figure class="tutorial_image"><img class="aligncenter size-full wp-image-7880 lazy-load-image__no-display-style" alt="Microscopic view of what causes energy of activation due to friction." data-src="https://cdn.tutsplus.com/gamedev/uploads/2013/05/custom-physics-energy-activation.png" data-original-url="http://cdn.tutsplus.com/gamedev.tutsplus.com/uploads/2013/05/custom-physics-energy-activation.png" width="600" height="400"><br>
<figcaption>Microscopic view of what causes energy of activation due to friction.</figcaption>
</figure>
<p>As you can see, the small deformities between the surfaces are really the major culprit that creates friction in the first place. When one object is at rest on another, microscopic deformities rest between the objects, interlocking. These need to be broken or separated in order for the objects to slide against one another.</p>
<p>We need a way to model this within our engine. A simple solution is to provide each type of material with two friction values: one for static and one for dynamic.</p>
<p>The static friction is used to clamp our <code>jt</code> magnitude. If the solved <code>jt</code> magnitude is low enough (below our threshold), then we can assume the object is at rest, or nearly at rest, and use the entire <code>jt</code> as an impulse.</p>
<p>On the flipside, if our solved <code>jt</code> is above the threshold, it can be assumed that the object has already broken the "energy of activation", and in such a situation a lower friction impulse is used, which is represented by a smaller friction coefficient and a slightly different impulse computation.</p>
<hr>
<h2>Scene</h2>
<p>Assuming you did not skip any portion of the Friction section, well done! You have completed the hardest part of this entire series (in my opinion).</p>
<p>The <code>Scene</code> class acts as a container for everything involving a physics simulation scenario. It calls and uses the results of any broad phase, contains all rigid bodies, runs collision checks and calls resolution. It also integrates all live objects. The scene also interfaces with the user (as in the programmer using the physics engine).</p>
<p>Here is an example of what a scene structure may look like:</p>
<pre class="brush: cpp noskimlinks noskimwords">
class Scene
{
public:
Scene( Vec2 gravity, real dt );
~Scene( );
void SetGravity( Vec2 gravity );
void SetDT( real dt );
// Creates a body, inserts it into the scene and initializes it (computes mass)
Body *CreateBody( ShapeInterface *shape, BodyDef def );
// Deletes a body from the scene
void RemoveBody( Body *body );
// Updates the scene with a single timestep
void Step( void );
float GetDT( void );
LinkedList *GetBodyList( void );
Vec2 GetGravity( void );
void QueryAABB( CallBackQuery cb, const AABB& aabb );
void QueryPoint( CallBackQuery cb, const Point2& point );
private:
float dt;     // Timestep in seconds
float inv_dt; // Inverse timestep in seconds
LinkedList body_list;
uint32 body_count;
Vec2 gravity;
bool debug_draw;
BroadPhase broadphase;
};</pre>
<p>There is not anything particularly complex about the <code>Scene</code> class. The idea is to allow the user to add and remove rigid bodies easily. The <code>BodyDef</code> is a structure that holds all information about a rigid body, and can be used to allow the user to insert values as a sort of configuration structure.</p>
<p>The other important function is <code>Step()</code>. This function performs a single round of collision checks, resolution and integration. This should be called from within the timestepping loop outlined in <a href="https://gamedev.tutsplus.com/tutorials/implementation/how-to-create-a-custom-2d-physics-engine-the-core-engine/">the second article of this series</a>.</p>
<p>Querying a point or AABB involves checking which objects in the scene actually collide with a given point or AABB. This makes it easy for gameplay-related logic to see how things are placed within the world.</p>
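<p>As a sketch of the idea (using a plain array of AABBs instead of the scene's body list and callbacks, purely for illustration), a point query might look like:</p>

```cpp
#include <vector>

struct Vec2 { float x, y; };

struct AABB
{
    Vec2 min, max;

    bool Contains( const Vec2& p ) const
    {
        return p.x >= min.x && p.x <= max.x &&
               p.y >= min.y && p.y <= max.y;
    }
};

// Return the indices of every AABB that contains the point.
// The real Scene would walk its body list and use the broad phase
// instead of this linear scan.
std::vector<int> QueryPoint( const std::vector<AABB>& bodies, const Vec2& point )
{
    std::vector<int> hits;
    for(int i = 0; i < (int)bodies.size( ); ++i)
        if(bodies[i].Contains( point ))
            hits.push_back( i );
    return hits;
}
```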
<hr>
<h2>Jump Table</h2>
<p>We need an easy way to pick out which collision function should be called, based on the type of two different objects.</p>
<p>In C++ there are two major ways that I am aware of: double dispatch and a 2D jump table. In my own personal tests I found the 2D jump table to be superior, so I'll go into detail about how to implement that. If you're planning to use a language other than C or C++, I am sure an array of functions or functor objects can be constructed similarly to a table of function pointers (which is another reason I chose to talk about jump tables rather than other options that are more specific to C++).</p>
<p>A jump table in C or C++ is a table of function pointers. Indices representing arbitrary names or constants are used to index into the table and call a specific function. The usage could look something like this for a 1D jump table:</p>
<pre class="brush: cpp noskimlinks noskimwords">
enum Animal
{
  Rabbit,
  Duck,
  Lion
};
void (*talk[])( void ) = {
  RabbitTalk,
  DuckTalk,
  LionTalk,
};
// Call a function from the table with 1D virtual dispatch
talk[Rabbit]( ); // calls the RabbitTalk function</pre>
<p>The above code actually mimics what the C++ language itself implements with virtual function calls and inheritance. However, C++ only implements single-dimensional virtual calls. A 2D table can be constructed by hand.</p>
<p>Here is some pseudocode for a 2D jump table to call collision routines:</p>
<pre class="brush: cpp noskimlinks noskimwords">
collisionCallbackArray = {
  { AABBvsAABB,   AABBvsCircle   },
  { CirclevsAABB, CirclevsCircle }
}
// Call a collision routine for collision detection between A and B,
// two colliders, without knowing their exact collider types.
// type can be either AABB or Circle
collisionCallbackArray[A->type][B->type]( A, B )</pre>
<p>And there we have it! The actual types of each collider can be used to index into a 2D array and pick a function to resolve collision.</p>
<p>Note, however, that <code>AABBvsCircle</code> and <code>CirclevsAABB</code> are almost duplicates. This is necessary! The normal needs to be flipped for one of these two functions, and that is the only difference between them. This allows for consistent collision resolution, no matter the combination of objects to resolve.</p>
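<p>One common way to implement the flipped pair is to make one routine a thin wrapper around the other - a sketch with an illustrative <code>Manifold</code> struct (the real engine's manifold holds more data):</p>

```cpp
struct Vec2 { float x, y; };

// Illustrative manifold: just the two colliders and a normal.
struct Manifold
{
    void *A, *B;
    Vec2 normal;
};

// Hypothetical stand-in for the real detection routine.
void AABBvsCircle( Manifold *m )
{
    (void)m; // real detection would fill m->normal here
}

// Swap the colliders, run the existing routine, then flip the
// normal so it still points from A to B.
void CirclevsAABB( Manifold *m )
{
    void *tmp = m->A;
    m->A = m->B;
    m->B = tmp;
    AABBvsCircle( m );
    m->normal = Vec2{ -m->normal.x, -m->normal.y };
}
```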
<hr>
<h2>Conclusion</h2>
<p>By now we have covered a wide range of topics in setting up a custom rigid body physics engine entirely from scratch: collision resolution, friction, and engine architecture. A complete physics engine suitable for many production-level two-dimensional games can be constructed with the knowledge presented in this series so far.</p>
<p>Looking ahead into the future, I plan to write one more article devoted entirely to a very desirable feature: rotation and orientation. Oriented objects are exceedingly attractive to watch interact with one another, and are the final piece that our custom physics engine requires.</p>
<p>Resolution of rotation turns out to be quite simple, though collision detection takes a hit in complexity. Good luck until next time, and please do ask questions or post comments below!</p>
<p>2013-05-15 · Randy Gaul</p>
<h1>How to Create a Custom 2D Physics Engine: The Core Engine</h1>
<p>In this part of my series on creating a custom 2D physics engine for your games, we'll add more features to the impulse resolution we got working in the first part. In particular, we'll look at integration, timestepping, using a modular design for our code, and broad phase collision detection.</p>
<p><script type="text/javascript" src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script></p>
<hr>
<h2>Introduction</h2>
<p>In the last post in this series I covered the topic of <a href="https://gamedev.tutsplus.com/tutorials/implementation/create-custom-2d-physics-engine-aabb-circle-impulse-resolution/">impulse resolution</a>. Read that first, if you haven't already!</p>
<p>Let's dive straight into the topics covered in this article. These topics are all of the necessities of any half-decent physics engine, so now is an appropriate time to build more features on top of the core resolution from the last article.</p>
<ul>
<li><a href="#integration">Integration</a></li>
<li><a href="#timestepping">Timestepping</a></li>
<li>
<a href="#modulardesign">Modular Design</a>
<ul>
<li>Bodies</li>
<li>Shapes</li>
<li>Forces</li>
<li>Materials</li>
</ul>
</li>
<li>
<a href="#broadphase">Broad Phase</a>
<ul>
<li>Contact pair duplicate culling</li>
<li>Layering</li>
</ul>
</li>
<li><a href="#halfspace">Halfspace Intersection Test</a></li>
</ul>
<hr>
<h2><a name="integration">Integration</a></h2>
<p>Integration is simple to implement, and there are many resources online that provide good information on iterative integration. This section will show how to implement a proper integration function, and will point to some locations for further reading, if desired.</p>
<p>First it should be known what acceleration actually is. <a href="http://en.wikipedia.org/wiki/Newton%27s_laws_of_motion">Newton's Second Law</a> states:</p>
<p>\[Equation 1:\\<br>
F = ma\]</p>
<p>This states that the sum of all forces acting upon some object is equal to that object's mass <code>m</code> multiplied by its acceleration <code>a</code>. <code>m</code> is in kilograms, <code>a</code> is in meters per second squared, and <code>F</code> is in Newtons.</p>
<p>Rearranging this equation a bit to solve for <code>a</code> yields:</p>
<p>\[Equation 2:\\<br>
a = \frac{F}{m}\\<br>
\therefore\\<br>
a = F * \frac{1}{m}\]</p>
<p>The next step involves using acceleration to step an object from one location to another. Since a game is displayed as a sequence of discrete frames that create the illusion of animation, the position of each object at each of these discrete steps has to be calculated. For more in-depth coverage of these equations please see: <a href="https://code.google.com/p/box2d/downloads/detail?name=GDC2009_ErinCatto.zip&can=2&q=">Erin Catto's Integration Demo from GDC 2009</a> and <a href="http://www.niksula.hut.fi/~hkankaan/Homepages/gravity.html">Hannu's addition to symplectic Euler for more stability in low FPS environments</a>.</p>
<p>Explicit Euler (pronounced "oiler") integration is shown in the following snippet, where <code>x</code> is position and <code>v</code> is velocity. Please note that <code>1/m * F</code> is acceleration, as explained above:</p>
<pre class="brush: cpp noskimlinks noskimwords">
// Explicit Euler
x += v * dt
v += (1/m * F) * dt</pre>
<div class="tip-shortcode">
<code>dt</code> here refers to delta time. Δ is the symbol for delta, and can be read literally as "change in", or written as Δ<code>t</code>. So whenever you see <code>dt</code> it can be read as "change in time". <code>dv</code> would be "change in velocity".</div>
<p>This will work, and is commonly used as a starting point. However, it has numerical inaccuracies that we can get rid of without any extra effort. Here is what is known as Symplectic Euler:</p>
<pre class="brush: cpp noskimlinks noskimwords">
// Symplectic Euler
v += (1/m * F) * dt
x += v * dt</pre>
<p>Note that all I did was rearrange the order of the two lines of code - see <a href="http://www.niksula.hut.fi/~hkankaan/Homepages/gravity.html">the aforementioned article from Hannu</a>.</p>
<p>This post explains the numerical inaccuracies of Explicit Euler, but be warned that he starts covering RK4, which I don't personally recommend: <a href="http://gafferongames.com/game-physics/integration-basics/">gafferongames.com: Euler Inaccuracy</a>.</p>
<p>These simple equations are all that we need to move all objects around with linear velocity and acceleration.</p>
<hr>
<h2><a name="timestepping">Timestepping</a></h2>
<p>Since games are displayed at discrete time intervals, there needs to be a way of manipulating the time between these steps in a controlled manner. Have you ever seen a game that will run at different speeds depending on what computer it is being played on? That's an example of a game running at a speed dependent on the computer's ability to run the game.</p>
<p>We need a way to ensure that our physics engine only runs when a specific amount of time has passed. This way, the <code>dt</code> that is used within calculations is always the exact same number. Using the exact same <code>dt</code> value in your code everywhere will actually make your physics engine <em>deterministic</em>, and is known as a <em>fixed timestep</em>. This is a good thing.</p>
<p>A deterministic physics engine is one that will always do the exact same thing every time it is run assuming the same inputs are given. This is essential for many types of games where gameplay needs to be very fine-tuned to the physics engine's behavior. This is also essential for debugging your physics engine, as in order to pinpoint bugs the behavior of your engine needs to be consistent.</p>
<p>Let's first cover a simple version of a fixed timestep. Here is an example:</p>
<pre class="brush: cpp noskimlinks noskimwords">
const float fps = 100
const float dt = 1 / fps
float accumulator = 0

// In units of seconds
float frameStart = GetCurrentTime( )

// main loop
while(true)
    const float currentTime = GetCurrentTime( )

    // Store the time elapsed since the last frame began
    accumulator += currentTime - frameStart

    // Record the starting of this frame
    frameStart = currentTime

    while(accumulator > dt)
        UpdatePhysics( dt )
        accumulator -= dt

    RenderGame( )</pre>
<p>This waits around, rendering the game, until enough time has elapsed to update the physics. The elapsed time is recorded, and discrete <code>dt</code>-sized chunks of time are taken from the accumulator and processed by the physics. This ensures that the exact same value is passed to the physics no matter what, and that the value passed to the physics is an accurate representation of actual time that passes by in real life. Chunks of <code>dt</code> are removed from the <code>accumulator</code> until the <code>accumulator</code> is smaller than a <code>dt</code> chunk.</p>
<p>There are a couple of problems that can be fixed here. The first involves how long it takes to actually perform the physics update: What if the physics update takes too long and the <code>accumulator</code> goes higher and higher each game loop? This is called the spiral of death. If this is not fixed, your engine will quickly grind to a complete halt if your physics cannot be performed fast enough.</p>
<p>To solve this, the engine really needs to just run fewer physics updates if the <code>accumulator</code> gets too high. A simple way to do this would be to clamp the <code>accumulator</code> below some arbitrary value.</p>
<pre class="brush: cpp noskimlinks noskimwords">
const float fps = 100
const float dt = 1 / fps
float accumulator = 0

// In units of seconds
float frameStart = GetCurrentTime( )

// main loop
while(true)
    const float currentTime = GetCurrentTime( )

    // Store the time elapsed since the last frame began
    accumulator += currentTime - frameStart

    // Record the starting of this frame
    frameStart = currentTime

    // Avoid spiral of death and clamp dt, thus clamping
    // how many times the UpdatePhysics can be called in
    // a single game loop.
    if(accumulator > 0.2f)
        accumulator = 0.2f

    while(accumulator > dt)
        UpdatePhysics( dt )
        accumulator -= dt

    RenderGame( )</pre>
<p>Now, if a game running this loop ever encounters some sort of stalling for whatever reason, the physics will not drown itself in a spiral of death. The game will simply run a little more slowly, as appropriate.</p>
<p>The next thing to fix is quite minor in comparison to the spiral of death. This loop takes <code>dt</code> chunks from the <code>accumulator</code> until the <code>accumulator</code> is smaller than <code>dt</code>. This works, but there's still a little bit of time remaining in the <code>accumulator</code>, and that poses a problem.</p>
<p>Assume the <code>accumulator</code> is left with 1/5th of a <code>dt</code> chunk every frame. On the sixth frame the <code>accumulator</code> will have enough remaining time to perform one more physics update than all the other frames. This will result in one frame every second or so performing a slightly larger discrete jump in time, and might be very noticeable in your game.</p>
<p>To solve this, the use of <em>linear interpolation</em> is required. If this sounds scary, don't worry - the implementation will be shown. If you want to understand the implementation there are many resources online for linear interpolation.</p>
<pre class="brush: cpp noskimlinks noskimwords">
// linear interpolation for a from 0 to 1
// from t1 to t2
t1 * a + t2 * (1.0f - a)</pre>
<p>Using this we can interpolate (approximate) where we might be between two different time intervals. This can be used to render the state of a game in between two different physics updates.</p>
<p>With linear interpolation, the rendering of an engine can run at a different pace than the physics engine. This allows a graceful handling of the leftover <code>accumulator</code> from the physics updates.</p>
<p>Here is a full example:</p>
<pre class="brush: cpp noskimlinks noskimwords">
const float fps = 100
const float dt = 1 / fps
float accumulator = 0

// In units of seconds
float frameStart = GetCurrentTime( )

// main loop
while(true)
    const float currentTime = GetCurrentTime( )

    // Store the time elapsed since the last frame began
    accumulator += currentTime - frameStart

    // Record the starting of this frame
    frameStart = currentTime

    // Avoid spiral of death and clamp dt, thus clamping
    // how many times the UpdatePhysics can be called in
    // a single game loop.
    if(accumulator > 0.2f)
        accumulator = 0.2f

    while(accumulator > dt)
        UpdatePhysics( dt )
        accumulator -= dt

    const float alpha = accumulator / dt
    RenderGame( alpha )

void RenderGame( float alpha )
    for shape in game do
        // calculate an interpolated transform for rendering
        Transform i = shape.previous * alpha + shape.current * (1.0f - alpha)
        shape.previous = shape.current
        shape.Render( i )</pre>
<p>Here, all objects within the game can be drawn at variable moments between discrete physics timesteps. This will gracefully handle all error and remainder time accumulation. This is actually rendering ever so slightly behind what the physics has currently solved for, but when watching the game run all motion is smoothed out perfectly by the interpolation.</p>
<p>The player will never know that the rendering is ever so slightly behind the physics, because the player will only know what they see, and what they'll see is perfectly smooth transitions from one frame to another.</p>
<p>You may be wondering, "why don't we interpolate from the current position to the next?". I tried this and it requires the rendering to "guess" where objects are going to be in the future. Often, objects in a physics engine make sudden changes in movement, such as during collision, and when such a sudden movement change is made, objects will teleport around due to inaccurate interpolations into the future.</p>
<hr>
<h2><a name="modulardesign">Modular Design</a></h2>
<p>Every physics object needs a few common pieces of data, but the specifics may vary slightly from object to object. A clever way to organize all this data is required, ideally one that takes as little code as possible to achieve. In this case some modular design will be of good use.</p>
<p>Modular design probably sounds a little pretentious or over-complicated, but it does make sense and is quite simple. In this context, "modular design" just means we want to break a physics object into separate pieces, so that we can connect or disconnect them however we see fit.</p>
<h3>Bodies</h3>
<p>A physics body is an object that contains all information about some given physics object. It will store the shape(s) that the object is represented by, mass data, transformation (position, rotation), velocity, torque, and so on. Here is what our <code>body</code> ought to look like:</p>
<pre class="brush: cpp noskimlinks noskimwords">
struct body
{
    Shape *shape;
    Transform tx;
    Material material;
    MassData mass_data;
    Vec2 velocity;
    Vec2 force;
    real gravityScale;
};</pre>
<p>This is a great starting point for the design of a physics body structure. There are some intelligent decisions made here that tend towards strong code organization.</p>
<p>The first thing to notice is that a shape is contained within the body by means of a pointer. This represents a loose relationship between the body and its shape. A body can contain any shape, and the shape of a body can be swapped around at will. In fact, a body can be represented by multiple shapes, and such a body would be known as a "composite", as it would be composed of multiple shapes. (I'm not going to cover composites in this tutorial.)</p>
<figure class="tutorial_image"><img class="aligncenter size-full wp-image-7526 lazy-load-image__no-display-style" alt="Body and Shape interface." data-src="https://cdn.tutsplus.com/gamedev/uploads/2013/04/custom-physics-shape-interface.png" data-original-url="http://cdn.tutsplus.com/gamedev.tutsplus.com/uploads/2013/04/custom-physics-shape-interface.png" width="600" height="400">
<figcaption>Body and Shape interface.</figcaption>
</figure>
<p>The <code>shape</code> itself is responsible for computing bounding shapes, calculating mass based on density, and rendering.</p>
<p>The <code>mass_data</code> is a small data structure to contain mass-related information:</p>
<pre class="brush: cpp noskimlinks noskimwords">
struct MassData
{
    float mass;
    float inv_mass;

    // For rotations (not covered in this article)
    float inertia;
    float inverse_inertia;
};</pre>
<p>It is nice to store all mass- and inertia-related values in a single structure. The mass should never be set by hand - mass should always be calculated by the shape itself. Mass is a rather unintuitive type of value, and setting it by hand would take a lot of tweaking time. It's defined as:</p>
<p>\[ Equation 3:\\Mass = density * volume\]</p>
<p>Whenever a designer wants a shape to be more "massive" or "heavy", they should modify the density of a shape. This density can be used to calculate the mass of a shape given its volume. This is the proper way to go about the situation, as density is not affected by volume and will never change during the runtime of the game (unless specifically supported with special code).</p>
<p>Some examples of shapes such as AABBs and Circles can be found in <a href="https://gamedev.tutsplus.com/tutorials/implementation/create-custom-2d-physics-engine-aabb-circle-impulse-resolution/">the previous tutorial in this series</a>.</p>
<h3>Materials</h3>
<p>All this talk of mass and density leads to the question: where does the density value lie? It resides within the <code>Material</code> structure:</p>
<pre class="brush: cpp noskimlinks noskimwords">
struct Material
{
float density;
float restitution;
};</pre>
<p>Once the values of the material are set, this material can be passed to the shape of a body so that the body can calculate the mass.</p>
<p>The last thing worthy of mention is the <code>gravityScale</code> member. Scaling gravity for different objects is so often required for tweaking gameplay that it is best to just include a value in each body specifically for this task.</p>
<p>Some useful material settings for common material types can be used to construct a <code>Material</code> object from an enumeration value:</p>
<pre class="brush: cpp noskimlinks noskimwords">
Rock       Density : 0.6  Restitution : 0.1
Wood       Density : 0.3  Restitution : 0.2
Metal      Density : 1.2  Restitution : 0.05
BouncyBall Density : 0.3  Restitution : 0.8
SuperBall  Density : 0.3  Restitution : 0.95
Pillow     Density : 0.1  Restitution : 0.2
Static     Density : 0.0  Restitution : 0.4</pre>
<h3>Forces</h3>
<p>There is one more thing to talk about in the <code>body</code> structure. There is a data member called <code>force</code>. This value starts at zero at the beginning of each physics update. Other influences in the physics engine (such as gravity) will add <code>Vec2</code> vectors into this <code>force</code> data member. Just before integration all of this force will be used to calculate acceleration of the body, and be used during integration. After integration this <code>force</code> data member is zeroed out.</p>
<p>This allows for any number of forces to act upon an object whenever they see fit, and no extra code will be required to be written when new types of forces are to be applied to objects.</p>
<p>Let's take an example. Say we have a small circle representing a very heavy object. This small circle is flying around in the game, and it is so heavy that it pulls other objects towards it ever so slightly. Here is some rough pseudocode to demonstrate this:</p>
<pre class="brush: cpp noskimlinks noskimwords">
HeavyObject object

for body in game do
    if(object.CloseEnoughTo( body ))
        object.ApplyForcePullOn( body )</pre>
<p>The function <code>ApplyForcePullOn()</code> could perhaps apply a small force to pull the <code>body</code> towards the <code>HeavyObject</code>, only if the <code>body</code> is close enough.</p>
<figure class="tutorial_image"><img class="aligncenter size-full wp-image-7528 lazy-load-image__no-display-style" alt="Two objects pulled towards a larger one. The pull force is dependent upon distance." data-src="https://cdn.tutsplus.com/gamedev/uploads/2013/04/custom-physics-pull-force.png" data-original-url="http://cdn.tutsplus.com/gamedev.tutsplus.com/uploads/2013/04/custom-physics-pull-force.png" width="600" height="400"><br>
<figcaption>Two objects pulled towards a larger one moving past them. The pull forces are dependent upon their distance to the larger box.</figcaption>
</figure>
<p>It doesn't matter how many forces are added to the <code>force</code> of a body, as they will all add up to a single summed force vector for that body. This means that two forces acting upon the same body can potentially cancel each other out.</p>
<hr>
<h2><a name="broadphase">Broad Phase</a></h2>
<p>In the previous article in this series, collision detection routines were introduced. These routines are actually part of what is known as the "narrow phase". The differences between broad phase and narrow phase can be researched rather easily with a quick search.</p>
<p>(In brief: we use broad phase collision detection to figure out which pairs of objects <em>might</em> be colliding, and then narrow phase collision detection to check whether they actually <em>are</em> colliding.)</p>
<p>I would like to provide some example code along with an explanation of how to implement a broad phase of \(O(n^2)\) time-complexity pair calculations.</p>
<div class="tip-shortcode">\(O(n^2)\) essentially means that the time taken to check every pair of potential collisions will depend on the square of the number of objects. It uses <a href="http://en.wikipedia.org/wiki/Time_complexity">Big-O notation</a>.</div>
<p>Since we're working with pairs of objects, it will be useful to create a structure like so:</p>
<pre class="brush: cpp noskimlinks noskimwords">
struct Pair
{
    body *A;
    body *B;
};</pre>
<p>A broad phase should collect a bunch of possible collisions and store them all in <code>Pair</code> structures. These pairs can then be passed on to another portion of the engine (the narrow phase), and then resolved.</p>
<p>Example broad phase:</p>
<pre class="brush: cpp noskimlinks noskimwords">
// Generates the pair list.
// All previous pairs are cleared when this function is called.
void BroadPhase::GeneratePairs( void )
{
    pairs.clear( )

    // Cache space for AABBs to be used in computation
    // of each shape's bounding box
    AABB A_aabb
    AABB B_aabb

    for(i = bodies.begin( ); i != bodies.end( ); i = i->next)
    {
        for(j = bodies.begin( ); j != bodies.end( ); j = j->next)
        {
            Body *A = &i->GetData( )
            Body *B = &j->GetData( )

            // Skip check with self
            if(A == B)
                continue

            A->ComputeAABB( &A_aabb )
            B->ComputeAABB( &B_aabb )

            if(AABBtoAABB( A_aabb, B_aabb ))
                pairs.push_back( A, B )
        }
    }
}</pre>
<p>The above code is quite simple: Check each body against each body, and skip self-checks.</p>
<h3>Culling Duplicates</h3>
<p>There is one problem from the last section: many duplicate pairs will be returned! These duplicates need to be culled from the results. Some familiarity with sorting algorithms will be required here if you do not have some sort of sorting library available. If you're using C++ then you're in luck:</p>
<pre class="brush: cpp noskimlinks noskimwords">
// Sort pairs to expose duplicates
sort( pairs.begin( ), pairs.end( ), SortPairs );

// Queue manifolds for solving
{
    int i = 0;
    while(i < pairs.size( ))
    {
        Pair *pair = pairs.begin( ) + i;
        uniquePairs.push_front( pair );

        ++i;

        // Skip duplicate pairs by iterating i until we find a unique pair
        while(i < pairs.size( ))
        {
            Pair *potential_dup = pairs.begin( ) + i;

            if(pair->A != potential_dup->B || pair->B != potential_dup->A)
                break;

            ++i;
        }
    }
}</pre>
<p>After sorting all the pairs in a specific order it can be assumed that all pairs in the <code>pairs</code> container will have all duplicates adjacent to one another. Place all unique pairs into a new container called <code>uniquePairs</code>, and the job of culling duplicates is finished.</p>
<p>The last thing to mention is the predicate <code>SortPairs()</code>. This <code>SortPairs()</code> function is what's actually used to do the sorting, and it might look like this:</p>
<pre class="brush: cpp noskimlinks noskimwords">
bool SortPairs( Pair lhs, Pair rhs )
{
if(lhs.A < rhs.A)
return true;
if(lhs.A == rhs.A)
return lhs.B < rhs.B;
return false;
}</pre>
<div class="tip-shortcode">The terms <code>lhs</code> and <code>rhs</code> can be read as "left hand side" and "right hand side". These terms are commonly used to refer to parameters of functions where things can logically be viewed as the left and right side of some equation or algorithm.</div>
<h3>Layering</h3>
<p><em>Layering</em> refers to the act of having different objects never collide with one another. This is key for having bullets fired from certain objects not affect certain other objects. For example, players on one team might want their rockets to harm the enemies but not each other.</p>
<figure class="tutorial_image"><img class="aligncenter size-full wp-image-7531 lazy-load-image__no-display-style" alt="Representation of layering; some objects collide with one another, some do not." data-src="https://cdn.tutsplus.com/gamedev/uploads/2013/04/custom-physics-layering.png" data-original-url="http://cdn.tutsplus.com/gamedev.tutsplus.com/uploads/2013/04/custom-physics-layering.png" width="600" height="400"><br>
<figcaption>Representation of layering; some objects collide with one another, some do not.</figcaption>
</figure>
<p>Layering is best implemented with <em>bitmasks</em> - see <a href="http://stu.mp/2004/06/a-quick-bitmask-howto-for-programmers.html">A Quick Bitmask How-To for Programmers</a> and <a href="http://en.wikipedia.org/wiki/Mask_%28computing%29">the Wikipedia page</a> for a quick introduction, and the Filtering section of <a href="http://www.box2d.org/manual.html">the Box2D manual</a> to see how that engine uses bitmasks.</p>
<p>Layering should be done within the broad phase. Here I'll just paste a finished broad phase example:</p>
<pre class="brush: cpp noskimlinks noskimwords">
// Generates the pair list.
// All previous pairs are cleared when this function is called.
void BroadPhase::GeneratePairs( void )
{
    pairs.clear( )

    // Cache space for AABBs to be used in computation
    // of each shape's bounding box
    AABB A_aabb
    AABB B_aabb

    for(i = bodies.begin( ); i != bodies.end( ); i = i->next)
    {
        for(j = bodies.begin( ); j != bodies.end( ); j = j->next)
        {
            Body *A = &i->GetData( )
            Body *B = &j->GetData( )

            // Skip check with self
            if(A == B)
                continue

            // Only matching layers will be considered
            if(!(A->layers & B->layers))
                continue

            A->ComputeAABB( &A_aabb )
            B->ComputeAABB( &B_aabb )

            if(AABBtoAABB( A_aabb, B_aabb ))
                pairs.push_back( A, B )
        }
    }
}</pre>
<p>Layering turns out to be both highly efficient and very simple.</p>
<hr>
<h2><a name="halfspace">Halfspace Intersection</a></h2>
<p>A <em>halfspace</em> can be viewed as one side of a line in 2D. Detecting whether a point is on one side of a line or the other is a pretty common task, and should be understood thoroughly by anyone creating their own physics engine. It is too bad that this topic isn't really covered anywhere on the internet in a meaningful way, at least from what I've seen - until now, of course!</p>
<p>The general equation of a line in 2D is:</p>
<p>\[Equation 4:\\<br>
General \: form: ax + by + c = 0\\<br>
Normal \: to \: line: \begin{bmatrix}<br>
a \\<br>
b \\<br>
\end{bmatrix}\]</p>
<figure class="tutorial_image"><img class="aligncenter size-full wp-image-6470 lazy-load-image__no-display-style" alt="custom-physics-line2d" data-src="https://cdn.tutsplus.com/gamedev/authors/randy-gaul/custom-physics-line2d.png" data-original-url="http://cdn.tutsplus.com/gamedev.tutsplus.com/authors/randy-gaul/custom-physics-line2d.png" width="600" height="400"></figure>
<p>Note that, despite its name, the normal vector is not necessarily normalized (that is, it doesn't necessarily have a length of 1).</p>
<p>To see whether a point is on a particular side of this line, all that we need to do is plug the point into the <code>x</code> and <code>y</code> variables in the equation and check the sign of the result. A result of 0 means the point is on the line, and positive/negative mean different sides of the line.</p>
<p>That is all there is to it! Knowing this, the distance from a point to the line is actually the result of the previous test. If the normal vector is not normalized, then the result will be scaled by the magnitude of the normal vector.</p>
<hr>
<h2>Conclusion</h2>
<p>By now a complete, albeit simple, physics engine can be constructed entirely from scratch. More advanced topics such as friction, orientation, and dynamic AABB trees may be covered in future tutorials. Please ask questions or provide comments below; I enjoy reading and answering them!</p>
<p>2013-05-01 · Randy Gaul</p>