Compact Normal Storage for small G-Buffers

Various deferred shading/lighting approaches and image post-processing effects need to store normals as part of their G-buffer. Let's figure out a compact storage method for view space normals. In my case, the main target is a minimalist G-buffer, where depth and normals are packed into a single 32 bit (8 bits/channel) render texture. I try to minimize both the encoding error and the shader cycles spent on encode/decode.

Of course, 8 bits/channel storage for normals may not be enough for shading, especially if you want specular (low precision and quantization lead to specular "wobble" when the camera or objects move). However, everything below should Just Work (tm) for 10 or 16 bits/channel integer formats. For 16 bits/channel half-float formats, some of the computations are not necessary (e.g. bringing normal values into the 0..1 range).

The original version of this article had a mistake: the encoding shaders did not normalize the incoming per-vertex normal. This made the quality evaluation results somewhat wrong. Also, if the normal is assumed to be normalized, then three methods in the original article (Sphere Map, Cry Engine 3 and Lambert Azimuthal) are in fact completely equivalent. The old version is still available for the sake of integrity of the internets.

Here is a small Windows application I used to test everything below: NormalEncodingPlayground.zip (4.8MB, source included). It requires a GPU with Shader Model 3.0 support. When it writes fancy shader reports, it expects AMD's GPUShaderAnalyzer and NVIDIA's NVShaderPerf to be installed. The source code should build with Visual C++ 2008.

Baseline: just to set a basis, store all three components of the normal. This is not suitable for our quest, but I include it to evaluate the "base" encoding error, which here comes only from quantization to 8 bits per component. (Sketches of this and each of the following methods are shown after this section.)

Spherical coordinates: since we know the normal is unit length, we can store just two angles. Suggested by Pat Wilson of Garage Games: GG blog post. Other mentions: MJP's blog, GarageGames thread, Wolf Engel's blog, gamedev.net forum thread.

Sphere map transform: spherical environment mapping (indirectly) maps a reflection vector to a texture coordinate in the [0..1] range. The reflection vector can point away from the camera, just like our view space normals. Bingo! See the Siggraph 99 notes for the sphere map math; the normal we want to encode is R, and the resulting values are (s,t). If we assume the incoming normal is normalized, then several methods derived elsewhere end up being exactly equivalent to it.

Stereographic projection: use stereographic projection (Wikipedia link), plus a rescale so that the "practically visible" range of normals maps into the unit circle (regular stereographic projection maps the sphere to a circle of infinite size). In my tests a scaling factor of 1.7777 produced the best results; in practice it depends on the FOV used and on how much you care about normals that point away from the camera.

Storing X&Y, reconstructing Z: if we compute view space normals per-pixel, then the Z component of a normal can never be negative. So just store X&Y, and compute Z. Source.
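A minimal HLSL sketch of the baseline encoding, assuming the usual remap from [-1..1] to [0..1] so the values fit an 8-bit unorm render target (function names are illustrative, not taken from the test app):

    // Baseline: store all three components, remapped to [0..1].
    half4 EncodeNormalXYZ(half3 n)
    {
        n = normalize(n);                 // normalize the interpolated per-vertex normal
        return half4(n * 0.5 + 0.5, 0.0);
    }

    half3 DecodeNormalXYZ(half4 enc)
    {
        return enc.xyz * 2.0 - 1.0;       // back to [-1..1]; only quantization error remains
    }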
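A sketch of the spherical coordinates idea, assuming we store the longitude angle (via atan2) together with Z, both remapped to [0..1]; this is one of several possible parameterizations, and the names are mine:

    #define kPI 3.14159265

    half4 EncodeNormalSpherical(half3 n)
    {
        n = normalize(n);
        // longitude in [-1..1] plus z, remapped to [0..1]
        half2 spherical = half2(atan2(n.y, n.x) / kPI, n.z);
        return half4(spherical * 0.5 + 0.5, 0.0, 0.0);
    }

    half3 DecodeNormalSpherical(half4 enc)
    {
        half2 ang = enc.xy * 2.0 - 1.0;
        half sinPhi, cosPhi;
        sincos(ang.x * kPI, sinPhi, cosPhi);
        half r = sqrt(1.0 - ang.y * ang.y);   // radius in the XY plane
        return half3(cosPhi * r, sinPhi * r, ang.y);
    }

Note the trigonometry in both directions; this tends to cost more ALU than the other methods.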
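A sketch of a sphere map style transform for a normalized view space normal. This is one common algebraic form of it, not necessarily the exact shader from the test app; as noted above, with a normalized input it coincides with the Cry Engine 3 and Lambert Azimuthal variants:

    half4 EncodeNormalSpheremap(half3 n)
    {
        n = normalize(n);
        half f = sqrt(8.0 * n.z + 8.0);       // degenerates only for n = (0,0,-1)
        return half4(n.xy / f + 0.5, 0.0, 0.0);
    }

    half3 DecodeNormalSpheremap(half4 enc)
    {
        half2 fenc = enc.xy * 4.0 - 2.0;
        half f = dot(fenc, fenc);
        half g = sqrt(1.0 - f / 4.0);
        return half3(fenc.x * g, fenc.y * g, 1.0 - f / 2.0);
    }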
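A sketch of stereographic projection with the rescale described above; the 1.7777 constant is the empirical value from the text, and where exactly the rescale is applied is an illustrative choice:

    half4 EncodeNormalStereo(half3 n)
    {
        n = normalize(n);
        half scale = 1.7777;                    // empirical rescale, depends on FOV
        half2 enc = n.xy / (n.z + 1.0) / scale; // stereographic projection + rescale
        return half4(enc * 0.5 + 0.5, 0.0, 0.0);
    }

    half3 DecodeNormalStereo(half4 enc)
    {
        half scale = 1.7777;
        half2 p = (enc.xy * 2.0 - 1.0) * scale; // back onto the projection plane
        half g = 2.0 / (1.0 + dot(p, p));       // inverse stereographic projection
        return half3(p * g, g - 1.0);
    }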
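A sketch of storing only X&Y and reconstructing Z, under the stated assumption that the view space Z component is never negative:

    half4 EncodeNormalXY(half3 n)
    {
        n = normalize(n);
        return half4(n.xy * 0.5 + 0.5, 0.0, 0.0);
    }

    half3 DecodeNormalXY(half4 enc)
    {
        half2 xy = enc.xy * 2.0 - 1.0;
        // assumes z >= 0; max() guards against quantization pushing x*x+y*y above 1
        half z = sqrt(max(0.0, 1.0 - dot(xy, xy)));
        return half3(xy, z);
    }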


Last Modified: April 18, 2016 @ 11:02 pm