[Public WebGL] WEBGL_compressed_texture_s3tc_srgb
Wed Jun 15 01:23:39 PDT 2016
On Wed, Jun 15, 2016 at 3:47 AM, Mark Callow <[email protected]> wrote:
> There is a canvas color space proposal
> <https://github.com/junov/CanvasColorSpace/blob/master/CanvasColorSpaceProposal.md> under
> development for both WebGL and 2D canvases. It describes some of the issues
> with the current situation as well as a solution. The proposal is being
> discussed here
> It is imprecise to say that gl_FragColor is in gamma space.
That's correct: you will find no specification of any kind of nonlinear
space in GL, therefore the space is undefined. You will, however, find this:
> Similarly, display of framebuffer contents on a monitor or LCD panel
> (including the transformation of individual framebuffer values by such
> techniques as gamma correction) is not addressed by the GL
OpenGL ES 2.0, Section 2.1.
You will also find this in EGL 1.5:
> EGL itself does not distinguish multiple colorspace models
Although some mention is made of framebuffers (not framebuffer objects)
whose colorspace is to be understood as sRGB, which is logical. However,
EGL contains no mechanism to discover a display's colorspace.
> BTW, this is the first time I have heard it called gamma space.
- http://http.developer.nvidia.com/GPUGems3/gpugems3_ch24.html: "capture
images in gamma (nonlinear) space", though it is referred to through the
rest of the article as "nonlinear [color] space".
- https://en.wikipedia.org/wiki/Gamma_correction: "converting from and
to a gamma colorspace", though more precise terms such as sRGB are often
used.
> I like to think of it as compressed or non-linear space.
These are synonyms for the same thing:
- gamma space
- gamma colorspace
- sRGB colorspace
- nonlinear space
They're all compressed. They're all nonlinear. Use whatever you like to
call it.
> Values written to gl_FragColor, OpenGL ES 2 and WebGL 1, or user-defined
> color outputs, OpenGL ES 3 and WebGL 2, are in linear space.
They're in whatever space is passed through, which is almost always
nonlinear space. The space itself is undefined; we assume it is nonlinear,
and most likely sRGB.
> If the value of FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING is SRGB then
> the values will be compressed to non-linear before being written to the
> framebuffer. Otherwise they are written as is.
This is incorrect. You need to distinguish between framebuffers and
framebuffer objects. WebGL has the latter, but not the former (at least not
explicitly). Framebuffers are constructs intended to communicate with the
compositor/windowing system, which might (but most of the time does not)
have knowledge of a display's colorspace. The EGL specification's provision
for sRGB framebuffers is therefore sound. That provision does not in any
way apply to framebuffer *objects*, which are offscreen and are interacted
with by way of a texture. This texture itself being sRGB has some meaning,
but the distinction has nothing to do with the colorspace itself.
The functional difference between gl_FragColor.rgb = gamma(someColor); and
gl_FragColor.rgb = someColor; (assuming an sRGB texture is attached to the
framebuffer object) is this:
1. Blending is defined as a mathematical operation and two weights, such
as a*(1-x) + b*x (aka alpha blending). If you're adding two values in
nonlinear space, the result is not the same as when you do it in linear
space. A framebuffer object with an sRGB texture attached will decode the
destination nonlinear color to linear, perform blending with the source
linear color, and encode the resulting linear value when writing it to the
nonlinear backing store. An ordinary texture does not do this.
2. MSAA, which uses multiple samples at primitive edges, uses a weighted
average of some kind or other to compute an output color based on a
destination color and a bunch of source values, based on depth coverage.
This is a form of blending, and it suffers the same problem (see the
explanation at point #1).
3. Alpha-to-coverage blends between a destination and a source color
based on the coverage indicated by alpha, and so suffers the same problem
as blending (see point #1).
The functional difference between texture2D(somesRGBtexture, uv).rgb and
degamma(texture2D(ordinaryTexture, uv).rgb) is that:
- For bilinear interpolation, 4 texels are consulted and mixed
(a*(1-x) + b*x) based on the fractional distance of the UV coordinate from
the texel centroid. If the texture is sRGB, the consulted texels are
decoded into linear space first, and then mixed. This is not the case with
an ordinary texture, so the result differs (as explained in #1 before).
- Mipmapping consults two levels of a texture and mixes them together;
this suffers the same problem as explained in #1 before.
- Anisotropic filtering performs additional mixing (most often
implemented as independent mip levels along dFdx and dFdy), which suffers
the same problem as explained in point #1 before.
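The bilinear case can be sketched the same way. The texel values here are
made up: a stored black (0.0) next to a stored white (1.0), sampled exactly
halfway between their centers; srgbDecode/srgbEncode are the standard sRGB
transfer functions:

```javascript
// Standard sRGB transfer functions (illustrative, not WebGL API calls).
function srgbDecode(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}
function srgbEncode(c) {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}

const t0 = 0.0, t1 = 1.0; // stored (nonlinear) texel values

// Ordinary texture: encoded values are mixed directly. The stored result,
// 0.5, decodes to only ~0.214 linear: much darker than the true midpoint.
const mixedEncoded = 0.5 * t0 + 0.5 * t1;

// sRGB texture: texels are decoded to linear first, then mixed.
const mixedLinear = 0.5 * srgbDecode(t0) + 0.5 * srgbDecode(t1);

console.log(mixedEncoded);                       // 0.5
console.log(srgbEncode(mixedLinear).toFixed(3)); // "0.735", the correct gray
```

This black/white case is the worst case, but every texel pair with
differing values is skewed in the same direction.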
You can generalize this with a relatively simple statement. The main
advantage (ideally, but not guaranteed) of sRGB textures attached to
framebuffer *objects* is that:
- All forms of mixing colors into and out of sRGB textures are done in
the correct order: mix -> encode -> store / lookup -> decode -> mix
- All forms of manual gamma handling perform the operations in the
incorrect order: encode -> mix -> store / lookup -> mix -> decode
> WebGL 1 & OpenGL ES 2 do not support sRGB rendering so the values are
> written as is.
The *core* WebGL 1 / ES 2 specification does not. The extension EXT_sRGB
however does (and it is available in WebGL 1). The specification of this
extension states:
> If the currently bound texture's internal format is one of SRGB_EXT or
> SRGB_ALPHA_EXT the red, green, and blue components are converted from an
> sRGB color space to a linear color space as part of filtering described in
> sections 3.7.7 and 3.7.8. Any alpha component is left unchanged. Ideally,
> implementations should perform this color conversion on each sample prior
> to filtering but implementations are allowed to perform this conversion
> after filtering (though this post-filtering approach is inferior to
> converting from sRGB prior to filtering).
The definitions for blending are stricter and do state that the conversion
has to occur in the right order.
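The pre-filtering vs. post-filtering latitude the extension grants is easy
to quantify. A sketch with made-up single-channel texel values and bilinear
weights for a sample point at fractional offset (0.25, 0.25); srgbDecode is
the standard sRGB-to-linear transfer function:

```javascript
// Standard sRGB-to-linear transfer function (illustrative).
function srgbDecode(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

const texels = [0.1, 0.9, 0.3, 0.7];        // stored (nonlinear) values
const w = [0.5625, 0.1875, 0.1875, 0.0625]; // (1-fx)(1-fy), fx(1-fy), ...

const filter = (vals) => vals.reduce((sum, v, i) => sum + v * w[i], 0);

// Ideal: convert each sample to linear space prior to filtering.
const preFilter = filter(texels.map(srgbDecode));

// Allowed, but inferior: filter the encoded values, then convert.
const postFilter = srgbDecode(filter(texels));

console.log(preFilter.toFixed(4));  // "0.1950"
console.log(postFilter.toFixed(4)); // "0.0862": the two orders disagree
```

So two conforming implementations may legitimately return filtered values
that differ by a factor of two; only the pre-filtering order is correct in
the sense of point #1.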
> The framebuffer is therefore considered to hold linear data and its values
> should be compressed to non-linear (a.k.a. gamma corrected) by the browser
> or window/operating system before being sent to the display,
Incorrect. The browser operates colorspace-agnostically and, truthfully, so
do operating systems. There is the notion of a color profile, but this is
an additional user-defined transformation that cannot be queried by
applications and is applied automatically regardless of (and independently
of) the source value's colorspace, of which the OS knows nothing.
> assuming a non-linear display.
There are no linear displays; see the history lesson from a few messages
back.
> Unfortunately browsers differ in how they handle this
They do not differ; they are agnostic, which means they're not performing
any conversion.
> and, especially on mobile devices,
Mobile devices are no different from any other machine in that context;
they might have color profiles for their display, but they are
source-agnostic as well.
> it is unclear which if any OSes apply gamma correction.
None do. Color profiles are applied (brightness, contrast, gamut, etc.);
colorspace conversion on output is not.
> Only in cases where no gamma correction is applied can the values written
> to gl_FragColor be considered to be in non-linear space.
If gl_FragColor targets a framebuffer (not an object), you will always have
to apply gamma correction. There are two mechanisms for this, one of
which, the sRGB framebuffer, does not exist in WebGL 1 or 2.
> The canvas color space proposal together with sRGB rendering support is
> intended to resolve these and other color space issues.
It will not solve the underlying issue that everything sent to a display is
sent in sRGB, because the display is forced to accept sRGB only; that's how
it came into existence. None of the wire protocols in existence for talking
to monitors have any provision to discover a display's gamut and gamma. So
the idea of using "the full gamut" of a display is illusory: what happens
in practice is that a limited sRGB colorspace, expressed in 8 bits per
channel, gets stretched out into the display's colorspace, which leads to
banding and a reduction in color precision. A true solution to this problem
cannot be achieved by a new canvas specification. And in any case, the
browser operates colorspace-agnostically, and so does the OS, and so does
the display, which in practice means the canvas colorspace is irrelevant,
because it's always treated as sRGB anyway.