[Public WebGL] WEBGL_debug_shader_precision extension proposal

Rob Manson [email protected]
Wed Nov 12 02:09:09 PST 2014


First, I think this discussion has slipped sideways into the wrong 
thread, so I've replied to the list only to stop that continuing. I 
think replies to the main thread on this topic would be best if possible.

Second, we're really happy to provide the background on why we've 
developed the proposal as it is. We really value the feedback and 
constructive criticism provided so far, and I hope this can help resolve 
some of the angst it seems to be generating 8/

Our first idea was simply to use RGBA/UNSIGNED_SHORT_4_4_4_4, but the 
initial feedback we got (which made good sense to us) was that a 
3-channel data structure like RGB/UNSIGNED_BYTE would save a number of 
steps in the conversion/unpacking process and would therefore be more 
efficient.

We created some tests and ran them across various browsers, and during 
that process also realised that we could get better performance and 
channel usage with RGB/UNSIGNED_SHORT_5_6_5 - already a Uint16Array, 
and only 3 channels instead of 4.
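
To make that concrete, here is a rough sketch (illustrative only, not 
text from the draft) of the upload path, using a raw Uint16Array of 
depth samples for simplicity - the extension itself would source the 
data from a <video> element. The 5_6_5 type splits each 16-bit value 
into R (top 5 bits), G (middle 6) and B (bottom 5):

    // Upload: the raw 16-bit depth buffer passes through unchanged.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, width, height, 0,
                  gl.RGB, gl.UNSIGNED_SHORT_5_6_5, depthData);

    // In the shader, the 16-bit value can then be reassembled, e.g.:
    //   vec3 t = texture2D(depthTex, texcoord).rgb;
    //   float d = floor(t.r * 31.0 + 0.5) * 2048.0  // top 5 bits
    //           + floor(t.g * 63.0 + 0.5) * 32.0    // middle 6 bits
    //           + floor(t.b * 31.0 + 0.5);          // bottom 5 bits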

And this is what we then took to this wider group for more feedback and 
suggestions.

As far as I'm aware, using LUMINANCE_ALPHA/UNSIGNED_BYTE is not as 
effective, because LUMINANCE_ALPHA is internally converted to RGBA with 
the luminance spread across the R, G and B channels. But please correct 
me if this is not right?

Our primary goal here was simply to find a pragmatic approach that could 
easily be developed against WebGL 1.x right now - so people could start 
using Depth Camera Streams via WebGL in the "very near future".

Then in WebGL 2.x we could switch to RED_INTEGER and be back on 
standard WebGL with no extension at all. But waiting for only that 
option seemed like a long delay when a feasible stop-gap approach was 
available.
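
For illustration, the WebGL 2.x path would be roughly the following (a 
sketch only, again assuming a plain Uint16Array of depth samples):

    // Core WebGL 2: a single-channel 16-bit unsigned integer texture.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.R16UI, width, height, 0,
                  gl.RED_INTEGER, gl.UNSIGNED_SHORT, depthData);
    // Integer textures aren't filterable, so NEAREST sampling is
    // required, and GLSL ES 3.00 reads them through a usampler2D:
    //   uint depth = texture(depthSampler, texcoord).r;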

If there are other options that could help us meet our primary goal, 
then we'd definitely like to hear about them.

Yet it does seem to me that the WEBGL_texture_from_depth_video extension 
does define a very minimal "novel behavior of a piece of software". At 
the moment there is no way to upload a <video> frame that includes a 
depth data track. This extension enables that, which is what makes it 
novel.
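
Roughly speaking (my sketch here, not wording from the extension 
draft), the per-frame upload would then look like any other video 
upload, with the depth track landing in the 5_6_5 texture:

    // Sketch: upload the current frame of a depth <video> element.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB,
                  gl.RGB, gl.UNSIGNED_SHORT_5_6_5, videoElement);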

Florian, it sounds like you're saying that our only option is that 
users/devs really have to wait for WebGL 2.x to access Depth Camera 
Tracks - am I understanding that correctly?

roBman


On 12/11/14 6:19 PM, Florian Bösch wrote:
> A specification's purpose is to describe a novel behavior of a piece of 
> software/hardware, and how a user has to use those new capabilities. I 
> believe that every specification for OpenGL, OpenGL ES and WebGL 
> follows this idea, and I believe every extension does too (in any 
> case, there is no functional difference between an extension and the 
> core specification, since an extension modifies the specification). 
> And so, it follows that any extension has to pass the same muster as 
> if it were to be included in the specification (because that's where 
> it might end up in due time).
>
> The fixation on the 5-6-5 format primarily has one motivation: depth 
> data happens to be 16-bit for arbitrary reasons, and 5-6-5 happens to 
> be a 16-bit internal format, so the first idea anyone would have is to 
> just mash them together and call it a day; no wait, draft an extension 
> for it, too. It's an "it happens to work" hack dressed up as an 
> extension.
>
> There are in fact other 16-bit formats that depth could be packed into:
>
>   * unsigned byte and luminance alpha
>   * rgba and unsigned short 4-4-4-4
>   * rgba and unsigned short 5-5-5-1
>
> And many more coming with WebGL 2.0
>
> Why 5-6-5 should be superior to any of the others is a mystery to me. 
> Personally, I think luminance alpha is more convenient, because then 
> conversion to a 0-1 scaled depth can be done much more simply:
>
>     // luminance (high byte) is replicated into .x; alpha (low byte) is .w
>     vec2 texel = texture2D(mydepth, texcoord).xw;
>     float depth = texel.x + texel.y/255.0; // approximately 0-1 scaled
>
>
> The real problem is, this isn't a technical specification about a 
> behavior. This is a specification of how the USER has to behave. It's 
> a leaky abstraction. To my knowledge, there isn't any piece of 
> specification or extension that resorts to a similar hack. It's a 
> precedent.
>
> A WebGL (or OpenGL or ES) extension modifies the specification. Let's 
> suppose you were to encode that behavior in the core specification 
> (that would never fly). But what would a core functionality likely do? 
> Well, let's assume that, for whatever reason, you're lacking an 
> appropriate internal format for the kind of data you'd like to store.
>
> Introduce a new internal format (gl.DEPTH) and a new external format 
> (gl.UNSIGNED_SHORT_DEPTH), and modify the gl.tex(Sub)Image2D call to 
> accept these parameters, such that you've fully described a useful 
> internal format (a mipmappable, interpolatable, mixable, blendable, 
> coveragable, renderable gl.DEPTH) and fully specified a transfer 
> format (an unsigned short, 16bpp/16bpc). You might introduce a new 
> GLSL sampler type (samplerDepth) and a new texturing function 
> (textureDepth), although that's a bit frowned upon I think; instead 
> the behavior would probably be that texture2D just returns the depth 
> on all channels of the returned vec4. Afaik, this is the kind of thing 
> that could pass muster for inclusion in the core specification. And in 
> fact it has: all of those things have passed muster previously.
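>
> To make the shape of that concrete, a hypothetical upload (none of 
> these enums exist; that's the point) might look like:
>
>     // Hypothetical core API; gl.DEPTH and gl.UNSIGNED_SHORT_DEPTH
>     // do not exist today.
>     gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH, width, height, 0,
>                   gl.DEPTH, gl.UNSIGNED_SHORT_DEPTH, depthData);
>     // texture2D() would then return the depth on all four channels.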
>
> So I'm strictly against an extension that encodes an "it happens to 
> work" behavior, one that is less a technical specification than an 
> external format description plus a prescription for how a user has to 
> behave. That's not an extension; that's a hack.
>
>
> On Wed, Nov 12, 2014 at 3:26 AM, Jeff Gilbert <[email protected]> wrote:
>
>     I agree with Gregg.
>
>     I will add that if it's something that we feel is important enough
>     as a working group, we could canonize the library and maintain it
>     as part of our github repo.
>
>     -Jeff
>
>     ----- Original Message -----
>     From: "Gregg Tavares" <[email protected]>
>     To: "Mark Callow" <[email protected]>
>     Cc: "Florian Bösch" <[email protected]>, "Jeff Gilbert"
>     <[email protected]>, "Olli Etuaho" <[email protected]>, "Kenneth
>     Russell" <[email protected]>, "public webgl" <[email protected]>
>     Sent: Tuesday, November 11, 2014 6:16:19 PM
>     Subject: Re: [Public WebGL] WEBGL_debug_shader_precision extension
>     proposal
>
>     If this works just fine as a JavaScript library, why add it as an
>     extension?
>
>     As an extension what it does has to be precisely specified.
>     As an extension it can't be upgraded without making and proposing a
>     new extension.
>     As an extension it passes all the work to the browser vendors, who
>     each need to implement it.
>
>     As a library it can be updated and extended whenever
>     As a library it only needs one implementation and everyone can use it
>     As a library it can do whatever it wants, no spec needed
>
>     From the discussion above it doesn't seem like it needs to be an
>     extension. It doesn't seem like there is some specific OpenGL
>     functionality that needs to be exposed to make it possible. It also
>     doesn't sound like a speed issue, given that the resulting shaders
>     are up to 10x slower.
>
>     Also, as a library it should be easy to patch it in the same way
>     the WebGL Inspector patches itself in, or the way various other
>     libraries patch things like
>     WebGLRenderingContext.prototype.compileShader.
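>
>     A minimal sketch of that kind of patch (the actual rewriting a
>     library would do is omitted):
>
>         var origCompileShader =
>             WebGLRenderingContext.prototype.compileShader;
>         WebGLRenderingContext.prototype.compileShader = function(shader) {
>             // A library could inspect or rewrite the shader source here
>             // (captured via a similar patch on shaderSource) before
>             // delegating to the original implementation.
>             origCompileShader.call(this, shader);
>         };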
>
>
>
>
>     On Tue, Nov 11, 2014 at 2:23 PM, Mark Callow <[email protected]> wrote:
>
>     >
>     >
>     > > On Nov 12, 2014, at 7:19 AM, Florian Bösch <[email protected]> wrote:
>     > >
>     > > What's wrong with it is that it does not allow you to isolate
>     > > an issue with any of your shader code buried in use somewhere
>     > > in your application.
>     > >
>     >
>     > You have to find either the buried shader code or the buried call
>     > to compileShader for that shader. These efforts may or may not be
>     > much different, depending on the structure of your code. I would
>     > not object to supporting both an API toggle and a pragma, getting
>     > the best of both worlds.
>     >
>     > Regards
>     >
>     >     -Mark
>     >
>     >
>
>
>

-----------------------------------------------------------
You are currently subscribed to [email protected]
To unsubscribe, send an email to [email protected] with
the following command in the body of your email:
unsubscribe public_webgl
-----------------------------------------------------------




