[Public WebGL] WEBGL_debug_shader_precision extension proposal

Florian Bösch [email protected]
Mon Nov 17 10:20:25 PST 2014

Btw., how would you interpret a statement like "lowp double foobar;" ?

On Mon, Nov 17, 2014 at 7:14 PM, Florian Bösch <[email protected]> wrote:

> Yes, but in C/C++ the modifiers are part of the type, and you'd do things
> like typedef unsigned int uint; or typedef long long iquad; and of course
> short is 2 bytes, int is 4, and long long is 8. Floats could, in theory, be
> something other than IEEE, but they mostly aren't. The 4 bytes that
> constitute a float have been stuck into files and sockets for ages now;
> it's common practice to do it. You might run into byte-order trouble, which
> is easy enough to resolve, but I've never, not even once, run into a case
> of trying to load a 4-byte float that one machine produced and another
> machine didn't understand.
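The round-tripping of a float's four bytes described above can be sketched with typed arrays; a minimal illustration (not part of the original mail):

```javascript
// Reinterpret an IEEE-754 single's four bytes, as one would when writing
// it to a file or socket. DataView lets us pick the byte order explicitly.
const buf = new ArrayBuffer(4);
const view = new DataView(buf);

view.setFloat32(0, 1.5, /* littleEndian = */ true);
const bytes = Array.from(new Uint8Array(buf)); // the 4 bytes of 1.5

// Reading the same bytes back with the wrong byte order is the classic
// "byte order trouble" -- easy to cause, easy to fix:
const wrong = view.getFloat32(0, false); // big-endian read of LE bytes
const right = view.getFloat32(0, true);  // 1.5 again
```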
> Strangely, though, GLSL has introduced double and uint (and their vector
> types), but you cannot typedef to get type consistency. For instance, you
> cannot write typedef mediump float half; That would be useful, but GLSL
> doesn't have a half data type (nor a short, or a byte), and anyway it
> doesn't have typedef either. Which is strange, because it also contains
> packing/unpacking instructions that explicitly refer to exactly such
> formats. The modifiers apply globally for a type, or to the variable being
> modified, not to the type being used.
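The packing built-ins alluded to here (e.g. GLSL ES 3.0's packHalf2x16) do exactly the half-float conversion for which no GLSL type exists. A rough JavaScript equivalent, covering normal values only (subnormals, infinities and NaN are deliberately omitted), might look like:

```javascript
// Convert a float32 to its 16 half-float bits, normal values only.
// This mirrors what GLSL's packHalf2x16 does per component; a sketch,
// not a spec-complete implementation.
function floatToHalfBits(f) {
  const view = new DataView(new ArrayBuffer(4));
  view.setFloat32(0, f);
  const bits = view.getUint32(0);
  const sign = (bits >>> 16) & 0x8000;
  const exp = ((bits >>> 23) & 0xff) - 127 + 15; // rebias 8-bit -> 5-bit exponent
  const mant = (bits >>> 13) & 0x3ff;            // keep top 10 mantissa bits
  return sign | (exp << 10) | mant;
}

// packHalf2x16 packs two such halves into one uint:
function packHalf2x16(a, b) {
  return ((floatToHalfBits(b) << 16) | floatToHalfBits(a)) >>> 0;
}
```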
> Anyway, it would be tremendously useful for the consistency of a program
> if a programmer could rely on the numerical implementation, and that's
> what I'm bemoaning.
> On Mon, Nov 17, 2014 at 6:23 PM, Tibor Ouden, den <[email protected]> wrote:
>> Paragraph 4.5.2 of the OpenGL ES standard specifies the minimum
>> precisions for the qualifiers, but in principle they could all have the
>> same precision.
>> Isn't this similar to how it is done in C/C++: short int, int, long
>> int, long long int?
>> In principle those could all have the same number of bits, and hardware
>> vendors are free to implement them however they like, as long as each
>> type is at least as big as the preceding one.
>> But there is no way to query the bit sizes of the types in C/C++ (you
>> can figure them out with some tests); at least in WebGL you can query
>> the bit size of the significand and the exponent
>> (although most of the time the reported significand bit count is one
>> less than what it should be).
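The WebGL query meant here is getShaderPrecisionFormat. A minimal sketch of reading and describing the reported significand and exponent bits (the canvas/context setup in the comment is an assumption):

```javascript
// Hedged sketch: summarizing the precision format WebGL reports.
// `precision` is the significand bit count; rangeMin/rangeMax are the
// base-2 logs bounding the representable magnitudes.
function describeFormat(fmt) {
  return "significand: " + fmt.precision + " bits, " +
         "exponent range: 2^-" + fmt.rangeMin + " .. 2^" + fmt.rangeMax;
}

// In a page with a WebGL context (context setup is assumed):
// const gl = document.createElement("canvas").getContext("webgl");
// const fmt = gl.getShaderPrecisionFormat(gl.FRAGMENT_SHADER, gl.MEDIUM_FLOAT);
// console.log(describeFormat(fmt));
```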
>> 2014-11-17 17:42 GMT+01:00 Florian Bösch <[email protected]>:
>>> I would, if at all feasible, prefer it to be a library, because then it
>>> can be run across every browser. That is fairly important, because the
>>> architecture of the backends, through ANGLE and in IE11, is probably
>>> quite different and may lead to different results.
>>> I'd rather see this happen in some fashion, no matter which fashion,
>>> than not at all.
>>> As a side note, I wanted to comment on precision qualifiers. It has
>>> always struck me that the OpenGL specification is deficient in its
>>> definition of numerical types, in that it doesn't specify which type has
>>> what precision or which standard it has to implement. I understand that
>>> this is largely historical, and that the precision modifiers came later,
>>> as the GL API was adapted for mobiles. But the combination of no
>>> implementation guarantee, no precision guarantee, no guarantee about the
>>> numerical implementation, and "types" whose precision can change
>>> depending on a specifier strikes me as a particularly bad idea. I cannot
>>> recall any statically typed language that has followed the same logic
>>> (though there might be one), and that is probably because most designers
>>> of statically typed languages thought it would have been a wonky and bad
>>> idea.