[Public WebGL] gl.sizeInBytes
Sun Jan 10 23:44:26 PST 2010
It is not so much a question of "does a GL implementation have 64-bit
GL_FLOATs" as of "the WebGL spec explicitly states the size of its types" --
and the latter rules out the notion of "implementation dependent" entirely.
Vlad's right: even if gl.sizeInBytes(GL_FLOAT) did return 8 (double
precision), there would be no efficient, portable way to buffer the data.
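To make that concrete, here is a minimal sketch (positionLoc and texcoordLoc
are assumed attribute locations, and the interleaved layout is invented for
illustration). Every byte offset below bakes in the 4-byte FLOAT that section
5.13.3 guarantees:

    // Interleaved vertices: 3 position floats + 2 texcoord floats each.
    var data = new WebGLFloatArray([
        0.0, 0.0, 0.0,   0.0, 0.0,
        1.0, 0.0, 0.0,   1.0, 0.0,
        0.0, 1.0, 0.0,   0.0, 1.0
    ]);
    gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);

    // Stride and offsets are byte counts and assume FLOAT is exactly 4 bytes.
    // If sizeInBytes(FLOAT) could return 8, every offset here would be wrong.
    var stride = 5 * 4;
    gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, stride, 0);
    gl.vertexAttribPointer(texcoordLoc, 2, gl.FLOAT, false, stride, 3 * 4);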
On Sun, Jan 10, 2010 at 7:10 PM, Vladimir Vukicevic <[email protected]> wrote:
> On 1/10/2010 4:24 PM, Chris Marrin wrote:
>> On Jan 10, 2010, at 12:44 PM, Vladimir Vukicevic wrote:
>>> On 1/10/2010 12:30 PM, Patrick Baggett wrote:
>>>> In section 5.13.3, the first table defines the different types of
>>>> WebGL[Type]Arrays, and in that process, it defines the size, down to the
>>>> bit, of the elements inside each array. Since these types are already
>>>> completely specified, what is the purpose of WebGLContext::sizeInBytes()?
>>>> Or to put it another way, how would an app handle sizeInBytes(FLOAT) == 8
>>>> if 5.13.3 defines WebGLFloatArray to be 32-bit floating point values?
>>>> Wouldn't it make more sense for WebGL[Type]Arrays to have elements of size
>>>> sizeInBytes([Type])? Or keep 5.13.3 and drop sizeInBytes() entirely?
>>> sizeInBytes is intended to be a convenience function, so that you can
>>> write 100 * gl.sizeInBytes(gl.FLOAT) instead of having a magic "4" there.
>>> It will always return the same size values that are listed in 5.13.3. But
>>> I do think that we can do without it; if anything, we could just define
>>> constants on the gl object, e.g. gl.FLOAT_SIZE, or perhaps
>>> WebGLFloatArray.ELEMENT_SIZE or something (though the latter is pretty [...]
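(In code, the convenience Vlad describes amounts to the following;
gl.FLOAT_SIZE is his suggested constant, not an existing API:)

    // Today: a magic "4", valid only because 5.13.3 pins FLOAT at 32 bits.
    var byteLength = 100 * 4;

    // With the convenience call: same fixed value, but self-documenting.
    var byteLength2 = 100 * gl.sizeInBytes(gl.FLOAT);

    // Vlad's proposed alternative: a constant on the context (hypothetical).
    var byteLength3 = 100 * gl.FLOAT_SIZE;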
>> Are you saying that we could not support a GL implementation that has
>> 8-byte floats as its native floating point type, or 8-byte ints? I thought
>> the entire point of that call was to support such things.
> Interesting, that's not what I thought it would do -- I thought it was just
> a convenience to avoid having magic integer numbers all over the place. We
> currently have no support for double-precision floating point arrays or
> 64-bit int arrays, and making the sizes of the various types potentially
> variable would both cause problems and, I think, be totally unnecessary for
> any GL implementation out there today. It would also make it much harder to
> use the arrays for dealing with any data read from disk or the network.
> Are there any GL implementations out there whose GL_FLOAT is a 64-bit
> double? The GL spec seems to say only that 'float' has a 32-bit minimum
> requirement, so I suppose it could be implemented using 64-bit doubles, but
> I doubt anyone does that.
> - Vlad
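A hypothetical sketch of the disk/network point: a binary model format that
stores 32-bit little-endian floats maps cleanly onto a WebGLFloatArray only
because the element size is pinned at 4 bytes (readFloat32 stands in for
whatever decoder the app uses, and bytes is the raw file data, e.g. from XHR):

    var floatCount = bytes.length / 4;           // only valid with 4-byte FLOATs
    var vertices = new WebGLFloatArray(floatCount);
    for (var i = 0; i < floatCount; i++) {
        // readFloat32 is a hypothetical helper that decodes one IEEE-754
        // single from the byte stream; a 64-bit FLOAT would misread it all.
        vertices[i] = readFloat32(bytes, i * 4);
    }
    gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);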