[Public WebGL] gl.sizeInBytes

Chris Marrin [email protected]
Sun Jan 10 16:24:24 PST 2010


On Jan 10, 2010, at 12:44 PM, Vladimir Vukicevic wrote:

> On 1/10/2010 12:30 PM, Patrick Baggett wrote:
>> In section 5.13.3, the first table defines the different WebGL[Type]Array types, and in doing so specifies the size of each array's elements down to the bit. Since these types are already completely specified, what is the purpose of WebGLContext::sizeInBytes()?
>> 
>> Or to put it another way, how would an app handle sizeInBytes(FLOAT) == 8 when 5.13.3 defines WebGLFloatArray to hold 32-bit floating point values? Wouldn't it make more sense for WebGL[Type]Arrays to have elements of size sizeInBytes([Type])? Or keep 5.13.3 and drop sizeInBytes() entirely?
> 
> sizeInBytes is intended to be a convenience function, so that you can write 100 * gl.sizeInBytes(gl.FLOAT) instead of having a magic "4" there.  It will always return the same size values that are listed in 5.13.3.  But I do think that we can do without it; if anything, we could just define constants on the gl object, e.g. gl.FLOAT_SIZE, or perhaps WebGLFloatArray.ELEMENT_SIZE or something (though the latter is pretty wordy).
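For concreteness, the styles being weighed would look roughly like this (a minimal sketch, assuming a WebGL context named gl; the constant forms are only the proposals above, not existing API):

    // Reserve space for 100 floats, assuming a WebGL context named gl:
    var bytes = 100 * gl.sizeInBytes(gl.FLOAT);   // instead of a magic 100 * 4
    gl.bufferData(gl.ARRAY_BUFFER, bytes, gl.STATIC_DRAW);

    // The proposed alternatives would read:
    //   var bytes = 100 * gl.FLOAT_SIZE;                 // hypothetical constant
    //   var bytes = 100 * WebGLFloatArray.ELEMENT_SIZE;  // hypothetical, wordier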


Are you saying that we could not support a GL implementation that has 8-byte floats as its native floating-point type, or 8-byte ints? I thought the entire point of that call was to support such things.
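As a sketch of why the query form would matter in that case: stride and offset arithmetic written against sizeInBytes stays correct even if the returned size is not 4 (purely hypothetical, since 5.13.3 currently pins WebGLFloatArray at 32 bits; posLoc and colLoc below are assumed attribute locations):

    // Interleaved vertex layout: 3 position floats + 4 color floats.
    var fsize  = gl.sizeInBytes(gl.FLOAT);  // 4 per 5.13.3 today; could differ
    var stride = (3 + 4) * fsize;
    gl.vertexAttribPointer(posLoc, 3, gl.FLOAT, false, stride, 0);
    gl.vertexAttribPointer(colLoc, 4, gl.FLOAT, false, stride, 3 * fsize);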

-----
~Chris
[email protected]



