[Public WebGL] gl.sizeInBytes

Chris Marrin [email protected]
Mon Jan 11 14:30:32 PST 2010

On Jan 11, 2010, at 1:17 PM, Kenneth Russell wrote:

>>> ...As we consider proposing broader use of these array-like types, we will have to specify the exact size of the machine types they manage. However, the mapping between e.g. WebGLFloatArray to e.g. FloatArray vs. DoubleArray would need to be flexible.
>> We already have the exact size of the machine types specified for the WebGL Arrays; I think that this needs to remain the case, because otherwise we have the problem that people will just assume 4 bytes anyway, because it's currently the probably-100% case, and the world breaks if there is an 8-byte "GL_FLOAT" platform. Otherwise, people have to use sizeInBytes constantly to get correct portable behaviour, and we've tried pretty hard to avoid requirements like that (e.g. UniformIndex and friends).
> You're right, the machine types are currently specified for the WebGL arrays.
> For completely portable behavior, we could consider changing the WebGL spec to say, for example, that WebGLFloatArray contains floating-point values compatible with the GLfloat typedef on the host platform.
> I agree that realistically no OpenGL implementation is going to typedef GLfloat to double. However, if there were one that did, it would be more likely that a C program would work after recompilation than a WebGL program, because struct alignment and the sizeof operator would "just work". If we keep the sizeInBytes function and encourage programmers to use it, WebGL code could be as robust.

If we go down this path, I think we should do more than hope no 8-byte-float platforms appear; we should mandate it in WebGL: a WebGLFloatArray has 4-byte floats, no matter what. If there is a platform that expects GL_FLOAT to be 8 bytes, we will have to convert internally. That might be hard or impossible given alignment issues in heterogeneous arrays. But we can't simply hope this case never comes up, and that if it does, everything works out all right. We need to make our definitions in the spec and stick to them.

Like I said, I can't imagine there would ever be such a platform. If support for 8-byte floats appears, it will come as a new data type, which already exists on desktop OpenGL. So I don't think there will be a problem.

Another way to look at this: if the WebGLArray spec gets wider use than WebGL, we MUST define a float as 4 bytes. As a standalone spec, there would be no underlying OpenGL implementation from which to take guidance on the size of a float.
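For what it's worth, this is checkable directly in the standalone typed arrays that succeeded the WebGL array types (assuming that lineage): each element size is fixed by the spec, independent of any underlying GL implementation.

```javascript
// Element sizes mandated by the typed-array spec, not by the platform's
// GLfloat typedef: a 4-byte float type and a separate 8-byte double type.
console.log(Float32Array.BYTES_PER_ELEMENT); // 4
console.log(Float64Array.BYTES_PER_ELEMENT); // 8
```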

[email protected]

You are currently subscribed to [email protected]
To unsubscribe, send an email to [email protected] with
the following command in the body of your email:
