[Public WebGL] NaN handling in Typed Array spec

Kenneth Russell [email protected]
Mon Feb 7 19:17:35 PST 2011

A bug was recently filed against WebKit's implementation of Typed
Arrays (https://bugs.webkit.org/show_bug.cgi?id=53598). The basic
issue is that the Web IDL specification defines the bit pattern for
the not-a-number (NaN) value. Ordinarily, it is not possible for
ECMAScript programs to examine this bit pattern, but with the
introduction of the Typed Array specification, it is possible to use a
Float32Array to store NaN and then read back the bytes using, for
example, a Uint8Array.

Some ECMAScript engines use multiple representations for NaN
internally, and forcing them to be canonicalized into a single bit
pattern would impose a significant performance penalty on all stores
into Float32Arrays. It is absolutely essential for WebGL programs that
loads from and stores into Float32Arrays remain as performant as
possible.

I would like to add a small, normative section to the Typed Array
specification indicating that the bit pattern for NaN values stored
using Float32Array, Float64Array and DataView is not specified, and
that implementations may utilize any of the legal NaN bit patterns
defined by the IEEE-754 specification. I do not believe that doing so
would introduce any significant ambiguity into the spec; this is a
small corner case.

Are there any comments on this proposal?
