[Public WebGL] NaN handling in Typed Array spec

Kenneth Russell [email protected]
Tue Feb 8 15:04:02 PST 2011

On Tue, Feb 8, 2011 at 9:40 AM, Oliver Hunt <[email protected]> wrote:
> I'm concerned about "undefined" behaviour -- historically any form of undefined behaviour leads to patterns that people unintentionally take advantage of.
> For example I can imagine someone doing
> mySrcByteArray = srcView.asBytes(); // or whatever, i've actually forgotten the API -- whoops :)
> myDestByteArray = destView.asBytes();
> for (var i = 0; i < numBytes; i++)
>    myDestByteArray[i] = mySrcByteArray[i];
> And thinking that's slow, how about if i copy 8 bytes at a time:
> mySrcDoubleArray = srcView.asDoubles(); // or whatever, i've actually forgotten the API -- whoops :)
> myDestDoubleArray = destView.asDoubles();
> numDoubles = numBytes / 8;
> for (var i = 0; i < numDoubles; i++)
>    myDestDoubleArray[i] = mySrcDoubleArray[i];
> In an implementation that doesn't need to marshal NaN this will _probably_ work well enough for a number of cases, whereas in an implementation where NaNs have to be normalised it will not.
> For JS bindings this is only a concern for double/float64 views -- all arithmetic in JS is defined as being in IEEE 754 space, so float32 (and potentially float16) would need to be converted to 64bit IEEE 754 which will normalize any values as a matter of course.
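[The scenario above can be sketched with the actual Typed Array API, where views are constructed over a shared ArrayBuffer rather than via the placeholder asBytes()/asDoubles() names. Whether the final byte comparison succeeds is exactly the implementation-defined behavior under discussion; little-endian layout is assumed.]

```javascript
// Sketch of the 8-bytes-at-a-time copy Oliver describes.
const numBytes = 16;
const srcBuf = new ArrayBuffer(numBytes);
const dstBuf = new ArrayBuffer(numBytes);

// Plant a quiet NaN with a non-canonical payload in the first 8 bytes
// of the source, using a byte view (little-endian assumed).
const srcBytes = new Uint8Array(srcBuf);
srcBytes.set([0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0xff, 0x7f]);

// Copy 8 bytes at a time through Float64Array views.
const srcDoubles = new Float64Array(srcBuf);
const dstDoubles = new Float64Array(dstBuf);
for (let i = 0; i < numBytes / 8; i++) {
  dstDoubles[i] = srcDoubles[i]; // the engine may normalize the NaN here
}

// Non-NaN values copy exactly; for the NaN, byte-for-byte identity is
// implementation-defined, so this flag may legitimately come out false.
const dstBytes = new Uint8Array(dstBuf);
let identical = true;
for (let i = 0; i < numBytes; i++) {
  if (srcBytes[i] !== dstBytes[i]) identical = false;
}
console.log('bitwise identical:', identical);
```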

I understand your concern about undefined behavior, but your example
will fail regardless of the proposed change to the typed array spec.
As Andreas pointed out, it is highly likely that JS engines will need
to normalize NaNs loaded from Float32Array and Float64Array, because
particular bit patterns can be constructed using the other view types
like Uint8Array. Therefore attempting to perform a bitwise copy of
data using Float32Array or Float64Array is practically guaranteed to
fail. The integral typed array types (e.g. Uint32Array) can be used to
handle this case.
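[For comparison, a minimal sketch of the bit-exact copy through an integral view; the helper name bitwiseCopy is hypothetical, and buffer lengths are assumed to be a multiple of 4 bytes.]

```javascript
// Bit-exact copy via Uint32Array: integer loads and stores never
// reinterpret the bits as floating-point, so NaN payloads survive.
function bitwiseCopy(srcBuf, dstBuf) {
  const src = new Uint32Array(srcBuf);
  const dst = new Uint32Array(dstBuf);
  for (let i = 0; i < src.length; i++) {
    dst[i] = src[i];
  }
}

// Store a double-precision NaN with an arbitrary payload
// (high word 0x7ff80123, low word 0xdeadbeef), then copy it.
const srcBuf = new ArrayBuffer(8);
new Uint32Array(srcBuf).set([0xdeadbeef, 0x7ff80123]);
const dstBuf = new ArrayBuffer(8);
bitwiseCopy(srcBuf, dstBuf);

const dstWords = new Uint32Array(dstBuf);
console.log(dstWords[0].toString(16), dstWords[1].toString(16));
// → "deadbeef 7ff80123": the exact bit pattern survives
```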


> --Oliver
> On Feb 7, 2011, at 7:17 PM, Kenneth Russell wrote:
>> A bug was recently filed against WebKit's implementation of Typed
>> Arrays (https://bugs.webkit.org/show_bug.cgi?id=53598). The basic
>> issue is that the Web IDL specification defines the bit pattern for
>> the not-a-number (NaN) value. Ordinarily, it is not possible for
>> ECMAScript programs to examine this bit pattern, but with the
>> introduction of the Typed Array specification, it is possible to use a
>> Float32Array to store NaN and then read back the bytes using, for
>> example, a Uint8Array.
>> Some ECMAScript engines use multiple representations for NaN
>> internally, and forcing them to be canonicalized into a single bit
>> pattern would impose a significant performance penalty on all stores
>> into Float32Arrays. It is absolutely essential for WebGL programs that
>> loads from and stores into Float32Arrays remain as performant as
>> possible.
>> I would like to add a small, normative section to the Typed Array
>> specification indicating that the bit pattern for NaN values stored
>> using Float32Array, Float64Array and DataView is not specified, and
>> that implementations may utilize any of the legal NaN bit patterns
>> defined by the IEEE-754 specification. I do not believe that doing so
>> would introduce any significant ambiguity into the spec; this is a
>> small corner case.
>> Are there any comments on this proposal?
>> -Ken
>> -----------------------------------------------------------
>> You are currently subscribed to [email protected]
>> To unsubscribe, send an email to [email protected] with
>> the following command in the body of your email:
>> unsubscribe public_webgl
>> -----------------------------------------------------------
