[Public WebGL] Typed WebGLArray sequence parameter types
Vladimir Vukicevic
[email protected]
Wed Jan 6 18:31:06 PST 2010
On 1/6/2010 5:55 PM, Kenneth Russell wrote:
> On Wed, Jan 6, 2010 at 1:07 AM, Shiki Okasaka <[email protected]> wrote:
>
>> I've uploaded a validated WebGL IDL file at:
>>
>> http://es-operating-system.googlecode.com/svn/trunk/esidl/dom/webgl.idl
>>
>> This is written in the current Web IDL editor's draft [0] format with one
>> extended keyword 'byte' for 8-bit integers.
>> Does this look reasonable? Maybe the getter and setter types should be
>> changed as well?
>>
> Thanks, this looks great. It's fantastic that you've verified the IDL.
> I think we should switch over the spec's IDL to this version.
>
> Given what we now understand about the Web IDL conversion rules, I
> think we should switch the getters and setters for the WebGLArray
> types to be precisely what the underlying array is supposed to hold.
>
That works for me, given that we have 'byte'. I can make the change.
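Roughly, the getters and setters would then look something like this (assuming
Shiki's extended 'byte' keyword; the member names are illustrative, not final
spec text):

    interface WebGLByteArray : WebGLArray {
      // getter/setter typed to exactly what the array holds
      byte get(in unsigned long index);
      void set(in unsigned long index, in byte value);
    };

    interface WebGLShortArray : WebGLArray {
      short get(in unsigned long index);
      void set(in unsigned long index, in short value);
    };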
- Vlad
>> [0] http://dev.w3.org/2006/webapi/WebIDL/
>>
>> - Shiki
>>
>> On Wed, Dec 23, 2009 at 2:28 AM, Kenneth Russell <[email protected]> wrote:
>>
>>> On Mon, Dec 21, 2009 at 10:35 PM, Vladimir Vukicevic
>>> <[email protected]> wrote:
>>>
>>>> On 12/21/2009 8:38 PM, Shiki Okasaka wrote:
>>>>
>>>>> Is it possible to modify the typed WebGLArray sequence parameter types
>>>>> in the IDL definitions as below?
>>>>>
>>>>> * WebGLByteArray:
>>>>> sequence<long> -> sequence<octet>
>>>>> * WebGLUnsignedByteArray:
>>>>> sequence<unsigned long> -> sequence<octet>
>>>>> * WebGLShortArray:
>>>>> sequence<long> -> sequence<short>
>>>>> * WebGLUnsignedShortArray:
>>>>> sequence<unsigned long> -> sequence<unsigned short>
>>>>>
>>>>> This change would make the generated interfaces for statically typed
>>>>> languages (e.g. Java) more useful.
>>>>>
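As a rough illustration of the proposed change, a WebGLShortArray member that
takes a sequence would go from something like

    void set(in sequence<long> array);

to

    void set(in sequence<short> array);

so a binding generator for a statically typed language could emit short[]
instead of int[] for the parameter. (The exact member signature here is
illustrative, not quoted from the draft spec.)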
>>>>> Note that Web IDL currently does not have a primitive type for 8-bit
>>>>> signed integer values. If it would be useful for WebGL, maybe we can
>>>>> propose adding one to Web IDL, as the Geolocation WG requested the
>>>>> addition of 'double' in addition to 'float' [0].
>>>>>
>>>>>
>>>> Yep, that's the main reason why long/unsigned long are used instead of
>>>> octet -- if octet were used, it would become impossible to actually
>>>> specify signed 8-bit integers. For short, we decided to follow the same
>>>> convention. However, maybe a workaround would be to add a typedef
>>>> somewhere for our own signed_octet type, typedef'd to unsigned long by
>>>> default, but with a statement in the spec saying that it should be a
>>>> signed 8-bit type if the language supports it?
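A minimal sketch of that workaround, assuming the typedef would sit alongside
the array interfaces in the WebGL IDL (illustrative only, not proposed spec
text):

    // Default mapping, per the suggestion above.
    typedef unsigned long signed_octet;

    interface WebGLByteArray : WebGLArray {
      // Spec prose would note that bindings for languages with a signed
      // 8-bit type should map signed_octet to that type instead.
      signed_octet get(in unsigned long index);
      void set(in unsigned long index, in signed_octet value);
    };

With the extended 'byte' keyword in Shiki's IDL at the top of this thread, the
typedef wouldn't be needed.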
>>>>
>>> Sounds like a good workaround.
>>>
>>> -Ken
>>>