From cma...@ Tue Jun 1 06:20:15 2010 From: cma...@ (Chris Marrin) Date: Tue, 01 Jun 2010 06:20:15 -0700 Subject: [Public WebGL] WebGL Extensions In-Reply-To: References: <5E7C0AEB-8796-4328-AF0F-FCAD28A299FC@apple.com> Message-ID: On May 31, 2010, at 2:16 PM, Ilmari Heikkinen wrote: > 2010/5/31 Andor Salga : >> Ok, thanks. >> >> Can you guys point me to some resources for writing WebGL extensions? How >> would someone go about writing one? I need to start working on it since I >> have a project which will require GL_ARB_occlusion_query. Having the >> extension done by the time WebGL extensions are supported would be really >> great. >> >> Thanks, >> Andor > > As far as I know there is no consensus on how the extension mechanism > would work. So you're pretty much pioneering on here. I'd start by > writing a JavaScript object that wraps all your extension-requiring > WebGL calls. Then brush up your C++ and go hack the WebGL > implementation in some browser to add a new native object type for > your extension (which would implement only the stuff the extension > does and call the underlying webgl context) ... tell you what, this is > getting complex. The extension mechanism has been fleshed out. It has 2 functions: getSupportedExtensions() and getExtension(). The first gives an array of extensions supported by this browser and the second returns an object which provides an interface to the given extension. getExtension() will only ever return a valid object for an extension that is in the getSupportedExtensions() list. This feature is similar to, but not the same as the extension mechanism of OpenGL ES 2.0. It will be the path by which we expose OpenGL ES 2.0 extensions. But the only extensions officially supported will be those added to the spec. The 1.0 version of the spec won't officially support any extensions, and when we do we will only support either WebGL specific extensions (which would typically be implemented in software under the hood) or OpenGL ES 2.0 extensions. So we will never support GL_ARB_occlusion_query officially, unless an equivalent ES version is created. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From gma...@ Tue Jun 1 09:31:17 2010 From: gma...@ (Gregg Tavares) Date: Tue, 1 Jun 2010 09:31:17 -0700 Subject: [Public WebGL] WebGL Extensions In-Reply-To: References: <5E7C0AEB-8796-4328-AF0F-FCAD28A299FC@apple.com> Message-ID: On Sun, May 30, 2010 at 11:13 AM, Cedric Vivier wrote: > On Mon, May 31, 2010 at 01:34, Chris Marrin wrote: > > I'm asking because I'm interested in using GL_ARB_occlusion_query. > >> > >> I first thought the Quake 2 demo was using extensions: > >> http://playwebgl.com:8080/GwtQuake.html > >> > >> But then I realized it was probably just listing ones available. Can > anyone give me some thoughts on this? > > > > There will be no extensions available in WebGL in the first release. > > > Chris, has something been decided in F2F with regards to the strings > returned by getString(GL_EXTENSIONS) and similar calls ? > (iirc this issue was present in the agenda) > Yes it was mentioned and if I remember correctly we decided to remove it from the spec since it would never return anything but the empty string or NULL. The only way to query extensions is to call getSupportedExtensions() > > Currently WebKit returns the list of native GL extensions, which might > be confusing to web developers who might then assume that they could > be used in WebGL as well without a WebGL version of the extension > specification (as it seems was implied by Andor's question wrt > ARB_occlusion_query). > > > Regards, > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Tue Jun 1 11:51:28 2010 From: kbr...@ (Kenneth Russell) Date: Tue, 1 Jun 2010 11:51:28 -0700 Subject: [Public WebGL] WebGL Extensions In-Reply-To: <1893173025.316617.1275341650204.JavaMail.root@cm-mail03.mozilla.org> References: <1893173025.316617.1275341650204.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Mon, May 31, 2010 at 2:34 PM, Vladimir Vukicevic wrote: > > Your pseudocode is roughly how I recollect things will work; we need to get this written up in the spec (Chris, others -- I can take a stab at this if you guys want). That would be great from my standpoint. -Ken > The important thing to remember is that any extensions must be implemented by the browser itself, regardless of what the underlying hardware supports -- that is, there's no way from the content JS side to get access to, say, ARB_occlusion_query unless the browser itself implements the extension. ?So it needs to be supported in two places: the underlying GL implementation and the browser WebGL implementation before it can be used by content script. > > ? ?- Vlad > > ----- "Ilmari Heikkinen" wrote: > >> 2010/5/31 Andor Salga : >> > Ok, thanks. >> > >> > Can you guys point me to some resources for writing WebGL >> extensions? How >> > would someone go about writing one? I need to start working on it >> since I >> > have a project which will require GL_ARB_occlusion_query. Having >> the >> > extension done by the time WebGL extensions are supported would be >> really >> > great. >> > >> > Thanks, >> > ? Andor >> >> As far as I know there is no consensus on how the extension mechanism >> would work. So you're pretty much pioneering on here. I'd start by >> writing a JavaScript object that wraps all your extension-requiring >> WebGL calls. Then brush up your C++ and go hack the WebGL >> implementation in some browser to add a new native object type for >> your extension (which would implement only the stuff the extension >> does and call the underlying webgl context) ... tell you what, this >> is >> getting complex. >> >> Hmm. The WebGL context needs to reject invalid calls. And the >> extension calls are invalid calls unless the extension is enabled. >> The >> problem is how to enable the extension and expose the extension >> constants and methods. >> >> The desktop GL enables extensions by never disabling them. It knows >> what extensions it supports and if you're trying to do something it >> doesn't support, it errors. The constants are exposed in the header >> files and the entrypoints hot-plugged in by extension managers like >> GLEW. >> >> The current WebGL implementations are light-weight wrappers on top of >> desktop GL with error checking and validation, plus an IDL file to >> define all the methods and constants in the context. You can't really >> add stuff to the IDL at run-time (I .. think?) So you'd have to >> either >> put all the extension constants in the WebGLContext IDL or define >> some >> kind of extension object with the constants and methods. >> >> A hypothetical GL_ARB_occlusion_query extension object might look >> something like this: >> >> // FIXME implement >> WebGLARBOcclusionQueryExtension { >> ? int SAMPLES_PASSED = 0x8914; >> ? int QUERY_COUNTER_BITS = 0x8864; >> ? int CURRENT_QUERY = 0x8865; >> ? int QUERY_RESULT = 0x8866; >> ? int QUERY_RESULT_AVAILABLE = 0x8867; >> >> ? uint genQuery(); >> ? void deleteQuery(uint id); >> ? boolean isQuery(uint id); >> ? void beginQuery(enum target, uint id); >> ? void endQuery(enum target); >> ? int[] getQuery(enum target, enum pname); >> ? int[] getQueryObject(uint id, enum pname); >> } >> >> And you'd get a version bound to the issuing WebGLContext by doing >> something like >> >> // FIXME implement >> var occ = glctx.getExtension("GL_ARB_occlusion_query"); >> if (occ) { // great, found it >> ? var query = occ.genQuery(); // internally makes the glctx context >> active and calls genQueriesARB (guess you need to find-and-bind those >> as well) >> ? // do stuff >> ? occ.deleteQuery(query); >> } >> >> Disclaimer: this is all my pseudocode sketching and likely not how it >> would turn out in the end. >> >> But hey, try doing a prototype and come back with stories of great >> glory and a treasure chest full of implementation tidbits. >> >> Cheers, >> Ilmari >> >> > >> > ----- Original Message ----- >> > From: Chris Marrin >> > Date: Monday, May 31, 2010 12:54 pm >> > Subject: Re: [Public WebGL] WebGL Extensions >> > To: Cedric Vivier >> > Cc: Andor Salga , public webgl >> > >> > >> >> >> >> On May 30, 2010, at 11:13 AM, Cedric Vivier wrote: >> >> >> >> > On Mon, May 31, 2010 at 01:34, Chris Marrin >> >> wrote: >> >> >> I'm asking because I'm interested in using >> >> GL_ARB_occlusion_query.>>> >> >> >>> I first thought the Quake 2 demo was using extensions: >> >> >>> http://playwebgl.com:8080/GwtQuake.html >> >> >>> >> >> >>> But then I realized it was probably just listing ones >> >> available. Can anyone give me some thoughts on this? >> >> >> >> >> >> There will be no extensions available in WebGL in the first >> >> release.> >> >> > >> >> > Chris, has something been decided in F2F with regards to the >> strings >> >> > returned by getString(GL_EXTENSIONS) and similar calls ? >> >> > (iirc this issue was present in the agenda) >> >> >> >> I don't have my notes with me. I have wanted to post them, but >> >> I've been out sick for the last few days. My recollection is >> >> that we won't ever return anything in the GL_EXTENSIONS string. >> >> We may even have removed it, I don't remember. We will return an >> >> array of strings in getSupportedExtensions() instead. >> >> >> >> > >> >> > Currently WebKit returns the list of native GL extensions, >> >> which might >> >> > be confusing to web developers who might then assume that they >> could >> >> > be used in WebGL as well without a WebGL version of the >> extension >> >> > specification (as it seems was implied by Andor's question wrt >> >> > ARB_occlusion_query). >> >> >> >> Yes, that's a bug >> >> >> >> ----- >> >> ~Chris >> >> cmarrin...@ >> >> >> >> >> >> >> >> >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Tue Jun 1 12:50:06 2010 From: kbr...@ (Kenneth Russell) Date: Tue, 1 Jun 2010 12:50:06 -0700 Subject: [Public WebGL] Chromium/Firefox handling of FloatArray In-Reply-To: <1556103730.299395.1275079647387.JavaMail.root@cm-mail03.mozilla.org> References: <265950397.299366.1275079453490.JavaMail.root@cm-mail03.mozilla.org> <1556103730.299395.1275079647387.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Fri, May 28, 2010 at 1:47 PM, Vladimir Vukicevic wrote: > > ----- "Alan Chaney" wrote: > > This code works fine on Chromium 6.0.408.1.dev (which claims to be up to > date) but fails on Firefox 3.7a5pre. > The Error console says: "$wnd.FloatArray is not a constructor." > > Since the code is based on the current WebGL spec + the TypedArray spec > I assume that Chromium is getting it right? Or > what am I doing wrong? > > Hmm, whoops -- these are called Float32Array and Float64Array in Firefox, > but it looks like I missed updating those names in the original Typed Array > draft spec.? I believe that you can temporarily use WebGLFloatArray and have > things work in both. > > I need to check with Ken and others if there's any strong preference for > naming here -- the Float -> Float32, Double -> Float64 change happened based > on feedback that it's more consistent with Int8/Uint8 etc., and that it > makes it more natural to extend to Float128 and possibly Float16 in the > future. Is there precedent for this naming convention in the current or forthcoming ECMAScript specifications? The Int8, Uint8, etc. naming convention is good because (some of) these types are already in ECMA-262. I see parseFloat() in ECMA-262 which makes me prefer the current Float and Double type names, though I don't feel strongly about it. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Tue Jun 1 12:53:19 2010 From: vla...@ (Vladimir Vukicevic) Date: Tue, 1 Jun 2010 12:53:19 -0700 (PDT) Subject: [Public WebGL] Chromium/Firefox handling of FloatArray In-Reply-To: Message-ID: <1195879162.323531.1275421999879.JavaMail.root@cm-mail03.mozilla.org> ----- "Kenneth Russell" wrote: > On Fri, May 28, 2010 at 1:47 PM, Vladimir Vukicevic > wrote: > > > > ----- "Alan Chaney" wrote: > > > > This code works fine on Chromium 6.0.408.1.dev (which claims to be > up to > > date) but fails on Firefox 3.7a5pre. > > The Error console says: "$wnd.FloatArray is not a constructor." > > > > Since the code is based on the current WebGL spec + the TypedArray > spec > > I assume that Chromium is getting it right? Or > > what am I doing wrong? > > > > Hmm, whoops -- these are called Float32Array and Float64Array in > Firefox, > > but it looks like I missed updating those names in the original > Typed Array > > draft spec.? I believe that you can temporarily use WebGLFloatArray > and have > > things work in both. > > > > I need to check with Ken and others if there's any strong preference > for > > naming here -- the Float -> Float32, Double -> Float64 change > happened based > > on feedback that it's more consistent with Int8/Uint8 etc., and that > it > > makes it more natural to extend to Float128 and possibly Float16 in > the > > future. > > Is there precedent for this naming convention in the current or > forthcoming ECMAScript specifications? The Int8, Uint8, etc. naming > convention is good because (some of) these types are already in > ECMA-262. I see parseFloat() in ECMA-262 which makes me prefer the > current Float and Double type names, though I don't feel strongly > about it. Hmm, not sure -- note that parseFloat will end up parsing a double because that's the only floating point type that JS has... so I don't think we should necessarily follow that :-) - Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Tue Jun 1 13:02:11 2010 From: kbr...@ (Kenneth Russell) Date: Tue, 1 Jun 2010 13:02:11 -0700 Subject: [Public WebGL] Chromium/Firefox handling of FloatArray In-Reply-To: <1195879162.323531.1275421999879.JavaMail.root@cm-mail03.mozilla.org> References: <1195879162.323531.1275421999879.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Tue, Jun 1, 2010 at 12:53 PM, Vladimir Vukicevic wrote: > > ----- "Kenneth Russell" wrote: > >> On Fri, May 28, 2010 at 1:47 PM, Vladimir Vukicevic >> wrote: >> > >> > ----- "Alan Chaney" wrote: >> > >> > This code works fine on Chromium 6.0.408.1.dev (which claims to be >> up to >> > date) but fails on Firefox 3.7a5pre. >> > The Error console says: "$wnd.FloatArray is not a constructor." >> > >> > Since the code is based on the current WebGL spec + the TypedArray >> spec >> > I assume that Chromium is getting it right? Or >> > what am I doing wrong? >> > >> > Hmm, whoops -- these are called Float32Array and Float64Array in >> Firefox, >> > but it looks like I missed updating those names in the original >> Typed Array >> > draft spec.? I believe that you can temporarily use WebGLFloatArray >> and have >> > things work in both. >> > >> > I need to check with Ken and others if there's any strong preference >> for >> > naming here -- the Float -> Float32, Double -> Float64 change >> happened based >> > on feedback that it's more consistent with Int8/Uint8 etc., and that >> it >> > makes it more natural to extend to Float128 and possibly Float16 in >> the >> > future. >> >> Is there precedent for this naming convention in the current or >> forthcoming ECMAScript specifications? The Int8, Uint8, etc. naming >> convention is good because (some of) these types are already in >> ECMA-262. I see parseFloat() in ECMA-262 which makes me prefer the >> current Float and Double type names, though I don't feel strongly >> about it. > > Hmm, not sure -- note that parseFloat will end up parsing a double because that's the only floating point type that JS has... so I don't think we should necessarily follow that :-) Understood. It's OK with me if we make this change now; any comments from the WebKit team? -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Tue Jun 1 13:49:47 2010 From: kbr...@ (Kenneth Russell) Date: Tue, 1 Jun 2010 13:49:47 -0700 Subject: [Public WebGL] Few questions/comments about context creation attributes In-Reply-To: References: <1974356647.245506.1274552716538.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Mon, May 24, 2010 at 10:48 PM, Cedric Vivier wrote: > On Tue, May 25, 2010 at 06:59, Kenneth Russell wrote: >> We discussed this at length earlier in the working group. The basic >> question is whether getContext("webgl") should ever return null if the >> hardware is capable of running WebGL. The earlier consensus was that >> it is easier to program to a "closest fit" context selection model >> than a "minimum requirements" model. > > Good to know, nothing in the specification does imply this model > afaik, quite the opposite if you consider repeated usage of "at > least". > Does it means that for instance when "stencil:true" is passed but > stencil is not supported with the given canvas resolution on a given > hardware, WebGL will give back a context without stencil buffer since > it would be the closest fit it could find ? Could get funny results if > the application relies on it. > Or "closest fit" works only with regards to precision in each buffer ? > In this case, right now with the booleans attribute WebGL indeed > defines a minimum fit model ("at least"). > > That said I understand the intention and agree that the closest-fit > model sounds more web-ish :) > I think should add a bit about this in the spec so that > implementations (and implementations to-be) do not assume otherwise. > > I think replacing some of the boolean context attributes with integers > as suggested would be the perfect opportunity to do that, what do you > think ? > It works beautifully with the "closest fit" model as well, allowing > WebGL to return the closest fit to what is requested/preferred by the > developer as soon as better support lands in implementations [1]. > Also since we use a "closest fit" model adding minX/maxX for every > attribute later on does make even less sense imho (one would assume > that if the closest fit is below minimum or above maximum a context > cannot be created and null should be returned - breaking the whole > purpose of the closest-fit model). > > > Regards, > > > [1] : e.g {depth:24} does not return null not supported, just returns > the "closest-fit" 16-bit depth buffer if not supported, and returns a > 24-bit on supporting implementations where it makes sense. The working group discussed at length whether to make the context creation attributes numeric values, and the decision was to leave them as boolean flags for the WebGL 1.0 release. The feeling was that this will provide enough control while still retaining simplicity, and could be evolved in a forward direction (allowing numeric values to be supplied in a future version of the specification). -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Tue Jun 1 13:51:29 2010 From: kbr...@ (Kenneth Russell) Date: Tue, 1 Jun 2010 13:51:29 -0700 Subject: [Public WebGL] Few questions/comments about context creation attributes In-Reply-To: References: <1974356647.245506.1274552716538.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Fri, May 28, 2010 at 9:15 AM, Chris Marrin wrote: > > On May 26, 2010, at 8:19 PM, Cedric Vivier wrote: > >> On Tue, May 25, 2010 at 06:59, Kenneth Russell wrote: >>> Chris Marrin can probably comment on the decision to enable the >>> stencil buffer by default. >> >> Can you comment on this Chris? > > In case it wasn't clear in my last post, I think the stencil buffer should be off by default I think it would be a good idea to disable the stencil buffer by default as well. Are there any objections to changing the default? -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Tue Jun 1 13:53:26 2010 From: kbr...@ (Kenneth Russell) Date: Tue, 1 Jun 2010 13:53:26 -0700 Subject: [Public WebGL] Few questions/comments about context creation attributes In-Reply-To: References: <2120207313.283284.1274935917773.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Wed, May 26, 2010 at 11:27 PM, Cedric Vivier wrote: > On Thu, May 27, 2010 at 12:51, Vladimir Vukicevic wrote: >> FWIW, I'd be happy to disable the stencil buffer by default -- RGBA + depth >> seems like a very reasonable default context format. > > I think we can discuss on the RGBA versus RGB as well. > > Two arguments for this : > - on mobile devices the common default and most efficient color buffer > mode is RGB (565 but the actual bits-per-component is irrelevant here) > - of all WebGL demos available very few do need alpha-compositing with > the HTML page, almost all of them do set clearColor with alpha=1.0, > without passing alpha:false as context attribute, based on this real > usage statistic making WebGL an opaque surface by default makes sense > imho as it helps performance (and memory usage) for the greatest > number of use cases. > > In fact when testing RGB context (hardcoded in FF to check all apps) I > realized I forgot to use clearColor in a work-in-progress app when > opaque white was intended (this gone unnoticed since the HTML page > currently has white background but the content was intended to be > white, whatever the HTML background is), so this might also prove > beneficial to avoid unnecessary/unintended/wasteful alpha-blending due > to a simple mistake/typo that would not happen if presence of alpha > channel was set explicitly when actually needed. I think the alpha channel should be enabled by default. There are many situations in HTML compositing where alpha=0 implies that elements underneath the current one show through. Since WebGL is a spec for the web I think the default should be the least surprising result. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Tue Jun 1 15:21:49 2010 From: kbr...@ (Kenneth Russell) Date: Tue, 1 Jun 2010 15:21:49 -0700 Subject: [Public WebGL] Addition of texParameteriv and texParameterfv signatures In-Reply-To: References: Message-ID: On Wed, May 26, 2010 at 6:47 PM, Cedric Vivier wrote: > On Thu, May 27, 2010 at 07:23, Kenneth Russell wrote: >> >> This assumption is incorrect. The core WebGL code will not only have >> to implement these entry points but also support them for all of the >> existing enums and values. See >> http://www.khronos.org/opengles/sdk/docs/man/glTexParameter.xml and >> note that all of the enums that can be passed to glTexParameter[if] >> can also be passed to glTexParameter[if]v. > > Right.?However this does not disprove the general idea that it requires only > a minimal stub, passing control to the non-vectored version on the first > element (ie. "gl.texParameteri(target, pname, param[0]);" ) for the four > valid enums. > Is this considered a significantly expensive stub to implement ? It requires handwritten code, and there is a recent push to eliminate such handwritten code in the WebKit JavaScript bindings. Additionally, balancing this against other needed work for WebGL 1.0, it would be far down on my priority list. >> What is the justification for adding these entry points? I am not >> aware of any core OpenGL ES texture parameter, one specified by an >> OpenGL ES extension, or any planned WebGL-specific texture parameter >> that would require them. > > Justification is being able to use them in _any_ extension, by your > assumption that they are useless and always will be, ES would also have > removed these signatures imho. > You're right that there is probably no ES extension requiring them > currently, however this might happen during the lifetime of WebGL 1.0... and > _there is_ non-ES extensions that requires them (e.g ARB_texture_swizzle is > an obvious one, core now in 3.3+, but there is probably others), so WebGL > bindings for desktop-specific extensions would need them. > Finally, do you imply every potential WebGL-specific extension should be > been planned already ? ;-)?The nice characteristic about an open extension > system is that it can be used to experiment new ideas anytime, for instance > a WebGL extension to support non-zero border (and associated > TEXTURE_BORDER_COLOR parameter) on desktop could worthy to experiment [just > an idea!]. If an extension comes along which needs these entry points and they aren't in WebGL 1.0 they can always be supplied on the extension object itself. Does anyone else on the WebGL working group have an opinion on this topic? -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ben...@ Wed Jun 2 09:29:52 2010 From: ben...@ (Henrik Bennetsen) Date: Wed, 2 Jun 2010 09:29:52 -0700 Subject: [Public WebGL] WebGL Camp #1 Message-ID: The WebGL spec is currently in review with 1.0 expected later this year. But the idea of native to the browser 3D is already gathering steam. WebGL camp is a developer focused event that aims to bring together the people who are writing the spec with the those who build projects on it for some show and tell. This informal one day event takes place at from 9-5 on Friday, June 25th, 2010 at Wallenberg Hall, Stanford University. Registration is only $20 and now open: http://webglcamp.com/ Remote participants will be able to follow the day via a free live video stream. What you can do: 1. Please help me spread the word! 2. Am still looking for presenters so help with suggestions. Remote presentation is an option. Cheers, Henrik -- www.katalabs.com henrik...@ Cell: 415.418.4042 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Wed Jun 2 15:22:38 2010 From: bja...@ (Benoit Jacob) Date: Wed, 2 Jun 2010 15:22:38 -0700 (PDT) Subject: [Public WebGL] Error codes in drawArrays / drawElements In-Reply-To: <941562101.334865.1275517280727.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <629892735.334878.1275517358861.JavaMail.root@cm-mail03.mozilla.org> Hi, These Khronos tests suggests that in certain circumstances drawArray / drawElements give INVALID_OPERATION, as opposed to INVALID_VALUE: https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/fast/draw-arrays-out-of-bounds.html https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/fast/draw-elements-out-of-bounds.html But neither the WebGL spec, nor the OpenGL ES documentation, say that these functions can give INVALID_OPERATION. What's happening? Benoit ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Wed Jun 2 15:27:46 2010 From: kbr...@ (Kenneth Russell) Date: Wed, 2 Jun 2010 15:27:46 -0700 Subject: [Public WebGL] Error codes in drawArrays / drawElements In-Reply-To: <629892735.334878.1275517358861.JavaMail.root@cm-mail03.mozilla.org> References: <941562101.334865.1275517280727.JavaMail.root@cm-mail03.mozilla.org> <629892735.334878.1275517358861.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Wed, Jun 2, 2010 at 3:22 PM, Benoit Jacob wrote: > Hi, > > These Khronos tests suggests that in certain circumstances drawArray / drawElements give INVALID_OPERATION, as opposed to INVALID_VALUE: > > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/fast/draw-arrays-out-of-bounds.html > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/fast/draw-elements-out-of-bounds.html > > But neither the WebGL spec, nor the OpenGL ES documentation, say that these functions can give INVALID_OPERATION. Section 4.1 of the WebGL indicates, but does not currently specify, this behavior. We agreed at the F2F that generating an INVALID_OPERATION error will be the specified behavior. The spec still needs to catch up but the tests verify the intended behavior. -Ken > What's happening? > > Benoit > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Wed Jun 2 21:54:27 2010 From: vla...@ (Vladimir Vukicevic) Date: Wed, 2 Jun 2010 21:54:27 -0700 (PDT) Subject: [Public WebGL] WebGL Extensions In-Reply-To: Message-ID: <1372126690.336864.1275540867244.JavaMail.root@cm-mail03.mozilla.org> ----- "Kenneth Russell" wrote: > On Mon, May 31, 2010 at 2:34 PM, Vladimir Vukicevic > wrote: > > > > Your pseudocode is roughly how I recollect things will work; we need > to get this written up in the spec (Chris, others -- I can take a stab > at this if you guys want). > > That would be great from my standpoint. Hm, I thought we didn't have this which is where the confusion came from, but looks like Chris put it in a while ago: https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/doc/spec/WebGL-spec.html#5.14.14 Maybe it's just missing a sentence or two at the end explaining that extensions are WebGL specific, and if WebGL is built on top of an underlying OpenGL driver, that driver's extensions will not necessarily be exposed? - Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Thu Jun 3 04:47:27 2010 From: ste...@ (Steve Baker) Date: Thu, 03 Jun 2010 06:47:27 -0500 Subject: [Public WebGL] WebGL Extensions In-Reply-To: <1372126690.336864.1275540867244.JavaMail.root@cm-mail03.mozilla.org> References: <1372126690.336864.1275540867244.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <4C07964F.80003@sjbaker.org> Vladimir Vukicevic wrote: > Hm, I thought we didn't have this which is where the confusion came from, but looks like Chris put it in a while ago: > > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/doc/spec/WebGL-spec.html#5.14.14 > > Maybe it's just missing a sentence or two at the end explaining that extensions are WebGL specific, and if WebGL is built on top of an underlying OpenGL driver, that driver's extensions will not necessarily be exposed? > IMHO, it is essential that WebGL does NOT expose underlying driver extensions by default. The reason being one of security. Suppose something in the underlying OpenGL driver opened up a vulnerability on the client computer - some means for evildoers to get into the machine and install malware or whatever. If WebGL exposed that vulnerability by default - then a simple JavaScript hack on a website - or even in an HTML email - would be all that would be needed to turn it into a usable exploit. The big problem would be that there would be no simple way to close that loophole promptly because it would require the cooperation of the OpenGL driver authors - who normally do not have to worry too much about exposing vulnerabilities since OpenGL programs are installed and run by the owner of the computer. They might not even WANT to close the loophole - on the grounds that desktop applications are no worse off. On the other hand, if WebGL is picky about which extensions it exposes, then it's easy to remove access to an extension by changing one line in a table someplace...we could even (in an emergency) tell people how to disable the extension themselves by a per-extension checkbox in the 'about:config' page. That would be a handy feature for application debuggers anyway ("How does my application run without the yadda-yadda extension?"). -- Steve ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From oli...@ Thu Jun 3 05:12:10 2010 From: oli...@ (Oliver Hunt) Date: Thu, 3 Jun 2010 05:12:10 -0700 Subject: [Public WebGL] WebGL Extensions In-Reply-To: <4C07964F.80003@sjbaker.org> References: <1372126690.336864.1275540867244.JavaMail.root@cm-mail03.mozilla.org> <4C07964F.80003@sjbaker.org> Message-ID: On Jun 3, 2010, at 4:47 AM, Steve Baker wrote: > Vladimir Vukicevic wrote: >> Hm, I thought we didn't have this which is where the confusion came from, but looks like Chris put it in a while ago: >> >> https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/doc/spec/WebGL-spec.html#5.14.14 >> >> Maybe it's just missing a sentence or two at the end explaining that extensions are WebGL specific, and if WebGL is built on top of an underlying OpenGL driver, that driver's extensions will not necessarily be exposed? >> > IMHO, it is essential that WebGL does NOT expose underlying driver > extensions by default. The reason being one of security. For what it's worth the way that the JS/DOM bindings work in most (all?) browsers require every exposed function to be defined and implemented explicitly -- it would not be possible for an implementation to automate the exposure of an arbitrary set of unknown extensions. --Oliver > > -- Steve > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From bja...@ Thu Jun 3 07:54:47 2010 From: bja...@ (Benoit Jacob) Date: Thu, 3 Jun 2010 07:54:47 -0700 (PDT) Subject: [Public WebGL] SHADER_BINARY_FORMATS Message-ID: <1515287678.339661.1275576887245.JavaMail.root@cm-mail03.mozilla.org> Hi, This test: https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/fast/gl-get-calls.html is referring to SHADER_BINARY_FORMATS but I can't see it in the spec. Is this is bug in the test or in the spec? Benoit ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From gma...@ Thu Jun 3 09:24:28 2010 From: gma...@ (Gregg Tavares) Date: Thu, 3 Jun 2010 09:24:28 -0700 Subject: [Public WebGL] WebGL Extensions In-Reply-To: References: <1372126690.336864.1275540867244.JavaMail.root@cm-mail03.mozilla.org> <4C07964F.80003@sjbaker.org> Message-ID: On Thu, Jun 3, 2010 at 5:12 AM, Oliver Hunt wrote: > > On Jun 3, 2010, at 4:47 AM, Steve Baker wrote: > > > Vladimir Vukicevic wrote: > >> Hm, I thought we didn't have this which is where the confusion came > from, but looks like Chris put it in a while ago: > >> > >> > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/doc/spec/WebGL-spec.html#5.14.14 > >> > >> Maybe it's just missing a sentence or two at the end explaining that > extensions are WebGL specific, and if WebGL is built on top of an underlying > OpenGL driver, that driver's extensions will not necessarily be exposed? > >> > > IMHO, it is essential that WebGL does NOT expose underlying driver > > extensions by default. The reason being one of security. > > For what it's worth the way that the JS/DOM bindings work in most (all?) > browsers require every exposed function to be defined and implemented > explicitly -- it would not be possible for an implementation to automate the > exposure of an arbitrary set of unknown extensions. > Plenty of extensions only enable new ENUMs and new features to GLSL. No changes to the API are required so it's very possible to automate the exposure of an arbitrary set of unknown extensions unless WebGL specifically prevents that exposure. > > --Oliver > > > > > -- Steve > > > > ----------------------------------------------------------- > > You are currently subscribed to public_webgl...@ > > To unsubscribe, send an email to majordomo...@ with > > the following command in the body of your email: > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Thu Jun 3 11:04:36 2010 From: kbr...@ (Kenneth Russell) Date: Thu, 3 Jun 2010 11:04:36 -0700 Subject: [Public WebGL] WebGL Extensions In-Reply-To: References: <1372126690.336864.1275540867244.JavaMail.root@cm-mail03.mozilla.org> <4C07964F.80003@sjbaker.org> Message-ID: On Thu, Jun 3, 2010 at 9:24 AM, Gregg Tavares wrote: > > > On Thu, Jun 3, 2010 at 5:12 AM, Oliver Hunt wrote: >> >> On Jun 3, 2010, at 4:47 AM, Steve Baker wrote: >> >> > Vladimir Vukicevic wrote: >> >> Hm, I thought we didn't have this which is where the confusion came >> >> from, but looks like Chris put it in a while ago: >> >> >> >> >> >> https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/doc/spec/WebGL-spec.html#5.14.14 >> >> >> >> Maybe it's just missing a sentence or two at the end explaining that >> >> extensions are WebGL specific, and if WebGL is built on top of an underlying >> >> OpenGL driver, that driver's extensions will not necessarily be exposed? >> >> >> > IMHO, it is essential that WebGL does NOT expose underlying driver >> > extensions by default. ?The reason being one of security. >> >> For what it's worth the way that the JS/DOM bindings work in most (all?) >> browsers require every exposed function to be defined and implemented >> explicitly -- it would not be possible for an implementation to automate the >> exposure of an arbitrary set of unknown extensions. > > Plenty of extensions only enable new ENUMs and new features to GLSL. No > changes to the API are required so it's very possible to automate the > exposure of an arbitrary set of unknown extensions unless WebGL specifically > prevents that exposure. Agreed. Because these extensions will generally be the actual underlying OpenGL ES 2.0 compatible extensions, just exposed and enabled differently for WebGL, I don't think it's warranted to state that extensions are WebGL specific. Concrete examples in the spec would definitely help though -- perhaps one like OES_texture_float where no new enums or functions are defined, just new functionality that needs to be enabled, as well as one which adds both functions and enums, like OES_texture_3D. -Ken >> >> --Oliver >> >> > >> > ?-- Steve >> > >> > ----------------------------------------------------------- >> > You are currently subscribed to public_webgl...@ >> > To unsubscribe, send an email to majordomo...@ with >> > the following command in the body of your email: >> > >> >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Thu Jun 3 11:05:30 2010 From: vla...@ (Vladimir Vukicevic) Date: Thu, 3 Jun 2010 11:05:30 -0700 (PDT) Subject: [Public WebGL] INVALID_VALUE or INVALID_OPERATION for bad object args Message-ID: <1572680775.342256.1275588330853.JavaMail.root@cm-mail03.mozilla.org> For various GL calls that take GL object names as arguments, e.g. LinkProgram, they can generate two distinct errors in case a somehow "wrong" object is passed: GL_INVALID_VALUE is generated if program is not a value generated by OpenGL. GL_INVALID_OPERATION is generated if program is not a program object. We don't really have this distinction in WebGL, and I'm not sure what the right error to raise is here. I'm interpreting this to mean, for WebGL: GL_INVALID_VALUE is generated if the given program was not created by this WebGL context. GL_INVALID_OPERATION is generated if the program is not a program object. Does that sound right? - Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From oli...@ Thu Jun 3 11:07:39 2010 From: oli...@ (Oliver Hunt) Date: Thu, 3 Jun 2010 11:07:39 -0700 Subject: [Public WebGL] WebGL Extensions In-Reply-To: References: <1372126690.336864.1275540867244.JavaMail.root@cm-mail03.mozilla.org> <4C07964F.80003@sjbaker.org> Message-ID: <05D6C63E-C46E-4AE4-8C51-23BD79BFA71F@apple.com> On Jun 3, 2010, at 9:24 AM, Gregg Tavares wrote: > > > On Thu, Jun 3, 2010 at 5:12 AM, Oliver Hunt wrote: > > On Jun 3, 2010, at 4:47 AM, Steve Baker wrote: > > > Vladimir Vukicevic wrote: > >> Hm, I thought we didn't have this which is where the confusion came from, but looks like Chris put it in a while ago: > >> > >> https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/doc/spec/WebGL-spec.html#5.14.14 > >> > >> Maybe it's just missing a sentence or two at the end explaining that extensions are WebGL specific, and if WebGL is built on top of an underlying OpenGL driver, that driver's extensions will not necessarily be exposed? > >> > > IMHO, it is essential that WebGL does NOT expose underlying driver > > extensions by default. The reason being one of security. > > For what it's worth the way that the JS/DOM bindings work in most (all?) browsers require every exposed function to be defined and implemented explicitly -- it would not be possible for an implementation to automate the exposure of an arbitrary set of unknown extensions. > > Plenty of extensions only enable new ENUMs and new features to GLSL. No changes to the API are required so it's very possible to automate the exposure of an arbitrary set of unknown extensions unless WebGL specifically prevents that exposure. How does the runtime _know_ that an extension is only exposing an new enum? How would the glsl validator _know_ what glsl features existed due to an arbitrary extension? The implementation needs to handle every extension itself -- it can't do it automatically through the glGetExtensions API (or whatever it's called) --Oliver > > > --Oliver > > > > > -- Steve > > > > ----------------------------------------------------------- > > You are currently subscribed to public_webgl...@ > > To unsubscribe, send an email to majordomo...@ with > > the following command in the body of your email: > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Thu Jun 3 11:16:29 2010 From: gma...@ (Gregg Tavares) Date: Thu, 3 Jun 2010 11:16:29 -0700 Subject: [Public WebGL] WebGL Extensions In-Reply-To: <05D6C63E-C46E-4AE4-8C51-23BD79BFA71F@apple.com> References: <1372126690.336864.1275540867244.JavaMail.root@cm-mail03.mozilla.org> <4C07964F.80003@sjbaker.org> <05D6C63E-C46E-4AE4-8C51-23BD79BFA71F@apple.com> Message-ID: On Thu, Jun 3, 2010 at 11:07 AM, Oliver Hunt wrote: > > On Jun 3, 2010, at 9:24 AM, Gregg Tavares wrote: > > > > On Thu, Jun 3, 2010 at 5:12 AM, Oliver Hunt wrote: > >> >> On Jun 3, 2010, at 4:47 AM, Steve Baker wrote: >> >> > Vladimir Vukicevic wrote: >> >> Hm, I thought we didn't have this which is where the confusion came >> from, but looks like Chris put it in a while ago: >> >> >> >> >> https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/doc/spec/WebGL-spec.html#5.14.14 >> >> >> >> Maybe it's just missing a sentence or two at the end explaining that >> extensions are WebGL specific, and if WebGL is built on top of an underlying >> OpenGL driver, that driver's extensions will not necessarily be exposed? >> >> >> > IMHO, it is essential that WebGL does NOT expose underlying driver >> > extensions by default. The reason being one of security. >> >> For what it's worth the way that the JS/DOM bindings work in most (all?) >> browsers require every exposed function to be defined and implemented >> explicitly -- it would not be possible for an implementation to automate the >> exposure of an arbitrary set of unknown extensions. >> > > Plenty of extensions only enable new ENUMs and new features to GLSL. No > changes to the API are required so it's very possible to automate the > exposure of an arbitrary set of unknown extensions unless WebGL specifically > prevents that exposure. > > > How does the runtime _know_ that an extension is only exposing an new enum? > How would the glsl validator _know_ what glsl features existed due to an > arbitrary extension? The implementation needs to handle every extension > itself -- it can't do it automatically through the glGetExtensions API (or > whatever it's called) > > In real GL the features just work if they exist. Querying is only their for your benefit. In WebGL, the WebGL implementation has to do checking to make sure no extensions get passed through to the system's GL unless the user has explicit called ctx.getExtension (A WebGL function, not a GL function) for that extension. So in other words, passing GL_FLOAT to texImage2D fails in WebGL. The implementation explicitly checks for that. If we add a floating point texture extension then if the user calls ctx.getExtension("floating-point-textures") the WebGL code starts allowing GL_FLOAT to be passed to texImage2D. The same thing for the validator. If we add support for glsl 2.0 then you'll have to do ctx.getExtension("glsl-2.0") and internally some new flags will be sent to the validator to allow glsl 2.0 features to compile. > --Oliver > > > >> >> --Oliver >> >> > >> > -- Steve >> > >> > ----------------------------------------------------------- >> > You are currently subscribed to public_webgl...@ >> > To unsubscribe, send an email to majordomo...@ with >> > the following command in the body of your email: >> > >> >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vla...@ Thu Jun 3 11:20:30 2010 From: vla...@ (Vladimir Vukicevic) Date: Thu, 3 Jun 2010 11:20:30 -0700 (PDT) Subject: [Public WebGL] WebGL Extensions In-Reply-To: <05D6C63E-C46E-4AE4-8C51-23BD79BFA71F@apple.com> Message-ID: <1025119755.342550.1275589230922.JavaMail.root@cm-mail03.mozilla.org> Sorry -- I did not mean to imply anywhere that WebGL should ever blidnly expose access to underlying GL extension functionality. My original phrasing meant that just because a driver supports it, it doesn't mean that it'll be available in WebGL, and that still stands. For /any/ extension to be available in WebGL, the browser WebGL implementation must fully participate in that and must fully understand the effects of the extension. There will /never/ be any "automatic" exposure of underlying GL extensions. getSupportedExtensions() will return the list of strings of available extensions that the implementation knows about. None of these are enabled by default. For any of them to be enabled (and their functionality available to the app), getExtension() must be called on it and it must have a non-null return type. For some extensions, such as float textures, you'll get a non-null object back, but it won't have any properties on it -- but using FLOAT as a token to the texture functions will begin to work. If the author hadn't called getExtension(), then using FLOAT would have resulted in an error being generated. - Vlad ----- "Oliver Hunt" wrote: > On Jun 3, 2010, at 9:24 AM, Gregg Tavares wrote: > > On Thu, Jun 3, 2010 at 5:12 AM, Oliver Hunt < oliver...@ > > wrote: > > On Jun 3, 2010, at 4:47 AM, Steve Baker wrote: > > > Vladimir Vukicevic wrote: > >> Hm, I thought we didn't have this which is where the confusion came > from, but looks like Chris put it in a while ago: > >> > >> > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/doc/spec/WebGL-spec.html#5.14.14 > >> > >> Maybe it's just missing a sentence or two at the end explaining > that extensions are WebGL specific, and if WebGL is built on top of an > underlying OpenGL driver, that driver's extensions will not > necessarily be exposed? > >> > > IMHO, it is essential that WebGL does NOT expose underlying driver > > extensions by default. The reason being one of security. > > For what it's worth the way that the JS/DOM bindings work in most > (all?) browsers require every exposed function to be defined and > implemented explicitly -- it would not be possible for an > implementation to automate the exposure of an arbitrary set of unknown > extensions. > > Plenty of extensions only enable new ENUMs and new features to GLSL. > No changes to the API are required so it's very possible to automate > the exposure of an arbitrary set of unknown extensions unless WebGL > specifically prevents that exposure. > > > How does the runtime _know_ that an extension is only exposing an new > enum? How would the glsl validator _know_ what glsl features existed > due to an arbitrary extension? The implementation needs to handle > every extension itself -- it can't do it automatically through the > glGetExtensions API (or whatever it's called) > > > --Oliver ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Thu Jun 3 11:35:47 2010 From: kbr...@ (Kenneth Russell) Date: Thu, 3 Jun 2010 11:35:47 -0700 Subject: [Public WebGL] INVALID_VALUE or INVALID_OPERATION for bad object args In-Reply-To: <1572680775.342256.1275588330853.JavaMail.root@cm-mail03.mozilla.org> References: <1572680775.342256.1275588330853.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Thu, Jun 3, 2010 at 11:05 AM, Vladimir Vukicevic wrote: > For various GL calls that take GL object names as arguments, e.g. LinkProgram, they can generate two distinct errors in case a somehow "wrong" object is passed: > > ?GL_INVALID_VALUE is generated if program is not a value generated by OpenGL. > > ?GL_INVALID_OPERATION is generated if program is not a program object. > > We don't really have this distinction in WebGL, and I'm not sure what the right error to raise is here. ?I'm interpreting this to mean, for WebGL: > > ?GL_INVALID_VALUE is generated if the given program was not created by this WebGL context. > > ?GL_INVALID_OPERATION is generated if the program is not a program object. > > Does that sound right? Because e.g. LinkProgram takes a WebGLProgram as an argument, Web IDL defines the behavior when a value not compatible with WebGLProgram is passed: a TypeError is raised. http://dev.w3.org/2006/webapi/WebIDL/#es-interface Therefore the cases that need to generate OpenGL errors are: 1. Passing null when a non-null object is required. 2. Passing an object created by another context. 3. Passing an object which has been explicitly deleted. It seems to me that the first case, passing a null value when a non-null one is required, should generate INVALID_VALUE, since null is never a valid value for such functions. In the second, though, I think INVALID_OPERATION should be generated. The reason is that in a future version of the WebGL spec we expect to support sharing of resources between contexts, so in some situations (when the two contexts are sharing resources) a given value would be valid, and in other situations (when the two contexts don't share resources) it would be invalid. Because the value wouldn't always be invalid, INVALID_OPERATION seems to be the right error to generate. In the third case, since the object was once valid, INVALID_OPERATION again seems like the right choice. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ced...@ Thu Jun 3 12:10:24 2010 From: ced...@ (Cedric Vivier) Date: Fri, 4 Jun 2010 03:10:24 +0800 Subject: [Public WebGL] Purpose of WebGLObjectArray ? Message-ID: Hi, I'm not sure to grasp the need and benefits of WebGLObjectArray ? The only place it's used is getAttachedShaders() and even for this function it would be probably simpler and more useful to return a standard array (ie. WebGLObject[ ] in the IDL) so that all array functions are available as usual to developers, e.g checking a shader has been attached correctly would be intuitively enough : if (!gl.getAttachedShaders(program).indexOf(coolShader)) { // coolShader is not yet attached to this program, let's attach it now... } Thoughts ? Regards, ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Thu Jun 3 12:11:27 2010 From: kbr...@ (Kenneth Russell) Date: Thu, 3 Jun 2010 12:11:27 -0700 Subject: [Public WebGL] SHADER_BINARY_FORMATS In-Reply-To: <1515287678.339661.1275576887245.JavaMail.root@cm-mail03.mozilla.org> References: <1515287678.339661.1275576887245.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Thu, Jun 3, 2010 at 7:54 AM, Benoit Jacob wrote: > Hi, > > This test: > > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/fast/gl-get-calls.html > > is referring to SHADER_BINARY_FORMATS but I can't see it in the spec. Is this is bug in the test or in the spec? It's a bug in the test. It looks like the upstream version has already been updated. See http://trac.webkit.org/browser/trunk/LayoutTests/fast/canvas/webgl/gl-get-calls.html . I have an action item to sync the WebKit and Khronos WebGL tests, but please feel free to sync this one yourself as it will be some time before I do the complete merge. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Thu Jun 3 12:20:08 2010 From: kbr...@ (Kenneth Russell) Date: Thu, 3 Jun 2010 12:20:08 -0700 Subject: [Public WebGL] Few questions/comments about context creation attributes In-Reply-To: References: <1974356647.245506.1274552716538.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Tue, Jun 1, 2010 at 1:51 PM, Kenneth Russell wrote: > On Fri, May 28, 2010 at 9:15 AM, Chris Marrin wrote: >> >> On May 26, 2010, at 8:19 PM, Cedric Vivier wrote: >> >>> On Tue, May 25, 2010 at 06:59, Kenneth Russell wrote: >>>> Chris Marrin can probably comment on the decision to enable the >>>> stencil buffer by default. >>> >>> Can you comment on this Chris? >> >> In case it wasn't clear in my last post, I think the stencil buffer should be off by default > > I think it would be a good idea to disable the stencil buffer by > default as well. > > Are there any objections to changing the default? Since there were no objections on the public list and the working group today agreed to make this change, I've revised the spec to indicate that the default value for stencil in the WebGLContextAttributes is false. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From bja...@ Thu Jun 3 13:17:46 2010 From: bja...@ (Benoit Jacob) Date: Thu, 3 Jun 2010 13:17:46 -0700 (PDT) Subject: [Public WebGL] Error codes in drawArrays / drawElements In-Reply-To: <444553213.343693.1275596182179.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <2013385593.343704.1275596266915.JavaMail.root@cm-mail03.mozilla.org> ----- "Kenneth Russell" wrote: > On Wed, Jun 2, 2010 at 3:22 PM, Benoit Jacob > wrote: > > Hi, > > > > These Khronos tests suggests that in certain circumstances drawArray > / drawElements give INVALID_OPERATION, as opposed to INVALID_VALUE: > > > > > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/fast/draw-arrays-out-of-bounds.html > > > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/fast/draw-elements-out-of-bounds.html > > > > But neither the WebGL spec, nor the OpenGL ES documentation, say > that these functions can give INVALID_OPERATION. > > Section 4.1 of the WebGL indicates, but does not currently specify, > this behavior. > > We agreed at the F2F that generating an INVALID_OPERATION error will > be the specified behavior. The spec still needs to catch up but the > tests verify the intended behavior. Thanks for the answer, I now have a more specific question: >From the draw-elements-out-of-bounds test: shouldGenerateGLError(context, context.INVALID_VALUE, "context.drawArrays(context.TRIANGLES, 0, -1)"); shouldGenerateGLError(context, context.INVALID_OPERATION, "context.drawArrays(context.TRIANGLES, 1, 0)"); shouldGenerateGLError(context, context.INVALID_VALUE, "context.drawArrays(context.TRIANGLES, -1, 0)"); shouldGenerateGLError(context, context.INVALID_OPERATION, "context.drawArrays(context.TRIANGLES, 1, -1)"); shouldGenerateGLError(context, context.INVALID_OPERATION, "context.drawArrays(context.TRIANGLES, -1, 1)"); Could you please explain me the logic here, choosing between VALUE and OPERATION? Benoit > > -Ken > > > What's happening? > > > > Benoit > > ----------------------------------------------------------- > > You are currently subscribed to public_webgl...@ > > To unsubscribe, send an email to majordomo...@ with > > the following command in the body of your email: > > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From zhe...@ Thu Jun 3 14:33:58 2010 From: zhe...@ (Mo, Zhenyao) Date: Thu, 3 Jun 2010 14:33:58 -0700 Subject: [Public WebGL] Error codes in drawArrays / drawElements In-Reply-To: <2013385593.343704.1275596266915.JavaMail.root@cm-mail03.mozilla.org> References: <444553213.343693.1275596182179.JavaMail.root@cm-mail03.mozilla.org> <2013385593.343704.1275596266915.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Thu, Jun 3, 2010 at 1:17 PM, Benoit Jacob wrote: > > ----- "Kenneth Russell" wrote: > >> On Wed, Jun 2, 2010 at 3:22 PM, Benoit Jacob >> wrote: >> > Hi, >> > >> > These Khronos tests suggests that in certain circumstances drawArray >> / drawElements give INVALID_OPERATION, as opposed to INVALID_VALUE: >> > >> > >> https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/fast/draw-arrays-out-of-bounds.html >> > >> https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/fast/draw-elements-out-of-bounds.html >> > >> > But neither the WebGL spec, nor the OpenGL ES documentation, say >> that these functions can give INVALID_OPERATION. >> >> Section 4.1 of the WebGL indicates, but does not currently specify, >> this behavior. >> >> We agreed at the F2F that generating an INVALID_OPERATION error will >> be the specified behavior. The spec still needs to catch up but the >> tests verify the intended behavior. > > Thanks for the answer, I now have a more specific question: > > From the draw-elements-out-of-bounds test: > > shouldGenerateGLError(context, context.INVALID_VALUE, > ?"context.drawArrays(context.TRIANGLES, 0, -1)"); > shouldGenerateGLError(context, context.INVALID_OPERATION, > ?"context.drawArrays(context.TRIANGLES, 1, 0)"); > shouldGenerateGLError(context, context.INVALID_VALUE, > ?"context.drawArrays(context.TRIANGLES, -1, 0)"); > shouldGenerateGLError(context, context.INVALID_OPERATION, > ?"context.drawArrays(context.TRIANGLES, 1, -1)"); > shouldGenerateGLError(context, context.INVALID_OPERATION, > ?"context.drawArrays(context.TRIANGLES, -1, 1)"); > > Could you please explain me the logic here, choosing between VALUE and OPERATION? > > Benoit > I think with negative first/count, it should generate INVALID_VALUE. INVALID_OPERATION is for out-of-boundary errors. For the situation where count==0 and first is out of boundary, I am not sure if an error should be generated. ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From bja...@ Thu Jun 3 14:47:04 2010 From: bja...@ (Benoit Jacob) Date: Thu, 3 Jun 2010 14:47:04 -0700 (PDT) Subject: [Public WebGL] Error codes in drawArrays / drawElements In-Reply-To: Message-ID: <340936801.345580.1275601624355.JavaMail.root@cm-mail03.mozilla.org> ----- "Zhenyao Mo" wrote: > On Thu, Jun 3, 2010 at 1:17 PM, Benoit Jacob > wrote: > > > > ----- "Kenneth Russell" wrote: > > > >> On Wed, Jun 2, 2010 at 3:22 PM, Benoit Jacob > >> wrote: > >> > Hi, > >> > > >> > These Khronos tests suggests that in certain circumstances > drawArray > >> / drawElements give INVALID_OPERATION, as opposed to > INVALID_VALUE: > >> > > >> > > >> > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/fast/draw-arrays-out-of-bounds.html > >> > > >> > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/fast/draw-elements-out-of-bounds.html > >> > > >> > But neither the WebGL spec, nor the OpenGL ES documentation, say > >> that these functions can give INVALID_OPERATION. > >> > >> Section 4.1 of the WebGL indicates, but does not currently > specify, > >> this behavior. > >> > >> We agreed at the F2F that generating an INVALID_OPERATION error > will > >> be the specified behavior. The spec still needs to catch up but > the > >> tests verify the intended behavior. > > > > Thanks for the answer, I now have a more specific question: > > > > From the draw-elements-out-of-bounds test: > > > > shouldGenerateGLError(context, context.INVALID_VALUE, > > ?"context.drawArrays(context.TRIANGLES, 0, -1)"); > > shouldGenerateGLError(context, context.INVALID_OPERATION, > > ?"context.drawArrays(context.TRIANGLES, 1, 0)"); > > shouldGenerateGLError(context, context.INVALID_VALUE, > > ?"context.drawArrays(context.TRIANGLES, -1, 0)"); > > shouldGenerateGLError(context, context.INVALID_OPERATION, > > ?"context.drawArrays(context.TRIANGLES, 1, -1)"); > > shouldGenerateGLError(context, context.INVALID_OPERATION, > > ?"context.drawArrays(context.TRIANGLES, -1, 1)"); > > > > Could you please explain me the logic here, choosing between VALUE > and OPERATION? > > > > Benoit > > > > I think with negative first/count, it should generate INVALID_VALUE. > > INVALID_OPERATION is for out-of-boundary errors. > > For the situation where count==0 and first is out of boundary, I am > not sure if an error should be generated. I see. I guess that what I don't understand is why the 2 last ones return INVALID_OPERATION and not INVALID_VALUE. They could return either, since they are both using a negative value, and an out-of-bounds value. So is it an official rule, that in this case we return INVALID_OPERATION? Benoit ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From gma...@ Thu Jun 3 14:55:38 2010 From: gma...@ (Gregg Tavares) Date: Thu, 3 Jun 2010 14:55:38 -0700 Subject: [Public WebGL] Error codes in drawArrays / drawElements In-Reply-To: <340936801.345580.1275601624355.JavaMail.root@cm-mail03.mozilla.org> References: <340936801.345580.1275601624355.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Thu, Jun 3, 2010 at 2:47 PM, Benoit Jacob wrote: > ----- "Zhenyao Mo" wrote: > > > On Thu, Jun 3, 2010 at 1:17 PM, Benoit Jacob > > wrote: > > > > > > ----- "Kenneth Russell" wrote: > > > > > >> On Wed, Jun 2, 2010 at 3:22 PM, Benoit Jacob > > >> wrote: > > >> > Hi, > > >> > > > >> > These Khronos tests suggests that in certain circumstances > > drawArray > > >> / drawElements give INVALID_OPERATION, as opposed to > > INVALID_VALUE: > > >> > > > >> > > > >> > > > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/fast/draw-arrays-out-of-bounds.html > > >> > > > >> > > > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/fast/draw-elements-out-of-bounds.html > > >> > > > >> > But neither the WebGL spec, nor the OpenGL ES documentation, say > > >> that these functions can give INVALID_OPERATION. > > >> > > >> Section 4.1 of the WebGL indicates, but does not currently > > specify, > > >> this behavior. > > >> > > >> We agreed at the F2F that generating an INVALID_OPERATION error > > will > > >> be the specified behavior. The spec still needs to catch up but > > the > > >> tests verify the intended behavior. > > > > > > Thanks for the answer, I now have a more specific question: > > > > > > From the draw-elements-out-of-bounds test: > > > > > > shouldGenerateGLError(context, context.INVALID_VALUE, > > > "context.drawArrays(context.TRIANGLES, 0, -1)"); > > > shouldGenerateGLError(context, context.INVALID_OPERATION, > > > "context.drawArrays(context.TRIANGLES, 1, 0)"); > > > shouldGenerateGLError(context, context.INVALID_VALUE, > > > "context.drawArrays(context.TRIANGLES, -1, 0)"); > > > shouldGenerateGLError(context, context.INVALID_OPERATION, > > > "context.drawArrays(context.TRIANGLES, 1, -1)"); > > > shouldGenerateGLError(context, context.INVALID_OPERATION, > > > "context.drawArrays(context.TRIANGLES, -1, 1)"); > > > > > > Could you please explain me the logic here, choosing between VALUE > > and OPERATION? > > > > > > Benoit > > > > > > > I think with negative first/count, it should generate INVALID_VALUE. > > > > INVALID_OPERATION is for out-of-boundary errors. > > > > For the situation where count==0 and first is out of boundary, I am > > not sure if an error should be generated. > > I see. I guess that what I don't understand is why the 2 last ones return > INVALID_OPERATION and not INVALID_VALUE. They could return either, since > they are both using a negative value, and an out-of-bounds value. So is it > an official rule, that in this case we return INVALID_OPERATION? > No, I think they should probably be INVALID_VALUE > > Benoit > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Thu Jun 3 17:12:33 2010 From: kbr...@ (Kenneth Russell) Date: Thu, 3 Jun 2010 17:12:33 -0700 Subject: [Public WebGL] Purpose of WebGLObjectArray ? In-Reply-To: References: Message-ID: On Thu, Jun 3, 2010 at 12:10 PM, Cedric Vivier wrote: > Hi, > > I'm not sure to grasp the need and benefits of WebGLObjectArray ? > > The only place it's used is getAttachedShaders() and even for this > function it would be probably simpler and more useful to return a > standard array (ie. WebGLObject[ ] in the IDL) so that all array > functions are available as usual to developers, e.g checking a shader > has been attached correctly would be intuitively enough : > > if (!gl.getAttachedShaders(program).indexOf(coolShader)) { > ? // coolShader is not yet attached to this program, let's attach it now... > } > > > Thoughts ? I believe we specified WebGLObjectArray before Web IDL supported sequence. Note that as far as I know none of the WebGL implementations currently support getAttachedShaders(), so this issue hasn't been explored in depth yet. We could plausibly eliminate it in favor of sequence. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Thu Jun 3 17:21:41 2010 From: vla...@ (Vladimir Vukicevic) Date: Thu, 3 Jun 2010 17:21:41 -0700 (PDT) Subject: [Public WebGL] Purpose of WebGLObjectArray ? In-Reply-To: Message-ID: <1959516821.346862.1275610901513.JavaMail.root@cm-mail03.mozilla.org> ----- "Kenneth Russell" wrote: > On Thu, Jun 3, 2010 at 12:10 PM, Cedric Vivier > wrote: > > Hi, > > > > I'm not sure to grasp the need and benefits of WebGLObjectArray ? > > > > The only place it's used is getAttachedShaders() and even for this > > function it would be probably simpler and more useful to return a > > standard array (ie. WebGLObject[ ] in the IDL) so that all array > > functions are available as usual to developers, e.g checking a shader > > has been attached correctly would be intuitively enough : > > > > if (!gl.getAttachedShaders(program).indexOf(coolShader)) { > > ? // coolShader is not yet attached to this program, let's attach it > now... > > } > > > > > > Thoughts ? > > I believe we specified WebGLObjectArray before Web IDL supported > sequence. Note that as far as I know none of the WebGL > implementations currently support getAttachedShaders(), so this issue > hasn't been explored in depth yet. We could plausibly eliminate it in > favor of sequence. Sounds good to me. - Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Thu Jun 3 17:41:11 2010 From: cma...@ (Chris Marrin) Date: Thu, 3 Jun 2010 17:41:11 -0700 Subject: [Public WebGL] WebGL Extensions In-Reply-To: References: <1372126690.336864.1275540867244.JavaMail.root@cm-mail03.mozilla.org> <4C07964F.80003@sjbaker.org> <05D6C63E-C46E-4AE4-8C51-23BD79BFA71F@apple.com> Message-ID: On Jun 3, 2010, at 11:16 AM, Gregg Tavares wrote: > > > On Thu, Jun 3, 2010 at 11:07 AM, Oliver Hunt wrote: > > On Jun 3, 2010, at 9:24 AM, Gregg Tavares wrote: > >> >> >> On Thu, Jun 3, 2010 at 5:12 AM, Oliver Hunt wrote: >> >> On Jun 3, 2010, at 4:47 AM, Steve Baker wrote: >> >> > Vladimir Vukicevic wrote: >> >> Hm, I thought we didn't have this which is where the confusion came from, but looks like Chris put it in a while ago: >> >> >> >> https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/doc/spec/WebGL-spec.html#5.14.14 >> >> >> >> Maybe it's just missing a sentence or two at the end explaining that extensions are WebGL specific, and if WebGL is built on top of an underlying OpenGL driver, that driver's extensions will not necessarily be exposed? >> >> >> > IMHO, it is essential that WebGL does NOT expose underlying driver >> > extensions by default. The reason being one of security. >> >> For what it's worth the way that the JS/DOM bindings work in most (all?) browsers require every exposed function to be defined and implemented explicitly -- it would not be possible for an implementation to automate the exposure of an arbitrary set of unknown extensions. >> >> Plenty of extensions only enable new ENUMs and new features to GLSL. No changes to the API are required so it's very possible to automate the exposure of an arbitrary set of unknown extensions unless WebGL specifically prevents that exposure. > > How does the runtime _know_ that an extension is only exposing an new enum? How would the glsl validator _know_ what glsl features existed due to an arbitrary extension? The implementation needs to handle every extension itself -- it can't do it automatically through the glGetExtensions API (or whatever it's called) > > > In real GL the features just work if they exist. Querying is only their for your benefit. > > In WebGL, the WebGL implementation has to do checking to make sure no extensions get passed through to the system's GL unless the user has explicit called ctx.getExtension (A WebGL function, not a GL function) for that extension. > > So in other words, passing GL_FLOAT to texImage2D fails in WebGL. The implementation explicitly checks for that. > > If we add a floating point texture extension then if the user calls ctx.getExtension("floating-point-textures") the WebGL code starts allowing GL_FLOAT to be passed to texImage2D. > > The same thing for the validator. If we add support for glsl 2.0 then you'll have to do ctx.getExtension("glsl-2.0") and internally some new flags will be sent to the validator to allow glsl 2.0 features to compile. If an implementation supported, say 3D Textures, an author might want to simply supply the numeric code for TEXTURE_3D to avoid enabling the extension and getting an object which contained the enum. I hope such a thing is explicitly prohibited. I hope it will not be possible by any means to use an extension that has not first been enabled. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Thu Jun 3 17:45:43 2010 From: cma...@ (Chris Marrin) Date: Thu, 3 Jun 2010 17:45:43 -0700 Subject: [Public WebGL] Purpose of WebGLObjectArray ? In-Reply-To: References: Message-ID: On Jun 3, 2010, at 5:12 PM, Kenneth Russell wrote: > On Thu, Jun 3, 2010 at 12:10 PM, Cedric Vivier wrote: >> Hi, >> >> I'm not sure to grasp the need and benefits of WebGLObjectArray ? >> >> The only place it's used is getAttachedShaders() and even for this >> function it would be probably simpler and more useful to return a >> standard array (ie. WebGLObject[ ] in the IDL) so that all array >> functions are available as usual to developers, e.g checking a shader >> has been attached correctly would be intuitively enough : >> >> if (!gl.getAttachedShaders(program).indexOf(coolShader)) { >> // coolShader is not yet attached to this program, let's attach it now... >> } >> >> >> Thoughts ? > > I believe we specified WebGLObjectArray before Web IDL supported > sequence. Note that as far as I know none of the WebGL > implementations currently support getAttachedShaders(), so this issue > hasn't been explored in depth yet. We could plausibly eliminate it in > favor of sequence. It's true, this was done before we felt we could use sequence<>. In fact I thought we had decided to get rid of it a while back in favor of sequence. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Thu Jun 3 22:07:57 2010 From: ste...@ (Steve Baker) Date: Fri, 04 Jun 2010 00:07:57 -0500 Subject: [Public WebGL] Slowdowns & lockups in Firefox/Minefield. Message-ID: <4C088A2D.2090707@sjbaker.org> Forgive me if you guys already know this - but my (increasingly hefty) application seems to only run a half dozen times (assuming I keep hitting 'Reload') before Firefox's frame rate slows down dramatically (like one frame every two seconds) - or perhaps locks up completely. It kinda feels like maybe some resources are not being free'd up...but that's just a guess. Killing and restarting the browser reliably fixes it. This is with: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.3a5pre) Gecko/20100509 Minefield/3.7a5pre I presume we don't expect the application to do any specific cleanup...I'm not currently doing anything like that. -- Steve ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ced...@ Thu Jun 3 23:47:05 2010 From: ced...@ (Cedric Vivier) Date: Fri, 4 Jun 2010 14:47:05 +0800 Subject: [Public WebGL] Purpose of WebGLObjectArray ? In-Reply-To: References: Message-ID: On Fri, Jun 4, 2010 at 08:45, Chris Marrin wrote: > It's true, this was done before we felt we could use sequence<>. In fact I thought we had decided to get rid of it a while back in favor of sequence. Great. After reading WebIDL spec I've understood that T[ ] is passed by-reference and sequence is passed by-value, which sounds possibly less efficient and farther in principle than the C signature, any reason to prefer sequence instead of array ? Also is untyped "sequence" valid (and intended) for all uniform*v and vertexAttrib*v signatures ? I guess they should be "sequence" or sequence". (here also, passing by-value may generate unwanted copies on some bindings ?) Regards, ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From gil...@ Fri Jun 4 02:35:53 2010 From: gil...@ (Giles Thomas) Date: Fri, 4 Jun 2010 10:35:53 +0100 Subject: [Public WebGL] Slowdowns & lockups in Firefox/Minefield. In-Reply-To: <4C088A2D.2090707@sjbaker.org> References: <4C088A2D.2090707@sjbaker.org> Message-ID: On 4 June 2010 06:07, Steve Baker wrote: > Forgive me if you guys already know this - but my (increasingly hefty) > application seems to only run a half dozen times (assuming I keep > hitting 'Reload') before Firefox's frame rate slows down dramatically > (like one frame every two seconds) - or perhaps locks up completely. It > kinda feels like maybe some resources are not being free'd up...but > that's just a guess. Killing and restarting the browser reliably fixes it. > > This is with: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.3a5pre) > Gecko/20100509 Minefield/3.7a5pre > > I presume we don't expect the application to do any specific > cleanup...I'm not currently doing anything like that. > I'm running Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.3a5pre) Gecko/20100524 Minefield/3.7a5pre GTB6 (.NET CLR 3.5.30729) and I'm not seeing anything like this when mashing reload on < http://learningwebgl.com/lessons/lesson15/index.html>. Perhaps the problem's specific to the Linux build, or has been fixed since 9 May? Cheers, Giles -- Giles Thomas giles...@ http://www.gilesthomas.com/ http://learningwebgl.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From cma...@ Fri Jun 4 07:10:12 2010 From: cma...@ (Chris Marrin) Date: Fri, 04 Jun 2010 07:10:12 -0700 Subject: [Public WebGL] Purpose of WebGLObjectArray ? In-Reply-To: References: Message-ID: <0E19381D-2478-4570-91EE-C5706883CBC9@apple.com> On Jun 3, 2010, at 11:47 PM, Cedric Vivier wrote: > On Fri, Jun 4, 2010 at 08:45, Chris Marrin wrote: >> It's true, this was done before we felt we could use sequence<>. In fact I thought we had decided to get rid of it a while back in favor of sequence. > > Great. > > After reading WebIDL spec I've understood that T[ ] is passed > by-reference and sequence is passed by-value, which sounds possibly > less efficient and farther in principle than the C signature, any > reason to prefer sequence instead of array ? > > Also is untyped "sequence" valid (and intended) for all uniform*v and > vertexAttrib*v signatures ? > I guess they should be "sequence" or sequence". > > (here also, passing by-value may generate unwanted copies on some bindings ?) I'm not sure how WebGL deals with return values. The semantics needs to be that a new array is created, filled and returned. Perhaps that requires the use of sequence? Seems like whether this is literally returned by value or by a reference to a newly created object is a detail of the language binding. But I don't have any preference for one syntax over the other. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From bja...@ Fri Jun 4 07:23:20 2010 From: bja...@ (Benoit Jacob) Date: Fri, 4 Jun 2010 07:23:20 -0700 (PDT) Subject: [Public WebGL] Error codes in drawArrays / drawElements In-Reply-To: Message-ID: <1771465689.351112.1275661400090.JavaMail.root@cm-mail03.mozilla.org> ----- "Gregg Tavares" wrote: ----- I wrote: >> I see. I guess that what I don't understand is why the 2 last ones >> return INVALID_OPERATION and not INVALID_VALUE. They could return >> either, since they are both using a negative value, and an >> out-of-bounds value. So is it an official rule, that in this case we >> return INVALID_OPERATION? > > > No, I think they should probably be INVALID_VALUE I agree! That makes a lot of sense to me. I actually think that could become a general rule that INVALID_VALUE has priority over INVALID_OPERATION, but let me start a new thread for that. Benoit ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From bja...@ Fri Jun 4 07:57:10 2010 From: bja...@ (Benoit Jacob) Date: Fri, 4 Jun 2010 07:57:10 -0700 (PDT) Subject: [Public WebGL] a proposal to sort error codes discussions In-Reply-To: <1205981546.351186.1275662465324.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <39819985.351319.1275663430006.JavaMail.root@cm-mail03.mozilla.org> Hi, In order to write conformance tests checking error codes, we must agree precisely on what error codes to produce in every circumstance. Here's a proposal to sort this once and for all. In every WebGL function, let's use the following logic: 1. first check for INVALID_ENUM 2. then check for INVALID_VALUE. Only raise that error if some parameter value is *absolutely* wrong in itself, regardless of other parameters, and regardless of the state. 3. finally, check for INVALID_OPERATION. Adopting such a clear hierarchy between these 3 error codes, will allow to sort out all the ambiguous situations where more than one of these errors could legitimately be produced. Here's a quick rationalization. INVALID_ENUM and INVALID_VALUE mean that a parameter value is absolutely, intrinsically wrong in itself, so they are the most directly useful errors, so they are prioritized over INVALID_OPERATION, which means that parameter values are relatively wrong (relatively to each other or to the state). Between INVALID_ENUM and INVALID_VALUE, let's prioritize INVALID_ENUM, since the GLenum parameters are typically passed compile-time constants. Cheers, Benoit ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ced...@ Fri Jun 4 09:07:04 2010 From: ced...@ (Cedric Vivier) Date: Sat, 5 Jun 2010 00:07:04 +0800 Subject: [Public WebGL] Purpose of WebGLObjectArray ? In-Reply-To: <0E19381D-2478-4570-91EE-C5706883CBC9@apple.com> References: <0E19381D-2478-4570-91EE-C5706883CBC9@apple.com> Message-ID: On Fri, Jun 4, 2010 at 22:10, Chris Marrin wrote: > The semantics needs to be that a new array is created, filled and returned. Yes that's what I assumed too, afaik this semantic is the one defined by T[ ] not sequence. > Seems like whether this is literally returned by value or by a reference to a newly created object is a detail of the language binding. According to the WebIDL spec sequence is always passed by value, difference might be subtle in Javascript but it certainly can have negative consequences (at least in other languages) because it involves unnecessary copying. In performance-sensitive methods like uniform*() it could be quite expensive to do so, e.g uniformMatrix4fv would copy 16 floats on the stack instead of just passing reference. Performance aside, sequence is not allowed to be null (since by-value) whereas T[] is allowed to be null (since by-reference [1]), in WebGL we need the value returned to possibly be null when a GL error has been generated (or context is lost). [1] : "The T[] type is a parameterized type whose values are (possibly zero-length) arrays of values of type T *or the special value null*." Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From cma...@ Fri Jun 4 10:12:23 2010 From: cma...@ (Chris Marrin) Date: Fri, 4 Jun 2010 10:12:23 -0700 Subject: [Public WebGL] A different approach to interleaved typed arrays Message-ID: <2BC44ECB-EE51-40D6-A631-3BD51E711E46@apple.com> The TC39 folks are making a valiant effort at defining structured data in JavaScript. But I don't think their efforts will be able to bear fruit in a timeframe that would be convenient for us. But still I feel their pain. Their main complaint of our current Typed Arrays proposal is that it exposes endian issues to the author. Not only that, but it makes it easy to accidentally write endian dependent code. Since the majority of machines today are little endian, this is a ticking time bomb waiting for the day that someone tries it on a big endian machine. I'd like to propose changes to the current Typed Array proposal which would solve the problem of exposing endianness, but would still be efficient: ========== We've discussed in the past the idea of making the views non-overlapping. Mapping a range of an ArrayBuffer to one view would make it inaccessible to any other view. Furthermore, when mapping a range of an ArrayBuffer, that range is clear, preventing the author from, for instance: 1) Mapping a range to bytes 2) Filling that range with data, which happens to be floating point values in little endian order 3) removing that mapping 4) Mapping that same range to floats to access the previously loaded data This would all be well and good, and the implementation isn't really that complex. It would simply require remembering which views are mapped to which buffer and doing a validation whenever a new mapping is made. But it has a problem because it would not be practical to do interleaved mapping. You'd essentially need a new view for each component of each element. For instance, if you had 1000 vertices, made up of 3 floats followed by 4 bytes, you'd need 1000 Float32Arrays and 1000 Int8Arrays. But we could add an explicit interleaved mapping capability to the Typed Arrays. It would require something like this: var buf = new ArrayBuffer(1000 * 16 /* 3 floats + 4 bytes */); var floats = new Float32Array(buf, 0, 1000, 3, 4); var colors = new UInt8Array(buf, 12, 1000, 4, 12); The extra 2 parameters added to the Typed Array constructors are: elementsPerGroup - number of elements in each group of elements bytesToNextGroup - number of bytes to skip to get from the end of a group of elements to the start of the next There are other ways to phrase those parameters. For instance you might want the second one to be 'stride', which is the number of bytes from the start of one group to the next. But the result is the same. In the above example the views don't overlap because the extra parameters define which parts of the buffer belong to which view. Checking for validity is essentially the same as for the non-stride case, although perhaps a bit more complex. But I don't believe creating views into a buffer is likely to happen frequently, so it can be a relatively expensive operation without hurting performance. The next question is, how does access change? In the current scheme, writing to an interleaved buffer involves computing offsets to each group in JavaScript. With this scheme, each view would appear to be a contiguous array of the given type. Loading a view is simply a matter of: var values = [ ... array of 3000 floating point values ... ]; floats.set(values); and have the values "scattered" to the appropriate groups of elements. Doing this in native code would be significantly faster that doing a JavaScript loop, computing offsets and loading single values at a time. How would you change mappings? How would you disassociate a Typed Array with a range in an ArrayBuffer and then associate a different one? One simple answer is that you can't. Once an association is made, no other association with those bytes of the ArrayBuffer can ever be made. This seems reasonable given the automatic garbage collection that occurs in JavaScript. ArrayBuffers are not precious resources that need to be shared. It would be just as easy to create a new ArrayBuffer when new mappings are needed. This is especially true if the data in the ArrayBuffer doesn't survive across mappings. It also makes it unnecessary to clear the data in a newly mapped range. If you assume the ArrayBuffer was cleared on creation, the data in the newly mapped range is guaranteed to be valid. We also discussed the notion of separating ArrayBuffers that are used for incoming data from those that are used to prepare data for uploading to the GPU. I think we should repurpose the DataView concept for this functionality. We can call it a DataBuffer, or NetworkBuffer, or whatever. It would have the API's from the DataView, but it would actually be backed by an internal buffer. This object would be returned from XMLHttpRequest, a BLOb object or any other object that accesses data in some external (not machine specific) endianness. When getting data from this buffer, you specify the endianness you know the data to be in and it is returned to you in the native machine endianness. When writing data to the buffer (for eventual output to an external destination) you specify endianness and the data is converted from internal endianness to the one specified. We should probably add methods to this object which can deal with arrays of data and we might even want to make it aware of interleaved Typed Arrays. For instance, if you have a DataBuffer 'inData' with floats and bytes interleaved the same as in the above example, but in little endian order, you might say: inData.setArray(floats, 0, 1000, true); inData.setArray(colors, 12, 1000, true); These calls would handle the byte swapping for matching endianness (if needed) and the interleaving the values into groups according to the layout of Typed Arrays. This solves the problem of undesired endian errors and provides a fast API for interleaving data. Comments? ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Fri Jun 4 12:12:50 2010 From: kbr...@ (Kenneth Russell) Date: Fri, 4 Jun 2010 12:12:50 -0700 Subject: [Public WebGL] A different approach to interleaved typed arrays In-Reply-To: <2BC44ECB-EE51-40D6-A631-3BD51E711E46@apple.com> References: <2BC44ECB-EE51-40D6-A631-3BD51E711E46@apple.com> Message-ID: On Fri, Jun 4, 2010 at 10:12 AM, Chris Marrin wrote: > > The TC39 folks are making a valiant effort at defining structured data in JavaScript. But I don't think their efforts will be able to bear fruit in a timeframe that would be convenient for us. But still I feel their pain. Their main complaint of our current Typed Arrays proposal is that it exposes endian issues to the author. Not only that, but it makes it easy to accidentally write endian dependent code. Since the majority of machines today are little endian, this is a ticking time bomb waiting for the day that someone tries it on a big endian machine. > > I'd like to propose changes to the current Typed Array proposal which would solve the problem of exposing endianness, but would still be efficient: > > ========== > > We've discussed in the past the idea of making the views non-overlapping. Mapping a range of an ArrayBuffer to one view would make it inaccessible to any other view. Furthermore, when mapping a range of an ArrayBuffer, that range is clear, preventing the author from, for instance: > > 1) Mapping a range to bytes > 2) Filling that range with data, which happens to be floating point values in little endian order > 3) removing that mapping > 4) Mapping that same range to floats to access the previously loaded data > > This would all be well and good, and the implementation isn't really that complex. It would simply require remembering which views are mapped to which buffer and doing a validation whenever a new mapping is made. > > But it has a problem because it would not be practical to do interleaved mapping. You'd essentially need a new view for each component of each element. For instance, if you had 1000 vertices, made up of 3 floats followed by 4 bytes, you'd need 1000 Float32Arrays and 1000 Int8Arrays. > > But we could add an explicit interleaved mapping capability to the Typed Arrays. It would require something like this: > > ? ? ? ?var buf ?= new ArrayBuffer(1000 * 16 /* 3 floats + 4 bytes */); > ? ? ? ?var floats = new Float32Array(buf, 0, 1000, 3, 4); > ? ? ? ?var colors = new UInt8Array(buf, 12, 1000, 4, 12); > > The extra 2 parameters added to the Typed Array constructors are: > > ? ? ? ?elementsPerGroup - number of elements in each group of elements > ? ? ? ?bytesToNextGroup - number of bytes to skip to get from the end of a group of elements to the start of the next > > There are other ways to phrase those parameters. For instance you might want the second one to be 'stride', which is the number of bytes from the start of one group to the next. But the result is the same. In the above example the views don't overlap because the extra parameters define which parts of the buffer belong to which view. Checking for validity is essentially the same as for the non-stride case, although perhaps a bit more complex. But I don't believe creating views into a buffer is likely to happen frequently, so it can be a relatively expensive operation without hurting performance. > > The next question is, how does access change? In the current scheme, writing to an interleaved buffer involves computing offsets to each group in JavaScript. With this scheme, each view would appear to be a contiguous array of the given type. Loading a view is simply a matter of: > > ? ? ? ?var values = [ ... array of 3000 floating point values ... ]; > ? ? ? ?floats.set(values); > > and have the values "scattered" to the appropriate groups of elements. Doing this in native code would be significantly faster that doing a JavaScript loop, computing offsets and loading single values at a time. > > How would you change mappings? How would you disassociate a Typed Array with a range in an ArrayBuffer and then associate a different one? One simple answer is that you can't. Once an association is made, no other association with those bytes of the ArrayBuffer can ever be made. This seems reasonable given the automatic garbage collection that occurs in JavaScript. ArrayBuffers are not precious resources that need to be shared. It would be just as easy to create a new ArrayBuffer when new mappings are needed. This is especially true if the data in the ArrayBuffer doesn't survive across mappings. It also makes it unnecessary to clear the data in a newly mapped range. If you assume the ArrayBuffer was cleared on creation, the data in the newly mapped range is guaranteed to be valid. > > We also discussed the notion of separating ArrayBuffers that are used for incoming data from those that are used to prepare data for uploading to the GPU. I think we should repurpose the DataView concept for this functionality. We can call it a DataBuffer, or NetworkBuffer, or whatever. It would have the API's from the DataView, but it would actually be backed by an internal buffer. This object would be returned from XMLHttpRequest, a BLOb object or any other object that accesses data in some external (not machine specific) endianness. When getting data from this buffer, you specify the endianness you know the data to be in and it is returned to you in the native machine endianness. When writing data to the buffer (for eventual output to an external destination) you specify endianness and the data is converted from internal endianness to the one specified. > > We should probably add methods to this object which can deal with arrays of data and we might even want to make it aware of interleaved Typed Arrays. For instance, if you have a DataBuffer 'inData' with floats and bytes interleaved the same as in the above example, but in little endian order, you might say: > > ? ? ? ?inData.setArray(floats, 0, 1000, true); > ? ? ? ?inData.setArray(colors, 12, 1000, true); > > These calls would handle the byte swapping for matching endianness (if needed) and the interleaving the values into groups according to the layout of Typed Arrays. > > This solves the problem of undesired endian errors and provides a fast API for interleaving data. > > Comments? We have discussed similar proposals earlier in the working group, and during the TC-39 meeting a similar scatter/gather idea was discussed. The problem with this proposal is that setting and getting individual elements is too slow, which is why you've worked around that by adding bulk setters and getters. This does not solve the basic problem that arrays in JavaScript are tremendously inefficient for storing and manipulating large numbers of vertices. Supporting this capability is essential for interesting applications to be developed with WebGL. NVIDIA's vertex_buffer_object demo [1] is a reasonable example and benchmark of dynamic vertex generation in JavaScript. In Firefox and Chromium nightly builds on my laptop it generates roughly 4 million vertices per second. This is not bad, but is still roughly a factor of seven slower than the Java HotSpot server compiler on the same demo. I am convinced that better performance can be attained in JavaScript, but there is much work to be done. It is absolutely essential that random access reads and writes of single elements be efficiently supported. Having one array reference multiple contiguous elements with gaps in between groups of elements will perform too poorly. We could consider having each array reference only one element per group, with a fixed stride between elements. This will still add at least a load and if-test, or load and shift, to each array load and store, to fetch the stride and either compare it to zero for the fastest case, or to always shift the index to compute the byte offset. I am not in favor of making this change. I personally don't believe that the data aliasing issues exposed by Typed Arrays are as severe as some do, and I am not willing to sacrifice any of the performance gains we have already achieved with Typed Arrays because we still have a long way to go. For reference, the vertex_buffer_object is running over seven times faster than it used to in Chromium before any optimizations to the Typed Array implementation were done. -Ken [1] https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/demos/google/nvidia-vertex-buffer-object/index.html ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From zhe...@ Fri Jun 4 12:19:32 2010 From: zhe...@ (Mo, Zhenyao) Date: Fri, 4 Jun 2010 12:19:32 -0700 Subject: [Public WebGL] Framebuffer attachment point Message-ID: In WebGL spec section 6.2, when internal format of a renderbuffer does not match the attachment point, an INVALID_OPERATION error is generated. In the same spirit, shouldn't we generate the same INVALID_OPERATION for framebufferTexImage2D when the attachment is STENCIL or DEPTH? In GLES, we don't have a way to generate non color textures. Also, is it really necessary that we check internal format of renderbuffer/texture when attaching it to a framebuffer? This dictates we has to call texImage2D or renderbufferStorage before attaching. Shouldn't we just follow GLES and allow the mismatch? So we can attach and later change the internalformat of renderbuffer/texture and make the framebuffer complete. ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Fri Jun 4 12:47:13 2010 From: vla...@ (Vladimir Vukicevic) Date: Fri, 4 Jun 2010 12:47:13 -0700 (PDT) Subject: [Public WebGL] A different approach to interleaved typed arrays In-Reply-To: <807404609.353791.1275680169960.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <1288978043.353894.1275680833801.JavaMail.root@cm-mail03.mozilla.org> ----- "Kenneth Russell" wrote: > On Fri, Jun 4, 2010 at 10:12 AM, Chris Marrin > wrote: > > > > The TC39 folks are making a valiant effort at defining structured > data in JavaScript. But I don't think their efforts will be able to > bear fruit in a timeframe that would be convenient for us. But still I > feel their pain. Their main complaint of our current Typed Arrays > proposal is that it exposes endian issues to the author. Not only > that, but it makes it easy to accidentally write endian dependent > code. Since the majority of machines today are little endian, this is > a ticking time bomb waiting for the day that someone tries it on a big > endian machine. > > > > I'd like to propose changes to the current Typed Array proposal > which would solve the problem of exposing endianness, but would still > be efficient: > > > > ========== > > > > We've discussed in the past the idea of making the views > non-overlapping. Mapping a range of an ArrayBuffer to one view would > make it inaccessible to any other view. Furthermore, when mapping a > range of an ArrayBuffer, that range is clear, preventing the author > from, for instance: > > > > 1) Mapping a range to bytes > > 2) Filling that range with data, which happens to be floating point > values in little endian order > > 3) removing that mapping > > 4) Mapping that same range to floats to access the previously loaded > data > > > > This would all be well and good, and the implementation isn't really > that complex. It would simply require remembering which views are > mapped to which buffer and doing a validation whenever a new mapping > is made. > > > > But it has a problem because it would not be practical to do > interleaved mapping. You'd essentially need a new view for each > component of each element. For instance, if you had 1000 vertices, > made up of 3 floats followed by 4 bytes, you'd need 1000 Float32Arrays > and 1000 Int8Arrays. > > > > But we could add an explicit interleaved mapping capability to the > Typed Arrays. It would require something like this: > > > > ? ? ? ?var buf ?= new ArrayBuffer(1000 * 16 /* 3 floats + 4 bytes > */); > > ? ? ? ?var floats = new Float32Array(buf, 0, 1000, 3, 4); > > ? ? ? ?var colors = new UInt8Array(buf, 12, 1000, 4, 12); > > > > The extra 2 parameters added to the Typed Array constructors are: > > > > ? ? ? ?elementsPerGroup - number of elements in each group of > elements > > ? ? ? ?bytesToNextGroup - number of bytes to skip to get from the > end of a group of elements to the start of the next > > > > There are other ways to phrase those parameters. For instance you > might want the second one to be 'stride', which is the number of bytes > from the start of one group to the next. But the result is the same. > In the above example the views don't overlap because the extra > parameters define which parts of the buffer belong to which view. > Checking for validity is essentially the same as for the non-stride > case, although perhaps a bit more complex. But I don't believe > creating views into a buffer is likely to happen frequently, so it can > be a relatively expensive operation without hurting performance. > > > > The next question is, how does access change? In the current scheme, > writing to an interleaved buffer involves computing offsets to each > group in JavaScript. With this scheme, each view would appear to be a > contiguous array of the given type. Loading a view is simply a matter > of: > > > > ? ? ? ?var values = [ ... array of 3000 floating point values ... > ]; > > ? ? ? ?floats.set(values); > > > > and have the values "scattered" to the appropriate groups of > elements. Doing this in native code would be significantly faster that > doing a JavaScript loop, computing offsets and loading single values > at a time. > > > > How would you change mappings? How would you disassociate a Typed > Array with a range in an ArrayBuffer and then associate a different > one? One simple answer is that you can't. Once an association is made, > no other association with those bytes of the ArrayBuffer can ever be > made. This seems reasonable given the automatic garbage collection > that occurs in JavaScript. ArrayBuffers are not precious resources > that need to be shared. It would be just as easy to create a new > ArrayBuffer when new mappings are needed. This is especially true if > the data in the ArrayBuffer doesn't survive across mappings. It also > makes it unnecessary to clear the data in a newly mapped range. If you > assume the ArrayBuffer was cleared on creation, the data in the newly > mapped range is guaranteed to be valid. > > > > We also discussed the notion of separating ArrayBuffers that are > used for incoming data from those that are used to prepare data for > uploading to the GPU. I think we should repurpose the DataView concept > for this functionality. We can call it a DataBuffer, or NetworkBuffer, > or whatever. It would have the API's from the DataView, but it would > actually be backed by an internal buffer. This object would be > returned from XMLHttpRequest, a BLOb object or any other object that > accesses data in some external (not machine specific) endianness. When > getting data from this buffer, you specify the endianness you know the > data to be in and it is returned to you in the native machine > endianness. When writing data to the buffer (for eventual output to an > external destination) you specify endianness and the data is converted > from internal endianness to the one specified. > > > > We should probably add methods to this object which can deal with > arrays of data and we might even want to make it aware of interleaved > Typed Arrays. For instance, if you have a DataBuffer 'inData' with > floats and bytes interleaved the same as in the above example, but in > little endian order, you might say: > > > > ? ? ? ?inData.setArray(floats, 0, 1000, true); > > ? ? ? ?inData.setArray(colors, 12, 1000, true); > > > > These calls would handle the byte swapping for matching endianness > (if needed) and the interleaving the values into groups according to > the layout of Typed Arrays. > > > > This solves the problem of undesired endian errors and provides a > fast API for interleaving data. > > > > Comments? > > We have discussed similar proposals earlier in the working group, and > during the TC-39 meeting a similar scatter/gather idea was discussed. > The problem with this proposal is that setting and getting individual > elements is too slow, which is why you've worked around that by > adding > bulk setters and getters. This does not solve the basic problem that > arrays in JavaScript are tremendously inefficient for storing and > manipulating large numbers of vertices. Supporting this capability is > essential for interesting applications to be developed with WebGL. > NVIDIA's vertex_buffer_object demo [1] is a reasonable example and > benchmark of dynamic vertex generation in JavaScript. In Firefox and > Chromium nightly builds on my laptop it generates roughly 4 million > vertices per second. This is not bad, but is still roughly a factor > of > seven slower than the Java HotSpot server compiler on the same demo. > I > am convinced that better performance can be attained in JavaScript, > but there is much work to be done. > > It is absolutely essential that random access reads and writes of > single elements be efficiently supported. Having one array reference > multiple contiguous elements with gaps in between groups of elements > will perform too poorly. We could consider having each array reference > only one element per group, with a fixed stride between elements. > This will still add at least a load and if-test, or load and shift, to > each array load and store, to fetch the stride and either compare it to > zero for the fastest case, or to always shift the index to compute the > byte offset. I am not in favor of making this change. I personally > don't believe that the data aliasing issues exposed by Typed Arrays > are as severe as some do, and I am not willing to sacrifice any of the > performance gains we have already achieved with Typed Arrays because > we still have a long way to go. For reference, the > vertex_buffer_object is running over seven times faster than it used > to in Chromium before any optimizations to the Typed Array > implementation were done. I'm in general agreement with Ken here -- I just don't see the aliasing/endianness issues as a significant showstopper problem. I'm interested in the TC-39 struct proposal purely from a developer convenience point of view, because being able to access interleaved array data in a more natural form (like foo[i].vertex.x, foo[i].normal.y, foo[i].color.r, etc.) would be nice, though only if the implementations can get that indexing to be close to typed array speed. Having it fix the endianness exposure issues is, to me, only a nice side benefit. Current typed array indexing is: base_ptr + index< References: <4C088A2D.2090707@sjbaker.org> Message-ID: <7E069E8B-4CB3-4727-AA21-3B87B337196E@gmail.com> Steve Baker kirjoitti 4.6.2010 kello 8.07: > Forgive me if you guys already know this - but my (increasingly hefty) > application seems to only run a half dozen times (assuming I keep > hitting 'Reload') before Firefox's frame rate slows down dramatically > (like one frame every two seconds) - or perhaps locks up > completely. It > kinda feels like maybe some resources are not being free'd up...but > that's just a guess. Killing and restarting the browser reliably > fixes it. > > This is with: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv: > 1.9.3a5pre) > Gecko/20100509 Minefield/3.7a5pre > > I presume we don't expect the application to do any specific > cleanup...I'm not currently doing anything like that. > > -- Steve > Firefox was at one point doing GL context destruction at garbage collection time (instead of document unload). If it's still doing that, the context and the resources allocated by it are only freed after doing some thirty megabytes of JS allocation. In that case, if you have a large canvas and don't allocate much, you might run out of GPU resources before triggering GC. But I don't know too well how the latest builds do things, so I might well be wrong. Ilmari ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Fri Jun 4 15:49:33 2010 From: ste...@ (Steve Baker) Date: Fri, 04 Jun 2010 17:49:33 -0500 Subject: [Public WebGL] Slowdowns & lockups in Firefox/Minefield. In-Reply-To: <7E069E8B-4CB3-4727-AA21-3B87B337196E@gmail.com> References: <4C088A2D.2090707@sjbaker.org> <7E069E8B-4CB3-4727-AA21-3B87B337196E@gmail.com> Message-ID: <4C0982FD.2080509@sjbaker.org> Ilmari Heikkinen wrote: > > > Steve Baker kirjoitti 4.6.2010 kello 8.07: > >> Forgive me if you guys already know this - but my (increasingly hefty) >> application seems to only run a half dozen times (assuming I keep >> hitting 'Reload') before Firefox's frame rate slows down dramatically >> (like one frame every two seconds) - or perhaps locks up completely. It >> kinda feels like maybe some resources are not being free'd up...but >> that's just a guess. Killing and restarting the browser reliably >> fixes it. >> >> This is with: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.3a5pre) >> Gecko/20100509 Minefield/3.7a5pre >> >> I presume we don't expect the application to do any specific >> cleanup...I'm not currently doing anything like that. >> >> -- Steve >> > > Firefox was at one point doing GL context destruction at garbage > collection time (instead of document unload). If it's still doing > that, the context and the resources allocated by it are only freed > after doing some thirty megabytes of JS allocation. In that case, if > you have a large canvas and don't allocate much, you might run out of > GPU resources before triggering GC. But I don't know too well how the > latest builds do things, so I might well be wrong. > > Ilmari I have an 800x600 canvas (Is that "large"?! It's hard to recalibate my brain to thinking about web-based 3D!). I'm deliberately fighting to avoid dynamic allocations in my mainloop in an effort to minimize garbage collection randomly jumping in and killing my frame-rate...although I do a fair amount of allocation on load. So I suppose this could explain the symptoms. I'll try tossing in some gratuitous allocations and see if that fixes it. Thanks! -- Steve ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From bja...@ Sat Jun 5 09:08:05 2010 From: bja...@ (Benoit Jacob) Date: Sat, 5 Jun 2010 09:08:05 -0700 (PDT) Subject: [Public WebGL] a proposal to sort error codes discussions In-Reply-To: <39819985.351319.1275663430006.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <1283621904.362122.1275754085558.JavaMail.root@cm-mail03.mozilla.org> No replies yet: either I said something stupid (likely!) or I should give some more context: In the recent thread "Error codes in drawArrays / drawElements" we discussed a conformance test that currently requires a INVALID_OPERATION error to be raised, and it appeared in the discussion that it should rather require INVALID_VALUE. This shows that in situations where both errors could legitimately be raised, it wasn't clear enough which one to prefer raising. My post here can be summarized by saying: in case of ambiguity, prefer raising INVALID_VALUE over raising INVALID_OPERATION. Cheers, Benoit ----- "Benoit Jacob" wrote: > Hi, > > In order to write conformance tests checking error codes, we must > agree precisely on what error codes to produce in every circumstance. > Here's a proposal to sort this once and for all. > > In every WebGL function, let's use the following logic: > 1. first check for INVALID_ENUM > 2. then check for INVALID_VALUE. Only raise that error if some > parameter value is *absolutely* wrong in itself, regardless of other > parameters, and regardless of the state. > 3. finally, check for INVALID_OPERATION. > > Adopting such a clear hierarchy between these 3 error codes, will > allow to sort out all the ambiguous situations where more than one of > these errors could legitimately be produced. > > Here's a quick rationalization. INVALID_ENUM and INVALID_VALUE mean > that a parameter value is absolutely, intrinsically wrong in itself, > so they are the most directly useful errors, so they are prioritized > over INVALID_OPERATION, which means that parameter values are > relatively wrong (relatively to each other or to the state). Between > INVALID_ENUM and INVALID_VALUE, let's prioritize INVALID_ENUM, since > the GLenum parameters are typically passed compile-time constants. > > Cheers, > Benoit > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From jda...@ Sat Jun 5 09:18:59 2010 From: jda...@ (John Davis) Date: Sat, 5 Jun 2010 11:18:59 -0500 Subject: [Public WebGL] OpenGL ES 2.0 support in Google Native Client Message-ID: Is this going to make WebGL a bit worthless/redundant? -------------- next part -------------- An HTML attachment was scrubbed... URL: From oli...@ Sat Jun 5 10:02:04 2010 From: oli...@ (Oliver Hunt) Date: Sat, 5 Jun 2010 10:02:04 -0700 Subject: [Public WebGL] OpenGL ES 2.0 support in Google Native Client In-Reply-To: References: Message-ID: <9207BC1C-F0C7-423B-A76A-34860D37E5A8@apple.com> No because NaCl isn't a standard, it's a plugin. --Oliver On Jun 5, 2010, at 9:18 AM, John Davis wrote: > Is this going to make WebGL a bit worthless/redundant? > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ala...@ Mon Jun 7 08:14:19 2010 From: ala...@ (Alan Chaney) Date: Mon, 07 Jun 2010 08:14:19 -0700 Subject: [Public WebGL] Buffer size and viewport Message-ID: <4C0D0CCB.5050204@mechnicality.com> Hi Desktop GL programming frequently requires that the user sets the window size as part of the game/application setup. This means that normally the viewport can be set to (0, 0, displaybufferwidth, displaybufferheight). However, in a WebGL application it is likely to be very common that the window size will change due to use input. The default with WebGL is to set the buffer size to that of the canvas element and the viewport to that as well. This means that if the window is resized to greater dimensions than the original canvas size the display buffer must be discarded and a new buffer initialized - this takes time. One option that I can see is to make the display buffer considerably bigger than the canvas element - perhaps by doing some calculation based upon my UI layout and the underlying screen size and setting this value when creating the context. Then as the canvas is resized I simply set the viewport size to match the size of the canvas element, until, of course, it exceeds the underlying buffer size. Does anyone have any feel to the relationship between viewport size and buffer size and performance? In other words, if I allocate a larger buffer than I actually display in the view port, is this likely to cause a significant performance issue? Thanks in advance, Alan ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From jos...@ Mon Jun 7 09:13:52 2010 From: jos...@ (Joshua Bell) Date: Mon, 7 Jun 2010 09:13:52 -0700 Subject: [Public WebGL] Buffer size and viewport In-Reply-To: <4C0D0CCB.5050204@mechnicality.com> References: <4C0D0CCB.5050204@mechnicality.com> Message-ID: Apologies in advance for the pedantic reply. On Mon, Jun 7, 2010 at 8:14 AM, Alan Chaney wrote: > Desktop GL programming frequently requires that the user sets the window > size as part of the game/application setup. This means that normally the > viewport can be set to (0, 0, displaybufferwidth, displaybufferheight). > However, in a WebGL application it is likely to be very common that the > window size will change due to use input. > > The default with WebGL is to set the buffer size to that of the canvas > element and the viewport to that as well. This means that if the window is > resized to greater dimensions than the original canvas size the display > buffer must be discarded and a new buffer initialized - this takes time. How frequently will this occur in practice, during a user session? Is it unacceptable for your application to pause briefly responding to the resize? (e.g. due to inducing latency issues, etc?) Are you optimizing prematurely, or do you have performance data - and if so, can you share it? Note that the underlying canvas will have a fixed buffer size as well; it may be stretched to match the window size via CSS, but changing the buffer's pixel dimensions dynamically will require script. One option that I can see is to make the display buffer considerably bigger > than the canvas element - perhaps by doing some calculation based upon my UI > layout and the underlying screen size and setting this value when creating > the context. > Then as the canvas is resized I simply set the viewport size to match the > size of the canvas element, until, of course, it exceeds the underlying > buffer size. > Multiple monitors will make this tricky - the answer given by the window.screen.width/height will change if I move my window, and is inadequate if I stretch my window to span two monitors. > Does anyone have any feel to the relationship between viewport size and > buffer size and performance? In other words, if I allocate a larger buffer > than I actually display in the view port, is this likely to cause a > significant performance issue? Despite the skepticism, I'm interested in the answers as well. My gut would be that allocating extra space would not be worth it, but I'd love to hear from implementers if it would be a performance problem. On a related note: games often run full-screen at a resolution lower than the "native" resolution of the device/monitor (and/or what the OS is normally set to) to find the sweet spot between visual fidelity and frame rate. With WebGL, the JS engine will be a limiting factor for now at least - more so than the pixel pipeline. I would naively expect, going full-screen (possibly via an HTML5-or-later API as discussed here: http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2010-January/024872.html) at "native" resolution would not incur a frame rate hit. I'm curious if this has been explored, and/or if approaches have been discussed. Anyone have any data or pointers to previous discussions? It's an issue my company runs into even with desktop GL; we have extremely high poly user-generated content but typically run in a window that users maximize, rather than running full-screen at reduced resolution, which induces a significant frame rate hit on lower end (CPU+GPU) hardware. Joshua -------------- next part -------------- An HTML attachment was scrubbed... URL: From ala...@ Mon Jun 7 10:18:41 2010 From: ala...@ (Alan Chaney) Date: Mon, 07 Jun 2010 10:18:41 -0700 Subject: [Public WebGL] Buffer size and viewport In-Reply-To: References: <4C0D0CCB.5050204@mechnicality.com> Message-ID: <4C0D29F1.2000808@mechnicality.com> Joshua Bell wrote: > Apologies in advance for the pedantic reply. > No problem, it was a pedantic question! > On Mon, Jun 7, 2010 at 8:14 AM, Alan Chaney > wrote: > > Desktop GL programming frequently requires that the user sets the > window size as part of the game/application setup. This means that > normally the viewport can be set to (0, 0, displaybufferwidth, > displaybufferheight). However, in a WebGL application it is likely > to be very common that the window size will change due to use input. > > The default with WebGL is to set the buffer size to that of the > canvas element and the viewport to that as well. This means that > if the window is resized to greater dimensions than the original > canvas size the display buffer must be discarded and a new buffer > initialized - this takes time. > > > How frequently will this occur in practice, during a user session? Is > it unacceptable for your application to pause briefly responding to > the resize? (e.g. due to inducing latency issues, etc?) > Are you optimizing prematurely, or do you have performance data - and > if so, can you share it? I'm raising it because I encountered problems with a OpenGL player written using JOGL. Similarly to WebGL, JOGL allocates a drawing buffer and by default makes it a power of 2 size that encloses the specified viewport. When the window was resized we were encountering problems in the application stalling whilst the (GL) buffers were reloaded. When you dealing with a desktop application it is easier to get around this by exploiting techniques such as memory mapped files. I'm evaluating similar options by using the various storage APIs that are being proposed. As far as the "premature optimization" goes, I'm actually at this point considering options - this is a problem which I've hit before, so I think its valid to do some early investigation. Ironically, in the example I gave I had to do quite a lot of reorganization of the code after I discovered the problem, which is part of the reason why I'm considering it in this case. As an aside, this blog (which is nothing to do with WebGL, its about MS FSX/Dx10) has some interesting points on the strategy of "make it work, then make it fast" in connection with drawing high vertex-count content - http://blogs.technet.com/b/torgo3000/archive/2007/06/01/performance-art.aspx . > > Note that the underlying canvas will have a fixed buffer size as well; > it may be stretched to match the window size via CSS, but changing the > buffer's pixel dimensions dynamically will require script. > > One option that I can see is to make the display buffer > considerably bigger than the canvas element - perhaps by doing > some calculation based upon my UI layout and the underlying screen > size and setting this value when creating the context. > Then as the canvas is resized I simply set the viewport size to > match the size of the canvas element, until, of course, it exceeds > the underlying buffer size. > > > Multiple monitors will make this tricky - the answer given by the > window.screen.width/height will change if I move my window, and is > inadequate if I stretch my window to span two monitors. Agreed. As it happens, my focus at the moment is on "lower end" systems where multiple monitors are rare (although I'm writing this to you on a dual monitor machine...) > > > Does anyone have any feel to the relationship between viewport > size and buffer size and performance? In other words, if I > allocate a larger buffer than I actually display in the view port, > is this likely to cause a significant performance issue? > > > Despite the skepticism, I'm interested in the answers as well. My gut > would be that allocating extra space would not be worth it, but I'd > love to hear from implementers if it would be a performance problem. > Yes, part of the problem with "just trying it out" is that I only have a very limited set of browser/graphics card combinations at the moment, and I was hoping that the implementors might have some insight into the more general issues and likely performance trends (eg. mobile) I think WebGL has the opportunity to play a significant part in mobile devices if/when mobile WebGL compatible browsers become available. I was going to raise this issue in a separate thread. I realize that the issue of resizing is not likely to be so relevant on a mobile platforms. > On a related note: games often run full-screen at a resolution lower > than the "native" resolution of the device/monitor (and/or what the OS > is normally set to) to find the sweet spot between visual fidelity and > frame rate. > Agreed however I think that WebGL will be used for a lot more things than traditional games and such apps. may well need/use resizeable canvas elements. > With WebGL, the JS engine will be a limiting factor for now at least - > more so than the pixel pipeline. I would naively expect, going > full-screen (possibly via an HTML5-or-later API as discussed > here: http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2010-January/024872.html) > at "native" resolution would not incur a frame rate hit. I'm curious > if this has been explored, and/or if approaches have been discussed. > Anyone have any data or pointers to previous discussions? > At the moment I'm seeing the biggest issues of JS connected with its inability to manipulate arrays of non-JS types efficiently. At least for the applications that I'm the most interested in, once the VBOs and FBOs have been created there is little direct JS activity. I understand that there are plans afoot to propose new standards to address problems connected with array manipulation. > It's an issue my company runs into even with desktop GL; we have > extremely high poly user-generated content but typically run in a > window that users maximize, rather than running full-screen at reduced > resolution, which induces a significant frame rate hit on lower end > (CPU+GPU) hardware. > Do you know why your users do this? Alan ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Mon Jun 7 11:02:06 2010 From: kbr...@ (Kenneth Russell) Date: Mon, 7 Jun 2010 11:02:06 -0700 Subject: [Public WebGL] Buffer size and viewport In-Reply-To: <4C0D29F1.2000808@mechnicality.com> References: <4C0D0CCB.5050204@mechnicality.com> <4C0D29F1.2000808@mechnicality.com> Message-ID: On Mon, Jun 7, 2010 at 10:18 AM, Alan Chaney wrote: > Joshua Bell wrote: >> >> Apologies in advance for the pedantic reply. >> > No problem, it was a pedantic question! > >> On Mon, Jun 7, 2010 at 8:14 AM, Alan Chaney > > wrote: >> >> ? ?Desktop GL programming frequently requires that the user sets the >> ? ?window size as part of the game/application setup. This means that >> ? ?normally the viewport can be set to (0, 0, displaybufferwidth, >> ? ?displaybufferheight). However, in a WebGL application it is likely >> ? ?to be very common that the window size will change due to use input. >> >> ? ?The default with WebGL is to set the buffer size to that of the >> ? ?canvas element and the viewport to that as well. This means that >> ? ?if the window is resized to greater dimensions than the original >> ? ?canvas size the display buffer must be discarded and a new buffer >> ? ?initialized - this takes time. >> >> How frequently will this occur in practice, during a user session? Is it >> unacceptable for your application to pause briefly responding to the resize? >> (e.g. due to inducing latency issues, etc?) >> Are you optimizing prematurely, or do you have performance data - and if >> so, can you share it? > > I'm raising it because I encountered problems with a OpenGL player written > using JOGL. Similarly to WebGL, JOGL allocates a drawing buffer and by > default makes it a power of 2 size that encloses the specified viewport. > When the window was resized we were encountering problems in the application > stalling whilst the ?(GL) buffers were reloaded. When you dealing with a > desktop application it is easier to get around this by exploiting techniques > such as memory mapped files. I'm evaluating similar options by using the > various storage APIs that are being proposed. I assume you were using JOGL's GLJPanel. The issue you were likely running into with that component was that during some resizing operations the OpenGL context was destroyed and re-created, which generally forced the entire application to re-initialize. WebGL specifies that the OpenGL context is preserved during resizing operations, though it is necessary for the application to redraw. Resizing operations will therefore be much cheaper with WebGL so you will probably see no issue. Note also that the HTML Canvas element only resizes in response to explicit setting of its width and height properties in JavaScript. If you use a CSS style to set its width to for example "100%" then the result will be a stretched version of the fixed-size back buffer. -Ken > As far as the "premature optimization" goes, I'm actually at this point > considering options - this is a problem which I've hit before, so I think > its valid to do some early investigation. Ironically, in the example I gave > I had to do quite a lot of reorganization of the code after I discovered the > problem, which is part of the reason why I'm considering it in this case. > > As an aside, this blog (which is nothing to do with WebGL, its about MS > FSX/Dx10) has some interesting points on the strategy of ?"make it work, > then make it fast" in connection with drawing high vertex-count content - > ?http://blogs.technet.com/b/torgo3000/archive/2007/06/01/performance-art.aspx > . >> >> Note that the underlying canvas will have a fixed buffer size as well; it >> may be stretched to match the window size via CSS, but changing the buffer's >> pixel dimensions dynamically will require script. >> >> ? ?One option that I can see is to make the display buffer >> ? ?considerably bigger than the canvas element - perhaps by doing >> ? ?some calculation based upon my UI layout and the underlying screen >> ? ?size and setting this value when creating the context. >> ? ?Then as the canvas is resized ?I simply set the viewport size to >> ? ?match the size of the canvas element, until, of course, it exceeds >> ? ?the underlying buffer size. >> >> >> Multiple monitors will make this tricky - the answer given by the >> window.screen.width/height will change if I move my window, and is >> inadequate if I stretch my window to span two monitors. > > Agreed. As it happens, my focus at the moment is on "lower end" systems > where multiple monitors are rare (although I'm writing this to you on a dual > monitor machine...) > > >> >> ? ?Does anyone have any feel to the relationship between viewport >> ? ?size and buffer size and performance? In other words, if I >> ? ?allocate a larger buffer than I actually display in the view port, >> ? ?is this likely to cause a significant performance issue? >> >> >> Despite the skepticism, I'm interested in the answers as well. My gut >> would be that allocating extra space would not be worth it, but I'd love to >> hear from implementers if it would be a performance problem. >> > Yes, part of the problem with "just trying it out" is that I only have a > very limited set of browser/graphics card combinations at the moment, and I > was hoping that the implementors might have some insight into the more > general issues and likely performance trends (eg. mobile) I think WebGL has > the opportunity to play a significant part in mobile devices if/when mobile > WebGL compatible browsers become available. I was going to raise this issue > in a separate thread. I realize that the issue of resizing is not likely to > be so relevant on a mobile platforms. > >> On a ?related note: games often run full-screen at a resolution lower than >> the "native" resolution of the device/monitor (and/or what the OS is >> normally set to) to find the sweet spot between visual fidelity and frame >> rate. > > Agreed however I think that WebGL will be used for a lot more things than > traditional games and such apps. may well need/use resizeable canvas > elements. >> >> With WebGL, the JS engine will be a limiting factor for now at least - >> more so than the pixel pipeline. I would naively expect, going full-screen >> (possibly via an HTML5-or-later API as discussed here: >> http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2010-January/024872.html) >> at "native" resolution would not incur a frame rate hit. I'm curious if this >> has been explored, and/or if approaches have been discussed. Anyone have any >> data or pointers to previous discussions? >> > At the moment I'm seeing the biggest issues of JS connected with its > inability to manipulate arrays of non-JS types efficiently. At least for the > applications that I'm the most interested in, once the VBOs and FBOs have > been created there is little direct JS activity. I understand that there are > plans afoot to propose new standards to address problems connected with > array manipulation. > >> It's an issue my company runs into even with desktop GL; we have extremely >> high poly user-generated content but typically run in a window that users >> maximize, rather than running full-screen at reduced resolution, which >> induces a significant frame rate hit on lower end (CPU+GPU) hardware. >> > Do you know why your users do this? > > Alan > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ala...@ Mon Jun 7 12:45:34 2010 From: ala...@ (Alan Chaney) Date: Mon, 07 Jun 2010 12:45:34 -0700 Subject: [Public WebGL] Buffer size and viewport In-Reply-To: References: <4C0D0CCB.5050204@mechnicality.com> <4C0D29F1.2000808@mechnicality.com> Message-ID: <4C0D4C5E.60603@mechnicality.com> Hi Ken Kenneth Russell wrote: > > > I assume you were using JOGL's GLJPanel. I was. > The issue you were likely > running into with that component was that during some resizing > operations the OpenGL context was destroyed and re-created, which > generally forced the entire application to re-initialize. WebGL > specifies that the OpenGL context is preserved during resizing > operations, though it is necessary for the application to redraw. > I'm not arguing, but I assume you are referring to the content of Sec 2 Context Creation and Drawing Buffer Presentation? Maybe its my fault, but I didn't interpret that section as explicitly stating what you said above. In other words, please could you point me to the wording that says that the context is preserved through resizing? > Resizing operations will therefore be much cheaper with WebGL so you > will probably see no issue. > > Glad to hear it! This will be a good thing. > Note also that the HTML Canvas element only resizes in response to > explicit setting of its width and height properties in JavaScript. If > you use a CSS style to set its width to for example "100%" then the > result will be a stretched version of the fixed-size back buffer. > In the specification in sec 2.3 it says "A WebGL implementation /shall not/ affect the state of the OpenGL viewport in response to resizing of the canvas element." which can be read to mean the reverse of what you say above. I think it might help to be a bit more specific as to the difference between setting the canvas size explicitly and the resizing of the canvas by the browser as part of a "%" scaling operation. Once again, I'm not trying to disagree with what you are saying it does, only that I don't think the spec makes it very clear. I'm looking at "Working Draft 03 June 2010" from the Khronos web site. Sorry if I'm being a bit obtuse/picky. Regards Alan ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Mon Jun 7 13:22:53 2010 From: vla...@ (Vladimir Vukicevic) Date: Mon, 7 Jun 2010 13:22:53 -0700 (PDT) Subject: [Public WebGL] Buffer size and viewport In-Reply-To: <4C0D4C5E.60603@mechnicality.com> Message-ID: <1536003357.377455.1275942173849.JavaMail.root@cm-mail03.mozilla.org> ----- "Alan Chaney" wrote: > Hi Ken > > Kenneth Russell wrote: > > > > > > I assume you were using JOGL's GLJPanel. I was. > > > The issue you were likely > > running into with that component was that during some resizing > > operations the OpenGL context was destroyed and re-created, which > > generally forced the entire application to re-initialize. WebGL > > specifies that the OpenGL context is preserved during resizing > > operations, though it is necessary for the application to redraw. > > > I'm not arguing, but I assume you are referring to the content of Sec 2 > Context Creation and Drawing Buffer Presentation? > Maybe its my fault, but I didn't interpret that section as explicitly > stating what you said above. In other words, please could you point me > to the wording that says that the context is preserved through > resizing? Note that this is currently not implemented in Firefox, as we create a new pbuffer context for each resize thus losing all previous GL resources. We correctly track that now and throw errors when you try to use them in the future -- this is why things like MeShade don't work. We'll likely switch soon to using FBOs for backbuffer rendering -- the main reason I'm not a fan of that is that it requires you to share resources with a parent GL context that's rendering to the screen, which means you don't get nice cleanup semantics by just destroying a GL context. You have to make sure you delete all objects; I also worry about fragmentation issues with very long lived GL contexts (such as the parent context), with many contexts being created/destroyed that share resources with it... but we'll cross that bridge when we get there. - Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Mon Jun 7 13:41:17 2010 From: kbr...@ (Kenneth Russell) Date: Mon, 7 Jun 2010 13:41:17 -0700 Subject: [Public WebGL] Buffer size and viewport In-Reply-To: <1536003357.377455.1275942173849.JavaMail.root@cm-mail03.mozilla.org> References: <4C0D4C5E.60603@mechnicality.com> <1536003357.377455.1275942173849.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Mon, Jun 7, 2010 at 1:22 PM, Vladimir Vukicevic wrote: > > ----- "Alan Chaney" wrote: > >> Hi Ken >> >> Kenneth Russell wrote: >> > >> > >> > I assume you were using JOGL's GLJPanel. I was. >> >> > The issue you were likely >> > running into with that component was that during some resizing >> > operations the OpenGL context was destroyed and re-created, which >> > generally forced the entire application to re-initialize. WebGL >> > specifies that the OpenGL context is preserved during resizing >> > operations, though it is necessary for the application to redraw. >> > >> I'm not arguing, but I assume you are referring to the content of Sec 2 >> Context Creation and Drawing Buffer Presentation? >> Maybe its my fault, but I didn't interpret that section as explicitly >> stating what you said above. In other words, please could you point me >> to the wording that says that the context is preserved through >> resizing? > > Note that this is currently not implemented in Firefox, as we create a new pbuffer context for each resize thus losing all previous GL resources. ?We correctly track that now and throw errors when you try to use them in the future -- this is why things like MeShade don't work. ?We'll likely switch soon to using FBOs for backbuffer rendering -- the main reason I'm not a fan of that is that it requires you to share resources with a parent GL context that's rendering to the screen, which means you don't get nice cleanup semantics by just destroying a GL context. ?You have to make sure you delete all objects; I also worry about fragmentation issues with very long lived GL contexts (such as the parent context), with many contexts being created/destroyed that share resources with it... but we'll cross that bridge when we get there. Would it be technically possible in your current implementation to destroy the old pbuffer and create a new one, but continue to use the same GL context? What do you think about strengthening the wording in the spec around this? Implementations that are incapable of handling a resize operation gracefully could post a WebGLContextLost / WebGLContextRestored event pair. We could state that implementations that are incapable of handling a resize operation gracefully could post a WebGLContextLost / WebGLContextRestored event pair(*), but that it is strongly encouraged not to resort to this mechanism, and to preserve the context's state during resizing. -Ken (*) WebGLContextRestored still needing to be specified of course. > ? ?- Vlad > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ala...@ Mon Jun 7 13:41:44 2010 From: ala...@ (Alan Chaney) Date: Mon, 07 Jun 2010 13:41:44 -0700 Subject: [Public WebGL] Buffer size and viewport In-Reply-To: <1536003357.377455.1275942173849.JavaMail.root@cm-mail03.mozilla.org> References: <1536003357.377455.1275942173849.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <4C0D5988.6000007@mechnicality.com> Vlad Vladimir Vukicevic wrote: > > > Note that this is currently not implemented in Firefox, as we create a new pbuffer context for each resize thus losing all previous GL resources. Not being awkward, and for clarification, you are saying that you think the spec *does* require you to not destroy the context but currently Firefox does. My comment to Ken was that I couldn't see where the spec. explicitly requires that on resize the context is NOT lost. A;am > We correctly track that now and throw errors when you try to use them in the future -- this is why things like MeShade don't work. We'll likely switch soon to using FBOs for backbuffer rendering -- the main reason I'm not a fan of that is that it requires you to share resources with a parent GL context that's rendering to the screen, which means you don't get nice cleanup semantics by just destroying a GL context. You have to make sure you delete all objects; I also worry about fragmentation issues with very long lived GL contexts (such as the parent context), with many contexts being created/destroyed that share resources with it... but we'll cross that bridge when we get there. > > - Vlad > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Mon Jun 7 14:28:19 2010 From: vla...@ (Vladimir Vukicevic) Date: Mon, 7 Jun 2010 14:28:19 -0700 (PDT) Subject: [Public WebGL] Buffer size and viewport In-Reply-To: <423600899.379325.1275945953200.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <1942865683.379378.1275946099776.JavaMail.root@cm-mail03.mozilla.org> ----- "Kenneth Russell" wrote: > Would it be technically possible in your current implementation to > destroy the old pbuffer and create a new one, but continue to use the > same GL context? Hmm -- maybe! I misread the wglMakeCurrent docs, so this should be possible. I'll give it a shot. > > What do you think about strengthening the wording in the spec around > this? Implementations that are incapable of handling a resize > operation gracefully could post a WebGLContextLost / > WebGLContextRestored event pair. We could state that implementations > that are incapable of handling a resize operation gracefully could > post a WebGLContextLost / WebGLContextRestored event pair(*), but that > it is strongly encouraged not to resort to this mechanism, and to > preserve the context's state during resizing. That sounds like a good idea; given that we already have the concept of a lost context, seems pretty natural to put it in here. I agree that we should encourage implementations to take steps to avoid having to use it in this case, though. - Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Mon Jun 7 15:27:54 2010 From: ste...@ (Steve Baker) Date: Mon, 07 Jun 2010 17:27:54 -0500 Subject: [Public WebGL] Buffer size and viewport In-Reply-To: <4C0D0CCB.5050204@mechnicality.com> References: <4C0D0CCB.5050204@mechnicality.com> Message-ID: <4C0D726A.2020303@sjbaker.org> Alan Chaney wrote: > Does anyone have any feel to the relationship between viewport size > and buffer size and performance? In other words, if I allocate a > larger buffer than I actually display in the view port, is this likely > to cause a significant performance issue? It's not a speed issue (generally) but rather a graphics memory buffer consumption problem. With front+back+Z buffers - the amount of video memory consumed can get big. It's worse if you need additional shadow/ambient-occlusion/motion-blur/blur/HDR buffers since these are generally sized to match the main buffer size in order that they can all share the same default Z buffer. This is especially problematic for mobile devices and older laptops where video memory is already at a premium...and recalling our earlier discussion that suggested that compressed textures might be impossible in 1.0. Worse still, if HTML compositing is being done in OpenGL - then that too will consume Graphics memory. IMHO, when the user resizes the window, he's taken himself out of interactivity and a small fraction of a second of delay to resize the buffer isn't so terrible. So I think the buffer should automatically resize - and if it takes a little time, it's well worth the video memory savings. There is an argument for only resizing the buffer when it gets larger than the largest size it has been for this application - and for keeping it the same size when it shrinks...after all, if you had enough texture memory at the original size - then you still have enough when it shrinks. It would be nice if efforts to increase the canvas size failed and left the buffer the same size when no more video memory is available since that would be an elegant fallback in most cases. -- Steve ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From jos...@ Mon Jun 7 17:15:30 2010 From: jos...@ (Joshua Bell) Date: Mon, 7 Jun 2010 17:15:30 -0700 Subject: [Public WebGL] Re: Typed Array spec nitpicks In-Reply-To: References: Message-ID: On Mon, Apr 12, 2010 at 12:02 PM, Joshua Bell wrote: > For experimental purposes unrelated to WebGL, I implemented > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/doc/spec/TypedArray-spec.html as > closely as I could in ECMAScript 3, and observed a few issues: > FYI, this "close enough" implementation of the Typed Array draft in ES3 is now live as part of this collection of LLSD[1] implementations: http://hg.secondlife.com/llsd Specifically, the tip is at: http://hg.secondlife.com/llsd/src/tip/js/typedarray.js I implemented this since I was doing binary resource serialization and wanted a decent abstraction; Typed Arrays fit the bill nicely. I admit I went a little overboard, since I'm only actually packing/unpacking float64, int32 and uint32 values. It implements the May 17th draft with the addition of get/set as aliases for indexers (although there's an ES5 approach that makes indexers work, albeit inefficiently), and it includes the Float/Double -> Float32/Float64 rename that's been discussed. This is probably not of any utility to folks on this list, but I figured it couldn't hurt to share it. There are also some unit tests buried one folder down. [1] http://tools.ietf.org/html/draft-hamrick-vwrap-type-system-00 -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Mon Jun 7 18:52:27 2010 From: kbr...@ (Kenneth Russell) Date: Mon, 7 Jun 2010 18:52:27 -0700 Subject: [Public WebGL] Purpose of WebGLObjectArray ? In-Reply-To: References: <0E19381D-2478-4570-91EE-C5706883CBC9@apple.com> Message-ID: On Fri, Jun 4, 2010 at 9:07 AM, Cedric Vivier wrote: > On Fri, Jun 4, 2010 at 22:10, Chris Marrin wrote: >> The semantics needs to be that a new array is created, filled and >> returned. > > Yes that's what I assumed too, afaik this semantic is the one defined by T[ > ] not sequence. > > >> Seems like whether this is literally returned by value or by a reference >> to a newly created object is a detail of the language binding. > > According to the WebIDL spec sequence is always passed by value, > difference might be subtle in Javascript but it certainly can have negative > consequences (at least in other languages) because it involves unnecessary > copying. > In performance-sensitive methods like uniform*() it could be quite expensive > to do so, e.g uniformMatrix4fv would copy 16 floats on the stack instead of > just passing reference. > Performance aside, sequence is not allowed to be null (since by-value) > whereas T[] is allowed to be null (since by-reference [1]), in WebGL we need > the value returned to possibly be null when a GL error has been generated > (or context is lost). > > [1] : ?"The T[] type is a parameterized type whose values are (possibly > zero-length) arrays of values of type T or the special value null." Agreed, WebGLShader[] should be used here. The next (large) round of specification changes will remove WebGLObjectArray and return WebGLShader[] from getAttachedShaders. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cal...@ Mon Jun 7 19:20:07 2010 From: cal...@ (Mark Callow) Date: Tue, 08 Jun 2010 11:20:07 +0900 Subject: [Public WebGL] Buffer size and viewport In-Reply-To: <1536003357.377455.1275942173849.JavaMail.root@cm-mail03.mozilla.org> References: <1536003357.377455.1275942173849.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <4C0DA8D7.4010409@hicorp.co.jp> While it is unfortunately necessary to create a minimum 1x1 native surface in order to use FBO's, you only need one context. This is true at least with EGL/OpenGL ES. I'm not very familiar with WGL. Regards -Mark > Note that this is currently not implemented in Firefox, as we create a new pbuffer context for each resize thus losing all previous GL resources. We correctly track that now and throw errors when you try to use them in the future -- this is why things like MeShade don't work. We'll likely switch soon to using FBOs for backbuffer rendering -- the main reason I'm not a fan of that is that it requires you to share resources with a parent GL context that's rendering to the screen, which means you don't get nice cleanup semantics by just destroying a GL context. You have to make sure you delete all objects; I also worry about fragmentation issues with very long lived GL contexts (such as the parent context), with many contexts being created/destroyed that share resources with it... but we'll cross that bridge when we get there. > > - Vlad > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 412 bytes Desc: not available URL: From cal...@ Mon Jun 7 19:24:29 2010 From: cal...@ (Mark Callow) Date: Tue, 08 Jun 2010 11:24:29 +0900 Subject: [Public WebGL] Buffer size and viewport In-Reply-To: <1942865683.379378.1275946099776.JavaMail.root@cm-mail03.mozilla.org> References: <1942865683.379378.1275946099776.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <4C0DA9DD.2050009@hicorp.co.jp> > ----- "Kenneth Russell" wrote: > > >> Would it be technically possible in your current implementation to >> destroy the old pbuffer and create a new one, but continue to use the >> same GL context? >> > Hmm -- maybe! I misread the wglMakeCurrent docs, so this should be possible. I'll give it a shot. It is certainly possible with EGL. Regards -Mark -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 412 bytes Desc: not available URL: From cma...@ Tue Jun 8 06:35:24 2010 From: cma...@ (Chris Marrin) Date: Tue, 08 Jun 2010 06:35:24 -0700 Subject: [Public WebGL] Buffer size and viewport In-Reply-To: <4C0D0CCB.5050204@mechnicality.com> References: <4C0D0CCB.5050204@mechnicality.com> Message-ID: <21B55021-2757-4670-A0A0-8C41FB4CF9C7@apple.com> On Jun 7, 2010, at 8:14 AM, Alan Chaney wrote: > Hi > > Desktop GL programming frequently requires that the user sets the window size as part of the game/application setup. This means that normally the viewport can be set to (0, 0, displaybufferwidth, displaybufferheight). However, in a WebGL application it is likely to be very common that the window size will change due to use input. > > The default with WebGL is to set the buffer size to that of the canvas element and the viewport to that as well. This means that if the window is resized to greater dimensions than the original canvas size the display buffer must be discarded and a new buffer initialized - this takes time. One option that I can see is to make the display buffer considerably bigger than the canvas element - perhaps by doing some calculation based upon my UI layout and the underlying screen size and setting this value when creating the context. > Then as the canvas is resized I simply set the viewport size to match the size of the canvas element, until, of course, it exceeds the underlying buffer size. > > Does anyone have any feel to the relationship between viewport size and buffer size and performance? In other words, if I allocate a larger buffer than I actually display in the view port, is this likely to cause a significant performance issue? I'm not sure what the issue is here. The width and height of the Canvas are attributes of the Canvas element. As such there is no "automatic" way to change them. For instance, if you wanted to change the Canvas size when the user resized the window, you'd have to listen for Window size changes and then use JS to set the size attributes on the Canvas. So you have complete control over when and if you change the size and force the expensive reinitialization. I think there will be many games which use fixed size Canvases so there will not be an issue. You could also delay the Canvas resize to let the use settle on a new window size before changing the Canvas size. Incidentally, you can change the apparent Canvas size by setting it's CSS width and height properties. This will scale the Canvas to fit that width and height and will not reinitialize the Canvas. You could use this scaling while the user is resizing the Window and then resize the Canvas itself a few seconds after the resizing is done. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From zhe...@ Tue Jun 8 10:12:13 2010 From: zhe...@ (Mo, Zhenyao) Date: Tue, 8 Jun 2010 10:12:13 -0700 Subject: [Public WebGL] Re: Framebuffer attachment point In-Reply-To: References: Message-ID: No response? I recommend we post the same restriction in framebufferTexImage2D as in framebufferRenderbuffer, i.e., if the texture internal format does not match the attachment point, an INVALID_OPERATION error will be generated. Since GLES2 only allows color texture, the only valid attachment point for framebufferTexImage2D will be COLOR_ATTACHMENT0. The benefits of doing this is, (1) framebufferTexImage2D and framebufferRenderbuffer will be under the same constraint rule; (2) implementation wise, it is a much clearer solution. If there is a reason we shouldn't do this, please respond. - Mo On Fri, Jun 4, 2010 at 12:19 PM, Mo, Zhenyao wrote: > In WebGL spec section 6.2, when internal format of a renderbuffer does > not match the attachment point, an INVALID_OPERATION error is > generated. > > In the same spirit, shouldn't we generate the same INVALID_OPERATION > for framebufferTexImage2D when the attachment is STENCIL or DEPTH? ?In > GLES, we don't have a way to generate non color textures. > > Also, is it really necessary that we check internal format of > renderbuffer/texture when attaching it to a framebuffer? ?This > dictates we has to call texImage2D or renderbufferStorage before > attaching. ? Shouldn't we just follow GLES and allow the mismatch? ?So > we can attach and later change the internalformat of > renderbuffer/texture and make the framebuffer complete. > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ced...@ Tue Jun 8 12:05:03 2010 From: ced...@ (Cedric Vivier) Date: Wed, 9 Jun 2010 03:05:03 +0800 Subject: [Public WebGL] WebGLExtension interface proposal Message-ID: Hi, We specify extensions as objects with more or less the same behavior as other WebGL objects mapped to GL resources, ie. an extensions object is only valid for use by the WebGLRenderingContext who created it and they should be re-created after context is lost ; as such I believe it would make sense to declare a WebGLExtension interface from which all WebGL extensions must derive. It could simplify implementations (can use the same base WebGLObject implementation for validation/tracking, extensions could also more easily use same base object for internal housekeeping), but more importantly it makes API friendlier to more strongly-typed languages (versus 'object'), avoids potential 'empty object extension' which might be confusing, and allows us to transparently add new 'utility' members on all extensions if we need to do so in a future revision. In other words, I propose following spec addition : 5.XX WebGLExtension The WebGLExtension interface represents an extension object. All WebGL extensions derive from this interface. The object is returned by calling getExtension with a supported extension string (see 5.16.14 Detecting and enabling extensions). interface WebGLExtension : WebGLObject { readonly attribute DOMString name; } And : WebGLExtension getExtension(DOMString name); instead of : object getExtension(DOMString name); Finally, for consistency with other WebGLObjects I propose introduction of an isExtension function to check if an extension object is valid (ie. not obtained from another context and not acquired before context loss) : GLboolean isExtension(WebGLExtension extension); Possibly we might as well simplify API and merge all is* functions into only one, e.g a probably better named "GLboolean isValid(WebGLObject object)". (all current is* functions can trivially be wrapped to use it in Javascript so there is no feature removal and/or divergence imho) Thoughts? Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Tue Jun 8 12:24:08 2010 From: kbr...@ (Kenneth Russell) Date: Tue, 8 Jun 2010 12:24:08 -0700 Subject: [Public WebGL] WebGL specification updates Message-ID: All, A large number of updates to the WebGL and TypedArray specifications have been checked in. Here is the commit message indicating what was changed. Please review and send your feedback to the list. We are starting now to implement these changes in WebKit's WebGL code; I gather that Mozilla is doing the same. Hopefully within a couple of weeks the implementations will have converged on the new specification. -Ken Incorporated specification changes from WebGL face-to-face meeting on May 25-26, 2010, and from discussion on the public mailing list: - Added section on Premultiplied Alpha, Canvas APIs and texImage2D, and referred to it from the specification of the premultipliedAlpha flag in the WebGLContextAttributes. - Added section on Enabled Vertex Attributes and Range Checking, and referred to it from the specifications of enableVertexAttribArray, drawArrays and drawElements, as well as from the Resource Restrictions specification. Deleted now-obsolete section on ArrayBuffer and Uninitialized Data. - Added differences section on extension queries. Removed EXTENSIONS enum from WebGL spec. - Added WebGL-specific UNPACK_FLIP_Y_WEBGL, UNPACK_PREMULTIPLY_ALPHA_WEBGL and CONTEXT_LOST_WEBGL enums. - Added Pixel Storage Parameters section documenting UNPACK_FLIP_Y_WEBGL and UNPACK_PREMULTIPLY_ALPHA_WEBGL parameters, and referred to it from pixelStorei, texImage2D and texSubImage2D specifications. - Removed getString() and added its legal arguments to the documentation of getParameter(). Added UNPACK_FLIP_Y_WEBGL and UNPACK_PREMULTIPLY_ALPHA_WEBGL to table of return values from getParameter. - Documented return values for getParameter taking RENDERER, SHADING_LANGUAGE_VERSION, VENDOR and VERSION. - Added internalformat, format and type arguments to texImage2D variants taking ImageData, HTMLImageElement, HTMLCanvasElement and HTMLVideoElement. Removed flipY and asPremultipliedAlpha arguments. Added documentation for new arguments and semantics. - Added format and type arguments to texSubImage2D variants taking ImageData, HTMLImageElement, HTMLCanvasElement and HTMLVideoElement. Removed flipY and asPremultipliedAlpha arguments. Referred to texImage2D documentation. - Updated readPixels() specification to take ArrayBufferView as argument rather than returning it. Specified behavior for out-of-range pixels and mismatches between type and pixels arguments, the latter for texImage2D as well. - Expanded specification of WebGLContextLostEvent. Added specification of WebGLContextRestoredEvent. Removed resetContext(). Added example from Gregg Tavares using these events to reset an application. - Removed WebGLObjectArray. Used WebGLShader[] as return type of getAttachedShaders(). - Added note about negative "first" argument to drawElements generating INVALID_VALUE based on feedback from Vladimir Vukicevic. - Synchronized webgl.idl and WebGL specification. - Renamed references to FloatArray to Float32Array. - Minor grammatical cleanups. In TypedArray specification: - Renamed FloatArray to Float32Array and DoubleArray to Float64Array. - Renamed getFloat/setFloat to getFloat32/setFloat32 and getDouble/setDouble to getFloat64/setFloat64 on DataView. - Synchronized typedarrays.idl. ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Tue Jun 8 14:33:27 2010 From: kbr...@ (Kenneth Russell) Date: Tue, 8 Jun 2010 14:33:27 -0700 Subject: [Public WebGL] a proposal to sort error codes discussions In-Reply-To: <1283621904.362122.1275754085558.JavaMail.root@cm-mail03.mozilla.org> References: <39819985.351319.1275663430006.JavaMail.root@cm-mail03.mozilla.org> <1283621904.362122.1275754085558.JavaMail.root@cm-mail03.mozilla.org> Message-ID: This sounds reasonable, but I'm concerned that we will need to completely replicate OpenGL's error checking code in WebGL implementations, where currently we are able to delegate at least some of the error checking to OpenGL. To the best of my knowledge the OpenGL spec doesn't specify which error is produced when there are two things wrong with the incoming arguments that would result in different errors. I think our conformance tests would be good enough if each negative test tested only one error produced from a given function call. Verifying all combinations of incorrect inputs or states for every function call will result in a combinatorial explosion. -Ken On Sat, Jun 5, 2010 at 9:08 AM, Benoit Jacob wrote: > No replies yet: either I said something stupid (likely!) or I should give some more context: > > In the recent thread "Error codes in drawArrays / drawElements" we discussed a conformance test that currently requires a INVALID_OPERATION error to be raised, and it appeared in the discussion that it should rather require INVALID_VALUE. This shows that in situations where both errors could legitimately be raised, it wasn't clear enough which one to prefer raising. My post here can be summarized by saying: in case of ambiguity, prefer raising INVALID_VALUE over raising INVALID_OPERATION. > > Cheers, > Benoit > > ----- "Benoit Jacob" wrote: > >> Hi, >> >> In order to write conformance tests checking error codes, we must >> agree precisely on what error codes to produce in every circumstance. >> Here's a proposal to sort this once and for all. >> >> In every WebGL function, let's use the following logic: >> ? ? 1. first check for INVALID_ENUM >> ? ? 2. then check for INVALID_VALUE. Only raise that error if some >> parameter value is *absolutely* wrong in itself, regardless of other >> parameters, and regardless of the state. >> ? ? 3. finally, check for INVALID_OPERATION. >> >> Adopting such a clear hierarchy between these 3 error codes, will >> allow to sort out all the ambiguous situations where more than one of >> these errors could legitimately be produced. >> >> Here's a quick rationalization. INVALID_ENUM and INVALID_VALUE mean >> that a parameter value is absolutely, intrinsically wrong in itself, >> so they are the most directly useful errors, so they are prioritized >> over INVALID_OPERATION, which means that parameter values are >> relatively wrong (relatively to each other or to the state). Between >> INVALID_ENUM and INVALID_VALUE, let's prioritize INVALID_ENUM, since >> the GLenum parameters are typically passed compile-time constants. >> >> Cheers, >> Benoit >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Tue Jun 8 14:58:08 2010 From: kbr...@ (Kenneth Russell) Date: Tue, 8 Jun 2010 14:58:08 -0700 Subject: [Public WebGL] Re: Framebuffer attachment point In-Reply-To: References: Message-ID: I think this would be a good idea, because as you point out it would make the behavior more similar to that of framebufferRenderbuffer, and allow errors to be caught early rather than later returning FRAMEBUFFER_INCOMPLETE_ATTACHMENT from checkFramebufferStatus (cf. OpenGL ES 2.0 spec, section 4.4, page 115). -Ken On Tue, Jun 8, 2010 at 10:12 AM, Mo, Zhenyao wrote: > No response? > > I recommend we post the same restriction in framebufferTexImage2D as > in framebufferRenderbuffer, i.e., if the texture internal format does > not match the attachment point, an INVALID_OPERATION error will be > generated. ?Since GLES2 only allows color texture, the only valid > attachment point for framebufferTexImage2D will be COLOR_ATTACHMENT0. > > The benefits of doing this is, (1) framebufferTexImage2D and > framebufferRenderbuffer will be under the same constraint rule; (2) > implementation wise, it is a much clearer solution. > > If there is a reason we shouldn't do this, please respond. > > - Mo > > On Fri, Jun 4, 2010 at 12:19 PM, Mo, Zhenyao wrote: >> In WebGL spec section 6.2, when internal format of a renderbuffer does >> not match the attachment point, an INVALID_OPERATION error is >> generated. >> >> In the same spirit, shouldn't we generate the same INVALID_OPERATION >> for framebufferTexImage2D when the attachment is STENCIL or DEPTH? ?In >> GLES, we don't have a way to generate non color textures. >> >> Also, is it really necessary that we check internal format of >> renderbuffer/texture when attaching it to a framebuffer? ?This >> dictates we has to call texImage2D or renderbufferStorage before >> attaching. ? Shouldn't we just follow GLES and allow the mismatch? ?So >> we can attach and later change the internalformat of >> renderbuffer/texture and make the framebuffer complete. >> > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From bja...@ Tue Jun 8 14:58:18 2010 From: bja...@ (Benoit Jacob) Date: Tue, 8 Jun 2010 14:58:18 -0700 (PDT) Subject: [Public WebGL] a proposal to sort error codes discussions In-Reply-To: Message-ID: <1265210575.390781.1276034298081.JavaMail.root@cm-mail03.mozilla.org> I don't have a problem with that, I think I agree with you but am a bit too new here to make that kind of decision; but if we go this path, the existing conformance tests must be patched to avoid requiring one particular error code in situations where more than one are legitimate. I'm waiting for a green light here to start doing so. Benoit ----- "Kenneth Russell" wrote: > This sounds reasonable, but I'm concerned that we will need to > completely replicate OpenGL's error checking code in WebGL > implementations, where currently we are able to delegate at least > some > of the error checking to OpenGL. To the best of my knowledge the > OpenGL spec doesn't specify which error is produced when there are > two > things wrong with the incoming arguments that would result in > different errors. I think our conformance tests would be good enough > if each negative test tested only one error produced from a given > function call. Verifying all combinations of incorrect inputs or > states for every function call will result in a combinatorial > explosion. > > -Ken > > On Sat, Jun 5, 2010 at 9:08 AM, Benoit Jacob > wrote: > > No replies yet: either I said something stupid (likely!) or I should > give some more context: > > > > In the recent thread "Error codes in drawArrays / drawElements" we > discussed a conformance test that currently requires a > INVALID_OPERATION error to be raised, and it appeared in the > discussion that it should rather require INVALID_VALUE. This shows > that in situations where both errors could legitimately be raised, it > wasn't clear enough which one to prefer raising. My post here can be > summarized by saying: in case of ambiguity, prefer raising > INVALID_VALUE over raising INVALID_OPERATION. > > > > Cheers, > > Benoit > > > > ----- "Benoit Jacob" wrote: > > > >> Hi, > >> > >> In order to write conformance tests checking error codes, we must > >> agree precisely on what error codes to produce in every > circumstance. > >> Here's a proposal to sort this once and for all. > >> > >> In every WebGL function, let's use the following logic: > >> ? ? 1. first check for INVALID_ENUM > >> ? ? 2. then check for INVALID_VALUE. Only raise that error if some > >> parameter value is *absolutely* wrong in itself, regardless of > other > >> parameters, and regardless of the state. > >> ? ? 3. finally, check for INVALID_OPERATION. > >> > >> Adopting such a clear hierarchy between these 3 error codes, will > >> allow to sort out all the ambiguous situations where more than one > of > >> these errors could legitimately be produced. > >> > >> Here's a quick rationalization. INVALID_ENUM and INVALID_VALUE > mean > >> that a parameter value is absolutely, intrinsically wrong in > itself, > >> so they are the most directly useful errors, so they are > prioritized > >> over INVALID_OPERATION, which means that parameter values are > >> relatively wrong (relatively to each other or to the state). > Between > >> INVALID_ENUM and INVALID_VALUE, let's prioritize INVALID_ENUM, > since > >> the GLenum parameters are typically passed compile-time constants. > >> > >> Cheers, > >> Benoit > >> ----------------------------------------------------------- > >> You are currently subscribed to public_webgl...@ > >> To unsubscribe, send an email to majordomo...@ with > >> the following command in the body of your email: > > ----------------------------------------------------------- > > You are currently subscribed to public_webgl...@ > > To unsubscribe, send an email to majordomo...@ with > > the following command in the body of your email: > > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Tue Jun 8 14:58:55 2010 From: vla...@ (Vladimir Vukicevic) Date: Tue, 8 Jun 2010 14:58:55 -0700 (PDT) Subject: [Public WebGL] a proposal to sort error codes discussions In-Reply-To: Message-ID: <646227574.390791.1276034335348.JavaMail.root@cm-mail03.mozilla.org> That's a good point... I think that we do need tests that specify multiple incorrect arguments, and they should just check that /some/ error is produced (or perhaps they can check for "INVALID_VALUE or INVALID_ENUM", depending no what they're checking). So maybe we just write tests in that form and not mandate a specific error reporting order. - Vlad ----- "Kenneth Russell" wrote: > This sounds reasonable, but I'm concerned that we will need to > completely replicate OpenGL's error checking code in WebGL > implementations, where currently we are able to delegate at least some > of the error checking to OpenGL. To the best of my knowledge the > OpenGL spec doesn't specify which error is produced when there are two > things wrong with the incoming arguments that would result in > different errors. I think our conformance tests would be good enough > if each negative test tested only one error produced from a given > function call. Verifying all combinations of incorrect inputs or > states for every function call will result in a combinatorial > explosion. > > -Ken > > On Sat, Jun 5, 2010 at 9:08 AM, Benoit Jacob > wrote: > > No replies yet: either I said something stupid (likely!) or I should > give some more context: > > > > In the recent thread "Error codes in drawArrays / drawElements" we > discussed a conformance test that currently requires a > INVALID_OPERATION error to be raised, and it appeared in the > discussion that it should rather require INVALID_VALUE. This shows > that in situations where both errors could legitimately be raised, it > wasn't clear enough which one to prefer raising. My post here can be > summarized by saying: in case of ambiguity, prefer raising > INVALID_VALUE over raising INVALID_OPERATION. > > > > Cheers, > > Benoit > > > > ----- "Benoit Jacob" wrote: > > > >> Hi, > >> > >> In order to write conformance tests checking error codes, we must > >> agree precisely on what error codes to produce in every > circumstance. > >> Here's a proposal to sort this once and for all. > >> > >> In every WebGL function, let's use the following logic: > >> ? ? 1. first check for INVALID_ENUM > >> ? ? 2. then check for INVALID_VALUE. Only raise that error if some > >> parameter value is *absolutely* wrong in itself, regardless of > other > >> parameters, and regardless of the state. > >> ? ? 3. finally, check for INVALID_OPERATION. > >> > >> Adopting such a clear hierarchy between these 3 error codes, will > >> allow to sort out all the ambiguous situations where more than one > of > >> these errors could legitimately be produced. > >> > >> Here's a quick rationalization. INVALID_ENUM and INVALID_VALUE > mean > >> that a parameter value is absolutely, intrinsically wrong in > itself, > >> so they are the most directly useful errors, so they are > prioritized > >> over INVALID_OPERATION, which means that parameter values are > >> relatively wrong (relatively to each other or to the state). > Between > >> INVALID_ENUM and INVALID_VALUE, let's prioritize INVALID_ENUM, > since > >> the GLenum parameters are typically passed compile-time constants. > >> > >> Cheers, > >> Benoit > >> ----------------------------------------------------------- > >> You are currently subscribed to public_webgl...@ > >> To unsubscribe, send an email to majordomo...@ with > >> the following command in the body of your email: > > ----------------------------------------------------------- > > You are currently subscribed to public_webgl...@ > > To unsubscribe, send an email to majordomo...@ with > > the following command in the body of your email: > > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ced...@ Wed Jun 9 10:27:15 2010 From: ced...@ (Cedric Vivier) Date: Thu, 10 Jun 2010 01:27:15 +0800 Subject: [Public WebGL] TypedArray constructors and zero length Message-ID: Hi, Currently TypedArrays spec does not define behavior when zero or a negative value is passed as length of a constructor. I guess we should specify the behavior, I propose following addition to the spec of all constructors with a 'length' argument : """ If length is zero or negative, an INDEX_SIZE_ERR exception is raised. """ Negative might be unnecessary though since type is "unsigned long", does this mean a negative value should raise a TYPE_ERR if I understand WebIDL correctly ? Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From vla...@ Wed Jun 9 10:42:07 2010 From: vla...@ (Vladimir Vukicevic) Date: Wed, 9 Jun 2010 10:42:07 -0700 (PDT) Subject: [Public WebGL] TypedArray constructors and zero length In-Reply-To: Message-ID: <630890380.397618.1276105327243.JavaMail.root@cm-mail03.mozilla.org> ----- "Cedric Vivier" wrote: > Hi, > > > Currently TypedArrays spec does not define behavior when zero or a > negative value is passed as length of a constructor. > I guess we should specify the behavior, I propose following addition > to the spec of all constructors with a 'length' argument : > > """ > If length is zero or negative, an INDEX_SIZE_ERR exception is raised. > """ > > Negative might be unnecessary though since type is "unsigned long", > does this mean a negative value should raise a TYPE_ERR if I > understand WebIDL correctly ? 0 is a valid length; you should be able to create 0-length arrays as necessary, otherwise there's no way to represent a valid empty array. You'd have to pass null, which means special casing everything that works with arrays to understand null. And as the length is already unsigned, then it can't ever be negative, so I don't think we need to do any special language here. - Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Wed Jun 9 11:21:34 2010 From: vla...@ (Vladimir Vukicevic) Date: Wed, 9 Jun 2010 11:21:34 -0700 (PDT) Subject: [Public WebGL] vertex attrib 0 Message-ID: <1686451902.398270.1276107694326.JavaMail.root@cm-mail03.mozilla.org> A while ago we had a discussion about making vertex attrib index 0 special, and requiring that it be enabled with an array bound, but I don't think we ever documented that anywhere. I think we have to add this -- the gl-vertexattrib.html test in the test suite fails with attrib index 0 on a wide variety of cards/desktop GLSL drivers, but works fine with index 1. - Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ced...@ Wed Jun 9 11:24:59 2010 From: ced...@ (Cedric Vivier) Date: Thu, 10 Jun 2010 02:24:59 +0800 Subject: [Public WebGL] TypedArray constructors and zero length In-Reply-To: <630890380.397618.1276105327243.JavaMail.root@cm-mail03.mozilla.org> References: <630890380.397618.1276105327243.JavaMail.root@cm-mail03.mozilla.org> Message-ID: Hey Vlad, On Thu, Jun 10, 2010 at 01:42, Vladimir Vukicevic wrote: > 0 is a valid length; you should be able to create 0-length arrays as > necessary, otherwise there's no way to represent a valid empty array. You'd > have to pass null, which means special casing everything that works with > arrays to understand null. > Zero-length arrays might make sense but zero-length buffers ? I see ArrayBuffer as a way to efficiently allocate packed (buffer) memory from Javascript, hence my question about such behavior (ie. malloc(0) or equivalent on some GCs will fail). Not sure to get what you imply by "special casing everything that works with arrays" ? TypedArray are interface types so null is already a potential 'legal' value to take care of afaik. In my understanding supporting 0-length arrays requires additional special casing on top of that, for avoiding divide by zero and internal memory housekeeping. > And as the length is already unsigned, then it can't ever be negative, so I > don't think we need to do any special language here. > I see, so what should happen in this case ? TYPE_ERR ? Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From vla...@ Wed Jun 9 11:29:33 2010 From: vla...@ (Vladimir Vukicevic) Date: Wed, 9 Jun 2010 11:29:33 -0700 (PDT) Subject: [Public WebGL] TypedArray constructors and zero length In-Reply-To: Message-ID: <116006142.398438.1276108173757.JavaMail.root@cm-mail03.mozilla.org> A zero-length array still has a .buffer property, which has to be non-null -- thus it has to be of zero size if the array was created with 'new Int32Array(0);'. The behaviour of malloc(0) isn't really relevant, though, as the underlying implementation can just have its raw data pointer be NULL without ever calling malloc; ArrayBuffer itself still needs additional metadata inside an implementation, so it can't ever just be a bare chunk of memory. - Vlad ----- "Cedric Vivier" wrote: > Hey Vlad, > > > On Thu, Jun 10, 2010 at 01:42, Vladimir Vukicevic < > vladimir...@ > wrote: > > > > > > 0 is a valid length; you should be able to create 0-length arrays as > necessary, otherwise there's no way to represent a valid empty array. > You'd have to pass null, which means special casing everything that > works with arrays to understand null. > > > Zero-length arrays might make sense but zero-length buffers ? I see > ArrayBuffer as a way to efficiently allocate packed (buffer) memory > from Javascript, hence my question about such behavior (ie. malloc(0) > or equivalent on some GCs will fail). > > > Not sure to get what you imply by "special casing everything that > works with arrays" ? > TypedArray are interface types so null is already a potential 'legal' > value to take care of afaik. > > > In my understanding supporting 0-length arrays requires additional > special casing on top of that, for avoiding divide by zero and > internal memory housekeeping. > > > > > And as the length is already unsigned, then it can't ever be negative, > so I don't think we need to do any special language here. > > > > I see, so what should happen in this case ? TYPE_ERR ? > > > > > Regards, ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Wed Jun 9 11:38:49 2010 From: vla...@ (Vladimir Vukicevic) Date: Wed, 9 Jun 2010 11:38:49 -0700 (PDT) Subject: [Public WebGL] vertex attrib 0 In-Reply-To: <1686451902.398270.1276107694326.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <1257440518.398591.1276108729763.JavaMail.root@cm-mail03.mozilla.org> In the interest of avoiding "what the heck is going on, why does it not work?" head-scratching, I suggest that calling a drawing operation without an enabled array at vertex attrib index 0 should generate INVALID_OPERATION. - Vlad ----- "Vladimir Vukicevic" wrote: > A while ago we had a discussion about making vertex attrib index 0 > special, and requiring that it be enabled with an array bound, but I > don't think we ever documented that anywhere. I think we have to add > this -- the gl-vertexattrib.html test in the test suite fails with > attrib index 0 on a wide variety of cards/desktop GLSL drivers, but > works fine with index 1. > > - Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Wed Jun 9 11:48:18 2010 From: kbr...@ (Kenneth Russell) Date: Wed, 9 Jun 2010 11:48:18 -0700 Subject: [Public WebGL] vertex attrib 0 In-Reply-To: <1257440518.398591.1276108729763.JavaMail.root@cm-mail03.mozilla.org> References: <1686451902.398270.1276107694326.JavaMail.root@cm-mail03.mozilla.org> <1257440518.398591.1276108729763.JavaMail.root@cm-mail03.mozilla.org> Message-ID: No, we agreed during the recent face-to-face that WebGL will always emulate the OpenGL ES 2.0 behavior, in which vertex attribute 0 is not special. This is achievable; Gregg Tavares has implemented it on desktop GL and can describe the techniques necessary. This decision has been documented at http://khronos.org/webgl/wiki/WebGL_and_OpenGL_Differences . -Ken On Wed, Jun 9, 2010 at 11:38 AM, Vladimir Vukicevic wrote: > > In the interest of avoiding "what the heck is going on, why does it not work?" head-scratching, I suggest that calling a drawing operation without an enabled array at vertex attrib index 0 should generate INVALID_OPERATION. > > ? ?- Vlad > > ----- "Vladimir Vukicevic" wrote: > >> A while ago we had a discussion about making vertex attrib index 0 >> special, and requiring that it be enabled with an array bound, but I >> don't think we ever documented that anywhere. ?I think we have to add >> this -- the gl-vertexattrib.html test in the test suite fails with >> attrib index 0 on a wide variety of cards/desktop GLSL drivers, but >> works fine with index 1. >> >> ? ? - Vlad > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Wed Jun 9 11:53:41 2010 From: vla...@ (Vladimir Vukicevic) Date: Wed, 9 Jun 2010 11:53:41 -0700 (PDT) Subject: [Public WebGL] vertex attrib 0 In-Reply-To: Message-ID: <1573899892.398791.1276109621160.JavaMail.root@cm-mail03.mozilla.org> Oof, ok; I forgot that's how the discussion went. - Vlad ----- "Kenneth Russell" wrote: > No, we agreed during the recent face-to-face that WebGL will always > emulate the OpenGL ES 2.0 behavior, in which vertex attribute 0 is > not > special. This is achievable; Gregg Tavares has implemented it on > desktop GL and can describe the techniques necessary. > > This decision has been documented at > http://khronos.org/webgl/wiki/WebGL_and_OpenGL_Differences . > > -Ken > > On Wed, Jun 9, 2010 at 11:38 AM, Vladimir Vukicevic > wrote: > > > > In the interest of avoiding "what the heck is going on, why does it > not work?" head-scratching, I suggest that calling a drawing operation > without an enabled array at vertex attrib index 0 should generate > INVALID_OPERATION. > > > > ? ?- Vlad > > > > ----- "Vladimir Vukicevic" wrote: > > > >> A while ago we had a discussion about making vertex attrib index 0 > >> special, and requiring that it be enabled with an array bound, but > I > >> don't think we ever documented that anywhere. ?I think we have to > add > >> this -- the gl-vertexattrib.html test in the test suite fails with > >> attrib index 0 on a wide variety of cards/desktop GLSL drivers, > but > >> works fine with index 1. > >> > >> ? ? - Vlad > > ----------------------------------------------------------- > > You are currently subscribed to public_webgl...@ > > To unsubscribe, send an email to majordomo...@ with > > the following command in the body of your email: > > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ced...@ Wed Jun 9 12:27:59 2010 From: ced...@ (Cedric Vivier) Date: Thu, 10 Jun 2010 03:27:59 +0800 Subject: [Public WebGL] TypedArray constructors and zero length In-Reply-To: <116006142.398438.1276108173757.JavaMail.root@cm-mail03.mozilla.org> References: <116006142.398438.1276108173757.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Thu, Jun 10, 2010 at 02:29, Vladimir Vukicevic wrote: > > A zero-length array still has a .buffer property, which has to be non-null -- thus it has to be of zero size if the array was created with 'new Int32Array(0);'. ?The behaviour of malloc(0) isn't really relevant, though, as the underlying implementation can just have its raw data pointer be NULL without ever calling malloc; ArrayBuffer itself still needs additional metadata inside an implementation, so it can't ever just be a bare chunk of memory. Okay, so this needs some minimal special casing in implementations indeed, thanks! About negative values I'm still confused about what to do however. WebIDL ECMAscript conversion rules refers to ToUint32 in ECMA-262v5 (section 9.6) which does not appear to possibly generate an error (as in C-style casting in some way). Interestingly ECMA262 specifies "new Array(len)" with the following : """ If the argument len is a Number and ToUint32(len) is not equal to len, a RangeError exception is thrown. """ Which certainly confirms ToUint32 does not generate an error, as the case of negative value as well as NaN and other things is specified here. WebIDL Java conversion rules are clearer about this and indeed only C-style casting is implied by WebIDL (section 5.2.8) : """ In Java this is the same as performing a bit-wise AND of the?int?value with the?long?constant?0xffffffffL. """ This means that for instance passing -1 to a TypedArray constructor would not generate an error and be equivalent to passing 268435455 as length... Should we replace type of length with "long" instead so that we can generate an error in case a negative value is passed ? Or perhaps this needs modification in WebIDL spec and/or addition of a stricter conversion rule parameter attribute (ie. possibly raising TYPE_ERR or OverflowException) ? Regards, ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Wed Jun 9 13:51:13 2010 From: kbr...@ (Kenneth Russell) Date: Wed, 9 Jun 2010 13:51:13 -0700 Subject: [Public WebGL] TypedArray constructors and zero length In-Reply-To: References: <116006142.398438.1276108173757.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Wed, Jun 9, 2010 at 12:27 PM, Cedric Vivier wrote: > On Thu, Jun 10, 2010 at 02:29, Vladimir Vukicevic wrote: >> >> A zero-length array still has a .buffer property, which has to be non-null -- thus it has to be of zero size if the array was created with 'new Int32Array(0);'. ?The behaviour of malloc(0) isn't really relevant, though, as the underlying implementation can just have its raw data pointer be NULL without ever calling malloc; ArrayBuffer itself still needs additional metadata inside an implementation, so it can't ever just be a bare chunk of memory. > > Okay, so this needs some minimal special casing in implementations > indeed, thanks! > > > About negative values I'm still confused about what to do however. > > WebIDL ECMAscript conversion rules refers to ToUint32 in ECMA-262v5 > (section 9.6) which does not appear to possibly generate an error (as > in C-style casting in some way). > > Interestingly ECMA262 specifies "new Array(len)" with the following : > """ > If the argument len is a Number and ToUint32(len) is not equal to len, > a RangeError exception is thrown. > """ > Which certainly confirms ToUint32 does not generate an error, as the > case of negative value as well as NaN and other things is specified > here. > > > WebIDL Java conversion rules are clearer about this and indeed only > C-style casting is implied by WebIDL (section 5.2.8) : > """ > In Java this is the same as performing a bit-wise AND of the?int?value > with the?long?constant?0xffffffffL. > """ > > This means that for instance passing -1 to a TypedArray constructor > would not generate an error and be equivalent to passing 268435455 as > length... > > > Should we replace type of length with "long" instead so that we can > generate an error in case a negative value is passed ? No, we should definitely not do this. We had a host of issues with the TypedArray implementation in WebKit due to the use of signed values where conceptually negative values are invalid. While -1 will convert to a large number, implementations are expected to attempt the allocation, fail, and return a buffer with zero length. -Ken > Or perhaps this needs modification in WebIDL spec and/or addition of a > stricter conversion rule parameter attribute (ie. possibly raising > TYPE_ERR or OverflowException) ? > > > Regards, > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From bja...@ Wed Jun 9 14:38:00 2010 From: bja...@ (Benoit Jacob) Date: Wed, 9 Jun 2010 14:38:00 -0700 (PDT) Subject: [Public WebGL] problematic GetParameter pnames In-Reply-To: <2133854877.401548.1276119352331.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <151852184.401586.1276119480539.JavaMail.root@cm-mail03.mozilla.org> Hi, I have trouble implementing the following pnames in GetParameter: A. The following pnames are mentioned in the WebGl spec for GetParameter, but are seem to be absent from desktop OpenGL. How to implement them on desktop OpenGL systems? MAX_FRAGMENT_UNIFORM_VECTORS MAX_VERTEX_UNIFORM_VECTORS MAX_VARYING_VECTORS B. The following pnames are mentioned in the WebGL spec only in the general list of constants, but not in the section on getParameter. Yet the gl-get-calls test is trying to pass them to getParameter. Which is right? SHADER_COMPILER IMPLEMENTATION_COLOR_READ_FORMAT IMPLEMENTATION_COLOR_READ_TYPE Cheers, Benoit ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ced...@ Wed Jun 9 15:44:57 2010 From: ced...@ (Cedric Vivier) Date: Thu, 10 Jun 2010 06:44:57 +0800 Subject: [Public WebGL] TypedArray constructors and zero length In-Reply-To: References: <116006142.398438.1276108173757.JavaMail.root@cm-mail03.mozilla.org> Message-ID: Hi Kenneth, On Thu, Jun 10, 2010 at 04:51, Kenneth Russell wrote: > On Wed, Jun 9, 2010 at 12:27 PM, Cedric Vivier wrote: >> Should we replace type of length with "long" instead so that we can >> generate an error in case a negative value is passed ? > > No, we should definitely not do this. We had a host of issues with the > TypedArray implementation in WebKit due to the use of signed values > where conceptually negative values are invalid. Changing argument type is probably not be the best solution indeed, conceptually unsigned makes sense, however there is still issues at a level higher than implementation imho because of WebIDL's lack of potential stricter conversion rules (ie. no OverflowException or similar concept when a negative value is passed to a unsigned argument which conceptually require a positive value), hence the second part of the question : "" > Or perhaps this needs modification in WebIDL spec and/or addition of a > stricter conversion rule parameter attribute (ie. possibly raising > TYPE_ERR or OverflowException) ? """ > While -1 will convert > to a large number, implementations are expected to attempt the > allocation, fail, and return a buffer with zero length. Are you implying TypedArray construction can never fail then ? IF returning a zero-length array is the behavior only when "a large number" is passed, how implementations should differentiate "a large number" which couldn't be allocated from "out of memory (within the resource limits of the Javascript context or GC)" ? Rather than directly failing at creation time (with an exception), which is easy to debug, it will fail at an undefined later point with another exception, possibly from a different object as a result of a sequence of events - or worse, executing with unexpected behavior... this goes against fail-fast principle and looks like a potential debugging hell for WebGL developers imho. In Javascript : new Array(invalid_value) will immediately stop with an exception, like almost every language/framework. How is having different argument error handling a benefit here ? IMO whether or not unsigned-to-integer overflow is specified by WebIDL someday, we should stick to the fail-fast principle and raise an exception when a TypedArray cannot be allocated for whatever reason (ie. not enough memory or not allowed to perform large allocation, possibly because passed length was bogus/negative). Regards, ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cal...@ Wed Jun 9 19:26:52 2010 From: cal...@ (Mark Callow) Date: Thu, 10 Jun 2010 11:26:52 +0900 Subject: [Public WebGL] problematic GetParameter pnames In-Reply-To: <151852184.401586.1276119480539.JavaMail.root@cm-mail03.mozilla.org> References: <151852184.401586.1276119480539.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <4C104D6C.1080506@hicorp.co.jp> The first three are equivalent respectively to the following in Open GL 3.1: MAX_FRAGMENT_UNIFORM_COMPONENTS/4 MAX_VERTEX_UNIFORM_COMPONENTS/4 MAX_VARYING_COMPONENTS/4 For the first 2, I'm assuming the WebGL implementation is using only the default uniform block. A shader compiler is required for WebGL so I think we decided to drop the SHADER_COMPILER query. Since readPixels is supported, IMPLEMENTATION_COLOR_READ_FORMAT and IMPLEMENTATION_COLOR_READ_TYPE ought to be supported by WebGL so the application can use the most efficient format when reading pixels. This seems be an oversight in the WebGL spec. There is no equivalent on desktop GL as typically they have format conversion hardware so any format is equally good. A WebGL implementation on desktop will have to emulate these. Regards -Mark On 10/06/2010 06:38, Benoit Jacob wrote: > Hi, > > I have trouble implementing the following pnames in GetParameter: > > A. The following pnames are mentioned in the WebGl spec for GetParameter, but are seem to be absent from desktop OpenGL. How to implement them on desktop OpenGL systems? > MAX_FRAGMENT_UNIFORM_VECTORS > MAX_VERTEX_UNIFORM_VECTORS > MAX_VARYING_VECTORS > > B. The following pnames are mentioned in the WebGL spec only in the general list of constants, but not in the section on getParameter. Yet the gl-get-calls test is trying to pass them to getParameter. Which is right? > SHADER_COMPILER > IMPLEMENTATION_COLOR_READ_FORMAT > IMPLEMENTATION_COLOR_READ_TYPE > > Cheers, > Benoit > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 412 bytes Desc: not available URL: From kbr...@ Wed Jun 9 21:18:01 2010 From: kbr...@ (Kenneth Russell) Date: Wed, 9 Jun 2010 21:18:01 -0700 Subject: [Public WebGL] problematic GetParameter pnames In-Reply-To: <4C104D6C.1080506@hicorp.co.jp> References: <151852184.401586.1276119480539.JavaMail.root@cm-mail03.mozilla.org> <4C104D6C.1080506@hicorp.co.jp> Message-ID: On Wed, Jun 9, 2010 at 7:26 PM, Mark Callow wrote: > The first three are equivalent respectively to the following in Open GL 3.1: > > MAX_FRAGMENT_UNIFORM_COMPONENTS/4 > MAX_VERTEX_UNIFORM_COMPONENTS/4 > MAX_VARYING_COMPONENTS/4 > > For the first 2, I'm assuming the WebGL implementation is using only the > default uniform block. > > A shader compiler is required for WebGL so I think we decided to drop the > SHADER_COMPILER query. > > Since readPixels is supported, IMPLEMENTATION_COLOR_READ_FORMAT and > IMPLEMENTATION_COLOR_READ_TYPE ought to be supported by WebGL so the > application can use the most efficient format when reading pixels. This > seems be an oversight in the WebGL spec. There is no equivalent on desktop > GL as typically they have format conversion hardware so any format is > equally good. A WebGL implementation on desktop will have to emulate these. Yes, the absence of IMPLEMENTATION_COLOR_READ_FORMAT and IMPLEMENTATION_COLOR_READ_TYPE in the getParameter documentation is an oversight, as is the presence of the SHADER_COMPILER enum in the spec. I've updated the spec. The gl-get-calls test querying SHADER_COMPILER is a bug in the test. -Ken > Regards > > -Mark > > > > On 10/06/2010 06:38, Benoit Jacob wrote: > > Hi, > > I have trouble implementing the following pnames in GetParameter: > > A. The following pnames are mentioned in the WebGl spec for GetParameter, > but are seem to be absent from desktop OpenGL. How to implement them on > desktop OpenGL systems? > MAX_FRAGMENT_UNIFORM_VECTORS > MAX_VERTEX_UNIFORM_VECTORS > MAX_VARYING_VECTORS > > B. The following pnames are mentioned in the WebGL spec only in the general > list of constants, but not in the section on getParameter. Yet the > gl-get-calls test is trying to pass them to getParameter. Which is right? > SHADER_COMPILER > IMPLEMENTATION_COLOR_READ_FORMAT > IMPLEMENTATION_COLOR_READ_TYPE > > Cheers, > Benoit > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ced...@ Thu Jun 10 02:12:31 2010 From: ced...@ (Cedric Vivier) Date: Thu, 10 Jun 2010 17:12:31 +0800 Subject: [Public WebGL] TypedArray constructors and zero length In-Reply-To: References: <116006142.398438.1276108173757.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Thu, Jun 10, 2010 at 06:44, Cedric Vivier wrote: >> While -1 will convert >> to a large number, implementations are expected to attempt the >> allocation, fail, and return a buffer with zero length. > > Are you implying TypedArray construction can never fail then ? > IF returning a zero-length array is the behavior only when "a large > number" is passed, how implementations should differentiate "a large > number" which couldn't be allocated from "out of memory (within the > resource limits of the Javascript context or GC)" ? > FWIW I wrote some tests on this today, Mozilla has the sane behavior of throwing an exception ("invalid array size") when length is negative while WebKit swaps like crazy attempting to allocate memory... and in the end returns "undefined". If we do not simply go for changing length type in constructors (only [1]) to 'long', I guess we should somehow specify stricter conversion rules in WebIDL so that passing an out of range value to a unsigned parameter throws as expected (and therefore provide consistent defined behavior). Regards, [1] : other attributes are not an issue since they are "readonly" or have specified behavior wrt values are out of range (e.g slice,...), the issue is only present for constructor as it is an "in" value. ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Thu Jun 10 05:07:23 2010 From: ste...@ (Steve Baker) Date: Thu, 10 Jun 2010 07:07:23 -0500 Subject: [Public WebGL] WebGL on cellphones. Message-ID: <4C10D57B.3050104@sjbaker.org> What is the status/prospects for WebGL on cellphones? I can see that the specification is aimed more at OpenGLES than full-up OpenGL, so clearly this has been thought about. Are there any test implementations out there for either Android or iPhone that I could get a hold of to play with? ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From gil...@ Thu Jun 10 05:38:41 2010 From: gil...@ (Giles Thomas) Date: Thu, 10 Jun 2010 13:38:41 +0100 Subject: [Public WebGL] WebGL on cellphones. In-Reply-To: <4C10D57B.3050104@sjbaker.org> References: <4C10D57B.3050104@sjbaker.org> Message-ID: On 10 June 2010 13:07, Steve Baker wrote: > What is the status/prospects for WebGL on cellphones? I can see that > the specification is aimed more at OpenGLES than full-up OpenGL, so > clearly this has been thought about. Are there any test > implementations out there for either Android or iPhone that I could get > a hold of to play with? > It's not Android or iPhone, but Nokia's latest firmware for their N900 smartphone supports WebGL by default in the built-in Gecko-based browser (somewhat to the surprise of the Mozilla guys, it seems). I put together a video of some of the demos that run on it: < http://learningwebgl.com/blog/?p=2340>. A summary: as you'd expect, it's behind where the desktop implementations are, but it works surprisingly well. Cheers, Giles -- Giles Thomas giles...@ http://www.gilesthomas.com/ http://learningwebgl.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From min...@ Thu Jun 10 06:07:18 2010 From: min...@ (=?utf-8?B?QWxleCBaaG91?=) Date: Thu, 10 Jun 2010 21:07:18 +0800 Subject: [Public WebGL] =?utf-8?B?UmU6IFJlOiBbUHVibGljIFdlYkdMXSBXZWJHTCBvbiBjZWxscGhvbmVzLg==?= References: <4C10D57B.3050104@sjbaker.org>, Message-ID: <201006102106573614471@roylead.com> I'm concerning with the same issue here ... Don't iPhone's Safari and Android's Chrome support WebGL? As I know they support HTML5 but am not sure about WebGL. Regards, - Alex Zhou ???? Giles Thomas ????? 2010-06-10 20:40:58 ???? Steve Baker ??? public webgl ??? Re: [Public WebGL] WebGL on cellphones. On 10 June 2010 13:07, Steve Baker wrote: What is the status/prospects for WebGL on cellphones? I can see that the specification is aimed more at OpenGLES than full-up OpenGL, so clearly this has been thought about. Are there any test implementations out there for either Android or iPhone that I could get a hold of to play with? It's not Android or iPhone, but Nokia's latest firmware for their N900 smartphone supports WebGL by default in the built-in Gecko-based browser (somewhat to the surprise of the Mozilla guys, it seems). I put together a video of some of the demos that run on it: . A summary: as you'd expect, it's behind where the desktop implementations are, but it works surprisingly well. Cheers, Giles -- Giles Thomas giles...@ http://www.gilesthomas.com/ http://learningwebgl.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From vla...@ Thu Jun 10 08:55:56 2010 From: vla...@ (Vladimir Vukicevic) Date: Thu, 10 Jun 2010 08:55:56 -0700 (PDT) Subject: [Public WebGL] TypedArray constructors and zero length In-Reply-To: Message-ID: <1366146391.407526.1276185356020.JavaMail.root@cm-mail03.mozilla.org> ----- "Cedric Vivier" wrote: > On Thu, Jun 10, 2010 at 06:44, Cedric Vivier > wrote: > >> While -1 will convert > >> to a large number, implementations are expected to attempt the > >> allocation, fail, and return a buffer with zero length. > > > > Are you implying TypedArray construction can never fail then ? > > IF returning a zero-length array is the behavior only when "a large > > number" is passed, how implementations should differentiate "a > large > > number" which couldn't be allocated from "out of memory (within the > > resource limits of the Javascript context or GC)" ? > > > > FWIW I wrote some tests on this today, Mozilla has the sane behavior > of throwing an exception ("invalid array size") when length is > negative while WebKit swaps like crazy attempting to allocate > memory... and in the end returns "undefined". Note that we don't throw because it's negative, but because it's larger than the biggest size that we allow due to implementation constraints (roughly 2^31). - Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ced...@ Thu Jun 10 09:02:43 2010 From: ced...@ (Cedric Vivier) Date: Fri, 11 Jun 2010 00:02:43 +0800 Subject: [Public WebGL] TypedArray constructors and zero length In-Reply-To: <1366146391.407526.1276185356020.JavaMail.root@cm-mail03.mozilla.org> References: <1366146391.407526.1276185356020.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Thu, Jun 10, 2010 at 23:55, Vladimir Vukicevic wrote: >> FWIW I wrote some tests on this today, Mozilla has the sane behavior >> of throwing an exception ("invalid array size") when length is >> negative while WebKit swaps like crazy attempting to allocate >> memory... and in the end returns "undefined". > > Note that we don't throw because it's negative, but because it's larger than the biggest size that we allow due to implementation constraints (roughly 2^31). Which is quite equivalent in the end, it should be specified what happens when instantiation fails (for whatever reason), so that we don't have inconsistent behavior between implementations like it is the case currently. I support Mozilla's current behavior, if it cannot create the typed array, just throw (as usual, and as with standard Javascript Array object). Regards, ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From bja...@ Thu Jun 10 09:56:38 2010 From: bja...@ (Benoit Jacob) Date: Thu, 10 Jun 2010 09:56:38 -0700 (PDT) Subject: [Public WebGL] GetParameter(COLOR_WRITEMASK) In-Reply-To: <1625117257.408217.1276188904127.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <685950020.408231.1276188998246.JavaMail.root@cm-mail03.mozilla.org> Hi, The current WebGL spec says that GetParameter(COLOR_WRITEMASK) returns an array of 4 uint8's. But in the OpenGL ES 2.0 documentation, it is an array of four bools. http://www.khronos.org/opengles/sdk/2.0/docs/man/glGet.xml Which seems more logical, since they are really boolean values --- only 2 possible values, not 256. So is this a typo in the WebGL spec? Benoit ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Thu Jun 10 10:34:32 2010 From: kbr...@ (Kenneth Russell) Date: Thu, 10 Jun 2010 10:34:32 -0700 Subject: [Public WebGL] GetParameter(COLOR_WRITEMASK) In-Reply-To: <685950020.408231.1276188998246.JavaMail.root@cm-mail03.mozilla.org> References: <1625117257.408217.1276188904127.JavaMail.root@cm-mail03.mozilla.org> <685950020.408231.1276188998246.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Thu, Jun 10, 2010 at 9:56 AM, Benoit Jacob wrote: > Hi, > > The current WebGL spec says that GetParameter(COLOR_WRITEMASK) returns an array of 4 uint8's. But in the OpenGL ES 2.0 documentation, it is an array of four bools. > > http://www.khronos.org/opengles/sdk/2.0/docs/man/glGet.xml > > Which seems more logical, since they are really boolean values --- only 2 possible values, not 256. > > So is this a typo in the WebGL spec? I think this was done because again at the time it wasn't possible to express i.e. "boolean[]" in Web IDL. Actually with the WebGL typedefs this would return "GLboolean[]". We could change this; it would require some (more) handwritten JS binding code but otherwise isn't a big deal. Thoughts? -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Thu Jun 10 10:49:42 2010 From: kbr...@ (Kenneth Russell) Date: Thu, 10 Jun 2010 10:49:42 -0700 Subject: [Public WebGL] TypedArray constructors and zero length In-Reply-To: References: <116006142.398438.1276108173757.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Thu, Jun 10, 2010 at 2:12 AM, Cedric Vivier wrote: > On Thu, Jun 10, 2010 at 06:44, Cedric Vivier wrote: >>> While -1 will convert >>> to a large number, implementations are expected to attempt the >>> allocation, fail, and return a buffer with zero length. >> >> Are you implying TypedArray construction can never fail then ? >> IF returning a zero-length array is the behavior only when "a large >> number" is passed, how implementations should differentiate "a large >> number" which couldn't be allocated from "out of memory (within the >> resource limits of the Javascript context or GC)" ? >> > > FWIW I wrote some tests on this today, Mozilla has the sane behavior > of throwing an exception ("invalid array size") when length is > negative while WebKit swaps like crazy attempting to allocate > memory... and in the end returns "undefined". After looking back at the JavaScript bindings in WebKit, the intended behavior is that this should throw an INDEX_SIZE_ERR exception. See http://trac.webkit.org/browser/trunk/WebCore/bindings/js/JSArrayBufferConstructor.cpp , line 61, and http://trac.webkit.org/browser/trunk/WebCore/bindings/v8/custom/V8ArrayBufferCustom.cpp , line 77. I'm not sure why this isn't happening. Feel free to file a bug on http://bugs.webkit.org/ . -Ken > If we do not simply go for changing length type in constructors (only > [1]) to 'long', I guess we should somehow specify stricter conversion > rules in WebIDL so that passing an out of range value to a unsigned > parameter throws as expected (and therefore provide consistent defined > behavior). > > > Regards, > > > [1] : other attributes are not an issue since they are "readonly" or > have specified behavior wrt values are out of range (e.g slice,...), > the issue is only present for constructor as it is an "in" value. > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Thu Jun 10 11:19:26 2010 From: vla...@ (Vladimir Vukicevic) Date: Thu, 10 Jun 2010 11:19:26 -0700 (PDT) Subject: [Public WebGL] GetParameter(COLOR_WRITEMASK) In-Reply-To: Message-ID: <331093512.409457.1276193966832.JavaMail.root@cm-mail03.mozilla.org> ----- "Kenneth Russell" wrote: > On Thu, Jun 10, 2010 at 9:56 AM, Benoit Jacob > wrote: > > Hi, > > > > The current WebGL spec says that GetParameter(COLOR_WRITEMASK) > returns an array of 4 uint8's. But in the OpenGL ES 2.0 documentation, > it is an array of four bools. > > > > http://www.khronos.org/opengles/sdk/2.0/docs/man/glGet.xml > > > > Which seems more logical, since they are really boolean values --- > only 2 possible values, not 256. > > > > So is this a typo in the WebGL spec? > > I think this was done because again at the time it wasn't possible to > express i.e. "boolean[]" in Web IDL. Actually with the WebGL typedefs > this would return "GLboolean[]". > > We could change this; it would require some (more) handwritten JS > binding code but otherwise isn't a big deal. Thoughts? I think we should; we go through a lot of trouble to return the "right" type elsewhere, might as well not leave this (tiny) wart here. :-) - Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Thu Jun 10 11:56:26 2010 From: kbr...@ (Kenneth Russell) Date: Thu, 10 Jun 2010 11:56:26 -0700 Subject: [Public WebGL] GetParameter(COLOR_WRITEMASK) In-Reply-To: <331093512.409457.1276193966832.JavaMail.root@cm-mail03.mozilla.org> References: <331093512.409457.1276193966832.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Thu, Jun 10, 2010 at 11:19 AM, Vladimir Vukicevic wrote: > > ----- "Kenneth Russell" wrote: > >> On Thu, Jun 10, 2010 at 9:56 AM, Benoit Jacob >> wrote: >> > Hi, >> > >> > The current WebGL spec says that GetParameter(COLOR_WRITEMASK) >> returns an array of 4 uint8's. But in the OpenGL ES 2.0 documentation, >> it is an array of four bools. >> > >> > http://www.khronos.org/opengles/sdk/2.0/docs/man/glGet.xml >> > >> > Which seems more logical, since they are really boolean values --- >> only 2 possible values, not 256. >> > >> > So is this a typo in the WebGL spec? >> >> I think this was done because again at the time it wasn't possible to >> express i.e. "boolean[]" in Web IDL. Actually with the WebGL typedefs >> this would return "GLboolean[]". >> >> We could change this; it would require some (more) handwritten JS >> binding code but otherwise isn't a big deal. Thoughts? > > I think we should; we go through a lot of trouble to return the "right" type elsewhere, might as well not leave this (tiny) wart here. :-) OK, done. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ced...@ Thu Jun 10 13:15:20 2010 From: ced...@ (Cedric Vivier) Date: Fri, 11 Jun 2010 04:15:20 +0800 Subject: [Public WebGL] TypedArray constructors and zero length In-Reply-To: References: <116006142.398438.1276108173757.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Fri, Jun 11, 2010 at 01:49, Kenneth Russell wrote: > On Thu, Jun 10, 2010 at 2:12 AM, Cedric Vivier wrote: >> FWIW I wrote some tests on this today, Mozilla has the sane behavior >> of throwing an exception ("invalid array size") when length is >> negative while WebKit swaps like crazy attempting to allocate >> memory... and in the end returns "undefined". > > After looking back at the JavaScript bindings in WebKit, the intended > behavior is that this should throw an INDEX_SIZE_ERR exception. See > http://trac.webkit.org/browser/trunk/WebCore/bindings/js/JSArrayBufferConstructor.cpp > , line 61, and http://trac.webkit.org/browser/trunk/WebCore/bindings/v8/custom/V8ArrayBufferCustom.cpp > , line 77. I'm not sure why this isn't happening. Feel free to file a > bug on http://bugs.webkit.org/ . Interesting, I guess in V8's case it setting DOM exception is not enough as an exception object must be passed/returned to the runtime. Looks like the behavior is inconsistent between the 2 WebKit bindings even for some corner cases (NaN/+inf/-inf gives 0 on JSCore - as WebIDL specifies - but throws on the V8 binding). Mozilla does indeed treat the input as an int32 internally and specifically check if the value is negative (which also means "too big" >2^31 depending how you look at it sure, but in the end it just works as intended by WebIDL's unsigned long... a signed value passed as parameter will not work and won't have side-effects [like attempting to allocate possibly huge chunk of memory]) : http://mxr.mozilla.org/mozilla-central/source/js/src/jstypedarray.cpp#137 I think reproducing Mozilla's behavior in both WebKit bindings would make a lot of sense, for instance with this patch for V8's binding : http://neonux.com/webgl/v8-binding-arraybuffer.patch Regards, ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From tah...@ Thu Jun 10 22:49:53 2010 From: tah...@ (Tahin Rahman) Date: Fri, 11 Jun 2010 11:49:53 +0600 Subject: [Public WebGL] Help needed to configure browser Message-ID: Hello all, I am Tahin, an amateur to openGL and beginner to WebGL. I am facing problem to configure WebGL in my browser. I've tried in Firefox Nightly version 3.7a4 by going to "about:config" and changing 'webgl.enable_for_all_sites' to *true.* I also tried in Google Chrome 6.0.427.0 dev by modifying the desktop shortcut target to "chrome.exe --enable -webgl". In both cases when I visit hereit says "Sorry, your browser does not appear to support WebGL (or it is disabled)." Now I have no idea what to do. Did I miss anything step of browser configuration? Please help me out Thanks -- Tahin -------------- next part -------------- An HTML attachment was scrubbed... URL: From ced...@ Thu Jun 10 23:00:19 2010 From: ced...@ (Cedric Vivier) Date: Fri, 11 Jun 2010 14:00:19 +0800 Subject: [Public WebGL] Help needed to configure browser In-Reply-To: References: Message-ID: Hi Tahin, It probably means both browsers are unable to create an OpenGL 2.0+ context necessary for WebGL. What is your graphics card/GPU configuration ? Perhaps you should try updating your video drivers (OpenGL drivers in particular). Regards, On Fri, Jun 11, 2010 at 13:49, Tahin Rahman wrote: > Hello all, > I am Tahin, an?amateur?to openGL and beginner to WebGL. I am facing problem > to configure WebGL in my browser. > I've tried in Firefox Nightly version 3.7a4 by going to "about:config" and > changing 'webgl.enable_for_all_sites' to true. > I also tried in Google Chrome?6.0.427.0 dev by modifying the desktop > shortcut target to "chrome.exe --enable -webgl". > In both cases when I visit here it says "Sorry, your browser does not appear > to support WebGL (or it is disabled)." > Now I have no idea what to do. Did I miss anything step of browser > configuration? Please help me out > Thanks > -- > Tahin > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Fri Jun 11 10:38:50 2010 From: cma...@ (Chris Marrin) Date: Fri, 11 Jun 2010 10:38:50 -0700 Subject: [Public WebGL] A different approach to interleaved typed arrays In-Reply-To: <1288978043.353894.1275680833801.JavaMail.root@cm-mail03.mozilla.org> References: <1288978043.353894.1275680833801.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Jun 4, 2010, at 12:47 PM, Vladimir Vukicevic wrote: > > ... > I'm in general agreement with Ken here -- I just don't see the aliasing/endianness issues as a significant showstopper problem. I'm interested in the TC-39 struct proposal purely from a developer convenience point of view, because being able to access interleaved array data in a more natural form (like foo[i].vertex.x, foo[i].normal.y, foo[i].color.r, etc.) would be nice, though only if the implementations can get that indexing to be close to typed array speed. Having it fix the endianness exposure issues is, to me, only a nice side benefit. > > Current typed array indexing is: > > base_ptr + index< > The struct proposal could be, with enough parser work: > > (base_ptr + structsize*index) + member_offset > > (Though base_ptr+offset will be constant for each element, so maybe there's something useful that can be done there.) > > The interleaved proposal above would be, I believe: > > stride = elementsPerGroup*elemsize + bytesToNextGroup > (ptr + stride*(index/elementsPerGroup)) + (index % elementsPerGroup) Your indexing scheme does look better, but think about where that 'index' value is coming from. In the current scheme filling an interleaved array requires index arithmetic in JavaScript. You essentially have to do this: var index = 0; for (var i = 0; i < values.length; i += 3) { floats.set(index++, values[i]); floats.set(index++, values[i+1]); floats.set(index++, values[i+2]); index++; } In my scheme, all that is done in native code. From a standpoint of optimizing JavaScript to more efficiently access the structure of the views I don't think the difference is significant. The more extensive math you show for accessing my structure can be optimized out because elementsPerGroup, elemsize and bytesToNextGroup are all constant for a given view. So I don't think the "less efficient" argument holds. The reason I get concerned about the endianness issue is because I think we've done a great job of hiding all the gritty, crash-prone issues of OpenGL ES. And while it wouldn't cause browser crashes, the ease with which an author can expose the endianness of the machine is troubling. If we can button that up (and I think we can) then we will have a better API ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From oli...@ Fri Jun 11 11:22:31 2010 From: oli...@ (Oliver Hunt) Date: Fri, 11 Jun 2010 11:22:31 -0700 Subject: [Public WebGL] A different approach to interleaved typed arrays In-Reply-To: References: <1288978043.353894.1275680833801.JavaMail.root@cm-mail03.mozilla.org> Message-ID: I was thinking an alternative approach to the array of structures approach (which looks to be extraordinarily complicated if it's at all achievable) would be a structure of arrays, eg. you define your type as in the last structure concept, but instead of array[i].x you would do array.x[i] etc This would have the benefit of being achievable now, having all the perf benefits of typed arrays, and would solve the issues of byte endianess, etc. Additionally in the current ES proposal for an array of structures it looks like (effectively) implementations would have to do a significant amount of optimisation to prevent reification of GC objects whenever you do myArrayOfStructures[i]. --Oliver On Jun 11, 2010, at 10:38 AM, Chris Marrin wrote: > > On Jun 4, 2010, at 12:47 PM, Vladimir Vukicevic wrote: > >> >> ... >> I'm in general agreement with Ken here -- I just don't see the aliasing/endianness issues as a significant showstopper problem. I'm interested in the TC-39 struct proposal purely from a developer convenience point of view, because being able to access interleaved array data in a more natural form (like foo[i].vertex.x, foo[i].normal.y, foo[i].color.r, etc.) would be nice, though only if the implementations can get that indexing to be close to typed array speed. Having it fix the endianness exposure issues is, to me, only a nice side benefit. >> >> Current typed array indexing is: >> >> base_ptr + index<> >> The struct proposal could be, with enough parser work: >> >> (base_ptr + structsize*index) + member_offset >> >> (Though base_ptr+offset will be constant for each element, so maybe there's something useful that can be done there.) >> >> The interleaved proposal above would be, I believe: >> >> stride = elementsPerGroup*elemsize + bytesToNextGroup >> (ptr + stride*(index/elementsPerGroup)) + (index % elementsPerGroup) > > Your indexing scheme does look better, but think about where that 'index' value is coming from. In the current scheme filling an interleaved array requires index arithmetic in JavaScript. You essentially have to do this: > > var index = 0; > for (var i = 0; i < values.length; i += 3) { > floats.set(index++, values[i]); > floats.set(index++, values[i+1]); > floats.set(index++, values[i+2]); > index++; > } > > In my scheme, all that is done in native code. From a standpoint of optimizing JavaScript to more efficiently access the structure of the views I don't think the difference is significant. The more extensive math you show for accessing my structure can be optimized out because elementsPerGroup, elemsize and bytesToNextGroup are all constant for a given view. > > So I don't think the "less efficient" argument holds. > > The reason I get concerned about the endianness issue is because I think we've done a great job of hiding all the gritty, crash-prone issues of OpenGL ES. And while it wouldn't cause browser crashes, the ease with which an author can expose the endianness of the machine is troubling. If we can button that up (and I think we can) then we will have a better API > > ----- > ~Chris > cmarrin...@ > > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Fri Jun 11 13:49:18 2010 From: cma...@ (Chris Marrin) Date: Fri, 11 Jun 2010 13:49:18 -0700 Subject: [Public WebGL] WebGLExtension interface proposal In-Reply-To: References: Message-ID: On Jun 8, 2010, at 12:05 PM, Cedric Vivier wrote: > Hi, > > We specify extensions as objects with more or less the same behavior as other WebGL objects mapped to GL resources, ie. an extensions object is only valid for use by the WebGLRenderingContext who created it and they should be re-created after context is lost ; as such I believe it would make sense to declare a WebGLExtension interface from which all WebGL extensions must derive. > It could simplify implementations (can use the same base WebGLObject implementation for validation/tracking, extensions could also more easily use same base object for internal housekeeping), but more importantly it makes API friendlier to more strongly-typed languages (versus 'object'), avoids potential 'empty object extension' which might be confusing, and allows us to transparently add new 'utility' members on all extensions if we need to do so in a future revision. > > In other words, I propose following spec addition : > > > 5.XX WebGLExtension > > The WebGLExtension interface represents an extension object. All WebGL extensions derive from this interface. > The object is returned by calling getExtension with a supported extension string (see 5.16.14 Detecting and enabling extensions). > > interface WebGLExtension : WebGLObject { > readonly attribute DOMString name; > } > > > And : > > WebGLExtension getExtension(DOMString name); > > instead of : > > object getExtension(DOMString name); I think we may have discussed this and rejected it as not being necessary. I may even have been an advocate of this approach but was convinced that such arbitrary base classes are unnecessary. But I may be confusing this issue with another similar one. Even if we did have a base class for extensions, I would not want it to be derived from WebGLObject. That base interface has a very specific meaning as the WebGL realization of the OpenGL Object concept. An extension is not part of that concept, so it would be inappropriate for it to be in that class hierarchy. I don't see a need to expose the extension name in this interface. We don't expose identifications like this for any other object types in WebGL. I am comfortable with the interface returned by each extension being unique and unrelated to the others. > > > > Finally, for consistency with other WebGLObjects I propose introduction of an isExtension function to check if an extension > object is valid (ie. not obtained from another context and not acquired before context loss) : > > GLboolean isExtension(WebGLExtension extension); Again, this is a convention used for WebGLObject derived interfaces. I don't see a reason to add extensions to that mix. > > > > Possibly we might as well simplify API and merge all is* functions into only one, e.g a probably better named "GLboolean isValid(WebGLObject object)". > (all current is* functions can trivially be wrapped to use it in Javascript so there is no feature removal and/or divergence imho) I don't think this is a good idea for similar reasons. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Fri Jun 11 13:50:14 2010 From: cma...@ (Chris Marrin) Date: Fri, 11 Jun 2010 13:50:14 -0700 Subject: [Public WebGL] WebGL specification updates In-Reply-To: References: Message-ID: <4D7A2D25-4F3D-4670-9111-6EDE068CB6CD@apple.com> Great. Thanks Ken. Feels like we're getting really close to a final spec! On Jun 8, 2010, at 12:24 PM, Kenneth Russell wrote: > All, > > A large number of updates to the WebGL and TypedArray specifications > have been checked in. Here is the commit message indicating what was > changed. Please review and send your feedback to the list. > > We are starting now to implement these changes in WebKit's WebGL code; > I gather that Mozilla is doing the same. Hopefully within a couple of > weeks the implementations will have converged on the new > specification. > > -Ken > > Incorporated specification changes from WebGL face-to-face meeting on > May 25-26, 2010, and from discussion on the public mailing list: > > - Added section on Premultiplied Alpha, Canvas APIs and texImage2D, > and referred to it from the specification of the premultipliedAlpha > flag in the WebGLContextAttributes. > - Added section on Enabled Vertex Attributes and Range Checking, and > referred to it from the specifications of enableVertexAttribArray, > drawArrays and drawElements, as well as from the Resource > Restrictions specification. Deleted now-obsolete section on > ArrayBuffer and Uninitialized Data. > - Added differences section on extension queries. Removed EXTENSIONS > enum from WebGL spec. > - Added WebGL-specific UNPACK_FLIP_Y_WEBGL, > UNPACK_PREMULTIPLY_ALPHA_WEBGL and CONTEXT_LOST_WEBGL enums. > - Added Pixel Storage Parameters section documenting > UNPACK_FLIP_Y_WEBGL and UNPACK_PREMULTIPLY_ALPHA_WEBGL parameters, > and referred to it from pixelStorei, texImage2D and texSubImage2D > specifications. > - Removed getString() and added its legal arguments to the > documentation of getParameter(). Added UNPACK_FLIP_Y_WEBGL and > UNPACK_PREMULTIPLY_ALPHA_WEBGL to table of return values from > getParameter. > - Documented return values for getParameter taking RENDERER, > SHADING_LANGUAGE_VERSION, VENDOR and VERSION. > - Added internalformat, format and type arguments to texImage2D > variants taking ImageData, HTMLImageElement, HTMLCanvasElement and > HTMLVideoElement. Removed flipY and asPremultipliedAlpha arguments. > Added documentation for new arguments and semantics. > - Added format and type arguments to texSubImage2D variants taking > ImageData, HTMLImageElement, HTMLCanvasElement and > HTMLVideoElement. Removed flipY and asPremultipliedAlpha arguments. > Referred to texImage2D documentation. > - Updated readPixels() specification to take ArrayBufferView as > argument rather than returning it. Specified behavior for > out-of-range pixels and mismatches between type and pixels > arguments, the latter for texImage2D as well. > - Expanded specification of WebGLContextLostEvent. Added > specification of WebGLContextRestoredEvent. Removed resetContext(). > Added example from Gregg Tavares using these events to reset an > application. > - Removed WebGLObjectArray. Used WebGLShader[] as return type of > getAttachedShaders(). > - Added note about negative "first" argument to drawElements > generating INVALID_VALUE based on feedback from Vladimir Vukicevic. > - Synchronized webgl.idl and WebGL specification. > - Renamed references to FloatArray to Float32Array. > - Minor grammatical cleanups. > > In TypedArray specification: > > - Renamed FloatArray to Float32Array and DoubleArray to Float64Array. > - Renamed getFloat/setFloat to getFloat32/setFloat32 and > getDouble/setDouble to getFloat64/setFloat64 on DataView. > - Synchronized typedarrays.idl. > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ced...@ Fri Jun 11 23:22:20 2010 From: ced...@ (Cedric Vivier) Date: Sat, 12 Jun 2010 14:22:20 +0800 Subject: [Public WebGL] WebGLExtension interface proposal In-Reply-To: References: Message-ID: Hi Chris! On Sat, Jun 12, 2010 at 04:49, Chris Marrin wrote: > I think we may have discussed this and rejected it as not being necessary. I may even have been an advocate of this approach but was convinced that such arbitrary base classes are unnecessary. But I may be confusing this issue with another similar one. I agree this might not be necessary in Javascript, however this would really simplify stronger-typed bindings where 'object' are to be avoided whenever possible. > Even if we did have a base class for extensions, I would not want it to be derived from WebGLObject. That base interface has a very specific meaning as the WebGL realization of the OpenGL Object concept. An extension is not part of that concept, so it would be inappropriate for it to be in that class hierarchy. I agree the extensions are not really part of that concept in the current revision of the spec, however I think we had a consensus before to do the following change (which hasn't been acted upon in the spec yet) : """ * in 5.16.1 WebGLContextLostEvent: replace: "Once the context is reset, the application must reinitialize the context's state and recreate all of its WebGL resources such as textures, shaders and programs." with: "Once the context is reset, the application must reinitialize the context's state and recreate all of its WebGL resources such as textures, shaders, programs and extensions. The set of extensions returned by getSupportedExtensions() is not guaranteed to be the same when the context is resumed, the application must test and enable each supported extension individually again." """ (from original message in thread "Addition to WebGLContextLostEvent wrt extensions") With that addition in mind I assume the concepts are very related : boxed WebGL objects that are acquired and valid only within the same context and within the same lifetime (before context loss), extension objects fit that model. > > I don't see a need to expose the extension name in this interface. We don't expose identifications like this for any other object types in WebGL. I am comfortable with the interface returned by each extension being unique and unrelated to the others. Okay, we can do without 'name' attribute, no strong opinion on that, my intent was to avoid the 'empty' extension object behavior in some cases. >> Possibly we might as well simplify API and merge all is* functions into only one, e.g a probably better named "GLboolean isValid(WebGLObject object)". >> (all current is* functions can trivially be wrapped to use it in Javascript so there is no feature removal and/or divergence imho) > > I don't think this is a good idea for similar reasons. On the other hand, we already did similar thing with all getString/getBoolean/getFloat/getInteger functions merged into getParameter ... in OpenGL the typed versions of every function (IsTexture, IsShader, ...) are necessary because the names are just plain integers, so indeed an integer does not have to be a texture, shader, whatever, or even the same integer could be both a texture and a shader for instance. In WebGL they are boxed typed objects... from an API discoverability perspective it is confusing imho to have a signature IsTexture that takes a WebGLTexture object as parameter... it looks like a helper for (obj instanceof WebGLTexture) but this is not what it is about, a isValid/IsAlive(WebGLObject) function would make the intent of the function clearer, removing that confusion and making our API leaner imho (including when we add other objects later, e.g WebGLQuery someday probably). Regards, ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Sat Jun 12 07:26:47 2010 From: cma...@ (Chris Marrin) Date: Sat, 12 Jun 2010 07:26:47 -0700 Subject: [Public WebGL] Need to start getting serious about shaders Message-ID: I've been going around the web this morning looking at WebGL content and I am concerned. There are several examples that have a variety of shader problems. The two most notable ones are: 1) doing math between floats and ints without an explicit cast (allowed on some NVidia drivers but not ATI on Mac) 2) Using Features of GLSL 1.2 like float arrays with constructors Add to that the fact that NONE of the existing content will work in OpenGL ES because they don't have precision qualifiers (at least none that I've found). So I have a comment and a question. My comment is that pretty much everything will break once we turn on shader validation. I know we've talked about it, I just wanted to prepare everyone for that fact. My question is, I know the ANGLE shader validator will deal with the precision qualifier issue, but does it also check for the above two errors? I would assume it will correctly reject (2) since that is a syntactic problem. But does it validate data types and operations and correctly reject (1)? ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Sat Jun 12 08:12:46 2010 From: cma...@ (Chris Marrin) Date: Sat, 12 Jun 2010 08:12:46 -0700 Subject: [Public WebGL] A different approach to interleaved typed arrays In-Reply-To: References: <1288978043.353894.1275680833801.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Jun 11, 2010, at 11:22 AM, Oliver Hunt wrote: > I was thinking an alternative approach to the array of structures approach (which looks to be extraordinarily complicated if it's at all achievable) would be a structure of arrays, eg. you define your type as in the last structure concept, but instead of > > array[i].x > > you would do > > array.x[i] And what do I get back if I say 'array.x'? Would it be an Array, a TypedArray or something else? And I assume if you did that you would get a new object with a copy of the array elements. If so, can you avoid creating that object with 'array.x[i]? If so, this seems like an interesting proposal. And what object type would 'array' be? How would you define the structure? ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Sat Jun 12 08:29:20 2010 From: cma...@ (Chris Marrin) Date: Sat, 12 Jun 2010 08:29:20 -0700 Subject: [Public WebGL] WebGLExtension interface proposal In-Reply-To: References: Message-ID: <09288DE1-F64A-4AB5-B670-337186B54210@apple.com> On Jun 11, 2010, at 11:22 PM, Cedric Vivier wrote: > Hi Chris! > > On Sat, Jun 12, 2010 at 04:49, Chris Marrin wrote: >> I think we may have discussed this and rejected it as not being necessary. I may even have been an advocate of this approach but was convinced that such arbitrary base classes are unnecessary. But I may be confusing this issue with another similar one. > > I agree this might not be necessary in Javascript, however this would > really simplify stronger-typed bindings where 'object' are to be > avoided whenever possible. Right, but a given language binding is free to add any sort of more strongly typed objects as needed. I think the reason for the existance of 'object' is to avoid the need of locking the API into a particular hierarchy unnecessarily. > > >> Even if we did have a base class for extensions, I would not want it to be derived from WebGLObject. That base interface has a very specific meaning as the WebGL realization of the OpenGL Object concept. An extension is not part of that concept, so it would be inappropriate for it to be in that class hierarchy. > > I agree the extensions are not really part of that concept in the > current revision of the spec, however I think we had a consensus > before to do the following change (which hasn't been acted upon in the > spec yet) : > > """ > * in 5.16.1 WebGLContextLostEvent: > replace: > "Once the context is reset, the application must reinitialize the > context's state and recreate all of its WebGL resources such as > textures, shaders and programs." > > with: > "Once the context is reset, the application must reinitialize the > context's state and recreate all of its WebGL resources such as > textures, shaders, programs and extensions. > The set of extensions returned by getSupportedExtensions() is not > guaranteed to be the same when the context is resumed, the application > must test and enable each supported extension individually again." > """ > > (from original message in thread "Addition to WebGLContextLostEvent > wrt extensions") > > With that addition in mind I assume the concepts are very related : > boxed WebGL objects that are acquired and valid only within the same > context and within the same lifetime (before context loss), extension > objects fit that model. I don't think the concepts are that related. The WebGL objects represent actual GL resources. The extension objects just represent a wrapper around a capability. You restore WebGL objects because the GL objects they represent need to be reconstructed. The extension objects need to be restored because you may have a different set of extensions available. These are different concepts. > > >> >> I don't see a need to expose the extension name in this interface. We don't expose identifications like this for any other object types in WebGL. I am comfortable with the interface returned by each extension being unique and unrelated to the others. > > Okay, we can do without 'name' attribute, no strong opinion on that, > my intent was to avoid the 'empty' extension object behavior in some > cases. There are no empty objects, only empty :-) The very existence of an extension object is an indication of the availability and activation of that extension. That in and of itself is useful. > > >>> Possibly we might as well simplify API and merge all is* functions into only one, e.g a probably better named "GLboolean isValid(WebGLObject object)". >>> (all current is* functions can trivially be wrapped to use it in Javascript so there is no feature removal and/or divergence imho) >> >> I don't think this is a good idea for similar reasons. > > On the other hand, we already did similar thing with all > getString/getBoolean/getFloat/getInteger functions merged into > getParameter ... in OpenGL the typed versions of every function > (IsTexture, IsShader, ...) are necessary because the names are just > plain integers, so indeed an integer does not have to be a texture, > shader, whatever, or even the same integer could be both a texture and > a shader for instance. True and I'm not strongly opposed to this suggestion. I just don't think it's particularly helpful and makes the code a bit less readable. Unifying the get calls doesn't do damage because you pass a parameter indicating the information you want so good documentation is maintained. From a documentation standpoint I think 'isShader(myThing)' is better than 'isValid(myThing)'. > > In WebGL they are boxed typed objects... from an API discoverability > perspective it is confusing imho to have a signature IsTexture that > takes a WebGLTexture object as parameter... it looks like a helper for > (obj instanceof WebGLTexture) but this is not what it is about, a > isValid/IsAlive(WebGLObject) function would make the intent of the > function clearer, removing that confusion and making our API leaner > imho (including when we add other objects later, e.g WebGLQuery > someday probably). Yeah, true, and again, I'm not strongly opposed to it. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Sat Jun 12 13:54:40 2010 From: vla...@ (Vladimir Vukicevic) Date: Sat, 12 Jun 2010 13:54:40 -0700 (PDT) Subject: [Public WebGL] Need to start getting serious about shaders In-Reply-To: Message-ID: <1580755822.428800.1276376080139.JavaMail.root@cm-mail03.mozilla.org> One thing to note for the precision qualifiers.. shaders that are otherwise compatible except for a missing precision qualifier can be made to work both with current implementations and future post-validation implementations by just adding: #ifdef GL_ES precision mediump float; #endif to the start of every fragment shader. I've been doing that on various demos as I've been testing on mobile, and things largely work fine. We should have the ANGLE shader validator integrated in the coming few days, and there will be a pref that authors can flip to enable it -- it will default to off initially, but we will flip it on at the same time that we check in the TexImage2D API changes. - Vlad ----- "Chris Marrin" wrote: > I've been going around the web this morning looking at WebGL content > and I am concerned. There are several examples that have a variety of > shader problems. The two most notable ones are: > > 1) doing math between floats and ints without an explicit cast > (allowed on some NVidia drivers but not ATI on Mac) > > 2) Using Features of GLSL 1.2 like float arrays with constructors > > Add to that the fact that NONE of the existing content will work in > OpenGL ES because they don't have precision qualifiers (at least none > that I've found). > > So I have a comment and a question. My comment is that pretty much > everything will break once we turn on shader validation. I know we've > talked about it, I just wanted to prepare everyone for that fact. My > question is, I know the ANGLE shader validator will deal with the > precision qualifier issue, but does it also check for the above two > errors? I would assume it will correctly reject (2) since that is a > syntactic problem. But does it validate data types and operations and > correctly reject (1)? > > ----- > ~Chris > cmarrin...@ > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Sat Jun 12 18:12:21 2010 From: ste...@ (Steve Baker) Date: Sat, 12 Jun 2010 20:12:21 -0500 Subject: [Public WebGL] Help needed to configure browser In-Reply-To: References: Message-ID: <4C143075.4060509@sjbaker.org> There are some Intel motherboard chipsets that don't have OpenGL drivers in some versions of Windows - in those situations, you need to enable the software implementation of OpenGL (which is another checkbox in the about:config menu). However, doing that is going to be abysmally slow for all but the simplest cases. But if you have even a halfway decent graphics device on your machine (an nVidia 6800 card from 2004 runs WebGL just fine on my oldest computer!) - then the things you did should have been enough to make it work. Since that didn't work, we have to assume that your graphics system doesn't have OpenGL drivers. (It would be nice if the JavaScript canvas creation stuff gave the application enough information to produce a better error message under these circumstances!) -- Steve > On Fri, Jun 11, 2010 at 13:49, Tahin Rahman wrote: > >> Hello all, >> I am Tahin, an amateur to openGL and beginner to WebGL. I am facing problem >> to configure WebGL in my browser. >> I've tried in Firefox Nightly version 3.7a4 by going to "about:config" and >> changing 'webgl.enable_for_all_sites' to true. >> I also tried in Google Chrome 6.0.427.0 dev by modifying the desktop >> shortcut target to "chrome.exe --enable -webgl". >> In both cases when I visit here it says "Sorry, your browser does not appear >> to support WebGL (or it is disabled)." >> Now I have no idea what to do. Did I miss anything step of browser >> configuration? Please help me out >> Thanks >> -- >> Tahin >> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Sat Jun 12 18:32:15 2010 From: cma...@ (Chris Marrin) Date: Sat, 12 Jun 2010 18:32:15 -0700 Subject: [Public WebGL] Help needed to configure browser In-Reply-To: <4C143075.4060509@sjbaker.org> References: <4C143075.4060509@sjbaker.org> Message-ID: <86E175A3-BF64-47EA-B9B0-AC2537510407@apple.com> On Jun 12, 2010, at 6:12 PM, Steve Baker wrote: > There are some Intel motherboard chipsets that don't have OpenGL drivers > in some versions of Windows - in those situations, you need to enable > the software implementation of OpenGL (which is another checkbox in the > about:config menu). However, doing that is going to be abysmally slow > for all but the simplest cases. > > But if you have even a halfway decent graphics device on your machine > (an nVidia 6800 card from 2004 runs WebGL just fine on my oldest > computer!) - then the things you did should have been enough to make it > work. Since that didn't work, we have to assume that your graphics > system doesn't have OpenGL drivers. > > (It would be nice if the JavaScript canvas creation stuff gave the > application enough information to produce a better error message under > these circumstances!) Hopefully when we have ANGLE based WebGL implementations, these problems should go away. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From gil...@ Sun Jun 13 08:53:00 2010 From: gil...@ (Giles Thomas) Date: Sun, 13 Jun 2010 16:53:00 +0100 Subject: [Public WebGL] Need to start getting serious about shaders In-Reply-To: <1580755822.428800.1276376080139.JavaMail.root@cm-mail03.mozilla.org> References: <1580755822.428800.1276376080139.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On 12 June 2010 21:54, Vladimir Vukicevic wrote: > We should have the ANGLE shader validator integrated in the coming few > days, and there will be a pref that authors can flip to enable it -- it will > default to off initially, but we will flip it on at the same time that we > check in the TexImage2D API changes. > This sounds like a great plan; as the source of some of the existing demos, I'll make sure that my stuff gets precision qualifiers and any other required GLSL changes as quickly as possible after this. Will you be posting to the list when it's live? Regards, Giles -- Giles Thomas giles...@ http://www.gilesthomas.com/ http://learningwebgl.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From cma...@ Sun Jun 13 19:31:41 2010 From: cma...@ (Chris Marrin) Date: Sun, 13 Jun 2010 19:31:41 -0700 Subject: [Public WebGL] Need to start getting serious about shaders In-Reply-To: References: <1580755822.428800.1276376080139.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Jun 13, 2010, at 8:53 AM, Giles Thomas wrote: > On 12 June 2010 21:54, Vladimir Vukicevic wrote: > We should have the ANGLE shader validator integrated in the coming few days, and there will be a pref that authors can flip to enable it -- it will default to off initially, but we will flip it on at the same time that we check in the TexImage2D API changes. > > This sounds like a great plan; as the source of some of the existing demos, I'll make sure that my stuff gets precision qualifiers and any other required GLSL changes as quickly as possible after this. Will you be posting to the list when it's live? > Yes, but I expect that there will be a couple of weeks of churn as the various browsers get the feature turned on ----- ~Chris cmarrin...@ -------------- next part -------------- An HTML attachment was scrubbed... URL: From cal...@ Sun Jun 13 20:50:08 2010 From: cal...@ (Mark Callow) Date: Mon, 14 Jun 2010 12:50:08 +0900 Subject: [Public WebGL] Need to start getting serious about shaders In-Reply-To: References: Message-ID: <4C15A6F0.6050700@hicorp.co.jp> On 12/06/2010 23:26, Chris Marrin wrote: > 1) doing math between floats and ints without an explicit cast (allowed on some NVidia drivers but not ATI on Mac) > It's not really relevant for WebGL but you have to go as far back as the very first version of GLSL for desktop, i.e. v1.10, to find no implicit casting. Since v1.20, int expressions can be implicitly converted to float but not vice-versa. Without knowing the expression involved it is not possible say if you are suffering from a very old ATI driver or a buggy driver (either ATI or NVidia). Regards -Mark -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 398 bytes Desc: not available URL: From vla...@ Sun Jun 13 22:47:03 2010 From: vla...@ (Vladimir Vukicevic) Date: Sun, 13 Jun 2010 22:47:03 -0700 (PDT) Subject: [Public WebGL] Need to start getting serious about shaders In-Reply-To: <1947475852.434995.1276494311254.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <1171314155.434997.1276494423472.JavaMail.root@cm-mail03.mozilla.org> ----- "Mark Callow" wrote: > On 12/06/2010 23:26, Chris Marrin wrote: > > 1) doing math between floats and ints without an explicit cast > (allowed on some NVidia drivers but not ATI on Mac) > > It's not really relevant for WebGL but you have to go as far back as the > very first version of GLSL for desktop, i.e. v1.10, to find no implicit > casting. Since v1.20, int expressions can be implicitly converted to > float but not vice-versa. Without knowing the expression involved it is > not possible say if you are suffering from a very old ATI driver or a > buggy driver (either ATI or NVidia). Unless the shader included a #version directive though, it should have been treated as 110 and the lack of casting should have failed. Granted, not having implicit casting is a little annoying, but GLSL ES doesn't have them, so.. - Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From bpd...@ Mon Jun 14 07:52:59 2010 From: bpd...@ (Benjamin DeLillo) Date: Mon, 14 Jun 2010 10:52:59 -0400 Subject: [Public WebGL] Need to start getting serious about shaders In-Reply-To: <1171314155.434997.1276494423472.JavaMail.root@cm-mail03.mozilla.org> References: <1947475852.434995.1276494311254.JavaMail.root@cm-mail03.mozilla.org> <1171314155.434997.1276494423472.JavaMail.root@cm-mail03.mozilla.org> Message-ID: Intel integrated drivers also lack the implicit cast. On Mon, Jun 14, 2010 at 1:47 AM, Vladimir Vukicevic wrote: > > ----- "Mark Callow" wrote: > > > On 12/06/2010 23:26, Chris Marrin wrote: > > > 1) doing math between floats and ints without an explicit cast > > (allowed on some NVidia drivers but not ATI on Mac) > > > > It's not really relevant for WebGL but you have to go as far back as the > > very first version of GLSL for desktop, i.e. v1.10, to find no implicit > > casting. Since v1.20, int expressions can be implicitly converted to > > float but not vice-versa. Without knowing the expression involved it is > > not possible say if you are suffering from a very old ATI driver or a > > buggy driver (either ATI or NVidia). > > Unless the shader included a #version directive though, it should have been > treated as 110 and the lack of casting should have failed. Granted, not > having implicit casting is a little annoying, but GLSL ES doesn't have them, > so.. > > - Vlad > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oli...@ Mon Jun 14 11:02:28 2010 From: oli...@ (Oliver Hunt) Date: Mon, 14 Jun 2010 11:02:28 -0700 Subject: [Public WebGL] Need to start getting serious about shaders In-Reply-To: References: <1947475852.434995.1276494311254.JavaMail.root@cm-mail03.mozilla.org> <1171314155.434997.1276494423472.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <833EB8CE-4163-4F80-8B8D-E2BCADA30660@apple.com> And of course we have to ensure compatible behaviour across all implementations across as many systems as possible which means that all the implementations will have to ensure as close to identical behaviour as possible, regardless of platform. eg. with the exception of resource constraints all code that works on one combination of device+platform should work on another. So the validator will need to either reject all code using implicit casts, or correct for the absence of explicit casts. --Oliver On Jun 14, 2010, at 7:52 AM, Benjamin DeLillo wrote: > Intel integrated drivers also lack the implicit cast. > > On Mon, Jun 14, 2010 at 1:47 AM, Vladimir Vukicevic wrote: > > ----- "Mark Callow" wrote: > > > On 12/06/2010 23:26, Chris Marrin wrote: > > > 1) doing math between floats and ints without an explicit cast > > (allowed on some NVidia drivers but not ATI on Mac) > > > > It's not really relevant for WebGL but you have to go as far back as the > > very first version of GLSL for desktop, i.e. v1.10, to find no implicit > > casting. Since v1.20, int expressions can be implicitly converted to > > float but not vice-versa. Without knowing the expression involved it is > > not possible say if you are suffering from a very old ATI driver or a > > buggy driver (either ATI or NVidia). > > Unless the shader included a #version directive though, it should have been treated as 110 and the lack of casting should have failed. Granted, not having implicit casting is a little annoying, but GLSL ES doesn't have them, so.. > > - Vlad > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ala...@ Mon Jun 14 13:13:24 2010 From: ala...@ (Alan Chaney) Date: Mon, 14 Jun 2010 13:13:24 -0700 Subject: [Public WebGL] Linux 64bit/minefield 3.7a6pre Message-ID: <4C168D64.9050501@mechnicality.com> Hi I have a Linux 64 bit Ubuntu 10.04 system with a ATI 5770 graphics card and the "propriertary" FGLX driver installed. glxinfo reports: direct rendering: Yes server glx vendor string: ATI server glx version string: 1.4 server glx extensions: GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_OML_swap_method, GLX_SGI_make_current_read, GLX_SGI_swap_control, GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group client glx vendor string: ATI client glx version string: 1.4 client glx extensions: ....... OpenGL vendor string: ATI Technologies Inc. OpenGL renderer string: ATI Radeon HD 5700 Series OpenGL version string: 3.2.9756 Compatibility Profile Context OpenGL shading language version string: 1.50 OpenGL extensions: ..... Chromium 6.0.427.0 dev runs WebGL fine (using --enable-webgl in the invocation.) With Minefield 3.7a6pre and about:config ---> webgl.enabled_for_all_sites;true I get: Canvas 3D: creating PBuffer... Canvas 3D: can't get a native PBuffer, trying OSMesa... Canvas 3D: can't create a OSMesa pseudo-PBuffer. Does anyone have any suggestions? TIA Alan ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From bja...@ Mon Jun 14 13:26:44 2010 From: bja...@ (Benoit Jacob) Date: Mon, 14 Jun 2010 13:26:44 -0700 (PDT) Subject: [Public WebGL] Linux 64bit/minefield 3.7a6pre In-Reply-To: <1133636820.441976.1276547137832.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <613302038.441992.1276547204431.JavaMail.root@cm-mail03.mozilla.org> Hi, On X-based systems, in Minefield, WebGL depends (for its ability to use OpenGL) on this bug being fixed: https://bugzilla.mozilla.org/show_bug.cgi?id=565833 I am already successfully using these patches (note it depends on 569775) with my nvidia card. Hopefully these changes will be checked in soon. Otherwise, you can always use software rendering with OSMesa, see http://www.khronos.org/webgl/wiki/Getting_a_WebGL_Implementation Benoit ----- "Alan Chaney" wrote: > Hi > > I have a Linux 64 bit Ubuntu 10.04 system with a ATI 5770 graphics > card > and the "propriertary" FGLX driver installed. > > glxinfo reports: > > direct rendering: Yes > server glx vendor string: ATI > server glx version string: 1.4 > server glx extensions: > GLX_ARB_multisample, GLX_EXT_import_context, > GLX_EXT_texture_from_pixmap, > GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_OML_swap_method, > GLX_SGI_make_current_read, GLX_SGI_swap_control, > GLX_SGIS_multisample, > GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, > GLX_SGIX_visual_select_group > client glx vendor string: ATI > client glx version string: 1.4 > client glx extensions: > ....... > > OpenGL vendor string: ATI Technologies Inc. > OpenGL renderer string: ATI Radeon HD 5700 Series > OpenGL version string: 3.2.9756 Compatibility Profile Context > OpenGL shading language version string: 1.50 > OpenGL extensions: > ..... > > > Chromium 6.0.427.0 dev runs WebGL fine (using --enable-webgl in the > invocation.) > With Minefield 3.7a6pre and about:config ---> > webgl.enabled_for_all_sites;true I get: > > Canvas 3D: creating PBuffer... > Canvas 3D: can't get a native PBuffer, trying OSMesa... > Canvas 3D: can't create a OSMesa pseudo-PBuffer. > > Does anyone have any suggestions? > > TIA > > Alan > > > > > > > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ala...@ Mon Jun 14 13:40:00 2010 From: ala...@ (Alan Chaney) Date: Mon, 14 Jun 2010 13:40:00 -0700 Subject: [Public WebGL] Linux 64bit/minefield 3.7a6pre In-Reply-To: <613302038.441992.1276547204431.JavaMail.root@cm-mail03.mozilla.org> References: <613302038.441992.1276547204431.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <4C1693A0.7050700@mechnicality.com> Thanks, Benoit Regards Alan On 06/14/2010 01:26 PM, Benoit Jacob wrote: > Hi, > > On X-based systems, in Minefield, WebGL depends (for its ability to use OpenGL) on this bug being fixed: > > https://bugzilla.mozilla.org/show_bug.cgi?id=565833 > > I am already successfully using these patches (note it depends on 569775) with my nvidia card. Hopefully these changes will be checked in soon. Otherwise, you can always use software rendering with OSMesa, see > http://www.khronos.org/webgl/wiki/Getting_a_WebGL_Implementation > > Benoit > > ----- "Alan Chaney" wrote: > > >> Hi >> >> I have a Linux 64 bit Ubuntu 10.04 system with a ATI 5770 graphics >> card >> and the "propriertary" FGLX driver installed. >> >> glxinfo reports: >> >> direct rendering: Yes >> server glx vendor string: ATI >> server glx version string: 1.4 >> server glx extensions: >> GLX_ARB_multisample, GLX_EXT_import_context, >> GLX_EXT_texture_from_pixmap, >> GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_OML_swap_method, >> GLX_SGI_make_current_read, GLX_SGI_swap_control, >> GLX_SGIS_multisample, >> GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, >> GLX_SGIX_visual_select_group >> client glx vendor string: ATI >> client glx version string: 1.4 >> client glx extensions: >> ....... >> >> OpenGL vendor string: ATI Technologies Inc. >> OpenGL renderer string: ATI Radeon HD 5700 Series >> OpenGL version string: 3.2.9756 Compatibility Profile Context >> OpenGL shading language version string: 1.50 >> OpenGL extensions: >> ..... >> >> >> Chromium 6.0.427.0 dev runs WebGL fine (using --enable-webgl in the >> invocation.) >> With Minefield 3.7a6pre and about:config ---> >> webgl.enabled_for_all_sites;true I get: >> >> Canvas 3D: creating PBuffer... >> Canvas 3D: can't get a native PBuffer, trying OSMesa... >> Canvas 3D: can't create a OSMesa pseudo-PBuffer. >> >> Does anyone have any suggestions? >> >> TIA >> >> Alan >> >> >> >> >> >> >> >> >> >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> > > !DSPAM:4c1691e013541361014391! > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cal...@ Mon Jun 14 20:01:49 2010 From: cal...@ (Mark Callow) Date: Tue, 15 Jun 2010 12:01:49 +0900 Subject: [Public WebGL] Need to start getting serious about shaders In-Reply-To: <833EB8CE-4163-4F80-8B8D-E2BCADA30660@apple.com> References: <1947475852.434995.1276494311254.JavaMail.root@cm-mail03.mozilla.org> <1171314155.434997.1276494423472.JavaMail.root@cm-mail03.mozilla.org> <833EB8CE-4163-4F80-8B8D-E2BCADA30660@apple.com> Message-ID: <4C16ED1D.7020005@hicorp.co.jp> On 15/06/2010 03:02, Oliver Hunt wrote: > And of course we have to ensure compatible behaviour across all > implementations across as many systems as possible which means that > all the implementations will have to ensure as close to identical > behaviour as possible, regardless of platform. > > eg. with the exception of resource constraints all code that works on > one combination of device+platform should work on another. So the > validator will need to either reject all code using implicit casts, or > correct for the absence of explicit casts. The validator needs to reject all code using implicit casts as they are not supported in GLSL ES. Regards -Mark -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 398 bytes Desc: not available URL: From gma...@ Mon Jun 14 23:08:56 2010 From: gma...@ (Gregg Tavares) Date: Mon, 14 Jun 2010 23:08:56 -0700 Subject: [Public WebGL] Need to start getting serious about shaders In-Reply-To: <4C16ED1D.7020005@hicorp.co.jp> References: <1947475852.434995.1276494311254.JavaMail.root@cm-mail03.mozilla.org> <1171314155.434997.1276494423472.JavaMail.root@cm-mail03.mozilla.org> <833EB8CE-4163-4F80-8B8D-E2BCADA30660@apple.com> <4C16ED1D.7020005@hicorp.co.jp> Message-ID: We'll (I'll) add some more conformance tests for this. The current tests to test for that kind of stuff are here. https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/conformance/glsl-conformance.html I'll add a few casting ones asap. On Mon, Jun 14, 2010 at 8:01 PM, Mark Callow wrote: > On 15/06/2010 03:02, Oliver Hunt wrote: > > And of course we have to ensure compatible behaviour across all > > implementations across as many systems as possible which means that > > all the implementations will have to ensure as close to identical > > behaviour as possible, regardless of platform. > > > > eg. with the exception of resource constraints all code that works on > > one combination of device+platform should work on another. So the > > validator will need to either reject all code using implicit casts, or > > correct for the absence of explicit casts. > The validator needs to reject all code using implicit casts as they are > not supported in GLSL ES. > > Regards > > -Mark > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cal...@ Tue Jun 15 01:59:16 2010 From: cal...@ (Mark Callow) Date: Tue, 15 Jun 2010 17:59:16 +0900 Subject: [Public WebGL] problematic GetParameter pnames In-Reply-To: <4C104D6C.1080506@hicorp.co.jp> References: <151852184.401586.1276119480539.JavaMail.root@cm-mail03.mozilla.org> <4C104D6C.1080506@hicorp.co.jp> Message-ID: <4C1740E4.7080207@hicorp.co.jp> On 10/06/2010 11:26, Mark Callow wrote: > The first three are equivalent respectively to the following in Open > GL 3.1: > MAX_FRAGMENT_UNIFORM_COMPONENTS/4 > MAX_VERTEX_UNIFORM_COMPONENTS/4 > MAX_VARYING_COMPONENTS/4 > When I wrote the above, I did not realize that MAX_VARYING_COMPONENTS has been deprecated. as of GL 3.2. If you want to be future proof you should implement MAX_VARYING_VECTORS as min(MAX_VERTEX_OUTPUT_COMPONENTS, MAX_FRAGMENT_INPUT_COMPONENTS) / 4 Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 412 bytes Desc: not available URL: From bja...@ Tue Jun 15 04:11:14 2010 From: bja...@ (Benoit Jacob) Date: Tue, 15 Jun 2010 04:11:14 -0700 (PDT) Subject: [Public WebGL] Linux 64bit/minefield 3.7a6pre In-Reply-To: <1605998169.450668.1276599978526.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <126080290.450672.1276600274984.JavaMail.root@cm-mail03.mozilla.org> Vladimir just checked it in: http://hg.mozilla.org/mozilla-central/rev/3b3e795a1c2e ----- "Alan Chaney" wrote: > Thanks, Benoit > > Regards > > Alan > > On 06/14/2010 01:26 PM, Benoit Jacob wrote: > > Hi, > > > > On X-based systems, in Minefield, WebGL depends (for its ability to > use OpenGL) on this bug being fixed: > > > > https://bugzilla.mozilla.org/show_bug.cgi?id=565833 > > > > I am already successfully using these patches (note it depends on > 569775) with my nvidia card. Hopefully these changes will be checked > in soon. Otherwise, you can always use software rendering with OSMesa, > see > > http://www.khronos.org/webgl/wiki/Getting_a_WebGL_Implementation > > > > Benoit > > > > ----- "Alan Chaney" wrote: > > > > > >> Hi > >> > >> I have a Linux 64 bit Ubuntu 10.04 system with a ATI 5770 graphics > >> card > >> and the "propriertary" FGLX driver installed. > >> > >> glxinfo reports: > >> > >> direct rendering: Yes > >> server glx vendor string: ATI > >> server glx version string: 1.4 > >> server glx extensions: > >> GLX_ARB_multisample, GLX_EXT_import_context, > >> GLX_EXT_texture_from_pixmap, > >> GLX_EXT_visual_info, GLX_EXT_visual_rating, > GLX_OML_swap_method, > >> GLX_SGI_make_current_read, GLX_SGI_swap_control, > >> GLX_SGIS_multisample, > >> GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, > >> GLX_SGIX_visual_select_group > >> client glx vendor string: ATI > >> client glx version string: 1.4 > >> client glx extensions: > >> ....... > >> > >> OpenGL vendor string: ATI Technologies Inc. > >> OpenGL renderer string: ATI Radeon HD 5700 Series > >> OpenGL version string: 3.2.9756 Compatibility Profile Context > >> OpenGL shading language version string: 1.50 > >> OpenGL extensions: > >> ..... > >> > >> > >> Chromium 6.0.427.0 dev runs WebGL fine (using --enable-webgl in > the > >> invocation.) > >> With Minefield 3.7a6pre and about:config ---> > >> webgl.enabled_for_all_sites;true I get: > >> > >> Canvas 3D: creating PBuffer... > >> Canvas 3D: can't get a native PBuffer, trying OSMesa... > >> Canvas 3D: can't create a OSMesa pseudo-PBuffer. > >> > >> Does anyone have any suggestions? > >> > >> TIA > >> > >> Alan > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> ----------------------------------------------------------- > >> You are currently subscribed to public_webgl...@ > >> To unsubscribe, send an email to majordomo...@ with > >> the following command in the body of your email: > >> > > > > !DSPAM:4c1691e013541361014391! > > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Tue Jun 15 06:48:51 2010 From: cma...@ (Chris Marrin) Date: Tue, 15 Jun 2010 06:48:51 -0700 Subject: [Public WebGL] Need to start getting serious about shaders In-Reply-To: <833EB8CE-4163-4F80-8B8D-E2BCADA30660@apple.com> References: <1947475852.434995.1276494311254.JavaMail.root@cm-mail03.mozilla.org> <1171314155.434997.1276494423472.JavaMail.root@cm-mail03.mozilla.org> <833EB8CE-4163-4F80-8B8D-E2BCADA30660@apple.com> Message-ID: <1D7B35DC-3EE4-4010-BB43-B142D28DA748@apple.com> On Jun 14, 2010, at 11:02 AM, Oliver Hunt wrote: > And of course we have to ensure compatible behaviour across all implementations across as many systems as possible which means that all the implementations will have to ensure as close to identical behaviour as possible, regardless of platform. > > eg. with the exception of resource constraints all code that works on one combination of device+platform should work on another. So the validator will need to either reject all code using implicit casts, or correct for the absence of explicit casts. Given that GLSL ES disallows implicit casts, it seems clear that the validator should simply reject them. I don't know of any cases where explicit casts are not allowed. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ced...@ Tue Jun 15 08:46:25 2010 From: ced...@ (Cedric Vivier) Date: Tue, 15 Jun 2010 23:46:25 +0800 Subject: [Public WebGL] WebGLExtension interface proposal In-Reply-To: <09288DE1-F64A-4AB5-B670-337186B54210@apple.com> References: <09288DE1-F64A-4AB5-B670-337186B54210@apple.com> Message-ID: On Sat, Jun 12, 2010 at 23:29, Chris Marrin wrote: >> On Sat, Jun 12, 2010 at 04:49, Chris Marrin wrote: >>> I think we may have discussed this and rejected it as not being necessary. I may even have been an advocate of this approach but was convinced that such arbitrary base classes are unnecessary. But I may be confusing this issue with another similar one. >> >> I agree this might not be necessary in Javascript, however this would >> really simplify stronger-typed bindings where 'object' are to be >> avoided whenever possible. > > Right, but a given language binding is free to add any sort of more strongly typed objects as needed. I think the reason for the existance of 'object' is to avoid the need of locking the API into a particular hierarchy unnecessarily. That's a good point, indeed it adds "unnecessary hierarchy" and language bindings are free to add more strongly typed objects as needed. But I see this rather as an advantage, this base class makes sure all language bindings can use the same base class name (whatever their implementation is) so it helps WebGL code portability between different bindings. Also this could prove necessary/useful if we need to add one method to all extensions that we might not have envisioned yet... granted this might fall in the "too much flexibility" trap, however since this is an extension system we are talking about increased flexibility is not a disadvantage in this specific case imho. >> With that addition in mind I assume the concepts are very related : >> boxed WebGL objects that are acquired and valid only within the same >> context and within the same lifetime (before context loss), extension >> objects fit that model. > > I don't think the concepts are that related. The WebGL objects represent actual GL resources. The extension objects just represent a wrapper around a capability. You restore WebGL objects because the GL objects they represent need to be reconstructed. The extension objects need to be restored because you may have a different set of extensions available. These are different concepts. Slightly different but very similar concepts imo, extension objects need not to be restored only because you may have a different set of extensions available, even when the same set of extensions are available the underlying function pointers might need to be updated for use within the new context (or driver even). Extensions objects are rather a collection of function pointers, which are also GL resources that might have a direct relationship with the context they have been requested from (through eglGetProcAddress, wglGetProcAddres, ...). You could say as well that you restore WebGL extensions because the GL objects (ie. *GL function pointers) they represent need to be reconstructed (ie. updated/reloaded through *getProcAddress from the newly constructed GL context). Regards, ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Tue Jun 15 20:47:33 2010 From: ste...@ (stephen white) Date: Wed, 16 Jun 2010 13:17:33 +0930 Subject: [Public WebGL] OpenCL in WebGL? Message-ID: Even with the current efforts to optimise things, I can only get about 500 draw commands for interactive frame rates, even though each draw can be arbitrarily complex once the GPU has been told what to do. http://www.cmsoft.com.br/index.php?option=com_content&view=category&layout=blog&id=99&Itemid=150 So something like the above article and video looks like an interesting way around the slow Javascript problems. To see what I mean, change lesson 9 from learningwebgl to have 500 points. -- steve...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From lee...@ Tue Jun 15 23:17:43 2010 From: lee...@ (Lee Sandberg) Date: Wed, 16 Jun 2010 08:17:43 +0200 Subject: [Public WebGL] OpenCL in WebGL? In-Reply-To: References: Message-ID: Hi If we could have one thread (first) updating textures an issuing draw calls only and the second thread running the appilication and updating the scenegraph that would be better. Even better if the draw loop can be native which includes alot of scenegraph traversing etc. Hmm, not easy getting good preformance in real time graphics, it has been my bread and butter for a good while now at some companies having serious problems even if they use C++ and 3D engines such as Unreal 3, Good luck, Lee On Wed, Jun 16, 2010 at 5:47 AM, stephen white wrote: > Even with the current efforts to optimise things, I can only get about 500 > draw commands for interactive frame rates, even though each draw can be > arbitrarily complex once the GPU has been told what to do. > > > http://www.cmsoft.com.br/index.php?option=com_content&view=category&layout=blog&id=99&Itemid=150 > > So something like the above article and video looks like an interesting way > around the slow Javascript problems. To see what I mean, change lesson 9 > from learningwebgl to have 500 points. > > -- > steve...@ > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cma...@ Wed Jun 16 07:58:00 2010 From: cma...@ (Chris Marrin) Date: Wed, 16 Jun 2010 07:58:00 -0700 Subject: [Public WebGL] OpenCL in WebGL? In-Reply-To: References: Message-ID: On Jun 15, 2010, at 8:47 PM, stephen white wrote: > Even with the current efforts to optimise things, I can only get about 500 draw commands for interactive frame rates, even though each draw can be arbitrarily complex once the GPU has been told what to do. > > http://www.cmsoft.com.br/index.php?option=com_content&view=category&layout=blog&id=99&Itemid=150 > > So something like the above article and video looks like an interesting way around the slow Javascript problems. To see what I mean, change lesson 9 from learningwebgl to have 500 points. It really all depends on when mobile hardware gets these capabilities. Geometry shaders is another technology that would do many of the things that OpenCL interoperability can do. Neither of these are available on any mobile hardware today, although the OpenCL group is working on OpenCL ES, which may be the soonest technology available on mobile hardware, even if it would have to be partly implemented in software, using something like LLVM (http://llvm.org/devmtg/2009-10/OpenCLWithLLVM.pdf). So these things will happen. We just have to wait a while. In the meantime, it would be interesting to understand why your content has the performance it does. Can you sample it? It may be the interface to Typed Arrays being slow, which can be significantly improved. Or it may be in some heavily used WebGL API call, which might be optimizable some. We should make sure we have all the fast path logic possible in place before resorting to other technologies to solve performance problems. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Wed Jun 16 08:01:01 2010 From: cma...@ (Chris Marrin) Date: Wed, 16 Jun 2010 08:01:01 -0700 Subject: [Public WebGL] OpenCL in WebGL? In-Reply-To: References: Message-ID: <7B79EDB3-51DC-439F-9CA6-3F9167390575@apple.com> On Jun 15, 2010, at 11:17 PM, Lee Sandberg wrote: > Hi > > If we could have one thread (first) updating textures an issuing draw calls only and the second thread running the appilication and updating the scenegraph that would be better. Even better if the draw loop can be native which includes alot of scenegraph traversing etc. Yes, investigating using Web Workers would be interesting. That will most likely have performance issues with the communication between the threads, which today is string based. But we've discussed ways to make it possible to pass data between threads using Typed Arrays. But if we get experience with Web Workers, I'm sure we can find some areas for optimization. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From alo...@ Wed Jun 16 12:28:03 2010 From: alo...@ (Alok Priyadarshi) Date: Wed, 16 Jun 2010 13:28:03 -0600 Subject: [Public WebGL] Need to start getting serious about shaders In-Reply-To: References: <1947475852.434995.1276494311254.JavaMail.root@cm-mail03.mozilla.org> <1171314155.434997.1276494423472.JavaMail.root@cm-mail03.mozilla.org> <833EB8CE-4163-4F80-8B8D-E2BCADA30660@apple.com> <1D7B35DC-3EE4-4010-BB43-B142D28DA748@apple.com> Message-ID: On Wed, Jun 16, 2010 at 12:40 PM, Alok Priyadarshi wrote: > > >> Given that GLSL ES disallows implicit casts, it seems clear that the >> validator should simply reject them. I don't know of any cases where >> explicit casts are not allowed. >> >> > I agree. We should just reject shaders with implicit casts. I will file a > bug against ANGLE GLSL translator. > > I have filed a bug here: http://code.google.com/p/angleproject/issues/detail?id=6 -------------- next part -------------- An HTML attachment was scrubbed... URL: From psa...@ Wed Jun 16 17:04:10 2010 From: psa...@ (Paul Sawaya) Date: Wed, 16 Jun 2010 17:04:10 -0700 Subject: [Public WebGL] WebGL Shader Validation Message-ID: Hi, What is the current state of WebGL shader validation? I'm curious, since shader containing a trivial infinite loop (while (true) {} ) crashes my entire system in the latest Safari. Google's ANGLE verifies this shader without a problem, and leaves the loop intact. Thanks Paul ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From gma...@ Wed Jun 16 17:18:50 2010 From: gma...@ (Gregg Tavares) Date: Wed, 16 Jun 2010 17:18:50 -0700 Subject: [Public WebGL] WebGL Shader Validation In-Reply-To: References: Message-ID: What's the definition of "crash"? There are lots of shaders that will effectively freeze OSX for long enough that it will appear as though the entire system is frozen. The system is waiting for the GPU to finish and if that takes 30 minutes then the system will wait 30 minutes. Don't know about the infinite loop case Window7 will do a hard reset of the GPU after a few seconds so the user gets his machine back. I'm personally hoping Apple will add similar behavior to OSX and iOS. -g On Wed, Jun 16, 2010 at 5:04 PM, Paul Sawaya wrote: > Hi, > > What is the current state of WebGL shader validation? > > I'm curious, since shader containing a trivial infinite loop (while (true) > {} ) crashes my entire system in the latest Safari. Google's ANGLE verifies > this shader without a problem, and leaves the loop intact. > > Thanks > Paul > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cal...@ Wed Jun 16 19:20:43 2010 From: cal...@ (Mark Callow) Date: Thu, 17 Jun 2010 11:20:43 +0900 Subject: [Public WebGL] OpenCL in WebGL? In-Reply-To: References: Message-ID: <4C19867B.3010902@hicorp.co.jp> I agree with Chris about the need to first understand the current bottlenecks. Besides, just what performance is going to be like when a GPU, especially a mobile GPU, is context switching between OpenGL and OpenCL is still largely unknown. Regards -Mark On 16/06/2010 23:58, Chris Marrin wrote: > It really all depends on when mobile hardware gets these capabilities. Geometry shaders is another technology that would do many of the things that OpenCL interoperability can do. Neither of these are available on any mobile hardware today, although the OpenCL group is working on OpenCL ES, which may be the soonest technology available on mobile hardware, even if it would have to be partly implemented in software, using something like LLVM (http://llvm.org/devmtg/2009-10/OpenCLWithLLVM.pdf). So these things will happen. We just have to wait a while. > > In the meantime, it would be interesting to understand why your content has the performance it does. Can you sample it? It may be the interface to Typed Arrays being slow, which can be significantly improved. Or it may be in some heavily used WebGL API call, which might be optimizable some. We should make sure we have all the fast path logic possible in place before resorting to other technologies to solve performance problems. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 398 bytes Desc: not available URL: From kbr...@ Wed Jun 16 19:33:56 2010 From: kbr...@ (Kenneth Russell) Date: Wed, 16 Jun 2010 19:33:56 -0700 Subject: [Public WebGL] TypedArray constructors and zero length In-Reply-To: References: <116006142.398438.1276108173757.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Thu, Jun 10, 2010 at 1:15 PM, Cedric Vivier wrote: > On Fri, Jun 11, 2010 at 01:49, Kenneth Russell wrote: >> On Thu, Jun 10, 2010 at 2:12 AM, Cedric Vivier wrote: >>> FWIW I wrote some tests on this today, Mozilla has the sane behavior >>> of throwing an exception ("invalid array size") when length is >>> negative while WebKit swaps like crazy attempting to allocate >>> memory... and in the end returns "undefined". >> >> After looking back at the JavaScript bindings in WebKit, the intended >> behavior is that this should throw an INDEX_SIZE_ERR exception. See >> http://trac.webkit.org/browser/trunk/WebCore/bindings/js/JSArrayBufferConstructor.cpp >> , line 61, and http://trac.webkit.org/browser/trunk/WebCore/bindings/v8/custom/V8ArrayBufferCustom.cpp >> , line 77. I'm not sure why this isn't happening. Feel free to file a >> bug on http://bugs.webkit.org/ . > > > Interesting, I guess in V8's case it setting DOM exception is not > enough as an exception object must be passed/returned to the runtime. > > Looks like the behavior is inconsistent between the 2 WebKit bindings > even for some corner cases (NaN/+inf/-inf gives 0 on JSCore - as > WebIDL specifies - but throws on the V8 binding). > > Mozilla does indeed treat the input as an int32 internally and > specifically check if the value is negative (which also means "too > big" >2^31 depending how you look at it sure, but in the end it just > works as intended by WebIDL's unsigned long... a signed value passed > as parameter will not work and won't have side-effects [like > attempting to allocate possibly huge chunk of memory]) : > http://mxr.mozilla.org/mozilla-central/source/js/src/jstypedarray.cpp#137 > > I think reproducing Mozilla's behavior in both WebKit bindings would > make a lot of sense, for instance with this patch for V8's binding : > http://neonux.com/webgl/v8-binding-arraybuffer.patch Thanks for the initial patch. Generally agree about clarifying the error behavior. Some different and additional error checks were needed. FYI, this is being worked on under https://bugs.webkit.org/show_bug.cgi?id=40755 . -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From aru...@ Wed Jun 16 19:35:41 2010 From: aru...@ (Arun Ranganathan) Date: Wed, 16 Jun 2010 19:35:41 -0700 Subject: [Public WebGL] WebGL WG Chair | Transitions, etc. Message-ID: <4C1989FD.6040100@mozilla.com> Dear WebGL WG, As I'm transitioning on from Mozilla, I thought I'd send email to this list saying that as of July, I'll no longer be the Chair of the WebGL WG. Vlad Vukicevic, who many of you know through his posts, has offered to step up and be interim Chair till we ship WebGL 1.0. Members of the WG will confer and make sure that the work is transitioned smoothly, especially since we're making such rapid progress. Look forward to seeing many of you out in the wild at technology events, and look forward to continuing to watch the public listserv :) WebGL has come a long way, and the demos look great! -- A* ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Thu Jun 17 00:33:10 2010 From: ste...@ (stephen white) Date: Thu, 17 Jun 2010 17:03:10 +0930 Subject: [Public WebGL] OpenCL in WebGL? In-Reply-To: References: Message-ID: <747C6289-90F8-42FC-A199-09E9FEC5D6D6@adam.com.au> On 17/06/2010, at 12:28 AM, Chris Marrin wrote: > In the meantime, it would be interesting to understand why your > content has the performance it does. Can you sample it? It may be > the interface to Typed Arrays being slow, which can be significantly > improved. Or it may be in some heavily used WebGL API call, which > might be optimizable some. We should make sure we have all the fast > path logic possible in place before resorting to other technologies > to solve performance problems. Hi Chris, I have confirmed with Thatcher that we're seeing approximately the same results via the benchmark, and he is musing about porting to C++ for an A/B test to verify where bottlenecks are coming into the picture. I am pleased to hear that there is an OpenCL ES on its way, and have no current issue other than a wish list for more speed, which I'm sure everyone shares. :) Steve. -- steve...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ala...@ Thu Jun 17 05:09:24 2010 From: ala...@ (Alan Chaney) Date: Thu, 17 Jun 2010 05:09:24 -0700 Subject: [Public WebGL] WebGL Shader Validation In-Reply-To: References: Message-ID: <4C1A1074.6040309@mechnicality.com> So a simple XSS exploit on a high volume site could potentially leave millions of machines frozen displaying some possibly extremely unsavory message? The targeted site doesn't have to have anything to do with displaying 3D content. Its not clear whether a GPU response timeout should be part of the spec. but without it a WebGL enabled browser exposes one hell of a vulnerability which the opponents of WebGL, for example those in Redmond, will, in turn, do their best to exploit for their own nefarious purposes. Regards Alan On 06/16/2010 05:18 PM, Gregg Tavares wrote: > What's the definition of "crash"? > > There are lots of shaders that will effectively freeze OSX for long > enough that it will appear as though the entire system is frozen. The > system is waiting for the GPU to finish and if that takes 30 minutes > then the system will wait 30 minutes. Don't know about the infinite > loop case > > Window7 will do a hard reset of the GPU after a few seconds so the > user gets his machine back. > > I'm personally hoping Apple will add similar behavior to OSX and iOS. > > -g > > > On Wed, Jun 16, 2010 at 5:04 PM, Paul Sawaya > wrote: > > Hi, > > What is the current state of WebGL shader validation? > > I'm curious, since shader containing a trivial infinite loop > (while (true) {} ) crashes my entire system in the latest Safari. > Google's ANGLE verifies this shader without a problem, and leaves > the loop intact. > > Thanks > Paul > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > . > To unsubscribe, send an email to majordomo...@ > with > the following command in the body of your email: > > > !DSPAM:4c196be2212643966023671! -------------- next part -------------- An HTML attachment was scrubbed... URL: From ala...@ Thu Jun 17 06:12:47 2010 From: ala...@ (Alan Chaney) Date: Thu, 17 Jun 2010 06:12:47 -0700 Subject: [Public WebGL] WebGL Shader Validation In-Reply-To: <4C1A1074.6040309@mechnicality.com> References: <4C1A1074.6040309@mechnicality.com> Message-ID: <4C1A1F4F.8020605@mechnicality.com> Just for clarification on re-reading my message - I'm not suggesting that MS will attempt to exploit vulnerabilities in WebGL. My point is that they will use any perceived security vulnerabilities as a reason why it will be doomed to fail. Alan On 06/17/2010 05:09 AM, Alan Chaney wrote: > So a simple XSS exploit on a high volume site could potentially leave > millions of machines frozen displaying some possibly extremely > unsavory message? The targeted site doesn't have to have anything to > do with displaying 3D content. > > Its not clear whether a GPU response timeout should be part of the > spec. but without it a WebGL enabled browser exposes one hell of a > vulnerability which the opponents of WebGL, for example those in > Redmond, will, in turn, do their best to exploit for their own > nefarious purposes. > > Regards > > Alan > > > > > On 06/16/2010 05:18 PM, Gregg Tavares wrote: >> What's the definition of "crash"? >> >> There are lots of shaders that will effectively freeze OSX for long >> enough that it will appear as though the entire system is frozen. The >> system is waiting for the GPU to finish and if that takes 30 minutes >> then the system will wait 30 minutes. Don't know about the infinite >> loop case >> >> Window7 will do a hard reset of the GPU after a few seconds so the >> user gets his machine back. >> >> I'm personally hoping Apple will add similar behavior to OSX and iOS. >> >> -g >> >> >> On Wed, Jun 16, 2010 at 5:04 PM, Paul Sawaya > > wrote: >> >> Hi, >> >> What is the current state of WebGL shader validation? >> >> I'm curious, since shader containing a trivial infinite loop >> (while (true) {} ) crashes my entire system in the latest Safari. >> Google's ANGLE verifies this shader without a problem, and leaves >> the loop intact. >> >> Thanks >> Paul >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> . >> To unsubscribe, send an email to majordomo...@ >> with >> the following command in the body of your email: >> >> > > !DSPAM:4c1a128655172117114708! -------------- next part -------------- An HTML attachment was scrubbed... URL: From cma...@ Thu Jun 17 09:12:03 2010 From: cma...@ (Chris Marrin) Date: Thu, 17 Jun 2010 09:12:03 -0700 Subject: [Public WebGL] WebGL Shader Validation In-Reply-To: References: Message-ID: On Jun 16, 2010, at 5:18 PM, Gregg Tavares wrote: > What's the definition of "crash"? > > There are lots of shaders that will effectively freeze OSX for long enough that it will appear as though the entire system is frozen. The system is waiting for the GPU to finish and if that takes 30 minutes then the system will wait 30 minutes. Don't know about the infinite loop case > > Window7 will do a hard reset of the GPU after a few seconds so the user gets his machine back. > > I'm personally hoping Apple will add similar behavior to OSX and iOS. I believe our driver team is working on recovering from errant shaders more quickly. But that's not the point. There are two issues here. First, while loops are not required in OpenGL ES and I thought we were going to disallow them. Second, the shader validator should at the very least be able to detect simple infinite loops like this. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Thu Jun 17 11:47:44 2010 From: kbr...@ (Kenneth Russell) Date: Thu, 17 Jun 2010 11:47:44 -0700 Subject: [Public WebGL] WebGL WG Chair | Transitions, etc. In-Reply-To: <4C1989FD.6040100@mozilla.com> References: <4C1989FD.6040100@mozilla.com> Message-ID: On Wed, Jun 16, 2010 at 7:35 PM, Arun Ranganathan wrote: > Dear WebGL WG, > > As I'm transitioning on from Mozilla, I thought I'd send email to this list > saying that as of July, I'll no longer be the Chair of the WebGL WG. ?Vlad > Vukicevic, who many of you know through his posts, has offered to step up > and be interim Chair till we ship WebGL 1.0. ?Members of the WG will confer > and make sure that the work is transitioned smoothly, especially since we're > making such rapid progress. > > Look forward to seeing many of you out in the wild at technology events, and > look forward to continuing to watch the public listserv :) ?WebGL has come a > long way, and the demos look great! Thanks for your good work as chair of the WebGL working group, and best of luck in your future endeavors! -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Thu Jun 17 17:02:06 2010 From: cma...@ (Chris Marrin) Date: Thu, 17 Jun 2010 17:02:06 -0700 Subject: [Public WebGL] Shader validation and limitations Message-ID: If you look at section A.4 of the SLES spec: http://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf you'll see the minimal requirements of GLSL ES shaders, specifically control flow. I think we should adopt these rules in their entirety. It will not ensure that all shaders are safe, but it will avoid obvious mistakes like infinite loops. It also reduces the floorplan of the shader language a bit to make it easier to to do a better job of validation. I also like Vlad's suggestion of restricting loop indices to ints (as opposed to ints plus floats). This will allow us to accurately determine the number of iterations a for loop will perform so we can choose to reject those that go over some limit. I believe this solves the halting problem issue, (although I suspect Ken disagrees with me). But doesn't necessarily prevent a shader from running for an extremely long time, which I suppose is the same thing in most cases. I think we should be as restrictive as possible in the first release. It is much easier to relax restrictions than to tighten them. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Thu Jun 17 17:49:49 2010 From: ste...@ (Steve Baker) Date: Thu, 17 Jun 2010 19:49:49 -0500 Subject: [Public WebGL] WebGL Shader Validation In-Reply-To: <4C1A1074.6040309@mechnicality.com> References: <4C1A1074.6040309@mechnicality.com> Message-ID: <4C1AC2AD.3010900@sjbaker.org> There is a rather fundamental problem with detecting infinite loops either by code inspection or simulation. Alan Turing had a few words to say about that: http://en.wikipedia.org/wiki/Halting_problem It is a mathematical certainty that no "verifier" can possibly detect whether an arbitary program will eventually halt. Absolutely the ONLY way out of that trap is to limit our shader programming language to not be "Turing Complete"...which is a rather nasty limitation for some kinds of shader algorithm. Since there is little or no practical use for shader programs that don't complete within a very small fraction of a second - a one second (or so) watchdog timer would seem to be the best - and perhaps only - pragmatic solution. We could even allow users to set how long the timer would wait before killing the web page in some 'about:config' page just in case they want to use WebGL for some kind of non-traditional use (eg CUDA-like applications). -- Steve ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Thu Jun 17 18:46:27 2010 From: vla...@ (Vladimir Vukicevic) Date: Thu, 17 Jun 2010 18:46:27 -0700 (PDT) Subject: [Public WebGL] WebGL Shader Validation In-Reply-To: <4C1AC2AD.3010900@sjbaker.org> Message-ID: <1433175242.486311.1276825587193.JavaMail.root@cm-mail03.mozilla.org> ----- "Steve Baker" wrote: > Since there is little or no practical use for shader programs that don't > complete within a very small fraction of a second - a one second (or so) > watchdog timer would seem to be the best - and perhaps only - pragmatic > solution. We could even allow users to set how long the timer would > wait before killing the web page in some 'about:config' page just in > case they want to use WebGL for some kind of non-traditional use (eg > CUDA-like applications). If we had a way to do this (abort execution while inside a long-running shader), we wouldn't be having this conversation :-) The problem is that in most cases your entire system is locked up waiting for the GPU to come back; Windows Vista/7 handles this case by forcibly resetting the entire graphics card (and thus killing any contexts), which is probably the best response for now. But that functionality isn't available to any programs to use directly, and no equivalent exists on other platforms. Something might happen on that front, but even if it does we need a solution for older (current) driver versions; that solution might be "you don't get to run complex shader programs until you upgrade your drivers", though. - Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Thu Jun 17 19:28:23 2010 From: ste...@ (Steve Baker) Date: Thu, 17 Jun 2010 21:28:23 -0500 Subject: [Public WebGL] Shader validation and limitations In-Reply-To: References: Message-ID: <4C1AD9C7.6040907@sjbaker.org> If "It will not ensure that shaders are safe" (ie if it leaves loopholes for the malicious) then all you're doing is needlessly restricting the good-guys. Shaders that get stuck in infinite loops are super-spectacularly rare in ordinary development and are almost always found during basic debugging. For any measurable number of them to show up in the wild, they'll have to be malicious - and then any loophole, however tiny is as bad as leaving the gate wide open. IMHO, we should only really care about this in the context of malicious people deliberately trying to shut down people's computers. If we can't tweak the spec to make it utterly impossible for infinite loops (or even ridiculously long but finite loops) - then we might as well not bother since we'll need a timeout/kill mechanism anyway. The timeout approach is a good one because it directly applies a limit to user-inconvenience - no matter how it's inflicted. If people are using CPU-based software-emulated shaders then the number of loops that will annoy a user is much lower than for those with an honest to goodness GPU. So a fixed (say) 1 second timeout is "The Right Thing". As far as most people are concerned, a computer that's "locked up" for even 30 seconds is as bad as a locked-up-forever computer because they're going to push the big red button before they ever find out that it would eventually have completed. -- Steve Chris Marrin wrote: > If you look at section A.4 of the SLES spec: > > http://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf > > you'll see the minimal requirements of GLSL ES shaders, specifically control flow. I think we should adopt these rules in their entirety. It will not ensure that all shaders are safe, but it will avoid obvious mistakes like infinite loops. It also reduces the floorplan of the shader language a bit to make it easier to to do a better job of validation. > > I also like Vlad's suggestion of restricting loop indices to ints (as opposed to ints plus floats). This will allow us to accurately determine the number of iterations a for loop will perform so we can choose to reject those that go over some limit. > > I believe this solves the halting problem issue, (although I suspect Ken disagrees with me). But doesn't necessarily prevent a shader from running for an extremely long time, which I suppose is the same thing in most cases. > > I think we should be as restrictive as possible in the first release. It is much easier to relax restrictions than to tighten them. > > ----- > ~Chris > cmarrin...@ > > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Thu Jun 17 20:43:26 2010 From: ste...@ (stephen white) Date: Fri, 18 Jun 2010 13:13:26 +0930 Subject: [Public WebGL] Shader validation and limitations In-Reply-To: <4C1AD9C7.6040907@sjbaker.org> References: <4C1AD9C7.6040907@sjbaker.org> Message-ID: <3F31F47F-F6E2-4A8C-83A5-C6E2B14198A3@adam.com.au> On 18/06/2010, at 11:58 AM, Steve Baker wrote: > The timeout approach is a good one because it directly applies a limit > to user-inconvenience - no matter how it's inflicted. If people are while (1) malicious_shader(); The whole system being frozen is the big issue, and reminds me of the days before multi-tasking. The obvious option then seems to be some form of shader multi-tasking, with the problem shaders able to be selected and killed off over time. -- steve...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Thu Jun 17 21:34:51 2010 From: ste...@ (Steve Baker) Date: Thu, 17 Jun 2010 23:34:51 -0500 Subject: [Public WebGL] Shader validation and limitations In-Reply-To: <3F31F47F-F6E2-4A8C-83A5-C6E2B14198A3@adam.com.au> References: <4C1AD9C7.6040907@sjbaker.org> <3F31F47F-F6E2-4A8C-83A5-C6E2B14198A3@adam.com.au> Message-ID: <4C1AF76B.2050103@sjbaker.org> stephen white wrote: > On 18/06/2010, at 11:58 AM, Steve Baker wrote: >> The timeout approach is a good one because it directly applies a limit >> to user-inconvenience - no matter how it's inflicted. If people are > while (1) malicious_shader(); > > The whole system being frozen is the big issue, and reminds me of the > days before multi-tasking. The obvious option then seems to be some > form of shader multi-tasking, with the problem shaders able to be > selected and killed off over time. But if the GPU can't do that - (and I'm pretty sure it can't) then it's no solution. GPU's are far from being general purpose computers - there are things that they are simply unable to do. In the 'while(1) malicious_shader();' case, the 'while(1)' part would be in the CPU - where we can break in and kill it...so that's no more a problem than writing 'while(1);' in JavaScript and "locking up the machine" that way. As I understand it - the problem is with shaders that run for a crazy amount of time without ever letting the CPU finish waiting for them. If the shader stops and returns - for however little time - then the problem is easy to solve without resorting to new code in the underlying OpenGL driver. -- Steve ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ced...@ Thu Jun 17 21:49:40 2010 From: ced...@ (Cedric Vivier) Date: Fri, 18 Jun 2010 12:49:40 +0800 Subject: [Public WebGL] Shader validation and limitations In-Reply-To: References: Message-ID: On Fri, Jun 18, 2010 at 08:02, Chris Marrin wrote: > I believe this solves the halting problem issue, (although I suspect Ken disagrees with me). But doesn't necessarily prevent a shader from running for an extremely long time, which I suppose is the same thing in most cases. It seems Ken and/or others investigated this issue in depth months ago, is there any document available demonstrating all shader constructs - besides loops - found to possibly take an extremely long time to run ? Regards, ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From oli...@ Thu Jun 17 22:24:32 2010 From: oli...@ (Oliver Hunt) Date: Thu, 17 Jun 2010 22:24:32 -0700 Subject: [Public WebGL] WebGL Shader Validation In-Reply-To: <1433175242.486311.1276825587193.JavaMail.root@cm-mail03.mozilla.org> References: <1433175242.486311.1276825587193.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Jun 17, 2010, at 6:46 PM, Vladimir Vukicevic wrote: > > ----- "Steve Baker" wrote: > >> Since there is little or no practical use for shader programs that don't >> complete within a very small fraction of a second - a one second (or so) >> watchdog timer would seem to be the best - and perhaps only - pragmatic >> solution. We could even allow users to set how long the timer would >> wait before killing the web page in some 'about:config' page just in >> case they want to use WebGL for some kind of non-traditional use (eg >> CUDA-like applications). > > If we had a way to do this (abort execution while inside a long-running shader), we wouldn't be having this conversation :-) The problem is that in most cases your entire system is locked up waiting for the GPU to come back; Windows Vista/7 handles this case by forcibly resetting the entire graphics card (and thus killing any contexts), which is probably the best response for now. But that functionality isn't available to any programs to use directly, and no equivalent exists on other platforms. Something might happen on that front, but even if it does we need a solution for older (current) driver versions; that solution might be "you don't get to run complex shader programs until you upgrade your drivers", though. The other issue that exists is that even if the OS was willing to kill off the GPU insanely quickly (say 5ms), i could still write a piece of code that hammered the GPU by restarting the bad shader. This would (effectively) produce a DoS of the entire machine. All of the OS/driver responses to broken shaders is based on the assumption that the code is accidental, not that it is malicious. We can't make the same assumption. --Oliver > > - Vlad > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Thu Jun 17 22:57:04 2010 From: vla...@ (Vladimir Vukicevic) Date: Thu, 17 Jun 2010 22:57:04 -0700 (PDT) Subject: [Public WebGL] WebGL Shader Validation In-Reply-To: Message-ID: <1280593933.486924.1276840624908.JavaMail.root@cm-mail03.mozilla.org> ----- "Oliver Hunt" wrote: > The other issue that exists is that even if the OS was willing to kill > off the GPU insanely quickly (say 5ms), i could still write a piece of > code that hammered the GPU by restarting the bad shader. This would > (effectively) produce a DoS of the entire machine. Well, at that point the browser itself can get in play -- if it knows that the shader was killed off (or even if it knows that the entire system was reset, like we can find out on Windows because we lose contexts, I believe), we can blacklist webgl for that page (until reload or something similiar). I think the key piece is making sure that control is returned to the browser within a reasonable amount of time. If it is, then we can build on top of that. But if we never get control, that's where the problems lie. - Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ced...@ Thu Jun 17 23:19:57 2010 From: ced...@ (Cedric Vivier) Date: Fri, 18 Jun 2010 14:19:57 +0800 Subject: [Public WebGL] TypedArray constructors and zero length In-Reply-To: References: <116006142.398438.1276108173757.JavaMail.root@cm-mail03.mozilla.org> Message-ID: I've noticed it's already been fixed in WebKit trunk. I think behavior clarifications as in the bug's point 1 to 3 should be included in the TypedArrays spec to ensure consistent behavior and proper test coverage. Regards, On Thu, Jun 17, 2010 at 10:33, Kenneth Russell wrote: > On Thu, Jun 10, 2010 at 1:15 PM, Cedric Vivier wrote: >> On Fri, Jun 11, 2010 at 01:49, Kenneth Russell wrote: >>> On Thu, Jun 10, 2010 at 2:12 AM, Cedric Vivier wrote: >>>> FWIW I wrote some tests on this today, Mozilla has the sane behavior >>>> of throwing an exception ("invalid array size") when length is >>>> negative while WebKit swaps like crazy attempting to allocate >>>> memory... and in the end returns "undefined". >>> >>> After looking back at the JavaScript bindings in WebKit, the intended >>> behavior is that this should throw an INDEX_SIZE_ERR exception. See >>> http://trac.webkit.org/browser/trunk/WebCore/bindings/js/JSArrayBufferConstructor.cpp >>> , line 61, and http://trac.webkit.org/browser/trunk/WebCore/bindings/v8/custom/V8ArrayBufferCustom.cpp >>> , line 77. I'm not sure why this isn't happening. Feel free to file a >>> bug on http://bugs.webkit.org/ . >> >> >> Interesting, I guess in V8's case it setting DOM exception is not >> enough as an exception object must be passed/returned to the runtime. >> >> Looks like the behavior is inconsistent between the 2 WebKit bindings >> even for some corner cases (NaN/+inf/-inf gives 0 on JSCore - as >> WebIDL specifies - but throws on the V8 binding). >> >> Mozilla does indeed treat the input as an int32 internally and >> specifically check if the value is negative (which also means "too >> big" >2^31 depending how you look at it sure, but in the end it just >> works as intended by WebIDL's unsigned long... a signed value passed >> as parameter will not work and won't have side-effects [like >> attempting to allocate possibly huge chunk of memory]) : >> http://mxr.mozilla.org/mozilla-central/source/js/src/jstypedarray.cpp#137 >> >> I think reproducing Mozilla's behavior in both WebKit bindings would >> make a lot of sense, for instance with this patch for V8's binding : >> http://neonux.com/webgl/v8-binding-arraybuffer.patch > > Thanks for the initial patch. Generally agree about clarifying the > error behavior. Some different and additional error checks were > needed. FYI, this is being worked on under > https://bugs.webkit.org/show_bug.cgi?id=40755 . > > -Ken > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Fri Jun 18 07:55:04 2010 From: cma...@ (Chris Marrin) Date: Fri, 18 Jun 2010 07:55:04 -0700 Subject: [Public WebGL] WebGL Shader Validation In-Reply-To: References: <1433175242.486311.1276825587193.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Jun 17, 2010, at 10:24 PM, Oliver Hunt wrote: > On Jun 17, 2010, at 6:46 PM, Vladimir Vukicevic wrote: > >> >> ----- "Steve Baker" wrote: >> >>> Since there is little or no practical use for shader programs that don't >>> complete within a very small fraction of a second - a one second (or so) >>> watchdog timer would seem to be the best - and perhaps only - pragmatic >>> solution. We could even allow users to set how long the timer would >>> wait before killing the web page in some 'about:config' page just in >>> case they want to use WebGL for some kind of non-traditional use (eg >>> CUDA-like applications). >> >> If we had a way to do this (abort execution while inside a long-running shader), we wouldn't be having this conversation :-) The problem is that in most cases your entire system is locked up waiting for the GPU to come back; Windows Vista/7 handles this case by forcibly resetting the entire graphics card (and thus killing any contexts), which is probably the best response for now. But that functionality isn't available to any programs to use directly, and no equivalent exists on other platforms. Something might happen on that front, but even if it does we need a solution for older (current) driver versions; that solution might be "you don't get to run complex shader programs until you upgrade your drivers", though. > > The other issue that exists is that even if the OS was willing to kill off the GPU insanely quickly (say 5ms), i could still write a piece of code that hammered the GPU by restarting the bad shader. This would (effectively) produce a DoS of the entire machine. Javascript will kick in its timeouts if you do this, so that's not a problem. As long as the GPU can kick the shader in a reasonable amount of time and the damage to the system due to the GPU reset is not unrecoverable, you will be able to get back control of your system. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From wan...@ Fri Jun 18 09:09:56 2010 From: wan...@ (=?ISO-8859-1?Q?Cau=EA_Waneck?=) Date: Fri, 18 Jun 2010 13:09:56 -0300 Subject: [Public WebGL] WebGL Shader Validation In-Reply-To: References: <1433175242.486311.1276825587193.JavaMail.root@cm-mail03.mozilla.org> Message-ID: Another solution could be to only allow loops that can be unrolled at compile time. Couldn't it? I actually thought that OpenGL ES 2.0 specs already had this limitation. 2010/6/18 Chris Marrin > > On Jun 17, 2010, at 10:24 PM, Oliver Hunt wrote: > > > On Jun 17, 2010, at 6:46 PM, Vladimir Vukicevic wrote: > > > >> > >> ----- "Steve Baker" wrote: > >> > >>> Since there is little or no practical use for shader programs that > don't > >>> complete within a very small fraction of a second - a one second (or > so) > >>> watchdog timer would seem to be the best - and perhaps only - pragmatic > >>> solution. We could even allow users to set how long the timer would > >>> wait before killing the web page in some 'about:config' page just in > >>> case they want to use WebGL for some kind of non-traditional use (eg > >>> CUDA-like applications). > >> > >> If we had a way to do this (abort execution while inside a long-running > shader), we wouldn't be having this conversation :-) The problem is that in > most cases your entire system is locked up waiting for the GPU to come back; > Windows Vista/7 handles this case by forcibly resetting the entire graphics > card (and thus killing any contexts), which is probably the best response > for now. But that functionality isn't available to any programs to use > directly, and no equivalent exists on other platforms. Something might > happen on that front, but even if it does we need a solution for older > (current) driver versions; that solution might be "you don't get to run > complex shader programs until you upgrade your drivers", though. > > > > The other issue that exists is that even if the OS was willing to kill > off the GPU insanely quickly (say 5ms), i could still write a piece of code > that hammered the GPU by restarting the bad shader. This would > (effectively) produce a DoS of the entire machine. > > Javascript will kick in its timeouts if you do this, so that's not a > problem. As long as the GPU can kick the shader in a reasonable amount of > time and the damage to the system due to the GPU reset is not unrecoverable, > you will be able to get back control of your system. > > ----- > ~Chris > cmarrin...@ > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ala...@ Fri Jun 18 09:25:03 2010 From: ala...@ (Alan Chaney) Date: Fri, 18 Jun 2010 09:25:03 -0700 Subject: [Public WebGL] WebGL Shader Validation In-Reply-To: References: <1433175242.486311.1276825587193.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <4C1B9DDF.7090704@mechnicality.com> I have the OpenGL ES 2 shader specification 1.00 rev 17 open in front of me: It says: (Sec 6.3 Iteration pp57) "Non-terminating loops are allowed. The consequences of very long or non-terminating loops are platform dependent." It appears to me that a syntactically correct loop could be: for ( int i = 0; i < 5; i ++) { i = i - 1; // whatever else you want to do here... } Regards Alan On 06/18/2010 09:09 AM, Cau? Waneck wrote: > Another solution could be to only allow loops that can be unrolled at > compile time. Couldn't it? I actually thought that OpenGL ES 2.0 specs > already had this limitation. > > 2010/6/18 Chris Marrin > > > > On Jun 17, 2010, at 10:24 PM, Oliver Hunt wrote: > > > On Jun 17, 2010, at 6:46 PM, Vladimir Vukicevic wrote: > > > >> > >> ----- "Steve Baker" > wrote: > >> > >>> Since there is little or no practical use for shader programs > that don't > >>> complete within a very small fraction of a second - a one > second (or so) > >>> watchdog timer would seem to be the best - and perhaps only - > pragmatic > >>> solution. We could even allow users to set how long the timer > would > >>> wait before killing the web page in some 'about:config' page > just in > >>> case they want to use WebGL for some kind of non-traditional > use (eg > >>> CUDA-like applications). > >> > >> If we had a way to do this (abort execution while inside a > long-running shader), we wouldn't be having this conversation :-) > The problem is that in most cases your entire system is locked up > waiting for the GPU to come back; Windows Vista/7 handles this > case by forcibly resetting the entire graphics card (and thus > killing any contexts), which is probably the best response for > now. But that functionality isn't available to any programs to > use directly, and no equivalent exists on other platforms. > Something might happen on that front, but even if it does we need > a solution for older (current) driver versions; that solution > might be "you don't get to run complex shader programs until you > upgrade your drivers", though. > > > > The other issue that exists is that even if the OS was willing > to kill off the GPU insanely quickly (say 5ms), i could still > write a piece of code that hammered the GPU by restarting the bad > shader. This would (effectively) produce a DoS of the entire machine. > > Javascript will kick in its timeouts if you do this, so that's not > a problem. As long as the GPU can kick the shader in a reasonable > amount of time and the damage to the system due to the GPU reset > is not unrecoverable, you will be able to get back control of your > system. > > ----- > ~Chris > cmarrin...@ > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > . > To unsubscribe, send an email to majordomo...@ > with > the following command in the body of your email: > > > !DSPAM:4c1b9d49164292117114708! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mir...@ Fri Jun 18 11:05:55 2010 From: mir...@ (Miroslav Karpis) Date: Fri, 18 Jun 2010 20:05:55 +0200 Subject: [Public WebGL] chromium - 6.0.442.0 (50249) Message-ID: Hi all, was working for a while in a older build (everything was running nice and smooth). Now I took the last build and *nothing* works (my program, demos from wiki) - just get a dialog box 'pages non-responsive...etc'. running on Ubuntu 10.04, 64 bit - tried with last build minefield and there it works nice -miro -------------- next part -------------- An HTML attachment was scrubbed... URL: From dan...@ Fri Jun 18 11:29:30 2010 From: dan...@ (Daniel Koch) Date: Fri, 18 Jun 2010 14:29:30 -0400 Subject: [Public WebGL] WebGL Shader Validation In-Reply-To: <4C1B9DDF.7090704@mechnicality.com> References: <1433175242.486311.1276825587193.JavaMail.root@cm-mail03.mozilla.org> <4C1B9DDF.7090704@mechnicality.com> Message-ID: <6E006A0A-95AE-42BC-AC29-4D4135D9A5C9@transgaming.com> Now flip down to Appendix A: Limitations for ES 2.0 (p108), where it says (among other things): 1. OpenGL ES 2.0 implementations are not required to support the full GLSL ES 1.00 specification. ... ... 4. In general, control flow is limited to forward branching and to loops where the maximum number of iterations can easily be determined at compile time. Hope this helps Daniel On 2010-06-18, at 12:25 PM, Alan Chaney wrote: > I have the OpenGL ES 2 shader specification 1.00 rev 17 open in front of me: > > It says: (Sec 6.3 Iteration pp57) > > "Non-terminating loops are allowed. The consequences of very long or non-terminating loops are platform dependent." > > > It appears to me that a syntactically correct loop could be: > > for ( int i = 0; i < 5; i ++) { > > i = i - 1; > > // whatever else you want to do here... > } > > > Regards > > Alan > > > > > > On 06/18/2010 09:09 AM, Cau? Waneck wrote: >> >> Another solution could be to only allow loops that can be unrolled at compile time. Couldn't it? I actually thought that OpenGL ES 2.0 specs already had this limitation. >> >> 2010/6/18 Chris Marrin >> >> On Jun 17, 2010, at 10:24 PM, Oliver Hunt wrote: >> >> > On Jun 17, 2010, at 6:46 PM, Vladimir Vukicevic wrote: >> > >> >> >> >> ----- "Steve Baker" wrote: >> >> >> >>> Since there is little or no practical use for shader programs that don't >> >>> complete within a very small fraction of a second - a one second (or so) >> >>> watchdog timer would seem to be the best - and perhaps only - pragmatic >> >>> solution. We could even allow users to set how long the timer would >> >>> wait before killing the web page in some 'about:config' page just in >> >>> case they want to use WebGL for some kind of non-traditional use (eg >> >>> CUDA-like applications). >> >> >> >> If we had a way to do this (abort execution while inside a long-running shader), we wouldn't be having this conversation :-) The problem is that in most cases your entire system is locked up waiting for the GPU to come back; Windows Vista/7 handles this case by forcibly resetting the entire graphics card (and thus killing any contexts), which is probably the best response for now. But that functionality isn't available to any programs to use directly, and no equivalent exists on other platforms. Something might happen on that front, but even if it does we need a solution for older (current) driver versions; that solution might be "you don't get to run complex shader programs until you upgrade your drivers", though. >> > >> > The other issue that exists is that even if the OS was willing to kill off the GPU insanely quickly (say 5ms), i could still write a piece of code that hammered the GPU by restarting the bad shader. This would (effectively) produce a DoS of the entire machine. >> >> Javascript will kick in its timeouts if you do this, so that's not a problem. As long as the GPU can kick the shader in a reasonable amount of time and the damage to the system due to the GPU reset is not unrecoverable, you will be able to get back control of your system. >> >> ----- >> ~Chris >> cmarrin...@ >> >> >> >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> >> >> !DSPAM:4c1b9d49164292117114708! > --- Daniel Koch -+- daniel...@ Senior Graphics Architect -+- TransGaming Inc. -+- www.transgaming.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From bag...@ Fri Jun 18 11:35:59 2010 From: bag...@ (Patrick Baggett) Date: Fri, 18 Jun 2010 13:35:59 -0500 Subject: [Public WebGL] WebGL Shader Validation In-Reply-To: <6E006A0A-95AE-42BC-AC29-4D4135D9A5C9@transgaming.com> References: <1433175242.486311.1276825587193.JavaMail.root@cm-mail03.mozilla.org> <4C1B9DDF.7090704@mechnicality.com> <6E006A0A-95AE-42BC-AC29-4D4135D9A5C9@transgaming.com> Message-ID: Section A4 also mentions that modifications to the loop counter are not allowed, so: for(int i=0; i<50; i++) { i--; } can't be legal. On Fri, Jun 18, 2010 at 1:29 PM, Daniel Koch wrote: > Now flip down to Appendix A: Limitations for ES 2.0 (p108), where it says > (among other things): > 1. OpenGL ES 2.0 implementations are not required to support the full GLSL > ES 1.00 specification. ... > ... > 4. In general, control flow is limited to forward branching and to loops > where the maximum number of iterations can easily be determined at compile > time. > > Hope this helps > Daniel > > On 2010-06-18, at 12:25 PM, Alan Chaney wrote: > > I have the OpenGL ES 2 shader specification 1.00 rev 17 open in front of > me: > > It says: (Sec 6.3 Iteration pp57) > > "Non-terminating loops are allowed. The consequences of very long or > non-terminating loops are platform dependent." > > > It appears to me that a syntactically correct loop could be: > > for ( int i = 0; i < 5; i ++) { > > i = i - 1; > > // whatever else you want to do here... > } > > > Regards > > Alan > > > > > > On 06/18/2010 09:09 AM, Cau? Waneck wrote: > > Another solution could be to only allow loops that can be unrolled at > compile time. Couldn't it? I actually thought that OpenGL ES 2.0 specs > already had this limitation. > > 2010/6/18 Chris Marrin > >> >> On Jun 17, 2010, at 10:24 PM, Oliver Hunt wrote: >> >> > On Jun 17, 2010, at 6:46 PM, Vladimir Vukicevic wrote: >> > >> >> >> >> ----- "Steve Baker" wrote: >> >> >> >>> Since there is little or no practical use for shader programs that >> don't >> >>> complete within a very small fraction of a second - a one second (or >> so) >> >>> watchdog timer would seem to be the best - and perhaps only - >> pragmatic >> >>> solution. We could even allow users to set how long the timer would >> >>> wait before killing the web page in some 'about:config' page just in >> >>> case they want to use WebGL for some kind of non-traditional use (eg >> >>> CUDA-like applications). >> >> >> >> If we had a way to do this (abort execution while inside a long-running >> shader), we wouldn't be having this conversation :-) The problem is that in >> most cases your entire system is locked up waiting for the GPU to come back; >> Windows Vista/7 handles this case by forcibly resetting the entire graphics >> card (and thus killing any contexts), which is probably the best response >> for now. But that functionality isn't available to any programs to use >> directly, and no equivalent exists on other platforms. Something might >> happen on that front, but even if it does we need a solution for older >> (current) driver versions; that solution might be "you don't get to run >> complex shader programs until you upgrade your drivers", though. >> > >> > The other issue that exists is that even if the OS was willing to kill >> off the GPU insanely quickly (say 5ms), i could still write a piece of code >> that hammered the GPU by restarting the bad shader. This would >> (effectively) produce a DoS of the entire machine. >> >> Javascript will kick in its timeouts if you do this, so that's not a >> problem. As long as the GPU can kick the shader in a reasonable amount of >> time and the damage to the system due to the GPU reset is not unrecoverable, >> you will be able to get back control of your system. >> >> ----- >> ~Chris >> cmarrin...@ >> >> >> >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> >> > !DSPAM:4c1b9d49164292117114708! > > > > --- > Daniel Koch -+- daniel...@ > Senior Graphics Architect -+- TransGaming Inc. -+- www.transgaming.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bag...@ Fri Jun 18 11:36:26 2010 From: bag...@ (Patrick Baggett) Date: Fri, 18 Jun 2010 13:36:26 -0500 Subject: [Public WebGL] WebGL Shader Validation In-Reply-To: References: <4C1A1074.6040309@mechnicality.com> <4C1AC2AD.3010900@sjbaker.org> Message-ID: On Thu, Jun 17, 2010 at 7:49 PM, Steve Baker wrote: > There is a rather fundamental problem with detecting infinite loops > either by code inspection or simulation. Alan Turing had a few words to > say about that: > > http://en.wikipedia.org/wiki/Halting_problem > > It is a mathematical certainty that no "verifier" can possibly detect > whether an arbitary program will eventually halt. > > And here I was thinking some of this stuff we learned in our CS classes would never be used. My two cents: any driver that doesn't have a watchdog timer such as that is sort of like running a cooperative multitasking OS except the programs can be written by anyone on the internet. Anyone at all. Expect terrible things. I agree with Steve, limiting the shaders' flexibility due to an issue that can be solved by "better drivers" is the wrong way to handle the issue. If the halting problem were really a problem for a multitasking OS, then we'd be in deep trouble. I'd like to see a configurable timer for the web browser, but I think the interval of the timer will largely be OS-specific, and not all that configurable. For example, Windows Vista/7 use a registry key (global), and that doesn't seem like something an unprivileged app will/should be able to change. Patrick Baggett -------------- next part -------------- An HTML attachment was scrubbed... URL: From oli...@ Fri Jun 18 11:49:53 2010 From: oli...@ (Oliver Hunt) Date: Fri, 18 Jun 2010 11:49:53 -0700 Subject: [Public WebGL] Shader validation and limitations In-Reply-To: References: Message-ID: <12BC22A7-909F-43FB-824E-23D2C4E80C21@apple.com> On Jun 17, 2010, at 9:49 PM, Cedric Vivier wrote: > On Fri, Jun 18, 2010 at 08:02, Chris Marrin wrote: >> I believe this solves the halting problem issue, (although I suspect Ken disagrees with me). But doesn't necessarily prevent a shader from running for an extremely long time, which I suppose is the same thing in most cases. > > It seems Ken and/or others investigated this issue in depth months > ago, is there any document available demonstrating all shader > constructs - besides loops - found to possibly take an extremely long > time to run ? I believe the trick was to make a very expensive shader, and then throw thousands of large polygons at it. --Oliver > > > Regards, > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From oli...@ Fri Jun 18 11:51:43 2010 From: oli...@ (Oliver Hunt) Date: Fri, 18 Jun 2010 11:51:43 -0700 Subject: [Public WebGL] Shader validation and limitations In-Reply-To: References: Message-ID: <397B28CA-D4E0-4EBB-A7AB-5D5DE88D7598@apple.com> I thought we had agreement a long time ago that WebGL 1.0 would enforce the _minimum_ requirements of a GL|ES implementation, both to reduce the scope for validation, and to try and maximise compatibility across devices. There are plenty of devices for which while(1) ...; Won't compile as the GPU doesn't support actual back branches. --Oliver On Jun 17, 2010, at 5:02 PM, Chris Marrin wrote: > > If you look at section A.4 of the SLES spec: > > http://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf > > you'll see the minimal requirements of GLSL ES shaders, specifically control flow. I think we should adopt these rules in their entirety. It will not ensure that all shaders are safe, but it will avoid obvious mistakes like infinite loops. It also reduces the floorplan of the shader language a bit to make it easier to to do a better job of validation. > > I also like Vlad's suggestion of restricting loop indices to ints (as opposed to ints plus floats). This will allow us to accurately determine the number of iterations a for loop will perform so we can choose to reject those that go over some limit. > > I believe this solves the halting problem issue, (although I suspect Ken disagrees with me). But doesn't necessarily prevent a shader from running for an extremely long time, which I suppose is the same thing in most cases. > > I think we should be as restrictive as possible in the first release. It is much easier to relax restrictions than to tighten them. > > ----- > ~Chris > cmarrin...@ > > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From oli...@ Fri Jun 18 12:03:37 2010 From: oli...@ (Oliver Hunt) Date: Fri, 18 Jun 2010 12:03:37 -0700 Subject: [Public WebGL] WebGL Shader Validation In-Reply-To: References: <1433175242.486311.1276825587193.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <03E457BC-A554-4455-90F3-810E53DD6333@apple.com> On Jun 18, 2010, at 7:55 AM, Chris Marrin wrote: > > On Jun 17, 2010, at 10:24 PM, Oliver Hunt wrote: > >> On Jun 17, 2010, at 6:46 PM, Vladimir Vukicevic wrote: >> >>> >>> ----- "Steve Baker" wrote: >>> >>>> Since there is little or no practical use for shader programs that don't >>>> complete within a very small fraction of a second - a one second (or so) >>>> watchdog timer would seem to be the best - and perhaps only - pragmatic >>>> solution. We could even allow users to set how long the timer would >>>> wait before killing the web page in some 'about:config' page just in >>>> case they want to use WebGL for some kind of non-traditional use (eg >>>> CUDA-like applications). >>> >>> If we had a way to do this (abort execution while inside a long-running shader), we wouldn't be having this conversation :-) The problem is that in most cases your entire system is locked up waiting for the GPU to come back; Windows Vista/7 handles this case by forcibly resetting the entire graphics card (and thus killing any contexts), which is probably the best response for now. But that functionality isn't available to any programs to use directly, and no equivalent exists on other platforms. Something might happen on that front, but even if it does we need a solution for older (current) driver versions; that solution might be "you don't get to run complex shader programs until you upgrade your drivers", though. >> >> The other issue that exists is that even if the OS was willing to kill off the GPU insanely quickly (say 5ms), i could still write a piece of code that hammered the GPU by restarting the bad shader. This would (effectively) produce a DoS of the entire machine. > > Javascript will kick in its timeouts if you do this, so that's not a problem. As long as the GPU can kick the shader in a reasonable amount of time and the damage to the system due to the GPU reset is not unrecoverable, you will be able to get back control of your system. JavaScript timeouts kick in after some arbitrary amount of time -- if memory serves the default is ~10s. I don't want my system unusable for 10s. That said i can also do setInterval(evilFunction, 10) Which will circumvent slow script timers in more or less all browsers. > > ----- > ~Chris > cmarrin...@ > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From oli...@ Fri Jun 18 12:07:50 2010 From: oli...@ (Oliver Hunt) Date: Fri, 18 Jun 2010 12:07:50 -0700 Subject: [Public WebGL] WebGL Shader Validation In-Reply-To: References: <4C1A1074.6040309@mechnicality.com> <4C1AC2AD.3010900@sjbaker.org> Message-ID: <09CC6508-791C-4857-A53D-A47AB8D7F6A0@apple.com> On Jun 18, 2010, at 11:36 AM, Patrick Baggett wrote: > On Thu, Jun 17, 2010 at 7:49 PM, Steve Baker wrote: > There is a rather fundamental problem with detecting infinite loops > either by code inspection or simulation. Alan Turing had a few words to > say about that: > > http://en.wikipedia.org/wiki/Halting_problem > > It is a mathematical certainty that no "verifier" can possibly detect > whether an arbitary program will eventually halt. > > > And here I was thinking some of this stuff we learned in our CS classes would never be used. > > My two cents: any driver that doesn't have a watchdog timer such as that is sort of like running a cooperative multitasking OS except the programs can be written by anyone on the internet. Anyone at all. Expect terrible things. > > I agree with Steve, limiting the shaders' flexibility due to an issue that can be solved by "better drivers" is the wrong way to handle the issue. If the halting problem were really a problem for a multitasking OS, then we'd be in deep trouble. I'd like to see a configurable timer for the web browser, but I think the interval of the timer will largely be OS-specific, and not all that configurable. For example, Windows Vista/7 use a registry key (global), and that doesn't seem like something an unprivileged app will/should be able to change. This isn't a driver issue -- the design of the hardware is such that you throw a shader at the gpu and the gpu doesn't come back until it's done. The only OS response is to reset the entire GPU at the hardware level, which you don't want to do to rapidly or you will break "safe" code on slower gpus. --Oliver > > Patrick Baggett > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ilm...@ Fri Jun 18 13:04:39 2010 From: ilm...@ (Ilmari Heikkinen) Date: Fri, 18 Jun 2010 23:04:39 +0300 Subject: [Public WebGL] Shader validation and limitations In-Reply-To: <12BC22A7-909F-43FB-824E-23D2C4E80C21@apple.com> References: <12BC22A7-909F-43FB-824E-23D2C4E80C21@apple.com> Message-ID: 2010/6/18 Oliver Hunt : > > On Jun 17, 2010, at 9:49 PM, Cedric Vivier wrote: > >> On Fri, Jun 18, 2010 at 08:02, Chris Marrin wrote: >>> I believe this solves the halting problem issue, (although I suspect Ken disagrees with me). But doesn't necessarily prevent a shader from running for an extremely long time, which I suppose is the same thing in most cases. >> >> It seems Ken and/or others investigated this issue in depth months >> ago, is there any document available demonstrating all shader >> constructs - besides loops - found to possibly take an extremely long >> time to run ? > > I believe the trick was to make a very expensive shader, and then throw thousands of large polygons at it. Or you can just throw a model with a million screen-sized triangles at a trivial shader. The shader complexity (off the top of my head) follows O(geometry * (vertex_shader + fragments_per_geometry * fragment_shader)) Get geometry and fragments_per_geometry high enough (say, 1M each) and even a discard shader takes a trillion ops to finish. If the driver or hardware doesn't allow canceling long-running shaders, it does make defending against this difficult. You'd have to estimate the max runtime for a shader and geometry at drawElements/drawArrays -time and if the estimate is high enough, throw an exception. I guess you could do a similar attack by creating an html layout with a few million glyphs on screen and changing the CSS font settings... Or maybe even just by allocating enough to make the computer swap / OOM-kill processes. But you don't see many sites like that because DoSsing your visitors isn't good for traffic. -- Ilmari ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ala...@ Fri Jun 18 13:06:31 2010 From: ala...@ (Alan Chaney) Date: Fri, 18 Jun 2010 13:06:31 -0700 Subject: [Public WebGL] WebGL Shader Validation In-Reply-To: <6E006A0A-95AE-42BC-AC29-4D4135D9A5C9@transgaming.com> References: <1433175242.486311.1276825587193.JavaMail.root@cm-mail03.mozilla.org> <4C1B9DDF.7090704@mechnicality.com> <6E006A0A-95AE-42BC-AC29-4D4135D9A5C9@transgaming.com> Message-ID: <4C1BD1C7.8040902@mechnicality.com> Hi Daniel On 06/18/2010 11:29 AM, Daniel Koch wrote: > Now flip down to Appendix A: Limitations for ES 2.0 (p108), where it > says (among other things): > 1. OpenGL ES 2.0 implementations are not required to support the full > GLSL ES 1.00 specification. ... > ... > 4. In general, control flow is limited to forward branching and to > loops where the maximum number of iterations can easily be determined > at compile time. > > Hope this helps > Daniel Yes it does. I've now studied Appendix A. To clarify my position, I think that this thread is discussing 3 different things: 1. The requirements for shader validation. 2. The ways in which a shader could be maliciously manipulated to create a denial-of-service exploit. 3. The problems and issues connected with hardware and O/S features relating to overcoming 2. above. My focus is on 2. I believe that for the widespread adoption of WebGL it is essential that all possible steps are taken to avoid DOS attacks. If one of the ways of doing this is to put restrictions on the capabilities of the shaders then I would hope that: 1. Those restrictions should apply to all shader implementations 2. They should be clearly defined in the WebGL specification. Sadly, although the points that have been discussed may help, I have a feeling that without some kind of enforceable timeout, malicious individuals will determine ways to write shaders which can pass validation but will still take an unacceptably long time to execute. Since it appears that the timeout is not going to be easy Regards Alan > > On 2010-06-18, at 12:25 PM, Alan Chaney wrote: > >> I have the OpenGL ES 2 shader specification 1.00 rev 17 open in front >> of me: >> >> It says: (Sec 6.3 Iteration pp57) >> >> "Non-terminating loops are allowed. The consequences of very long or >> non-terminating loops are platform dependent." >> >> >> It appears to me that a syntactically correct loop could be: >> >> for ( int i = 0; i < 5; i ++) { >> >> i = i - 1; >> >> // whatever else you want to do here... >> } >> >> >> Regards >> >> Alan >> >> >> >> >> >> On 06/18/2010 09:09 AM, Cau? Waneck wrote: >>> Another solution could be to only allow loops that can be unrolled >>> at compile time. Couldn't it? I actually thought that OpenGL ES 2.0 >>> specs already had this limitation. >>> >>> 2010/6/18 Chris Marrin > >>> >>> >>> On Jun 17, 2010, at 10:24 PM, Oliver Hunt wrote: >>> >>> > On Jun 17, 2010, at 6:46 PM, Vladimir Vukicevic wrote: >>> > >>> >> >>> >> ----- "Steve Baker" >> > wrote: >>> >> >>> >>> Since there is little or no practical use for shader >>> programs that don't >>> >>> complete within a very small fraction of a second - a one >>> second (or so) >>> >>> watchdog timer would seem to be the best - and perhaps only >>> - pragmatic >>> >>> solution. We could even allow users to set how long the >>> timer would >>> >>> wait before killing the web page in some 'about:config' page >>> just in >>> >>> case they want to use WebGL for some kind of non-traditional >>> use (eg >>> >>> CUDA-like applications). >>> >> >>> >> If we had a way to do this (abort execution while inside a >>> long-running shader), we wouldn't be having this conversation >>> :-) The problem is that in most cases your entire system is >>> locked up waiting for the GPU to come back; Windows Vista/7 >>> handles this case by forcibly resetting the entire graphics card >>> (and thus killing any contexts), which is probably the best >>> response for now. But that functionality isn't available to any >>> programs to use directly, and no equivalent exists on other >>> platforms. Something might happen on that front, but even if it >>> does we need a solution for older (current) driver versions; >>> that solution might be "you don't get to run complex shader >>> programs until you upgrade your drivers", though. >>> > >>> > The other issue that exists is that even if the OS was willing >>> to kill off the GPU insanely quickly (say 5ms), i could still >>> write a piece of code that hammered the GPU by restarting the >>> bad shader. This would (effectively) produce a DoS of the >>> entire machine. >>> >>> Javascript will kick in its timeouts if you do this, so that's >>> not a problem. As long as the GPU can kick the shader in a >>> reasonable amount of time and the damage to the system due to >>> the GPU reset is not unrecoverable, you will be able to get back >>> control of your system. >>> >>> ----- >>> ~Chris >>> cmarrin...@ >>> >>> >>> >>> >>> ----------------------------------------------------------- >>> You are currently subscribed to public_webgl...@ >>> . >>> To unsubscribe, send an email to majordomo...@ >>> with >>> the following command in the body of your email: >>> >>> >> > > --- > Daniel Koch -+- daniel...@ > > Senior Graphics Architect -+- TransGaming Inc. -+- > www.transgaming.com > > !DSPAM:4c1bbd44195772051017194! -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Fri Jun 18 14:11:31 2010 From: gma...@ (Gregg Tavares) Date: Fri, 18 Jun 2010 14:11:31 -0700 Subject: [Public WebGL] Shader validation and limitations In-Reply-To: References: Message-ID: Here are a couple of samples This one hangs my MacbookPro for 10-30 seconds. Sometimes the desktop gets garbage but a click usually restores it. On my Windows7 box the after a few seconds the screen will flash several times and then a notification will appear that the driver has been reset. Your mileage may vary. https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/extra/lots-of-polys-example.html It's using about the simplest textured shader possible but it's drawing 100000 polys at 1024x1024 in 1 draw call. It might be possible to have WebGL break up the draw calls. Unfortunately that could add a significant slowdown to apps since each call to glDrawXXX has a large overhead. It would be good to test that I guess as a possible solution. Here's another https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/extra/big-fbos-example.html This one just allocates some largo FBOs. It doesn't have to render anything to basically kill OSX on my MacBook Pro. On my Windows7 box it eventually runs, at least on my machine. Again your mileage may vary. I suspect this is more of a driver issue, not a GPU issue. I'll see if I can make another with one with far less polygons but a slightly more complex shader. I don't believe any loops are necessary. Also the shader toy, which at this point it time is sadly offline, is able to hang or dos several GPUs. Note: I only tested those samples on top of tree WebKit on OSX and Chromium on Windows7. The first one requires a WebGL that matches the current spec for texImage2D (the updated API version) I understand the concern about all of this but I guess I kind of see it as a self limiting problem. If I go to site xyz.com and it locks my machine I'm not going to go back to site xyz.com anymore. You can't use these issues to steal user info. While maybe some kind of cross scripting hack thing could get several sites to serve up something like this it seems like it's got limited usefulness. It can't be used to install a virus or execute code so all it really does is DOS the user's machine and make him not want to go back to that site. It seems like a person with malicious intent would be far more likely to serve some kind of real exploit. Any site that could be hacked to serve this could also have been hacked to serve far more malicious code. It's more likely in the short term that some Safe Browsing system is the best solution since it will take time for OSes and Drivers to handle this. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Fri Jun 18 17:41:32 2010 From: kbr...@ (Kenneth Russell) Date: Fri, 18 Jun 2010 17:41:32 -0700 Subject: [Public WebGL] chromium - 6.0.442.0 (50249) In-Reply-To: References: Message-ID: On Fri, Jun 18, 2010 at 11:05 AM, Miroslav Karpis wrote: > Hi all, was working for a while in a older build (everything was running > nice and smooth). Now I took the last build and nothing works (my program, > demos from wiki) - just get a dialog box 'pages non-responsive...etc'. > running on Ubuntu 10.04, 64 bit > - tried with last build minefield and there it works nice I've just built Chromium top of tree on Ubuntu 10.04 with the current NVIDIA drivers and WebGL is working for me. It would be really helpful if you could build Chromium from source in debug mode and help debug this. See http://crbug.com/44590 . -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From gma...@ Fri Jun 18 18:05:26 2010 From: gma...@ (Gregg Tavares) Date: Fri, 18 Jun 2010 18:05:26 -0700 Subject: [Public WebGL] Shader validation and limitations In-Reply-To: References: Message-ID: Here's one more https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/extra/slow-shader-example.html It's a 1 line shader, no loops. It draws just 1000 1024x1024 quads. It resets my Windows7 and had it both freeze my Macbook for 10-20 seconds on a couple of runs and also crash my Macbook completely on other runs. I can lower it to 300 polys and Windows7 will still reset and OSX will still freeze for 5-10 seconds. I didn't try other values since it's no fun to recover. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ste...@ Sat Jun 19 05:29:39 2010 From: ste...@ (stephen white) Date: Sat, 19 Jun 2010 21:59:39 +0930 Subject: [Public WebGL] WebKitCSSMatrix and WebGL Message-ID: Is there any current or future correlation between WebKitCSSMatrix and WebGL? > m = new WebKitCSSMatrix WebKitCSSMatrix > a = new WebGLFloatArray(m) WebGLFloatArray > a.length 0 // Doesn't work. It would also be interesting to see benchmarks if it works, as Chris likes to say. :) -- steve...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Mon Jun 21 05:26:40 2010 From: ste...@ (Steve Baker) Date: Mon, 21 Jun 2010 07:26:40 -0500 Subject: [Public WebGL] Shader validation and limitations In-Reply-To: References: Message-ID: <4C1F5A80.3020704@sjbaker.org> Worse still, there is nothing the driver can do (even in principle) to detect large quads because you can write vertex shader code that takes a tiny quad and turns it into a gigantic one. You can't reasonably chop up 300 to 1000 triangle batches without totally crippling performance - so (as I have maintained before) nothing short of a watchdog timer reset will fix this. We probably need to talk with the underlying driver authors (nVidia, ATI, etc) about getting a watchdog timer feature into their code as an OpenGL extension. Since WebGL is very much in their interests (getting 3D graphics into more places and therefore selling more hardware) - they ought to be cooperative. -- Steve. Gregg Tavares wrote: > Here's one more > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/extra/slow-shader-example.html > > It's a 1 line shader, no loops. It draws just 1000 1024x1024 quads. It > resets my Windows7 and had it both freeze my Macbook for 10-20 seconds > on a couple of runs and also crash my Macbook completely on other runs. > > I can lower it to 300 polys and Windows7 will still reset and OSX will > still freeze for 5-10 seconds. I didn't try other values since it's no > fun to recover. > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Mon Jun 21 11:14:43 2010 From: cma...@ (Chris Marrin) Date: Mon, 21 Jun 2010 11:14:43 -0700 Subject: [Public WebGL] WebGL Shader Validation In-Reply-To: <4C1B9DDF.7090704@mechnicality.com> References: <1433175242.486311.1276825587193.JavaMail.root@cm-mail03.mozilla.org> <4C1B9DDF.7090704@mechnicality.com> Message-ID: On Jun 18, 2010, at 9:25 AM, Alan Chaney wrote: > I have the OpenGL ES 2 shader specification 1.00 rev 17 open in front of me: > > It says: (Sec 6.3 Iteration pp57) > > "Non-terminating loops are allowed. The consequences of very long or non-terminating loops are platform dependent." > > > It appears to me that a syntactically correct loop could be: > > for ( int i = 0; i < 5; i ++) { > > i = i - 1; > > // whatever else you want to do here... > } But Appendix goes on to add limits which prevent infinite loops. In that appendix it explicitly disallows assigning to the index variable inside the loop. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Mon Jun 21 11:32:18 2010 From: cma...@ (Chris Marrin) Date: Mon, 21 Jun 2010 11:32:18 -0700 Subject: [Public WebGL] WebKitCSSMatrix and WebGL In-Reply-To: References: Message-ID: <90DC21F8-9B26-4BD6-BDEB-11BE617F3BAA@apple.com> On Jun 19, 2010, at 5:29 AM, stephen white wrote: > Is there any current or future correlation between WebKitCSSMatrix and WebGL? > > > m = new WebKitCSSMatrix > WebKitCSSMatrix > > a = new WebGLFloatArray(m) > WebGLFloatArray > > a.length > 0 // Doesn't work. > > It would also be interesting to see benchmarks if it works, as Chris likes to say. :) I've made an implementation of a Matrix class which uses WebKitCSSMatrix if available: https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/demos/webkit/resources/J3DIMath.js with a test case here: https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/demos/webkit/MatrixTest.html In doing some informal benchmarking, I'm not seeing significant speedup. I believe this is mainly due to the fact that CSSMatrix is immutable (i.e., it returns a new CSSMatrix from every call). This causes a lot of GC churn which affects performance. We made it immutable to match SVGMatrix, but in retrospect, that might have been a mistake. I'm looking now at possibly trying to change the CSSMatrix spec to mutate the passed matrix in place, or at least add methods that do mutation. Interestingly, J3DIMatrix can also use a "copy" function. This is a method I experimentally added to CSSMatrix. You pass a Float32Array with space for at least 16 floats and the current matrix will be copied into it. This yielded an improvement of 30-40% over both the version that did matrix math in JS and the CSSMatrix version which picked values out of the matrix by hand. This tells me that CSSMatrix probably IS faster, but the GC overhead is eating all that extra performance. It also tells me that with a few small changes and additions, we can significantly improve matrix math in WebGL. PS - In your example above, for instance, if you had the CSSMatrix.copy() function you would say: var a = WebGLFloatArray(16); m.copy(a); and 'a' will then have the matrix, all without any additional object creation. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Mon Jun 21 11:37:32 2010 From: cma...@ (Chris Marrin) Date: Mon, 21 Jun 2010 11:37:32 -0700 Subject: [Public WebGL] Shader validation and limitations In-Reply-To: References: <12BC22A7-909F-43FB-824E-23D2C4E80C21@apple.com> Message-ID: <6478A414-233C-44F7-A026-712CED67CEC5@apple.com> On Jun 18, 2010, at 1:04 PM, Ilmari Heikkinen wrote: > 2010/6/18 Oliver Hunt : >> >> On Jun 17, 2010, at 9:49 PM, Cedric Vivier wrote: >> >>> On Fri, Jun 18, 2010 at 08:02, Chris Marrin wrote: >>>> I believe this solves the halting problem issue, (although I suspect Ken disagrees with me). But doesn't necessarily prevent a shader from running for an extremely long time, which I suppose is the same thing in most cases. >>> >>> It seems Ken and/or others investigated this issue in depth months >>> ago, is there any document available demonstrating all shader >>> constructs - besides loops - found to possibly take an extremely long >>> time to run ? >> >> I believe the trick was to make a very expensive shader, and then throw thousands of large polygons at it. > > Or you can just throw a model with a million screen-sized triangles at > a trivial shader. Not with OpenGL ES you can't. It's limited to 65535 vertices per call, which I suppose translates to 65533 triangles if you're using TStrips. But the point is that a malicious author can do damaging things without writing an infinite loop. Even with that, I still advocate restricting shaders to the limits in Appendix A. It is always easier to relax restrictions than to tighten them. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Mon Jun 21 11:45:36 2010 From: cma...@ (Chris Marrin) Date: Mon, 21 Jun 2010 11:45:36 -0700 Subject: [Public WebGL] Shader validation and limitations In-Reply-To: References: Message-ID: <71F94229-D958-4678-AB8B-798121C73380@apple.com> On Jun 18, 2010, at 6:05 PM, Gregg Tavares wrote: > Here's one more > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/extra/slow-shader-example.html > > It's a 1 line shader, no loops. It draws just 1000 1024x1024 quads. It resets my Windows7 and had it both freeze my Macbook for 10-20 seconds on a couple of runs and also crash my Macbook completely on other runs. Can you send me detailed results for these tests (i.e., the shader you ran, the results on these machines, the hardware config of each machine)? I'm collecting results for an internal bug report to the Apple OpenGL driver group. Your data would really help them understand how to fix it, and hopefully which drivers are the highest priority. If anyone else has useful results on Mac, that would be helpful as well. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From sit...@ Mon Jun 21 22:21:28 2010 From: sit...@ (Sitaram Naik) Date: Mon, 21 Jun 2010 22:21:28 -0700 (PDT) Subject: [Public WebGL] Software rendering on Windows Vista Message-ID: <810457.97604.qm@web44808.mail.sp1.yahoo.com> I have "Intel GMA 4500MHD" graphics card which supports up to OpenGL 1.5. Operating system is Windows Vista 32-bit. So I cannot have hardware acceleration for WebGL. For software rendering, I installed: Mesa 7.8.1 Minefield 3.7a5pre Updated the about:config values as follows: webgl.enabled_for_all_sites true webgl.osmesalib D:\Mesa3D\Mesa-7.8.1\lib\OSMESA32.dll webgl.software_render true When I tried to load page: http://learningwebgl.com/lessons/lesson02/index.html Got an error as follows: Could not initialise WebGL, sorry :-( Then I downloaded OSMESA32.dll from the following site: http://people.mozilla.com/~vladimir/webgl/webgl-mesa-751.zip Replaced the original DLL and tried to load WebGL page, I got same error. Please let me know what is missing in my system. Or let me know if I need to download any particular version of software to run WebGL. Regards, Sitaram ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cal...@ Mon Jun 21 22:52:16 2010 From: cal...@ (Mark Callow) Date: Tue, 22 Jun 2010 14:52:16 +0900 Subject: [Public WebGL] Shader validation and limitations In-Reply-To: <4C1F5A80.3020704@sjbaker.org> References: <4C1F5A80.3020704@sjbaker.org> Message-ID: <4C204F90.40207@hicorp.co.jp> An extension including this feature is already in the works within Khronos. The initial draft was created by one of the 3D hardware vendors. Regards -Mark Steve Baker wrote: > We probably need to talk with the underlying driver authors (nVidia, > ATI, etc) about getting a watchdog timer feature into their code as an > OpenGL extension. Since WebGL is very much in their interests (getting > 3D graphics into more places and therefore selling more hardware) - they > ought to be cooperative. > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 398 bytes Desc: not available URL: From gil...@ Tue Jun 22 02:51:52 2010 From: gil...@ (Giles Thomas) Date: Tue, 22 Jun 2010 10:51:52 +0100 Subject: [Public WebGL] Software rendering on Windows Vista In-Reply-To: <810457.97604.qm@web44808.mail.sp1.yahoo.com> References: <810457.97604.qm@web44808.mail.sp1.yahoo.com> Message-ID: Hi Sitaram, Recently the Mozilla team made a change that stopped Minefield from working with that version of Mesa on Windows -- at the same time, it made it possible to use the version which is shipped as standard with most versions of Linux. This will be fixed sometime soon (perhaps by making a new version of Mesa available for Windows users, or perhaps by some other method) but in the meantime you're probably best-off using this older version of Minefield: < http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/3.7a4-candidates/build1/> -- this should work with the version of Mesa you got from Vladimir's site. Cheers, Giles On 22 June 2010 06:21, Sitaram Naik wrote: > > > I have "Intel GMA 4500MHD" graphics card which supports up to OpenGL 1.5. > Operating system is Windows Vista 32-bit. So I cannot have hardware > acceleration for WebGL. > > For software rendering, I installed: > > Mesa 7.8.1 > Minefield 3.7a5pre > > Updated the about:config values as follows: > > webgl.enabled_for_all_sites true > webgl.osmesalib D:\Mesa3D\Mesa-7.8.1\lib\OSMESA32.dll > webgl.software_render true > > When I tried to load page: > > http://learningwebgl.com/lessons/lesson02/index.html > > Got an error as follows: > > Could not initialise WebGL, sorry :-( > > Then I downloaded OSMESA32.dll from the following site: > > http://people.mozilla.com/~vladimir/webgl/webgl-mesa-751.zip > > Replaced the original DLL and tried to load WebGL page, I got same error. > > Please let me know what is missing in my system. Or let me know if I need > to download any particular version of software to run WebGL. > > Regards, > Sitaram > > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > -- Giles Thomas giles...@ http://www.gilesthomas.com/ http://learningwebgl.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From sit...@ Tue Jun 22 05:26:18 2010 From: sit...@ (Sitaram Naik) Date: Tue, 22 Jun 2010 05:26:18 -0700 (PDT) Subject: [Public WebGL] Software rendering on Windows Vista In-Reply-To: References: <810457.97604.qm@web44808.mail.sp1.yahoo.com> Message-ID: <441742.21059.qm@web44809.mail.sp1.yahoo.com> Hi Giles, Thank you very much for the information. It worked. I could see WebGL rendered on my system. Regards, Sitaram ________________________________ From: Giles Thomas To: Sitaram Naik Cc: public_webgl...@ Sent: Tue, June 22, 2010 3:21:52 PM Subject: Re: [Public WebGL] Software rendering on Windows Vista Hi Sitaram, Recently the Mozilla team made a change that stopped Minefield from working with that version of Mesa on Windows -- at the same time, it made it possible to use the version which is shipped as standard with most versions of Linux.? This will be fixed sometime soon (perhaps by making a new version of Mesa available for Windows users, or perhaps by some other method) but in the meantime you're probably best-off using this older version of Minefield: ? -- this should work with the version of Mesa you got from Vladimir's site. Cheers, Giles On 22 June 2010 06:21, Sitaram Naik wrote: > >I have "Intel GMA 4500MHD" graphics card which supports up to OpenGL 1.5. Operating system is Windows Vista 32-bit. So I cannot have hardware acceleration for WebGL. > >For software rendering, I installed: > >Mesa 7.8.1 >Minefield 3.7a5pre > >Updated the about:config values as follows: > >webgl.enabled_for_all_sites true >webgl.osmesalib D:\Mesa3D\Mesa-7.8.1\lib\OSMESA32.dll >webgl.software_render true > >When I tried to load page: > >http://learningwebgl.com/lessons/lesson02/index.html > >Got an error as follows: > >Could not initialise WebGL, sorry :-( > >Then I downloaded OSMESA32.dll from the following site: > >http://people.mozilla.com/%7Evladimir/webgl/webgl-mesa-751.zip > >Replaced the original DLL and tried to load WebGL page, I got same error. > >Please let me know what is missing in my system. Or let me know if I need to download any particular version of software to run WebGL. > >Regards, >Sitaram > > > > > >----------------------------------------------------------- >You are currently subscribed to public_webgl...@ >To unsubscribe, send an email to majordomo...@ with >the following command in the body of your email: > > -- Giles Thomas giles...@ http://www.gilesthomas.com/ http://learningwebgl.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From gil...@ Tue Jun 22 05:50:05 2010 From: gil...@ (Giles Thomas) Date: Tue, 22 Jun 2010 13:50:05 +0100 Subject: [Public WebGL] Software rendering on Windows Vista In-Reply-To: <441742.21059.qm@web44809.mail.sp1.yahoo.com> References: <810457.97604.qm@web44808.mail.sp1.yahoo.com> <441742.21059.qm@web44809.mail.sp1.yahoo.com> Message-ID: On 22 June 2010 13:26, Sitaram Naik wrote: > Thank you very much for the information. It worked. I could see WebGL > rendered on my system. > Thanks for confirming! I've updated my "getting started" page at < http://learningwebgl.com/blog/?p=11> to accurately reflect the current situation. Cheers, Giles -- Giles Thomas giles...@ http://www.gilesthomas.com/ http://learningwebgl.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Tue Jun 22 17:03:46 2010 From: gma...@ (Gregg Tavares) Date: Tue, 22 Jun 2010 17:03:46 -0700 Subject: [Public WebGL] Shader validation and limitations In-Reply-To: <6478A414-233C-44F7-A026-712CED67CEC5@apple.com> References: <12BC22A7-909F-43FB-824E-23D2C4E80C21@apple.com> <6478A414-233C-44F7-A026-712CED67CEC5@apple.com> Message-ID: On Mon, Jun 21, 2010 at 11:37 AM, Chris Marrin wrote: > > On Jun 18, 2010, at 1:04 PM, Ilmari Heikkinen wrote: > > > 2010/6/18 Oliver Hunt : > >> > >> On Jun 17, 2010, at 9:49 PM, Cedric Vivier wrote: > >> > >>> On Fri, Jun 18, 2010 at 08:02, Chris Marrin wrote: > >>>> I believe this solves the halting problem issue, (although I suspect > Ken disagrees with me). But doesn't necessarily prevent a shader from > running for an extremely long time, which I suppose is the same thing in > most cases. > >>> > >>> It seems Ken and/or others investigated this issue in depth months > >>> ago, is there any document available demonstrating all shader > >>> constructs - besides loops - found to possibly take an extremely long > >>> time to run ? > >> > >> I believe the trick was to make a very expensive shader, and then throw > thousands of large polygons at it. > > > > Or you can just throw a model with a million screen-sized triangles at > > a trivial shader. > > Not with OpenGL ES you can't. It's limited to 65535 vertices per call, > which I suppose translates to 65533 triangles if you're using TStrips. But > the point is that a malicious author can do damaging things without writing > an infinite loop. Even with that, I still advocate restricting shaders to > the limits in Appendix A. It is always easier to relax restrictions than to > tighten them. > I don't think OpenGL ES is limited to 65535 vertices per call. It's limited to 16 bit indices but you can draw 2^31 vertices with either DrawArrays or DrawElements > > ----- > ~Chris > cmarrin...@ > > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Tue Jun 22 18:15:16 2010 From: kbr...@ (Kenneth Russell) Date: Tue, 22 Jun 2010 18:15:16 -0700 Subject: [Public WebGL] Shader validation and limitations In-Reply-To: <397B28CA-D4E0-4EBB-A7AB-5D5DE88D7598@apple.com> References: <397B28CA-D4E0-4EBB-A7AB-5D5DE88D7598@apple.com> Message-ID: On Fri, Jun 18, 2010 at 11:51 AM, Oliver Hunt wrote: > I thought we had agreement a long time ago that WebGL 1.0 would enforce the _minimum_ requirements of a GL|ES implementation, both to reduce the scope for validation, and to try and maximise compatibility across devices. > > There are plenty of devices for which > while(1) ...; > > Won't compile as the GPU ?doesn't support actual back branches. The working group did agree to enforce the minimum requirements defined in Sections 4 and 5 of Appendix A of the GLSL ES specification. Your additions to https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/doc/spec/WebGL-spec.html#SUPPORTED_GLSL_CONSTRUCTS codify the decision. We decided not to limit WebGL implementations to only exposing, for example, 8 texture units. The GLSL ES shader translator / validator does need to be modified to enforce these restrictions. We need to schedule this work and complete it before compliant WebGL 1.0 implementations ship, but I don't think that should prevent incorporation of the shader validator in its current form. As has been demonstrated elsewhere in this thread, enforcing these restrictions does not solve the problem of shaders taking an unusably long time to run. This problem will need to be addressed at a lower level, likely by the operating system or graphics driver, although safeguards like safe browsing can help prevent this content from running at all on most users' machines. -Ken > --Oliver > > On Jun 17, 2010, at 5:02 PM, Chris Marrin wrote: > >> >> If you look at section A.4 of the SLES spec: >> >> ? ? ? http://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf >> >> you'll see the minimal requirements of GLSL ES shaders, specifically control flow. I think we should adopt these rules in their entirety. It will not ensure that all shaders are safe, but it will avoid obvious mistakes like infinite loops. It also reduces the floorplan of the shader language a bit to make it easier to to do a better job of validation. >> >> I also like Vlad's suggestion of restricting loop indices to ints (as opposed to ints plus floats). This will allow us to accurately determine the number of iterations a for loop will perform so we can choose to reject those that go over some limit. >> >> I believe this solves the halting problem issue, (although I suspect Ken disagrees with me). But doesn't necessarily prevent a shader from running for an extremely long time, which I suppose is the same thing in most cases. >> >> I think we should be as restrictive as possible in the first release. It is much easier to relax restrictions than to tighten them. >> >> ----- >> ~Chris >> cmarrin...@ >> >> >> >> >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From phi...@ Tue Jun 22 20:41:28 2010 From: phi...@ (Philip Rideout) Date: Tue, 22 Jun 2010 21:41:28 -0600 Subject: [Public WebGL] resizing the canvas render surface Message-ID: I wrote my first WebGL demo and it looks aliased when CSS causes the canvas to enlarge. ?This is probably expected behavior, but what's the recommended practice for dealing with it? Should my resize handler create a new?WebGLRenderingContext?object somehow? ?I'd love to see a tiny snippet of Javascript that shows how to do this. Looks like the current spec is very clear about the viewport transform, but has little to say about resolution. Philip ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From gma...@ Tue Jun 22 23:59:40 2010 From: gma...@ (Gregg Tavares) Date: Tue, 22 Jun 2010 23:59:40 -0700 Subject: [Public WebGL] resizing the canvas render surface In-Reply-To: References: Message-ID: On Tue, Jun 22, 2010 at 8:41 PM, Philip Rideout wrote: > I wrote my first WebGL demo and it looks aliased when CSS causes the > canvas to enlarge. This is probably expected behavior, but what's the > recommended practice for dealing with it? Should my resize handler > create a new WebGLRenderingContext object somehow? I'd love to see a > tiny snippet of Javascript that shows how to do this. Looks like the > current spec is very clear about the viewport transform, but has > little to say about resolution. > > You have to explicitly resize the canvas resolution in response to the display size of the canvas changing. This works exactly the same in WebGL as it does for the 2d canvas. canvasElement = document.getElementById("id-of-canvas"); window.onresize = handleResize; function handleResize() { // change the resolution of the canvas to match the size it's displayed element.width = element.clientWidth; element.height = element.clientHeight; } When a canvas is resized (2d or 3d) it is cleared so at that point you'll need to re-render your graphics. For example function handleResize() { // change the resolution of the canvas to match the size it's displayed element.width = element.clientWidth; element.height = element.clientHeight; drawMyGraphics(); } If your graphics take a while to draw it might be best to just set a flag in the handleResize function and draw later. There's no need to get a new WebGLRenderingContext > Philip > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ala...@ Wed Jun 23 08:38:24 2010 From: ala...@ (ala...@) Date: Wed, 23 Jun 2010 08:38:24 -0700 Subject: [Public WebGL] Discrepancy between Chromium and Minefield Message-ID: <4C222A70.2090807@mechnicality.com> Hi I'm using the latest builds of Minefield (updated and rebuilt this morning) and Chromium (6.0.437.3 dev) running on Linux Ubuntu 10.04 (AMD64). I'm hitting a problem with Float32Array in Minefield. The code is actually generated using GWT, but I've extracted the relevant part of the code into a small html file (attached below). The code works fine on Chrome, but when I try it in Minefield barfs at the line containing fa.set(jan) where jan is an array of floats. The error is: Error: fa.set is not a function Source File: file:///home/me/mechnicality/workspace.mechweb/WebGL2/util/floatarray.html Line: 14 The expected(??) result generated by Chrome is: Create Float32Array 5, 1, 1, 1.100000023841858, 2.5, 3 What am I doing wrong? As far as I can see the code complies with the latest TypedArray spec. Are there planned changes to Firefox which will correct this? And yes, I'm aware that the current code has to all intents and purposes a pointless array copy, but that's not my current main issue! Thanks Alan

Create Float32Array

here

-------------- next part -------------- An HTML attachment was scrubbed... URL: From vla...@ Wed Jun 23 10:46:35 2010 From: vla...@ (Vladimir Vukicevic) Date: Wed, 23 Jun 2010 10:46:35 -0700 (PDT) Subject: [Public WebGL] Discrepancy between Chromium and Minefield In-Reply-To: <4C222A70.2090807@mechnicality.com> Message-ID: <1002028341.532497.1277315195330.JavaMail.root@cm-mail03.mozilla.org> Yep, we don't yet implement set() -- I'm working on it currently, and it should show up in the next few days. For now, replacing it with a simple loop copy will get you unstuck. - Vlad ----- alan...@ wrote: > Hi > > I'm using the latest builds of Minefield (updated and rebuilt this > morning) and Chromium (6.0.437.3 dev) running on Linux Ubuntu 10.04 > (AMD64). I'm hitting a problem with Float32Array in Minefield. The > code is actually generated using GWT, but I've extracted the relevant > part of the code into a small html file (attached below). > > The code works fine on Chrome, but when I try it in Minefield barfs at > the line containing fa.set(jan) where jan is an array of floats. > > The error is: > Error: fa.set is not a function > Source File: > file:///home/me/mechnicality/workspace.mechweb/WebGL2/util/floatarray.html > Line: 14 > > > The expected(??) result generated by Chrome is: > Create Float32Array > > > 5, 1, 1, 1.100000023841858, 2.5, 3 > > > > What am I doing wrong? As far as I can see the code complies with the > latest TypedArray spec. Are there planned changes to Firefox which > will correct this? > > And yes, I'm aware that the current code has to all intents and > purposes a pointless array copy, but that's not my current main issue! > > Thanks > > Alan > > > > > > > > > > > >

Create Float32Array

>

here

> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ala...@ Wed Jun 23 13:08:31 2010 From: ala...@ (ala...@) Date: Wed, 23 Jun 2010 13:08:31 -0700 Subject: [Public WebGL] Discrepancy between Chromium and Minefield In-Reply-To: <1002028341.532497.1277315195330.JavaMail.root@cm-mail03.mozilla.org> References: <1002028341.532497.1277315195330.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <4C2269BF.8040505@mechnicality.com> FYI On 06/23/2010 10:46 AM, Vladimir Vukicevic wrote: > Yep, we don't yet implement set() -- I'm working on it currently, and it should show up in the next few days. For now, replacing it with a simple loop copy will get you unstuck. > > - Vlad > > ----- alan...@ wrote: > > In fact, even better than a simple loop, I just used the constructor - so the GWT code just became: public static native Float32Array createInstance(JsArrayNumber verts) /*-{ return new Float32Array(verts); }-*/; (Float32Array is a Java class - I use the same class names as the JS objects. JsArrayNumber is a "gwitish" wrapper for a sequence.) As TypedArray(sequence array) appears to be in the spec it works today with both Chrome and Firefox. Alan ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Wed Jun 23 14:36:21 2010 From: cma...@ (Chris Marrin) Date: Wed, 23 Jun 2010 14:36:21 -0700 Subject: [Public WebGL] Shader validation and limitations In-Reply-To: References: <12BC22A7-909F-43FB-824E-23D2C4E80C21@apple.com> <6478A414-233C-44F7-A026-712CED67CEC5@apple.com> Message-ID: <0C86C2DB-02C0-4C7F-8A6C-399852A5FD67@apple.com> On Jun 22, 2010, at 5:03 PM, Gregg Tavares wrote: > > > On Mon, Jun 21, 2010 at 11:37 AM, Chris Marrin wrote: > > On Jun 18, 2010, at 1:04 PM, Ilmari Heikkinen wrote: > > > 2010/6/18 Oliver Hunt : > >> > >> On Jun 17, 2010, at 9:49 PM, Cedric Vivier wrote: > >> > >>> On Fri, Jun 18, 2010 at 08:02, Chris Marrin wrote: > >>>> I believe this solves the halting problem issue, (although I suspect Ken disagrees with me). But doesn't necessarily prevent a shader from running for an extremely long time, which I suppose is the same thing in most cases. > >>> > >>> It seems Ken and/or others investigated this issue in depth months > >>> ago, is there any document available demonstrating all shader > >>> constructs - besides loops - found to possibly take an extremely long > >>> time to run ? > >> > >> I believe the trick was to make a very expensive shader, and then throw thousands of large polygons at it. > > > > Or you can just throw a model with a million screen-sized triangles at > > a trivial shader. > > Not with OpenGL ES you can't. It's limited to 65535 vertices per call, which I suppose translates to 65533 triangles if you're using TStrips. But the point is that a malicious author can do damaging things without writing an infinite loop. Even with that, I still advocate restricting shaders to the limits in Appendix A. It is always easier to relax restrictions than to tighten them. > > I don't think OpenGL ES is limited to 65535 vertices per call. It's limited to 16 bit indices but you can draw 2^31 vertices with either DrawArrays or DrawElements Right. My mistake. You can only index 65536 unique vertices, but you can render many more per call by reusing vertices. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Thu Jun 24 00:24:30 2010 From: ste...@ (stephen white) Date: Thu, 24 Jun 2010 16:54:30 +0930 Subject: [Public WebGL] WebKitCSSMatrix and WebGL In-Reply-To: <90DC21F8-9B26-4BD6-BDEB-11BE617F3BAA@apple.com> References: <90DC21F8-9B26-4BD6-BDEB-11BE617F3BAA@apple.com> Message-ID: <8A1F2FBA-BD58-4B2E-BDA8-26502BE98FF0@adam.com.au> On 22/06/2010, at 4:02 AM, Chris Marrin wrote: > Interestingly, J3DIMatrix can also use a "copy" function. This is a > method I experimentally added to CSSMatrix. You pass a Float32Array > with space for at least 16 floats and the current matrix will be > copied into it. This yielded an improvement of 30-40% over both the > version that did matrix math in JS and the CSSMatrix version which > picked values out of the matrix by hand. This tells me that > CSSMatrix probably IS faster, but the GC overhead is eating all that > extra performance. It also tells me that with a few small changes > and additions, we can significantly improve matrix math in WebGL. That's great news and good to hear. I had a look at SVGMatrix as well, but that seems to be 3x3 for 2D, as opposed to CSSMatrix which is 4x4 for 3D (with 6 numbers off on the side?). A mutable version of CSSMatrix would make it easier to get started with WebGL before delving into libraries, and allows tutorials and examples to reduce the initial amount of code loaded to demonstrate a point. Thanks for looking into this! -- steve...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From van...@ Fri Jun 25 01:18:10 2010 From: van...@ (Vangelis Kokkevis) Date: Fri, 25 Jun 2010 01:18:10 -0700 Subject: [Public WebGL] Re: WebGL spec modifications for D3D In-Reply-To: References: Message-ID: I finally got around to checking into the spec the restrictions to WebGL necessitated by the D3D port. They are in sections 6.5 through 6.9 of the spec document. Please have a look and let me know if there are any objections or omissions. Cheers, Vangelis On Thu, Apr 22, 2010 at 12:18 PM, Vangelis Kokkevis wrote: > > Following up on a request made during the weekly WebGL teleconference, I'm > reposting a list of spec changes we think are necessary to get WebGL > implemented on top of D3D9. Please let me know if you have any strong > preferences regarding the options for #1 and #4. > > **** > With great input from Daniel Koch of TransGaming we compiled a list of > proposed modifications to the WebGL spec that will allow for a reasonable > implementation of the API on top of Direct3D 9.0. Please take a look and > let me know if there are any objections to the proposed changes or stuff > we've missed: > > 1. Restrict allowable arguments to stencilMaskSeparate and > stencilFuncSeparate. The underlying issue here is that D3D9 isn't as > flexible as GL when it comes to supplying different stencil masks and > functions to front and back facing triangles. More specifically here's a > listing of the GL stencil related functions/parameters and what portions can > be supported via D3D9: > > Stencil Separate (Write) Mask: > void StencilMask( uint mask ); > void StencilMaskSeparate( enum face, uint mask ); > mask ==> D3DRS_STENCILWRITEMASK (no FRONT/BACK support) > There is no equivalent separate mask state in D3D. *Spec change:* > *a) Remove StencilMaskSeparate. -or-* > *b) StencilMaskSeparate returns INVALID_VALUE if face is not > FRONT_AND_BACK. -or- * > *c) Draw returns INVALID_OPERATION if FRONT mask != BACK mask.* > > Stencil Function Separate Ref & Mask: > void StencilFunc( enum func, int ref, uint mask ); > void StencilFuncSeparate( enum face, enum func, int ref, uint mask ); > FRONT func ==> D3DRS_STENCILFUNC > BACK func ==> D3DRS_CCW_STENCILFUNC > ref ==> D3DRS_STENCILREF (no FRONT/BACK support) > mask ==> D3DRS_STENCILMASK (no FRONT/BACK support) > In D3D there is separate stencil state for func, but not for ref and mask. > Separate func is genuinely useful, and thus removing StencilFuncSeparate is > not a good option. *Spec change: Draw returns INVALID_OPERATION if FRONT > ref != BACK ref or FRONT mask != BACK mask.* > > Stencil Separate Operation: > void StencilOp( enum sfail, enum dpfail, enum dppass ); > void StencilOpSeparate( enum face, enum sfail, enum dpfail, enum dppass ); > FRONT sfail ==> D3DRS_STENCILFAIL > BACK sfail ==> D3DRS_CCW_STENCILFAIL > FRONT dpfail ==> D3DRS_STENCILZFAIL > BACK dpfail ==> D3DRS_CCW_STENCILZFAIL > FRONT dppass ==> D3DRS_STENCILPASS > BACK dppass ==> D3DRS_CCW_STENCILPASS > D3D has corresponding state for all of these, so no changes are required > for this. > > 2. Limit vertex stride to 255. *Spec change: vertexAttribPointer raises a > GL_INVALID_VALUE error if stride parameter value exceeds 255.* > * > * > 3. Viewport depth range. D3D doesn't support "far < near", and this would > be very annoying to have to emulate. I believe there was buy-in at the F2F > to limit valid parameters to glDepthRangef to "near <= far". * **Spec > change: glDepthRangef returns GL_INVALID_OPERATION if f < n.* > > 4. Conflicting constant color usage. In D3D we can't directly support alpha > blending for the cases where the source blend function is set to > GL_CONSTANT_ALPHA (or GL_ONE_MINUS_CONSTANT_ALPHA), and the destination > blend function is set to GL_CONSTANT_COLOR, or vice versa. That's because > GL_CONSTANT_ALPHA has no D3D9 equivalent and if we replicate the alpha to > the RGB components we can no longer use the RGB components for the other > blend function. *Spec change: * > * a) completely remove support for GL_CONSTANT_ALPHA and > ONE_MINUS_CONSTANT_ALPHA (ie return GL_INVALID_VALUE), -or-* > * b) glBlendFunc/Separate sets GL_INVALID_OPERATION if a source function > is GL_CONSTANT_ALPHA or GL_ONE_MINUS_CONSTANT_ALPHA and the corresponding > destination function is GL_CONSTANT_COLOR or or GL_ONE_MINUS_CONSTANT_COLOR, > or vice versa.* > > 5. Shader invariance as described in GLSL ES spec ( > http://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf) > sections 4.6.1 and enforced by the "invariant" qualifier and #pragma STDGL > invariant(all) cannot be guaranteed in D3D9. *Spec change: No guarantees > are made about invariance in shader outputs the invariant qualifier and > #pragma are ignored.* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From phi...@ Fri Jun 25 08:06:13 2010 From: phi...@ (Philip Rideout) Date: Fri, 25 Jun 2010 09:06:13 -0600 Subject: [Public WebGL] Re: WebGL spec modifications for D3D In-Reply-To: References: Message-ID: Looks good -- wide lines and smooth lines would need to be removed for native D3D9 rendering, but they're easy to emulate (e.g., using ID3DXLine). On Fri, Jun 25, 2010 at 2:18 AM, Vangelis Kokkevis wrote: > I finally got around to checking into the spec the restrictions to WebGL > necessitated by the D3D port. ?They are in sections 6.5 through 6.9 of the > spec document. ?Please have a look and let me know if there are any > objections or omissions. > Cheers, > Vangelis > > > On Thu, Apr 22, 2010 at 12:18 PM, Vangelis Kokkevis > wrote: >> >> Following up on a request made during the weekly WebGL teleconference, I'm >> reposting a list of spec changes we think are necessary to get WebGL >> implemented on top of D3D9. ?Please let me know if you have any strong >> preferences regarding the options for #1 and #4. >> **** >> With great input from Daniel Koch of TransGaming we compiled a list of >> proposed modifications to the WebGL spec that will allow for a reasonable >> implementation of the API on top of Direct3D 9.0. ?Please take a look and >> let me know if there are any objections to the proposed changes or stuff >> we've missed: >> 1. Restrict allowable arguments to stencilMaskSeparate and >> stencilFuncSeparate. ?The underlying issue here is that D3D9 isn't as >> flexible as GL when it comes to supplying different stencil masks and >> functions to front and back facing triangles. More specifically here's a >> listing of the GL stencil related functions/parameters and what portions can >> be supported via D3D9: >> Stencil Separate (Write) Mask: >> void StencilMask( uint mask ); >> void StencilMaskSeparate( enum face, uint mask ); >> ?? ?mask ==> D3DRS_STENCILWRITEMASK ?(no FRONT/BACK support) >> There is no equivalent separate mask state in D3D. ?Spec change: >> a) ?Remove StencilMaskSeparate. ?-or- >> b)??StencilMaskSeparate returns INVALID_VALUE if face is not >> FRONT_AND_BACK. -or- >> c) ?Draw returns INVALID_OPERATION if FRONT mask != BACK mask. >> Stencil Function Separate Ref & Mask: >> void StencilFunc( enum func, int ref, uint mask ); >> void StencilFuncSeparate( enum face, enum func, int ref, uint mask ); >> ?? ?FRONT func ==> D3DRS_STENCILFUNC >> ?? ?BACK func ==> D3DRS_CCW_STENCILFUNC >> ?? ?ref ==> D3DRS_STENCILREF ?(no FRONT/BACK support) >> ?? ?mask ==> D3DRS_STENCILMASK (no FRONT/BACK support) >> In D3D there is separate stencil state for func, but not for ref and mask. >> ?Separate func is genuinely useful, and thus removing StencilFuncSeparate is >> not a good option. ?Spec change:?Draw returns INVALID_OPERATION if FRONT ref >> ?!= BACK ref or FRONT mask != BACK mask. >> Stencil Separate Operation: >> void StencilOp( enum sfail, enum dpfail, enum dppass ); >> void StencilOpSeparate( enum face, enum sfail, enum dpfail, enum dppass ); >> ?? ?FRONT sfail ==> D3DRS_STENCILFAIL >> ?? ?BACK sfail ==> ?D3DRS_CCW_STENCILFAIL >> ?? ?FRONT dpfail ==> D3DRS_STENCILZFAIL >> ?? ?BACK dpfail ==> D3DRS_CCW_STENCILZFAIL >> ?? ?FRONT dppass ==> D3DRS_STENCILPASS >> ?? ?BACK dppass ==> D3DRS_CCW_STENCILPASS >> D3D has corresponding state for all of these, so no changes are required >> for this. >> 2. Limit vertex stride to 255.?Spec change: ?vertexAttribPointer raises a >> GL_INVALID_VALUE error if stride parameter value exceeds 255. >> 3. Viewport depth range. ?D3D doesn't support ?"far < near", and this >> would be very annoying to have to emulate. I believe there was buy-in at the >> F2F to limit valid parameters to glDepthRangef to ?"near <= far". ??Spec >> ?change: ? glDepthRangef returns GL_INVALID_OPERATION if f < n. >> 4. Conflicting constant color usage. In D3D we can't directly support >> alpha blending for the cases where the source blend function is set to >> GL_CONSTANT_ALPHA (or GL_ONE_MINUS_CONSTANT_ALPHA), and the destination >> blend function is set to GL_CONSTANT_COLOR, or vice versa. That's because >> GL_CONSTANT_ALPHA has no D3D9 equivalent and if we replicate the alpha to >> the RGB components we can no longer use the RGB components for the other >> blend function. ?Spec change: >> ? a) completely remove support for GL_CONSTANT_ALPHA and >> ONE_MINUS_CONSTANT_ALPHA (ie return GL_INVALID_VALUE), -or- >> ??b) glBlendFunc/Separate sets GL_INVALID_OPERATION if a source function >> is GL_CONSTANT_ALPHA or GL_ONE_MINUS_CONSTANT_ALPHA and the corresponding >> destination function is GL_CONSTANT_COLOR or or GL_ONE_MINUS_CONSTANT_COLOR, >> or vice versa. >> 5. Shader invariance as described in GLSL ES spec >> (http://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf) >> sections 4.6.1 and enforced by the "invariant" qualifier and #pragma STDGL >> invariant(all) cannot be guaranteed in D3D9. ?Spec change: No guarantees are >> made about invariance in shader outputs the invariant qualifier and #pragma >> are ignored. > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Sat Jun 26 17:17:19 2010 From: ste...@ (Steve Baker) Date: Sat, 26 Jun 2010 19:17:19 -0500 Subject: [Public WebGL] Re: WebGL spec modifications for D3D In-Reply-To: References: Message-ID: <4C26988F.6000506@sjbaker.org> YIKES!!! Lack of shader invarience can be a major headache in many common multipass algorithms. Without the 'invariant' keyword, we're going to need something like the 'ftransform' function (which I think was obsoleted in GLSL 1.4 and GLES 1.0). Without EITHER 'invariant' OR 'ftransform', some rather important algorithms become impossible - and that would be really bad news! -- Steve >> On Thu, Apr 22, 2010 at 12:18 PM, Vangelis Kokkevis >> wrote: >> >>> 5. Shader invariance as described in GLSL ES spec >>> (http://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf) >>> sections 4.6.1 and enforced by the "invariant" qualifier and #pragma STDGL >>> invariant(all) cannot be guaranteed in D3D9. Spec change: No guarantees are >>> made about invariance in shader outputs the invariant qualifier and #pragma >>> are ignored. >>> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ry...@ Tue Jun 29 16:35:04 2010 From: ry...@ (ry...@) Date: Tue, 29 Jun 2010 16:35:04 -0700 Subject: [Public WebGL] Typed Arrays & Strings Message-ID: Hello, I like the Typed Array proposal -- it seems like one of the more sane binary javascript proposals. One important thing it's missing is the ability to decode a chunk of UTF8 data. Is there a reason that is not part of the spec? Ryan ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Tue Jun 29 18:15:44 2010 From: ste...@ (Steve Baker) Date: Tue, 29 Jun 2010 20:15:44 -0500 Subject: [Public WebGL] Weird texture problems. Message-ID: <4C2A9AC0.4060907@sjbaker.org> I'm testing my WebGL app on some ancient hardware - probing to see where the compatibility envelope is - and I'm getting some really strange texture mapping errors on simple RGB textures on an nVidia GeForce GO 6400 with WinXP and Minefield. The texture is a 16x1024 texels .PNG - (ultimately, it's a lookup table for another calculation - but it goes wrong even when I don't use it that way). Because this is really a lookup table, I'm only reading the ".rg" components of the map and only using one axis of the texture - so in the minimal test case, my shader is something like: gl_FragColor = vec4 ( texture2D ( myMap, vec2(0.5, texCoord.y ) ).rg, 0, 1 ) ; If the '.b' component of the map is all zeroes, the colors come out perfectly...but if I put non-zero data in blue, the red and green go nuts...nothing like the data I put in (48,51,128)==>(22,65,128) and (16,64,0)==>(0,79,0) ! It doesn't seem to be an addressing issue because the colors I'm getting don't have the value of any of the texels in the original map...and it doesn't look like a MIPmapping issue either because the colors that I see are nowhere near the average of my texels. The program/shader/texture works great on all manner of other hardware, OS's, etc). Any ideas? Is it possible that some kind of lossy compression might be happening under the hood? I've seen this kind of thing with DXT compression before...but this is uncompressed .PNG. -- Steve. ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Tue Jun 29 17:08:59 2010 From: kbr...@ (Kenneth Russell) Date: Tue, 29 Jun 2010 17:08:59 -0700 Subject: [Public WebGL] Weird texture problems. In-Reply-To: <4C2A9AC0.4060907@sjbaker.org> References: <4C2A9AC0.4060907@sjbaker.org> Message-ID: On Tue, Jun 29, 2010 at 6:15 PM, Steve Baker wrote: > I'm testing my WebGL app on some ancient hardware - probing to see where > the compatibility envelope is - and I'm getting some really strange > texture mapping errors on simple RGB textures on an nVidia GeForce GO > 6400 with WinXP and Minefield. > > The texture is a 16x1024 texels .PNG - (ultimately, it's a lookup table > for another calculation - but it goes wrong even when I don't use it > that way). ?Because this is really a lookup table, I'm only reading the > ".rg" components of the map and only using one axis of the texture - so > in the minimal test case, my shader is something like: > > ? gl_FragColor = vec4 ( texture2D ( myMap, vec2(0.5, texCoord.y ) ).rg, > 0, 1 ) ; > > If the '.b' component of the map is all zeroes, the colors come out > perfectly...but if I put non-zero data in blue, the red and green go > nuts...nothing like the data I put in (48,51,128)==>(22,65,128) and > (16,64,0)==>(0,79,0) ! ? It doesn't seem to be an addressing issue > because the colors I'm getting don't have the value of any of the texels > in the original map...and it doesn't look like a MIPmapping issue either > because the colors that I see are nowhere near the average of my texels. > > The program/shader/texture works great on all manner of other hardware, > OS's, etc). > > Any ideas? ? Is it possible that some kind of lossy compression might be > happening under the hood? ?I've seen this kind of thing with DXT > compression before...but this is uncompressed .PNG. Could undesired color space conversion be occurring? Does the same thing happen in the Chromium continuous builds for Windows? -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From gma...@ Tue Jun 29 17:09:44 2010 From: gma...@ (Gregg Tavares (wrk)) Date: Tue, 29 Jun 2010 17:09:44 -0700 Subject: [Public WebGL] Weird texture problems. In-Reply-To: <4C2A9AC0.4060907@sjbaker.org> References: <4C2A9AC0.4060907@sjbaker.org> Message-ID: On Tue, Jun 29, 2010 at 6:15 PM, Steve Baker wrote: > I'm testing my WebGL app on some ancient hardware - probing to see where > the compatibility envelope is - and I'm getting some really strange > texture mapping errors on simple RGB textures on an nVidia GeForce GO > 6400 with WinXP and Minefield. > > The texture is a 16x1024 texels .PNG - (ultimately, it's a lookup table > for another calculation - but it goes wrong even when I don't use it > that way). Because this is really a lookup table, I'm only reading the > ".rg" components of the map and only using one axis of the texture - so > in the minimal test case, my shader is something like: > > gl_FragColor = vec4 ( texture2D ( myMap, vec2(0.5, texCoord.y ) ).rg, > 0, 1 ) ; > > If the '.b' component of the map is all zeroes, the colors come out > perfectly...but if I put non-zero data in blue, the red and green go > nuts...nothing like the data I put in (48,51,128)==>(22,65,128) and > (16,64,0)==>(0,79,0) ! It doesn't seem to be an addressing issue > because the colors I'm getting don't have the value of any of the texels > in the original map...and it doesn't look like a MIPmapping issue either > because the colors that I see are nowhere near the average of my texels. > > The program/shader/texture works great on all manner of other hardware, > OS's, etc). > > Any ideas? Is it possible that some kind of lossy compression might be > happening under the hood? I've seen this kind of thing with DXT > compression before...but this is uncompressed .PNG. > Somethings off the top of my head. Some browsers look at the color space field inside a .PNG and adjust the pixels. Also some browsers pre-multiply by alpha. Both of those are are supposed to be avoided when using a PNG in WebGL but I don't think any browsers have implemented that yet. Given the behavior you are seeing that doesn't sound like it's the issue though. Sorry I can't be more help. > > -- Steve. > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jos...@ Tue Jun 29 17:34:04 2010 From: jos...@ (Joshua Bell) Date: Tue, 29 Jun 2010 17:34:04 -0700 Subject: [Public WebGL] Typed Arrays & Strings In-Reply-To: References: Message-ID: On Tue, Jun 29, 2010 at 4:35 PM, wrote: > I like the Typed Array proposal -- it seems like one of the more sane > binary javascript proposals. One important thing it's missing is the > ability to decode a chunk of UTF8 data. Is there a reason that is not > part of the spec? > This is possible to do within JavaScript, and at least IMHO doesn't need to be enshrined in the TypedArray spec's host objects. The CommonJS crew (http://commonjs.org/) has a handful of proposals (see the Binary modules), most of which include support for encodings. Many of these could be implemented on top of a Uint8Array in JavaScript; although the performance of some of the array methods (slice, etc) would be poor, it's also questionable if those are actually desirable. Rooting around in the CommonJS discussion groups is instructive; the conversation has also jumped over to es-discuss a few times. I recently needed similar functionality for binary data parsing, so I ended up with an ES3 "best effort" impl. of TypedArrays and also a Binary type with various encodings although to avoid dependencies, my Binary type was actually based on an "octet array" so it's not terribly efficient). I intentionally mirrored the CommonJS API ideas, since they're reasonable. Code is MIT licensed and at http://hg.secondlife.com/llsd/src/tip/js/llsd.js if you want to dig around. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ry...@ Tue Jun 29 17:58:27 2010 From: ry...@ (ry...@) Date: Tue, 29 Jun 2010 17:58:27 -0700 Subject: [Public WebGL] Typed Arrays & Strings In-Reply-To: References: Message-ID: On Tue, Jun 29, 2010 at 5:34 PM, Joshua Bell wrote: > On Tue, Jun 29, 2010 at 4:35 PM, wrote: >> >> I like the Typed Array proposal -- it seems like one of the more sane >> binary javascript proposals. One important thing it's missing is the >> ability to decode a chunk of UTF8 data. Is there a reason that is not >> part of the spec? > > This is possible to do within JavaScript, and at least IMHO doesn't need to > be enshrined in the TypedArray spec's host objects. > The CommonJS crew (http://commonjs.org/)?has a handful of proposals (see the > Binary modules), most of which include support for encodings. Many of these > could be implemented on top of a Uint8Array in JavaScript; although the > performance of some of the array methods (slice, etc) would be poor, it's > also questionable if those are actually desirable. Rooting around in the > CommonJS discussion groups is instructive; the conversation has also jumped > over to es-discuss a few times. > I recently needed similar functionality for binary data parsing, so I ended > up with an ES3 "best effort" impl. of TypedArrays and also a Binary type > with various encodings although to avoid dependencies, my Binary type was > actually based on an "octet array" so it's not terribly efficient). I > intentionally mirrored the CommonJS API ideas, since they're reasonable. > Code is MIT licensed and > at?http://hg.secondlife.com/llsd/src/tip/js/llsd.js?if you want to dig > around. Yes, it's possible to do in JS but I think there is a large performance hit. My implementation of binary-in-javascript is similar to the Typed Arrays except that it lacks the ability to view the object as arrays of integers other than 8bit ones. It does however have "slices" which do not copy, but reference the underlying data; and it does have string decoding and encoding. http://nodejs.org/api.html#buffers-3 I'm not married to my API (from which CommonJS Binary/F was derived) but for my use case (servers) being able to read and write strings from and to the buffer is important and quite sensitive to performance. I would like to offer up a non-browser implementation to Typed Arrays but I need a native API to deal with strings. Ryan ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From tu...@ Tue Jun 29 18:05:18 2010 From: tu...@ (Thatcher Ulrich) Date: Tue, 29 Jun 2010 21:05:18 -0400 Subject: [Public WebGL] Weird texture problems. In-Reply-To: <4C2A9AC0.4060907@sjbaker.org> References: <4C2A9AC0.4060907@sjbaker.org> Message-ID: DXT sounds plausible. Does the texture look like itself if you just paint it to the screen? -T On Jun 29, 2010 7:53 PM, "Steve Baker" wrote: > I'm testing my WebGL app on some ancient hardware - probing to see where > the compatibility envelope is - and I'm getting some really strange > texture mapping errors on simple RGB textures on an nVidia GeForce GO > 6400 with WinXP and Minefield. > > The texture is a 16x1024 texels .PNG - (ultimately, it's a lookup table > for another calculation - but it goes wrong even when I don't use it > that way). Because this is really a lookup table, I'm only reading the > ".rg" components of the map and only using one axis of the texture - so > in the minimal test case, my shader is something like: > > gl_FragColor = vec4 ( texture2D ( myMap, vec2(0.5, texCoord.y ) ).rg, > 0, 1 ) ; > > If the '.b' component of the map is all zeroes, the colors come out > perfectly...but if I put non-zero data in blue, the red and green go > nuts...nothing like the data I put in (48,51,128)==>(22,65,128) and > (16,64,0)==>(0,79,0) ! It doesn't seem to be an addressing issue > because the colors I'm getting don't have the value of any of the texels > in the original map...and it doesn't look like a MIPmapping issue either > because the colors that I see are nowhere near the average of my texels. > > The program/shader/texture works great on all manner of other hardware, > OS's, etc). > > Any ideas? Is it possible that some kind of lossy compression might be > happening under the hood? I've seen this kind of thing with DXT > compression before...but this is uncompressed .PNG. > > -- Steve. > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vla...@ Tue Jun 29 18:25:48 2010 From: vla...@ (Vladimir Vukicevic) Date: Tue, 29 Jun 2010 18:25:48 -0700 (PDT) Subject: [Public WebGL] Weird texture problems. In-Reply-To: Message-ID: <542476092.584181.1277861148617.JavaMail.root@cm-mail03.mozilla.org> Also does it look like you'd expect if you just load it in an element? - Vlad ----- "Thatcher Ulrich" wrote: > DXT sounds plausible. Does the texture look like itself if you just > paint it to the screen? -T > > On Jun 29, 2010 7:53 PM, "Steve Baker" < steve...@ > wrote: > > I'm testing my WebGL app on some ancient hardware - probing to see > where > > the compatibility envelope is - and I'm getting some really strange > > texture mapping errors on simple RGB textures on an nVidia GeForce > GO > > 6400 with WinXP and Minefield. > > > > The texture is a 16x1024 texels .PNG - (ultimately, it's a lookup > table > > for another calculation - but it goes wrong even when I don't use it > > that way). Because this is really a lookup table, I'm only reading > the > > ".rg" components of the map and only using one axis of the texture - > so > > in the minimal test case, my shader is something like: > > > > gl_FragColor = vec4 ( texture2D ( myMap, vec2(0.5, texCoord.y ) > ).rg, > > 0, 1 ) ; > > > > If the '.b' component of the map is all zeroes, the colors come out > > perfectly...but if I put non-zero data in blue, the red and green go > > nuts...nothing like the data I put in (48,51,128)==>(22,65,128) and > > (16,64,0)==>(0,79,0) ! It doesn't seem to be an addressing issue > > because the colors I'm getting don't have the value of any of the > texels > > in the original map...and it doesn't look like a MIPmapping issue > either > > because the colors that I see are nowhere near the average of my > texels. > > > > The program/shader/texture works great on all manner of other > hardware, > > OS's, etc). > > > > Any ideas? Is it possible that some kind of lossy compression might > be > > happening under the hood? I've seen this kind of thing with DXT > > compression before...but this is uncompressed .PNG. > > > > -- Steve. > > > > ----------------------------------------------------------- > > You are currently subscribed to public_webgl...@ . > > To unsubscribe, send an email to majordomo...@ with > > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ced...@ Wed Jun 30 00:24:42 2010 From: ced...@ (Cedric Vivier) Date: Wed, 30 Jun 2010 15:24:42 +0800 Subject: [Public WebGL] Typed Arrays & Strings In-Reply-To: References: Message-ID: On Wed, Jun 30, 2010 at 08:58, wrote: > I'm not married to my API (from which CommonJS Binary/F was derived) > but for my use case (servers) being able to read and write strings > from and to the buffer is important and quite sensitive to > performance. I would like to offer up a non-browser implementation to > Typed Arrays but I need a native API to deal with strings. I completely support this. Few weeks ago I was researching the same thing and made a prototype but didn't come up with a formal proposal yet. The goal of my prototype (running on Mozilla) was to see how much more efficient could binary loading get by being able to load XHR content directly with an insert(string data) method instead of going through the charCodeAt-loop kind of thing : Demo : http://neonux.com/webgl/binaryloader.html The interesting part being : xhr.overrideMimeType("text/plain; charset=x-user-defined"); ... var res = xhr.responseText; var len = parseInt(xhr.getResponseHeader("Content-Length")); var buf = new WebGLUnsignedByteArray(len); var parsingStartTime = Date.now(); if (this.forceCharCodeAt || !buf.insert) { for (var i = 0; i < len; ++i) buf[i] = res.charCodeAt(i) & 0xFF; } else if (buf.insert) { buf.insert(res); Result was obviously consistently faster and allows 2x faster loading of large meshes and other binary data. Endianness issues can be resolved by inserting with right offsets and corresponding typed WebGL*Array on a same WebGLArrayBuffer. Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From vla...@ Wed Jun 30 00:46:41 2010 From: vla...@ (Vladimir Vukicevic) Date: Wed, 30 Jun 2010 00:46:41 -0700 (PDT) Subject: [Public WebGL] Typed Arrays & Strings In-Reply-To: Message-ID: <832267373.585622.1277884001542.JavaMail.root@cm-mail03.mozilla.org> Is this really the same use case? Cedric, I think you want something closer to https://bugzilla.mozilla.org/show_bug.cgi?id=572522 -- but I think the original request actually wanted the ability to insert true strings, not binary data disguised as strings, right? - Vlad ----- "Cedric Vivier" wrote: > On Wed, Jun 30, 2010 at 08:58, < ry...@ > wrote: > > I'm not married to my API (from which CommonJS Binary/F was derived) > but for my use case (servers) being able to read and write strings > from and to the buffer is important and quite sensitive to > performance. I would like to offer up a non-browser implementation to > Typed Arrays but I need a native API to deal with strings. > > > I completely support this. > > > Few weeks ago I was researching the same thing and made a prototype > but didn't come up with a formal proposal yet. > The goal of my prototype (running on Mozilla) was to see how much more > efficient could binary loading get by being able to load XHR content > directly with an insert(string data) method instead of going through > the charCodeAt-loop kind of thing : > > > Demo : http://neonux.com/webgl/binaryloader.html > > > The interesting part being : > > > xhr.overrideMimeType("text/plain; charset=x-user-defined"); ... > var res = xhr.responseText; > var len = parseInt(xhr.getResponseHeader("Content-Length")); > var buf = new WebGLUnsignedByteArray(len); > > var parsingStartTime = Date.now(); > if (this.forceCharCodeAt || !buf.insert) { > for (var i = 0; i < len; ++i) > buf[i] = res.charCodeAt(i) & 0xFF; > } else if (buf.insert) { > buf.insert(res); > > > > Result was obviously consistently faster and allows 2x faster loading > of large meshes and other binary data. > > > Endianness issues can be resolved by inserting with right offsets and > corresponding typed WebGL*Array on a same WebGLArrayBuffer. > > > > > Regards, ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ced...@ Wed Jun 30 01:00:03 2010 From: ced...@ (Cedric Vivier) Date: Wed, 30 Jun 2010 16:00:03 +0800 Subject: [Public WebGL] Typed Arrays & Strings In-Reply-To: <832267373.585622.1277884001542.JavaMail.root@cm-mail03.mozilla.org> References: <832267373.585622.1277884001542.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Wed, Jun 30, 2010 at 15:46, Vladimir Vukicevic wrote: > Is this really the same use case? Cedric, I think you want something > closer to https://bugzilla.mozilla.org/show_bug.cgi?id=572522 -- but I > think the original request actually wanted the ability to insert true > strings, not binary data disguised as strings, right? > > Hey Vlad, >From my understanding of Ryan's first message he'd also like to decode arbitrary string data into TypedArrays (binary data views), but I'll let him confirm ;-) You're right that my use case would be satisfied by #572522, however having an "insert-string-as-binary-data facility" in TypedArrays directly is more flexible imho as it could work with any TypedArray implementation (regardless of XHR) and can be used the same way with any non-XHR source (WebSockets? File API?). Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From ry...@ Wed Jun 30 08:36:46 2010 From: ry...@ (ry...@) Date: Wed, 30 Jun 2010 08:36:46 -0700 Subject: [Public WebGL] Typed Arrays & Strings In-Reply-To: References: <832267373.585622.1277884001542.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Wed, Jun 30, 2010 at 1:00 AM, Cedric Vivier wrote: > On Wed, Jun 30, 2010 at 15:46, Vladimir Vukicevic > wrote: >> >> Is this really the same use case? ?Cedric, I think you want something >> closer to https://bugzilla.mozilla.org/show_bug.cgi?id=572522 -- but I think >> the original request actually wanted the ability to insert true strings, not >> binary data disguised as strings, right? >> > > Hey Vlad, > From my understanding of Ryan's first message he'd also like to decode > arbitrary string data into TypedArrays (binary data views), but I'll let him > confirm ;-) Correct ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From jos...@ Wed Jun 30 10:35:05 2010 From: jos...@ (Joshua Bell) Date: Wed, 30 Jun 2010 10:35:05 -0700 Subject: [Public WebGL] Typed Arrays & Strings In-Reply-To: References: <832267373.585622.1277884001542.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Wed, Jun 30, 2010 at 8:36 AM, wrote: > On Wed, Jun 30, 2010 at 1:00 AM, Cedric Vivier wrote: > > On Wed, Jun 30, 2010 at 15:46, Vladimir Vukicevic > > wrote: > >> > >> Is this really the same use case? Cedric, I think you want something > >> closer to https://bugzilla.mozilla.org/show_bug.cgi?id=572522 -- but I > think > >> the original request actually wanted the ability to insert true strings, > not > >> binary data disguised as strings, right? > >> > > > > Hey Vlad, > > From my understanding of Ryan's first message he'd also like to decode > > arbitrary string data into TypedArrays (binary data views), but I'll let > him > > confirm ;-) > > Correct > To leap back in - I'd love to see browser vendors include a JS API that allows for string encoding/decoding from Uint8Arrays. At a minimum for my scenarios: ASCII, "binary", UTF-8, and Base64 (the latter with inverted semantics if the words "encode" and "decode" are used). Many libraries fail to properly deal with UTF-16 surrogate pairs, so getting it into the browser with a spec and conformance tests would be lovely. That said, IMHO string encodings don't belong in the TypedArray spec or Uint8Array type itself. One approach would be a Binary/Buffer type with API like Ryan's in Node.js that "has-a" (or "is-a") Uint8Array and is what is yielded by proposed binary support in XHR, File, and WebSockets. It's certainly a clean and easy to understand API. But... that would delay being able to lock down any of those other specs, though... so perhaps the common currency for binary data should really be simply Uint8Array (Vlad, is that what you were showing at WebGL camp?), and string<->binary encoding should exist in a separate API (e.g. like the CommonJS Encodings proposals) (Hopefully I'm being pessimistic about the speed at which a Binary/Buffer type can be settled on.) Vlad: does there need to be any further discussion on this list, or do you have all the requirements? Should continued discussion stay here, or be redirected to... bugzilla? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ste...@ Wed Jun 30 12:22:15 2010 From: ste...@ (ste...@) Date: Wed, 30 Jun 2010 12:22:15 -0700 Subject: [Public WebGL] Weird texture problems. In-Reply-To: References: <4C2A9AC0.4060907@sjbaker.org> Message-ID: I haven't tried painting it onto the screen directly - but I've mapped it onto a simple cube with the simplest imaginable shader - and it still doesn't come out right. I'm away from my computer right now - but I'll put a simple demo of the thing with some screen shots from the GeForce 6400 GO machine someplace tonight. I tried all sorts of things last night to try to trick it out of this behavior - I resized the array to a more 'normal' 128x128 - I tried the PNG with and without an alpha plane. I tried sampling the texture with ".rgb" rather than just ".rg" - but none of those things helped. If implementations are allowed to do things to my texture behind my back (like changing the color space, doing compression or something else) - then this needs to be highlighted in the spec. Modern shaders VERY frequently use textures for all sorts of things other than Red/Green/Blue images glued onto polygons. In this case, I'm using the texture to tell the shader how a second, atlassed, texture is packed (I do this in order that random sub-map selection can be done to make 100 identical single-mesh models look like 100 totally different models with different colors and textures on each). I'm deliberately only using the three or four high-order bits from each band of the map just in case it's loaded in 4/4/4/4 or 5/6/5 mode under the hood...but the errors I'm seeing are vastly bigger than could be accounted for like that. DXT compression would certainly explain it - but are there REALLY situations where the underlying graphics system is allowed to mangle my texture like that without me telling it to? Is there some kind of option to force it not to do that? Is there a way for me to query the system to see whether it's going to do that? -- Steve > DXT sounds plausible. Does the texture look like itself if you just paint > it to the screen? -T > > On Jun 29, 2010 7:53 PM, "Steve Baker" wrote: >> I'm testing my WebGL app on some ancient hardware - probing to see where >> the compatibility envelope is - and I'm getting some really strange >> texture mapping errors on simple RGB textures on an nVidia GeForce GO >> 6400 with WinXP and Minefield. >> >> The texture is a 16x1024 texels .PNG - (ultimately, it's a lookup table >> for another calculation - but it goes wrong even when I don't use it >> that way). Because this is really a lookup table, I'm only reading the >> ".rg" components of the map and only using one axis of the texture - so >> in the minimal test case, my shader is something like: >> >> gl_FragColor = vec4 ( texture2D ( myMap, vec2(0.5, texCoord.y ) ).rg, >> 0, 1 ) ; >> >> If the '.b' component of the map is all zeroes, the colors come out >> perfectly...but if I put non-zero data in blue, the red and green go >> nuts...nothing like the data I put in (48,51,128)==>(22,65,128) and >> (16,64,0)==>(0,79,0) ! It doesn't seem to be an addressing issue >> because the colors I'm getting don't have the value of any of the texels >> in the original map...and it doesn't look like a MIPmapping issue either >> because the colors that I see are nowhere near the average of my texels. >> >> The program/shader/texture works great on all manner of other hardware, >> OS's, etc). >> >> Any ideas? Is it possible that some kind of lossy compression might be >> happening under the hood? I've seen this kind of thing with DXT >> compression before...but this is uncompressed .PNG. >> >> -- Steve. >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From gma...@ Wed Jun 30 13:58:28 2010 From: gma...@ (Gregg Tavares (wrk)) Date: Wed, 30 Jun 2010 13:58:28 -0700 Subject: [Public WebGL] Weird texture problems. In-Reply-To: References: <4C2A9AC0.4060907@sjbaker.org> Message-ID: On Wed, Jun 30, 2010 at 12:22 PM, wrote: > I haven't tried painting it onto the screen directly - but I've mapped it > onto a simple cube with the simplest imaginable shader - and it still > doesn't come out right. > > I'm away from my computer right now - but I'll put a simple demo of the > thing with some screen shots from the GeForce 6400 GO machine someplace > tonight. > > I tried all sorts of things last night to try to trick it out of this > behavior - I resized the array to a more 'normal' 128x128 - I tried the > PNG with and without an alpha plane. I tried sampling the texture with > ".rgb" rather than just ".rg" - but none of those things helped. > > If implementations are allowed to do things to my texture behind my back > (like changing the color space, doing compression or something else) - > then this needs to be highlighted in the spec. > > The implementation is NOT allowed to do this. BUT!!!! No implementation is doing the correct thing here yet as far as I know. I'll be writing conformance tests to test for this soon. > Modern shaders VERY frequently use textures for all sorts of things other > than Red/Green/Blue images glued onto polygons. In this case, I'm using > the texture to tell the shader how a second, atlassed, texture is packed > (I do this in order that random sub-map selection can be done to make 100 > identical single-mesh models look like 100 totally different models with > different colors and textures on each). I'm deliberately only using the > three or four high-order bits from each band of the map just in case it's > loaded in 4/4/4/4 or 5/6/5 mode under the hood...but the errors I'm seeing > are vastly bigger than could be accounted for like that. > > DXT compression would certainly explain it - but are there REALLY > situations where the underlying graphics system is allowed to mangle my > texture like that without me telling it to? Is there some kind of option > to force it not to do that? Is there a way for me to query the system to > see whether it's going to do that? > > -- Steve > > > DXT sounds plausible. Does the texture look like itself if you just > paint > > it to the screen? -T > > > > On Jun 29, 2010 7:53 PM, "Steve Baker" wrote: > >> I'm testing my WebGL app on some ancient hardware - probing to see where > >> the compatibility envelope is - and I'm getting some really strange > >> texture mapping errors on simple RGB textures on an nVidia GeForce GO > >> 6400 with WinXP and Minefield. > >> > >> The texture is a 16x1024 texels .PNG - (ultimately, it's a lookup table > >> for another calculation - but it goes wrong even when I don't use it > >> that way). Because this is really a lookup table, I'm only reading the > >> ".rg" components of the map and only using one axis of the texture - so > >> in the minimal test case, my shader is something like: > >> > >> gl_FragColor = vec4 ( texture2D ( myMap, vec2(0.5, texCoord.y ) ).rg, > >> 0, 1 ) ; > >> > >> If the '.b' component of the map is all zeroes, the colors come out > >> perfectly...but if I put non-zero data in blue, the red and green go > >> nuts...nothing like the data I put in (48,51,128)==>(22,65,128) and > >> (16,64,0)==>(0,79,0) ! It doesn't seem to be an addressing issue > >> because the colors I'm getting don't have the value of any of the texels > >> in the original map...and it doesn't look like a MIPmapping issue either > >> because the colors that I see are nowhere near the average of my texels. > >> > >> The program/shader/texture works great on all manner of other hardware, > >> OS's, etc). > >> > >> Any ideas? Is it possible that some kind of lossy compression might be > >> happening under the hood? I've seen this kind of thing with DXT > >> compression before...but this is uncompressed .PNG. > >> > >> -- Steve. > >> > >> ----------------------------------------------------------- > >> You are currently subscribed to public_webgl...@ > >> To unsubscribe, send an email to majordomo...@ with > >> the following command in the body of your email: > >> > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > -------------- next part -------------- An HTML attachment was scrubbed... URL: