From bja...@ Tue May 1 11:22:40 2012 From: bja...@ (Benoit Jacob) Date: Tue, 1 May 2012 11:22:40 -0700 (PDT) Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: <1909422100.310142.1335894092276.JavaMail.root@mozilla.com> Message-ID: <68851291.338775.1335896560640.JavaMail.root@mozilla.com> Hi, We've discussed several times before, and rejected, proposals to expose device/driver info to content. Here's another, different proposal. 1. add a new function --- let's call it getDeviceAdvisories() --- that can be called without a WebGL context, as it will be meant to be called before getContext. For example, it could be a method on the window object. 2. getDeviceAdvisories() returns a JS Object, which may be null. By default, the browser will return a null object, which should be interpreted as "nothing special to say about this device". 3. Now if browser developers get reports from application developers that a certain device is causing them trouble, browser vendors can quickly update this object to add some properties that are characteristic of this device. For example, if a driver is known to have very slow VTF, the object returned would have a boolean property, vtf_slow = true We could also imagine slow = true for software-only WebGL renderers, as many apps that have a good Canvas2D or even non-Canvas fallback would prefer it over a slow WebGL implementation. It should even be possible to update it within 24 hours without requiring a software update, as is typically done for blocklist updates. I imagine that JSON would be a convenient format for browsers to store this information. 4. Whenever the device is no longer a major cause of trouble, e.g. if a work-around has been implemented since, the device advisory can be dropped. Some reasons why I like this proposal while I was against the vendor/renderer/version strings: * Any exposed information is directly useful. By contrast, UA-string-like solutions rely on applications to correctly parse them and correctly make use of that information, which is a well-known cause of artificial portability problems on the Web. * The default is to expose no information. * If a browser always returns null, that's totally cool, apps will just assume no driver issue then. * The amount of information exposed is minimal (I don't think we'd expose more than 2 bits given that the OS name is already exposed, compared to about 10 bits for the renderer/version strings). * The information exposed pertains to specific testable issues, so it could be obtained fairly easily anyway by running some WebGL code exposing the corresponding issue. Why do I care about this now? Partly from conversations with Web developers, and partly because browsers are getting software fallbacks for WebGL (Chrome already has it now), and I am concerned that silently falling back to software rendering for WebGL is going to break applications that have a WebGL and a non-WebGL renderer and make the naive assumption that if WebGL is available then it is necessarily the better choice. Opinions? 
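A minimal sketch of how an application might consume such an advisory object, assuming the names proposed above (getDeviceAdvisories(), slow, vtf_slow), none of which exists in any browser; initWebGLRenderer and initCanvas2DRenderer are hypothetical application functions:

    // Illustrative sketch only: getDeviceAdvisories(), "slow" and "vtf_slow" are the
    // names proposed in this email and are not implemented anywhere.
    // initWebGLRenderer and initCanvas2DRenderer are hypothetical application functions.
    function chooseRenderer(canvas) {
      // Proposed to be callable before getContext, e.g. as a method on window.
      var advisories = (window.getDeviceAdvisories && window.getDeviceAdvisories()) || {};

      if (advisories.slow) {
        // e.g. a software-only WebGL implementation: prefer the Canvas2D fallback.
        return initCanvas2DRenderer(canvas.getContext("2d"));
      }

      var gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");
      if (!gl) {
        return initCanvas2DRenderer(canvas.getContext("2d"));
      }

      // A specific, testable issue: avoid vertex-texture-fetch code paths on
      // drivers flagged as having very slow VTF.
      return initWebGLRenderer(gl, { useVertexTextureFetch: !advisories.vtf_slow });
    }

If the browser has nothing to report, the empty (or null) result leaves the normal WebGL path untouched.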
Benoit ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kos...@ Tue May 1 12:40:16 2012 From: kos...@ (David Sheets) Date: Tue, 1 May 2012 12:40:16 -0700 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: <68851291.338775.1335896560640.JavaMail.root@mozilla.com> References: <1909422100.310142.1335894092276.JavaMail.root@mozilla.com> <68851291.338775.1335896560640.JavaMail.root@mozilla.com> Message-ID: On Tue, May 1, 2012 at 11:22 AM, Benoit Jacob wrote: > > Hi, > > We've discussed several times before, and rejected, proposals to expose device/driver info to content. Here's another, different proposal. > > 1. add a new function --- let's call it getDeviceAdvisories() --- that can be called without a WebGL context, as it will be meant to be called before getContext. For example, it could be a method on the window object. > > 2. getDeviceAdvisories() returns a JS Object, which may be null. By default, the browser will return a null object, which should be interpreted as "nothing special to say about this device". > > 3. Now if browser developers get reports from application developers that a certain device is causing them trouble, browser vendors can quickly update this object to add some properties that are characteristic of this device. For example, if a driver is known to have very slow VTF, the object returned would have a boolean property, > > ? ?vtf_slow = true > > We could also imagine > > ? ?slow = true > > for software-only WebGL renderers, as many apps that have a good Canvas2D or even non-Canvas fallback would prefer it over a slow WebGL implementation. > > It should even be possible to update it within 24 hours without requiring a software update, as is typically done for blocklist updates. I imagine that JSON would be a convenient format for browsers to store this information. I propose using URIs for capability profile predicates. If you dislike the idea of using absolute URIs, I propose using relative URI references like "#vtf_slow" and "#slow" with a default base URI at khronos.org. This same host profile facility is probably useful to a wide variety of nouveau browser APIs. Integrating with an extant standard or host profile system would be preferable to creating a new profile system. I believe most logical assertions regarding Web hosts, resources, and clients should attempt to use the federated namespace structure of URI that is now ubiquitous. David > 4. Whenever the device is no longer a major cause of trouble, e.g. if a work-around has been implemented since, the device advisory can be dropped. > > > Some reasons why I like this proposal while I was against the vendor/renderer/version strings: > > ?* Any exposed information is directly useful. By contrast, UA-string-like solutions rely on applications to correctly parse them and correctly make use of that information, which is a well-known cause of artificial portability problems on the Web. > ?* The default is to expose no information. > ?* If a browser always returns null, that's totally cool, apps will just assume no driver issue then. 
> ?* The amount of information exposed is minimal (I don't think we'd expose more than 2 bits given that the OS name is already exposed, compared to about 10 bits for the renderer/version strings). > ?* The information exposed pertains to specific testable issues, so it could be obtained fairly easily anyway by running some WebGL code exposing the corresponding issue. > > > Why do I care about this now? Partly from conversations with Web developers, and partly because browsers are getting software fallbacks for WebGL (Chrome already has it now), and I am concerned that silently falling back to software rendering for WebGL is going to break applications that have a WebGL and a non-WebGL renderer and make the naive assumption that if WebGL is available then it is necessarily the better choice. > > > Opinions? > Benoit > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From vla...@ Tue May 1 13:04:52 2012 From: vla...@ (Vladimir Vukicevic) Date: Tue, 1 May 2012 16:04:52 -0400 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: References: <1909422100.310142.1335894092276.JavaMail.root@mozilla.com> <68851291.338775.1335896560640.JavaMail.root@mozilla.com> Message-ID: On Tue, May 1, 2012 at 3:40 PM, David Sheets wrote: > I propose using URIs for capability profile predicates. > > If you dislike the idea of using absolute URIs, I propose using > relative URI references like "#vtf_slow" and "#slow" with a default > base URI at khronos.org. > > This same host profile facility is probably useful to a wide variety > of nouveau browser APIs. Integrating with an extant standard or host > profile system would be preferable to creating a new profile system. I > believe most logical assertions regarding Web hosts, resources, and > clients should attempt to use the federated namespace structure of URI > that is now ubiquitous. Hrm, I'm confused how URIs are useful/relevant here -- how would you propose that they be used? I believe Benoit is just talking about querying the browser for data.. - Vlad -------------- next part -------------- An HTML attachment was scrubbed... URL: From kos...@ Tue May 1 13:32:45 2012 From: kos...@ (David Sheets) Date: Tue, 1 May 2012 13:32:45 -0700 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: References: <1909422100.310142.1335894092276.JavaMail.root@mozilla.com> <68851291.338775.1335896560640.JavaMail.root@mozilla.com> Message-ID: On Tue, May 1, 2012 at 1:04 PM, Vladimir Vukicevic wrote: > > > On Tue, May 1, 2012 at 3:40 PM, David Sheets wrote: >> >> I propose using URIs for capability profile predicates. >> >> If you dislike the idea of using absolute URIs, I propose using >> relative URI references like "#vtf_slow" and "#slow" with a default >> base URI at khronos.org. 
>> This same host profile facility is probably useful to a wide variety of nouveau browser APIs. Integrating with an extant standard or host profile system would be preferable to creating a new profile system. I believe most logical assertions regarding Web hosts, resources, and clients should attempt to use the federated namespace structure of URI that is now ubiquitous. > Hrm, I'm confused how URIs are useful/relevant here -- how would you propose that they be used? I believe Benoit is just talking about querying the browser for data.. URIs provide a global federated namespace. They would be used as identifiers for properties just like ambiguous bare names would be used. I am also talking about querying the browser for data. The data that you are requesting is a collection of assertions about the local host's capabilities. These assertions have predicates and values (literal objects). I fully support Benoit's proposal. I think it's an idea whose time is long overdue (>= 1yr 2mo). My additional proposal is to use our already existing WWW namespace system to describe local host capabilities. Here are some use cases: 1. Bob writes a getDeviceAdvisories() shim that returns additional custom assertions based on his JS profiling efforts. Bob's assertions use predicates that aren't specified by a standards body. How does Bob ensure that the property names don't collide? Bob uses a namespace he already owns in the form of a URI. 2. Alice wants to perform automated reasoning and analysis on a host's capabilities using data from getDeviceAdvisories as well as other sources. How does Alice integrate these disparate data sources? Alice unifies them all under a global federated namespace (URI). 3. Charles wants to provide the highest quality virtual space experience possible. Charles is using a web service that automatically transforms assets based on host profiles. What is the format that Charles uses to transmit profile data? Host-predicate-object triples with URIs and literals. 4. The WebGL WG wishes to make 'standard' assertions about these host properties. How does the WebGL WG under Khronos' auspices make unambiguous assertions about these properties? (e.g. a partial order on capabilities) The WebGL WG uses the URIs they had already assigned to the properties. The common thread here is interoperability and use of ubiquitous and battle-tested web standards. If the WG decides to promulgate an API like this (and I fully support such an API), I believe the responsible thing to do as an Open Web standards steward is to embed the namespace for assertion elements into URI. Why make a new anonymous namespace when you could leverage the standard namespace of the WWW? David
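To make the namespacing idea concrete, here is a small illustrative sketch (not taken from the thread) of advisory keys treated as URI references; the base URI below is an assumed example, not an existing Khronos registry path:

    // Illustrative sketch of URI-reference keys for advisory properties.
    // The base URI is an assumed example, not a real registry location.
    var WEBGL_PROPERTY_BASE = "http://www.khronos.org/registry/webgl/properties/";

    var advisories = {
      "slow": true,      // relative reference, resolves to WEBGL_PROPERTY_BASE + "slow"
      "vtf_slow": true,  // resolves to WEBGL_PROPERTY_BASE + "vtf_slow"
      // an author- or vendor-owned property in a namespace they already control:
      "http://example.com/webgl/properties#js_profiled_fillrate_low": true
    };

    // Resolve a key to its absolute identifier so independent data sources
    // (shims, profiling services, a WG registry) can agree on names.
    function resolveAdvisoryKey(key) {
      return /^[a-z][a-z0-9+.-]*:/i.test(key) ? key : WEBGL_PROPERTY_BASE + key;
    }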
From pya...@ Tue May 1 14:01:47 2012 From: pya...@ (Florian Bösch) Date: Tue, 1 May 2012 23:01:47 +0200 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: <68851291.338775.1335896560640.JavaMail.root@mozilla.com> References: <1909422100.310142.1335894092276.JavaMail.root@mozilla.com> <68851291.338775.1335896560640.JavaMail.root@mozilla.com> Message-ID: On Tue, May 1, 2012 at 8:22 PM, Benoit Jacob wrote: > Opinions? I'm a bit hazy on how that profile object would be updated. Also, isn't it a problem if the profile object indicates some capability issue, an app developer starts relying on it, and the issue is later deemed "resolved" even though it is not resolved for that particular developer? Suddenly his app is crashing machines again. Furthermore, suppose an app developer builds a compatibility test around an issue that is later deemed resolved, but the same issue string is exposed again afterwards: the app flip-flops between working and not working for some people, with no real control on the app developer's side. I think issue profiling is extremely important to deploying WebGL apps. Here's a scenario: someone makes a WebGL game which, for one reason or another, turns out to be hugely popular. They go from zilch to 50 million users overnight. Some percentage of those users, say 2%, will hit issues; imagine getting a million support requests overnight and having no clue whatsoever what to tell them...

From vla...@ Tue May 1 14:02:23 2012 From: vla...@ (Vladimir Vukicevic) Date: Tue, 1 May 2012 17:02:23 -0400 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: References: <1909422100.310142.1335894092276.JavaMail.root@mozilla.com> <68851291.338775.1335896560640.JavaMail.root@mozilla.com> Message-ID: Hrm, I still don't understand -- can you provide some examples of how you see things being used? Part of what you're describing sounds like you're reinventing RDF, which needs to die a slow death IMO (or really, already has on the web), but maybe I'm misunderstanding :) - Vlad On Tue, May 1, 2012 at 4:32 PM, David Sheets wrote: > URIs provide a global federated namespace. They would be used as identifiers for properties just like ambiguous bare names would be used. > I am also talking about querying the browser for data. The data that you are requesting is a collection of assertions about the local host's capabilities. These assertions have predicates and values (literal objects). > I fully support Benoit's proposal. I think it's an idea whose time is long overdue (>= 1yr 2mo). My additional proposal is to use our already existing WWW namespace system to describe local host capabilities. > Here are some use cases: > 1. Bob writes a getDeviceAdvisories() shim that returns additional custom assertions based on his JS profiling efforts. Bob's assertions use predicates that aren't specified by a standards body. How does Bob ensure that the property names don't collide?
Bob uses a namespace he > already owns in the form of a URI. > > 2. Alice wants to perform automated reasoning and analysis on a host's > capabilities using data from getDeviceAdvisories as well as other > sources. How does Alice integrate these disparate data sources? Alice > unifies them all under a global federated namespace (URI). > > 3. Charles wants to provide the highest quality virtual space > experience possible. Charles is using a web service that automatically > transforms assets based on host profiles. What is the format that > Charles uses to transmit profile data? Host-predicate-object triples > with URIs and literals. > > 4. The WebGL WG wishes to make 'standard' assertions about these host > properties. How does the WebGL WG under Khronos' auspices make > unambiguous assertions about these properties? (e.g. a partial order > on capabilities) The WebGL WG uses the URIs they had already assigned > to the properties. > > The common thread here is interoperability and use of ubiquitous and > battle-tested web standards. If the WG decides to promulgate an API > like this (and I fully support such an API), I believe the responsible > thing to do as an Open Web standards steward is to embed the namespace > for assertion elements into URI. Why make a new anonymous namespace > when you could leverage the standard namespace of the WWW? > > David > > > - Vlad > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kos...@ Tue May 1 14:27:43 2012 From: kos...@ (David Sheets) Date: Tue, 1 May 2012 14:27:43 -0700 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: References: <1909422100.310142.1335894092276.JavaMail.root@mozilla.com> <68851291.338775.1335896560640.JavaMail.root@mozilla.com> Message-ID: On Tue, May 1, 2012 at 2:02 PM, Vladimir Vukicevic wrote: > > Hrm, I still don't understand -- can you provide some examples of how you > see things being used? Did you read the use cases in my previous email? Were those not clear examples of use? > Part of what you're describing sounds like you're > reinventing RDF I am not reinventing RDF; I am espousing RDF. You don't need to adopt all of the other RDF technologies to be RDF-friendly and the work required by the standards body is negligible -- just decree a namespace embedded inside URI so we all use the same name for the same thing. > , which needs to die a slow death IMO (or really, already has > on the web), but maybe I'm misunderstanding :) What reason do you have to shun RDF? I believe you and I and everyone else are already dying slow deaths... Are you interested in Web Standards? RFC 3986 is pretty standard. I may be misunderstanding. :-) David > ???? - Vlad > > > On Tue, May 1, 2012 at 4:32 PM, David Sheets wrote: >> >> >> URIs provide a global federated namespace. They would be used as >> identifiers for properties just like ambiguous bare names would be >> used. >> >> I am also talking about querying the browser for data. The data that >> you are requesting is a collection of assertions about the local >> host's capabilities. These assertions have predicates and values >> (literal objects). >> >> I fully support Benoit's proposal. I think it's an idea whose time is >> long overdue (>= 1yr 2mo). My additional proposal is to use our >> already existing WWW namespace system to describe local host >> capabilities. >> >> Here are some use cases: >> >> 1. 
Bob writes a getDeviceAdvisories() shim that returns additional >> custom assertions based on his JS profiling efforts. Bob's assertions >> use predicates that aren't specified by a standards body. How does Bob >> ensure that the property names don't collide? Bob uses a namespace he >> already owns in the form of a URI. >> >> 2. Alice wants to perform automated reasoning and analysis on a host's >> capabilities using data from getDeviceAdvisories as well as other >> sources. How does Alice integrate these disparate data sources? Alice >> unifies them all under a global federated namespace (URI). >> >> 3. Charles wants to provide the highest quality virtual space >> experience possible. Charles is using a web service that automatically >> transforms assets based on host profiles. What is the format that >> Charles uses to transmit profile data? Host-predicate-object triples >> with URIs and literals. >> >> 4. The WebGL WG wishes to make 'standard' assertions about these host >> properties. How does the WebGL WG under Khronos' auspices make >> unambiguous assertions about these properties? (e.g. a partial order >> on capabilities) The WebGL WG uses the URIs they had already assigned >> to the properties. >> >> The common thread here is interoperability and use of ubiquitous and >> battle-tested web standards. If the WG decides to promulgate an API >> like this (and I fully support such an API), I believe the responsible >> thing to do as an Open Web standards steward is to embed the namespace >> for assertion elements into URI. Why make a new anonymous namespace >> when you could leverage the standard namespace of the WWW? >> >> David >> > - Vlad

From gma...@ Tue May 1 15:13:42 2012 From: gma...@ (Gregg Tavares (勤)) Date: Tue, 1 May 2012 15:13:42 -0700 Subject: [Public WebGL] Proposal: Generate INVALID_VALUE if value >= MAX_TEXTURE_IMAGE_UNITS on uniform1f(v) for samplers Message-ID: A developer ran into a bug today where their app was working on Linux but not on Mac and Windows. The issue was that they were calling gl.uniform1f(someSamplerLocation, gl.TEXTURE0); when they should have been calling gl.uniform1f(someSamplerLocation, 0); You could say this is their fault for writing bad code, but the thing is that there are no errors defined for this condition by OpenGL ES or WebGL, AFAIK. It just happens that on Linux, calling gl.uniform1f(someSamplerLocation, 33984) uses texture unit 0, while on Mac and Windows it does something else. Given that uniforms are program specific, and given that at runtime we know whether or not a particular location is a sampler, should we generate an INVALID_VALUE if the value set for a sampler uniform is greater than or equal to MAX_TEXTURE_IMAGE_UNITS?

From pya...@ Tue May 1 15:30:10 2012 From: pya...@ (Florian Bösch) Date: Wed, 2 May 2012 00:30:10 +0200 Subject: [Public WebGL] Proposal: Generate INVALID_VALUE if value >= MAX_TEXTURE_IMAGE_UNITS on uniform1f(v) for samplers In-Reply-To: References: Message-ID: On Wed, May 2, 2012 at 12:13 AM, Gregg Tavares (勤)
wrote: > gl.uniform1f(someSamplerLocation, 0); > Happens a lot, I've had that a couple times too from typos. > You could say this is their fault for writing bad code but the thing is > there are no errors for this condition defined by OpenGL ES or WebGL AFAIK. > The glValidateProgram call verifies validity of the configuration of samplers with the current state. > Given that uniforms are program specific and given that at runtime we know > whether or not a particular location is a sampler, should we generate an > INVALID_VALUE > if the value set for a sampler uniform is greater than or > MAX_TEXTURE_IMAGE_UNITS? > It'd be an obvious solution, although that would deviate from the OpenGL ES specification (not that I think it would matter). The texture unit indirection is a bit of an inelegant solution (and that's where such mixups come from in the end). DSA would've been much nicer to implement, as in setSampler(program, location, texture), alas that would break completely with the current specification and would probably be deemed a nogo (regardless, it'd also be somewhat hard to implement for platforms that don't have proprietary DSA extensions). I'm in favor of generating INVALID_VALUE -------------- next part -------------- An HTML attachment was scrubbed... URL: From gle...@ Tue May 1 15:50:16 2012 From: gle...@ (Glenn Maynard) Date: Tue, 1 May 2012 17:50:16 -0500 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: <68851291.338775.1335896560640.JavaMail.root@mozilla.com> References: <1909422100.310142.1335894092276.JavaMail.root@mozilla.com> <68851291.338775.1335896560640.JavaMail.root@mozilla.com> Message-ID: On Tue, May 1, 2012 at 1:22 PM, Benoit Jacob wrote: > 1. add a new function --- let's call it getDeviceAdvisories() --- that can > be called without a WebGL context, as it will be meant to be called before > getContext. For example, it could be a method on the window object. > I think it would make more sense to put it on the context, because what it returns depends heavily on the context itself. For example, if the GPU changes (eg. the window changes from one monitor to another), that can cause WebGL contexts to be lost; device properties like this are closely tied with that. I don't think it's any problem to create a context, test whether you want to use it, and then discard it if you decide that you want to use a 2D Canvas. > 2. getDeviceAdvisories() returns a JS Object, which may be null. By > default, the browser will return a null object, which should be interpreted > as "nothing special to say about this device". > This is nitpicky at this point in the discussion, but it should return an empty object, {}, not null, so you can say eg. "if(getDeviceAdvisories().slow)". A problem with "slow" is that its meaning changes over time, which has backwards-compat problems. For example, suppose you have a system which is fast enough to not be considered "slow" by the standards of 2012; it works fine with web pages written in 2012. However, in 2016 the device doesn't work well or at all with new webpages; it's slow by the standards of that year. You want to flag the device as "slow". However, there's a problem: pages written in 2012 still exist and are still used in 2016. That system still works fine on those pages. 
If you suddenly flag it as "slow", there's a huge chance you're going to break something that was working before, because that old webpage--which was working just fine--now thinks it's "slow", and changes behavior (to a lower-functioning 2d canvas implementation, or to a code path that doesn't work at all). This is a really hard thing to do in a way that's both forwards- and backwards-compatible... * The default is to expose no information. > Not exactly--the default is to expose whatever information the browser thinks is useful. "No information" isn't the default, it's just the simplest case. I point this out because for this to be useful, it would need to be enabled in browsers by default, not made into an option that defaults to off. (I don't think disabling it by default is what you meant, it's just what "default is no information" sounds like to me.) On Tue, May 1, 2012 at 2:40 PM, David Sheets wrote: > I propose using URIs for capability profile predicates. > Using URLs for identifiers is almost always severe overdesign. URLs should only be used for real resources that can be usefully fetched with some protocol, not as identifiers for concepts. Just use simple strings, with vendor prefixing if appropriate (even that's probably overkill). It works just fine for extensions, which is a much larger and more complex set, as well as extensible things like (which is simply maintained on a wiki referenced from the HTML spec). -- Glenn Maynard -------------- next part -------------- An HTML attachment was scrubbed... URL: From gle...@ Tue May 1 16:02:29 2012 From: gle...@ (Glenn Maynard) Date: Tue, 1 May 2012 18:02:29 -0500 Subject: [Public WebGL] Proposal: Generate INVALID_VALUE if value >= MAX_TEXTURE_IMAGE_UNITS on uniform1f(v) for samplers In-Reply-To: References: Message-ID: On Tue, May 1, 2012 at 5:13 PM, Gregg Tavares (?) wrote: > when they should have been calling > > gl.uniform1f(someSamplerLocation, 0); > > You could say this is their fault for writing bad code but the thing is > there are no errors for this condition defined by OpenGL ES or WebGL AFAIK. > (It's the job of Web APIs to not have these sorts of interoperability problems, so it would be wrong to blame the developer for being bitten by them.) It just happens that on Linux calling gl.uniform1f(someSamplerLocation, > 33984) uses texture unit 0 and on Mac and Windows it does something else. > > Given that uniforms are program specific and given that at runtime we know > whether or not a particular location is a sampler, should we generate an > INVALID_VALUE > if the value set for a sampler uniform is greater than or > MAX_TEXTURE_IMAGE_UNITS? > Either that, or the behavior of gl.uniform1f(someSamplerLocation, 33984) should be more strictly defined. -- Glenn Maynard -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Tue May 1 16:28:35 2012 From: kbr...@ (Kenneth Russell) Date: Tue, 1 May 2012 16:28:35 -0700 Subject: [Public WebGL] Non-binding context creation attributes, specifically 'stencil' In-Reply-To: References: <1644089788.167904.1334632869364.JavaMail.root@zmmbox2.mail.corp.phx1.mozilla.com> <1637830467.168306.1334634331276.JavaMail.root@zmmbox2.mail.corp.phx1.mozilla.com> <4F8E25A3.2030602@hicorp.co.jp> Message-ID: On Fri, Apr 27, 2012 at 7:30 PM, Kenneth Russell wrote: > On Tue, Apr 17, 2012 at 7:23 PM, Mark Callow wrote: >> On 17/04/2012 23:07, Cedric Vivier wrote: >> >> I don't think the Chrome behavior is acceptable. >> ... (ie. 
"If the value is false, no stencil buffer is >> available." as written in the spec)... >> >> +1 >> >> Seem's we need a conformance test for this. >> >> I can no longer recall why we made the attributes non-binding, except for >> anti-alias, which is not available in all OpenGL ES 2.0 implementations. > > I'm currently investigating this. Here are the results so far: > > History: there was a supposition that some combinations of depth and > stencil may not be supported by certain hardware; for example, > requesting a stencil buffer but no depth buffer. Further, there was a > supposition that it would be difficult to emulate the behavior that > one or the other buffer was missing, if under the hood, the buffer was > actually allocated. > > However, in light of the current situation I agree that it's bad for > compatibility that depth and stencil are non-binding. Jeff, I agree > with you that the spec should be changed so that if depth or stencil > are false, that the context acts as though those buffers aren't > present, even if they are allocated under the hood. It doesn't *seem* > to be that difficult to emulate the absence of a depth or stencil > buffer. > > Is there agreement that the spec should be updated in this manner? If > so, I'll make this change. > > Here's the interesting part: Chrome, like Firefox, attempts to act as > though there is no stencil buffer if the context is allocated with > {stencil:false}. However, something is going wrong in this emulation. > The conformance test > context/context-attributes-alpha-depth-stencil-antialias.html attempts > to verify this behavior, but isn't catching the problem that the demo > exposes. I'll continue to investigate why the demo slips by this > emulation and update the conformance test to catch the bug. The root cause of the issue in Chrome was that if "depth:true, stencil:false" was passed in the context creation attributes, a stencil buffer was actually being allocated (i.e. getContextAttributes().stencil was true). This was a recent regression, I believe. https://bugs.webkit.org/show_bug.cgi?id=85317 has been filed to fix this in Chrome and Safari. The spec has been updated to indicate that a "false" value for any of the depth, stencil or antialias context creation attributes must be honored. context-attributes-alpha-depth-stencil-antialias.html has been updated to verify a couple of cases it was missing (and would now have caught the original issue). Thanks for pointing out this issue; please post if there are any problems with the changes above. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Tue May 1 16:45:37 2012 From: kbr...@ (Kenneth Russell) Date: Tue, 1 May 2012 16:45:37 -0700 Subject: [Public WebGL] Proposal: Generate INVALID_VALUE if value >= MAX_TEXTURE_IMAGE_UNITS on uniform1f(v) for samplers In-Reply-To: References: Message-ID: On Tue, May 1, 2012 at 3:13 PM, Gregg Tavares (?) wrote: > A developer ran into a bug today where their app was working on Linux but > not Mac and Windows. > > The issue was they were calling > > ? ?gl.uniform1f(someSamplerLocation, gl.TEXTURE0); > > when they should have been calling > > ? 
?gl.uniform1f(someSamplerLocation, 0); > > You could say this is their fault for writing bad code but the thing is > there are no errors for this condition defined by OpenGL ES or WebGL AFAIK. > > It just happens that on Linux calling?gl.uniform1f(someSamplerLocation, > 33984) uses texture unit 0 and on Mac and Windows it does something else. > > Given that uniforms are program specific and given that at runtime we know > whether or not a particular location is a sampler, should we generate an > INVALID_VALUE > if the value set for a sampler uniform is greater than or > MAX_TEXTURE_IMAGE_UNITS? This sounds good to me. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From jgi...@ Tue May 1 17:04:18 2012 From: jgi...@ (Jeff Gilbert) Date: Tue, 1 May 2012 17:04:18 -0700 (PDT) Subject: [Public WebGL] Proposal: Generate INVALID_VALUE if value >= MAX_TEXTURE_IMAGE_UNITS on uniform1f(v) for samplers In-Reply-To: Message-ID: <279555745.449536.1335917058109.JavaMail.root@mozilla.com> The idea's not bad, but I should hope we're discussing webgl.uniform1i, not uniform1f. I don't think, though, that this is really necessary, since it's possible to emit JS warnings for this sort of stuff. I think these are plenty sufficient for detecting these issues for developers. Also, if we do go down this path, we should consider checking for valid ranges for other types. -Jeff ----- Original Message ----- From: "Kenneth Russell" To: "Gregg Tavares (?)" Cc: "public webgl" Sent: Tuesday, May 1, 2012 4:45:37 PM Subject: Re: [Public WebGL] Proposal: Generate INVALID_VALUE if value >= MAX_TEXTURE_IMAGE_UNITS on uniform1f(v) for samplers On Tue, May 1, 2012 at 3:13 PM, Gregg Tavares (?) wrote: > A developer ran into a bug today where their app was working on Linux but > not Mac and Windows. > > The issue was they were calling > > ? ?gl.uniform1f(someSamplerLocation, gl.TEXTURE0); > > when they should have been calling > > ? ?gl.uniform1f(someSamplerLocation, 0); > > You could say this is their fault for writing bad code but the thing is > there are no errors for this condition defined by OpenGL ES or WebGL AFAIK. > > It just happens that on Linux calling?gl.uniform1f(someSamplerLocation, > 33984) uses texture unit 0 and on Mac and Windows it does something else. > > Given that uniforms are program specific and given that at runtime we know > whether or not a particular location is a sampler, should we generate an > INVALID_VALUE > if the value set for a sampler uniform is greater than or > MAX_TEXTURE_IMAGE_UNITS? This sounds good to me. 
-Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gle...@ Tue May 1 17:17:58 2012 From: gle...@ (Glenn Maynard) Date: Tue, 1 May 2012 19:17:58 -0500 Subject: [Public WebGL] Proposal: Generate INVALID_VALUE if value >= MAX_TEXTURE_IMAGE_UNITS on uniform1f(v) for samplers In-Reply-To: <279555745.449536.1335917058109.JavaMail.root@mozilla.com> References: <279555745.449536.1335917058109.JavaMail.root@mozilla.com> Message-ID: On Tue, May 1, 2012 at 7:04 PM, Jeff Gilbert wrote: > I don't think, though, that this is really necessary, since it's possible > to emit JS warnings for this sort of stuff. I think these are plenty > sufficient for detecting these issues for developers. > I strongly disagree. If the same code causes different results in different browsers (and it's not an intentional variation, eg. different extensions), then it should be fixed to always do the same thing in all browsers. A web API not being interoperable is a bug. Also, if we do go down this path, we should consider checking for valid > ranges for other types. > That's only necessary if there are other cases which give different results in different implementations. If it's already consistent across all implementations (which it really should be--this one sounds like a GLES/GLSL spec bug if it's not a bug in one of the implementations), then it's not as important. -- Glenn Maynard -------------- next part -------------- An HTML attachment was scrubbed... URL: From kos...@ Tue May 1 17:18:45 2012 From: kos...@ (David Sheets) Date: Tue, 1 May 2012 17:18:45 -0700 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: References: <1909422100.310142.1335894092276.JavaMail.root@mozilla.com> <68851291.338775.1335896560640.JavaMail.root@mozilla.com> Message-ID: On Tue, May 1, 2012 at 3:50 PM, Glenn Maynard wrote: > On Tue, May 1, 2012 at 1:22 PM, Benoit Jacob wrote: >> >> 1. add a new function --- let's call it getDeviceAdvisories() --- that can >> be called without a WebGL context, as it will be meant to be called before >> getContext. For example, it could be a method on the window object. > > > I think it would make more sense to put it on the context, because what it > returns depends heavily on the context itself.? For example, if the GPU > changes (eg. the window changes from one monitor to another), that can cause > WebGL contexts to be lost; device properties like this are closely tied with > that. > > I don't think it's any problem to create a context, test whether you want to > use it, and then discard it if you decide that you want to use a 2D Canvas. > >> >> 2. getDeviceAdvisories() returns a JS Object, which may be null. By >> default, the browser will return a null object, which should be interpreted >> as "nothing special to say about this device". > > > This is nitpicky at this point in the discussion, but it should return an > empty object, {}, not null, so you can say eg. 
> "if(getDeviceAdvisories().slow)". > > A problem with "slow" is that its meaning changes over time, which has > backwards-compat problems.? For example, suppose you have a system which is > fast enough to not be considered "slow" by the standards of 2012; it works > fine with web pages written in 2012.? However, in 2016 the device doesn't > work well or at all with new webpages; it's slow by the standards of that > year.? You want to flag the device as "slow".? However, there's a problem: > pages written in 2012 still exist and are still used in 2016.? That system > still works fine on those pages.? If you suddenly flag it as "slow", there's > a huge chance you're going to break something that was working before, > because that old webpage--which was working just fine--now thinks it's > "slow", and changes behavior (to a lower-functioning 2d canvas > implementation, or to a code path that doesn't work at all). > > This is a really hard thing to do in a way that's both forwards- and > backwards-compatible... > >> ?* The default is to expose no information. > > > Not exactly--the default is to expose whatever information the browser > thinks is useful.? "No information" isn't the default, it's just the > simplest case. > > I point this out because for this to be useful, it would need to be enabled > in browsers by default, not made into an option that defaults to off.? (I > don't think disabling it by default is what you meant, it's just what > "default is no information" sounds like to me.) > > > On Tue, May 1, 2012 at 2:40 PM, David Sheets wrote: >> >> I propose using URIs for capability profile predicates. > > > Using URLs for identifiers is almost always severe overdesign.? URLs should > only be used for real resources that can be usefully fetched with some > protocol, not as identifiers for concepts. That's a nice personal opinion. It's my understanding that the W3C strongly disagrees with your view in a large number of web standards. 'slow' is a valid relative URI reference. I am simply proposing a default base URI inside of Khronos' URI namespace with properties interpreted as URI references. This automatically gives us a way to extend the result set of the host profile call without collision and provides means to unambiguously refer to these properties (instead of "the property called 'slow' in the object returned from getDeviceAttributes() in WebGL 1.0"). That is, 'slow', when dereferenced, is actually 'http://www.khronos.org/registry/webgl/properties/slow' or perhaps 'http://www.khronos.org/registry/webgl/properties/2012/slow' and so forth. You yourself said > This is a really hard thing to do in a way that's both forwards- and > backwards-compatible... and I have proposed the use of a ubiquitous web standard which solves this ambiguity problem. > Just use simple strings, with vendor prefixing if appropriate (even that's > probably overkill).? It works just fine for extensions, which is a much > larger and more complex set, as well as extensible things like (which > is simply maintained on a wiki referenced from the HTML spec). It doesn't really work very well for either of those cases because it assumes either no authority or a central authority (or special custom knowledge). Extensions should be referenced by URI (and I will make a proposal regarding this soon). is much weaker than it should be due to the aforementioned wiki and resulting confusion, incompleteness, and disagreement over precise semantics. We already have a global federated namespace the underpins the web. 
Why don't we use it? It only takes the will of the standards body; the technical infrastructure is already in place. David > -- > Glenn Maynard

From gma...@ Tue May 1 17:22:19 2012 From: gma...@ (Gregg Tavares (勤)) Date: Tue, 1 May 2012 17:22:19 -0700 Subject: [Public WebGL] Proposal: Generate INVALID_VALUE if value >= MAX_TEXTURE_IMAGE_UNITS on uniform1f(v) for samplers In-Reply-To: <279555745.449536.1335917058109.JavaMail.root@mozilla.com> References: <279555745.449536.1335917058109.JavaMail.root@mozilla.com> Message-ID: On Tue, May 1, 2012 at 5:04 PM, Jeff Gilbert wrote: > The idea's not bad, but I should hope we're discussing webgl.uniform1i, not uniform1f. > I don't think, though, that this is really necessary, since it's possible to emit JS warnings for this sort of stuff. I think these are plenty sufficient for detecting these issues for developers. > Also, if we do go down this path, we should consider checking for valid ranges for other types. What other types could we check? Most other types are not checkable. Sampler uniforms are, which is what makes them special. > -Jeff > ----- Original Message ----- > From: "Kenneth Russell" > To: "Gregg Tavares (勤)" > Cc: "public webgl" > Sent: Tuesday, May 1, 2012 4:45:37 PM > Subject: Re: [Public WebGL] Proposal: Generate INVALID_VALUE if value >= MAX_TEXTURE_IMAGE_UNITS on uniform1f(v) for samplers > On Tue, May 1, 2012 at 3:13 PM, Gregg Tavares (勤) wrote: > > A developer ran into a bug today where their app was working on Linux but not Mac and Windows. > > The issue was they were calling > > gl.uniform1f(someSamplerLocation, gl.TEXTURE0); > > when they should have been calling > > gl.uniform1f(someSamplerLocation, 0); > > You could say this is their fault for writing bad code but the thing is there are no errors for this condition defined by OpenGL ES or WebGL AFAIK. > > It just happens that on Linux calling gl.uniform1f(someSamplerLocation, 33984) uses texture unit 0 and on Mac and Windows it does something else. > > Given that uniforms are program specific and given that at runtime we know whether or not a particular location is a sampler, should we generate an INVALID_VALUE if the value set for a sampler uniform is greater than or equal to MAX_TEXTURE_IMAGE_UNITS? > This sounds good to me. > -Ken
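Since the check under discussion never appears as code in the thread, here is a minimal user-land sketch of it, written as a debug helper an application could wrap around its own uniform calls rather than as browser internals. The helper name is hypothetical, the limit follows the proposal (MAX_TEXTURE_IMAGE_UNITS), and a conforming implementation would generate INVALID_VALUE where this sketch merely warns:

    // Hypothetical debug helper sketching the proposed check in user-land JS.
    // A real implementation would live in the browser and generate INVALID_VALUE.
    function makeSamplerCheckedProgram(gl, program) {
      // Record which active uniforms in `program` are samplers.
      var samplerNames = {};
      var numUniforms = gl.getProgramParameter(program, gl.ACTIVE_UNIFORMS);
      for (var i = 0; i < numUniforms; ++i) {
        var info = gl.getActiveUniform(program, i);
        if (info.type === gl.SAMPLER_2D || info.type === gl.SAMPLER_CUBE) {
          samplerNames[info.name] = true;  // (array samplers would need extra name handling)
        }
      }
      var maxUnits = gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS);
      var samplerLocations = [];  // locations handed out below that refer to samplers

      return {
        // Use instead of gl.getUniformLocation so sampler locations are remembered.
        getUniformLocation: function (name) {
          var loc = gl.getUniformLocation(program, name);
          if (loc && samplerNames[name]) samplerLocations.push(loc);
          return loc;
        },
        // Use instead of gl.uniform1i; flags the "gl.TEXTURE0 instead of 0" mistake.
        uniform1i: function (location, value) {
          if (samplerLocations.indexOf(location) !== -1 &&
              (value < 0 || value >= maxUnits)) {
            console.warn("Sampler uniform set to " + value +
                         "; expected a texture unit index in [0, " + maxUnits + ")");
          }
          gl.uniform1i(location, value);
        }
      };
    }

With a helper like this, uniform1i(samplerLocation, gl.TEXTURE0) is caught immediately instead of silently behaving differently across platforms.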
URL: From vla...@ Tue May 1 17:23:52 2012 From: vla...@ (Vladimir Vukicevic) Date: Tue, 1 May 2012 20:23:52 -0400 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: References: <1909422100.310142.1335894092276.JavaMail.root@mozilla.com> <68851291.338775.1335896560640.JavaMail.root@mozilla.com> Message-ID: On Tue, May 1, 2012 at 8:18 PM, David Sheets wrote: > > On Tue, May 1, 2012 at 3:50 PM, Glenn Maynard wrote: > > On Tue, May 1, 2012 at 2:40 PM, David Sheets wrote: > >> > >> I propose using URIs for capability profile predicates. > > > > > > Using URLs for identifiers is almost always severe overdesign. URLs > should > > only be used for real resources that can be usefully fetched with some > > protocol, not as identifiers for concepts. > > That's a nice personal opinion. It's my understanding that the W3C > strongly disagrees with your view in a large number of web standards. > > Sorry; I think I digressed the discussion when I asked about the URIs; I didn't understand the suggestion, though I do now. But I think it's irrelevant to the actual core issue; we should figure out if we want to expose this information and what we want to expose, and only later bikeshed on names/URIs/whatever. :-) - Vlad -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Tue May 1 17:33:28 2012 From: bja...@ (Benoit Jacob) Date: Tue, 1 May 2012 17:33:28 -0700 (PDT) Subject: [Public WebGL] Proposal: Generate INVALID_VALUE if value >= MAX_TEXTURE_IMAGE_UNITS on uniform1f(v) for samplers In-Reply-To: Message-ID: <395482666.495522.1335918808414.JavaMail.root@mozilla.com> I like this idea. And yes, "x versus gl.TEXTUREx" is at the top of my list of common WebGL mistakes so anything we can do to help with that is useful. Benoit ----- Original Message ----- > A developer ran into a bug today where their app was working on Linux > but not Mac and Windows. > The issue was they were calling > gl.uniform1f(someSamplerLocation, gl.TEXTURE0); > when they should have been calling > gl.uniform1f(someSamplerLocation, 0); > You could say this is their fault for writing bad code but the thing > is there are no errors for this condition defined by OpenGL ES or > WebGL AFAIK. > It just happens that on Linux calling > gl.uniform1f(someSamplerLocation, 33984) uses texture unit 0 and on > Mac and Windows it does something else. > Given that uniforms are program specific and given that at runtime we > know whether or not a particular location is a sampler, should we > generate an INVALID_VALUE > if the value set for a sampler uniform is greater than or > MAX_TEXTURE_IMAGE_UNITS? -------------- next part -------------- An HTML attachment was scrubbed... URL: From gle...@ Tue May 1 17:40:41 2012 From: gle...@ (Glenn Maynard) Date: Tue, 1 May 2012 19:40:41 -0500 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: References: <1909422100.310142.1335894092276.JavaMail.root@mozilla.com> <68851291.338775.1335896560640.JavaMail.root@mozilla.com> Message-ID: On Tue, May 1, 2012 at 7:18 PM, David Sheets wrote: > That's a nice personal opinion. It's my understanding that the W3C > strongly disagrees with your view in a large number of web standards. > The "W3C" isn't a hive mind with a single opinion. If members of the W3C want to bring their opinions and their reasoning to the discussion, they're free to, but "the W3C thinks this" is meaningless. 
FWIW, Hixie doesn't seem to find much value in URLs as identifiers: http://lists.w3.org/Archives/Public/public-webapps/2011JulSep/1584.html 'slow' is a valid relative URI reference. I am simply proposing a > default base URI inside of Khronos' URI namespace with properties > interpreted as URI references. This automatically gives us a way to > extend the result set of the host profile call without collision and > provides means to unambiguously refer to these properties (instead of > "the property called 'slow' in the object returned from > getDeviceAttributes() in WebGL 1.0"). > The meaning of a string has nothing to do with the "version" of WebGL; it can't change in incompatible ways over time. You can't change the API in backwards-incompatible ways between two "versions" of WebGL. (I put "version" in quotes because versions are meaningless with web APIs, and other web APIs are moving away from versioned specs. I hope that WebGL will follow suit in time.) You yourself said > > > This is a really hard thing to do in a way that's both forwards- and > > backwards-compatible... > > and I have proposed the use of a ubiquitous web standard which solves > this ambiguity problem. > Please reread what I said. The problem has nothing to do with naming, and everything to do with the fact that the meaning of the word "slow" changes over time, as the typical speed of hardware improves. Whether you call it "slow" or "http://pointless.url/slow" changes nothing. (I'm not inclined to debate the URL question further unless a WebGL spec editor thinks it's worth consideration, to not derail the thread further.) -- Glenn Maynard -------------- next part -------------- An HTML attachment was scrubbed... URL: From jgi...@ Tue May 1 18:33:01 2012 From: jgi...@ (Jeff Gilbert) Date: Tue, 1 May 2012 18:33:01 -0700 (PDT) Subject: [Public WebGL] Proposal: Generate INVALID_VALUE if value >= MAX_TEXTURE_IMAGE_UNITS on uniform1f(v) for samplers In-Reply-To: Message-ID: <310296277.453150.1335922381051.JavaMail.root@mozilla.com> It looks like I missed the line that stated that this was behaving differently. I thought this was merely trying to address a common dev error. It is, at least, abundantly clear we should add a test for this to the conformance suite. I do *not* believe this is a bug with the spec per se, but rather a driver bug which should prevent the driver from passing conformance. Trying to use an invalid texture unit should fail should not switch out to a valid unit. That said, leaving this behavior for historical reasons when there's no serious use-case for it is less than ideal. Having a tighter, easier-to-use spec, with less of the all-too-common silent failures, is, in my opinion, a great goal. Let's do this change, but let's also be clear that this is not a strictly necessary change, but rather one made for clarity and ease of development. -Jeff ----- Original Message ----- From: "Glenn Maynard" To: "Jeff Gilbert" Cc: "Kenneth Russell" , "public webgl" , "Gregg Tavares (?)" Sent: Tuesday, May 1, 2012 5:17:58 PM Subject: Re: [Public WebGL] Proposal: Generate INVALID_VALUE if value >= MAX_TEXTURE_IMAGE_UNITS on uniform1f(v) for samplers On Tue, May 1, 2012 at 7:04 PM, Jeff Gilbert < jgilbert...@ > wrote: I don't think, though, that this is really necessary, since it's possible to emit JS warnings for this sort of stuff. I think these are plenty sufficient for detecting these issues for developers. I strongly disagree. 
If the same code causes different results in different browsers (and it's not an intentional variation, eg. different extensions), then it should be fixed to always do the same thing in all browsers. A web API not being interoperable is a bug. Also, if we do go down this path, we should consider checking for valid ranges for other types. That's only necessary if there are other cases which give different results in different implementations. If it's already consistent across all implementations (which it really should be--this one sounds like a GLES/GLSL spec bug if it's not a bug in one of the implementations), then it's not as important. -- Glenn Maynard ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gle...@ Tue May 1 19:00:28 2012 From: gle...@ (Glenn Maynard) Date: Tue, 1 May 2012 21:00:28 -0500 Subject: [Public WebGL] Proposal: Generate INVALID_VALUE if value >= MAX_TEXTURE_IMAGE_UNITS on uniform1f(v) for samplers In-Reply-To: <310296277.453150.1335922381051.JavaMail.root@mozilla.com> References: <310296277.453150.1335922381051.JavaMail.root@mozilla.com> Message-ID: On Tue, May 1, 2012 at 8:33 PM, Jeff Gilbert wrote: > Let's do this change, but let's also be clear that this is not a strictly > necessary change, but rather one made for clarity and ease of development. > If it can't be fixed at a lower level (eg. within the shader compiler), I disagree that it's not strictly a necessary change. Interoperability is a top priority for all web APIs, and it should be for WebGL as much as possible too. Sorry to belabor the point, but it's an important one. If UAs can't implement the spec as written, then the spec needs to change. -- Glenn Maynard -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Wed May 2 08:43:18 2012 From: bja...@ (Benoit Jacob) Date: Wed, 2 May 2012 08:43:18 -0700 (PDT) Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: Message-ID: <686762068.690213.1335973398893.JavaMail.root@mozilla.com> ----- Original Message ----- > On Tue, May 1, 2012 at 1:22 PM, Benoit Jacob < bjacob...@ > > wrote: > > 1. add a new function --- let's call it getDeviceAdvisories() --- > > that can be called without a WebGL context, as it will be meant to > > be called before getContext. For example, it could be a method on > > the window object. > > I think it would make more sense to put it on the context, because > what it returns depends heavily on the context itself. For example, > if the GPU changes (eg. the window changes from one monitor to > another), that can cause WebGL contexts to be lost; device > properties like this are closely tied with that. > I don't think it's any problem to create a context, test whether you > want to use it, and then discard it if you decide that you want to > use a 2D Canvas. In fact, I agree with you now. But please let me add some background here: A while ago, I measured OpenGL context creation to be really slow, often above 100 ms. Further testing showed that it's only the first OpenGL context creation that is that slow, and subsequent context creations are typically in the 5 -- 10 ms range. That is still a slow operation. 
And one may still be concerned about the cost of the first context creation. But, we also know that we want to allow async context creation, and an idea that's been proposed is to immediately create a context in lost state, then asynchronously create the OpenGL context, and dispatch a webglcontextrestored event when it's created. That would remove the reason not to put getDeviceAdvisories on the WebGL context: users concerned about performance could simply do var advisories = canvas.getContext("webgl", {async:true}).getDeviceAdvisories(); So let's put getDeviceAdvisories on the context and keep focus on adding async context creation asap. > > 2. getDeviceAdvisories() returns a JS Object, which may be null. By > > default, the browser will return a null object, which should be > > interpreted as "nothing special to say about this device". > > This is nitpicky at this point in the discussion, but it should > return an empty object, {}, not null, so you can say eg. > "if(getDeviceAdvisories().slow)". Good point; that was my intention and I didn't notice that returning null was disallowing that. > A problem with "slow" is that its meaning changes over time, which > has backwards-compat problems. For example, suppose you have a > system which is fast enough to not be considered "slow" by the > standards of 2012; it works fine with web pages written in 2012. > However, in 2016 the device doesn't work well or at all with new > webpages; it's slow by the standards of that year. Good point, but by "slow" I didn't mean "this machine is slow compared to other machine". Instead, I meant "this feature is slow compared to other features that you might prefer to use instead, on this given machine". Indeed, the application doesn't get to choose the machine it runs on, but it does get to choose the approach it takes to rendering on a given machine. That's why it seems reasonable to me to call software WebGL rendering "slow". > You want to flag the device as "slow". However, there's a problem: > pages written in 2012 still exist and are still used in 2016. That > system still works fine on those pages. If you suddenly flag it as > "slow", there's a huge chance you're going to break something that > was working before, because that old webpage--which was working just > fine--now thinks it's "slow", and changes behavior (to a > lower-functioning 2d canvas implementation, or to a code path that > doesn't work at all). With my concept of "slow", we would never downgrade a given system to "slow" just because it's gotten old relatively to the current market. Basically, we would only call "slow" implementations that do a large part of rendering in software. Besides software renderers, we could call old Intel chips up to and including the GMA 3000 "slow" because they don't accelerate vertex shaders. For very specialized features that are not used by a majority of applications, and are very slow compared to other operations on some systems, it is worth introducing specialized flags. For example, VTF on the Geforce 6 and 7 should be advertised with a vtf_slow flag. > This is a really hard thing to do in a way that's both forwards- and > backwards-compatible... > > * The default is to expose no information. > > Not exactly--the default is to expose whatever information the > browser thinks is useful. "No information" isn't the default, it's > just the simplest case. I meant that even in browsers that sometimes return non-{}, in a majority of cases, {} will be returned. 
The only other return value that will represent more than 10% of the market will be {slow=true}. The {vtf_slow=true} case will represent some old cards, and (as hard as that is to phrase diplomatically) we could have a {stability_concerns=true} flag for certain OpenGL implementations that we couldn't quite decide to blacklist. So if a browser vendor wants to always return {}, that's cool: it will be indistinguishable from other browsers in the majority of cases. > I point this out because for this to be useful, it would need to be > enabled in browsers by default, not made into an option that > defaults to off. (I don't think disabling it by default is what you > meant, it's just what "default is no information" sounds like to > me.) Of course! I didn't mean it should be disabled by default. Cheers, Benoit > On Tue, May 1, 2012 at 2:40 PM, David Sheets < kosmo.zb...@ > > wrote: > > I propose using URIs for capability profile predicates. > > Using URLs for identifiers is almost always severe overdesign. URLs > should only be used for real resources that can be usefully fetched > with some protocol, not as identifiers for concepts. > Just use simple strings, with vendor prefixing if appropriate (even > that's probably overkill). It works just fine for extensions, which > is a much larger and more complex set, as well as extensible things > like (which is simply maintained on a wiki referenced from > the HTML spec). > -- > Glenn Maynard -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Wed May 2 08:47:09 2012 From: bja...@ (Benoit Jacob) Date: Wed, 2 May 2012 08:47:09 -0700 (PDT) Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: Message-ID: <333299583.695193.1335973629092.JavaMail.root@mozilla.com> ----- Original Message ----- > On Tue, May 1, 2012 at 11:22 AM, Benoit Jacob > wrote: > > > > Hi, > > > > We've discussed several times before, and rejected, proposals to > > expose device/driver info to content. Here's another, different > > proposal. > > > > 1. add a new function --- let's call it getDeviceAdvisories() --- > > that can be called without a WebGL context, as it will be meant to > > be called before getContext. For example, it could be a method on > > the window object. > > > > 2. getDeviceAdvisories() returns a JS Object, which may be null. By > > default, the browser will return a null object, which should be > > interpreted as "nothing special to say about this device". > > > > 3. Now if browser developers get reports from application > > developers that a certain device is causing them trouble, browser > > vendors can quickly update this object to add some properties that > > are characteristic of this device. For example, if a driver is > > known to have very slow VTF, the object returned would have a > > boolean property, > > > > ? ?vtf_slow = true > > > > We could also imagine > > > > ? ?slow = true > > > > for software-only WebGL renderers, as many apps that have a good > > Canvas2D or even non-Canvas fallback would prefer it over a slow > > WebGL implementation. > > > > It should even be possible to update it within 24 hours without > > requiring a software update, as is typically done for blocklist > > updates. I imagine that JSON would be a convenient format for > > browsers to store this information. > > I propose using URIs for capability profile predicates. 
> > If you dislike the idea of using absolute URIs, I propose using > relative URI references like "#vtf_slow" and "#slow" with a default > base URI at khronos.org. I have no experience with that whatsoever, so no opinion. I just want to make one point: any solution that relies on parsing strings is probably bad. I'm not saying that the URIs approach relies on parsing strings, but I do see strings in the above paragraph, so I'm saying that just in case. Even if somehow that particular kind of string parsing can't be gotten wrong by applications, it remains that any string parsing will be more complicated to use than just if (advisories.slow) Cheers, Benoit > > This same host profile facility is probably useful to a wide variety > of nouveau browser APIs. Integrating with an extant standard or host > profile system would be preferable to creating a new profile system. > I > believe most logical assertions regarding Web hosts, resources, and > clients should attempt to use the federated namespace structure of > URI > that is now ubiquitous. > > David > > > 4. Whenever the device is no longer a major cause of trouble, e.g. > > if a work-around has been implemented since, the device advisory > > can be dropped. > > > > > > Some reasons why I like this proposal while I was against the > > vendor/renderer/version strings: > > > > ?* Any exposed information is directly useful. By contrast, > > ?UA-string-like solutions rely on applications to correctly parse > > ?them and correctly make use of that information, which is a > > ?well-known cause of artificial portability problems on the Web. > > ?* The default is to expose no information. > > ?* If a browser always returns null, that's totally cool, apps will > > ?just assume no driver issue then. > > ?* The amount of information exposed is minimal (I don't think we'd > > ?expose more than 2 bits given that the OS name is already > > ?exposed, compared to about 10 bits for the renderer/version > > ?strings). > > ?* The information exposed pertains to specific testable issues, so > > ?it could be obtained fairly easily anyway by running some WebGL > > ?code exposing the corresponding issue. > > > > > > Why do I care about this now? Partly from conversations with Web > > developers, and partly because browsers are getting software > > fallbacks for WebGL (Chrome already has it now), and I am > > concerned that silently falling back to software rendering for > > WebGL is going to break applications that have a WebGL and a > > non-WebGL renderer and make the naive assumption that if WebGL is > > available then it is necessarily the better choice. > > > > > > Opinions? 
> > Benoit > > > > ----------------------------------------------------------- > > You are currently subscribed to public_webgl...@ > > To unsubscribe, send an email to majordomo...@ with > > the following command in the body of your email: > > unsubscribe public_webgl > > ----------------------------------------------------------- > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From ash...@ Wed May 2 11:53:06 2012 From: ash...@ (Ashley Gullen) Date: Wed, 2 May 2012 19:53:06 +0100 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: <333299583.695193.1335973629092.JavaMail.root@mozilla.com> References: <333299583.695193.1335973629092.JavaMail.root@mozilla.com> Message-ID: I think this is a great idea and I'm desperate for something like this. Our engine implements both a WebGL and Canvas 2D renderer, and currently the Canvas 2D renderer is never used in Chrome 18 due to Swiftshader. I am keen to fall back to Canvas 2D instead of using Swiftshader but there is no way to do that. I don't think "slow" is a good term to use because it seems to me to be subjective. Suppose someone has a weird system with a fully-hardware 2006 graphics card and a modern 8-core 2012 CPU. Swiftshader might be faster than the GPU. Which is "slow"? I'd rather one of these terms, or both: "no_gpu": no graphics-specific hardware chip is present. This does not mean "software rendering". A GPU using part software rendering still counts as a GPU. "blacklisted": a device is present but was blacklisted (not used) due to known issues. A Swiftshader WebGL context would say "blacklisted", because it *could* have used a GPU, but didn't because the driver was unstable or whatever. In that case I'd want to ditch the blacklisted WebGL context and fall back to Canvas 2D. Alternatively a message could be issued indicating the user needs to upgrade their driver or hardware. Considering Firefox's stats show something like 50% of users have a blacklisted driver, I think it's essential to expose this information. Note they don't mean the same thing, if you have no graphics hardware then no_gpu is true and blacklisted is false (a non-existent graphics card is not blacklisted), whereas a blacklisted GPU sets no_gpu to false and blacklisted to true. If there is the possibility of multiple GPUs with different advisories then I suppose they should be associated with the context that you actually got. I would then myself define 'slow' as: var slow = advisory.no_gpu || advisory.blacklisted; which I think is what most people's definition really is, but is not enforced or suggested by the browser, it's the developer's interpretation. Some people have expressed concern that this will make the weird system with the 2006 graphics card and 2012 CPU run slower than if it had used the CPU. My answer is: I don't care. If there is a GPU there I want to use it, because it is probably more power-efficient than the CPU, which is important on mobile. 
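A sketch of the detection flow described above, using the proposed getDeviceAdvisories() call and the no_gpu/blacklisted flags suggested here (none of which exist yet); createWebGLRenderer and createCanvas2DRenderer stand in for whatever the application's two code paths happen to be:

    var probe = document.createElement("canvas");
    var gl = probe.getContext("webgl") || probe.getContext("experimental-webgl");
    var adv = (gl && gl.getDeviceAdvisories) ? gl.getDeviceAdvisories() : {}; // proposed API
    var slow = !gl || adv.no_gpu || adv.blacklisted; // the developer's own definition of "slow"
    var renderer = slow ? createCanvas2DRenderer() : createWebGLRenderer();

The probe canvas is thrown away, so the real rendering canvas is still free to hand out either a "webgl" or a "2d" context.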
Also, a good software renderer like Swiftshader will burn up all your CPU cores, which is a little obnoxious - lots of people complained about how Flash would always "use 100% CPU" for a simple advert when looking at a simple web page; people notice if they have a noisy fan; other apps struggle for CPU time; in apps like games, logic is fighting with rendering for CPU time, so there's less parallelism. So in short your system is designed wrong (why install a graphics chip which has the overall effect of *slowing down* graphics rendering??) and I would say the software rendering is still "wrong" even if it's faster because it's *inefficient*, which is a slightly different meaning to "slow". Ashley Gullen Scirra.com On 2 May 2012 16:47, Benoit Jacob wrote: > > > > ----- Original Message ----- > > On Tue, May 1, 2012 at 11:22 AM, Benoit Jacob > > wrote: > > > > > > Hi, > > > > > > We've discussed several times before, and rejected, proposals to > > > expose device/driver info to content. Here's another, different > > > proposal. > > > > > > 1. add a new function --- let's call it getDeviceAdvisories() --- > > > that can be called without a WebGL context, as it will be meant to > > > be called before getContext. For example, it could be a method on > > > the window object. > > > > > > 2. getDeviceAdvisories() returns a JS Object, which may be null. By > > > default, the browser will return a null object, which should be > > > interpreted as "nothing special to say about this device". > > > > > > 3. Now if browser developers get reports from application > > > developers that a certain device is causing them trouble, browser > > > vendors can quickly update this object to add some properties that > > > are characteristic of this device. For example, if a driver is > > > known to have very slow VTF, the object returned would have a > > > boolean property, > > > > > > vtf_slow = true > > > > > > We could also imagine > > > > > > slow = true > > > > > > for software-only WebGL renderers, as many apps that have a good > > > Canvas2D or even non-Canvas fallback would prefer it over a slow > > > WebGL implementation. > > > > > > It should even be possible to update it within 24 hours without > > > requiring a software update, as is typically done for blocklist > > > updates. I imagine that JSON would be a convenient format for > > > browsers to store this information. > > > > I propose using URIs for capability profile predicates. > > > > If you dislike the idea of using absolute URIs, I propose using > > relative URI references like "#vtf_slow" and "#slow" with a default > > base URI at khronos.org. > > I have no experience with that whatsoever, so no opinion. I just want to > make one point: any solution that relies on parsing strings is probably > bad. I'm not saying that the URIs approach relies on parsing strings, but I > do see strings in the above paragraph, so I'm saying that just in case. > > Even if somehow that particular kind of string parsing can't be gotten > wrong by applications, it remains that any string parsing will be more > complicated to use than just > > if (advisories.slow) > > Cheers, > Benoit > > > > > This same host profile facility is probably useful to a wide variety > > of nouveau browser APIs. Integrating with an extant standard or host > > profile system would be preferable to creating a new profile system.
> > I > > believe most logical assertions regarding Web hosts, resources, and > > clients should attempt to use the federated namespace structure of > > URI > > that is now ubiquitous. > > > > David > > > > > 4. Whenever the device is no longer a major cause of trouble, e.g. > > > if a work-around has been implemented since, the device advisory > > > can be dropped. > > > > > > > > > Some reasons why I like this proposal while I was against the > > > vendor/renderer/version strings: > > > > > > * Any exposed information is directly useful. By contrast, > > > UA-string-like solutions rely on applications to correctly parse > > > them and correctly make use of that information, which is a > > > well-known cause of artificial portability problems on the Web. > > > * The default is to expose no information. > > > * If a browser always returns null, that's totally cool, apps will > > > just assume no driver issue then. > > > * The amount of information exposed is minimal (I don't think we'd > > > expose more than 2 bits given that the OS name is already > > > exposed, compared to about 10 bits for the renderer/version > > > strings). > > > * The information exposed pertains to specific testable issues, so > > > it could be obtained fairly easily anyway by running some WebGL > > > code exposing the corresponding issue. > > > > > > > > > Why do I care about this now? Partly from conversations with Web > > > developers, and partly because browsers are getting software > > > fallbacks for WebGL (Chrome already has it now), and I am > > > concerned that silently falling back to software rendering for > > > WebGL is going to break applications that have a WebGL and a > > > non-WebGL renderer and make the naive assumption that if WebGL is > > > available then it is necessarily the better choice. > > > > > > > > > Opinions? > > > Benoit > > > > > > ----------------------------------------------------------- > > > You are currently subscribed to public_webgl...@ > > > To unsubscribe, send an email to majordomo...@ with > > > the following command in the body of your email: > > > unsubscribe public_webgl > > > ----------------------------------------------------------- > > > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Wed May 2 13:47:39 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo5YukKQ==?=) Date: Wed, 2 May 2012 13:47:39 -0700 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: References: <333299583.695193.1335973629092.JavaMail.root@mozilla.com> Message-ID: On Wed, May 2, 2012 at 11:53 AM, Ashley Gullen wrote: > I think this is a great idea and I'm desperate for something like this. > Our engine implements both a WebGL and Canvas 2D renderer, and currently > the Canvas 2D renderer is never used in Chrome 18 due to Swiftshader. I am > keen to fall back to Canvas 2D instead of using Swiftshader but there is no > way to do that. That's a little bit of an exaggeration. You can certainly choose Canvas 2D at anytime. You run a small benchmark and switch. You can give the user options like nearly every Window's game in existence. 
(screen size, rendering features, texture detail, etc...) I'm not arguing against the proposal. I just don't think it should be decided by arguably false points. > > I don't think "slow" is a good term to use because it seems to me to be > subjective. Suppose someone has a weird system with a fully-hardware 2006 > graphics card and a modern 8-core 2012 CPU. Swiftshader might be faster > than the GPU. Which is "slow"? > > I'd rather one of these terms, or both: > "no_gpu": no graphics-specific hardware chip is present. This does not > mean "software rendering". A GPU using part software rendering still > counts as a GPU. > "blacklisted": a device is present but was blacklisted (not used) due to > known issues. A Swiftshader WebGL context would say "blacklisted", because > it *could* have used a GPU, but didn't because the driver was unstable or > whatever. In that case I'd want to ditch the blacklisted WebGL context and > fall back to Canvas 2D. Alternatively a message could be issued indicating > the user needs to upgrade their driver or hardware. Considering Firefox's > stats show something like 50% of users have a blacklisted driver, I think > it's essential to expose this information. > > Note they don't mean the same thing, if you have no graphics hardware then > no_gpu is true and blacklisted is false (a non-existent graphics card is > not blacklisted), whereas a blacklisted GPU sets no_gpu to false and > blacklisted to true. > > If there is the possibility of multiple GPUs with different advisories > then I suppose they should be associated with the context that you actually > got. > > I would then myself define 'slow' as: > var slow = advisory.no_gpu || advisory.blacklisted; > which I think is what most people's definition really is, but is not > enforced or suggested by the browser, it's the developer's interpretation. > > Some people have expressed concern that this will make the weird system > with the 2006 graphics card and 2012 CPU run slower than if it had used the > CPU. My answer is: I don't care. If there is a GPU there I want to use > it, because it is probably more power-efficient than the CPU, which is > important on mobile. Also, a good software renderer like Swiftshader will > burn up all your CPU cores, which is a little obnoxious - lots of people > complained about how Flash would always "use 100% CPU" for a simple advert > when looking at a simple web page; people notice if they have a noisy fan; > other apps struggle for CPU time; in apps like games, logic is fighting > with rendering for CPU time, so there's less parallelism. So in short > you're system is designed wrong (why install a graphics chip which has the > overall effect of *slowing down* graphics rendering??) and I would say the > software rendering is still "wrong" even if it's faster because it's > *inefficient*, which is a slightly different meaning to "slow". > > Ashley Gullen > Scirra.com > > > On 2 May 2012 16:47, Benoit Jacob wrote: > >> >> >> >> ----- Original Message ----- >> > On Tue, May 1, 2012 at 11:22 AM, Benoit Jacob >> > wrote: >> > > >> > > Hi, >> > > >> > > We've discussed several times before, and rejected, proposals to >> > > expose device/driver info to content. Here's another, different >> > > proposal. >> > > >> > > 1. add a new function --- let's call it getDeviceAdvisories() --- >> > > that can be called without a WebGL context, as it will be meant to >> > > be called before getContext. For example, it could be a method on >> > > the window object. >> > > >> > > 2. 
getDeviceAdvisories() returns a JS Object, which may be null. By >> > > default, the browser will return a null object, which should be >> > > interpreted as "nothing special to say about this device". >> > > >> > > 3. Now if browser developers get reports from application >> > > developers that a certain device is causing them trouble, browser >> > > vendors can quickly update this object to add some properties that >> > > are characteristic of this device. For example, if a driver is >> > > known to have very slow VTF, the object returned would have a >> > > boolean property, >> > > >> > > vtf_slow = true >> > > >> > > We could also imagine >> > > >> > > slow = true >> > > >> > > for software-only WebGL renderers, as many apps that have a good >> > > Canvas2D or even non-Canvas fallback would prefer it over a slow >> > > WebGL implementation. >> > > >> > > It should even be possible to update it within 24 hours without >> > > requiring a software update, as is typically done for blocklist >> > > updates. I imagine that JSON would be a convenient format for >> > > browsers to store this information. >> > >> > I propose using URIs for capability profile predicates. >> > >> > If you dislike the idea of using absolute URIs, I propose using >> > relative URI references like "#vtf_slow" and "#slow" with a default >> > base URI at khronos.org. >> >> I have no experience with that whatsoever, so no opinion. I just want to >> make one point: any solution that relies on parsing strings is probably >> bad. I'm not saying that the URIs approach relies on parsing strings, but I >> do see strings in the above paragraph, so I'm saying that just in case. >> >> Even if somehow that particular kind of string parsing can't be gotten >> wrong by applications, it remains that any string parsing will be more >> complicated to use than just >> >> if (advisories.slow) >> >> Cheers, >> Benoit >> >> > >> > This same host profile facility is probably useful to a wide variety >> > of nouveau browser APIs. Integrating with an extant standard or host >> > profile system would be preferable to creating a new profile system. >> > I >> > believe most logical assertions regarding Web hosts, resources, and >> > clients should attempt to use the federated namespace structure of >> > URI >> > that is now ubiquitous. >> > >> > David >> > >> > > 4. Whenever the device is no longer a major cause of trouble, e.g. >> > > if a work-around has been implemented since, the device advisory >> > > can be dropped. >> > > >> > > >> > > Some reasons why I like this proposal while I was against the >> > > vendor/renderer/version strings: >> > > >> > > * Any exposed information is directly useful. By contrast, >> > > UA-string-like solutions rely on applications to correctly parse >> > > them and correctly make use of that information, which is a >> > > well-known cause of artificial portability problems on the Web. >> > > * The default is to expose no information. >> > > * If a browser always returns null, that's totally cool, apps will >> > > just assume no driver issue then. >> > > * The amount of information exposed is minimal (I don't think we'd >> > > expose more than 2 bits given that the OS name is already >> > > exposed, compared to about 10 bits for the renderer/version >> > > strings). >> > > * The information exposed pertains to specific testable issues, so >> > > it could be obtained fairly easily anyway by running some WebGL >> > > code exposing the corresponding issue. >> > > >> > > >> > > Why do I care about this now? 
Partly from conversations with Web >> > > developers, and partly because browsers are getting software >> > > fallbacks for WebGL (Chrome already has it now), and I am >> > > concerned that silently falling back to software rendering for >> > > WebGL is going to break applications that have a WebGL and a >> > > non-WebGL renderer and make the naive assumption that if WebGL is >> > > available then it is necessarily the better choice. >> > > >> > > >> > > Opinions? >> > > Benoit >> > > >> > > ----------------------------------------------------------- >> > > You are currently subscribed to public_webgl...@ >> > > To unsubscribe, send an email to majordomo...@ with >> > > the following command in the body of your email: >> > > unsubscribe public_webgl >> > > ----------------------------------------------------------- >> > > >> > >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kos...@ Wed May 2 14:40:19 2012 From: kos...@ (David Sheets) Date: Wed, 2 May 2012 14:40:19 -0700 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: References: <333299583.695193.1335973629092.JavaMail.root@mozilla.com> Message-ID: On Wed, May 2, 2012 at 1:47 PM, Gregg Tavares (?) wrote: > > > On Wed, May 2, 2012 at 11:53 AM, Ashley Gullen wrote: >> >> I think this is a great idea and I'm?desperate?for something like this. >> ?Our engine implements both a WebGL and Canvas 2D renderer, and currently >> the Canvas 2D renderer is never used in Chrome 18 due to Swiftshader. ?I am >> keen to fall back to Canvas 2D instead of using Swiftshader but there is no >> way to do that. > > > That's a little bit of an exaggeration. You can certainly choose Canvas 2D > at anytime. You run a small benchmark and switch. You include a small open source benchmark script which shims getHostProfile() with {"http://helpfulpeople.com/webgl/is_canvas2d_faster": true|false} This is a declarative rather than imperative interface as many of the most useful web interfaces are declarative. "If you can do it with data, don't do it with code." > You can give the user > options like nearly every Window's game in existence. (screen size, > rendering features, texture detail, etc...) Options that could be eliminated by better automatic environment detection is bad UX. > I'm not arguing against the proposal. I just don't think it should be > decided by arguably false points. Whether it is done now by this body or in 2014 by another, this sort of feature will exist. Analogous interfaces already exist in many, many diagnostic and profiling libraries and other new web APIs. Many libraries provide ad hoc feature detection. WebGL is in particularly dire need of this sort of _standard_ capability information due to the complexity of the underlying subsystems. Khronos has the chance to lead the entire World Wide Web here by introducing an extensible host profile API. Even if no 'privileged' host information (like make, model, or implementation) is exposed via this API today, the interface should be in place and returning a well-typed {}. I'm sure there are other groups who have encountered this sort of issue. 
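To make the shim idea above concrete, here is a purely illustrative sketch; getHostProfile() and the example URI key come from this thread's proposals rather than any shipping API, and runTinyBenchmark is an assumed application helper:

    if (!window.getHostProfile) {
      window.getHostProfile = function () {
        // The browser exposes no profile of its own, so fall back to measuring.
        var canvas2dFaster = runTinyBenchmark(); // assumed benchmark helper
        return { "http://helpfulpeople.com/webgl/is_canvas2d_faster": canvas2dFaster };
      };
    }
    if (getHostProfile()["http://helpfulpeople.com/webgl/is_canvas2d_faster"]) {
      // prefer the Canvas 2D code path
    }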
Perhaps someone knows of a similar system implemented elsewhere? The discussion over the specific properties to expose is irrelevant. Properties will be exposed. They will be useful. The WebGL WG isn't architecting one stand-alone standard. The WebGL WG is doing city planning and local governance for the future of cyberspace. As such, fundamental questions like "what are the types allowed across this interface?" and "what is the canonical data schema?" are far more important than whether the WG decides to make a predicate called "slow" or "really_slow" and what precisely that means. Without deciding a single property to provide, the WG can opt to create a new method call ("getDeviceProfile"? "getHostProfile"?) that returns a JS hash-like object, {}. JS hash-like objects have two type parameters, key and value. To support JS implementations that only provide keys as strings, the type of the key must be some sort of string. I propose the following: 1. This string is a relative URI reference with a base URI of "http://www.khronos.org/registry/webgl/profiles/" 2. For every relative URI reference under the default Khronos base URI, an identical value is inserted in the {} profile with the resolved absolute URI. Now we're having our cake and eating it, too. And that's no lie. The type of the values can be any JavaScript object at the most general. The allowable specific JavaScript types used as objects of the profile predicates is itself a property of the predicate. This type constraint may be decided by the party with authority over the URI of the predicate. Thoughts? David >> >> >> I don't think "slow" is a good term to use because it seems to me to be >> subjective. ?Suppose someone has a weird system with a fully-hardware 2006 >> graphics card and a modern 8-core 2012 CPU. ?Swiftshader might be faster >> than the GPU. ?Which is "slow"? >> >> I'd rather one of these terms, or both: >> "no_gpu": no graphics-specific hardware chip is present. ?This does not >> mean "software rendering". ?A GPU using part software rendering still counts >> as a GPU. >> "blacklisted": a device is present but was blacklisted (not used) due to >> known issues. ?A Swiftshader WebGL context would say "blacklisted", because >> it *could* have used a GPU, but didn't because the driver was unstable or >> whatever. ?In that case I'd want to ditch the blacklisted WebGL context and >> fall back to Canvas 2D. ?Alternatively a message could be issued indicating >> the user needs to upgrade their driver or hardware. ?Considering Firefox's >> stats show something like 50% of users have a blacklisted driver, I think >> it's essential to expose this information. >> >> Note they don't mean the same thing, if you have no graphics hardware then >> no_gpu is true and blacklisted is false (a non-existent graphics card is not >> blacklisted), whereas a blacklisted GPU sets no_gpu to false and blacklisted >> to true. >> >> If there is the possibility of multiple GPUs with different advisories >> then I suppose they should be associated with the context that you actually >> got. >> >> I would then myself define 'slow' as: >> var slow = advisory.no_gpu || advisory.blacklisted; >> which I think is what most people's definition really is, but is not >> enforced or suggested by the browser, it's the developer's interpretation. >> >> Some people have expressed concern that this will make the weird system >> with the 2006 graphics card and 2012 CPU run slower than if it had used the >> CPU. ?My answer is: I don't care. 
?If there is a GPU there I want to use it, >> because it is probably more power-efficient than the CPU, which is important >> on mobile. ?Also, a good software renderer like Swiftshader will burn up all >> your CPU cores, which is a little obnoxious - lots of people complained >> about how Flash would always "use 100% CPU" for a simple advert when looking >> at a simple web page;?people notice if they have a noisy fan; other apps >> struggle for CPU time; in apps like games, logic is fighting with rendering >> for CPU time, so there's less parallelism. ?So in short you're system is >> designed wrong (why install a graphics chip which has the overall effect of >> *slowing down* graphics rendering??) and I would say the software rendering >> is still "wrong" even if it's faster because it's *inefficient*, which is a >> slightly different meaning to "slow". >> >> Ashley Gullen >> Scirra.com >> >> >> On 2 May 2012 16:47, Benoit Jacob wrote: >>> >>> >>> >>> >>> ----- Original Message ----- >>> > On Tue, May 1, 2012 at 11:22 AM, Benoit Jacob >>> > wrote: >>> > > >>> > > Hi, >>> > > >>> > > We've discussed several times before, and rejected, proposals to >>> > > expose device/driver info to content. Here's another, different >>> > > proposal. >>> > > >>> > > 1. add a new function --- let's call it getDeviceAdvisories() --- >>> > > that can be called without a WebGL context, as it will be meant to >>> > > be called before getContext. For example, it could be a method on >>> > > the window object. >>> > > >>> > > 2. getDeviceAdvisories() returns a JS Object, which may be null. By >>> > > default, the browser will return a null object, which should be >>> > > interpreted as "nothing special to say about this device". >>> > > >>> > > 3. Now if browser developers get reports from application >>> > > developers that a certain device is causing them trouble, browser >>> > > vendors can quickly update this object to add some properties that >>> > > are characteristic of this device. For example, if a driver is >>> > > known to have very slow VTF, the object returned would have a >>> > > boolean property, >>> > > >>> > > ? ?vtf_slow = true >>> > > >>> > > We could also imagine >>> > > >>> > > ? ?slow = true >>> > > >>> > > for software-only WebGL renderers, as many apps that have a good >>> > > Canvas2D or even non-Canvas fallback would prefer it over a slow >>> > > WebGL implementation. >>> > > >>> > > It should even be possible to update it within 24 hours without >>> > > requiring a software update, as is typically done for blocklist >>> > > updates. I imagine that JSON would be a convenient format for >>> > > browsers to store this information. >>> > >>> > I propose using URIs for capability profile predicates. >>> > >>> > If you dislike the idea of using absolute URIs, I propose using >>> > relative URI references like "#vtf_slow" and "#slow" with a default >>> > base URI at khronos.org. >>> >>> I have no experience with that whatsoever, so no opinion. I just want to >>> make one point: any solution that relies on parsing strings is probably bad. >>> I'm not saying that the URIs approach relies on parsing strings, but I do >>> see strings in the above paragraph, so I'm saying that just in case. 
>>> >>> Even if somehow that particular kind of string parsing can't be gotten >>> wrong by applications, it remains that any string parsing will be more >>> complicated to use than just >>> >>> ?if (advisories.slow) >>> >>> Cheers, >>> Benoit >>> >>> > >>> > This same host profile facility is probably useful to a wide variety >>> > of nouveau browser APIs. Integrating with an extant standard or host >>> > profile system would be preferable to creating a new profile system. >>> > I >>> > believe most logical assertions regarding Web hosts, resources, and >>> > clients should attempt to use the federated namespace structure of >>> > URI >>> > that is now ubiquitous. >>> > >>> > David >>> > >>> > > 4. Whenever the device is no longer a major cause of trouble, e.g. >>> > > if a work-around has been implemented since, the device advisory >>> > > can be dropped. >>> > > >>> > > >>> > > Some reasons why I like this proposal while I was against the >>> > > vendor/renderer/version strings: >>> > > >>> > > ?* Any exposed information is directly useful. By contrast, >>> > > ?UA-string-like solutions rely on applications to correctly parse >>> > > ?them and correctly make use of that information, which is a >>> > > ?well-known cause of artificial portability problems on the Web. >>> > > ?* The default is to expose no information. >>> > > ?* If a browser always returns null, that's totally cool, apps will >>> > > ?just assume no driver issue then. >>> > > ?* The amount of information exposed is minimal (I don't think we'd >>> > > ?expose more than 2 bits given that the OS name is already >>> > > ?exposed, compared to about 10 bits for the renderer/version >>> > > ?strings). >>> > > ?* The information exposed pertains to specific testable issues, so >>> > > ?it could be obtained fairly easily anyway by running some WebGL >>> > > ?code exposing the corresponding issue. >>> > > >>> > > >>> > > Why do I care about this now? Partly from conversations with Web >>> > > developers, and partly because browsers are getting software >>> > > fallbacks for WebGL (Chrome already has it now), and I am >>> > > concerned that silently falling back to software rendering for >>> > > WebGL is going to break applications that have a WebGL and a >>> > > non-WebGL renderer and make the naive assumption that if WebGL is >>> > > available then it is necessarily the better choice. >>> > > >>> > > >>> > > Opinions? 
>>> > > Benoit >>> > > >>> > > ----------------------------------------------------------- >>> > > You are currently subscribed to public_webgl...@ >>> > > To unsubscribe, send an email to majordomo...@ with >>> > > the following command in the body of your email: >>> > > unsubscribe public_webgl >>> > > ----------------------------------------------------------- >>> > > >>> > >>> >>> ----------------------------------------------------------- >>> You are currently subscribed to public_webgl...@ >>> To unsubscribe, send an email to majordomo...@ with >>> the following command in the body of your email: >>> unsubscribe public_webgl >>> ----------------------------------------------------------- >>> >> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From con...@ Wed May 2 14:50:08 2012 From: con...@ (Conor Dickinson) Date: Wed, 2 May 2012 14:50:08 -0700 Subject: [Public WebGL] Caching shader compile assembly Message-ID: Not sure if this email should go to this mailing list or not, so if there is a better place for this please let me know. I have found that calls to linkProgram for our shaders take between 300 ms and 800 ms on Windows in Firefox and Chrome, but on Mac they take less than 10 ms. My guess is that the extra time comes from the GLSL -> HLSL -> assembly conversion that doesn't happen on Mac (I know that HLSL -> assembly is extremely slow because of all the optimizations that D3D runs on the shaders). For us this is taking about 6 to 8 seconds of our load time even if all of our assets are in the browser cache. Is there any way that the browser can cache the results of the shader compiling so that subsequent visits to our app do not take so long to load? I can definitely understand not wanting to give us back the assembled shaders to cache ourselves (for privacy reasons), but it seems reasonable for the browser to cache the results based on the shader string, GPU, and driver version. Conor Dickinson Cloud Party, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Wed May 2 14:57:47 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo5YukKQ==?=) Date: Wed, 2 May 2012 14:57:47 -0700 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: References: <333299583.695193.1335973629092.JavaMail.root@mozilla.com> Message-ID: On Wed, May 2, 2012 at 2:40 PM, David Sheets wrote: > On Wed, May 2, 2012 at 1:47 PM, Gregg Tavares (?) wrote: > > > > > > On Wed, May 2, 2012 at 11:53 AM, Ashley Gullen > wrote: > >> > >> I think this is a great idea and I'm desperate for something like this. > >> Our engine implements both a WebGL and Canvas 2D renderer, and > currently > >> the Canvas 2D renderer is never used in Chrome 18 due to Swiftshader. > I am > >> keen to fall back to Canvas 2D instead of using Swiftshader but there > is no > >> way to do that. > > > > > > That's a little bit of an exaggeration. You can certainly choose Canvas > 2D > > at anytime. You run a small benchmark and switch. 
> > You include a small open source benchmark script which shims > getHostProfile() with > {"http://helpfulpeople.com/webgl/is_canvas2d_faster": true|false} > > This is a declarative rather than imperative interface as many of the > most useful web interfaces are declarative. > > "If you can do it with data, don't do it with code." > > > You can give the user > > options like nearly every Window's game in existence. (screen size, > > rendering features, texture detail, etc...) > > Options that could be eliminated by better automatic environment > detection is bad UX. > It's not that cut and dry. Try removing all the graphic options in Modern Warfare 3 or Battlefield 3 and you'll hear lots of screaming users. Some will trade speed for beauty. Others prefer beauty over speed. > > I'm not arguing against the proposal. I just don't think it should be > > decided by arguably false points. > > Whether it is done now by this body or in 2014 by another, this sort > of feature will exist. Analogous interfaces already exist in many, > many diagnostic and profiling libraries and other new web APIs. Many > libraries provide ad hoc feature detection. > > WebGL is in particularly dire need of this sort of _standard_ > capability information due to the complexity of the underlying > subsystems. > > Khronos has the chance to lead the entire World Wide Web here by > introducing an extensible host profile API. Even if no 'privileged' > host information (like make, model, or implementation) is exposed via > this API today, the interface should be in place and returning a > well-typed {}. > > I'm sure there are other groups who have encountered this sort of > issue. Perhaps someone knows of a similar system implemented > elsewhere? > > The discussion over the specific properties to expose is irrelevant. > Properties will be exposed. They will be useful. > > The WebGL WG isn't architecting one stand-alone standard. The WebGL WG > is doing city planning and local governance for the future of > cyberspace. As such, fundamental questions like "what are the types > allowed across this interface?" and "what is the canonical data > schema?" are far more important than whether the WG decides to make a > predicate called "slow" or "really_slow" and what precisely that > means. > > Without deciding a single property to provide, the WG can opt to > create a new method call ("getDeviceProfile"? "getHostProfile"?) that > returns a JS hash-like object, {}. > > JS hash-like objects have two type parameters, key and value. To > support JS implementations that only provide keys as strings, the type > of the key must be some sort of string. I propose the following: > > 1. This string is a relative URI reference with a base URI of > "http://www.khronos.org/registry/webgl/profiles/" > 2. For every relative URI reference under the default Khronos base > URI, an identical value is inserted in the {} profile with the > resolved absolute URI. Now we're having our cake and eating it, too. > And that's no lie. > > The type of the values can be any JavaScript object at the most > general. The allowable specific JavaScript types used as objects of > the profile predicates is itself a property of the predicate. This > type constraint may be decided by the party with authority over the > URI of the predicate. > > Thoughts? > > David > > >> > >> > >> I don't think "slow" is a good term to use because it seems to me to be > >> subjective. 
Suppose someone has a weird system with a fully-hardware > 2006 > >> graphics card and a modern 8-core 2012 CPU. Swiftshader might be faster > >> than the GPU. Which is "slow"? > >> > >> I'd rather one of these terms, or both: > >> "no_gpu": no graphics-specific hardware chip is present. This does not > >> mean "software rendering". A GPU using part software rendering still > counts > >> as a GPU. > >> "blacklisted": a device is present but was blacklisted (not used) due to > >> known issues. A Swiftshader WebGL context would say "blacklisted", > because > >> it *could* have used a GPU, but didn't because the driver was unstable > or > >> whatever. In that case I'd want to ditch the blacklisted WebGL context > and > >> fall back to Canvas 2D. Alternatively a message could be issued > indicating > >> the user needs to upgrade their driver or hardware. Considering > Firefox's > >> stats show something like 50% of users have a blacklisted driver, I > think > >> it's essential to expose this information. > >> > >> Note they don't mean the same thing, if you have no graphics hardware > then > >> no_gpu is true and blacklisted is false (a non-existent graphics card > is not > >> blacklisted), whereas a blacklisted GPU sets no_gpu to false and > blacklisted > >> to true. > >> > >> If there is the possibility of multiple GPUs with different advisories > >> then I suppose they should be associated with the context that you > actually > >> got. > >> > >> I would then myself define 'slow' as: > >> var slow = advisory.no_gpu || advisory.blacklisted; > >> which I think is what most people's definition really is, but is not > >> enforced or suggested by the browser, it's the developer's > interpretation. > >> > >> Some people have expressed concern that this will make the weird system > >> with the 2006 graphics card and 2012 CPU run slower than if it had used > the > >> CPU. My answer is: I don't care. If there is a GPU there I want to > use it, > >> because it is probably more power-efficient than the CPU, which is > important > >> on mobile. Also, a good software renderer like Swiftshader will burn > up all > >> your CPU cores, which is a little obnoxious - lots of people complained > >> about how Flash would always "use 100% CPU" for a simple advert when > looking > >> at a simple web page; people notice if they have a noisy fan; other apps > >> struggle for CPU time; in apps like games, logic is fighting with > rendering > >> for CPU time, so there's less parallelism. So in short you're system is > >> designed wrong (why install a graphics chip which has the overall > effect of > >> *slowing down* graphics rendering??) and I would say the software > rendering > >> is still "wrong" even if it's faster because it's *inefficient*, which > is a > >> slightly different meaning to "slow". > >> > >> Ashley Gullen > >> Scirra.com > >> > >> > >> On 2 May 2012 16:47, Benoit Jacob wrote: > >>> > >>> > >>> > >>> > >>> ----- Original Message ----- > >>> > On Tue, May 1, 2012 at 11:22 AM, Benoit Jacob > >>> > wrote: > >>> > > > >>> > > Hi, > >>> > > > >>> > > We've discussed several times before, and rejected, proposals to > >>> > > expose device/driver info to content. Here's another, different > >>> > > proposal. > >>> > > > >>> > > 1. add a new function --- let's call it getDeviceAdvisories() --- > >>> > > that can be called without a WebGL context, as it will be meant to > >>> > > be called before getContext. For example, it could be a method on > >>> > > the window object. > >>> > > > >>> > > 2. 
getDeviceAdvisories() returns a JS Object, which may be null. By > >>> > > default, the browser will return a null object, which should be > >>> > > interpreted as "nothing special to say about this device". > >>> > > > >>> > > 3. Now if browser developers get reports from application > >>> > > developers that a certain device is causing them trouble, browser > >>> > > vendors can quickly update this object to add some properties that > >>> > > are characteristic of this device. For example, if a driver is > >>> > > known to have very slow VTF, the object returned would have a > >>> > > boolean property, > >>> > > > >>> > > vtf_slow = true > >>> > > > >>> > > We could also imagine > >>> > > > >>> > > slow = true > >>> > > > >>> > > for software-only WebGL renderers, as many apps that have a good > >>> > > Canvas2D or even non-Canvas fallback would prefer it over a slow > >>> > > WebGL implementation. > >>> > > > >>> > > It should even be possible to update it within 24 hours without > >>> > > requiring a software update, as is typically done for blocklist > >>> > > updates. I imagine that JSON would be a convenient format for > >>> > > browsers to store this information. > >>> > > >>> > I propose using URIs for capability profile predicates. > >>> > > >>> > If you dislike the idea of using absolute URIs, I propose using > >>> > relative URI references like "#vtf_slow" and "#slow" with a default > >>> > base URI at khronos.org. > >>> > >>> I have no experience with that whatsoever, so no opinion. I just want > to > >>> make one point: any solution that relies on parsing strings is > probably bad. > >>> I'm not saying that the URIs approach relies on parsing strings, but I > do > >>> see strings in the above paragraph, so I'm saying that just in case. > >>> > >>> Even if somehow that particular kind of string parsing can't be gotten > >>> wrong by applications, it remains that any string parsing will be more > >>> complicated to use than just > >>> > >>> if (advisories.slow) > >>> > >>> Cheers, > >>> Benoit > >>> > >>> > > >>> > This same host profile facility is probably useful to a wide variety > >>> > of nouveau browser APIs. Integrating with an extant standard or host > >>> > profile system would be preferable to creating a new profile system. > >>> > I > >>> > believe most logical assertions regarding Web hosts, resources, and > >>> > clients should attempt to use the federated namespace structure of > >>> > URI > >>> > that is now ubiquitous. > >>> > > >>> > David > >>> > > >>> > > 4. Whenever the device is no longer a major cause of trouble, e.g. > >>> > > if a work-around has been implemented since, the device advisory > >>> > > can be dropped. > >>> > > > >>> > > > >>> > > Some reasons why I like this proposal while I was against the > >>> > > vendor/renderer/version strings: > >>> > > > >>> > > * Any exposed information is directly useful. By contrast, > >>> > > UA-string-like solutions rely on applications to correctly parse > >>> > > them and correctly make use of that information, which is a > >>> > > well-known cause of artificial portability problems on the Web. > >>> > > * The default is to expose no information. > >>> > > * If a browser always returns null, that's totally cool, apps will > >>> > > just assume no driver issue then. > >>> > > * The amount of information exposed is minimal (I don't think we'd > >>> > > expose more than 2 bits given that the OS name is already > >>> > > exposed, compared to about 10 bits for the renderer/version > >>> > > strings). 
> >>> > > * The information exposed pertains to specific testable issues, so > >>> > > it could be obtained fairly easily anyway by running some WebGL > >>> > > code exposing the corresponding issue. > >>> > > > >>> > > > >>> > > Why do I care about this now? Partly from conversations with Web > >>> > > developers, and partly because browsers are getting software > >>> > > fallbacks for WebGL (Chrome already has it now), and I am > >>> > > concerned that silently falling back to software rendering for > >>> > > WebGL is going to break applications that have a WebGL and a > >>> > > non-WebGL renderer and make the naive assumption that if WebGL is > >>> > > available then it is necessarily the better choice. > >>> > > > >>> > > > >>> > > Opinions? > >>> > > Benoit > >>> > > > >>> > > ----------------------------------------------------------- > >>> > > You are currently subscribed to public_webgl...@ > >>> > > To unsubscribe, send an email to majordomo...@ with > >>> > > the following command in the body of your email: > >>> > > unsubscribe public_webgl > >>> > > ----------------------------------------------------------- > >>> > > > >>> > > >>> > >>> ----------------------------------------------------------- > >>> You are currently subscribed to public_webgl...@ > >>> To unsubscribe, send an email to majordomo...@ with > >>> the following command in the body of your email: > >>> unsubscribe public_webgl > >>> ----------------------------------------------------------- > >>> > >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gle...@ Wed May 2 14:59:26 2012 From: gle...@ (Glenn Maynard) Date: Wed, 2 May 2012 16:59:26 -0500 Subject: [Public WebGL] Caching shader compile assembly In-Reply-To: References: Message-ID: On Wed, May 2, 2012 at 4:50 PM, Conor Dickinson wrote: > Not sure if this email should go to this mailing list or not, so if there > is a better place for this please let me know. > > I have found that calls to linkProgram for our shaders take between 300 ms > and 800 ms on Windows in Firefox and Chrome, but on Mac they take less than > 10 ms. My guess is that the extra time comes from the GLSL -> HLSL -> > assembly conversion that doesn't happen on Mac (I know that HLSL -> > assembly is extremely slow because of all the optimizations that D3D runs > on the shaders). > > For us this is taking about 6 to 8 seconds of our load time even if all of > our assets are in the browser cache. Is there any way that the browser can > cache the results of the shader compiling so that subsequent visits to our > app do not take so long to load? I can definitely understand not wanting > to give us back the assembled shaders to cache ourselves (for privacy > reasons), but it seems reasonable for the browser to cache the results > based on the shader string, GPU, and driver version. > You could do this in a safe way, by encrypting the data with a key stored on the user's system; all the web app would see is an opaque, random-looking blob. This would be perfectly safe. (It would also include a hash of the shader source; if it doesn't match, or doesn't decrypt, or if the browser version, etc. have changed incompatibly, it would simply be ignored and compile the shader from scratch.) I don't think it's possible to implement this for GLES-based implementations, but if it's possible on D3D-based implementations, it could probably be done. On systems not supporting it, it would simply always return the empty string, so it always recompiles. 
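If something along these lines existed, the application-visible pattern might look like the following sketch. Every method name here (loadProgramBinaryBlob, getProgramBinaryBlob) is entirely hypothetical, invented only to illustrate the opaque-blob idea described above; gl, vs, fs, program, the shader sources and programId are assumed to exist already:

    // Hypothetical API sketch; no such WebGL calls exist today.
    var key = "shader-cache:" + programId; // programId: application-defined
    var blob = localStorage.getItem(key);  // opaque string from a previous visit, or null
    if (!blob || !gl.loadProgramBinaryBlob(program, blob)) { // hypothetical restore call
      gl.shaderSource(vs, vsSource); gl.compileShader(vs);
      gl.shaderSource(fs, fsSource); gl.compileShader(fs);
      gl.attachShader(program, vs); gl.attachShader(program, fs);
      gl.linkProgram(program);
      var newBlob = gl.getProgramBinaryBlob(program); // hypothetical export; "" if unsupported
      if (newBlob) localStorage.setItem(key, newBlob);
    }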
-- Glenn Maynard -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Wed May 2 15:00:08 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo5YukKQ==?=) Date: Wed, 2 May 2012 15:00:08 -0700 Subject: [Public WebGL] Caching shader compile assembly In-Reply-To: References: Message-ID: On Wed, May 2, 2012 at 2:50 PM, Conor Dickinson wrote: > Not sure if this email should go to this mailing list or not, so if there > is a better place for this please let me know. > > I have found that calls to linkProgram for our shaders take between 300 ms > and 800 ms on Windows in Firefox and Chrome, but on Mac they take less than > 10 ms. My guess is that the extra time comes from the GLSL -> HLSL -> > assembly conversion that doesn't happen on Mac (I know that HLSL -> > assembly is extremely slow because of all the optimizations that D3D runs > on the shaders). > > For us this is taking about 6 to 8 seconds of our load time even if all of > our assets are in the browser cache. Is there any way that the browser can > cache the results of the shader compiling so that subsequent visits to our > app do not take so long to load? I can definitely understand not wanting > to give us back the assembled shaders to cache ourselves (for privacy > reasons), but it seems reasonable for the browser to cache the results > based on the shader string, GPU, and driver version. > For chrome it's on our todo list http://code.google.com/p/chromium/issues/detail?id=88572 no ETA though > > > Conor Dickinson > Cloud Party, Inc. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bag...@ Wed May 2 15:01:02 2012 From: bag...@ (Patrick Baggett) Date: Wed, 2 May 2012 17:01:02 -0500 Subject: [Public WebGL] Caching shader compile assembly In-Reply-To: References: Message-ID: Just by the way OpenGL works on the desktop, I'm thinking the answer is going to be "not without a lot of effort". If the browser vendors used "GL_ARB_get_program_binary" and kept a separate "compiled program cache", I think it is possible. Patrick On Wed, May 2, 2012 at 4:50 PM, Conor Dickinson wrote: > Not sure if this email should go to this mailing list or not, so if there > is a better place for this please let me know. > > I have found that calls to linkProgram for our shaders take between 300 ms > and 800 ms on Windows in Firefox and Chrome, but on Mac they take less than > 10 ms. My guess is that the extra time comes from the GLSL -> HLSL -> > assembly conversion that doesn't happen on Mac (I know that HLSL -> > assembly is extremely slow because of all the optimizations that D3D runs > on the shaders). > > For us this is taking about 6 to 8 seconds of our load time even if all of > our assets are in the browser cache. Is there any way that the browser can > cache the results of the shader compiling so that subsequent visits to our > app do not take so long to load? I can definitely understand not wanting > to give us back the assembled shaders to cache ourselves (for privacy > reasons), but it seems reasonable for the browser to cache the results > based on the shader string, GPU, and driver version. > > > Conor Dickinson > Cloud Party, Inc. > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pya...@ Wed May 2 15:04:27 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Thu, 3 May 2012 00:04:27 +0200 Subject: [Public WebGL] Caching shader compile assembly In-Reply-To: References: Message-ID: On Wed, May 2, 2012 at 11:50 PM, Conor Dickinson wrote: > I have found that calls to linkProgram for our shaders take between 300 ms > and 800 ms on Windows in Firefox and Chrome, but on Mac they take less than > 10 ms. My guess is that the extra time comes from the GLSL -> HLSL -> > assembly conversion that doesn't happen on Mac (I know that HLSL -> > assembly is extremely slow because of all the optimizations that D3D runs > on the shaders). > I've observed this extremely sluggish behavior as well, there also seem to be cases where compilation of large-ish shaders takes significantly longer, or crashes the compiler altogether. A compiled client side shader cache would be a lovely idea (based on source hash for example) regardless of performance differences, it can also take considerable time on macs and linux to compile 50 shaders. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Wed May 2 15:08:01 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Thu, 3 May 2012 00:08:01 +0200 Subject: [Public WebGL] Caching shader compile assembly In-Reply-To: References: Message-ID: On Thu, May 3, 2012 at 12:01 AM, Patrick Baggett wrote: > Just by the way OpenGL works on the desktop, I'm thinking the answer is > going to be "not without a lot of effort". If the browser vendors used > "GL_ARB_get_program_binary" and kept a separate "compiled program cache", I > think it is possible. But the thing we haven't solved with this caching is the first-load experience :/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From bag...@ Wed May 2 15:11:03 2012 From: bag...@ (Patrick Baggett) Date: Wed, 2 May 2012 17:11:03 -0500 Subject: [Public WebGL] Caching shader compile assembly In-Reply-To: References: Message-ID: On Wed, May 2, 2012 at 5:08 PM, Florian B?sch wrote: > On Thu, May 3, 2012 at 12:01 AM, Patrick Baggett < > baggett.patrick...@> wrote: > >> Just by the way OpenGL works on the desktop, I'm thinking the answer is >> going to be "not without a lot of effort". If the browser vendors used >> "GL_ARB_get_program_binary" and kept a separate "compiled program cache", I >> think it is possible. > > > But the thing we haven't solved with this caching is the first-load > experience :/ > Yeah, but caching is a bit easier a problem to solve than "NVIDIA/AMD/Intel, stop being lazy and make better shader compilers that are faster!!!" -------------- next part -------------- An HTML attachment was scrubbed... URL: From toj...@ Wed May 2 15:16:24 2012 From: toj...@ (Brandon Jones) Date: Wed, 2 May 2012 15:16:24 -0700 Subject: [Public WebGL] Caching shader compile assembly In-Reply-To: References: Message-ID: It's worth noting that a great many desktop games will run through an "Optimizing shaders" step on first run or possibly during install. In that sense, the first-run experience for a WebGL app vs. a Desktop app wouldn't be that different. Trying to cache compiled shaders server side strikes me as a bad idea anyway. You're denying the driver an opportunity to make the best choices for that particular hardware. 
--Brandon On Wed, May 2, 2012 at 3:11 PM, Patrick Baggett wrote: > > > On Wed, May 2, 2012 at 5:08 PM, Florian Bösch wrote: > >> On Thu, May 3, 2012 at 12:01 AM, Patrick Baggett < >> baggett.patrick...@> wrote: >> >>> Just by the way OpenGL works on the desktop, I'm thinking the answer is >>> going to be "not without a lot of effort". If the browser vendors used >>> "GL_ARB_get_program_binary" and kept a separate "compiled program cache", I >>> think it is possible. >> >> >> But the thing we haven't solved with this caching is the first-load >> experience :/ >> > > Yeah, but caching is a bit easier a problem to solve than > "NVIDIA/AMD/Intel, stop being lazy and make better shader compilers that > are faster!!!" > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Wed May 2 15:22:35 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Thu, 3 May 2012 00:22:35 +0200 Subject: [Public WebGL] Caching shader compile assembly In-Reply-To: References: Message-ID: On Thu, May 3, 2012 at 12:16 AM, Brandon Jones wrote: > It's worth noting that a great many desktop games will run through an > "Optimizing shaders" step on first run or possibly during install. In that > sense, the first-run experience for a WebGL app vs. a Desktop app wouldn't > be that different. Lengthy setup phases are kind of a problem for web-apps. The expectation of web-users is different than for desktop app install users. > Trying to cache compiled shaders server side strikes me as a bad idea > anyway. You're denying the driver an opportunity to make the best choices > for that particular hardware. > Yeah serverside caching sounds like a bad idea. -------------- next part -------------- An HTML attachment was scrubbed... URL: From toj...@ Wed May 2 15:40:39 2012 From: toj...@ (Brandon Jones) Date: Wed, 2 May 2012 15:40:39 -0700 Subject: [Public WebGL] Caching shader compile assembly In-Reply-To: References: Message-ID: On Wed, May 2, 2012 at 3:22 PM, Florian Bösch wrote: > Lengthy setup phases are kind of a problem for web-apps. The expectation > of web-users is different than for desktop app install users. > I agree, but I can't help but think that WebGL apps are going to forcibly break that perception in the near future. Content either must be very simple or very specifically coded to be streaming friendly to provide a good instant-on experience, and that's simply not possible in some cases. I do feel strongly that we shouldn't do anything to explicitly prevent a developer from creating an experience like that, but this shader caching doesn't really harm or help that scenario. --Brandon -------------- next part -------------- An HTML attachment was scrubbed... URL: From gle...@ Wed May 2 15:41:27 2012 From: gle...@ (Glenn Maynard) Date: Wed, 2 May 2012 17:41:27 -0500 Subject: [Public WebGL] Caching shader compile assembly In-Reply-To: References: Message-ID: On Wed, May 2, 2012 at 5:08 PM, Florian Bösch wrote: > On Thu, May 3, 2012 at 12:01 AM, Patrick Baggett < > baggett.patrick...@> wrote: > >> Just by the way OpenGL works on the desktop, I'm thinking the answer is >> going to be "not without a lot of effort". If the browser vendors used >> "GL_ARB_get_program_binary" and kept a separate "compiled program cache", I >> think it is possible.
> > > But the thing we haven't solved with this caching is the first-load > experience :/ > It would be a big improvement to be able to compile shaders asynchronously, so browser tabs don't freeze up and you can keep smoothly rendering any loading animations, etc. Like other things this is probably not possible for pure GLES-based implementations, but it's almost certainly possible for D3D-backed ones, and it probably is for OpenGL (non-ES) ones (where if I remember correctly you can compile in a new context in another thread, then use resource sharing extensions to move the compiled shader across). Technically speaking compilation is asynchronous now (compileShader can return immediately), but there's no way to tell if the compilation is finished; all you can do is call finish() or getLastError(), which block. One possible API would be a call which causes an event to be dispatched when the render queue is empty; you'd call it, then return to the browser, and the event would be dispatched once it's possible to call getLastError without blocking. On implementations that can't do this (GLES, probably), they'd just send the message immediately when you return to the browser. (I'm just brainstorming; if anyone thinks this is interesting enough to discuss further, please bump replies to a new thread.) On Wed, May 2, 2012 at 5:16 PM, Brandon Jones wrote: > It's worth noting that a great many desktop games will run through an > "Optimizing shaders" step on first run or possibly during install. In that > sense, the first-run experience for a WebGL app vs. a Desktop app wouldn't > be that different. > > Trying to cache compiled shaders server side strikes me as a bad idea > anyway. You're denying the driver an opportunity to make the best choices > for that particular hardware. > It's not, since it's caching it for the user's particular configuration; the browser can always discard it. I'm not saying it's wrong to do it client-side, just that "server-side"--more likely something like IndexedDB, which is technically client-side--doesn't make this worse. -- Glenn Maynard -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Wed May 2 15:49:02 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Thu, 3 May 2012 00:49:02 +0200 Subject: [Public WebGL] Caching shader compile assembly In-Reply-To: References: Message-ID: On Thu, May 3, 2012 at 12:41 AM, Glenn Maynard wrote: > I'm not saying it's wrong to do it client-side, just that > "server-side"--more likely something like IndexedDB, which is technically > client-side--doesn't make this worse. > So I understand that position as "we give you an undecipherable blob for a compiled shader, and you just do with it what you want". I see a couple problems with that: - purely from a web-dev perspective, having to store terabytes of cryptic blobs would be brain-dead, nobody's seriously going to do that if they can avoid it. - storing on the client side would probably be the preferred method - IndexedDB is not available on some browsers, and it often comes with limits attached. - Cache retirement of a cached shader would have to be tied to GFX/Driver in use on the context. If you just hand out a cryptic blob, the app-developer has no clue when to retire his cache. -------------- next part -------------- An HTML attachment was scrubbed...
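(Illustrative sketch only: roughly what the application side of "store the cryptic blob yourself" would look like with plain IndexedDB, written in the modern spelling and ignoring the vendor prefixes of the day; the database and store names are made up. The part that cannot be written is the interesting one, since nothing tells the page that the GPU or driver has changed and that the entry stored under this key has gone stale.)

function saveCompiledBlob(sourceHash, blob) {
  var open = indexedDB.open("shader-cache", 1);
  open.onupgradeneeded = function(e) {
    e.target.result.createObjectStore("programs");
  };
  open.onsuccess = function(e) {
    var db = e.target.result;
    db.transaction("programs", "readwrite")
      .objectStore("programs")
      .put(blob, sourceHash); // keyed only on a hash of the GLSL source; the driver is invisible to the page
  };
}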
URL: From gma...@ Wed May 2 15:50:50 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo5YukKQ==?=) Date: Wed, 2 May 2012 15:50:50 -0700 Subject: [Public WebGL] Caching shader compile assembly In-Reply-To: References: Message-ID: On Wed, May 2, 2012 at 3:41 PM, Glenn Maynard wrote: > On Wed, May 2, 2012 at 5:08 PM, Florian Bösch wrote: > >> On Thu, May 3, 2012 at 12:01 AM, Patrick Baggett < >> baggett.patrick...@> wrote: >> >>> Just by the way OpenGL works on the desktop, I'm thinking the answer is >>> going to be "not without a lot of effort". If the browser vendors used >>> "GL_ARB_get_program_binary" and kept a separate "compiled program cache", I >>> think it is possible. >> >> >> But the thing we haven't solved with this caching is the first-load >> experience :/ >> > > It would be a big improvement to be able to compile shaders > asynchronously, so browser tabs don't freeze up and you can keep smoothly > rendering any loading animations, etc. Like other things this is probably > not possible for pure GLES-based implementations, but it's almost certainly > possible for D3D-backed ones, and it probably is for OpenGL (non-ES) ones > (where if I remember correctly you can compile in a new context in another > thread, then use resource sharing extensions to move the compiled shader > across). > > Technically speaking compilation is asynchronous now (compileShader can > return immediately), but there's no way to tell if the compilation is > finished; all you can do is call finish() or getLastError(), which block. > One possible API would be a call which causes an event to be dispatched > when the render queue is empty; you'd call it, then return to the browser, > and the event would be dispatched once it's possible to call getLastError > without blocking. On implementations that can't do this (GLES, probably), > they'd just send the message immediately when you return to the browser. > > (I'm just brainstorming; if anyone thinks this is interesting enough to > discuss further, please bump replies to a new thread.) > So just FYI, you can already _effectively_ do this on Chrome. Call

vs = gl.createShader(gl.VERTEX_SHADER);
fs = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(vs, vsrc);
gl.shaderSource(fs, fsrc);
gl.compileShader(vs);
gl.compileShader(fs);
p = gl.createProgram();
gl.attachShader(p, vs);
gl.attachShader(p, fs);
gl.linkProgram(p);
setTimeout(checkResults, 1000);

function checkResults() {
  if (!gl.getProgramParameter(p, gl.LINK_STATUS)) {
    ...
  }
}

You can't know how long to set the timeout but you can make the entire thing effectively async. Certainly an async API might be nice in the future. I'm just pointing out what's available now. Note: You probably need Chrome 20 to get the full effect. > > On Wed, May 2, 2012 at 5:16 PM, Brandon Jones wrote: > >> It's worth noting that a great many desktop games will run through an >> "Optimizing shaders" step on first run or possibly during install. In that >> sense, the first-run experience for a WebGL app vs. a Desktop app wouldn't >> be that different. >> >> Trying to cache compiled shaders server side strikes me as a bad idea >> anyway. You're denying the driver an opportunity to make the best choices >> for that particular hardware. >> > > It's not, since it's caching it for the user's particular configuration; > the browser can always discard it. > > I'm not saying it's wrong to do it client-side, just that > "server-side"--more likely something like IndexedDB, which is technically > client-side--doesn't make this worse.
> > -- > Glenn Maynard > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gle...@ Wed May 2 15:53:57 2012 From: gle...@ (Glenn Maynard) Date: Wed, 2 May 2012 17:53:57 -0500 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: References: <333299583.695193.1335973629092.JavaMail.root@mozilla.com> Message-ID: On Wed, May 2, 2012 at 10:43 AM, Benoit Jacob wrote: > That would remove the reason not to put getDeviceAdvisories on the WebGL > context: users concerned about performance could simply do > > var advisories = canvas.getContext("webgl", {async:true}). > getDeviceAdvisories(); > I don't think this really works. These device properties are associated with the GPU; if the context is lost and restored on a different GPU, the properties change. This means you want an active, non-lost WebGL context in order to retrieve that data. In the above, the context you're calling the function on would be in the lost state, where it doesn't yet know anything about the hardware. Async context creation is still useful here, since it avoids (or at least makes it possible to avoid) a 100ms browser hitch while you do this, but I think you still have to wait the 100ms before this information is available. Good point, but by "slow" I didn't mean "this machine is slow compared to > other machine". Instead, I meant "this feature is slow compared to other > features that you might prefer to use instead, on this given machine". > That's a little vague, especially if there end up being more than two APIs available. If this really means "slower than 2d canvas"--which is still pretty vague, since it probably depends on which features you exercise--maybe it shouldn't pretend to be generic, and actually say "slowerThan2DCanvas"? On Wed, May 2, 2012 at 1:53 PM, Ashley Gullen wrote: > "blacklisted": a device is present but was blacklisted (not used) due to > known issues. A Swiftshader WebGL context would say "blacklisted", because > it *could* have used a GPU, but didn't because the driver was unstable or > whatever. In that case I'd want to ditch the blacklisted WebGL context and > fall back to Canvas 2D. Alternatively a message could be issued indicating > the user needs to upgrade their driver or hardware. Considering Firefox's > stats show something like 50% of users have a blacklisted driver, I think > it's essential to expose this information. > This is a bit of a different issue. The solution for this that's been kicked around for a while is to add a parameter to context creation, eg. " suppressUpgradePrompt". If not set, the browser is allowed to pop up a UI saying "your drivers are too old for WebGL, click here for new ones". If set, it would act as now and fail silently. Browsers currently can't do this, because there's no way for the browser to know if showing that is appropriate or not. If the page has a Canvas (or DOM, for that matter) fallback, or if it's an optional feature of the page that can simply be turned off, pages often won't want browsers distracting the user with that. -- Glenn Maynard -------------- next part -------------- An HTML attachment was scrubbed... URL: From gle...@ Wed May 2 16:07:57 2012 From: gle...@ (Glenn Maynard) Date: Wed, 2 May 2012 18:07:57 -0500 Subject: [Public WebGL] Caching shader compile assembly In-Reply-To: References: Message-ID: On Wed, May 2, 2012 at 5:50 PM, Gregg Tavares (?) 
wrote: > > You can't know how long to set the timeout but you can make the entire > thing effectively async. Certainly a async API might be nice in the future. > I'm just pointing out what's available now. Note: You probably need Chrome > 20 to get the full effect. > Unfortunately, if you don't wait long enough, you can't distinguish whether the false result means "the link failed" or "the link hasn't happened yet (so wait and try again)". It's pretty close, though. -- Glenn Maynard -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Wed May 2 16:16:44 2012 From: bja...@ (Benoit Jacob) Date: Wed, 2 May 2012 16:16:44 -0700 (PDT) Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: Message-ID: <2128895844.1111477.1336000604604.JavaMail.root@mozilla.com> ----- Original Message ----- > On Wed, May 2, 2012 at 10:43 AM, Benoit Jacob < bjacob...@ > > wrote: > > That would remove the reason not to put getDeviceAdvisories on the > > WebGL context: users concerned about performance could simply do > > > var advisories = canvas.getContext("webgl", {async:true}). > > getDeviceAdvisories(); > > I don't think this really works. These device properties are > associated with the GPU; if the context is lost and restored on a > different GPU, the properties change. This means you want an active, > non-lost WebGL context in order to retrieve that data. In the above, > the context you're calling the function on would be in the lost > state, where it doesn't yet know anything about the hardware. > Async context creation is still useful here, since it avoids (or at > least makes it possible to avoid) a 100ms browser hitch while you do > this, but I think you still have to wait the 100ms before this > information is available. You don't need to do the actual OpenGL context creation just to return the correct advisory: all you need is GPU/driver detection and blacklist processing. So we could imagine that async context creation makes getDeviceAdvisories available before the webglcontextrestored event. For example, it could be available either: - immediately after getContext returns (requires the browser to synchronously check the blacklist, which is cheap enough) - or once the webglcontextlost event is dispatched, deferring OpenGL context creation until the script has handled that event - or we could introduce a new kind of event. Benoit > > Good point, but by "slow" I didn't mean "this machine is slow > > compared to other machine". Instead, I meant "this feature is slow > > compared to other features that you might prefer to use instead, on > > this given machine". > > That's a little vague, especially if there end up being more than two > APIs available. If this really means "slower than 2d canvas"--which > is still pretty vague, since it probably depends on which features > you exercise--maybe it shouldn't pretend to be generic, and actually > say "slowerThan2DCanvas"? > On Wed, May 2, 2012 at 1:53 PM, Ashley Gullen < ashley...@ > > wrote: > > "blacklisted": a device is present but was blacklisted (not used) > > due > > to known issues. A Swiftshader WebGL context would say > > "blacklisted", because it *could* have used a GPU, but didn't > > because the driver was unstable or whatever. In that case I'd want > > to ditch the blacklisted WebGL context and fall back to Canvas 2D. > > Alternatively a message could be issued indicating the user needs > > to > > upgrade their driver or hardware. 
Considering Firefox's stats show > > something like 50% of users have a blacklisted driver, I think it's > > essential to expose this information. > > This is a bit of a different issue. The solution for this that's been > kicked around for a while is to add a parameter to context creation, > eg. " suppressUpgradePrompt ". If not set, the browser is allowed to > pop up a UI saying "your drivers are too old for WebGL, click href= nvidia.com/drivers >here for new ones". If set, it would > act as now and fail silently. > Browsers currently can't do this, because there's no way for the > browser to know if showing that is appropriate or not. If the page > has a Canvas (or DOM, for that matter) fallback, or if it's an > optional feature of the page that can simply be turned off, pages > often won't want browsers distracting the user with that. > -- > Glenn Maynard -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Wed May 2 16:16:29 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo5YukKQ==?=) Date: Wed, 2 May 2012 16:16:29 -0700 Subject: [Public WebGL] Caching shader compile assembly In-Reply-To: References: Message-ID: On Wed, May 2, 2012 at 4:07 PM, Glenn Maynard wrote: > On Wed, May 2, 2012 at 5:50 PM, Gregg Tavares (?) wrote: >> >> You can't know how long to set the timeout but you can make the entire >> thing effectively async. Certainly a async API might be nice in the future. >> I'm just pointing out what's available now. Note: You probably need Chrome >> 20 to get the full effect. >> > > Unfortunately, if you don't wait long enough, you can't distinguish > whether the false result means "the link failed" or "the link hasn't > happened yet (so wait and try again)". It's pretty close, though. > Yes you can. GL requires calling glGetProgramiv (in WebGL gl.getProgramParameter) returns the correct result for the link (same for glGetShaderiv, gl.getShaderParameter) So, all that happens is if you setTimeout to 1000ms and it takes 1500ms to compile the shader then you'll wait 500ms when you call gl.getProgramParameter > > > > -- > Glenn Maynard > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gle...@ Wed May 2 16:19:36 2012 From: gle...@ (Glenn Maynard) Date: Wed, 2 May 2012 18:19:36 -0500 Subject: [Public WebGL] Caching shader compile assembly In-Reply-To: References: Message-ID: On Wed, May 2, 2012 at 6:16 PM, Gregg Tavares (?) wrote: > So, all that happens is if you setTimeout to 1000ms and it takes 1500ms to > compile the shader then you'll wait 500ms when you call > gl.getProgramParameter > Which isn't very asynchronous. :) (Another idea comes to mind, but I'll start a new thread to stop hijacking this one.) -- Glenn Maynard -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Wed May 2 16:25:27 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo5YukKQ==?=) Date: Wed, 2 May 2012 16:25:27 -0700 Subject: [Public WebGL] Caching shader compile assembly In-Reply-To: References: Message-ID: On Wed, May 2, 2012 at 4:19 PM, Glenn Maynard wrote: > On Wed, May 2, 2012 at 6:16 PM, Gregg Tavares (?) wrote: > >> So, all that happens is if you setTimeout to 1000ms and it takes 1500ms >> to compile the shader then you'll wait 500ms when you call >> gl.getProgramParameter >> > > Which isn't very asynchronous. :) > It's "effectively" async as I said in the original post. 
I know several apps that are compiling 10-20 shaders with a timeout of 1000 for the entire lot and having no blocking whatsoever. Honestly, I don't feel adding an async API is a real priority. Caching will solve most of the speed problems. They are mostly dwarfed by downloading then uploading textures and geometry. So on the list of things browser vendors could spend their time on related to WebGL, a new async API for shader compilation hardly seems like a priority. > > (Another idea comes to mind, but I'll start a new thread to stop hijacking > this one.) > > -- > Glenn Maynard > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gle...@ Wed May 2 16:37:22 2012 From: gle...@ (Glenn Maynard) Date: Wed, 2 May 2012 18:37:22 -0500 Subject: [Public WebGL] Asynchronous calls Message-ID: Currently, most WebGL calls, like OpenGL, are allowed (though not required) to be asynchronous. finish() or getLastError() will flush the queue, synchronously waiting for the previous commands to finish. This is good for rendering, where you normally avoid calling those functions at all, but it means some operations that should be asynchronous--shader compilation, most recently--are effectively synchronous, since you always want to call getLastError to make sure your shaders compiled and your program linked. A generic solution for this is an asynchronous getLastError. For example,

var shader = ctx.createShader();
ctx.shaderSource(shader, source);
ctx.compileShader(shader);
ctx.getLastErrorAsync(function(e) {
  // All commands up to this point are flushed, and e is the error value.
  // Check the result and start compiling the next shader.
});
return;

The callback is called when the result is available. There are other details (eg. what to do if more WebGL calls are made after getLastErrorAsync but before returning, and if multiple getLastErrorAsync calls are made after that), which I'll put off talking about; for now I'll just put the basic idea out there for people to think about. This could allow making much better use of the asynchronous nature of WebGL/OpenGL, especially at load time (where most getLastError calls are made). It could also allow checking for errors during rendering, which is currently prohibitively expensive, because calling this function wouldn't cause a stall. It wouldn't help with other functions that return values (eg. getParameter), but this is the one that usually matters most. This is probably fairly complex to implement. It may require running the underlying OpenGL/GLES context in its own thread (maybe they already do this; it seems like a good idea on its own). I'm not sure about D3D. -- Glenn Maynard -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Wed May 2 17:35:25 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo5YukKQ==?=) Date: Wed, 2 May 2012 17:35:25 -0700 Subject: [Public WebGL] Asynchronous calls In-Reply-To: References: Message-ID: On Wed, May 2, 2012 at 4:37 PM, Glenn Maynard wrote: > Currently, most WebGL calls, like OpenGL, are allowed (though not > required) to be asynchronous. finish() or getLastError() will flush the > queue, synchronously waiting for the previous commands to finish.
This is > good for rendering, where you normally avoid calling those functions at > all, but it means some operations that should be asynchronous--shader > compilation, most recently--are effectively synchronous, since you always > want to call getLastError to make sure your shaders compiled and your > program linked. > > A generic solution for this is an asynchronous getLastError. For example, > Compile errors are returned with gl.getShaderParameter(shader, gl.COMPILE_STATUS), Link errors are returned with gl.getProgramParameter(program, gl.LINK_STATUS) Neither of those are actually an error. Also, you can compile X shaders and link Y programs all without checking the status of any of them. Which effectively means you can async compile multiple things at once. If you wanted to check them you'd need to be able to check them individually, not with something like getLastError. Which one would it be getting the error for? The fastest way to compile/link right now is something like

function makeShader(type, src) {
  s = gl.createShader(type);
  gl.shaderSource(s, src);
  gl.compileShader(s);
  return s;
}

var shaders = [
  { vsrc: "some v shader", fsrc: "some f shader" },
  { vsrc: "some v shader", fsrc: "some f shader" },
  { vsrc: "some v shader", fsrc: "some f shader" },
  { vsrc: "some v shader", fsrc: "some f shader" },
  { vsrc: "some v shader", fsrc: "some f shader" }
];
var programs = [];
for (var i = 0; i < shaders.length; ++i) {
  var shaderPair = shaders[i];
  vs = makeShader(gl.VERTEX_SHADER, shaderPair.vsrc);
  fs = makeShader(gl.FRAGMENT_SHADER, shaderPair.fsrc);
  p = gl.createProgram();
  gl.attachShader(p, vs);
  gl.attachShader(p, fs);
  gl.linkProgram(p);
  programs.push(p);
}

// Optionally do something else here that takes some time. Upload textures, geometry, contact servers, ....

// Then... check if any of them failed.
for (var i = 0; i < programs.length; ++i) {
  if (!gl.getProgramParameter(programs[i], gl.LINK_STATUS)) {
    // maybe you want to print the program log and check shader status at this point
  }
}

Whether or not you do something between the compile+link and checking for failure it will be much faster than the typical compile, check, compile, check, link, check, compile, check, compile, check, link, check which most GL/WebGL apps do and which stalls the system on each check. AFAIK both of these suggestions will currently only help Chrome. AFAIK all other browsers are calling GL on the same thread as JavaScript and I know of no GL drivers that compile or link on another thread. I don't know what Firefox or Safari's plans are with regards to WebGL but I suspect if this truly is a priority issue it will either require major re-architecting so that those browsers actually issue GL calls on other threads or it will require an API like asyncCompileShader(s, callback) and asyncLinkProgram(p, callback) that is easier to implement given their current implementations. But, back to the original point. I'm still not sure this is a priority. > > var shader = ctx.createShader(); > ctx.shaderSource(shader, source); > ctx.compileShader(shader); > ctx.getLastErrorAsync(function(e) { > // All commands up to this point are flushed, and e is the error > value. Check the result and start compiling the next shader. > }); > return; > > The callback is called when the result is available. > > There are other details (eg.
what to do if more WebGL calls are made after > getLastErrorAsync but before returning, and if multiple getLastErrorAsync > calls are made after that), which I'll put off talking about; for now I'll > just put the basic idea out there for people to think about. This could > allow making much better use of the asynchronous nature of WebGL/OpenGL, > especially at load time (where most getLastError calls are made). > > It could also allow checking for errors during rendering, which is > currently prohibitively expensive, because calling this function wouldn't > cause a stall. > > It wouldn't help with other functions that return values (eg. > getParameter), but this is the one that usually matters most. > > This is probably fairly complex to implement. It may require running the > underlying OpenGL/GLES context in its own thread (maybe they already do > this; it seems like a good idea on its own). I'm not sure about D3D. > > -- > Glenn Maynard > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gle...@ Wed May 2 18:09:06 2012 From: gle...@ (Glenn Maynard) Date: Wed, 2 May 2012 20:09:06 -0500 Subject: [Public WebGL] Asynchronous calls In-Reply-To: References: Message-ID: On Wed, May 2, 2012 at 7:35 PM, Gregg Tavares (?) wrote: > Compile errors are returned with gl.getShaderParameter(shader, > gl.COMPILE_STATUS), Link errors are returned with > gl.getProgramParameter(program, gl.LINK_STATUS) > > Neither of those are actually an error. > > Also, you can compile X shaders and link Y programs all without checking > the status of any of them. Which effectively means you can async compile > multiple things at once. If you wanted to check them you'd need to able to > check them individually, not with something like getLastError. Which one > would be get getting the error for? > This is the part I didn't go into, but each getErrorAsync call would return the error as of that point. So, function() { var shader1 = ctx.createShader(); ctx.shaderSource(shader1, source1); ctx.compileShader(shader1); ctx.getErrorAsync(func1); var shader2 = ctx.createShader(); ctx.shaderSource(shader2, source2); ctx.compileShader(shader2); ctx.getErrorAsync(func2); } func1 would always be called before func2; both func1 and func2 receive the value that getError would have returned if called at their respective places. This could be done for any function that returns a value, including getProgramParameter and getShaderParameter. (That does result in a larger surface area, but I suppose this isn't unusual for the platform; actually, most web APIs have both sync and async interfaces for *all* blocking functions.) AFAIK both of these suggestions will currently only help Chrome. AFAIK all > other browsers are calling GL on the same thread as JavaScript and I know > of no GL drivers that compile or link on another thread. I don't know what > Firefox or Safari's plans are with regards to WebGL but I suspect if this > truly is a priority issue it will either require major re-architecting so > that those browsers actually issue GL calls on other threads or it will > require an API like asyncCompileShader(s, callback) and asyncLinkProgram(p, > callback) that is easier to implement given their current implementations. > (I'm fine with features that only help a single browser to start; that just helps encourage other browsers to improve their implementations.) But, back to the original point. I'm still not sure this is a priority. 
> I didn't call it a high priority; that doesn't mean it's not worth thinking about. -- Glenn Maynard -------------- next part -------------- An HTML attachment was scrubbed... URL: From ash...@ Thu May 3 01:07:10 2012 From: ash...@ (Ashley Gullen) Date: Thu, 3 May 2012 09:07:10 +0100 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: References: <333299583.695193.1335973629092.JavaMail.root@mozilla.com> Message-ID: On 2 May 2012 21:47, Gregg Tavares (?) wrote: > > > On Wed, May 2, 2012 at 11:53 AM, Ashley Gullen wrote: > >> I think this is a great idea and I'm desperate for something like this. >> Our engine implements both a WebGL and Canvas 2D renderer, and currently >> the Canvas 2D renderer is never used in Chrome 18 due to Swiftshader. I am >> keen to fall back to Canvas 2D instead of using Swiftshader but there is no >> way to do that. > > > That's a little bit of an exaggeration. You can certainly choose Canvas 2D > at anytime. You run a small benchmark and switch. > We don't make any particular game, we just make an engine. Are you sure it's possible to make a benchmark script that is 100% accurate for all kinds of games with their varying performance profiles, and does not delay the start of the game by more than a second? How do you know if your benchmark is working properly? What if one renderer runs faster in some places and slower in others, and the other renderer runs the opposite (faster where the other was slow, slower where the other was faster)? Which renderer should be picked then? I'd rather just say: use the GPU. Ashley -------------- next part -------------- An HTML attachment was scrubbed... URL: From cvi...@ Thu May 3 03:49:48 2012 From: cvi...@ (Cedric Vivier) Date: Thu, 3 May 2012 18:49:48 +0800 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: References: <333299583.695193.1335973629092.JavaMail.root@mozilla.com> Message-ID: If the main use case is to allow apps that can be implemented with canvas2d to use that version when WebGL would otherwise run through software rendering (which would be typically slower then on most sensible configs, eg. because of fragment shaders processing), could we 'simply' add a WebGL context attribute such as : allowSoftwareRendering (default: true) If false, if the browser cannot create a fully hardware-accelerated context, the context creation fails (hence returns null). Such apps would be able to do : gl = canvas.getContext("experimental-webgl", {allowSoftwareRendering: false}); if (!gl) { // nevermind, we better use the 2d canvas then.. ctx = canvas.getContext("2d"); } I guess this would be a simple solution for the most common use cases, which are now a bit problematic indeed (unless doing benchmarks during startup phase of the app). This would not solve more complex scenarios (eg. VTF slow) but those would anyways require WebGL support (any VTF-using app probably cannot be easily implemented with canvas2d...), so benchmarking for this use case should be much less of a problem. Thoughts? On Thu, May 3, 2012 at 4:07 PM, Ashley Gullen wrote: > On 2 May 2012 21:47, Gregg Tavares (?) wrote: >> >> >> >> On Wed, May 2, 2012 at 11:53 AM, Ashley Gullen wrote: >>> >>> I think this is a great idea and I'm?desperate?for something like this. 
>>> ?Our engine implements both a WebGL and Canvas 2D renderer, and currently >>> the Canvas 2D renderer is never used in Chrome 18 due to Swiftshader. ?I am >>> keen to fall back to Canvas 2D instead of using Swiftshader but there is no >>> way to do that. >> >> >> That's a little bit of an exaggeration. You can certainly choose Canvas 2D >> at anytime. You run a small benchmark and switch. > > > We don't make any particular game, we just make an engine. ?Are you sure > it's possible to make a benchmark script that is 100% accurate for all kinds > of games with their varying performance profiles, and does not delay the > start of the game by more than a second??How do you know if your benchmark > is working properly? ?What if one renderer runs faster in some places and > slower in others, and the other renderer runs the opposite (faster where the > other was slow, slower where the other was faster)? ?Which renderer should > be picked then? ?I'd rather just say: use the GPU. > > Ashley ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From bja...@ Thu May 3 05:13:19 2012 From: bja...@ (Benoit Jacob) Date: Thu, 3 May 2012 05:13:19 -0700 (PDT) Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: Message-ID: <1066249153.45687.1336047199180.JavaMail.root@mozilla.com> ----- Original Message ----- > If the main use case is to allow apps that can be implemented with > canvas2d to use that version when WebGL would otherwise run through > software rendering (which would be typically slower then on most > sensible configs, eg. because of fragment shaders processing), could > we 'simply' add a WebGL context attribute such as : > > allowSoftwareRendering (default: true) There are 2 parts in your proposal here: 1) replace "slow" by "SoftwareRendering" 2) make it part of context creation flags instead of a new getter Regarding 1), I wanted to avoid mentioning "software rendering" in the spec because it's tricky to define: all software runs on hardware, so all is hardware-accelerated, after all. The current CPU/GPU split might not be there forever, so the concept of a "GPU" might not be perennial either. That's why I wanted to avoid entering into these details and just said "slow". Regarding 2), I was hesitating about that. I don't have a firm opinion either way. But there are going to be other flags, so one should think of an API to allow deciding whether to proceed with WebGL based on multiple factors. Such an API seems harder to design properly, so it seems simpler to add getDeviceAdvisories and let the application implement its own logic. > This would not solve more complex scenarios (eg. VTF slow) but those > would anyways require WebGL support (any VTF-using app probably > cannot > be easily implemented with canvas2d...), so benchmarking for this use > case should be much less of a problem. That's not true! Google MapsGL uses VTF. The key is that an application may want to do fancy things with WebGL while having a much simpler non-WebGL fallback. Cheers, Benoit > > Thoughts? > > > On Thu, May 3, 2012 at 4:07 PM, Ashley Gullen > wrote: > > On 2 May 2012 21:47, Gregg Tavares (?) 
wrote: > >> > >> > >> > >> On Wed, May 2, 2012 at 11:53 AM, Ashley Gullen > >> wrote: > >>> > >>> I think this is a great idea and I'm?desperate?for something like > >>> this. > >>> ?Our engine implements both a WebGL and Canvas 2D renderer, and > >>> ?currently > >>> the Canvas 2D renderer is never used in Chrome 18 due to > >>> Swiftshader. ?I am > >>> keen to fall back to Canvas 2D instead of using Swiftshader but > >>> there is no > >>> way to do that. > >> > >> > >> That's a little bit of an exaggeration. You can certainly choose > >> Canvas 2D > >> at anytime. You run a small benchmark and switch. > > > > > > We don't make any particular game, we just make an engine. ?Are you > > sure > > it's possible to make a benchmark script that is 100% accurate for > > all kinds > > of games with their varying performance profiles, and does not > > delay the > > start of the game by more than a second??How do you know if your > > benchmark > > is working properly? ?What if one renderer runs faster in some > > places and > > slower in others, and the other renderer runs the opposite (faster > > where the > > other was slow, slower where the other was faster)? ?Which renderer > > should > > be picked then? ?I'd rather just say: use the GPU. > > > > Ashley > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gma...@ Thu May 3 10:11:35 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo5YukKQ==?=) Date: Thu, 3 May 2012 10:11:35 -0700 Subject: [Public WebGL] Asynchronous calls In-Reply-To: References: Message-ID: On Wed, May 2, 2012 at 6:09 PM, Glenn Maynard wrote: > On Wed, May 2, 2012 at 7:35 PM, Gregg Tavares (?) wrote: > >> Compile errors are returned with gl.getShaderParameter(shader, >> gl.COMPILE_STATUS), Link errors are returned with >> gl.getProgramParameter(program, gl.LINK_STATUS) >> >> Neither of those are actually an error. >> >> Also, you can compile X shaders and link Y programs all without checking >> the status of any of them. Which effectively means you can async compile >> multiple things at once. If you wanted to check them you'd need to able to >> check them individually, not with something like getLastError. Which one >> would be get getting the error for? >> > > This is the part I didn't go into, but each getErrorAsync call would > return the error as of that point. So, > > function() { > var shader1 = ctx.createShader(); > ctx.shaderSource(shader1, source1); > ctx.compileShader(shader1); > ctx.getErrorAsync(func1); > > var shader2 = ctx.createShader(); > ctx.shaderSource(shader2, source2); > ctx.compileShader(shader2); > ctx.getErrorAsync(func2); > } > > I think you must have a different understanding of GL than I do. glCompileShader on every driver I know of is a synchronous call. So if you want this to be async following your model you need that function to be called on another thread from another context. The model above gives the browser no clue that it needs to be called from another thread in another context until it's too late. > func1 would always be called before func2; both func1 and func2 receive > the value that getError would have returned if called at their respective > places. > > This could be done for any function that returns a value, including > getProgramParameter and getShaderParameter. 
(That does result in a larger > surface area, but I suppose this isn't unusual for the platform; actually, > most web APIs have both sync and async interfaces for *all* blocking > functions.) > > AFAIK both of these suggestions will currently only help Chrome. AFAIK all >> other browsers are calling GL on the same thread as JavaScript and I know >> of no GL drivers that compile or link on another thread. I don't know what >> Firefox or Safari's plans are with regards to WebGL but I suspect if this >> truly is a priority issue it will either require major re-architecting so >> that those browsers actually issue GL calls on other threads or it will >> require an API like asyncCompileShader(s, callback) and asyncLinkProgram(p, >> callback) that is easier to implement given their current implementations. >> > > (I'm fine with features that only help a single browser to start; that > just helps encourage other browsers to improve their implementations.) > > But, back to the original point. I'm still not sure this is a priority. >> > > I didn't call it a high priority; that doesn't mean it's not worth > thinking about. > > -- > Glenn Maynard > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xch...@ Thu May 3 10:22:13 2012 From: xch...@ (Bishop Zareh) Date: Thu, 3 May 2012 12:22:13 -0500 Subject: [Public WebGL] Asynchronous calls In-Reply-To: References: Message-ID: Shameless plug here, but a flow control library like async or Frame.js would help sort out these callbacks. https://github.com/bishopZ/Frame.js As brilliant as async is, its often much more difficult to get JS to be synchronous when you need it to be. anyway, my .02, bz On May 3, 2012, at 12:11 PM, Gregg Tavares (?) wrote: > > > On Wed, May 2, 2012 at 6:09 PM, Glenn Maynard wrote: > On Wed, May 2, 2012 at 7:35 PM, Gregg Tavares (?) wrote: > Compile errors are returned with gl.getShaderParameter(shader, gl.COMPILE_STATUS), Link errors are returned with gl.getProgramParameter(program, gl.LINK_STATUS) > > Neither of those are actually an error. > > Also, you can compile X shaders and link Y programs all without checking the status of any of them. Which effectively means you can async compile multiple things at once. If you wanted to check them you'd need to able to check them individually, not with something like getLastError. Which one would be get getting the error for? > > This is the part I didn't go into, but each getErrorAsync call would return the error as of that point. So, > > function() { > var shader1 = ctx.createShader(); > ctx.shaderSource(shader1, source1); > ctx.compileShader(shader1); > ctx.getErrorAsync(func1); > > var shader2 = ctx.createShader(); > ctx.shaderSource(shader2, source2); > ctx.compileShader(shader2); > ctx.getErrorAsync(func2); > } > > > I think you must have a different understanding of GL than I do. > > glCompileShader on every driver I know of is a synchronous call. > > So if you want this to be async following your model you need that function to be called on another thread from another context. The model above gives the browser no clue that it needs to be called from another thread in another context until it's too late. > > func1 would always be called before func2; both func1 and func2 receive the value that getError would have returned if called at their respective places. > > This could be done for any function that returns a value, including getProgramParameter and getShaderParameter. 
(That does result in a larger surface area, but I suppose this isn't unusual for the platform; actually, most web APIs have both sync and async interfaces for *all* blocking functions.) > > AFAIK both of these suggestions will currently only help Chrome. AFAIK all other browsers are calling GL on the same thread as JavaScript and I know of no GL drivers that compile or link on another thread. I don't know what Firefox or Safari's plans are with regards to WebGL but I suspect if this truly is a priority issue it will either require major re-architecting so that those browsers actually issue GL calls on other threads or it will require an API like asyncCompileShader(s, callback) and asyncLinkProgram(p, callback) that is easier to implement given their current implementations. > > (I'm fine with features that only help a single browser to start; that just helps encourage other browsers to improve their implementations.) > > But, back to the original point. I'm still not sure this is a priority. > > I didn't call it a high priority; that doesn't mean it's not worth thinking about. > > -- > Glenn Maynard > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Thu May 3 10:49:42 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo5YukKQ==?=) Date: Thu, 3 May 2012 10:49:42 -0700 Subject: [Public WebGL] Asynchronous calls In-Reply-To: References: Message-ID: one more thought Adding the ability to use WebGL from WebWorkers would arguably remove the need for many async calls because you can call the blocking calls from a worker and asynchronously notify the main page's JS the work is done. I'm not saying that negates the need for some async APIs. But, I am saying it would be best to first get WebGL available in WebWorkers which would provide a generic solution to all these sync/async issues. Then later, decide which ones should have a specific API -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Thu May 3 12:00:15 2012 From: kbr...@ (Kenneth Russell) Date: Thu, 3 May 2012 12:00:15 -0700 Subject: [Public WebGL] questions about vertexAttribPointer / getVertexAttrib offsets In-Reply-To: <4F9B647E.9090705@mit.edu> References: <903881096.6230551.1335212614285.JavaMail.root@zmmbox3.mail.corp.phx1.mozilla.com> <4F95FE1A.2050203@mit.edu> <4F9B376B.7010702@mit.edu> <4F9B3F75.10309@mit.edu> <4F9B647E.9090705@mit.edu> Message-ID: On Fri, Apr 27, 2012 at 8:31 PM, Boris Zbarsky wrote: > On 4/27/12 9:40 PM, Kenneth Russell wrote: >> >> I am not completely sure that WebKit can even handle passing "long >> long" values from JavaScript to C++. It will almost certainly not have >> proper Web IDL behavior, where TypeError is thrown for out-of-range >> values. > > > That last only happens for arguments with the [EnforceRange] annotation, > which is not an issue for the WebGL spec, obviously. ?;) I see. After discussion with the WebGL working group, the fixes for the following tests: conformance/more/functions/vertexAttribPointerBadArgs.html conformance/buffers/index-validation.html have been backported to the 1.0.1 conformance suite. WebKit bug https://bugs.webkit.org/show_bug.cgi?id=85528 has been filed about the latter failure. 
-Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gle...@ Thu May 3 20:33:44 2012 From: gle...@ (Glenn Maynard) Date: Thu, 3 May 2012 22:33:44 -0500 Subject: [Public WebGL] Asynchronous calls In-Reply-To: References: Message-ID: On Thu, May 3, 2012 at 12:11 PM, Gregg Tavares (?) wrote: > So if you want this to be async following your model you need that > function to be called on another thread from another context. The model > above gives the browser no clue that it needs to be called from another > thread in another context until it's too late. > Like I said at the start, > This is probably fairly complex to implement. It may require running the underlying OpenGL/GLES context in its own thread (maybe they already do this; it seems like a good idea on its own). I'm not sure about D3D. This would still serialize compilations; D3D-based implementations might not have that problem. On Thu, May 3, 2012 at 12:49 PM, Gregg Tavares (?) wrote: > Adding the ability to use WebGL from WebWorkers would arguably remove the > need for many async calls because you can call the blocking calls from a > worker and asynchronously notify the main page's JS the work is done. > > I'm not saying that negates the need for some async APIs. But, I am saying > it would be best to first get WebGL available in WebWorkers which would > provide a generic solution to all these sync/async issues. Then later, > decide which ones should have a specific API > I agree that WebGL in workers is important, of course. It would only actually help here if it's possible to render to a visible canvas, though; with the model of most Worker APIs, you'd only be able to render to an offscreen canvas. Maybe disabling readback functions (toBlob and toDataURL) on HTMLCanvasElements being rendered in a different thread would be enough to fix the main problem (exposing asynchronous behavior to scripts). I think everyone wants WebGL in workers, but nobody's really sure how to proceed, since it's hard to get HTMLImageElement in workers... -- Glenn Maynard -------------- next part -------------- An HTML attachment was scrubbed... URL: From bzb...@ Thu May 3 20:53:56 2012 From: bzb...@ (Boris Zbarsky) Date: Thu, 03 May 2012 23:53:56 -0400 Subject: [Public WebGL] Asynchronous calls In-Reply-To: References: Message-ID: <4FA352D4.6000109@mit.edu> On 5/3/12 11:33 PM, Glenn Maynard wrote: > I think everyone wants WebGL in workers, but nobody's really sure how to > proceed, since it's hard to get HTMLImageElement in workers... We don't really need HTMLImageElement in workers. The only places where WebGL takes an HTMLImageElement one can pass the corresponding ImageData. We could add some sort of convenience function for getting said ImageData out of an HTMLImageElement, if desired, to avoid having to roundtrip through a 2d canvas context. Alternately, in the interests of efficiency, we could make it possible to pass an HTMLImageElement to a worker and get an opaque thing that represents its data (actual data plus format information) to avoid forcing conversions to RGBA. 
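(Illustrative sketch only, to make the shape of this proposal concrete; none of it works today. It assumes, hypothetically, that posting an HTMLImageElement hands the worker an opaque, already-decoded picture object, that the worker owns a WebGL context at all, and that texImage2D would accept such an object.)

// main page
var worker = new Worker("render-worker.js");
var img = new Image();
img.onload = function() {
  // assumption: structured clone of the element yields the opaque picture on the worker side
  worker.postMessage({ kind: "texture", image: img });
};
img.src = "textures/bricks.png";

// render-worker.js (gl is a worker-side WebGL context, itself part of what is being proposed)
onmessage = function(e) {
  if (e.data.kind === "texture") {
    var picture = e.data.image; // opaque pixel data plus format information, per the proposal
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, picture); // assumed overload
  }
};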
-Boris ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gma...@ Thu May 3 20:59:45 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo5YukKQ==?=) Date: Thu, 3 May 2012 20:59:45 -0700 Subject: [Public WebGL] WebGL in Workers Message-ID: On Thu, May 3, 2012 at 8:53 PM, Boris Zbarsky wrote: > On 5/3/12 11:33 PM, Glenn Maynard wrote: > >> I think everyone wants WebGL in workers, but nobody's really sure how to >> proceed, since it's hard to get HTMLImageElement in workers... >> > > We don't really need HTMLImageElement in workers. The only places where > WebGL takes an HTMLImageElement one can pass the corresponding ImageData. > We could add some sort of convenience function for getting said ImageData > out of an HTMLImageElement, if desired, to avoid having to roundtrip > through a 2d canvas context. > > Alternately, in the interests of efficiency, we could make it possible to > pass an HTMLImageElement to a worker and get an opaque thing that > represents its data (actual data plus format information) to avoid forcing > conversions to RGBA. Another alternative, Add "Picture" which would be Image minus the HTMLElement parts You could think of as class Picture { }; // HTMLImageElement uses a Picture class HTMLImageElement : public HTMLElement { private: Picture* picture; } Apparently there are issues using any kind of DOM element in a Worker but separating the data part of Image from Image would let you use the data only part in a Worker. > > > -Boris > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gle...@ Thu May 3 21:29:50 2012 From: gle...@ (Glenn Maynard) Date: Thu, 3 May 2012 23:29:50 -0500 Subject: [Public WebGL] WebGL in Workers In-Reply-To: References: Message-ID: On Thu, May 3, 2012 at 8:53 PM, Boris Zbarsky wrote: > We don't really need HTMLImageElement in workers. The only places where >> WebGL takes an HTMLImageElement one can pass the corresponding ImageData. >> We could add some sort of convenience function for getting said ImageData >> out of an HTMLImageElement, if desired, to avoid having to roundtrip >> through a 2d canvas context. >> >> Alternately, in the interests of efficiency, we could make it possible to >> pass an HTMLImageElement to a worker and get an opaque thing that >> represents its data (actual data plus format information) to avoid forcing >> conversions to RGBA. > > I think the latter is much better. In particular, browsers tend to put off decompressing the image data until you actually use it, and once it happens, it tends to happen synchronously (since a drawImage+getImageData is inherently synchronous). PNGs are pretty expensive to decompress, so this can lead to significant hitching in the UI thread. The latter approach can avoid that problem. (It might even allow decompressing the image in its own thread as part of creating the texture, which could make loading lots of big textures from PNGs faster.) Also, while this isn't strictly needed, it would be a big plus if WebGL in workers can load its own data, instead of having to load the images in the main thread and then hand them off. Having to ask the main thread to do that for you is pretty cumbersome. 
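To illustrate the point about workers loading their own data: for raw or pre-compressed texture bytes this already works with XHR inside a worker, and only the worker-side WebGL context (gl) is hypothetical here; the URL and dimensions are made up for the sketch.

  // Inside a worker: fetch texture bytes directly, with no main-thread involvement.
  // gl: hypothetical worker-side WebGL context, the subject of this thread.
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "textures/terrain.rgba", true);
  xhr.responseType = "arraybuffer";
  xhr.onload = function () {
    var bytes = new Uint8Array(xhr.response);
    // Raw RGBA uploads directly; a DXT payload would instead go through
    // compressedTexImage2D once a compressed-texture extension is enabled.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1024, 1024, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, bytes);
  };
  xhr.send();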
Maybe it could be done as an XHR mode, which--in terms of the above approach--would return the opaque data that you would have received if you had loaded the HTMLImageElement and pulled it from there. -- Glenn Maynard -------------- next part -------------- An HTML attachment was scrubbed... URL: From bzb...@ Thu May 3 21:30:54 2012 From: bzb...@ (Boris Zbarsky) Date: Fri, 04 May 2012 00:30:54 -0400 Subject: [Public WebGL] WebGL in Workers In-Reply-To: References: Message-ID: <4FA35B7E.1050302@mit.edu> On 5/3/12 11:59 PM, Gregg Tavares (?) wrote: > On Thu, May 3, 2012 at 8:53 PM, Boris Zbarsky > wrote: > Alternately, in the interests of efficiency, we could make it > possible to pass an HTMLImageElement to a worker and get an opaque > thing that represents its data (actual data plus format information) > to avoid forcing conversions to RGBA. > > Another alternative, Add "Picture" which would be Image minus the > HTMLElement parts That's basically my "Alternately" proposal quoted above, yes. I just realized, rereading it, that it was unclear. The idea was that you'd pass the HTMLImageElement to postMessage on the web page side and the Picture, as you call it, would hang off the message event on the worker side. -Boris ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Fri May 4 17:18:21 2012 From: kbr...@ (Kenneth Russell) Date: Fri, 4 May 2012 17:18:21 -0700 Subject: [Public WebGL] WebGL in Workers In-Reply-To: <4FA35B7E.1050302@mit.edu> References: <4FA35B7E.1050302@mit.edu> Message-ID: On Thu, May 3, 2012 at 9:30 PM, Boris Zbarsky wrote: > > On 5/3/12 11:59 PM, Gregg Tavares (?) wrote: >> >> On Thu, May 3, 2012 at 8:53 PM, Boris Zbarsky > > wrote: >> ? ?Alternately, in the interests of efficiency, we could make it >> ? ?possible to pass an HTMLImageElement to a worker and get an opaque >> ? ?thing that represents its data (actual data plus format information) >> ? ?to avoid forcing conversions to RGBA. >> >> Another alternative, Add "Picture" which would be Image minus the >> HTMLElement parts > > > That's basically my "Alternately" proposal quoted above, yes. ?I just > realized, rereading it, that it was unclear. ?The idea was that you'd pass > the HTMLImageElement to postMessage on the web page side and the Picture, as > you call it, would hang off the message event on the worker side. Rather than having special-purpose handling for HTMLImageElement in postMessage, wouldn't it make sense to make Gregg's refactoring of the "guts" into Picture explicit in the API? Then Picture could be made Transferable, and semantics of "giving" it to a Web Worker would be easy to define. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gle...@ Fri May 4 17:25:00 2012 From: gle...@ (Glenn Maynard) Date: Fri, 4 May 2012 19:25:00 -0500 Subject: [Public WebGL] WebGL in Workers In-Reply-To: <4FA35B7E.1050302@mit.edu> References: <4FA35B7E.1050302@mit.edu> Message-ID: On Thu, May 3, 2012 at 11:30 PM, Boris Zbarsky wrote: > That's basically my "Alternately" proposal quoted above, yes. 
I just > realized, rereading it, that it was unclear. The idea was that you'd pass > the HTMLImageElement to postMessage on the web page side and the Picture, > as you call it, would hang off the message event on the worker side. > Having a separate method (with a new interface on both sides) is more consistent with structured clone, rather than having an object clone to a different object on the other side. It would be something new--and bad, in my opinion--for cloning an object to result in a completely different object. -- Glenn Maynard -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Fri May 4 17:42:10 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo5YukKQ==?=) Date: Fri, 4 May 2012 17:42:10 -0700 Subject: [Public WebGL] WebGL in Workers In-Reply-To: References: <4FA35B7E.1050302@mit.edu> Message-ID: On Fri, May 4, 2012 at 5:25 PM, Glenn Maynard wrote: > On Thu, May 3, 2012 at 11:30 PM, Boris Zbarsky wrote: > >> That's basically my "Alternately" proposal quoted above, yes. I just >> realized, rereading it, that it was unclear. The idea was that you'd pass >> the HTMLImageElement to postMessage on the web page side and the Picture, >> as you call it, would hang off the message event on the worker side. >> > > Having a separate method (with a new interface on both sides) is more > consistent with structured clone, rather than having an object clone to a > different object on the other side. It would be something new--and bad, in > my opinion--for cloning an object to result in a completely different > object. > My goal with "Picture" is not to make it transferable (It could be or not, I don't really have an opinion). It's to be able to instantiate them inside a worker and use them with texImage2D > > -- > Glenn Maynard > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From toj...@ Sat May 5 00:05:23 2012 From: toj...@ (Brandon Jones) Date: Sat, 5 May 2012 00:05:23 -0700 Subject: [Public WebGL] WebGL in Workers In-Reply-To: References: <4FA35B7E.1050302@mit.edu> Message-ID: On Fri, May 4, 2012 at 5:42 PM, Gregg Tavares (?) wrote: > My goal with "Picture" is not to make it transferable (It could be or not, > I don't really have an opinion). It's to be able to instantiate them inside > a worker and use them with texImage2D > > I'll echo that sentiment. Given the choice between having transferable images that must be created in the main thread or a more limited image object that can be created in a worker I would absolutely take the image in a worker. And like Gregg said, making it transferable is a non-issue. I'd pretty much always be shoving it into a texture and transferring that instead. Even without any image support in a worker, though, WebGL access in a worker would still be useful. I'd love to unpack and upload DXT textures off of the main thread. --Brandon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ash...@ Sun May 6 03:19:56 2012 From: ash...@ (Ashley Gullen) Date: Sun, 6 May 2012 11:19:56 +0100 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: <1066249153.45687.1336047199180.JavaMail.root@mozilla.com> References: <1066249153.45687.1336047199180.JavaMail.root@mozilla.com> Message-ID: AFAIK, SwiftShader is only used as a fallback is the user's driver is blacklisted. So how about a context creation flag {"nofallback": true}? 
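As a sketch of how that might look from the application side (the flag name and the failure behaviour are part of the proposal, not anything implemented, and the renderer functions are hypothetical application code):

  // "nofallback" is the proposed, not implemented, context creation flag.
  var gl = canvas.getContext("webgl", { nofallback: true }) ||
           canvas.getContext("experimental-webgl", { nofallback: true });
  if (gl) {
    startWebGLRenderer(gl);                          // hypothetical app function
  } else {
    // Primary driver blacklisted and no fallback allowed: use the 2D renderer.
    startCanvas2DRenderer(canvas.getContext("2d"));  // hypothetical app function
  }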
This would indicate not to use any fallback WebGL implementation that might be used if the primary one is blacklisted. The intent is for SwiftShader, but it avoids mentioning software rendering. Ashley On 3 May 2012 13:13, Benoit Jacob wrote: > > > ----- Original Message ----- > > If the main use case is to allow apps that can be implemented with > > canvas2d to use that version when WebGL would otherwise run through > > software rendering (which would be typically slower then on most > > sensible configs, eg. because of fragment shaders processing), could > > we 'simply' add a WebGL context attribute such as : > > > > allowSoftwareRendering (default: true) > > There are 2 parts in your proposal here: > 1) replace "slow" by "SoftwareRendering" > 2) make it part of context creation flags instead of a new getter > > Regarding 1), I wanted to avoid mentioning "software rendering" in the > spec because it's tricky to define: all software runs on hardware, so all > is hardware-accelerated, after all. The current CPU/GPU split might not be > there forever, so the concept of a "GPU" might not be perennial either. > That's why I wanted to avoid entering into these details and just said > "slow". > > Regarding 2), I was hesitating about that. I don't have a firm opinion > either way. But there are going to be other flags, so one should think of > an API to allow deciding whether to proceed with WebGL based on multiple > factors. Such an API seems harder to design properly, so it seems simpler > to add getDeviceAdvisories and let the application implement its own logic. > > > > This would not solve more complex scenarios (eg. VTF slow) but those > > would anyways require WebGL support (any VTF-using app probably > > cannot > > be easily implemented with canvas2d...), so benchmarking for this use > > case should be much less of a problem. > > That's not true! Google MapsGL uses VTF. > > The key is that an application may want to do fancy things with WebGL > while having a much simpler non-WebGL fallback. > > Cheers, > Benoit > > > > Thoughts? > > > > > > On Thu, May 3, 2012 at 4:07 PM, Ashley Gullen > > wrote: > > > On 2 May 2012 21:47, Gregg Tavares (?) wrote: > > >> > > >> > > >> > > >> On Wed, May 2, 2012 at 11:53 AM, Ashley Gullen > > >> wrote: > > >>> > > >>> I think this is a great idea and I'm desperate for something like > > >>> this. > > >>> Our engine implements both a WebGL and Canvas 2D renderer, and > > >>> currently > > >>> the Canvas 2D renderer is never used in Chrome 18 due to > > >>> Swiftshader. I am > > >>> keen to fall back to Canvas 2D instead of using Swiftshader but > > >>> there is no > > >>> way to do that. > > >> > > >> > > >> That's a little bit of an exaggeration. You can certainly choose > > >> Canvas 2D > > >> at anytime. You run a small benchmark and switch. > > > > > > > > > We don't make any particular game, we just make an engine. Are you > > > sure > > > it's possible to make a benchmark script that is 100% accurate for > > > all kinds > > > of games with their varying performance profiles, and does not > > > delay the > > > start of the game by more than a second? How do you know if your > > > benchmark > > > is working properly? What if one renderer runs faster in some > > > places and > > > slower in others, and the other renderer runs the opposite (faster > > > where the > > > other was slow, slower where the other was faster)? Which renderer > > > should > > > be picked then? I'd rather just say: use the GPU. 
> > > > > > Ashley > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Sun May 6 12:18:34 2012 From: bja...@ (Benoit Jacob) Date: Sun, 6 May 2012 12:18:34 -0700 (PDT) Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: Message-ID: <2133174574.2183963.1336331914157.JavaMail.root@mozilla.com> "nofallback" is IMHO both too specific (it should matter by itself whether it's a fallback) and not specific enough (why is a fallback bad?). Let's summarize the main open questions so far: A) what's the right API to expose this? - option 1) getDeviceAdvisories - option 2) new context creation flags that can cause creation to fail if a condition is not met e.g. "allowSoftwareRendering" - none of the above? My main reason for prefering option 1) getDeviceAdvisories is that I prefer to keep separate things that can be separate. I prefer to have a separate pure data getter, getDeviceAdvisories, then let the application do its own logic using that data, then let it create a WebGL context if it wants to. Option 2) tangles these two separate things. Concrete example: if an application will want to create a WebGL context if either of two conditions are met, with Option 2) will require doing two separate getContext calls. Blacklists will have to be processed twice, etc. B) What's the right "slow/software/fallback" concept to expose as an advisory / context creation requirement? - option 1) "slow" / "allowSlow" - option 2) "softwareRenderer" / "allowSoftwareRenderer" - option 3) "fallback" / "allowFallback" ? Cheers, Benoit ----- Original Message ----- > AFAIK, SwiftShader is only used as a fallback is the user's driver is > blacklisted. So how about a context creation flag {"nofallback": > true}? This would indicate not to use any fallback WebGL > implementation that might be used if the primary one is blacklisted. > The intent is for SwiftShader, but it avoids mentioning software > rendering. > Ashley > On 3 May 2012 13:13, Benoit Jacob < bjacob...@ > wrote: > > ----- Original Message ----- > > > > If the main use case is to allow apps that can be implemented > > > with > > > > canvas2d to use that version when WebGL would otherwise run > > > through > > > > software rendering (which would be typically slower then on most > > > > sensible configs, eg. because of fragment shaders processing), > > > could > > > > we 'simply' add a WebGL context attribute such as : > > > > > > > > allowSoftwareRendering (default: true) > > > There are 2 parts in your proposal here: > > > 1) replace "slow" by "SoftwareRendering" > > > 2) make it part of context creation flags instead of a new getter > > > Regarding 1), I wanted to avoid mentioning "software rendering" in > > the spec because it's tricky to define: all software runs on > > hardware, so all is hardware-accelerated, after all. The current > > CPU/GPU split might not be there forever, so the concept of a "GPU" > > might not be perennial either. That's why I wanted to avoid > > entering > > into these details and just said "slow". > > > Regarding 2), I was hesitating about that. I don't have a firm > > opinion either way. But there are going to be other flags, so one > > should think of an API to allow deciding whether to proceed with > > WebGL based on multiple factors. Such an API seems harder to design > > properly, so it seems simpler to add getDeviceAdvisories and let > > the > > application implement its own logic. 
> > > > This would not solve more complex scenarios (eg. VTF slow) but > > > those > > > > would anyways require WebGL support (any VTF-using app probably > > > > cannot > > > > be easily implemented with canvas2d...), so benchmarking for this > > > use > > > > case should be much less of a problem. > > > That's not true! Google MapsGL uses VTF. > > > The key is that an application may want to do fancy things with > > WebGL > > while having a much simpler non-WebGL fallback. > > > Cheers, > > > Benoit > > > > > > > > Thoughts? > > > > > > > > > > > > On Thu, May 3, 2012 at 4:07 PM, Ashley Gullen < ashley...@ > > > > > > > > wrote: > > > > > On 2 May 2012 21:47, Gregg Tavares (?) < gman...@ > > > > > wrote: > > > > >> > > > > >> > > > > >> > > > > >> On Wed, May 2, 2012 at 11:53 AM, Ashley Gullen < > > > >> ashley...@ > > > > > >> wrote: > > > > >>> > > > > >>> I think this is a great idea and I'm desperate for something > > > >>> like > > > > >>> this. > > > > >>> Our engine implements both a WebGL and Canvas 2D renderer, > > > >>> and > > > > >>> currently > > > > >>> the Canvas 2D renderer is never used in Chrome 18 due to > > > > >>> Swiftshader. I am > > > > >>> keen to fall back to Canvas 2D instead of using Swiftshader > > > >>> but > > > > >>> there is no > > > > >>> way to do that. > > > > >> > > > > >> > > > > >> That's a little bit of an exaggeration. You can certainly > > > >> choose > > > > >> Canvas 2D > > > > >> at anytime. You run a small benchmark and switch. > > > > > > > > > > > > > > > We don't make any particular game, we just make an engine. Are > > > > you > > > > > sure > > > > > it's possible to make a benchmark script that is 100% accurate > > > > for > > > > > all kinds > > > > > of games with their varying performance profiles, and does not > > > > > delay the > > > > > start of the game by more than a second? How do you know if > > > > your > > > > > benchmark > > > > > is working properly? What if one renderer runs faster in some > > > > > places and > > > > > slower in others, and the other renderer runs the opposite > > > > (faster > > > > > where the > > > > > other was slow, slower where the other was faster)? Which > > > > renderer > > > > > should > > > > > be picked then? I'd rather just say: use the GPU. > > > > > > > > > > Ashley > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Tue May 8 07:05:51 2012 From: bja...@ (Benoit Jacob) Date: Tue, 8 May 2012 07:05:51 -0700 (PDT) Subject: [Public WebGL] webgl texture format conversions: doable in 17k of executable code (x86-64) ! In-Reply-To: <2062929252.3633086.1336485716302.JavaMail.root@mozilla.com> Message-ID: <756322726.3637398.1336485951321.JavaMail.root@mozilla.com> Hi, This email is 50% bragging rights and 50% sharing hopefully useful info with other browser vendors. You know that a WebGL implementation has to handle a large number of cases of texture format conversions: there are many source formats, many destination formats, and the possibility of premultiplication or unpremultiplication. Here is a changeset (currently only on mozilla-inbound) that brings this code size down to 17k on x86-64 (as measured with nm -S on linux). Our previous version, which was already quite careful, was 44k. This is using the 'fully templatized' approach i.e. a separate conversion loop is compiled for each case, traversing the bitmaps only once, with all the texel conversion functions inlined into it. 
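For readers wondering where all those cases come from on the API side: each combination of source object, requested format/type and UNPACK_* flag selects a different conversion path. A few standard WebGL 1.0 calls as a sketch; the image and imageData variables are assumed to exist, and the FLOAT upload assumes OES_texture_float has been enabled.

  // image, imageData: assumed pre-existing sources.
  gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
  gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, true);

  // HTMLImageElement source, premultiplied, stored as RGBA8:
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);

  // Same source, repacked into RGB565:
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, gl.RGB, gl.UNSIGNED_SHORT_5_6_5, image);

  // ImageData source uploaded as a float texture: needs a type conversion
  // (8-bit channels to 32-bit floats) on top of any format conversion.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.FLOAT, imageData);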
Changeset: https://hg.mozilla.org/integration/mozilla-inbound/rev/9f87dbd4d39c The key was to avoid compiling paths that are never called, the code for this is there: https://hg.mozilla.org/integration/mozilla-inbound/rev/9f87dbd4d39c#l4.82 Cheers, Benoit ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From toj...@ Tue May 8 08:54:25 2012 From: toj...@ (Brandon Jones) Date: Tue, 8 May 2012 08:54:25 -0700 Subject: [Public WebGL] webgl texture format conversions: doable in 17k of executable code (x86-64) ! In-Reply-To: <756322726.3637398.1336485951321.JavaMail.root@mozilla.com> References: <2062929252.3633086.1336485716302.JavaMail.root@mozilla.com> <756322726.3637398.1336485951321.JavaMail.root@mozilla.com> Message-ID: Nice work! Firefox was already quite speedy when it came to texture uploads, it would be great if this improved that. Even if it's break-even, however, the code simplification is very nice. --Brandon On Tue, May 8, 2012 at 7:05 AM, Benoit Jacob wrote: > > Hi, > > This email is 50% bragging rights and 50% sharing hopefully useful info > with other browser vendors. > > You know that a WebGL implementation has to handle a large number of cases > of texture format conversions: there are many source formats, many > destination formats, and the possibility of premultiplication or > unpremultiplication. > > Here is a changeset (currently only on mozilla-inbound) that brings this > code size down to 17k on x86-64 (as measured with nm -S on linux). Our > previous version, which was already quite careful, was 44k. This is using > the 'fully templatized' approach i.e. a separate conversion loop is > compiled for each case, traversing the bitmaps only once, with all the > texel conversion functions inlined into it. > > Changeset: > https://hg.mozilla.org/integration/mozilla-inbound/rev/9f87dbd4d39c > > The key was to avoid compiling paths that are never called, the code for > this is there: > https://hg.mozilla.org/integration/mozilla-inbound/rev/9f87dbd4d39c#l4.82 > > Cheers, > Benoit > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Tue May 8 10:13:02 2012 From: bja...@ (Benoit Jacob) Date: Tue, 8 May 2012 10:13:02 -0700 (PDT) Subject: [Public WebGL] webgl texture format conversions: doable in 17k of executable code (x86-64) ! In-Reply-To: Message-ID: <688079182.3910994.1336497182901.JavaMail.root@mozilla.com> It is a bit faster now in certain cases, as it's smarter than before at avoiding doing useless work. It also fixes a bug allowing Firefox to run the WebGL perf tests without errors, http://hg.mozilla.org/users/bjacob_mozilla.com/webgl-perf-tests/raw-file/tip/webgl-performance-tests.html Benoit ----- Original Message ----- > Nice work! Firefox was already quite speedy when it came to texture > uploads, it would be great if this improved that. Even if it's > break-even, however, the code simplification is very nice. 
> --Brandon > On Tue, May 8, 2012 at 7:05 AM, Benoit Jacob < bjacob...@ > > wrote: > > Hi, > > > This email is 50% bragging rights and 50% sharing hopefully useful > > info with other browser vendors. > > > You know that a WebGL implementation has to handle a large number > > of > > cases of texture format conversions: there are many source formats, > > many destination formats, and the possibility of premultiplication > > or unpremultiplication. > > > Here is a changeset (currently only on mozilla-inbound) that brings > > this code size down to 17k on x86-64 (as measured with nm -S on > > linux). Our previous version, which was already quite careful, was > > 44k. This is using the 'fully templatized' approach i.e. a separate > > conversion loop is compiled for each case, traversing the bitmaps > > only once, with all the texel conversion functions inlined into it. > > > Changeset: > > > https://hg.mozilla.org/integration/mozilla-inbound/rev/9f87dbd4d39c > > > The key was to avoid compiling paths that are never called, the > > code > > for this is there: > > > https://hg.mozilla.org/integration/mozilla-inbound/rev/9f87dbd4d39c#l4.82 > > > Cheers, > > > Benoit > > > ----------------------------------------------------------- > > > You are currently subscribed to public_webgl...@ . > > > To unsubscribe, send an email to majordomo...@ with > > > the following command in the body of your email: > > > unsubscribe public_webgl > > > ----------------------------------------------------------- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Tue May 8 11:24:42 2012 From: kbr...@ (Kenneth Russell) Date: Tue, 8 May 2012 11:24:42 -0700 Subject: [Public WebGL] webgl texture format conversions: doable in 17k of executable code (x86-64) ! In-Reply-To: <756322726.3637398.1336485951321.JavaMail.root@mozilla.com> References: <2062929252.3633086.1336485716302.JavaMail.root@mozilla.com> <756322726.3637398.1336485951321.JavaMail.root@mozilla.com> Message-ID: Nice work! Thanks for contributing this to the community. I'm looking at integrating your new code back into WebKit now. -Ken On Tue, May 8, 2012 at 7:05 AM, Benoit Jacob wrote: > > Hi, > > This email is 50% bragging rights and 50% sharing hopefully useful info with other browser vendors. > > You know that a WebGL implementation has to handle a large number of cases of texture format conversions: there are many source formats, many destination formats, and the possibility of premultiplication or unpremultiplication. > > Here is a changeset (currently only on mozilla-inbound) that brings this code size down to 17k on x86-64 (as measured with nm -S on linux). Our previous version, which was already quite careful, was 44k. This is using the 'fully templatized' approach i.e. a separate conversion loop is compiled for each case, traversing the bitmaps only once, with all the texel conversion functions inlined into it. 
> > Changeset: > https://hg.mozilla.org/integration/mozilla-inbound/rev/9f87dbd4d39c > > The key was to avoid compiling paths that are never called, the code for this is there: > https://hg.mozilla.org/integration/mozilla-inbound/rev/9f87dbd4d39c#l4.82 > > Cheers, > Benoit > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From bja...@ Tue May 8 11:35:37 2012 From: bja...@ (Benoit Jacob) Date: Tue, 8 May 2012 11:35:37 -0700 (PDT) Subject: [Public WebGL] webgl texture format conversions: doable in 17k of executable code (x86-64) ! In-Reply-To: Message-ID: <1466935548.3940857.1336502137928.JavaMail.root@mozilla.com> Yup, let me know if you have questions. It's all in these two files: http://hg.mozilla.org/integration/mozilla-inbound/file/9f87dbd4d39c/content/canvas/src/WebGLTexelConversions.h http://hg.mozilla.org/integration/mozilla-inbound/file/9f87dbd4d39c/content/canvas/src/WebGLTexelConversions.cpp Except for the WebGLTexelFormat enum which is declared there: http://hg.mozilla.org/integration/mozilla-inbound/file/9f87dbd4d39c/content/canvas/src/WebGLContext.h#l141 In WebGLTexelConversions.h I had to depart significantly from the pack/unpack routines we were sharing with WebKit: they got templatized, so it was no longer possible to share long blocks of code without modification. I would encourage you to switch to templatized pack/unpack routines to, which has no drawback AFAICS and allows us to keep sharing this code. This can be done regardless of whether you switch to fully templatized conversion loops or not. Also note that below the pack/unpack routines in this file, there are new convertType routines. They are needed e.g. when converting from RGBA8 to RGBA32F, e.g. when creating a float texture from a ImageData object. I found about this when debugging Firefox's failure to run my WebGL performance tests. Cheers, Benoit ----- Original Message ----- > Nice work! Thanks for contributing this to the community. I'm looking > at integrating your new code back into WebKit now. > > -Ken > > > On Tue, May 8, 2012 at 7:05 AM, Benoit Jacob > wrote: > > > > Hi, > > > > This email is 50% bragging rights and 50% sharing hopefully useful > > info with other browser vendors. > > > > You know that a WebGL implementation has to handle a large number > > of cases of texture format conversions: there are many source > > formats, many destination formats, and the possibility of > > premultiplication or unpremultiplication. > > > > Here is a changeset (currently only on mozilla-inbound) that brings > > this code size down to 17k on x86-64 (as measured with nm -S on > > linux). Our previous version, which was already quite careful, was > > 44k. This is using the 'fully templatized' approach i.e. a > > separate conversion loop is compiled for each case, traversing the > > bitmaps only once, with all the texel conversion functions inlined > > into it. 
> > > > Changeset: > > https://hg.mozilla.org/integration/mozilla-inbound/rev/9f87dbd4d39c > > > > The key was to avoid compiling paths that are never called, the > > code for this is there: > > https://hg.mozilla.org/integration/mozilla-inbound/rev/9f87dbd4d39c#l4.82 > > > > Cheers, > > Benoit > > > > ----------------------------------------------------------- > > You are currently subscribed to public_webgl...@ > > To unsubscribe, send an email to majordomo...@ with > > the following command in the body of your email: > > unsubscribe public_webgl > > ----------------------------------------------------------- > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From bja...@ Tue May 8 16:32:36 2012 From: bja...@ (Benoit Jacob) Date: Tue, 8 May 2012 16:32:36 -0700 (PDT) Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: <2133174574.2183963.1336331914157.JavaMail.root@mozilla.com> Message-ID: <1113186757.4111205.1336519956891.JavaMail.root@mozilla.com> ----- Original Message ----- > "nofallback" is IMHO both too specific (it should matter by itself > whether it's a fallback) and not specific enough (why is a fallback > bad?). > Let's summarize the main open questions so far: > A) what's the right API to expose this? > - option 1) getDeviceAdvisories > - option 2) new context creation flags that can cause creation to > fail if a condition is not met e.g. "allowSoftwareRendering" > - none of the above? > My main reason for prefering option 1) getDeviceAdvisories is that I > prefer to keep separate things that can be separate. I prefer to > have a separate pure data getter, getDeviceAdvisories, then let the > application do its own logic using that data, then let it create a > WebGL context if it wants to. Option 2) tangles these two separate > things. Concrete example: if an application will want to create a > WebGL context if either of two conditions are met, with Option 2) > will require doing two separate getContext calls. Blacklists will > have to be processed twice, etc. Here is a concrete example of how option 2) doesn't allow things that apps will want to do. Suppose that a browser always honors the default {antialias:true}, for example by implementing FXAA for renderers that don't support MSAA. Suppose that an application wants antialiasing, but not if the renderer is advertised as 'slow'. With option 1), the application can do: var gl = canvas.getContext("webgl", {async:true}); if (gl.getDeviceAdvisories().slow) gl = canvas.getContext("webgl", {async:true, antialias:false}); Thus the whole negociation can happen without waiting for any actual OpenGL context to be created. But with option 2), there is no way to check whether a context is 'slow'. Suppose that we correct option 2) by adding a context flag, 'slow', allowing to determine whether the context is slow. Then the negociation would still require waiting on OpenGL context creation: var gl = canvas.getContext("webgl"); var flags = gl.getContextAttributes(); if (flags.slow && flags.antialias) gl = canvas.getContext("webgl", {allowSlow:false}) ; On a stratospheric level, option 1) is better because it keeps separate things separate. (getting advisories from a blacklist-like kind of database, vs. 
creating OpenGL contexts). Cheers, Benoit > B) What's the right "slow/software/fallback" concept to expose as an > advisory / context creation requirement? > - option 1) "slow" / "allowSlow" > - option 2) "softwareRenderer" / "allowSoftwareRenderer" > - option 3) "fallback" / "allowFallback" ? > Cheers, > Benoit > ----- Original Message ----- > > AFAIK, SwiftShader is only used as a fallback is the user's driver > > is > > blacklisted. So how about a context creation flag {"nofallback": > > true}? This would indicate not to use any fallback WebGL > > implementation that might be used if the primary one is > > blacklisted. > > The intent is for SwiftShader, but it avoids mentioning software > > rendering. > > > Ashley > > > On 3 May 2012 13:13, Benoit Jacob < bjacob...@ > wrote: > > > > ----- Original Message ----- > > > > > > > If the main use case is to allow apps that can be implemented > > > > with > > > > > > > canvas2d to use that version when WebGL would otherwise run > > > > through > > > > > > > software rendering (which would be typically slower then on > > > > most > > > > > > > sensible configs, eg. because of fragment shaders processing), > > > > could > > > > > > > we 'simply' add a WebGL context attribute such as : > > > > > > > > > > > > > > allowSoftwareRendering (default: true) > > > > > > There are 2 parts in your proposal here: > > > > > > 1) replace "slow" by "SoftwareRendering" > > > > > > 2) make it part of context creation flags instead of a new getter > > > > > > Regarding 1), I wanted to avoid mentioning "software rendering" > > > in > > > the spec because it's tricky to define: all software runs on > > > hardware, so all is hardware-accelerated, after all. The current > > > CPU/GPU split might not be there forever, so the concept of a > > > "GPU" > > > might not be perennial either. That's why I wanted to avoid > > > entering > > > into these details and just said "slow". > > > > > > Regarding 2), I was hesitating about that. I don't have a firm > > > opinion either way. But there are going to be other flags, so one > > > should think of an API to allow deciding whether to proceed with > > > WebGL based on multiple factors. Such an API seems harder to > > > design > > > properly, so it seems simpler to add getDeviceAdvisories and let > > > the > > > application implement its own logic. > > > > > > > This would not solve more complex scenarios (eg. VTF slow) but > > > > those > > > > > > > would anyways require WebGL support (any VTF-using app probably > > > > > > > cannot > > > > > > > be easily implemented with canvas2d...), so benchmarking for > > > > this > > > > use > > > > > > > case should be much less of a problem. > > > > > > That's not true! Google MapsGL uses VTF. > > > > > > The key is that an application may want to do fancy things with > > > WebGL > > > while having a much simpler non-WebGL fallback. > > > > > > Cheers, > > > > > > Benoit > > > > > > > > > > > > > > Thoughts? > > > > > > > > > > > > > > > > > > > > > On Thu, May 3, 2012 at 4:07 PM, Ashley Gullen < > > > > ashley...@ > > > > > > > > > > > > wrote: > > > > > > > > On 2 May 2012 21:47, Gregg Tavares (?) < gman...@ > > > > > > wrote: > > > > > > > >> > > > > > > > >> > > > > > > > >> > > > > > > > >> On Wed, May 2, 2012 at 11:53 AM, Ashley Gullen < > > > > >> ashley...@ > > > > > > > > >> wrote: > > > > > > > >>> > > > > > > > >>> I think this is a great idea and I'm desperate for > > > > >>> something > > > > >>> like > > > > > > > >>> this. 
> > > > > > > >>> Our engine implements both a WebGL and Canvas 2D renderer, > > > > >>> and > > > > > > > >>> currently > > > > > > > >>> the Canvas 2D renderer is never used in Chrome 18 due to > > > > > > > >>> Swiftshader. I am > > > > > > > >>> keen to fall back to Canvas 2D instead of using Swiftshader > > > > >>> but > > > > > > > >>> there is no > > > > > > > >>> way to do that. > > > > > > > >> > > > > > > > >> > > > > > > > >> That's a little bit of an exaggeration. You can certainly > > > > >> choose > > > > > > > >> Canvas 2D > > > > > > > >> at anytime. You run a small benchmark and switch. > > > > > > > > > > > > > > > > > > > > > > > > We don't make any particular game, we just make an engine. > > > > > Are > > > > > you > > > > > > > > sure > > > > > > > > it's possible to make a benchmark script that is 100% > > > > > accurate > > > > > for > > > > > > > > all kinds > > > > > > > > of games with their varying performance profiles, and does > > > > > not > > > > > > > > delay the > > > > > > > > start of the game by more than a second? How do you know if > > > > > your > > > > > > > > benchmark > > > > > > > > is working properly? What if one renderer runs faster in some > > > > > > > > places and > > > > > > > > slower in others, and the other renderer runs the opposite > > > > > (faster > > > > > > > > where the > > > > > > > > other was slow, slower where the other was faster)? Which > > > > > renderer > > > > > > > > should > > > > > > > > be picked then? I'd rather just say: use the GPU. > > > > > > > > > > > > > > > > Ashley > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Tue May 8 17:43:49 2012 From: kbr...@ (Kenneth Russell) Date: Tue, 8 May 2012 17:43:49 -0700 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: <1113186757.4111205.1336519956891.JavaMail.root@mozilla.com> References: <2133174574.2183963.1336331914157.JavaMail.root@mozilla.com> <1113186757.4111205.1336519956891.JavaMail.root@mozilla.com> Message-ID: On Tue, May 8, 2012 at 4:32 PM, Benoit Jacob wrote: > > > ________________________________ > > "nofallback" is IMHO both too specific (it should matter by itself whether > it's a fallback) and not specific enough (why is a fallback bad?). > > Let's summarize the main open questions so far: > > A) what's the right API to expose this? > ??? - option 1) getDeviceAdvisories > ??? - option 2) new context creation flags that can cause creation to fail > if a condition is not met e.g. "allowSoftwareRendering" > ??? - none of the above? > > My main reason for prefering option 1) getDeviceAdvisories is that I prefer > to keep separate things that can be separate. I prefer to have a separate > pure data getter, getDeviceAdvisories, then let the application do its own > logic using that data, then let it create a WebGL context if it wants to. > Option 2) tangles these two separate things. Concrete example: if an > application will want to create a WebGL context if either of two conditions > are met, with Option 2) will require doing two separate getContext calls. > Blacklists will have to be processed twice, etc. > > Here is a concrete example of how option 2) doesn't allow things that apps > will want to do. > > Suppose that a browser always honors the default {antialias:true}, for > example by implementing FXAA for renderers that don't support MSAA. 
> > Suppose that an application wants antialiasing, but not if the renderer is > advertised as 'slow'. > > With option 1), the application can do: > > ? var gl = canvas.getContext("webgl", {async:true}); > ? if (gl.getDeviceAdvisories().slow) > ??? gl = canvas.getContext("webgl", {async:true, antialias:false}); > > Thus the whole negociation can happen without waiting for any actual OpenGL > context to be created. This wouldn't work -- the context creation attributes are ignored during the second and subsequent calls to getContext(). I don't think this will change even with the introduction of the "async" context creation flag. -Ken > But with option 2), there is no way to check whether a context is 'slow'. > Suppose that we correct option 2) by adding a context flag, 'slow', allowing > to determine whether the context is slow. Then the negociation would still > require waiting on OpenGL context creation: > > ? var gl = canvas.getContext("webgl"); > ? var flags = gl.getContextAttributes(); > ? if (flags.slow && flags.antialias) > ??? gl = canvas.getContext("webgl", {allowSlow:false}); > > On a stratospheric level, option 1) is better because it keeps separate > things separate.? (getting advisories from a blacklist-like kind of > database, vs. creating OpenGL contexts). > > Cheers, > Benoit > > > > B) What's the right "slow/software/fallback" concept to expose as an > advisory / context creation requirement? > ??? - option 1) "slow" / "allowSlow" > ??? - option 2) "softwareRenderer" / "allowSoftwareRenderer" > ??? - option 3) "fallback" / "allowFallback" ? > > Cheers, > Benoit > > > ________________________________ > > AFAIK, SwiftShader is only used as a fallback is the user's driver is > blacklisted. ?So how about a context creation flag {"nofallback": true}? > ?This would indicate not to use any fallback WebGL implementation that might > be used if the primary one is blacklisted. ?The intent is for SwiftShader, > but it avoids mentioning software rendering. > > Ashley > > > On 3 May 2012 13:13, Benoit Jacob wrote: >> >> >> >> ----- Original Message ----- >> > If the main use case is to allow apps that can be implemented with >> > canvas2d to use that version when WebGL would otherwise run through >> > software rendering (which would be typically slower then on most >> > sensible configs, eg. because of fragment shaders processing), could >> > we 'simply' add a WebGL context attribute such as : >> > >> > allowSoftwareRendering (default: true) >> >> There are 2 parts in your proposal here: >> ?1) replace "slow" by "SoftwareRendering" >> ?2) make it part of context creation flags instead of a new getter >> >> Regarding 1), I wanted to avoid mentioning "software rendering" in the >> spec because it's tricky to define: all software runs on hardware, so all is >> hardware-accelerated, after all. The current CPU/GPU split might not be >> there forever, so the concept of a "GPU" might not be perennial either. >> That's why I wanted to avoid entering into these details and just said >> "slow". >> >> Regarding 2), I was hesitating about that. I don't have a firm opinion >> either way. But there are going to be other flags, so one should think of an >> API to allow deciding whether to proceed with WebGL based on multiple >> factors. Such an API seems harder to design properly, so it seems simpler to >> add getDeviceAdvisories and let the application implement its own logic. >> >> >> > This would not solve more complex scenarios (eg. 
VTF slow) but those >> > would anyways require WebGL support (any VTF-using app probably >> > cannot >> > be easily implemented with canvas2d...), so benchmarking for this use >> > case should be much less of a problem. >> >> That's not true! Google MapsGL uses VTF. >> >> The key is that an application may want to do fancy things with WebGL >> while having a much simpler non-WebGL fallback. >> >> Cheers, >> Benoit >> > >> > Thoughts? >> > >> > >> > On Thu, May 3, 2012 at 4:07 PM, Ashley Gullen >> > wrote: >> > > On 2 May 2012 21:47, Gregg Tavares (?) wrote: >> > >> >> > >> >> > >> >> > >> On Wed, May 2, 2012 at 11:53 AM, Ashley Gullen >> > >> wrote: >> > >>> >> > >>> I think this is a great idea and I'm?desperate?for something like >> > >>> this. >> > >>> ?Our engine implements both a WebGL and Canvas 2D renderer, and >> > >>> ?currently >> > >>> the Canvas 2D renderer is never used in Chrome 18 due to >> > >>> Swiftshader. ?I am >> > >>> keen to fall back to Canvas 2D instead of using Swiftshader but >> > >>> there is no >> > >>> way to do that. >> > >> >> > >> >> > >> That's a little bit of an exaggeration. You can certainly choose >> > >> Canvas 2D >> > >> at anytime. You run a small benchmark and switch. >> > > >> > > >> > > We don't make any particular game, we just make an engine. ?Are you >> > > sure >> > > it's possible to make a benchmark script that is 100% accurate for >> > > all kinds >> > > of games with their varying performance profiles, and does not >> > > delay the >> > > start of the game by more than a second??How do you know if your >> > > benchmark >> > > is working properly? ?What if one renderer runs faster in some >> > > places and >> > > slower in others, and the other renderer runs the opposite (faster >> > > where the >> > > other was slow, slower where the other was faster)? ?Which renderer >> > > should >> > > be picked then? ?I'd rather just say: use the GPU. >> > > >> > > Ashley >> > > > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From bja...@ Tue May 8 18:39:45 2012 From: bja...@ (Benoit Jacob) Date: Tue, 8 May 2012 18:39:45 -0700 (PDT) Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: Message-ID: <1439171571.4218455.1336527585159.JavaMail.root@mozilla.com> ----- Original Message ----- > On Tue, May 8, 2012 at 4:32 PM, Benoit Jacob > wrote: > > > > > > ________________________________ > > > > "nofallback" is IMHO both too specific (it should matter by itself > > whether > > it's a fallback) and not specific enough (why is a fallback bad?). > > > > Let's summarize the main open questions so far: > > > > A) what's the right API to expose this? > > ??? - option 1) getDeviceAdvisories > > ??? - option 2) new context creation flags that can cause creation > > ??? to fail > > if a condition is not met e.g. "allowSoftwareRendering" > > ??? - none of the above? > > > > My main reason for prefering option 1) getDeviceAdvisories is that > > I prefer > > to keep separate things that can be separate. I prefer to have a > > separate > > pure data getter, getDeviceAdvisories, then let the application do > > its own > > logic using that data, then let it create a WebGL context if it > > wants to. 
> > Option 2) tangles these two separate things. Concrete example: if > > an > > application will want to create a WebGL context if either of two > > conditions > > are met, with Option 2) will require doing two separate getContext > > calls. > > Blacklists will have to be processed twice, etc. > > > > Here is a concrete example of how option 2) doesn't allow things > > that apps > > will want to do. > > > > Suppose that a browser always honors the default {antialias:true}, > > for > > example by implementing FXAA for renderers that don't support MSAA. > > > > Suppose that an application wants antialiasing, but not if the > > renderer is > > advertised as 'slow'. > > > > With option 1), the application can do: > > > > ? var gl = canvas.getContext("webgl", {async:true}); > > ? if (gl.getDeviceAdvisories().slow) > > ??? gl = canvas.getContext("webgl", {async:true, antialias:false}); > > > > Thus the whole negociation can happen without waiting for any > > actual OpenGL > > context to be created. > > This wouldn't work -- the context creation attributes are ignored > during the second and subsequent calls to getContext(). I don't think > this will change even with the introduction of the "async" context > creation flag. Oh. Then, discard the canvas element, create a new one in place of it, and do the second getContext call on it? Benoit > > -Ken > > > But with option 2), there is no way to check whether a context is > > 'slow'. > > Suppose that we correct option 2) by adding a context flag, 'slow', > > allowing > > to determine whether the context is slow. Then the negociation > > would still > > require waiting on OpenGL context creation: > > > > ? var gl = canvas.getContext("webgl"); > > ? var flags = gl.getContextAttributes(); > > ? if (flags.slow && flags.antialias) > > ??? gl = canvas.getContext("webgl", {allowSlow:false}); > > > > On a stratospheric level, option 1) is better because it keeps > > separate > > things separate.? (getting advisories from a blacklist-like kind of > > database, vs. creating OpenGL contexts). > > > > Cheers, > > Benoit > > > > > > > > B) What's the right "slow/software/fallback" concept to expose as > > an > > advisory / context creation requirement? > > ??? - option 1) "slow" / "allowSlow" > > ??? - option 2) "softwareRenderer" / "allowSoftwareRenderer" > > ??? - option 3) "fallback" / "allowFallback" ? > > > > Cheers, > > Benoit > > > > > > ________________________________ > > > > AFAIK, SwiftShader is only used as a fallback is the user's driver > > is > > blacklisted. ?So how about a context creation flag {"nofallback": > > true}? > > ?This would indicate not to use any fallback WebGL implementation > > ?that might > > be used if the primary one is blacklisted. ?The intent is for > > SwiftShader, > > but it avoids mentioning software rendering. > > > > Ashley > > > > > > On 3 May 2012 13:13, Benoit Jacob wrote: > >> > >> > >> > >> ----- Original Message ----- > >> > If the main use case is to allow apps that can be implemented > >> > with > >> > canvas2d to use that version when WebGL would otherwise run > >> > through > >> > software rendering (which would be typically slower then on most > >> > sensible configs, eg. 
because of fragment shaders processing), > >> > could > >> > we 'simply' add a WebGL context attribute such as : > >> > > >> > allowSoftwareRendering (default: true) > >> > >> There are 2 parts in your proposal here: > >> ?1) replace "slow" by "SoftwareRendering" > >> ?2) make it part of context creation flags instead of a new getter > >> > >> Regarding 1), I wanted to avoid mentioning "software rendering" in > >> the > >> spec because it's tricky to define: all software runs on hardware, > >> so all is > >> hardware-accelerated, after all. The current CPU/GPU split might > >> not be > >> there forever, so the concept of a "GPU" might not be perennial > >> either. > >> That's why I wanted to avoid entering into these details and just > >> said > >> "slow". > >> > >> Regarding 2), I was hesitating about that. I don't have a firm > >> opinion > >> either way. But there are going to be other flags, so one should > >> think of an > >> API to allow deciding whether to proceed with WebGL based on > >> multiple > >> factors. Such an API seems harder to design properly, so it seems > >> simpler to > >> add getDeviceAdvisories and let the application implement its own > >> logic. > >> > >> > >> > This would not solve more complex scenarios (eg. VTF slow) but > >> > those > >> > would anyways require WebGL support (any VTF-using app probably > >> > cannot > >> > be easily implemented with canvas2d...), so benchmarking for > >> > this use > >> > case should be much less of a problem. > >> > >> That's not true! Google MapsGL uses VTF. > >> > >> The key is that an application may want to do fancy things with > >> WebGL > >> while having a much simpler non-WebGL fallback. > >> > >> Cheers, > >> Benoit > >> > > >> > Thoughts? > >> > > >> > > >> > On Thu, May 3, 2012 at 4:07 PM, Ashley Gullen > >> > > >> > wrote: > >> > > On 2 May 2012 21:47, Gregg Tavares (?) > >> > > wrote: > >> > >> > >> > >> > >> > >> > >> > >> On Wed, May 2, 2012 at 11:53 AM, Ashley Gullen > >> > >> > >> > >> wrote: > >> > >>> > >> > >>> I think this is a great idea and I'm?desperate?for something > >> > >>> like > >> > >>> this. > >> > >>> ?Our engine implements both a WebGL and Canvas 2D renderer, > >> > >>> ?and > >> > >>> ?currently > >> > >>> the Canvas 2D renderer is never used in Chrome 18 due to > >> > >>> Swiftshader. ?I am > >> > >>> keen to fall back to Canvas 2D instead of using Swiftshader > >> > >>> but > >> > >>> there is no > >> > >>> way to do that. > >> > >> > >> > >> > >> > >> That's a little bit of an exaggeration. You can certainly > >> > >> choose > >> > >> Canvas 2D > >> > >> at anytime. You run a small benchmark and switch. > >> > > > >> > > > >> > > We don't make any particular game, we just make an engine. > >> > > ?Are you > >> > > sure > >> > > it's possible to make a benchmark script that is 100% accurate > >> > > for > >> > > all kinds > >> > > of games with their varying performance profiles, and does not > >> > > delay the > >> > > start of the game by more than a second??How do you know if > >> > > your > >> > > benchmark > >> > > is working properly? ?What if one renderer runs faster in some > >> > > places and > >> > > slower in others, and the other renderer runs the opposite > >> > > (faster > >> > > where the > >> > > other was slow, slower where the other was faster)? ?Which > >> > > renderer > >> > > should > >> > > be picked then? ?I'd rather just say: use the GPU. 
> >> > > > >> > > Ashley > >> > > > > > > > > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Tue May 8 18:48:07 2012 From: kbr...@ (Kenneth Russell) Date: Tue, 8 May 2012 18:48:07 -0700 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: <1439171571.4218455.1336527585159.JavaMail.root@mozilla.com> References: <1439171571.4218455.1336527585159.JavaMail.root@mozilla.com> Message-ID: On Tue, May 8, 2012 at 6:39 PM, Benoit Jacob wrote: > > > ----- Original Message ----- >> On Tue, May 8, 2012 at 4:32 PM, Benoit Jacob >> wrote: >> > >> > >> > ________________________________ >> > >> > "nofallback" is IMHO both too specific (it should matter by itself >> > whether >> > it's a fallback) and not specific enough (why is a fallback bad?). >> > >> > Let's summarize the main open questions so far: >> > >> > A) what's the right API to expose this? >> > ??? - option 1) getDeviceAdvisories >> > ??? - option 2) new context creation flags that can cause creation >> > ??? to fail >> > if a condition is not met e.g. "allowSoftwareRendering" >> > ??? - none of the above? >> > >> > My main reason for prefering option 1) getDeviceAdvisories is that >> > I prefer >> > to keep separate things that can be separate. I prefer to have a >> > separate >> > pure data getter, getDeviceAdvisories, then let the application do >> > its own >> > logic using that data, then let it create a WebGL context if it >> > wants to. >> > Option 2) tangles these two separate things. Concrete example: if >> > an >> > application will want to create a WebGL context if either of two >> > conditions >> > are met, with Option 2) will require doing two separate getContext >> > calls. >> > Blacklists will have to be processed twice, etc. >> > >> > Here is a concrete example of how option 2) doesn't allow things >> > that apps >> > will want to do. >> > >> > Suppose that a browser always honors the default {antialias:true}, >> > for >> > example by implementing FXAA for renderers that don't support MSAA. >> > >> > Suppose that an application wants antialiasing, but not if the >> > renderer is >> > advertised as 'slow'. >> > >> > With option 1), the application can do: >> > >> > ? var gl = canvas.getContext("webgl", {async:true}); >> > ? if (gl.getDeviceAdvisories().slow) >> > ??? gl = canvas.getContext("webgl", {async:true, antialias:false}); >> > >> > Thus the whole negociation can happen without waiting for any >> > actual OpenGL >> > context to be created. >> >> This wouldn't work -- the context creation attributes are ignored >> during the second and subsequent calls to getContext(). I don't think >> this will change even with the introduction of the "async" context >> creation flag. > > Oh. Then, discard the canvas element, create a new one in place of it, and do the second getContext call on it? Yes, that's what would be needed in these examples. -Ken > Benoit > >> >> -Ken >> >> > But with option 2), there is no way to check whether a context is >> > 'slow'. >> > Suppose that we correct option 2) by adding a context flag, 'slow', >> > allowing >> > to determine whether the context is slow. Then the negociation >> > would still >> > require waiting on OpenGL context creation: >> > >> > ? 
var gl = canvas.getContext("webgl"); >> > ? var flags = gl.getContextAttributes(); >> > ? if (flags.slow && flags.antialias) >> > ??? gl = canvas.getContext("webgl", {allowSlow:false}); >> > >> > On a stratospheric level, option 1) is better because it keeps >> > separate >> > things separate.? (getting advisories from a blacklist-like kind of >> > database, vs. creating OpenGL contexts). >> > >> > Cheers, >> > Benoit >> > >> > >> > >> > B) What's the right "slow/software/fallback" concept to expose as >> > an >> > advisory / context creation requirement? >> > ??? - option 1) "slow" / "allowSlow" >> > ??? - option 2) "softwareRenderer" / "allowSoftwareRenderer" >> > ??? - option 3) "fallback" / "allowFallback" ? >> > >> > Cheers, >> > Benoit >> > >> > >> > ________________________________ >> > >> > AFAIK, SwiftShader is only used as a fallback is the user's driver >> > is >> > blacklisted. ?So how about a context creation flag {"nofallback": >> > true}? >> > ?This would indicate not to use any fallback WebGL implementation >> > ?that might >> > be used if the primary one is blacklisted. ?The intent is for >> > SwiftShader, >> > but it avoids mentioning software rendering. >> > >> > Ashley >> > >> > >> > On 3 May 2012 13:13, Benoit Jacob wrote: >> >> >> >> >> >> >> >> ----- Original Message ----- >> >> > If the main use case is to allow apps that can be implemented >> >> > with >> >> > canvas2d to use that version when WebGL would otherwise run >> >> > through >> >> > software rendering (which would be typically slower then on most >> >> > sensible configs, eg. because of fragment shaders processing), >> >> > could >> >> > we 'simply' add a WebGL context attribute such as : >> >> > >> >> > allowSoftwareRendering (default: true) >> >> >> >> There are 2 parts in your proposal here: >> >> ?1) replace "slow" by "SoftwareRendering" >> >> ?2) make it part of context creation flags instead of a new getter >> >> >> >> Regarding 1), I wanted to avoid mentioning "software rendering" in >> >> the >> >> spec because it's tricky to define: all software runs on hardware, >> >> so all is >> >> hardware-accelerated, after all. The current CPU/GPU split might >> >> not be >> >> there forever, so the concept of a "GPU" might not be perennial >> >> either. >> >> That's why I wanted to avoid entering into these details and just >> >> said >> >> "slow". >> >> >> >> Regarding 2), I was hesitating about that. I don't have a firm >> >> opinion >> >> either way. But there are going to be other flags, so one should >> >> think of an >> >> API to allow deciding whether to proceed with WebGL based on >> >> multiple >> >> factors. Such an API seems harder to design properly, so it seems >> >> simpler to >> >> add getDeviceAdvisories and let the application implement its own >> >> logic. >> >> >> >> >> >> > This would not solve more complex scenarios (eg. VTF slow) but >> >> > those >> >> > would anyways require WebGL support (any VTF-using app probably >> >> > cannot >> >> > be easily implemented with canvas2d...), so benchmarking for >> >> > this use >> >> > case should be much less of a problem. >> >> >> >> That's not true! Google MapsGL uses VTF. >> >> >> >> The key is that an application may want to do fancy things with >> >> WebGL >> >> while having a much simpler non-WebGL fallback. >> >> >> >> Cheers, >> >> Benoit >> >> > >> >> > Thoughts? >> >> > >> >> > >> >> > On Thu, May 3, 2012 at 4:07 PM, Ashley Gullen >> >> > >> >> > wrote: >> >> > > On 2 May 2012 21:47, Gregg Tavares (?) 
>> >> > > wrote: >> >> > >> >> >> > >> >> >> > >> >> >> > >> On Wed, May 2, 2012 at 11:53 AM, Ashley Gullen >> >> > >> >> >> > >> wrote: >> >> > >>> >> >> > >>> I think this is a great idea and I'm?desperate?for something >> >> > >>> like >> >> > >>> this. >> >> > >>> ?Our engine implements both a WebGL and Canvas 2D renderer, >> >> > >>> ?and >> >> > >>> ?currently >> >> > >>> the Canvas 2D renderer is never used in Chrome 18 due to >> >> > >>> Swiftshader. ?I am >> >> > >>> keen to fall back to Canvas 2D instead of using Swiftshader >> >> > >>> but >> >> > >>> there is no >> >> > >>> way to do that. >> >> > >> >> >> > >> >> >> > >> That's a little bit of an exaggeration. You can certainly >> >> > >> choose >> >> > >> Canvas 2D >> >> > >> at anytime. You run a small benchmark and switch. >> >> > > >> >> > > >> >> > > We don't make any particular game, we just make an engine. >> >> > > ?Are you >> >> > > sure >> >> > > it's possible to make a benchmark script that is 100% accurate >> >> > > for >> >> > > all kinds >> >> > > of games with their varying performance profiles, and does not >> >> > > delay the >> >> > > start of the game by more than a second??How do you know if >> >> > > your >> >> > > benchmark >> >> > > is working properly? ?What if one renderer runs faster in some >> >> > > places and >> >> > > slower in others, and the other renderer runs the opposite >> >> > > (faster >> >> > > where the >> >> > > other was slow, slower where the other was faster)? ?Which >> >> > > renderer >> >> > > should >> >> > > be picked then? ?I'd rather just say: use the GPU. >> >> > > >> >> > > Ashley >> >> > >> > >> > >> > >> > >> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From thj...@ Tue May 8 19:04:08 2012 From: thj...@ (Thor Harald Johansen) Date: Wed, 09 May 2012 04:04:08 +0200 Subject: [Public WebGL] Latency issues, ideas for next WebGL revision Message-ID: <4FA9D098.3060409@thj.no> Hello everyone! I'm the creator and maintainer of Sketcher, a Java-based multi-user image editor and rich Internet application, and ArtGrounds, an art gallery and community that Sketcher runs on top of. For the purposes of this mailing list, Sketcher is what I wish to focus on, so here is a screenshot: http://www.artgrounds.com/submission-data/104182/ Liking to be ahead of the curve, I've got an ambitious plan to port the entire application to JavaScript and WebGL. Sketcher supports pressure sensitive graphic tablets through the JTablet API that was developed by my good friend Marcello Bastea-Forte who currently works for Apple. Wacom has developed a browser plugin that exposes a tablet API to JavaScript, and I plan to use this for my port of Sketcher. There is a persistent pattern here, of vendors not bothering to add support for digitizer tablets to their platform APIs, despite these having been popular since the 1970s. High frame rates and low latencies are vital for sketching, inking and coloring, a common usage case for tablets in the graphics industry. I imagine this to be important for computer games as well, and this is probably going to be the most common usage case for WebGL. I have an airbrush stroke rendering algorithm running on the GPU right now. 
It is looking good, and I even managed to implement dithering, which makes a significant difference when you're performing repeated overlapping alpha blends. From what I can gather, WebGL offers no control over such features as vertical sync and triple buffering, and this becomes very apparent when running my application in Google Chrome, which seems to turn on these things by default. Since vertical sync on its own shouldn't introduce significant latency, I am suspecting that either Chrome or my graphics hardware is enabling triple buffering. I have disabled triple buffering in the NVIDIA Control Panel, but in my experience, these settings are often overridden by the user-space programs. Disabling the vertical sync option in chrome://flags removes all latency issues and improves the frame rate. I have calculated the latency for 3 buffers to be 50 milliseconds, plus monitor latency. In my experience, anything over 10 milliseconds is noticeable. I would like to disable vertical sync and triple buffering from my application code. I'd also like some control over sub-sampling and anisotropic filtering, resource hungry algorithms that my application doesn't really need. I should at least be able to request that these algorithms be disabled. In testing my application on Mozilla Firefox, I found the frame rate to be so low that I was not able to tell if there was any latency. I assume that Firefox is still using software compositing, which leads me to another issue... A very common usage case for WebGL is going to be a rectangular opaque 3D canvas with no overlapping DOM elements. This is a case that can easily be optimized by means of an API flag on the getContext() call. The application promises to not attempt overlapping elements, and the browser renders the WebGL canvas in-place, in a separate sub-window, on top of any other elements, similarly to how browser plugins are painted today. Appropriate OpenGL viewport cropping is performed when the canvas is scrolled out of view. Browsers that use GPU accelerated compositing can simply ignore the flag. Browsers like Firefox will experience a significant speedup. And that pretty much summarizes my thoughts on WebGL. I would appreciate your viewpoints, especially if you're on the WebGL standards committee. Regards, Thor Harald Johansen ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From thj...@ Tue May 8 21:06:08 2012 From: thj...@ (Thor Harald Johansen) Date: Wed, 09 May 2012 06:06:08 +0200 Subject: [Public WebGL] Re: Latency issues, ideas for next WebGL revision In-Reply-To: <4FA9D098.3060409@thj.no> References: <4FA9D098.3060409@thj.no> Message-ID: <4FA9ED30.8050705@thj.no> After exchanging some emails with Boris Zbarsky, it would seem that my Mozilla frame rate issue was due to hardware compatibility. With Opera using hardware acceleration as well, it would seem that compositing is not going to be an issue, so go ahead and discard that part of my message. Can't seem to get WebGL working in the Opera 12 Beta, even after enabling the appropriate flags for WebGL and hardware acceleration in about:config, but if they don't do triple buffering by default either, that part of my message becomes a Chrome specific issue. Thor On 5/9/2012 4:04 AM, Thor Harald Johansen wrote: > Hello everyone! 
> > I'm the creator and maintainer of Sketcher, a Java-based multi-user > image editor and rich Internet application, and ArtGrounds, an art > gallery and community that Sketcher runs on top of. > > For the purposes of this mailing list, Sketcher is what I wish to focus > on, so here is a screenshot: > > http://www.artgrounds.com/submission-data/104182/ > > Liking to be ahead of the curve, I've got an ambitious plan to port the > entire application to JavaScript and WebGL. > > Sketcher supports pressure sensitive graphic tablets through the JTablet > API that was developed by my good friend Marcello Bastea-Forte who > currently works for Apple. Wacom has developed a browser plugin that > exposes a tablet API to JavaScript, and I plan to use this for my port > of Sketcher. > > There is a persistent pattern here, of vendors not bothering to add > support for digitizer tablets to their platform APIs, despite these > having been popular since the 1970s. > > High frame rates and low latencies are vital for sketching, inking and > coloring, a common usage case for tablets in the graphics industry. I > imagine this to be important for computer games as well, and this is > probably going to be the most common usage case for WebGL. > > I have an airbrush stroke rendering algorithm running on the GPU right > now. It is looking good, and I even managed to implement dithering, > which makes a significant difference when you're performing repeated > overlapping alpha blends. > > From what I can gather, WebGL offers no control over such features as > vertical sync and triple buffering, and this becomes very apparent when > running my application in Google Chrome, which seems to turn on these > things by default. > > Since vertical sync on its own shouldn't introduce significant latency, > I am suspecting that either Chrome or my graphics hardware is enabling > triple buffering. I have disabled triple buffering in the NVIDIA Control > Panel, but in my experience, these settings are often overridden by the > user-space programs. > > Disabling the vertical sync option in chrome://flags removes all latency > issues and improves the frame rate. I have calculated the latency for 3 > buffers to be 50 milliseconds, plus monitor latency. In my experience, > anything over 10 milliseconds is noticeable. > > I would like to disable vertical sync and triple buffering from my > application code. I'd also like some control over sub-sampling and > anisotropic filtering, resource hungry algorithms that my application > doesn't really need. I should at least be able to request that these > algorithms be disabled. > > In testing my application on Mozilla Firefox, I found the frame rate to > be so low that I was not able to tell if there was any latency. I assume > that Firefox is still using software compositing, which leads me to > another issue... > > A very common usage case for WebGL is going to be a rectangular opaque > 3D canvas with no overlapping DOM elements. This is a case that can > easily be optimized by means of an API flag on the getContext() call. > The application promises to not attempt overlapping elements, and the > browser renders the WebGL canvas in-place, in a separate sub-window, on > top of any other elements, similarly to how browser plugins are painted > today. Appropriate OpenGL viewport cropping is performed when the canvas > is scrolled out of view. > > Browsers that use GPU accelerated compositing can simply ignore the > flag. Browsers like Firefox will experience a significant speedup. 
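For reference, the 50 millisecond figure quoted above is just frame-interval arithmetic, assuming a 60 Hz display:

    var refreshHz = 60;
    var frameMs = 1000 / refreshHz;    // ~16.7 ms per refresh interval
    var buffers = 3;                   // triple buffering: three frames in flight
    var latencyMs = buffers * frameMs; // ~50 ms before a submitted frame reaches the screen

Monitor processing delay comes on top of that, as noted above.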
> > And that pretty much summarizes my thoughts on WebGL. I would appreciate > your viewpoints, especially if you're on the WebGL standards committee. > > Regards, > Thor Harald Johansen ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gma...@ Tue May 8 22:10:55 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo5YukKQ==?=) Date: Tue, 8 May 2012 22:10:55 -0700 Subject: [Public WebGL] Debugging WebGL Applications Message-ID: Just FYI, Based on helping some developers I added a few more tips for debugging WebGL apps to the WebGL wiki http://www.khronos.org/webgl/wiki/Debugging#Programmatically_Debugging_WebGL_applications if you find them useful please pass them on. -gregg -------------- next part -------------- An HTML attachment was scrubbed... URL: From thj...@ Tue May 8 22:55:02 2012 From: thj...@ (Thor Harald Johansen) Date: Wed, 09 May 2012 07:55:02 +0200 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: References: <1439171571.4218455.1336527585159.JavaMail.root@mozilla.com> Message-ID: <4FAA06B6.9050905@thj.no> >>> This wouldn't work -- the context creation attributes are ignored >>> during the second and subsequent calls to getContext(). I don't think >>> this will change even with the introduction of the "async" context >>> creation flag. >> >> Oh. Then, discard the canvas element, create a new one in place of it, and do the second getContext call on it? > > Yes, that's what would be needed in these examples. It seems clumsy to create and discard dummy context just for feature checking. Wouldn't it be better if we could query supported features before creating a context? The canvas object unfortunately lacks a mechanism for this. How about adding a flag (false by default) that allows an acquired context to be queried and configured before activation? Alternatively, for complete transparency, drop the flag completely, remain in a configuration state for as long as possible, and create the frame buffer only on demand. Can anyone imagine a scenario where this would break existing code? Thor ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From bja...@ Wed May 9 05:03:28 2012 From: bja...@ (Benoit Jacob) Date: Wed, 9 May 2012 05:03:28 -0700 (PDT) Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: <4FAA06B6.9050905@thj.no> Message-ID: <1005966694.4458781.1336565008176.JavaMail.root@mozilla.com> ----- Original Message ----- > >>> This wouldn't work -- the context creation attributes are ignored > >>> during the second and subsequent calls to getContext(). I don't > >>> think > >>> this will change even with the introduction of the "async" > >>> context > >>> creation flag. > >> > >> Oh. Then, discard the canvas element, create a new one in place of > >> it, and do the second getContext call on it? > > > > Yes, that's what would be needed in these examples. > > It seems clumsy to create and discard dummy context just for feature > checking. 
Wouldn't it be better if we could query supported features > before creating a context? The canvas object unfortunately lacks a > mechanism for this. > > How about adding a flag (false by default) that allows an acquired > context to be queried and configured before activation? That is what the {async:true} flag is going to be, once we have specified and implemented async context creation. Benoit > > Alternatively, for complete transparency, drop the flag completely, > remain in a configuration state for as long as possible, and create > the > frame buffer only on demand. > > Can anyone imagine a scenario where this would break existing code? > > Thor > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From thj...@ Wed May 9 05:51:00 2012 From: thj...@ (Thor Harald Johansen) Date: Wed, 09 May 2012 14:51:00 +0200 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: <1005966694.4458781.1336565008176.JavaMail.root@mozilla.com> References: <1005966694.4458781.1336565008176.JavaMail.root@mozilla.com> Message-ID: <4FAA6834.4000002@thj.no> >> How about adding a flag (false by default) that allows an acquired >> context to be queried and configured before activation? > > That is what the {async:true} flag is going to be, once we have specified and implemented async context creation. Ah! You're referring to what I'd call a "non-blocking" call, then? With things like node.js being all the rage, "asynchronous" has kind of come to mean "event callback" to many developers. Even "non-blocking" isn't a very good term. The main benefit here is not the speed of the call, but the delayed resource creation that allows for additional setup. I am struggling to find a good word for the concept, really. It's not really asynchronous if it synchronizes with the first call that requires hardware resources, is it... Still, is there actually a need for an explicit flag? Neither the programmer nor the end user is going to see a visual difference between "no resources allocated yet" vs "nothing rendered yet". My original point was that it seems entirely possible to allow the following: var gl = canvas.getContext("webgl"); var caps = gl.getDeviceCaps(); gl.enableSomeCap(); var buffer = gl.createBuffer(); // HW context auto-created here ... ... ...and then drop passing flags into getContext() altogether? Thor ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From mai...@ Wed May 9 06:25:44 2012 From: mai...@ (Rico P.) Date: Wed, 9 May 2012 15:25:44 +0200 Subject: [Public WebGL] getContextRaw Message-ID: I just found in Chrome 18 a function in the canvas element called getContextRaw analog to getContext. It behaves the same as the regular getContext. I couldn't find any useful information to this method, any idea what this method does? 
- Rico ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From jon...@ Wed May 9 07:57:28 2012 From: jon...@ (Jon Buckley) Date: Wed, 9 May 2012 10:57:28 -0400 Subject: [Public WebGL] WEBGL_compressed_texture_s3tc is now enabled in Firefox Nightly Message-ID: Hello everyone, I'm happy to announce that the WEBGL_compressed_texture_s3tc extension has been enabled in today's Firefox Nightly. If you would like to try it out, please download at http://nightly.mozilla.org. As it's still a draft extension, it's vendor-prefixed with MOZ_, so you need to use the string "MOZ_WEBGL_compressed_texture_s3tc" to enable it in your code. Here are some links to test it out: * Conformance tests - https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/conformance/extensions/webgl-compressed-texture-s3tc.html * Demo that Brandon Jones did for WebGL Camp #4 - http://media.tojicode.com/webgl-samples/dds.html * WebGL Texture Loader library that supports DDS textures also by Brandon Jones - https://github.com/toji/webgl-texture-utils If you find any bugs in our implementation, please file a bug at https://bugzilla.mozilla.org/enter_bug.cgi?product=core&component=Canvas%3A%20WebGL&cc=jon...@ Thanks, Jon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ash...@ Wed May 9 08:04:37 2012 From: ash...@ (Ashley Gullen) Date: Wed, 9 May 2012 16:04:37 +0100 Subject: [Public WebGL] WEBGL_compressed_texture_s3tc is now enabled in Firefox Nightly In-Reply-To: References: Message-ID: Any ideas how widely supported this extension is on desktop machines? Ashley On 9 May 2012 15:57, Jon Buckley wrote: > Hello everyone, I'm happy to announce that the WEBGL_compressed_texture_s3tc > extension has been enabled in today's Firefox Nightly. If you would like to > try it out, please download at http://nightly.mozilla.org. As it's still > a draft extension, it's vendor-prefixed with MOZ_, so you need to use the > string "MOZ_WEBGL_compressed_texture_s3tc" to enable it in your code. > > Here are some links to test it out: > * Conformance tests - > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/conformance/extensions/webgl-compressed-texture-s3tc.html > * Demo that Brandon Jones did for WebGL Camp #4 - > http://media.tojicode.com/webgl-samples/dds.html > * WebGL Texture Loader library that supports DDS textures also by Brandon > Jones - https://github.com/toji/webgl-texture-utils > > If you find any bugs in our implementation, please file a bug at > https://bugzilla.mozilla.org/enter_bug.cgi?product=core&component=Canvas%3A%20WebGL&cc=jon...@ > . > > Thanks, Jon > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Wed May 9 08:06:22 2012 From: bja...@ (Benoit Jacob) Date: Wed, 9 May 2012 08:06:22 -0700 (PDT) Subject: [Public WebGL] WEBGL_compressed_texture_s3tc is now enabled in Firefox Nightly In-Reply-To: Message-ID: <1443166266.4666290.1336575982558.JavaMail.root@mozilla.com> Many thanks for this work, Jon. Let's mention that this extension is already available in recent revisions of WebKit with WEBKIT_ prefix. Thanks also to Gregg for the draft and the conformance tests. 
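As a minimal sketch of how content can cope with the prefixes while the extension is in draft (the exact list of names to try is an assumption based on the MOZ_ and WEBKIT_ prefixes mentioned above):

    // Try the unprefixed name first, then the vendor-prefixed variants.
    function getS3TCExtension(gl) {
      var names = [
        "WEBGL_compressed_texture_s3tc",
        "MOZ_WEBGL_compressed_texture_s3tc",
        "WEBKIT_WEBGL_compressed_texture_s3tc"
      ];
      for (var i = 0; i < names.length; i++) {
        var ext = gl.getExtension(names[i]);
        if (ext) {
          return ext; // exposes the COMPRESSED_*_S3TC_* format enums when supported
        }
      }
      return null; // not available on this implementation
    }

Once the extension graduates from draft status, only the unprefixed name should be needed.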
Benoit ----- Original Message ----- > Hello everyone, I'm happy to announce that the > WEBGL_compressed_texture_s3tc extension has been enabled in today's > Firefox Nightly. If you would like to try it out, please download at > http://nightly.mozilla.org . As it's still a draft extension, it's > vendor-prefixed with MOZ_, so you need to use the string > "MOZ_WEBGL_compressed_texture_s3tc" to enable it in your code. > Here are some links to test it out: > * Conformance tests - > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/conformance/extensions/webgl-compressed-texture-s3tc.html > * Demo that Brandon Jones did for WebGL Camp #4 - > http://media.tojicode.com/webgl-samples/dds.html > * WebGL Texture Loader library that supports DDS textures also by > Brandon Jones - https://github.com/toji/webgl-texture-utils > If you find any bugs in our implementation, please file a bug at > https://bugzilla.mozilla.org/enter_bug.cgi?product=core&component=Canvas%3A%20WebGL&cc=jon...@ > . > Thanks, Jon -------------- next part -------------- An HTML attachment was scrubbed... URL: From bag...@ Wed May 9 08:08:52 2012 From: bag...@ (Patrick Baggett) Date: Wed, 9 May 2012 10:08:52 -0500 Subject: [Public WebGL] WEBGL_compressed_texture_s3tc is now enabled in Firefox Nightly In-Reply-To: References: Message-ID: Extremely well supported on Windows, Linux less so when using open-source drivers due to patents on S3TC, but can be re-enabled with a library for countries where software patents don't apply. Patrick On Wed, May 9, 2012 at 10:04 AM, Ashley Gullen wrote: > Any ideas how widely supported this extension is on desktop machines? > > Ashley > > > On 9 May 2012 15:57, Jon Buckley wrote: > >> Hello everyone, I'm happy to announce that the WEBGL_compressed_texture_s3tc >> extension has been enabled in today's Firefox Nightly. If you would like to >> try it out, please download at http://nightly.mozilla.org. As it's still >> a draft extension, it's vendor-prefixed with MOZ_, so you need to use the >> string "MOZ_WEBGL_compressed_texture_s3tc" to enable it in your code. >> >> Here are some links to test it out: >> * Conformance tests - >> https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/conformance/extensions/webgl-compressed-texture-s3tc.html >> * Demo that Brandon Jones did for WebGL Camp #4 - >> http://media.tojicode.com/webgl-samples/dds.html >> * WebGL Texture Loader library that supports DDS textures also by Brandon >> Jones - https://github.com/toji/webgl-texture-utils >> >> If you find any bugs in our implementation, please file a bug at >> https://bugzilla.mozilla.org/enter_bug.cgi?product=core&component=Canvas%3A%20WebGL&cc=jon...@ >> . >> >> Thanks, Jon >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ash...@ Wed May 9 08:10:21 2012 From: ash...@ (Ashley Gullen) Date: Wed, 9 May 2012 16:10:21 +0100 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: <4FAA6834.4000002@thj.no> References: <1005966694.4458781.1336565008176.JavaMail.root@mozilla.com> <4FAA6834.4000002@thj.no> Message-ID: A way around the software rendering problem, without having to change the spec much at all, would just be to encourage browser makers to include the renderer type in the WebGL RENDERER string. 
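(A minimal sketch of the check this would enable, assuming a browser did append such a marker; the exact substring is an assumption and depends on what the browser actually reports, as in the example that follows.)

    var gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");
    if (gl && /SwiftShader/i.test(gl.getParameter(gl.RENDERER))) {
      // Software-rendered WebGL detected: fall back to the Canvas 2D renderer.
    }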
For example I believe Chrome's WebGL returns something like "WebKit WebGL", but if it returned "WebKit WebGL (SwiftShader)" when rendering with SwiftShader then software rendering could be detected accurately and any fallbacks implemented. This avoids having to define "slow", "software rendering" or "fallback". Presumably a similar thing could be done for any future browsers that adopt a software WebGL renderer. That will allow us to return to a situation where we can get a WebGL context we are confident is hardware accelerated, or choose to keep using a software rendered one, or fall back to canvas 2D. Not much to do with the general device advisories suggestion, but I think it solves that particular point well. Ashley On 9 May 2012 13:51, Thor Harald Johansen wrote: > How about adding a flag (false by default) that allows an acquired >>> context to be queried and configured before activation? >>> >> >> That is what the {async:true} flag is going to be, once we have specified >> and implemented async context creation. >> > > Ah! You're referring to what I'd call a "non-blocking" call, then? With > things like node.js being all the rage, "asynchronous" has kind of come to > mean "event callback" to many developers. > > Even "non-blocking" isn't a very good term. The main benefit here is not > the speed of the call, but the delayed resource creation that allows for > additional setup. I am struggling to find a good word for the concept, > really. It's not really asynchronous if it synchronizes with the first call > that requires hardware resources, is it... > > Still, is there actually a need for an explicit flag? Neither the > programmer nor the end user is going to see a visual difference between "no > resources allocated yet" vs "nothing rendered yet". My original point was > that it seems entirely possible to allow the following: > > > var gl = canvas.getContext("webgl"); > var caps = gl.getDeviceCaps(); > gl.enableSomeCap(); > > var buffer = gl.createBuffer(); // HW context auto-created here > ... > ... > > ...and then drop passing flags into getContext() altogether? > > Thor > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Wed May 9 08:10:59 2012 From: bja...@ (Benoit Jacob) Date: Wed, 9 May 2012 08:10:59 -0700 (PDT) Subject: [Public WebGL] WEBGL_compressed_texture_s3tc is now enabled in Firefox Nightly In-Reply-To: Message-ID: <1872660857.4667497.1336576259481.JavaMail.root@mozilla.com> It should be ubiquitous, or near-ubiquitous, on desktop machines. The problem is mobile devices. They support different compressed texture formats (PVRTC, ATC) which we'll have to expose as separate extensions. That's not great, but we didn't see a better solution. Benoit ----- Original Message ----- > Any ideas how widely supported this extension is on desktop machines? > Ashley > On 9 May 2012 15:57, Jon Buckley < jon...@ > wrote: > > Hello everyone, I'm happy to announce that the > > WEBGL_compressed_texture_s3tc extension has been enabled in today's > > Firefox Nightly. If you would like to try it out, please download > > at > > http://nightly.mozilla.org . As it's still a draft extension, it's > > vendor-prefixed with MOZ_, so you need to use the string > > "MOZ_WEBGL_compressed_texture_s3tc" to enable it in your code. 
> > > Here are some links to test it out: > > > * Conformance tests - > > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/conformance/extensions/webgl-compressed-texture-s3tc.html > > > * Demo that Brandon Jones did for WebGL Camp #4 - > > http://media.tojicode.com/webgl-samples/dds.html > > > * WebGL Texture Loader library that supports DDS textures also by > > Brandon Jones - https://github.com/toji/webgl-texture-utils > > > If you find any bugs in our implementation, please file a bug at > > https://bugzilla.mozilla.org/enter_bug.cgi?product=core&component=Canvas%3A%20WebGL&cc=jon...@ > > . > > > Thanks, Jon > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Wed May 9 08:54:17 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo5YukKQ==?=) Date: Wed, 9 May 2012 08:54:17 -0700 Subject: [Public WebGL] getContextRaw In-Reply-To: References: Message-ID: I have a feeling that is something inserted by the WebGL Inspector extension. It's not part of Chrome -gregg On Wed, May 9, 2012 at 6:25 AM, Rico P. wrote: > > I just found in Chrome 18 a function in the canvas element called > getContextRaw analog to getContext. It behaves the same as the regular > getContext. I couldn't find any useful information to this method, any > idea what this method does? > > - Rico > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From toj...@ Wed May 9 09:13:33 2012 From: toj...@ (Brandon Jones) Date: Wed, 9 May 2012 09:13:33 -0700 Subject: [Public WebGL] getContextRaw In-Reply-To: References: Message-ID: You're probably correct. It's the original getContext that WebGL Inspector caches so it can pass along calls from it's wrapped version: https://github.com/benvanik/WebGL-Inspector/blob/master/core/extensions/chrome/contentscript.js#L166 On Wed, May 9, 2012 at 8:54 AM, Gregg Tavares (?) wrote: > I have a feeling that is something inserted by the WebGL Inspector > extension. > > It's not part of Chrome > > -gregg > > > On Wed, May 9, 2012 at 6:25 AM, Rico P. wrote: > >> >> I just found in Chrome 18 a function in the canvas element called >> getContextRaw analog to getContext. It behaves the same as the regular >> getContext. I couldn't find any useful information to this method, any >> idea what this method does? >> >> - Rico >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ilm...@ Wed May 9 12:03:32 2012 From: ilm...@ (Ilmari Heikkinen) Date: Wed, 9 May 2012 20:03:32 +0100 Subject: [Public WebGL] Latency issues, ideas for next WebGL revision In-Reply-To: <4FA9D098.3060409@thj.no> References: <4FA9D098.3060409@thj.no> Message-ID: 2012/5/9 Thor Harald Johansen : > > Sketcher supports pressure sensitive graphic tablets through the JTablet API > that was developed by my good friend Marcello Bastea-Forte who currently > works for Apple. 
Wacom has developed a browser plugin that exposes a tablet > API to JavaScript, and I plan to use this for my port of Sketcher. > > There is a persistent pattern here, of vendors not bothering to add support > for digitizer tablets to their platform APIs, despite these having been > popular since the 1970s. Who to bug about this... Reading through some years-old bugs: on Mozilla-side I guess it's Robert O'Callahan, on Webkit Oliver Hunt. Implementation on Linux GTK is easy (toggle a flag, add fields to mouse events, took me a few days on Firefox), on Mac it's supposedly almost as easy, on Windows you would need to go through third-party driver APIs, chiefly Wintab. https://bugs.webkit.org/show_bug.cgi?id=20458 I don't know how to get more traction for this though :/ Ilmari ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From thj...@ Wed May 9 12:54:30 2012 From: thj...@ (Thor Harald Johansen) Date: Wed, 09 May 2012 21:54:30 +0200 Subject: [Public WebGL] Latency issues, ideas for next WebGL revision In-Reply-To: References: <4FA9D098.3060409@thj.no> Message-ID: <4FAACB76.4030509@thj.no> > Who to bug about this... Reading through some years-old bugs: on > Mozilla-side I guess it's Robert O'Callahan, on Webkit Oliver Hunt. > Implementation on Linux GTK is easy (toggle a flag, add fields to > mouse events, took me a few days on Firefox), on Mac it's supposedly > almost as easy, on Windows you would need to go through third-party > driver APIs, chiefly Wintab. > https://bugs.webkit.org/show_bug.cgi?id=20458 > > I don't know how to get more traction for this though :/ Yeah, I just want to point out some issues with Oliver Hunt's remarks on that bug report: "There are multiple halfway there APIs. The most complete is the WinTab API, but afaict that's not actually part of windows, but that's okay because it also doesn't interact with the event queue." It doesn't need to be part of Windows any more than OpenGL needs to be. WinTab is pretty much the OpenGL of tablets in terms of driver support. Photoshop and most other applications simply look for the WINTAB32.DLL library and link against it. It's basically the industry standard for tablets. "In all honesty I'm leaning towards supporting pressure information on Mac and Gtk, seriously trying to support this on windows seems to be an exercise in futility." The problem he's talking about is basically that, while WinTab can deliver its messages to the Windows message loop (and also sports a polling API), these events are delivered in parallel with the system mouse events. This happens because most tablet drivers will grab and control the Windows mouse cursor whenever the stylus is held near the tablet. I don't think there's room in the Windows message structure for all the required data. The more expensive tablets support tilt, rotation and other telemetry in addition the high-res position and stylus pressure. This of course complicates matters, but it's far from being as hopeless as Oliver portrays it. Mouse and tablet events arrive together in the Windows message queue, so it's easy to just assign the data from the previous WT_PACKET tablet event to the current mouse event. wParam contains the serial number of the WinTab packet, which can be retrieved with WTPacket(). 
WinTab will then discard any prior, unretrieved packets. Most tablets also have DPI resolutions more on par with high end printers than consumer monitors, and the WinTab API will provide high resolution data to that end. In a browser API, one could either support floating point mouse positions through the regular DOM fields (and break a lot of JavaScript code in the process) or add extra high resolution fields just for tablets. Thor ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kos...@ Wed May 9 13:02:53 2012 From: kos...@ (David Sheets) Date: Wed, 9 May 2012 13:02:53 -0700 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: <4FAA6834.4000002@thj.no> References: <1005966694.4458781.1336565008176.JavaMail.root@mozilla.com> <4FAA6834.4000002@thj.no> Message-ID: On Wed, May 9, 2012 at 5:51 AM, Thor Harald Johansen wrote: >>> How about adding a flag (false by default) that allows an acquired >>> context to be queried and configured before activation? >> >> >> That is what the {async:true} flag is going to be, once we have specified >> and implemented async context creation. > > > Ah! You're referring to what I'd call a "non-blocking" call, then? With > things like node.js being all the rage, "asynchronous" has kind of come to > mean "event callback" to many developers. > > Even "non-blocking" isn't a very good term. The main benefit here is not the > speed of the call, but the delayed resource creation that allows for > additional setup. I am struggling to find a good word for the concept, > really. It's not really asynchronous if it synchronizes with the first call > that requires hardware resources, is it... "Lazy"? > Still, is there actually a need for an explicit flag? Neither the programmer > nor the end user is going to see a visual difference between "no resources > allocated yet" vs "nothing rendered yet". My original point was that it > seems entirely possible to allow the following: > > > var gl = canvas.getContext("webgl"); > var caps = gl.getDeviceCaps(); > gl.enableSomeCap(); > > var buffer = gl.createBuffer(); // HW context auto-created here > ... > ... > > ...and then drop passing flags into getContext() altogether? > > Thor ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Wed May 9 13:45:25 2012 From: kbr...@ (Kenneth Russell) Date: Wed, 9 May 2012 13:45:25 -0700 Subject: [Public WebGL] Latency issues, ideas for next WebGL revision In-Reply-To: <4FAACB76.4030509@thj.no> References: <4FA9D098.3060409@thj.no> <4FAACB76.4030509@thj.no> Message-ID: On Wed, May 9, 2012 at 12:54 PM, Thor Harald Johansen wrote: > >> Who to bug about this... Reading through some years-old bugs: on >> Mozilla-side I guess it's Robert O'Callahan, on Webkit Oliver Hunt. >> Implementation on Linux GTK is easy (toggle a flag, add fields to >> mouse events, took me a few days on Firefox), on Mac it's supposedly >> almost as easy, on Windows you would need to go through third-party >> driver APIs, chiefly Wintab. 
>> https://bugs.webkit.org/show_bug.cgi?id=20458 >> >> I don't know how to get more traction for this though :/ > > > Yeah, I just want to point out some issues with Oliver Hunt's remarks on > that bug report: > > "There are multiple halfway there APIs. ?The most complete is the WinTab > API, but afaict that's not actually part of windows, but that's okay because > it also doesn't interact with the event queue." > > It doesn't need to be part of Windows any more than OpenGL needs to be. > WinTab is pretty much the OpenGL of tablets in terms of driver support. > Photoshop and most other applications simply look for the WINTAB32.DLL > library and link against it. It's basically the industry standard for > tablets. > > "In all honesty I'm leaning towards supporting pressure information on Mac > and Gtk, seriously trying to support this on windows seems to be an exercise > in futility." > > The problem he's talking about is basically that, while WinTab can deliver > its messages to the Windows message loop (and also sports a polling API), > these events are delivered in parallel with the system mouse events. > > This happens because most tablet drivers will grab and control the Windows > mouse cursor whenever the stylus is held near the tablet. I don't think > there's room in the Windows message structure for all the required data. The > more expensive tablets support tilt, rotation and other telemetry in > addition the high-res position and stylus pressure. > > This of course complicates matters, but it's far from being as hopeless as > Oliver portrays it. Mouse and tablet events arrive together in the Windows > message queue, so it's easy to just assign the data from the previous > WT_PACKET tablet event to the current mouse event. wParam contains the > serial number of the WinTab packet, which can be retrieved with WTPacket(). > WinTab will then discard any prior, unretrieved packets. > > Most tablets also have DPI resolutions more on par with high end printers > than consumer monitors, and the WinTab API will provide high resolution data > to that end. In a browser API, one could either support floating point mouse > positions through the regular DOM fields (and break a lot of JavaScript code > in the process) or add extra high resolution fields just for tablets. It would be great to bring your app to the web via WebGL. Your first email raises several issues. In order to make progress on them, let's separate them out. - Input latency: browser vendors are definitely aware that lower latency for input events is needed. However, more test cases are needed in order to make progress on this issue. If you could make a preliminary version of your app available for testing, or come up with a self-contained test case that illustrates when input latency is too high and when it's acceptable, that would be helpful. I am optimistic that input latency can be reduced to acceptable levels without providing application control to disable vsync, which would introduce tearing on animated web pages and result in a poor user experience. Chrome does no triple buffering, but the input and rendering pipeline is fairly deep; three OS processes are involved. - You can disable WebGL's built-in antialiasing by calling getContext("experimental-webgl", { antialias: false }). Anisotropic texture filtering is not enabled by default; it is being specified as an extension at http://www.khronos.org/registry/webgl/extensions/ . 
- On any reasonably modern hardware it should not be necessary to special-case WebGL's rendering to the screen. Specifying { alpha: false } in the context creation attributes should be sufficient to reduce the alpha blending overhead for the compositing of the WebGL canvas in the page. - Adding support for tablets' auxiliary information is a larger discussion that would have to occur with the appropriate web working groups -- perhaps public-webapps or whatwg. I think the best way to make progress on this issue would be to do a preliminary port of your app using whatever mechanisms you need, such as Wacom's browser plugin. Then we can collectively build prototypes in Firefox, WebKit, etc. of cross-browser APIs exposing the additional information you need -- tilt, pressure, etc. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From mai...@ Wed May 9 14:43:57 2012 From: mai...@ (Rico P.) Date: Wed, 9 May 2012 23:43:57 +0200 Subject: [Public WebGL] getContextRaw In-Reply-To: References: Message-ID: Ah, ok,I was totally confused. - Rico 2012/5/9 Brandon Jones : > You're probably correct. It's the original getContext that WebGL Inspector > caches so it can pass along calls from it's wrapped version: > > https://github.com/benvanik/WebGL-Inspector/blob/master/core/extensions/chrome/contentscript.js#L166 > > > On Wed, May 9, 2012 at 8:54 AM, Gregg Tavares (?) wrote: >> >> I have a feeling that is something inserted by the WebGL Inspector >> extension. >> >> It's not part of Chrome >> >> -gregg >> >> >> On Wed, May 9, 2012 at 6:25 AM, Rico P. wrote: >>> >>> >>> I just found in Chrome 18 a function in the canvas element called >>> getContextRaw analog to getContext. It behaves the same as the regular >>> getContext. I couldn't find any useful information to this method, any >>> idea what this method does? >>> >>> - Rico >>> >>> ----------------------------------------------------------- >>> You are currently subscribed to public_webgl...@ >>> To unsubscribe, send an email to majordomo...@ with >>> the following command in the body of your email: >>> unsubscribe public_webgl >>> ----------------------------------------------------------- >>> >> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gma...@ Wed May 9 14:54:46 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo5YukKQ==?=) Date: Wed, 9 May 2012 14:54:46 -0700 Subject: [Public WebGL] Moving WebGL Khronos related stuff to GitHub Message-ID: There's been movement to move the WebGL Khronos related repository from svn at cvs.khronos.org to git at github within a few weeks. Are there any concerns or issues anyone has with that? I think the current plan is to completely deprecate the old repo. That means links in the Wiki etc will need to be updated to some github URL. Comments? -gregg -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bja...@ Wed May 9 15:09:41 2012 From: bja...@ (Benoit Jacob) Date: Wed, 9 May 2012 15:09:41 -0700 (PDT) Subject: [Public WebGL] Moving WebGL Khronos related stuff to GitHub In-Reply-To: Message-ID: <1347907949.5360587.1336601381144.JavaMail.root@mozilla.com> ----- Original Message ----- > There's been movement to move the WebGL Khronos related repository > from svn at cvs.khronos.org to git at github within a few weeks. > Are there any concerns or issues anyone has with that? Great, no concern. > I think the current plan is to completely deprecate the old repo. > That means links in the Wiki etc will need to be updated to some > github URL. Is it possible as a temporary solution to have redirects or keep the svn repo with very basic html pages telling visitors that content has moved? Benoit > Comments? > -gregg -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Wed May 9 15:13:22 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo5YukKQ==?=) Date: Wed, 9 May 2012 15:13:22 -0700 Subject: [Public WebGL] Moving WebGL Khronos related stuff to GitHub In-Reply-To: <1347907949.5360587.1336601381144.JavaMail.root@mozilla.com> References: <1347907949.5360587.1336601381144.JavaMail.root@mozilla.com> Message-ID: +Khronos webmaster. On Wed, May 9, 2012 at 3:09 PM, Benoit Jacob wrote: > > > ------------------------------ > > There's been movement to move the WebGL Khronos related repository from > svn at cvs.khronos.org to git at github within a few weeks. > > Are there any concerns or issues anyone has with that? > > Great, no concern. > > > I think the current plan is to completely deprecate the old repo. That > means links in the Wiki etc will need to be updated to some github URL. > > Is it possible as a temporary solution to have redirects or keep the svn > repo with very basic html pages telling visitors that content has moved? > I'm guessing a simple .htaccess mod_rewrite thing at the root might handle the entire tree. Otherwise, I'm happy to check in .html subfiles in the old repo that re-direct to the new repo for every existing .html file > > Benoit > > > Comments? > > -gregg > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kos...@ Wed May 9 15:21:52 2012 From: kos...@ (David Sheets) Date: Wed, 9 May 2012 15:21:52 -0700 Subject: [Public WebGL] Moving WebGL Khronos related stuff to GitHub In-Reply-To: References: Message-ID: On Wed, May 9, 2012 at 2:54 PM, Gregg Tavares (?) wrote: > There's been movement to move the WebGL Khronos related repository from svn > at?cvs.khronos.org to git at github within a few weeks. > > Are there any concerns or issues anyone has with that? This is truly wonderful news! Thank you to all internal champions of distributed version control and easier collaboration. I wonder if Khronos should continue to host a canonical repository tied to GitHub via a post-commit hook? It seems like khronos.org should continue to be the authority for the official WebGL repository. That way, Khronos retains control of their hosting, URLs, etc without a required dependency on an external company. Has this been discussed internally? I really like GitHub and I support collaboration via the site but one of the beautiful features of DVCS is independence of domain. What happens in a decade when GitHub gets acquired or shut-down or morphed in some other unacceptable way? Hooray! David Sheets > I think the current plan is to completely deprecate the old repo. 
That means > links in the Wiki etc will need to be updated to some github URL. > > Comments? > > -gregg > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From web...@ Wed May 9 15:32:29 2012 From: web...@ (James Riordon) Date: Wed, 9 May 2012 18:32:29 -0400 Subject: [Public WebGL] Moving WebGL Khronos related stuff to GitHub In-Reply-To: References: <1347907949.5360587.1336601381144.JavaMail.root@mozilla.com> Message-ID: On 2012-05-09, at 6:13 PM, Gregg Tavares (?) wrote: > +Khronos webmaster. > > On Wed, May 9, 2012 at 3:09 PM, Benoit Jacob wrote: > > Is it possible as a temporary solution to have redirects or keep the svn repo with very basic html pages telling visitors that content has moved? > > I'm guessing a simple .htaccess mod_rewrite thing at the root might handle the entire tree. Otherwise, I'm happy to check in .html subfiles in the old repo that re-direct to the new repo for every existing .html file I believe .htaccess won't work in subversion, so updating the html files would be required. If you can confirm the exact svn tree you want to move, I can run a test conversion and import to a Github private repository for verification and cleanup, if need be. Thanks. - James -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Wed May 9 15:43:33 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo5YukKQ==?=) Date: Wed, 9 May 2012 15:43:33 -0700 Subject: [Public WebGL] Moving WebGL Khronos related stuff to GitHub In-Reply-To: References: Message-ID: On Wed, May 9, 2012 at 3:21 PM, David Sheets wrote: > On Wed, May 9, 2012 at 2:54 PM, Gregg Tavares (?) wrote: > > There's been movement to move the WebGL Khronos related repository from > svn > > at cvs.khronos.org to git at github within a few weeks. > > > > Are there any concerns or issues anyone has with that? > > This is truly wonderful news! Thank you to all internal champions of > distributed version control and easier collaboration. > > I wonder if Khronos should continue to host a canonical repository > tied to GitHub via a post-commit hook? > > It seems like khronos.org should continue to be the authority for the > official WebGL repository. That way, Khronos retains control of their > hosting, URLs, etc without a required dependency on an external > company. Has this been discussed internally? > > I really like GitHub and I support collaboration via the site but one > of the beautiful features of DVCS is independence of domain. What > happens in a decade when GitHub gets acquired or shut-down or morphed > in some other unacceptable way? > I think the thinking is it's a DVCS. If github goes away the official repo just gets moved somewhere else. Possibly back to khronos. What do think? What would your perfect setup be? > > Hooray! > > David Sheets > > > I think the current plan is to completely deprecate the old repo. That > means > > links in the Wiki etc will need to be updated to some github URL. > > > > Comments? > > > > -gregg > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kbr...@ Wed May 9 17:54:17 2012 From: kbr...@ (Kenneth Russell) Date: Wed, 9 May 2012 17:54:17 -0700 Subject: [Public WebGL] Moving WebGL Khronos related stuff to GitHub In-Reply-To: References: <1347907949.5360587.1336601381144.JavaMail.root@mozilla.com> Message-ID: On Wed, May 9, 2012 at 3:21 PM, David Sheets wrote: > I wonder if Khronos should continue to host a canonical repository > tied to GitHub via a post-commit hook? > > It seems like khronos.org should continue to be the authority for the > official WebGL repository. That way, Khronos retains control of their > hosting, URLs, etc without a required dependency on an external > company. Has this been discussed internally? The thinking has been to preserve the links under http://www.khronos.org/registry/webgl/ . Currently, these are checked out from the Subversion repository every 15 minutes or so. It should be straightforward to update this script to pull from the github repo instead. However, the thinking has also been to completely remove everything under https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/ . This is a direct link into the Subversion repository, and if this repo is preserved, I think it would be confusing to understand the canonical location of the repository. See below however; perhaps we could keep these links working via redirects to www.khronos.org. Suggestions welcome. On Wed, May 9, 2012 at 3:32 PM, James Riordon wrote: > I believe .htaccess won't work in subversion, so updating the html files > would be required. Would it be possible to override the handling of all URLs under https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/ and have them always redirect to www.khronos.org/registry/webgl/ ? > If you can confirm the exact svn tree you want to move, I can run a test > conversion and import to a Github private repository for verification and > cleanup, if need be. Thanks. The tree we'd like to move is https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl It would be great if we could test the conversion first. Thanks very much for your help on this. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From cvi...@ Wed May 9 18:21:52 2012 From: cvi...@ (Cedric Vivier) Date: Thu, 10 May 2012 09:21:52 +0800 Subject: [Public WebGL] Moving WebGL Khronos related stuff to GitHub In-Reply-To: References: <1347907949.5360587.1336601381144.JavaMail.root@mozilla.com> Message-ID: On Thu, May 10, 2012 at 8:54 AM, Kenneth Russell wrote: > The tree we'd like to move is > > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl > > It would be great if we could test the conversion first. There is already a test repo available at https://github.com/neonux/webgl-spec. I propose the following changes to the repo for clean up and better match with the git model : - remove specs/r13689 and specs/r16849 - flatten specs/latest to spec/ (singular, this has always the latest) and migrate specs/1.0 to spec/ in a "1.0.0" branch. - flatten conformance-suites/1.0.1 to conformance-suite/ (singular) and migrate conformance-suites/1.0.0 to conformance-suite/ in the "1.0.0" branch. 
- remove doc/ completely (it is unused and a placeholder for redirection) - remove sdk/demos (they are now quite outdated compared to demos on the web and are mostly unmaintained, maybe we can have these in a different repo webgl-demos) The development would happen directly in spec/ and conformance-suite/ instead of spec/latest and sdk/tests. Versioning would be managed as branches (github has nice web tools to view a particular branch and view diffs between them) Thoughts? Regards, ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kos...@ Wed May 9 19:40:33 2012 From: kos...@ (David Sheets) Date: Wed, 9 May 2012 19:40:33 -0700 Subject: [Public WebGL] Moving WebGL Khronos related stuff to GitHub In-Reply-To: References: <1347907949.5360587.1336601381144.JavaMail.root@mozilla.com> Message-ID: On Wed, May 9, 2012 at 6:21 PM, Cedric Vivier wrote: > On Thu, May 10, 2012 at 8:54 AM, Kenneth Russell wrote: >> The tree we'd like to move is >> >> https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl >> >> It would be great if we could test the conversion first. > > There is already a test repo available at https://github.com/neonux/webgl-spec. > > I propose the following changes to the repo for clean up and better > match with the git model : > > - remove specs/r13689 and specs/r16849 > - flatten specs/latest to spec/ (singular, this has always the latest) > and migrate specs/1.0 to spec/ in a "1.0.0" branch. > - flatten conformance-suites/1.0.1 to conformance-suite/ (singular) > and migrate conformance-suites/1.0.0 to conformance-suite/ in the > "1.0.0" branch. If each version is moved into a separate branch, the HTTP POST agent on khronos.org that receives post-commit messages will need to perform "git pull" against the various version branches to continue serving individual spec and test suite versions (which I believe is necessary for reference purposes and link maintenance). I don't think this is a big deal but it will make the post-commit mechanism more complicated than simply "if (HTTP POST from allowed_ips[]) then git pull" (mirroring). > - remove doc/ completely (it is unused and a placeholder for redirection) > - remove sdk/demos (they are now quite outdated compared to demos on > the web and are mostly unmaintained, maybe we can have these in a > different repo webgl-demos) > > The development would happen directly in spec/ and conformance-suite/ > instead of spec/latest and sdk/tests. > Versioning would be managed as branches (github has nice web tools to > view a particular branch and view diffs between them) > > Thoughts? 
> > Regards, ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From cvi...@ Wed May 9 19:54:42 2012 From: cvi...@ (Cedric Vivier) Date: Thu, 10 May 2012 10:54:42 +0800 Subject: [Public WebGL] Moving WebGL Khronos related stuff to GitHub In-Reply-To: References: <1347907949.5360587.1336601381144.JavaMail.root@mozilla.com> Message-ID: On Thu, May 10, 2012 at 10:40 AM, David Sheets wrote: > If each version is moved into a separate branch, the HTTP POST agent > on khronos.org that receives post-commit messages will need to perform > "git pull" against the various version branches to continue serving > individual spec and test suite versions (which I believe is necessary > for reference purposes and link maintenance). I don't think this is a > big deal but it will make the post-commit mechanism more complicated > than simply "if (HTTP POST from allowed_ips[]) then git pull" > (mirroring). Yes, not a big deal imho to rather "git fetch" then loop all version branches (v*) to generate the reference snapshots on khronos.org. Otoh this would simplify the repository model for day-to-day development and allow nice git(hub)-based diff'ing between snapshots. Regards, ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From cal...@ Wed May 9 20:29:36 2012 From: cal...@ (Mark Callow) Date: Thu, 10 May 2012 12:29:36 +0900 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: References: <333299583.695193.1335973629092.JavaMail.root@mozilla.com> Message-ID: <4FAB3620.3030204@hicorp.co.jp> On 12/05/03 3:53, Ashley Gullen wrote: > I think this is a great idea and I'm desperate for something like > this. Our engine implements both a WebGL and Canvas 2D renderer, and > currently the Canvas 2D renderer is never used in Chrome 18 due to > Swiftshader. I am keen to fall back to Canvas 2D instead of using > Swiftshader but there is no way to do that. Why should you expect the browser's Canvas 2D implementation to perform better than Swiftshader, either power- or speed-wise? Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben...@ Wed May 9 23:17:48 2012 From: ben...@ (Ben Vanik) Date: Thu, 10 May 2012 15:17:48 +0900 Subject: [Public WebGL] getContextRaw In-Reply-To: References: Message-ID: Yep :) It can be used to get the original, unhooked WebGLRenderingContext On Thu, May 10, 2012 at 6:43 AM, Rico P. wrote: > > Ah, ok,I was totally confused. > > - Rico > > 2012/5/9 Brandon Jones : > > You're probably correct. It's the original getContext that WebGL > Inspector > > caches so it can pass along calls from it's wrapped version: > > > > > https://github.com/benvanik/WebGL-Inspector/blob/master/core/extensions/chrome/contentscript.js#L166 > > > > > > On Wed, May 9, 2012 at 8:54 AM, Gregg Tavares (?) > wrote: > >> > >> I have a feeling that is something inserted by the WebGL Inspector > >> extension. > >> > >> It's not part of Chrome > >> > >> -gregg > >> > >> > >> On Wed, May 9, 2012 at 6:25 AM, Rico P. 
> wrote: > >>> > >>> > >>> I just found in Chrome 18 a function in the canvas element called > >>> getContextRaw analog to getContext. It behaves the same as the regular > >>> getContext. I couldn't find any useful information to this method, any > >>> idea what this method does? > >>> > >>> - Rico > >>> > >>> ----------------------------------------------------------- > >>> You are currently subscribed to public_webgl...@ > >>> To unsubscribe, send an email to majordomo...@ with > >>> the following command in the body of your email: > >>> unsubscribe public_webgl > >>> ----------------------------------------------------------- > >>> > >> > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Thu May 10 01:01:09 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo5YukKQ==?=) Date: Thu, 10 May 2012 01:01:09 -0700 Subject: [Public WebGL] getXXXParameter issues Message-ID: I updated gl-object-get-calls to check for invalid enums and couple of issues came up. 1) The WebGL spec says if the parameter is invalid then 'null' is returned. specifically not 'undefined'. Some browsers don't appear to be returning 'null' so I thought I should double check. 2) For pretty much all these functions the WebGL spec only defines what is returned when the 'pname' argument is invalid. For some functions like getTexParameter(target, pname) getBufferParameter(target, pname) getRenderbufferParameter(target, pname); It doesn't say specifically what they return when the 'target' argument is invalid. For now I'm assuming they should return 'null' in those cases too but I just thought I'd bring it up. The spec probably needs to be updated. Similarly getFramebufferAttachmentParameter(target, attachment, pname) Also has an 'attachment' parameter that can be invalid. getVertexAttrib(index, pname); has an index parameter that can be invalid. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Thu May 10 06:10:54 2012 From: bja...@ (Benoit Jacob) Date: Thu, 10 May 2012 06:10:54 -0700 (PDT) Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: <4FAB3620.3030204@hicorp.co.jp> Message-ID: <1301932845.6052412.1336655454521.JavaMail.root@mozilla.com> ----- Original Message ----- > On 12/05/03 3:53, Ashley Gullen wrote: > > I think this is a great idea and I'm desperate for something like > > this. Our engine implements both a WebGL and Canvas 2D renderer, > > and > > currently the Canvas 2D renderer is never used in Chrome 18 due to > > Swiftshader. I am keen to fall back to Canvas 2D instead of using > > Swiftshader but there is no way to do that. > > Why should you expect the browser's Canvas 2D implementation to > perform better than Swiftshader, either power- or speed-wise? Depending on the use case, that can be plausible. The idea is that Canvas 2D has a much more focused feature set than WebGL, so more specialized code can be used. Suppose for example that you are developing a mathematical curve plotter that may use WebGL. 
Then a Canvas 2D version will probably perform at least as well as a WebGL-software-renderer version, and the Canvas2D version will have guaranteed perfect per-primitive anti-aliasing while the WebGL-software-renderer version will typically have either no antialiasing or 2x2 MSAA. Benoit > Regards > -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Thu May 10 06:12:38 2012 From: bja...@ (Benoit Jacob) Date: Thu, 10 May 2012 06:12:38 -0700 (PDT) Subject: [Public WebGL] getXXXParameter issues In-Reply-To: Message-ID: <1661976468.6052508.1336655558383.JavaMail.root@mozilla.com> Many thanks for all this work on tests! ----- Original Message ----- > I updated gl-object-get-calls to check for invalid enums and couple > of issues came up. > 1) The WebGL spec says if the parameter is invalid then 'null' is > returned. > specifically not 'undefined'. Some browsers don't appear to be > returning 'null' so I thought I should double check. > 2) For pretty much all these functions the WebGL spec only defines > what is returned when the 'pname' argument is invalid. > For some functions like > getTexParameter(target, pname) > getBufferParameter(target, pname) > getRenderbufferParameter(target, pname); > It doesn't say specifically what they return when the 'target' > argument is invalid. For now I'm assuming they > should return 'null' in those cases too but I just thought I'd bring > it up. The spec probably needs to be updated. > Similarly > getFramebufferAttachmentParameter(target, attachment, pname) > Also has an 'attachment' parameter that can be invalid. > getVertexAttrib(index, pname); > has an index parameter that can be invalid. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Thu May 10 06:18:53 2012 From: bja...@ (Benoit Jacob) Date: Thu, 10 May 2012 06:18:53 -0700 (PDT) Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: <1301932845.6052412.1336655454521.JavaMail.root@mozilla.com> Message-ID: <2130091751.6061108.1336655933993.JavaMail.root@mozilla.com> ----- Original Message ----- > ----- Original Message ----- > > On 12/05/03 3:53, Ashley Gullen wrote: > > > > I think this is a great idea and I'm desperate for something like > > > this. Our engine implements both a WebGL and Canvas 2D renderer, > > > and > > > currently the Canvas 2D renderer is never used in Chrome 18 due > > > to > > > Swiftshader. I am keen to fall back to Canvas 2D instead of using > > > Swiftshader but there is no way to do that. > > > > > Why should you expect the browser's Canvas 2D implementation to > > perform better than Swiftshader, either power- or speed-wise? > > Depending on the use case, that can be plausible. The idea is that > Canvas 2D has a much more focused feature set than WebGL, so more > specialized code can be used. > Suppose for example that you are developing a mathematical curve > plotter that may use WebGL. Then a Canvas 2D version will probably > perform at least as well as a WebGL-software-renderer version, and > the Canvas2D version will have guaranteed perfect per-primitive > anti-aliasing while the WebGL-software-renderer version will > typically have either no antialiasing or 2x2 MSAA. Actually, in the WebGL version, if you want to be able to draw curves with arbitrary thickness regardless of driver capabilities, you'll have ( I guess) to draw rectangles instead of lines, adding significant overhead on a software renderer. 
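The rectangles-instead-of-lines approach can be made concrete. A minimal sketch, assuming 2D pixel-space endpoints; the helper name is an illustration, not code from the thread:
// Expand a line segment into a quad (two triangles) of arbitrary thickness,
// since wide native lines are not guaranteed by drivers.
function thickLineVertices(x0, y0, x1, y1, thickness) {
  var dx = x1 - x0, dy = y1 - y0;
  var len = Math.sqrt(dx * dx + dy * dy) || 1.0;
  // Unit normal to the segment, scaled to half the thickness.
  var nx = (-dy / len) * thickness * 0.5;
  var ny = ( dx / len) * thickness * 0.5;
  return new Float32Array([
    x0 + nx, y0 + ny,  x0 - nx, y0 - ny,  x1 + nx, y1 + ny,
    x1 + nx, y1 + ny,  x0 - nx, y0 - ny,  x1 - nx, y1 - ny
  ]);
}
// Usage: gl.bufferData(gl.ARRAY_BUFFER, thickLineVertices(10, 10, 200, 120, 4), gl.STATIC_DRAW);
//        gl.drawArrays(gl.TRIANGLES, 0, 6);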
Also, in the Canvas2D version, the browser can draw a whole spline at once, and will interpolate splines, while WebGL only draws triangles and the rest of the work has to be done by the script. Benoit > Benoit > > Regards > > > -Mark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ash...@ Thu May 10 06:41:07 2012 From: ash...@ (Ashley Gullen) Date: Thu, 10 May 2012 14:41:07 +0100 Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: <4FAB3620.3030204@hicorp.co.jp> References: <333299583.695193.1335973629092.JavaMail.root@mozilla.com> <4FAB3620.3030204@hicorp.co.jp> Message-ID: On 10 May 2012 04:29, Mark Callow wrote: > > On 12/05/03 3:53, Ashley Gullen wrote: > > I think this is a great idea and I'm desperate for something like this. > Our engine implements both a WebGL and Canvas 2D renderer, and currently > the Canvas 2D renderer is never used in Chrome 18 due to Swiftshader. I am > keen to fall back to Canvas 2D instead of using Swiftshader but there is no > way to do that. > > Why should you expect the browser's Canvas 2D implementation to perform > better than Swiftshader, either power- or speed-wise? > The main case would be the Canvas 2D is hardware-accelerated, but WebGL is not (e.g. using SwiftShader) due to the driver being blacklisted. Ashley -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Thu May 10 06:56:50 2012 From: bja...@ (Benoit Jacob) Date: Thu, 10 May 2012 06:56:50 -0700 (PDT) Subject: [Public WebGL] another proposal to temporarily expose some device/driver information only when really needed In-Reply-To: Message-ID: <955566167.6065066.1336658210970.JavaMail.root@mozilla.com> ----- Original Message ----- > On 10 May 2012 04:29, Mark Callow < callow_mark...@ > wrote: > > On 12/05/03 3:53, Ashley Gullen wrote: > > > > I think this is a great idea and I'm desperate for something like > > > this. Our engine implements both a WebGL and Canvas 2D renderer, > > > and > > > currently the Canvas 2D renderer is never used in Chrome 18 due > > > to > > > Swiftshader. I am keen to fall back to Canvas 2D instead of using > > > Swiftshader but there is no way to do that. > > > > > Why should you expect the browser's Canvas 2D implementation to > > perform better than Swiftshader, either power- or speed-wise? > > The main case would be the Canvas 2D is hardware-accelerated, but > WebGL is not (e.g. using SwiftShader) due to the driver being > blacklisted. That can happen in some rare cases, but in most cases, when the driver is blacklisted for WebGL, it also is blacklisted for any kind of content or compositing acceleration. That's why in my replies I focused on explaining why non-accelerated Canvas 2D could, for certain applications, be better than non-accelerated WebGL. Benoit > Ashley -------------- next part -------------- An HTML attachment was scrubbed... URL: From thj...@ Fri May 11 00:18:43 2012 From: thj...@ (Thor Harald Johansen) Date: Fri, 11 May 2012 09:18:43 +0200 Subject: [Public WebGL] Premultiplied alpha blending and issues with color accuracy Message-ID: <4FACBD53.6030102@thj.no> So I'm working on a prototype of Sketcher, as mentioned in previous posts, and I think I might've run up against a major roadblock when it comes to alpha blending. I'm superimposing many brush images onto a texture. The images are generated by a fragment shader. 
I had my pipeline set up for premultiplied alpha earlier, since this was easy to work with, but with only 8 bits per channel to work with, color accuracy was visibly suffering, so I switched to unpremultiplied alpha, and saw immediate improvement. I now run into a problem, and it's a classic one: I cannot find a way of making blendFuncSeparate() take both source and destination alpha into account for the color. This is a problem because the color underneath the alpha matte on my destination layer is bleeding through the soft edges of the brush. Some Googling reveals that there is an extension, OES_texture_float, which would provide enough accuracy for premultiplied alpha, but there are some problems with this approach: - not supported on every OpenGL ES implementation - support for float texture FBOs is optional - increased texture footprint Something tells me that it would typically be mobile devices that would not support the extension, and of those that do, most would not support writing to the texture, and I'm guessing that support is not universal even on desktop computers. In the Java version of Sketcher, I did alpha blending as follows: dA = dA + sA * (1 - dA) dC = dC + (sC - dC) * sA / dA These are serial calculations, so dA in the 2nd equation refers to the final destination alpha, i.e. result of the 1st equation. The 1st equation is a sort of cumulative alpha blend which interpolates between the destination and 1.0 using the source, so that compositing 0.5 over 0.5 gets you 0.75. This was easily implemented in WebGL using: glBlendFuncSeparate(?, ?, gl.ONE_MINUS_DST_ALPHA, gl.ONE); The 2nd equation interpolates between the destination and source color using the source alpha, but with the twist of dividing the source alpha by the final destination alpha first. It's been too long since I worked these equations out, so I don't really understand why the 2nd equation works, except that the result never overflows, granted that I use fixed point arithmetic and a special case for divide by 0. It's not critical that the WebGL version of Sketcher mixes colors in the same way, but if I'm going to do this with unpremultiplied alpha, the equation will need to take both source and destination alpha into account to avoid the color bleeding. I tried to look into using multi-textures and a shader to do my own blending, but it would seem that I'm only allowed to use a single texture unit at any given point, and I need 2 to do blending. It would seem that my problem is impossible to solve with the available tools. I hope that someone can prove me wrong. Thor ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From ben...@ Fri May 11 00:30:49 2012 From: ben...@ (Ben Vanik) Date: Fri, 11 May 2012 16:30:49 +0900 Subject: [Public WebGL] Premultiplied alpha blending and issues with color accuracy In-Reply-To: <4FACBD53.6030102@thj.no> References: <4FACBD53.6030102@thj.no> Message-ID: You can use any number of texture units you like (up to MAX_TEXTURE_IMAGE_UNITS). I've been doing image manipulation and have had to do the blending myself in most cases for the same reasons you've described. 
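The two serial equations above translate directly into a fragment shader once the destination layer is available as a texture (via the render-target ping-pong discussed below). A minimal GLSL sketch; the sampler names and the divide-by-zero guard are assumptions:
precision mediump float;
varying vec2 uv;
uniform sampler2D u_dst; // current layer contents, unpremultiplied
uniform sampler2D u_src; // brush image, unpremultiplied
void main() {
  vec4 d = texture2D(u_dst, uv);
  vec4 s = texture2D(u_src, uv);
  // dA = dA + sA * (1 - dA)
  float outA = d.a + s.a * (1.0 - d.a);
  // dC = dC + (sC - dC) * sA / dA, guarded against outA == 0
  vec3 outC = (outA > 0.0) ? (d.rgb + (s.rgb - d.rgb) * (s.a / outA)) : vec3(0.0);
  gl_FragColor = vec4(outC, outA);
}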
It's annoying, as it often requires excessive ping-ponging of render-targets (you cannot read from/write to the same texture) and can slow things down considerably if your pipeline is long. But unfortunately it's the way things are -- unless you write an on-the-fly shader compiler (and deal with all the performance issues that has). Unless someone else proves you and me wrong ;) On Fri, May 11, 2012 at 4:18 PM, Thor Harald Johansen wrote: > > So I'm working on a prototype of Sketcher, as mentioned in previous posts, > and I think I might've run up against a major roadblock when it comes to > alpha blending. > > I'm superimposing many brush images onto a texture. The images are > generated by a fragment shader. I had my pipeline set up for premultiplied > alpha earlier, since this was easy to work with, but with only 8 bits per > channel to work with, color accuracy was visibly suffering, so I switched > to unpremultiplied alpha, and saw immediate improvement. > > I now run into a problem, and it's a classic one: I cannot find a way of > making blendFuncSeparate() take both source and destination alpha into > account for the color. This is a problem because the color underneath the > alpha matte on my destination layer is bleeding through the soft edges of > the brush. > > Some Googling reveals that there is an extension, OES_texture_float, which > would provide enough accuracy for premultiplied alpha, but there are some > problems with this approach: > > - not supported on every OpenGL ES implementation > - support for float texture FBOs is optional > - increased texture footprint > > Something tells me that it would typically be mobile devices that would > not support the extension, and of those that do, most would not support > writing to the texture, and I'm guessing that support is not universal even > on desktop computers. > > In the Java version of Sketcher, I did alpha blending as follows: > > dA = dA + sA * (1 - dA) > dC = dC + (sC - dC) * sA / dA > > These are serial calculations, so dA in the 2nd equation refers to the > final destination alpha, i.e. result of the 1st equation. > > The 1st equation is a sort of cumulative alpha blend which interpolates > between the destination and 1.0 using the source, so that compositing 0.5 > over 0.5 gets you 0.75. > > This was easily implemented in WebGL using: > > glBlendFuncSeparate(?, ?, gl.ONE_MINUS_DST_ALPHA, gl.ONE); > > The 2nd equation interpolates between the destination and source color > using the source alpha, but with the twist of dividing the source alpha by > the final destination alpha first. > > It's been too long since I worked these equations out, so I don't really > understand why the 2nd equation works, except that the result never > overflows, granted that I use fixed point arithmetic and a special case for > divide by 0. > > It's not critical that the WebGL version of Sketcher mixes colors in the > same way, but if I'm going to do this with unpremultiplied alpha, the > equation will need to take both source and destination alpha into account > to avoid the color bleeding. > > I tried to look into using multi-textures and a shader to do my own > blending, but it would seem that I'm only allowed to use a single texture > unit at any given point, and I need 2 to do blending. > > It would seem that my problem is impossible to solve with the available > tools. I hope that someone can prove me wrong. 
> > Thor > > ------------------------------**----------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ------------------------------**----------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Fri May 11 01:00:19 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Fri, 11 May 2012 10:00:19 +0200 Subject: [Public WebGL] Premultiplied alpha blending and issues with color accuracy In-Reply-To: <4FACBD53.6030102@thj.no> References: <4FACBD53.6030102@thj.no> Message-ID: I've run into similar blending limitations when trying to implement brushes, but on desktop GL. The issue in a nutshell is that the GL blending equations do only implement a small subset of blending modes commonly used in drawing applications (you might get the simple alpha overlay right, but then there's modes like screen, burn, dodge, lighten, darken, grain merge, grain extract ect.) which you can't do with the blend equations. I agree with Ben that you should do texture ping/pong so you can implement your own unrestricted blending equation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cab...@ Fri May 11 01:24:22 2012 From: cab...@ (Rik Cabanier) Date: Fri, 11 May 2012 10:24:22 +0200 Subject: [Public WebGL] Premultiplied alpha blending and issues with color accuracy In-Reply-To: References: <4FACBD53.6030102@thj.no> Message-ID: What do you mean with texture ping-pong? Is it creating extra textures to hold intermediate values? On Fri, May 11, 2012 at 10:00 AM, Florian B?sch wrote: > I've run into similar blending limitations when trying to implement > brushes, but on desktop GL. The issue in a nutshell is that the GL blending > equations do only implement a small subset of blending modes commonly used > in drawing applications (you might get the simple alpha overlay right, but > then there's modes like screen, burn, dodge, lighten, darken, grain merge, > grain extract ect.) which you can't do with the blend equations. > > I agree with Ben that you should do texture ping/pong so you can implement > your own unrestricted blending equation. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cal...@ Fri May 11 01:34:46 2012 From: cal...@ (Mark Callow) Date: Fri, 11 May 2012 17:34:46 +0900 Subject: [Public WebGL] Premultiplied alpha blending and issues with color accuracy In-Reply-To: References: <4FACBD53.6030102@thj.no> Message-ID: <4FACCF26.9050903@hicorp.co.jp> On 12/05/11 16:30, Ben Vanik wrote: > > I've been doing image manipulation and have had to do the blending > myself in most cases for the same reasons you've described. It's > annoying, as it often requires excessive ping-ponging of > render-targets (you cannot read from/write to the same texture) Actually you can but you have to call glFinish between the writes and the reads. > and can slow things down considerably if your pipeline is long. But > unfortunately it's the way things are -- unless you write an > on-the-fly shader compiler (and deal with all the performance issues > that has). > Do you mean an on-the-fly shader generator? Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thj...@ Fri May 11 01:39:57 2012 From: thj...@ (Thor Harald Johansen) Date: Fri, 11 May 2012 10:39:57 +0200 Subject: [Public WebGL] Premultiplied alpha blending and issues with color accuracy In-Reply-To: <4FACD001.2060805@thj.no> References: <4FACBD53.6030102@thj.no> <4FACD001.2060805@thj.no> Message-ID: <4FACD05D.1060703@thj.no> > You can use any number of texture units you like (up > to MAX_TEXTURE_IMAGE_UNITS). Simultaneously? In a single shader? AFAIK, WebGL only allows one active texture per drawing operation. Thor ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Fri May 11 01:42:45 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Fri, 11 May 2012 10:42:45 +0200 Subject: [Public WebGL] Premultiplied alpha blending and issues with color accuracy In-Reply-To: References: <4FACBD53.6030102@thj.no> Message-ID: On Fri, May 11, 2012 at 10:24 AM, Rik Cabanier wrote: > What do you mean with texture ping-pong? Is it creating extra textures to > hold intermediate values? You usually do that with 2 textures and it works like this: a = your current drawing b = the current render target attached to an FBO 1. copy a to b by way of screen quad copy sourcing from a 2. draw what you want to add, sourcing from a and from whatever source you need to draw, using your custom blending equation, still outputting to b 3. swap a and b -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben...@ Fri May 11 01:44:48 2012 From: ben...@ (Ben Vanik) Date: Fri, 11 May 2012 17:44:48 +0900 Subject: [Public WebGL] Premultiplied alpha blending and issues with color accuracy In-Reply-To: <4FACD05D.1060703@thj.no> References: <4FACBD53.6030102@thj.no> <4FACD001.2060805@thj.no> <4FACD05D.1060703@thj.no> Message-ID: RE Mark: I meant that you cannot have an input sampler to a fragment shader also be bound as a render buffer in the current frame buffer: you cannot draw to yourself. It'll sometimes work on certain hardware, but not always. It would remove the need for ping-ponging/multiple render targets (not to be confused with MRT) in most image-related blending scenarios (compositing two layers together). On the fly shader generator, yes - there are many potential pitfalls with that approach, though, that make building a robust image filter pipeline very difficult (namely: precision inconsistencies and performance when compiling). RE Thor: ? activeTexture is used just to setup the sampler bindings for vertex/fragment shaders. In a vertex shader you can sample up to MAX_VERTEX_TEXTURE_IMAGE_UNITS textures, and in a fragment shader MAX_TEXTURE_IMAGE_UNITS. See: multitexturing On Fri, May 11, 2012 at 5:39 PM, Thor Harald Johansen wrote: > > You can use any number of texture units you like (up >> to MAX_TEXTURE_IMAGE_UNITS). >> > > Simultaneously? In a single shader? AFAIK, WebGL only allows one active > texture per drawing operation. 
> > Thor > > > > ------------------------------**----------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ------------------------------**----------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Fri May 11 01:46:41 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Fri, 11 May 2012 10:46:41 +0200 Subject: [Public WebGL] Premultiplied alpha blending and issues with color accuracy In-Reply-To: <4FACD05D.1060703@thj.no> References: <4FACBD53.6030102@thj.no> <4FACD001.2060805@thj.no> <4FACD05D.1060703@thj.no> Message-ID: On Fri, May 11, 2012 at 10:39 AM, Thor Harald Johansen wrote: > Simultaneously? In a single shader? AFAIK, WebGL only allows one active > texture per drawing operation. You can only write to one texture at a time. You can read from many textures. If you look at http://webglstats.com/ you can see that most desktops (99.9%) can read from 16 textures, and most mobiles (100%) can read from 8. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thj...@ Fri May 11 01:52:06 2012 From: thj...@ (Thor Harald Johansen) Date: Fri, 11 May 2012 10:52:06 +0200 Subject: [Public WebGL] Premultiplied alpha blending and issues with color accuracy In-Reply-To: References: <4FACBD53.6030102@thj.no> Message-ID: <4FACD336.8090907@thj.no> > You usually do that with 2 textures and it works like this: > a = your current drawing > b = the current render target attached to an FBO > > 1. copy a to b by way of screen quad copy sourcing from a > 2. draw what you want to add, sourcing from a and from whatever source > you need to draw, using your custom blending equation, still outputting to b > 3. swap a and b This would work in the case of an image calculated in the shader, I suppose, so that's a possible solution for the brush issue, but what about blending 2 textures together? I suppose that if I'm going to implement layers, I could just blend these to the main frame buffer by using premultiplied alpha, since I'm not actually outputting an alpha channel to the 3D canvas. For a low number of layers with relatively high opacity values, this should look fine, I guess. Thor ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From thj...@ Fri May 11 02:00:22 2012 From: thj...@ (Thor Harald Johansen) Date: Fri, 11 May 2012 11:00:22 +0200 Subject: [Public WebGL] Premultiplied alpha blending and issues with color accuracy In-Reply-To: References: <4FACBD53.6030102@thj.no> <4FACD001.2060805@thj.no> <4FACD05D.1060703@thj.no> Message-ID: <4FACD526.7040300@thj.no> On 5/11/2012 10:46 AM, Florian B?sch wrote: > On Fri, May 11, 2012 at 10:39 AM, Thor Harald Johansen > wrote: > > Simultaneously? In a single shader? AFAIK, WebGL only allows one active > texture per drawing operation. > > You can only write to one texture at a time. You can read from many > textures. If you look at http://webglstats.com/ you can see that most > desktops (99.9%) can read from 16 textures, and most mobiles (100%) can > read from 8. Thanks for that link. Didn't know about that website! 
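The limits mentioned here can also be queried at runtime rather than taken from aggregate stats. A small sketch, using the 2012-era 'experimental-webgl' context name:
var gl = document.createElement('canvas').getContext('experimental-webgl');
if (gl) {
  // Textures a fragment shader may sample from simultaneously.
  var fragmentUnits = gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS);
  // Vertex texture fetch; this can legitimately be 0 on some GPUs.
  var vertexUnits = gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS);
  var combinedUnits = gl.getParameter(gl.MAX_COMBINED_TEXTURE_IMAGE_UNITS);
  console.log(fragmentUnits, vertexUnits, combinedUnits);
}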
And okay, from your and Ben's comments, it seems I'm going to have to look more carefully at how multi-texturing actually works. All the examples I found were doing things like switching textures for each side of a cube, etc. None demonstrated blending of multiple textures. I find API references to be really hard to learn from. :/ So okay, texture ping-pong and multi-texturing are both promising avenues. I might write a shader that takes 8 texture units and blends them together to the main frame buffer, and enforce a limit of 8 layers in the user interface. From practical experience, people rarely use more than perhaps 5 layers anyway. Thor ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Fri May 11 01:59:52 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Fri, 11 May 2012 10:59:52 +0200 Subject: [Public WebGL] Premultiplied alpha blending and issues with color accuracy In-Reply-To: <4FACD336.8090907@thj.no> References: <4FACBD53.6030102@thj.no> <4FACD336.8090907@thj.no> Message-ID: On Fri, May 11, 2012 at 10:52 AM, Thor Harald Johansen wrote: > This would work in the case of an image calculated in the shader, I >> suppose, so that's a possible solution for the brush issue, but what about >> blending 2 textures together? I suppose that if I'm going to implement >> layers, I could just blend these to the main frame buffer by using >> premultiplied alpha, since I'm not actually outputting an alpha channel to >> the 3D canvas. For a low number of layers with relatively high opacity >> values, this should look fine, I guess. > > If you have a stack of alpha blended layers, you can just draw them on top of each other. However, if you need different blending equations for layer blending (like grain merge etc.) which the GL blending equation can't do, you'll have to do a ping-pong too. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Fri May 11 02:13:21 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Fri, 11 May 2012 11:13:21 +0200 Subject: [Public WebGL] Premultiplied alpha blending and issues with color accuracy In-Reply-To: <4FACD526.7040300@thj.no> References: <4FACBD53.6030102@thj.no> <4FACD001.2060805@thj.no> <4FACD05D.1060703@thj.no> <4FACD526.7040300@thj.no> Message-ID: On Fri, May 11, 2012 at 11:00 AM, Thor Harald Johansen wrote: > And okay, from your and Ben's comments, it seems I'm going to have to look > more carefully at how multi-texturing actually works. All the examples I > found were doing things like switching textures for each side of a cube, > etc. None demonstrated blending of multiple textures. I find API references > to be really hard to learn from. :/ > Using multiple textures is relatively straightforward. 
gl.useProgram(program);
// bind first texture
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, first_texture);
gl.uniform1i(gl.getUniformLocation(program, 'first_texture'), 0);
// bind second texture
gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, second_texture);
gl.uniform1i(gl.getUniformLocation(program, 'second_texture'), 1);
And then in the shader have:
varying vec2 uv;
uniform sampler2D first_texture, second_texture;
void main(){
  gl_FragColor = texture2D(first_texture, uv) + texture2D(second_texture, uv);
}
-------------- next part -------------- An HTML attachment was scrubbed... URL: From thj...@ Fri May 11 02:30:17 2012 From: thj...@ (Thor Harald Johansen) Date: Fri, 11 May 2012 11:30:17 +0200 Subject: [Public WebGL] Premultiplied alpha blending and issues with color accuracy In-Reply-To: References: <4FACBD53.6030102@thj.no> <4FACD001.2060805@thj.no> <4FACD05D.1060703@thj.no> <4FACD526.7040300@thj.no> Message-ID: <4FACDC29.8040603@thj.no> > Using multiple textures is relatively straightforward. > gl.activeTexture(gl.TEXTURE0); Oh, I see! So that just sets the *target* texture *unit* for the bind command? > gl.uniform1i(gl.getUniformLocation(program, 'first_texture'), 0); And you write the texture unit number to the sampler as a numeric uniform? It's simple, but not terribly intuitive! If all of these suggestions that you and Ben have so helpfully provided and explained to me should work out for me, I'll have a piece of software you can do high quality drawing in ready in not too long. Thor ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From cal...@ Fri May 11 02:35:44 2012 From: cal...@ (Mark Callow) Date: Fri, 11 May 2012 18:35:44 +0900 Subject: [Public WebGL] Premultiplied alpha blending and issues with color accuracy In-Reply-To: References: <4FACBD53.6030102@thj.no> <4FACD001.2060805@thj.no> <4FACD05D.1060703@thj.no> <4FACD526.7040300@thj.no> Message-ID: <4FACDD70.7060301@hicorp.co.jp> You could also order the calls
// bind first texture
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, first_texture);
// bind second texture
gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, second_texture);
// set up samplers
gl.useProgram(program);
gl.uniform1i(gl.getUniformLocation(program, 'first_texture'), 0);
gl.uniform1i(gl.getUniformLocation(program, 'second_texture'), 1);
glActiveTexture only sets the texture unit which will be affected by subsequent bind calls. Its state is completely irrelevant when setting the uniform values. Regards -Mark On 12/05/11 18:13, Florian Bösch wrote: > On Fri, May 11, 2012 at 11:00 AM, Thor Harald Johansen > wrote: > > And okay, from your and Ben's comments, it seems I'm going to have > to look more carefully at how multi-texturing actually works. All > the examples I found were doing things like switching textures for > each side of a cube, etc. None demonstrated blending of multiple > textures. I find API references to be really hard to learn from. :/ > > Using multiple textures is relatively straightforward.
> > gl.useProgram(program); > // bind first texture > gl.activeTexture(gl.TEXTURE0); > gl.bindTexture(gl.TEXTURE_2D, first_texture); > gl.uniform1i(gl.getUniformLocation(program, 'first_texture'), 0); > // bind second texture > gl.activeTexture(gl.TEXTURE1); > gl.bindTexture(gl.TEXTURE_2D, second_texture); > gl.uniform1i(gl.getUniformLocation(program, 'second_texture'), 1); > > And then in the shader have: > > varying vec2 uv; > uniform sampler2D first_texture, second_texture; > void main(){ > gl_FragColor = texture2D(first_texture, uv) + > texture2D(second_texture, uv); > } -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Fri May 11 02:37:16 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Fri, 11 May 2012 11:37:16 +0200 Subject: [Public WebGL] Premultiplied alpha blending and issues with color accuracy In-Reply-To: <4FACDC29.8040603@thj.no> References: <4FACBD53.6030102@thj.no> <4FACD001.2060805@thj.no> <4FACD05D.1060703@thj.no> <4FACD526.7040300@thj.no> <4FACDC29.8040603@thj.no> Message-ID: On Fri, May 11, 2012 at 11:30 AM, Thor Harald Johansen wrote: > Oh, I see! So that just sets the *target* texture *unit* for the bind > command? > Correct > And you write the texture unit number to the sampler as a numeric uniform? > It's simple, but not terribly intuitive! > It isn't very intuitive and often trips people up. Unfortunately it's historical and the unit/sampler thing is baked to hardware. Texture units these days are little more than one pointer indirection. In the olden days they used to represent scarce resources (on a lot of hardware they still do). Direct State Access unfortunately hasn't made its debut to texture handling in the GL specifications yet. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thj...@ Fri May 11 03:09:50 2012 From: thj...@ (Thor Harald Johansen) Date: Fri, 11 May 2012 12:09:50 +0200 Subject: [Public WebGL] Premultiplied alpha blending and issues with color accuracy In-Reply-To: <4FACDC29.8040603@thj.no> References: <4FACBD53.6030102@thj.no> <4FACD001.2060805@thj.no> <4FACD05D.1060703@thj.no> <4FACD526.7040300@thj.no> <4FACDC29.8040603@thj.no> Message-ID: <4FACE56E.1040003@thj.no> > If all of these suggestions that you and Ben have so helpfully provided > and explained to me should work out for me, I'll have a piece of > software you can do high quality drawing in ready in not too long. Oh, and on the sunny side of things, OpenGL and shaders are giving me new options in some other areas: - I'm actually using the JPEG2000 RCT color space for my color channels, storing Y, Cb and Cr in the R, G and B channels respectively. This has the interesting side effect of allowing me to treat color like a complex number. Rotating (Cb, Cr) takes you through the color circle. Altering the magnitude alters the saturation. Linear interpolation gives pleasing results, because even complementary colors share the same brightness, so gamma encoded colors don't skew it as much. - In addition to this, I have been experimenting with gamma. Linear gamma, x^(1/2.2), would provide perfect color mixing even with RGB, but this makes dark grays look like absolute s**t at 8 bits per channel. I then tried a gamma of 1.8, x^(1.8/2.2), the one used by old Apple machines. The RCT color mixing was already good, but this took it up another notch, while still rendering dark shades of gray without banding. I have yet to add the linear section specified in sRGB. 
2.2 is an adequate approximation for the time being. If it seems like I'm spending too much time on these details, you haven't heard hundreds of artists complain about the muddy color mixing in their painting tools yet. ;) Thor ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Fri May 11 14:57:02 2012 From: kbr...@ (Kenneth Russell) Date: Fri, 11 May 2012 14:57:02 -0700 Subject: [Public WebGL] Moving WebGL Khronos related stuff to GitHub In-Reply-To: References: <1347907949.5360587.1336601381144.JavaMail.root@mozilla.com> Message-ID: On Wed, May 9, 2012 at 7:54 PM, Cedric Vivier wrote: > On Thu, May 10, 2012 at 10:40 AM, David Sheets wrote: >> If each version is moved into a separate branch, the HTTP POST agent >> on khronos.org that receives post-commit messages will need to perform >> "git pull" against the various version branches to continue serving >> individual spec and test suite versions (which I believe is necessary >> for reference purposes and link maintenance). I don't think this is a >> big deal but it will make the post-commit mechanism more complicated >> than simply "if (HTTP POST from allowed_ips[]) then git pull" >> (mirroring). > > Yes, not a big deal imho to rather "git fetch" then loop all version > branches (v*) to generate the reference snapshots on khronos.org. > Otoh this would simplify the repository model for day-to-day > development and allow nice git(hub)-based diff'ing between snapshots. Cedric, Conceptually, using Git branches for the spec and conformance suite revisions sounds nice. However, I'm concerned about a couple of things: 1. Having all snapshots of the conformance suite available in a flat checkout has some maintenance advantages. It's possible to commit updates to multiple versions of the conformance tests simultaneously. This has come in handy during the stabilization of the 1.0.1 suite, where changes go into the top-of-tree test suite as well as the 1.0.1 version. 2. With the current repository structure, checking out portions of the branches on the server side would be fairly complicated. For example, http://www.khronos.org/registry/webgl/specs/[number] would need to be populated with the contents of the git checkout of specs/latest/ at branch [number]. Separately, http://www.khronos.org/registry/webgl/conformance-suites/[number] would need to be populated with the contents of the git checkout of sdk/tests/ at branch [number]. In other words, the server-side checkout would not be as simple as just checking out the branch -- unless we do that into a side directory on the server and set up the URLs above using symlinks. (James, would that be possible?) If you think that the advantages of using branches outweigh any disadvantages, that's fine -- please work with Gregg and James to get the server-side scripts updated appropriately. I'm strongly in favor of preserving the "sdk" directory, including the demos, which demonstrate some canonical code patterns such as recovery from lost context. Gregg's debugging utilities are also there. Splitting these into a separate repository will only make it more difficult to find them and won't simplify ongoing development. 
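The lost-context recovery pattern those demos cover boils down to two event handlers. A minimal sketch; canvas, initGLResources, drawFrame and rafId are placeholders:
var rafId;
canvas.addEventListener('webglcontextlost', function (e) {
  // Without preventDefault() the context will not be restored later.
  e.preventDefault();
  cancelAnimationFrame(rafId); // may need a vendor prefix in 2012 browsers
}, false);
canvas.addEventListener('webglcontextrestored', function () {
  // All GL objects are gone; shaders, buffers and textures must be recreated.
  initGLResources();
  rafId = requestAnimationFrame(drawFrame);
}, false);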
-Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Fri May 11 15:01:32 2012 From: kbr...@ (Kenneth Russell) Date: Fri, 11 May 2012 15:01:32 -0700 Subject: [Public WebGL] Moving WebGL Khronos related stuff to GitHub In-Reply-To: References: <1347907949.5360587.1336601381144.JavaMail.root@mozilla.com> Message-ID: On Fri, May 11, 2012 at 2:57 PM, Kenneth Russell wrote: > On Wed, May 9, 2012 at 7:54 PM, Cedric Vivier wrote: >> On Thu, May 10, 2012 at 10:40 AM, David Sheets wrote: >>> If each version is moved into a separate branch, the HTTP POST agent >>> on khronos.org that receives post-commit messages will need to perform >>> "git pull" against the various version branches to continue serving >>> individual spec and test suite versions (which I believe is necessary >>> for reference purposes and link maintenance). I don't think this is a >>> big deal but it will make the post-commit mechanism more complicated >>> than simply "if (HTTP POST from allowed_ips[]) then git pull" >>> (mirroring). >> >> Yes, not a big deal imho to rather "git fetch" then loop all version >> branches (v*) to generate the reference snapshots on khronos.org. >> Otoh this would simplify the repository model for day-to-day >> development and allow nice git(hub)-based diff'ing between snapshots. > > Cedric, > > Conceptually, using Git branches for the spec and conformance suite > revisions sounds nice. However, I'm concerned about a couple of > things: > > 1. Having all snapshots of the conformance suite available in a flat > checkout has some maintenance advantages. It's possible to commit > updates to multiple versions of the conformance tests simultaneously. > This has come in handy during the stabilization of the 1.0.1 suite, > where changes go into the top-of-tree test suite as well as the 1.0.1 > version. > > 2. With the current repository structure, checking out portions of the > branches on the server side would be fairly complicated. For example, > > ?http://www.khronos.org/registry/webgl/specs/[number] > > would need to be populated with the contents of the git checkout of > specs/latest/ at branch [number]. Separately, > > ?http://www.khronos.org/registry/webgl/conformance-suites/[number] > > would need to be populated with the contents of the git checkout of > sdk/tests/ at branch [number]. Another concern is that it might be more difficult to revise the conformance suite independently of the spec. We need to anticipate the need for version 1.0.1a, 1.0.1b, etc. of the conformance tests, without necessarily revising the spec. -Ken > In other words, the server-side > checkout would not be as simple as just checking out the branch -- > unless we do that into a side directory on the server and set up the > URLs above using symlinks. (James, would that be possible?) > > If you think that the advantages of using branches outweigh any > disadvantages, that's fine -- please work with Gregg and James to get > the server-side scripts updated appropriately. > > I'm strongly in favor of preserving the "sdk" directory, including the > demos, which demonstrate some canonical code patterns such as recovery > from lost context. Gregg's debugging utilities are also there. 
> Splitting these into a separate repository will only make it more > difficult to find them and won't simplify ongoing development. > > -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From jef...@ Mon May 14 07:57:44 2012 From: jef...@ (Jeff Russell) Date: Mon, 14 May 2012 10:57:44 -0400 Subject: [Public WebGL] Premultiplied alpha blending and issues with color accuracy In-Reply-To: <4FACE56E.1040003@thj.no> References: <4FACBD53.6030102@thj.no> <4FACD001.2060805@thj.no> <4FACD05D.1060703@thj.no> <4FACD526.7040300@thj.no> <4FACDC29.8040603@thj.no> <4FACE56E.1040003@thj.no> Message-ID: > > - In addition to this, I have been experimenting with gamma. Linear gamma, > x^(1/2.2), would provide perfect color mixing even with RGB, but this makes > dark grays look like absolute s**t at 8 bits per channel. I then tried a > gamma of 1.8, x^(1.8/2.2), the one used by old Apple machines. The RCT > color mixing was already good, but this took it up another notch, while > still rendering dark shades of gray without banding. I have yet to add the > linear section specified in sRGB. 2.2 is an adequate approximation for the > time being. WebGL still needs an sRGB blending extension; this would allow you to work with linear color values without the banding/precision loss you describe, or having to resort to float16 render targets. This has been solved in desktop GL / D3D / consoles for many years now. WebGL will come around soon, I hope :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From kos...@ Wed May 16 10:47:10 2012 From: kos...@ (David Sheets) Date: Wed, 16 May 2012 10:47:10 -0700 Subject: [Public WebGL] WebGL URI Extension Proposal Message-ID: Hello, world! I am interested in your input on an extension I'd like to propose called URI . The URI extension embeds the WebGL extension namespace into the RFC 3986 Open Web namespace of URIs. This extension is needed because Khronos needs to keep control of the extension namespace despite: - Service providers' desire to understand shader resources (and third-party extensions will not be excluded) - Shader authors' desire to use third-party syntax extensions and standard libraries - Tool developers' desire to provide unambiguous, machine-consumable extension names The use of URIs solves this decentralized, hierarchical naming problem in a web standard way and allows Khronos to decide what authorities to bless for extensions supported by conformant browsers. The proposed extension enables new API and ESSL preprocessor behavior. I have written a JavaScript shim that mimics the behavior of the proposed extension if you are interested. Also I look forward to discussing this proposal after my talk at this month's WebGL Developers San Francisco Meetup on Thursday at 7pm at Google's San Francisco offices (RSVP ). Sincerely, David Sheets PS I apologize if you receive this message multiple times, my previous message does not appear to have posted. 
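The JavaScript shim mentioned above is not shown in the thread. Purely as an illustration of the idea (and not the actual shim), a URI-keyed lookup on top of today's getExtension strings might look like this:
// Hypothetical mapping; the registry URLs follow the pattern used later in this thread.
var EXTENSION_URIS = {
  'http://www.khronos.org/registry/webgl/extensions/OES_texture_float/': 'OES_texture_float',
  'http://www.khronos.org/registry/webgl/extensions/OES_standard_derivatives/': 'OES_standard_derivatives'
};
function getExtensionByURI(gl, uri) {
  var name = EXTENSION_URIS[uri];
  return name ? gl.getExtension(name) : null;
}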
----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Wed May 16 11:07:44 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Wed, 16 May 2012 20:07:44 +0200 Subject: [Public WebGL] WebGL URI Extension Proposal In-Reply-To: References: Message-ID: I don't think I understand this proposal. Perhaps illustrate it with a minimal fully-featured pseudo API usage app example. On Wed, May 16, 2012 at 7:47 PM, David Sheets wrote: > I am interested in your input on an extension I'd like to propose > called URI . > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kos...@ Wed May 16 15:11:27 2012 From: kos...@ (David Sheets) Date: Wed, 16 May 2012 15:11:27 -0700 Subject: [Public WebGL] WebGL URI Extension Proposal In-Reply-To: References: Message-ID: On Wed, May 16, 2012 at 11:07 AM, Florian B?sch wrote: > I don't think I understand this proposal. Perhaps illustrate it with a > minimal fully-featured pseudo API usage app example. In JavaScript, after including some prometheus.tld script: gl.getExtension("URI"); gl.getExtension("http://www.khronos.org/registry/webgl/extensions/OES_texture_float/"); var k = gl.getExtension("http://www.prometheus.tld/webgl/advisories/"); if (k.slow) { fallback(); } else { render_resplendent(); } In GLSL: #extension URI : enable #extension : enable #extension : enable I believe that a standard #pragma directive for declaring metadata-in-comments format would also be extremely helpful. Perhaps something like: #pragma META Standardization of federated hierarchical namespaces yields consistent global use and Web-scale interop. On the Open Web, namespaces are open. It is worth noting that Khronos is presently constructing an ad hoc hierarchical extension namespace with the various extension source and vendor prefixes ("WEBGL","OES","EXT","ANGLE","WEBKIT","NV","ATI", &c.). This extension proposal would unify this namespace with the Web's namespace. David > On Wed, May 16, 2012 at 7:47 PM, David Sheets wrote: >> >> I am interested in your input on an extension I'd like to propose >> called URI . ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gle...@ Wed May 16 15:36:33 2012 From: gle...@ (Glenn Maynard) Date: Wed, 16 May 2012 17:36:33 -0500 Subject: [Public WebGL] WebGL URI Extension Proposal In-Reply-To: References: Message-ID: On Wed, May 16, 2012 at 12:47 PM, David Sheets wrote: > I am interested in your input on an extension I'd like to propose > called URI . > > The URI extension embeds the WebGL extension namespace into the RFC > 3986 Open Web namespace of URIs. > You're fixing a make-believe problem. Namespacing strings with prefixes (eg. "moz" or "EXT") works just fine; both Web APIs and OpenGL have used it successfully for years. All URLs would do is make everything uglier, without fixing any problems that actually exist. 
http://lists.w3.org/Archives/Public/public-webapps/2011JulSep/1584.html (As before, I'll only bother to respond further if it looks like a WebGL editor is actually taking this seriously, just for reasons of limited free time.) -- Glenn Maynard -------------- next part -------------- An HTML attachment was scrubbed... URL: From cal...@ Wed May 16 19:11:54 2012 From: cal...@ (Mark Callow) Date: Thu, 17 May 2012 11:11:54 +0900 Subject: [Public WebGL] WebGL URI Extension Proposal In-Reply-To: References: Message-ID: <4FB45E6A.5090501@hicorp.co.jp> On 17/05/2012 07:11, David Sheets wrote: > gl.getExtension("http://www.khronos.org/registry/webgl/extensions/OES_texture_float/"); Seeing something like this in a program would make me think that getExtension will download the extension rather than it having to be built in. Since extensions cannot be downloaded and are implemented by a very small number of people, I think the current prefixing mechanism works just fine. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben...@ Wed May 16 19:25:25 2012 From: ben...@ (Ben Vanik) Date: Wed, 16 May 2012 19:25:25 -0700 Subject: [Public WebGL] WebGL URI Extension Proposal In-Reply-To: <4FB45E6A.5090501@hicorp.co.jp> References: <4FB45E6A.5090501@hicorp.co.jp> Message-ID: Agreed - as a dev I haven't felt much pain here because tools can go a long way towards making this stuff better, very similar to the webgl-uri library written here, and as user code it's much more flexible. For example, I use a glsl compiler to minify and optimize my shaders ( http://code.google.com/p/glsl-unit/wiki/UsingTheCompiler) - I'd rather have that spit out optimizable JS that enables extensions and does the checks at compile time rather than require massive strings and runtime parsing included in my shipped resources. As user code, something like GLSL Sandbox could do it at runtime, while shipped apps and games could do it much earlier or wherever it made sense. On Wed, May 16, 2012 at 7:11 PM, Mark Callow wrote: > On 17/05/2012 07:11, David Sheets wrote: > > gl.getExtension("http://www.khronos.org/registry/webgl/extensions/OES_texture_float/" ); > > Seeing something like this in a program would make me think that > getExtension will download the extension rather than it having to be built > in. > > Since extensions cannot be downloaded and are implemented by a very small > number of people, I think the current prefixing mechanism works just fine. > > Regards > > -Mark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Wed May 16 20:36:11 2012 From: kbr...@ (Kenneth Russell) Date: Wed, 16 May 2012 20:36:11 -0700 Subject: [Public WebGL] getXXXParameter issues In-Reply-To: References: Message-ID: On Thu, May 10, 2012 at 1:01 AM, Gregg Tavares (?) wrote: > I updated gl-object-get-calls?to check for invalid enums and couple of > issues came up. > > 1)?The WebGL spec says if the parameter is invalid then 'null' is returned. > > specifically not 'undefined'. Some browsers don't appear to be returning > 'null' so I thought I should double check. > > 2) For pretty much all these functions the WebGL spec only defines what is > returned when the 'pname' argument is invalid. > > For some functions like > > ? ?getTexParameter(target, pname) > ? ?getBufferParameter(target, pname) > ? ?getRenderbufferParameter(target, pname); > > It doesn't say specifically what they return when the 'target' argument is > invalid. 
For now I'm assuming they > should return 'null' in those cases too but I just thought I'd bring it up. > The spec probably needs to be updated. > > Similarly > > ? ?getFramebufferAttachmentParameter(target, attachment, pname) > > Also has an 'attachment' parameter that can be invalid. > > ? ?getVertexAttrib(index, pname); > > has an index parameter that can be invalid. Thanks for pointing out these issues. Updated the specs for all of these entry points to indicate that if an OpenGL error is generated (not just the ones in the WebGL spec, but those implicitly referenced in the OpenGL ES 2.0 spec), they return null. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kos...@ Wed May 16 21:24:46 2012 From: kos...@ (David Sheets) Date: Wed, 16 May 2012 21:24:46 -0700 Subject: [Public WebGL] WebGL URI Extension Proposal In-Reply-To: <4FB45E6A.5090501@hicorp.co.jp> References: <4FB45E6A.5090501@hicorp.co.jp> Message-ID: On Wed, May 16, 2012 at 7:11 PM, Mark Callow wrote: > On 17/05/2012 07:11, David Sheets wrote: > > gl.getExtension("http://www.khronos.org/registry/webgl/extensions/OES_texture_float/"); > > Seeing something like this in a program would make me think that > getExtension will download the extension rather than it having to be built > in. >From : An individual scheme does not have to be classified as being just one of "name" or "locator". Instances of URIs from any given scheme may have the characteristics of names or locators or both, often depending on the persistence and care in the assignment of identifiers by the naming authority, rather than on any quality of the scheme. If you are concerned about confusion resulting from using "http" scheme URIs for extension indentifiers, Khronos could register a URN namespace with IANA and decree identifiers in that namespace as canonical. This would be much prettier yet still persistent, unique, global, etc (URI includes URN). > Since extensions cannot be downloaded and are implemented by a very small > number of people, I think the current prefixing mechanism works just fine. Many GLSL language extensions can be expressed as resources (syntax transform data, proof checkers, service endpoints, etc) that are used at either compile- or run-time. My proposal primarily concerns the machine comprehension of third-party-extended shaders (like Google's glsl-unit that Ben mentions below). In glsl-unit's case, Google is the authority on the validity and versioning of their language extension. I believe there is a need for a common agreed-upon language extension mechanism that has well-defined semantics beyond an opaque prefixed string. To solve this web-scale naming problem, I propose using the Web's standard naming system, URI. > Regards > > ??? 
-Mark ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kos...@ Wed May 16 21:27:31 2012 From: kos...@ (David Sheets) Date: Wed, 16 May 2012 21:27:31 -0700 Subject: [Public WebGL] WebGL URI Extension Proposal In-Reply-To: References: <4FB45E6A.5090501@hicorp.co.jp> Message-ID: On Wed, May 16, 2012 at 7:25 PM, Ben Vanik wrote: > Agreed - as a dev I haven't felt much pain here because tools can go a long > way towards making this stuff better, very similar to the webgl-uri library > written here, and as user code it's much more flexible. The webgl-uri shim is a proof of concept, not something developers should have to concern themselves with. For tools to interoperate with each other, we must agree on a standard way to describe non-standard extensions (extensibility) so tools can either consume or reject inputs intelligently without unexpected results. The browser is now the standard consumption device for shaders and thus I supplied a browser shim; however, shader processing tools are the primary beneficiaries of an extension like URI. > For example, I use a > glsl compiler to minify and optimize my shaders > (http://code.google.com/p/glsl-unit/wiki/UsingTheCompiler) - I'd rather have > that spit out optimizable JS that enables extensions and does the checks at > compile time rather than require massive strings and runtime parsing > included in my shipped resources. As user code, something like GLSL Sandbox > could do it at runtime, while shipped apps and games could do it much > earlier or wherever it made sense. I totally agree with you and that is why something like the URI extension is needed. No runtime change is required in the vast majority of cases because, as you point out, the resources can be specialized early. My proposal demonstrates the uniformity of end-to-end URI namespace mapping. That is, Khronos extension names exist in URI-space, too, even if you never refer to them with URIs. My proposal primarily concerns a way to declare that some shader source code is using a specific non-standard extension. glsl-unit implements a non-standard extension to the shading language (crucial semantics in comments). If you were to publish your not-yet-compiled glsl-unit shaders right now, there would be no declaration for automation to examine to understand how to consume the glsl-unit commands in comments. IMHO, this is a problem. glsl-unit implements many features that other tools implement in different ways (e.g. source inclusion). If someone else wants to implement a tool that consumes glsl-unit's template commands as well as some other tool's template commands (or an extension to glsl-unit's commands), they have to sniff and special case these syntaxes without a standard break-out hint that says "I am using concepts from ". As tool developers without a blessed way to declare our language extensions, we are harming ecosystem interoperability. Our tools can work together and make our lives easier by cooperating if we have a standard way to declare our non-standard extension use. My explanations may be confusing or unclear and if you don't understand my reasoning, please don't hesitate to ask. :-) Most likely I have simply not explained my proposal well enough. 
David > On Wed, May 16, 2012 at 7:11 PM, Mark Callow > wrote: >> >> On 17/05/2012 07:11, David Sheets wrote: >> >> >> gl.getExtension("http://www.khronos.org/registry/webgl/extensions/OES_texture_float/"); >> >> Seeing something like this in a program would make me think that >> getExtension will download the extension rather than it having to be built >> in. >> >> Since extensions cannot be downloaded and are implemented by a very small >> number of people, I think the current prefixing mechanism works just fine. >> >> Regards >> >> -Mark > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Thu May 17 01:26:44 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Thu, 17 May 2012 10:26:44 +0200 Subject: [Public WebGL] WebGL URI Extension Proposal In-Reply-To: References: <4FB45E6A.5090501@hicorp.co.jp> Message-ID: On Thu, May 17, 2012 at 6:27 AM, David Sheets wrote: > As tool developers without a blessed way to declare our language > extensions, we are harming ecosystem interoperability. Our tools can > work together and make our lives easier by cooperating if we have a > standard way to declare our non-standard extension use. > > My explanations may be confusing or unclear and if you don't > understand my reasoning, please don't hesitate to ask. :-) Most likely > I have simply not explained my proposal well enough. You're gonna accuse me of setting up a strawman again, but what you say amounts essentially to: "All JS preprocessors and template frameworks should declare a URI serving as a canonical identifier for their syntax, only thus will the various libraries and tools be able to interact." I think this is a flawed assumption on many levels, and I'm not arguing against URIs. - Just having URIs doesn't solve anything by itself; it requires that users strew URI white-noise around their code and that the various tool implementors are aware of and go through the specification of each other's toolsets and see how they can interoperate with them. - Making various components of their apps interoperate is usually the job of the developer using them, adding glue/facades/wrappers etc. as needed if these toolkits are not prepared to work together, and if they can't work together in a transparent fashion. - Many tools can work together entirely without knowledge about other tools, because they can work transparently and don't have to know how. - Even if you can make everyone who writes tools agree, and even if you can convince all users to strew URI white-noise around, and even if you can convince every toolkit writer to venture on a lifelong search for URIs he can find to interoperate with, that still doesn't "solve" the interoperability problem. Not all extensions are straightforward syntactic sugar and transmogrification of one flavor of Turing tape to another. Often what toolkits do is transparent or entirely semantic, not syntactic sugar. Often what a toolkit does can't even be described in some kind of transmogrification scheme. Regardless of that, it's the WSGI debate all over again. At some point in the WSGI community there was "the great interoperability debate", where people argued that a flat dictionary and simple return type was a bad way for stacks of WSGI tools to interoperate. People were arguing passionately that this would hinder future growth and be a great pain for users etc. Somebody went out and wrote a toolkit for stratified WSGI layer management and deployment. Fast forward: nobody uses that framework, WSGI2 never happened, and people happily chug their apps along with little or minor irritation at the occasional snippet of glue to write. Grand interoperability debates never go anywhere, because they try to solve an insanely hard problem using sweeping future prophecies as arguments and propose solutions that require everybody to agree. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bag...@ Thu May 17 08:02:48 2012 From: bag...@ (Patrick Baggett) Date: Thu, 17 May 2012 10:02:48 -0500 Subject: [Public WebGL] WebGL URI Extension Proposal In-Reply-To: References: Message-ID: On Wed, May 16, 2012 at 5:11 PM, David Sheets wrote: > > On Wed, May 16, 2012 at 11:07 AM, Florian Bösch wrote: > > I don't think I understand this proposal. Perhaps illustrate it with a > > minimal fully-featured pseudo API usage app example. > > In JavaScript, after including some prometheus.tld script: > > gl.getExtension("URI"); > gl.getExtension(" > http://www.khronos.org/registry/webgl/extensions/OES_texture_float/"); > var k = gl.getExtension("http://www.prometheus.tld/webgl/advisories/"); > if (k.slow) { fallback(); } else { render_resplendent(); } > > Eww. > In GLSL: > > #extension URI : enable > #extension < > http://www.khronos.org/registry/webgl/extensions/OES_standard_derivatives/ > > > : enable > #extension > > : enable > > Eww. > I believe that a standard #pragma directive for declaring > metadata-in-comments format would also be extremely helpful. > > Perhaps something like: > > #pragma META > > Standardization of federated hierarchical namespaces yields consistent > global use and Web-scale interop. On the Open Web, namespaces are > open. > > The elephant in the room is that this isn't really solving any issues that are currently being faced, and 15+ years of OpenGL on the desktop has shown that it won't likely be an issue either. The GL extension mechanism has hardly found itself dying for a new namespace rule. The only thing this accomplishes is to devastate source compatibility with GLES and add a bunch of annoying URIs to make things look more web-y. Patrick > It is worth noting that Khronos is presently constructing an ad hoc > hierarchical extension namespace with the various extension source and > vendor prefixes ("WEBGL","OES","EXT","ANGLE","WEBKIT","NV","ATI", > &c.). This extension proposal would unify this namespace with the > Web's namespace. > > David > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Thu May 17 08:11:08 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Thu, 17 May 2012 17:11:08 +0200 Subject: [Public WebGL] WebGL URI Extension Proposal In-Reply-To: References: <4FB45E6A.5090501@hicorp.co.jp> Message-ID: There might be a place/way to add a middleware organizer that allows WebGL/GLSL middleware to co-exist in the face of irreconcilable conflict or issues. I don't think the WebGL extension mechanism is the place. But you can attempt to solve that problem with a shim, library or framework for people to use. If it turns out to be popular (like, say, JavaScript module-like things) maybe one day it'll make its own standard. -------------- next part -------------- An HTML attachment was scrubbed...
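For comparison, the kind of user-space helper that already gets written against the current prefixing mechanism might look like the sketch below; the prefix list and the extension name are illustrative only, not a normative set, and an existing WebGL context gl is assumed:

    // Try the unprefixed name first, then a few vendor-prefixed variants.
    function getExtensionWithPrefixes(gl, name) {
      var prefixes = ["", "WEBKIT_", "MOZ_"];
      for (var i = 0; i < prefixes.length; i++) {
        var ext = gl.getExtension(prefixes[i] + name);
        if (ext) {
          return ext;
        }
      }
      return null;
    }

    // Usage: returns whichever variant the browser exposes, or null.
    var aniso = getExtensionWithPrefixes(gl, "EXT_texture_filter_anisotropic");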
URL: From kos...@ Thu May 17 10:44:35 2012 From: kos...@ (David Sheets) Date: Thu, 17 May 2012 10:44:35 -0700 Subject: [Public WebGL] WebGL URI Extension Proposal In-Reply-To: References: Message-ID: On Thu, May 17, 2012 at 8:02 AM, Patrick Baggett wrote: > > > On Wed, May 16, 2012 at 5:11 PM, David Sheets wrote: >> >> >> On Wed, May 16, 2012 at 11:07 AM, Florian B?sch wrote: >> > I don't think I understand this proposal. Perhaps illustrate it with a >> > minimal fully-featured pseudo API usage app example. >> >> In JavaScript, after including some prometheus.tld script: >> >> gl.getExtension("URI"); >> >> gl.getExtension("http://www.khronos.org/registry/webgl/extensions/OES_texture_float/"); >> var k = gl.getExtension("http://www.prometheus.tld/webgl/advisories/"); >> if (k.slow) { fallback(); } else { render_resplendent(); } >> > > Eww. > >> >> In GLSL: >> >> #extension URI : enable >> #extension >> >> : enable >> #extension >> >> : enable >> > > Eww. > >> >> I believe that a standard #pragma directive for declaring >> metadata-in-comments format would also be extremely helpful. >> >> Perhaps something like: >> >> #pragma META >> >> Standardization of federated hierarchical namespaces yields consistent >> global use and Web-scale interop. On the Open Web, namespaces are >> open. >> > > The elephant in the room is that this isn't really solving any issues that > are currently being faced, and 15+ years of OpenGL on the desktop has shown > that it won't likely be an issue either. The GL extension mechanism has > hardly found itself dying for a new namespace rule. The only thing this > accomplishes is?devastates?source?compatibility?with GLES The simple transform in my shim demonstrates otherwise. Far more devastating to source compatibility with GLES are undeclared ad hoc extensions which fail in unexpected ways instead of "don't know extension http://glslexts.tld/real_type_system". > and add a bunch of > annoying URIs to make things look more web-y. > > Patrick > > >> >> It is worth noting that Khronos is presently constructing an ad hoc >> hierarchical extension namespace with the various extension source and >> vendor prefixes ("WEBGL","OES","EXT","ANGLE","WEBKIT","NV","ATI", >> &c.). This extension proposal would unify this namespace with the >> Web's namespace. >> >> David >> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kos...@ Thu May 17 11:07:12 2012 From: kos...@ (David Sheets) Date: Thu, 17 May 2012 11:07:12 -0700 Subject: [Public WebGL] WebGL URI Extension Proposal In-Reply-To: References: <4FB45E6A.5090501@hicorp.co.jp> Message-ID: On Thu, May 17, 2012 at 1:26 AM, Florian B?sch wrote: > On Thu, May 17, 2012 at 6:27 AM, David Sheets wrote: >> >> As tool developers without a blessed way to declare our language >> extensions, we are harming ecosystem interoperability. Our tools can >> work together and make our lives easier by cooperating if we have a >> standard way to declare our non-standard extension use. >> >> My explanations may be confusing or unclear and if you don't >> understand my reasoning, please don't hesitate to ask. :-) Most likely >> I have simply not explained my proposal well enough. 
> > > You're gonna accuse me of setting up a strawman again, but what you say > amounts essentially to: "All JS preprocessors and template frameworks should > declare an URI serving as a canonical identifier for their syntax, only thus > will the various libraries and tools be able to interact." You are correct that this is a misrepresentation of my position. My position is actually: "When tools processing formal languages wish to describe and consume source using language extensions, a standard method should be used." The OpenGL ES WG clearly agrees with this statement by inclusion of the #extension preprocessor directive. I am simply advocating the expansion of #extension's namespace to the namespace of the Web, URI. > I think this is a flawed assumption on many levels, and I'm not arguing > against URIs. > - Just having URIs doesn't solve anything by itself, it requires that users > strew around the URI white-noise around their code and that the various tool > implementors are aware of and go trough the specification of each other > toolset and see how they can interoperate with it. It requires nothing from users unless they wish to use extensions named by URI. Tool developers will always have to analyze their source and target languages. My proposal is for a common *standard* way to express extension names in the Web namespace for those who wish to use it. A mechanism of this sort belongs in the *Web*GL extension registry/standard because it defines the relation between RFC 3986, the Web's namespace, and WebGL, a new Web browser API. > - Making various components of their apps interoperate is usually the job of > the developer using them, adding glue/facades/wrappers etc. as needed if > these toolkits are not prepared to work together, and if they can't work > together in a transparent fashion. It is the job of the tool developer to lessen this hackery. Quite soon CSS3 will accept custom() shaders for DOM elements and those authors will not care *at all* about glue/facades/wrappers. Using a standard syntax for URIs as extension names greatly improves the ability for tool developers to build interoperable tools. > - Many tools can work together entirely without knowledge about other tools, > because they can work transparently and don't have to know how. > - Even if you can make everyone who writes tools agree, and even if you can > convince all users to strew around URI whitenoise, and even if you can > convince every toolkit writer to venture on a lifelong search for URIs he > can find to interoperate with, that still doesn't "solve" interoperability > problem. Not all extensions are straightforward syntactic suggar and > transmoglification of one flavor of turing tape to another. Often what > toolkits do is transparent or entirely semantic, not syntactic suggar. Often > what a toolkit does can't even be described in some kind of > transmoglification scheme. Using URIs does not solve every problem tool developers face but is a necessary precondition for web-scale tools. > Regardless of that, it's the WSGI debate all over again. At some point in > the WSGI community there was "the great interoperability debate", where > people argued that a flat dictionary and simple return type was a bad way > for stacks of WSGI tools to interoperate. People where arguing passionately > that this would hinder future growth and be a great pain for users etc. > Somebody went out and wrote a toolkit for stratified WSGI layer management > and deployment. 
Fast forward, nobody uses that framework, WSGI2 never > happened, people happily chug their apps along with little or minor > irritation at the occasional snippet of glue to write. That is a different debate with different circumstances. My draft is proposing a simple combination of RFC3986 URIs (a ubiquitously deployed and massive successful naming system) and *Web*GL. > Grand interoperability debates never go anywhere, because they try to solve > an insanely hard problem using sweeping future prophecies as arguments and > propose solutions that require everybody to agree. This proposal only requires agreement from those who wish to use URI as an identifier type for extensions. RFC3986 (STD 66) is well-known and well-understood. The combination of RFC3986 and WebGL is quite simple. How is this problem insanely hard? I am only seeking agreement regarding the interaction of these two Open Web Standards. David ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From bag...@ Thu May 17 12:08:32 2012 From: bag...@ (Patrick Baggett) Date: Thu, 17 May 2012 14:08:32 -0500 Subject: [Public WebGL] WebGL URI Extension Proposal In-Reply-To: References: Message-ID: > > > > The elephant in the room is that this isn't really solving any issues > that > > are currently being faced, and 15+ years of OpenGL on the desktop has > shown > > that it won't likely be an issue either. The GL extension mechanism has > > hardly found itself dying for a new namespace rule. The only thing this > > accomplishes is devastates source compatibility with GLES > > The simple transform in my shim demonstrates otherwise. Far more > devastating to source compatibility with GLES are undeclared ad hoc > extensions which fail in unexpected ways instead of "don't know > extension http://glslexts.tld/real_type_system". > > Is undeclared ad-hoc extensions a problem? It isn't like developers can just make new extensions, only browser authors can. I don't remember anyone commenting about the huge fragmentation due to browser specific "ad-hoc" extensions. If that is your problem with the current system, then it hardly seems like a justification for any changes at all. My 2c. Patrick -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Thu May 17 12:18:46 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Thu, 17 May 2012 21:18:46 +0200 Subject: [Public WebGL] WebGL URI Extension Proposal In-Reply-To: References: <4FB45E6A.5090501@hicorp.co.jp> Message-ID: On Thu, May 17, 2012 at 8:07 PM, David Sheets wrote: > You are correct that this is a misrepresentation of my position. My > position is actually: "When tools processing formal languages wish to describe and consume > source using language extensions, a standard method should be used." > > The OpenGL ES WG clearly agrees with this statement by inclusion of > the #extension preprocessor directive. > > I am simply advocating the expansion of #extension's namespace to the > namespace of the Web, URI. I admit defeat. I still don't have the faintest idea what this is supposed to solve. I'm trying to find the use-case in there, and you sure sound like it's perfectly obvious, but darned if I can find it. -------------- next part -------------- An HTML attachment was scrubbed... 
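For readers weighing the two positions, this is roughly what the existing, non-URI mechanism defended above looks like in practice today; the shader body is a trivial illustration and an existing WebGL context gl is assumed:

    var derivatives = gl.getExtension("OES_standard_derivatives");
    if (derivatives) {
      // The GLSL directive uses the GL_-prefixed form of the extension name.
      var fragmentSource =
        "#extension GL_OES_standard_derivatives : enable\n" +
        "precision mediump float;\n" +
        "varying float height;\n" +
        "void main() {\n" +
        "  // fwidth() is only available once the directive above enables it.\n" +
        "  gl_FragColor = vec4(vec3(fwidth(height)), 1.0);\n" +
        "}\n";
      var shader = gl.createShader(gl.FRAGMENT_SHADER);
      gl.shaderSource(shader, fragmentSource);
      gl.compileShader(shader);
    }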
URL: From kbr...@ Thu May 17 13:50:40 2012 From: kbr...@ (Kenneth Russell) Date: Thu, 17 May 2012 13:50:40 -0700 Subject: [Public WebGL] Restricting WebGL exposure of OES_depth_texture Message-ID: WebGL community, It looks like there may be some difficulty supporting the OES_depth_texture extension in its current form in the ANGLE emulation library. In short, uploading of depth information from the CPU during TexImage2D and TexSubImage2D may be difficult. (The issues actually occur because of interactions between OES_depth_texture and OES_packed_depth_stencil, both of which ANGLE supports, but right now it doesn't look like it's possible to guarantee future support for uploading data to depth textures.) To make cross-platform support more feasible, we would like to impose a restriction in WebGL's exposure of OES_depth_texture that, when calling TexImage2D for DEPTH_COMPONENT textures, the only ArrayBufferView argument supported is NULL. These textures could still be rendered to and sampled from; these are the most common use cases. Some implications of this restriction would be: - It wouldn't be legal to call texSubImage2D for these textures. - It wouldn't be legal to call the texImage2D entry points taking DOM elements and ImageData for these textures. Any comments or concerns about imposing these restrictions? Any objections to making the change in-place to http://www.khronos.org/registry/webgl/extensions/OES_depth_texture/ ? A separate WEBGL_depth_texture extension could be defined instead, but I think it would be confusing to have two copies of similar extensions, only one of which is actually implementable everywhere. If it turns out in the future that ANGLE can fully support OES_depth_texture, the restriction could be removed and any associated conformance tests updated. Thanks, -Ken -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Thu May 17 14:10:17 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Thu, 17 May 2012 23:10:17 +0200 Subject: [Public WebGL] Restricting WebGL exposure of OES_depth_texture In-Reply-To: References: Message-ID: On Thu, May 17, 2012 at 10:50 PM, Kenneth Russell wrote: > It looks like there may be some difficulty supporting the > OES_depth_texture extension in its current form in the ANGLE emulation > library. In short, uploading of depth information from the CPU during > TexImage2D and TexSubImage2D may be difficult. (The issues actually occur > because of interactions between OES_depth_texture and > OES_packed_depth_stencil, both of which ANGLE supports, but right now it > doesn't look like it's possible to guarantee future support for uploading > data to depth textures.) > Do I understand it right that if you enable both packed depth stencil and depth_texture and then you'll get errors/garbage/something like that? I'd assume the same would be the case for all OpenGL implementations, correct? Is the issue handled by the other standards in some way? > To make cross-platform support more feasible, we would like to impose a > restriction in WebGL's exposure of OES_depth_texture that, when calling > TexImage2D for DEPTH_COMPONENT textures, the only ArrayBufferView argument > supported is NULL. > > These textures could still be rendered to and sampled from; these are the > most common use cases. > I don't think it'd be an essential issue. 
In theory uploading depth could be of benefit for impostors, but at the same time it be a very slow replacement for writing gl_FragDepth, therefore making this usecase unlikely with uploads. It would be conceivable that you have an offline/statically compiled depth in a preprendered static scene and you substitute parts of a scene with depth-tested realtime 3D, however this technique too would require a per-frame upload since you can't clear the depth texture to previous values, and depth test without write is not generally useful to write out geometry. > Some implications of this restriction would be: > > - It wouldn't be legal to call texSubImage2D for these textures. > - It wouldn't be legal to call the texImage2D entry points taking DOM > elements and ImageData for these textures. > Sounds fine to me. > Any comments or concerns about imposing these restrictions? Any objections > to making the change in-place to > http://www.khronos.org/registry/webgl/extensions/OES_depth_texture/ ? A > separate WEBGL_depth_texture extension could be defined instead, but I > think it would be confusing to have two copies of similar extensions, only > one of which is actually implementable everywhere. If it turns out in the > future that ANGLE can fully support OES_depth_texture, the restriction > could be removed and any associated conformance tests updated. > I can't comment if another prefix would be required, I don't think I'd care either way. Although it would seem slightly more correct to use WEBGL_, since it does change a standard behavior, I'm not sure if the sake of correctness in this particular case is adding or subtracting usefulness. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jef...@ Thu May 17 14:10:19 2012 From: jef...@ (Jeff Russell) Date: Thu, 17 May 2012 17:10:19 -0400 Subject: [Public WebGL] Restricting WebGL exposure of OES_depth_texture In-Reply-To: References: Message-ID: Totally fine by me. I was actually under the impression that a lot of graphics APIs don't support this for the reason that depth buffer layouts are sometimes secret driver territory and/or paired with hierarchy info. On Thu, May 17, 2012 at 4:50 PM, Kenneth Russell wrote: > WebGL community, > > It looks like there may be some difficulty supporting the > OES_depth_texture extension in its current form in the ANGLE emulation > library. In short, uploading of depth information from the CPU during > TexImage2D and TexSubImage2D may be difficult. (The issues actually occur > because of interactions between OES_depth_texture and > OES_packed_depth_stencil, both of which ANGLE supports, but right now it > doesn't look like it's possible to guarantee future support for uploading > data to depth textures.) > > To make cross-platform support more feasible, we would like to impose a > restriction in WebGL's exposure of OES_depth_texture that, when calling > TexImage2D for DEPTH_COMPONENT textures, the only ArrayBufferView argument > supported is NULL. > > These textures could still be rendered to and sampled from; these are the > most common use cases. > > Some implications of this restriction would be: > > - It wouldn't be legal to call texSubImage2D for these textures. > - It wouldn't be legal to call the texImage2D entry points taking DOM > elements and ImageData for these textures. > > Any comments or concerns about imposing these restrictions? Any objections > to making the change in-place to > http://www.khronos.org/registry/webgl/extensions/OES_depth_texture/ ? 
A > separate WEBGL_depth_texture extension could be defined instead, but I > think it would be confusing to have two copies of similar extensions, only > one of which is actually implementable everywhere. If it turns out in the > future that ANGLE can fully support OES_depth_texture, the restriction > could be removed and any associated conformance tests updated. > > Thanks, > > -Ken > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Thu May 17 14:21:19 2012 From: kbr...@ (Kenneth Russell) Date: Thu, 17 May 2012 14:21:19 -0700 Subject: [Public WebGL] Restricting WebGL exposure of OES_depth_texture In-Reply-To: References: Message-ID: On Thu, May 17, 2012 at 2:10 PM, Florian B?sch wrote: > On Thu, May 17, 2012 at 10:50 PM, Kenneth Russell wrote: >> >> It looks like there may be some difficulty supporting the >> OES_depth_texture extension in its current form in the ANGLE emulation >> library. In short, uploading of depth information from the CPU during >> TexImage2D and TexSubImage2D may be difficult. (The issues actually occur >> because of interactions between OES_depth_texture and >> OES_packed_depth_stencil, both of which ANGLE supports, but right now it >> doesn't look like it's possible to guarantee future support for uploading >> data to depth textures.) > > Do I understand it right that if you enable both packed depth stencil and > depth_texture and then you'll get errors/garbage/something like that? I'd > assume the same would be the case for all OpenGL implementations, correct? > Is the issue handled by the other standards in some way? Actually, the GLES conformance suite requires that OpenGL ES implementations exposing OES_depth_texture must support uploading of packed depth and stencil values. Unfortunately, it looks like uploading stencil data is not supported by the D3D mechanisms that ANGLE would use. (I don't know whether this would be supported portably on D3D versions later than 9.) >> To make cross-platform support more feasible, we would like to impose a >> restriction in WebGL's exposure of OES_depth_texture that, when calling >> TexImage2D for DEPTH_COMPONENT textures, the only ArrayBufferView argument >> supported is NULL. >> >> These textures could still be rendered to and sampled from; these are the >> most common use cases. > > I don't think it'd be an essential issue. In theory uploading depth could be > of benefit for impostors, but at the same time it be a very slow replacement > for writing gl_FragDepth, therefore making this usecase unlikely with > uploads. It would be conceivable that you have an offline/statically > compiled depth in a preprendered static scene and you substitute parts of a > ?scene with depth-tested realtime 3D, however this technique too would > require a per-frame upload since you can't clear the depth texture to > previous values, and depth test without write is not generally useful to > write out geometry. > >> >> Some implications of this restriction would be: >> >> ?- It wouldn't be legal to call texSubImage2D for these textures. >> ?- It wouldn't be legal to call the texImage2D entry points taking DOM >> elements and ImageData for these textures. > > Sounds fine to me. > >> >> Any comments or concerns about imposing these restrictions? Any objections >> to making the change in-place >> to?http://www.khronos.org/registry/webgl/extensions/OES_depth_texture/ ? 
A >> separate WEBGL_depth_texture extension could be defined instead, but I think >> it would be confusing to have two copies of similar extensions, only one of >> which is actually implementable everywhere. If it turns out in the future >> that ANGLE can fully support OES_depth_texture, the restriction could be >> removed and any associated conformance tests updated. > > I can't comment if another prefix would be required, I don't think I'd care > either way. Although it would seem slightly more correct to use WEBGL_, > since it does change a standard behavior, I'm not sure if the sake of > correctness in this particular case is adding or subtracting usefulness. I agree that it would be more correct to prefix the extension with WEBGL_. However, if it turns out that we can remove the restriction in the future, it would make more sense to leave it as OES_depth_texture. This is the direction I'm leaning. Thanks for the feedback. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Thu May 17 14:23:02 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Thu, 17 May 2012 23:23:02 +0200 Subject: [Public WebGL] Restricting WebGL exposure of OES_depth_texture In-Reply-To: References: Message-ID: On Thu, May 17, 2012 at 11:21 PM, Kenneth Russell wrote: > I agree that it would be more correct to prefix the extension with > WEBGL_. However, if it turns out that we can remove the restriction in > the future, it would make more sense to leave it as OES_depth_texture. > This is the direction I'm leaning. > There's that. I was also thinking that if you have two extensions covering the identical functionality, they could come into conflict if both are requested, and one shadows the other. -------------- next part -------------- An HTML attachment was scrubbed... URL: From toj...@ Thu May 17 14:58:42 2012 From: toj...@ (Brandon Jones) Date: Thu, 17 May 2012 14:58:42 -0700 Subject: [Public WebGL] Restricting WebGL exposure of OES_depth_texture In-Reply-To: References: Message-ID: Sounds good to me. Florian's static scene with offline depth is about the only reasonable use that I can come up with for this, and I can think of several hacks to work around that case off the top of my head. The reality is that most of developers want depth textures for shadow maps and post process effects and not much else. Anything that gets us to stop packing depth into RGBA! I'll also throw my vote in for keeping the OES prefix instead of creating a new one. I can see the confused Stack Overflow posts already if there were two nearly-identical extensions... -Brandon On Thu, May 17, 2012 at 2:23 PM, Florian B?sch wrote: > On Thu, May 17, 2012 at 11:21 PM, Kenneth Russell wrote: > >> I agree that it would be more correct to prefix the extension with >> WEBGL_. However, if it turns out that we can remove the restriction in >> the future, it would make more sense to leave it as OES_depth_texture. >> This is the direction I'm leaning. >> > There's that. I was also thinking that if you have two extensions covering > the identical functionality, they could come into conflict if both are > requested, and one shadows the other. > -------------- next part -------------- An HTML attachment was scrubbed... 
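To illustrate the use case the proposed restriction keeps working (allocate a depth texture with a null ArrayBufferView, render into it, sample it later), here is a minimal sketch. It queries both names only because the naming question is still open in this thread; the sizes and parameters are arbitrary and an existing WebGL context gl is assumed:

    var depthExt = gl.getExtension("OES_depth_texture") ||
                   gl.getExtension("WEBGL_depth_texture");
    if (depthExt) {
      var size = 1024;
      var depthTex = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, depthTex);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
      // A null ArrayBufferView is the only upload path the restriction would allow.
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT, size, size, 0,
                    gl.DEPTH_COMPONENT, gl.UNSIGNED_SHORT, null);
      var fbo = gl.createFramebuffer();
      gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
      gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
                              gl.TEXTURE_2D, depthTex, 0);
      // ... attach a color target, render the shadow casters into fbo, then bind
      // depthTex as a sampler in the lighting pass.
    }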
URL: From cal...@ Thu May 17 18:20:21 2012 From: cal...@ (Mark Callow) Date: Fri, 18 May 2012 10:20:21 +0900 Subject: [Public WebGL] Restricting WebGL exposure of OES_depth_texture In-Reply-To: References: Message-ID: <4FB5A3D5.1080909@hicorp.co.jp> On 18/05/2012 06:58, Brandon Jones wrote: > I can see the confused Stack Overflow posts already if there were two > nearly-identical extensions... As opposed to confusion caused by having 2 nearly identical extensions sharing the same name? I think the names should be different so that implementations not using Angle can expose the real OES_depth_texture extension. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben...@ Thu May 17 18:53:53 2012 From: ben...@ (Ben Vanik) Date: Thu, 17 May 2012 18:53:53 -0700 Subject: [Public WebGL] Restricting WebGL exposure of OES_depth_texture In-Reply-To: <4FB5A3D5.1080909@hicorp.co.jp> References: <4FB5A3D5.1080909@hicorp.co.jp> Message-ID: As a developer having to deal with extensions, I'd prefer that the browser vendors pick a behavior and stick with that for awhile. If no major use cases can be thought of for the missing functionality when running under ANGLE, I'd much rather have only that one extension to check for/query/handle in my tool chain. If both were exposed right now it'd essentially be fragmenting on platform (browsers with ANGLE, aka Windows, vs. those without, aka everything else), and that's annoying. On Thu, May 17, 2012 at 6:20 PM, Mark Callow wrote: > > > On 18/05/2012 06:58, Brandon Jones wrote: > > I can see the confused Stack Overflow posts already if there were two > nearly-identical extensions... > > As opposed to confusion caused by having 2 nearly identical extensions > sharing the same name? > > I think the names should be different so that implementations not using > Angle can expose the real OES_depth_texture extension. > > Regards > > -Mark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kos...@ Sat May 19 21:58:32 2012 From: kos...@ (David Sheets) Date: Sat, 19 May 2012 21:58:32 -0700 Subject: [Public WebGL] WebGL URI Extension Proposal In-Reply-To: References: <4FB45E6A.5090501@hicorp.co.jp> Message-ID: On Thu, May 17, 2012 at 12:18 PM, Florian B?sch wrote: > On Thu, May 17, 2012 at 8:07 PM, David Sheets wrote: >> >> You are correct that this is a misrepresentation of my position. My >> position is actually: >> >> "When tools processing formal languages wish to describe and consume >> source using language extensions, a standard method should be used." >> >> The OpenGL ES WG clearly agrees with this statement by inclusion of >> the #extension preprocessor directive. >> >> I am simply advocating the expansion of #extension's namespace to the >> namespace of the Web, URI. > > I admit defeat. I still don't have the faintest idea what this is supposed > to solve. I'm trying to find the use-case in there, and you sure sound like > it's perfectly obvious, but darned if I can find it. The use case is declarative language extension in a Web environment. The URI extension represents the understanding that the official WebGL extensions have aliases/synonyms in the URI namespace. The official Web names (URI) of the official extensions are the real names of those extensions in a global context. Acceptance/rejection of the extension is not zero-sum. 
If something like URI is standardized, the most important result will be the ability to standardly represent URIs and URLs in WebGLSL preprocessor source. This is important because WebGL shaders are Web resources and should conform to the shading language standard. If an extension to the language has been used, the shader source is still a resource that is still nearly WebGLSL and should be served with the WebGLSL media type (with the extension noted). Being able to declare extension use by URI in a standard way costs very little to implementers (as I attempted to demonstrate) and makes the shading language an actual Web language (HTML, CSS, JS, XML can all represent URIs). The fundamental components of the Web are resources and links. WebGLSL provides neither so far. This extension is an evolutionary adaptation of *Web*GL to the Web environment and will be a competitive advantage of WebGL over competing graphics stacks. To use a previous city planning analogy, this extension is like a municipal zoning ordinance for WebGLSL meta-programmers. Without this ordinance, meta-programmers will be constructing de facto lock-in for every tool or tool-chain. We will all suffer when some WebGLSL tool becomes popular and leaves its extensions implicit without a commonly agreed way to annotate them if we desire (the tool is not required to emit or consume URI extension declarations but we should be able to annotate extended source in a standard manner). Neither humans nor machines will be able to understand exactly what semantics are intended in a given implicitly extended shader resource. The URI extension proposal makes language extensions into Web names and resources through the use of links. It provides the language a means to grow in an orderly fashion and enables tools which understand many different standard libraries, syntactic and semantic extensions, and source verifications. It reduces the cost and complexity of customizing WebGLSL to the job at hand by representing extensions via an already ubiquitous globally federated namespace. The #extension directive is the natural place in WebGLSL to declare extensions being used in a shader source listing. I have a number of implementations under development that use it for this purpose. This is the purpose for which it was designed. The conversation I would like to have regards not whether something like the URI extension is necessary (it is, doesn't hinder anyone if it exists and is independently implementable anyway) but rather what manner of integration between WebGL and RFC 3986 is most agreeable and could be standardized to prevent a Babel-esque tragedy (I'm looking at you, IE). Do you disagree with the syntax I have chosen? Do you have concerns regarding source comment conflicts? Allowable alphabets? Do you think relative URI references should be allowed or are absolute references safer? They may twist the words "Open" and "Free" and "Standard" but I will not stand for the perversion of "Web". Web is a philosophy and cannot be achieved by nominal fiat alone no matter how titanic. 
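As a small, concrete illustration of the machine comprehension being argued for here, a tool could surface whatever #extension declarations appear in a shader string with something like the sketch below; this is illustrative only and not part of the proposal text:

    // Collect "#extension <name> : <behavior>" directives from shader source.
    // Behaviors per the GLSL ES preprocessor are require, enable, warn, disable.
    function listDeclaredExtensions(source) {
      var pattern = /^[ \t]*#[ \t]*extension[ \t]+(\S+)[ \t]*:[ \t]*(require|enable|warn|disable)/gm;
      var declarations = [];
      var match;
      while ((match = pattern.exec(source)) !== null) {
        declarations.push({ name: match[1], behavior: match[2] });
      }
      return declarations;
    }

    // e.g. listDeclaredExtensions(fragmentSource) might return
    // [{ name: "GL_OES_standard_derivatives", behavior: "enable" }]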
David Sheets ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kos...@ Sat May 19 22:07:14 2012 From: kos...@ (David Sheets) Date: Sat, 19 May 2012 22:07:14 -0700 Subject: [Public WebGL] WebGLSL Media Type Proposal Message-ID: Hello, world! I would like to propose the IANA registration of an official WebGLSL media type by Khronos and the WebGL WG. WebGLSL is a new language for the Web and, like the other Web languages, should have an associated media type for use in protocols like HTTP and tags like , so why exactly didn't we do that? -------------- next part -------------- An HTML attachment was scrubbed... URL: From gle...@ Sun May 20 09:07:32 2012 From: gle...@ (Glenn Maynard) Date: Sun, 20 May 2012 11:07:32 -0500 Subject: [Public WebGL] WebGLSL Media Type Proposal In-Reply-To: References: <4FB8FB81.40908@thj.no> <774977033.17275669.1337528183632.JavaMail.root@mozilla.com> Message-ID: HTML This may not be JS-interpreted code, but it is GLSL code. It would be nice to be able to pass this element (or similar) to shaderSource(). For smaller demos, it's much nicer to keep the GLSL code alongside the WebGL code, since they're pretty tightly entwined because of attrib arrays and uniforms. For larger or more complicated apps, it's clear that separation is preferable. However, for my simple ping-pong framebuffer-texture implementation of Conway's Life, if I had a file for each shader, it would need an extra four files, each of which contains only a handful of lines. (Not to mention the requirement of a local http server, which most definitely negatively impacts the pick-up-and-hack capabilities of WebGL) -Jeff ----- Original Message ----- From: "Florian Bösch" To: "Thor Harald Johansen" Cc: "public webgl" Sent: Sunday, May 20, 2012 9:43:30 AM Subject: Re: [Public WebGL] WebGLSL Media Type Proposal On Sun, May 20, 2012 at 6:23 PM, Thor Harald Johansen < thj...@ > wrote: This is a false dichotomy. JavaScript strings are not the issue. Having to pass quoted and escaped string _literals_ during initial development is the issue, because it's a plain PITA. I personally never do that anyway, and I'm not using script tags. I pack my shaders up into JSONs or binary data arrays and load them that way. I would argue that no data has any meaning without a context and an application to process it, as is the case with the numerous types of embedded content that has no meaning without the required plugin. One of the core design choices of HTML/DOM is that any kind of resource element you can place in there has a meaning the browser can handle. - is displayed -