From din...@ Mon Dec 1 13:58:32 2014
From: din...@ (Dean Jackson)
Date: Tue, 2 Dec 2014 08:58:32 +1100
Subject: [Public WebGL] WEBGL_debug_renderer_info
In-Reply-To: References: Message-ID:

> On 28 Nov 2014, at 8:15 pm, Florian Bösch wrote:
>
> The extension WEBGL_debug_renderer_info is implemented by Chrome, Internet Explorer and Opera on any platform that they run WebGL on.
>
> It's currently not implemented by Firefox and Safari.
>
> Previous interest in having this extension exposed was registered among others by the Google Maps team (it helps them deliver a better maps experience).
>
> Previous resistance to exposing this extension was registered by Mozilla and Apple on privacy grounds.
>
> Jeff Gilbert indicated to me that Mozilla might be willing to expose the extension.
>
> Is there any progress or change on the thinking on the part of Mozilla/Apple regarding this extension?

No change in our thinking regarding whether or not this *should* be exposed (we still think it is a bad idea). However, now that 50% of the browsers are exposing it, our hand is somewhat forced. If sites start optimising based around this information, then WebKit and Mozilla *have* to expose it, otherwise they get penalised. I think this is pretty sad.

Dean

-----------------------------------------------------------
You are currently subscribed to public_webgl...@
To unsubscribe, send an email to majordomo...@ with the following command in the body of your email:
unsubscribe public_webgl
-----------------------------------------------------------

From rog...@ Mon Dec 1 17:49:10 2014
From: rog...@ (Roger Fong)
Date: Mon, 1 Dec 2014 17:49:10 -0800
Subject: [Public WebGL] Question about renderBufferStorageMultisample
Message-ID:

Hello all,

I'm trying to implement renderBufferStorageMultisample as a WebGL 1 extension. I have 2 questions.
a) The first is more of a general WebGL question: if I take a test site that uses renderBufferStorage and replace it with renderBufferStorageMultisample (where samples > 0), should the test site still be functional? Or is there additional GL setup that has to be done beforehand in order to make the call work?

b) A question about the WebGL 2.0 conformance tests: are there any existing conformance tests testing renderBufferStorageMultisample? I couldn't find any in the WebGL 2.0 conformance test suite, but maybe I missed something.

Much appreciated!
Roger Fong

From kbr...@ Mon Dec 1 20:28:06 2014
From: kbr...@ (Kenneth Russell)
Date: Mon, 1 Dec 2014 20:28:06 -0800
Subject: [Public WebGL] Question about renderBufferStorageMultisample
In-Reply-To: References: Message-ID:

On Mon, Dec 1, 2014 at 5:49 PM, Roger Fong wrote:
> Hello all,
>
> I'm trying to implement renderBufferStorageMultisample as a WebGL 1 extension. I have 2 questions.
>
> a) The first is more of a general WebGL question: if I take a test site that uses renderBufferStorage and replace it with renderBufferStorageMultisample (where samples > 0), should the test site still be functional? Or is there additional GL setup that has to be done beforehand in order to make the call work?

I don't mean to discourage your work, but this is no small task from either a specification or an implementation standpoint. It would be necessary to spec out a WebGL 1 extension combining the semantics of both the ANGLE_framebuffer_blit and ANGLE_framebuffer_multisample extensions. Users' code also has to be modified to take advantage of multisampled renderbuffers. I think the effort would be better spent getting a WebGL 2 implementation ready, which will include this functionality in the core API.

> b) A question about the WebGL 2.0 conformance tests: are there any existing conformance tests testing renderBufferStorageMultisample?
> I couldn't find any in the WebGL 2.0 conformance test suite, but maybe I missed something.

Not yet, but we hope they'll come fairly quickly with other ongoing work on the WebGL 2.0 conformance suite. Any contributions in this area would also be appreciated.

Thanks,
-Ken

From khr...@ Tue Dec 2 00:40:36 2014
From: khr...@ (Gregg Tavares)
Date: Tue, 2 Dec 2014 00:40:36 -0800
Subject: [Public WebGL] DeleteShader and DetachShader spec ambiguity and implementation divergence
Message-ID:

So this stackoverflow question came up:

http://stackoverflow.com/questions/27237696/webgl-detach-and-delete-shaders-after-linking

The problem is 2-fold:

#1) The browsers are not consistent in their behavior

#2) The OpenGL ES 2.0 spec doesn't seem particularly clear on what's supposed to happen.

Basically the poster is claiming you should be able to do this:

    p = gl.createProgram();
    gl.attachShader(p, someVertexShader);
    gl.attachShader(p, someFragmentShader);
    gl.linkProgram(p);
    gl.detachShader(p, someVertexShader);
    gl.detachShader(p, someFragmentShader);
    gl.deleteShader(someVertexShader);
    gl.deleteShader(someFragmentShader);
    gl.useProgram(p);

And now 'p' should still point to a valid program that you can call 'gl.useProgram' on or `gl.getUniformLocation` etc. This works in Chrome. It fails in Safari. According to the OP it also fails in Firefox, but it seemed to work for me.

But, reading the spec, it's not clear what's supposed to happen. AFAICT the spec says:

    While a valid program object is in use, applications are free to modify
    attached shader objects, compile attached shader objects, attach
    additional shader objects, and detach shader objects.
    These operations do not affect the link status or executable code of
    the program object.

But "program object is in use" is ambiguous. It could be interpreted as the "currentProgram", as in the one currently installed with gl.useProgram.

So

(a) WebGL needs to make it clear what the correct behavior is, and

(b) a conformance test needs to be written to check that behavior, which includes calling useProgram with the program in question, getting locations from it, and trying to render with it, checking for success or failure, whichever is decided is the correct behavior.

From pya...@ Tue Dec 2 01:35:51 2014
From: pya...@ (Florian Bösch)
Date: Tue, 2 Dec 2014 10:35:51 +0100
Subject: [Public WebGL] DeleteShader and DetachShader spec ambiguity and implementation divergence
In-Reply-To: References: Message-ID:

I'd also suggest that clarification of the behaviour be added to the not yet finalized OpenGL ES minor revisions, i.e. 2.0.26, 3.0.5 and 3.1.1.

On Tue, Dec 2, 2014 at 9:40 AM, Gregg Tavares wrote:
> [...]

From khr...@ Tue Dec 2 05:01:43 2014
From: khr...@ (Mark Callow)
Date: Tue, 2 Dec 2014 22:01:43 +0900
Subject: [Public WebGL] DeleteShader and DetachShader spec ambiguity and implementation divergence
In-Reply-To: References: Message-ID:

> On Dec 2, 2014, at 6:35 PM, Florian Bösch wrote:
>
> I'd also suggest that clarification of the behaviour be added to the not yet finalized OpenGL ES minor revisions, i.e. 2.0.26, 3.0.5 and 3.1.1.

Yes. Please file a bug in the Khronos public bugzilla.

Regards

-Mark

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 495 bytes
Desc: Message signed with OpenPGP using GPGMail
URL:

From pya...@ Tue Dec 2 06:36:25 2014
From: pya...@ (Florian Bösch)
Date: Tue, 2 Dec 2014 15:36:25 +0100
Subject: [Public WebGL] DeleteShader and DetachShader spec ambiguity and implementation divergence
In-Reply-To: References: Message-ID:

On Tue, Dec 2, 2014 at 2:01 PM, Mark Callow wrote:
> Yes. Please file a bug in the Khronos public bugzilla.

https://www.khronos.org/bugzilla/show_bug.cgi?id=1265

From Raf...@ Tue Dec 2 12:44:01 2014
From: Raf...@ (Rafael Cintron)
Date: Tue, 2 Dec 2014 20:44:01 +0000
Subject: [Public WebGL] DeleteShader and DetachShader spec ambiguity and implementation divergence
In-Reply-To: References: Message-ID:

I agree the "While a valid program is in use . . ." phrase in the spec is confusing.

The conformance test that comes closest to testing the behavior in the StackOverflow question is https://www.khronos.org/registry/webgl/sdk/tests/conformance/programs/program-test.html. Unfortunately, it first calls useProgram before performing the questionable operations, so it doesn't directly test this scenario. We should add conformance tests to clarify.

Reading the entire section of the OpenGL spec as a whole, it's clear that LinkProgram creates an "executable" out of the current state of the program. Calling UseProgram merely installs the executable as the current rendering state of the context if the program has been successfully linked. The only thing that can change the installed executable from that point forward is calling UseProgram with a different program or performing a successful relink of the current program. If the in-use program is unsuccessfully re-linked, you can still render with the installed "good" executable.
But the moment you replace the current program with a different program, you've essentially lost that good executable forever.

The attachShader, detachShader, and linkProgram APIs all take program and shader objects as parameters. That means you can perform these operations even though the program you pass to the functions is not the currently "in use" program set via useProgram. In fact, the program you pass to these functions can be completely different from the one currently "in use" via useProgram. So I think the sentence "While a valid program is in use . . ." is meant to clarify that you can change the shaders of the program independent of whether it is in use or not. Otherwise, attachShader, detachShader and linkProgram would not have taken programs as their first argument in the first place.

I tried all three of the examples in the StackOverflow question in IE, Chrome and Firefox on Windows.
Basically the poster is claiming you should be able to do this p = gl.createProgram(); gl.attachShader(p, someVertexShader); gl.attachShader(p, someFragmentShader); gl.linkProgram(p); gl.detachShader(p, someVertexShader); gl.detachShader(p, someFragmentShader); gl.deleteShader(p, someVertexShader); gl.deleteShader(p, someFragmentShader); gl.useProgram(p); And now 'p' should still point to a valid program that you can call 'gl.useProgram' on or `gl.getUniformLocation` etc. This works in Chrome. It fails in Safari. According to the OP it also fails in Firefox but it seemed to work for me. But, reading the spec it's not clear what's supposed to happen. AFAICT the spec says While a valid program object is in use, applications are free to modify attached shader objects, compile attached shader objects, attach additional shader objects, and detach shader objects. These operations do not affect the link status or executable code of the program object. But "program object is in use" is ambiguous. It could be interpreted as the "currentProgram" as in the one currently installed with gl.useProgram So (a) WebGL needs to make it clear what the correct behavior is and (b) a conformance test needs to be written to check that behavior which includes calling useProgram with the program in question, getting locations from it and trying to render with it and checking for success or failure whichever is decided is the correct behavior. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dko...@ Tue Dec 2 13:07:41 2014 From: dko...@ (Daniel Koch) Date: Tue, 2 Dec 2014 21:07:41 +0000 Subject: [Public WebGL] DeleteShader and DetachShader spec ambiguity and implementation divergence In-Reply-To: References: Message-ID: Rafael, +1 Yes, your understanding of the spec below is correct (at least it matches mine!). I seem to recall implementing it that way in ANGLE as well... 
-Daniel

On 2014-12-02 3:44 PM, "Rafael Cintron" wrote:
> [...]

From Fra...@ Tue Dec 2 13:26:40 2014
From: Fra...@ (Frank Olivier)
Date: Tue, 2 Dec 2014 21:26:40 +0000
Subject: [Public WebGL] WEBGL_debug_renderer_info
In-Reply-To: References: Message-ID:

FWIW, there has been some legislative activity wrt fingerprinting recently:
http://www.theguardian.com/technology/2014/nov/28/europe-privacy-war-websites-silently-tracking-users
[Insert cavalcade of caveats wrt 'obtain the valid consent of the user' here]

http://en.wikipedia.org/wiki/Device_fingerprint
You could perhaps reduce diversity here by only exposing exact renderer info for older/problematic* GPUs/drivers.

*As reported by content creators.

-----Original Message-----
From: owners-public_webgl...@ [mailto:owners-public_webgl...@] On Behalf Of Dean Jackson
Sent: Monday, December 1, 2014 1:59 PM
To: Florian Bösch
Cc: public webgl
Subject: Re: [Public WebGL] WEBGL_debug_renderer_info

[...]
From khr...@ Tue Dec 2 15:35:29 2014
From: khr...@ (Gregg Tavares)
Date: Tue, 2 Dec 2014 15:35:29 -0800
Subject: [Public WebGL] DeleteShader and DetachShader spec ambiguity and implementation divergence
In-Reply-To: References: Message-ID:

That's still not clear. While I agree that's the way it "should be", the spec is not clear enough IMO.

On Tue, Dec 2, 2014 at 12:44 PM, Rafael Cintron <Rafael.Cintron...@> wrote:
> I agree the "While a valid program is in use . . ." phrase in the spec is confusing.
>
> The conformance test that comes closest to testing the behavior in the StackOverflow question is https://www.khronos.org/registry/webgl/sdk/tests/conformance/programs/program-test.html. Unfortunately, it first calls useProgram before performing the questionable operations, so it doesn't directly test this scenario. We should add conformance tests to clarify.
>
> Reading the entire section of the OpenGL spec as a whole, it's clear that LinkProgram creates an "executable" out of the current state of the program. Calling UseProgram merely installs the executable as the current rendering state of the context if the program has been successfully linked. The only thing that can change the installed executable from that point forward is calling UseProgram with a different program or performing a successful relink of the current program. If the in-use program is unsuccessfully re-linked, you can still render with the installed "good" executable. But the moment you replace the current program with a different program, then you've essentially lost that good executable forever.

The issue isn't what UseProgram does. That part is clear. The issue is what AttachShader, DetachShader, etc. do when a program is not "in use".
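The contested sequence can be packaged as a self-contained probe; a minimal sketch, assuming `gl` is any WebGL context and `vs`/`fs` are two already-compiled shaders (names are illustrative, not from the thread):

```javascript
// Probe: does a program still report a successful link after its
// shaders are detached and deleted post-link? Under the reading that
// only LinkProgram touches the executable, both flags should be true.
function probeDetachDelete(gl, vs, fs) {
  const p = gl.createProgram();
  gl.attachShader(p, vs);
  gl.attachShader(p, fs);
  gl.linkProgram(p);
  const linked = !!gl.getProgramParameter(p, gl.LINK_STATUS);
  gl.detachShader(p, vs);
  gl.detachShader(p, fs);
  gl.deleteShader(vs);
  gl.deleteShader(fs);
  gl.useProgram(p);
  const stillLinked = !!gl.getProgramParameter(p, gl.LINK_STATUS);
  return { linked, stillLinked };
}
```

On the divergent browsers described above, `stillLinked` is where the results differ.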
Conceptually a program is something like this:

    class Program {
      Shaders* shaders[];  // all attached shaders
      Exe* exe;            // the last good executable
    };

And useProgram conceptually works something like this:

    class GLState {
      Program* currentProgram;
      Exe* currentExe;
    };

    glUseProgram(id) {
      Program* prg = getProgramById(id);
      state.currentProgram = prg;
      state.currentExe = prg->exe;
    }

With that mental model, the only thing that invalidates or changes the exe reference inside of Program is LinkProgram, which is apparently supposed to be implemented like this:

    glLinkProgram(id) {
      Program* prg = getProgramById(id);
      prg->exe = Link(prg->shaders);  // Good or bad, this will set exe.
      // If the link was good and this program is "in use",
      if (linkWasGood && state.currentProgram == prg) {
        state.currentExe = prg->exe;  // use the new exe.
      }
    }

It's also clear that calling DeleteShader is valid, because the shader won't actually be deleted until it is detached from the Program.

What's not clear is the behavior of DetachShader, AttachShader, etc. According to the spec, this would be a valid implementation:

    glDetachShader(id, sh) {
      Program* prg = getProgramById(id);
      prg->RemoveShaderFromArrayOfShaders(sh);
      // If this program is NOT in use, we can muck with
      // the executable of this program.
      if (state.currentProgram != prg) {
        prg->exe = NULL;  // <<==----------------------
      }
    }

The spec only claims the executable code of the program is not affected *IF THE PROGRAM IS IN USE*. And "in use" is clearly defined as the program most recently passed to UseProgram. Programs not "in use" are not covered by the paragraph mentioning Detaching and Attaching shaders. Rather, it seems like the spec should say something that effectively means:

    The only function that affects the executable code of the program object is LinkProgram

Note the OpenGL Wiki claims this behavior is expected.

> The attachShader, detachShader, and linkProgram APIs all take program and shader objects as parameters.
> [...]

From Raf...@ Tue Dec 2 18:14:44 2014
From: Raf...@ (Rafael Cintron)
Date: Wed, 3 Dec 2014 02:14:44 +0000
Subject: [Public WebGL] DeleteShader and DetachShader spec ambiguity and implementation divergence
In-Reply-To: References: Message-ID:

Before the paragraph in question, there were several paragraphs that talked about attaching and detaching shaders and linking programs. Nowhere in those paragraphs was there mention of those operations only mattering, or behaving differently, when the program is in use.

Here is the beginning of the previous paragraph:

    If a valid executable is created, it can be made part of the current
    rendering state with the command

        void UseProgram( uint program );

    This command will install the executable code as part of current
    rendering state if the program object program contains valid
    executable code, i.e. has been linked successfully.

The part in bold implies to me that linking is what creates the executable code. Subsequent paragraphs also talk about linking being the operation that causes executable code to be created for a program, or nullified on an unsuccessful link. Attach/Detach shaders only serve as inputs to the link process. Whether a program is in use only factors into what executable code is set in the render state.

Now all that being said, we know of at least one implementation that differs from the rest. In addition, you were confused by the wording of the spec, as were people on StackOverflow. That tells me the spec should be clarified in this regard. Your statement: "The only function that affects the executable code of the program object is LinkProgram" is a great suggestion. I would also clarify the confusing sentence to read: "Regardless of whether a program is in use or not, applications are free to modify attached shader objects, compile attached shader objects, attach additional shader objects, and detach shader objects".
This way, the subsequent sentence will also apply to used or unused programs: ?These operations do not affect the link status or executable code of the program object.? --Rafael From: Gregg Tavares [mailto:khronos...@] Sent: Tuesday, December 2, 2014 3:35 PM To: Rafael Cintron Cc: Gregg Tavares; public_webgl...@ Subject: Re: [Public WebGL] DeleteShader and DetachShader spec ambiguity and implementation divergence That's still not clear. While I agree that's the way it "should be" the spec is not clear enough IMO On Tue, Dec 2, 2014 at 12:44 PM, Rafael Cintron > wrote: I agree the ?While a valid program is in use . . .? phrase in the spec is confusing. The conformance test that comes closest to testing the behavior in the StackOverflow question is https://www.khronos.org/registry/webgl/sdk/tests/conformance/programs/program-test.html. Unfortunately, it first calls useProgram before performing the questionable operations so it doesn?t directly test this scenario. We should add conformance tests to clarify. Reading the entire section of the OpenGL spec as a whole, it?s clear that LinkProgram creates an ?executable? out of the current state of the program. Calling UseProgram merely installs the executable as the current rendering state of the context if the program has been successfully linked. The only thing that can change the installed executable from that point forward is calling UseProgram with a different program or performing a successful relink of the current program. If the in-use program is unsuccessfully re-linked, you can still render with the installed ?good? executable. But the moment you replace the current program with a different program, then you?ve essentially lost that good executable forever. The issue isn't what UseProgram does. That part is clear. The issue is what do AttachShader, DetachShader, etc...do when a program is not "in use". 
Conceptually a program is something like this:

    class Program {
      Shaders* shaders[];  // all attached shaders
      Exe* exe;            // the last good executable
    }

And useProgram conceptually works something like this:

    class GLState {
      Program* currentProgram;
      Exe* currentExe;
    }

    glUseProgram(id) {
      Program* prg = getProgramById(id);
      state.currentProgram = prg;
      state.currentExe = prg->exe;
    }

With that mental model, the only thing that invalidates or changes the exe reference inside of Program is LinkProgram, which is apparently supposed to be implemented like this:

    glLinkProgram(id) {
      Program* prg = getProgramById(id);
      prg->exe = Link(prg->shaders);  // Good or bad, this will set exe
      // If the link was good and this program is "in use"
      if (linkWasGood && state.currentProgram == prg) {
        state.currentExe = prg->exe;  // use the new exe.
      }
    }

It's also clear that calling DeleteShader is valid, because the shader won't actually be deleted until it is detached from the Program.

What's not clear is the behavior of DetachShader, AttachShader, etc. According to the spec, this would be a valid implementation:

    glDetachShader(id, sh) {
      Program* prg = getProgramById(id);
      prg->RemoveShaderFromArrayOfShaders(sh);
      // If this program is NOT in use we can muck with the executable of this program
      if (state.currentProgram != prg) {
        prg->exe = NULL;  // <<==----------------------
      }
    }

The spec only claims the executable code of the program is not affected IF THE PROGRAM IS IN USE. And "in use" is clearly defined as the program most recently passed to UseProgram. Programs not "in use" are not covered by the paragraph mentioning Detaching and Attaching shaders. Rather, it seems like the spec should say something that effectively means:

    The only function that affects the executable code of the program object is LinkProgram

Note the OpenGL Wiki claims this behavior is expected.

The attachShader, detachShader, and linkProgram APIs all take program and shader objects as parameters.
That means you can perform these operations even though the program you pass to the functions is not the currently "in use" program set via useProgram. In fact, the program you pass to these functions can be completely different from the one currently "in use" via useProgram. So I think the sentence "While a valid program is in use ..." is meant to clarify that you can change the shaders of the program independent of whether it is in use or not. Otherwise, attachShader, detachShader and linkProgram would not have taken programs as their first argument in the first place.

I tried all three of the examples in the StackOverflow question in IE, Chrome and Firefox on Windows. All browsers agree "prog1" can be rendered with at the end of each example. Since the last link of the program is a successful link, the executable created as a result of the link is the one that is used for rendering. Unless I am missing something in my understanding, I think this is the correct behavior according to the spec.

--Rafael

From: owners-public_webgl...@ [mailto:owners-public_webgl...@] On Behalf Of Gregg Tavares Sent: Tuesday, December 2, 2014 12:41 AM To: public_webgl...@ Subject: [Public WebGL] DeleteShader and DetachShader spec ambiguity and implementation divergence

So this stackoverflow question came up: http://stackoverflow.com/questions/27237696/webgl-detach-and-delete-shaders-after-linking

The problem is twofold:

#1) The browsers are not consistent in their behavior.
#2) The OpenGL ES 2.0 spec doesn't seem particularly clear on what's supposed to happen.
Basically the poster is claiming you should be able to do this:

    p = gl.createProgram();
    gl.attachShader(p, someVertexShader);
    gl.attachShader(p, someFragmentShader);
    gl.linkProgram(p);
    gl.detachShader(p, someVertexShader);
    gl.detachShader(p, someFragmentShader);
    gl.deleteShader(someVertexShader);
    gl.deleteShader(someFragmentShader);
    gl.useProgram(p);

And now 'p' should still point to a valid program that you can call 'gl.useProgram' on, or `gl.getUniformLocation`, etc. This works in Chrome. It fails in Safari. According to the OP it also fails in Firefox, but it seemed to work for me.

But reading the spec, it's not clear what's supposed to happen. AFAICT the spec says:

    While a valid program object is in use, applications are free to modify attached shader objects, compile attached shader objects, attach additional shader objects, and detach shader objects. These operations do not affect the link status or executable code of the program object.

But "program object is in use" is ambiguous. It could be interpreted as the "currentProgram", as in the one currently installed with gl.useProgram.

So (a) WebGL needs to make it clear what the correct behavior is, and (b) a conformance test needs to be written to check that behavior, which includes calling useProgram with the program in question, getting locations from it, trying to render with it, and checking for success or failure, whichever is decided is the correct behavior.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From pya...@ Tue Dec 2 23:30:06 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Wed, 3 Dec 2014 08:30:06 +0100 Subject: [Public WebGL] Re: retrograde webgl support levels In-Reply-To: References: Message-ID:

The first 2 days of December are in. Overall it's looking pretty good; iOS and Windows mobiles make rapid progress, but there are some trends developing to keep an eye on.
- Firefox on Windows is still continuing its downward trend, standing at 75%, down from 83% in September.
- Internet Explorer has hit a peak at 79.2%, down from 79.4% last month.
- Safari on OSX is about to hit a peak after making rapid advances: 14% in October, 29% in November (+100% relative), 32% in December (+10% relative).
- Android on smartphones is continuing its downtrend, standing at 58%, down from 66% in August. iOS smartphones now outrank Android WebGL support with 59.6%.
- Android tablets seem to be recovering from their nearly year-long slump (standing at 55.1%), but iOS tablets now outrank Android tablet WebGL support with 55.2%.

A very sore spot for WebGL support is game consoles, where it's not supported at all. I own a PS4 (which natively uses WebGL for its UI); I'm disappointed by Sony's browser because:

- No WebGL
- No fullscreen or pointerlock API
- No Web Audio
- No gamepads API (which is particularly puzzling, since gamepads are universally the connected input device on these machines)
- Doesn't score well on JS/Canvas benchmarks

This is puzzling because the PS4 is a very powerful machine, but one wouldn't think that when using it to browse the internet.

-------------- next part -------------- An HTML attachment was scrubbed...
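Returning to the DeleteShader/DetachShader thread above: the reading that only LinkProgram replaces a program's executable can be exercised as a plain-JavaScript model. This is an illustrative sketch only; `Program`, `state`, and every function here are made-up stand-ins for the mental model discussed in that thread, not the real WebGL API.

```javascript
// Toy model of the semantics under discussion: attach/detach never touch a
// program's executable; only linkProgram does. All names are illustrative.
class Program {
  constructor() {
    this.shaders = []; // currently attached shaders
    this.exe = null;   // executable produced by the last link
  }
}

const state = { currentProgram: null, currentExe: null };

function attachShader(prg, sh) { prg.shaders.push(sh); }          // exe untouched
function detachShader(prg, sh) {                                  // exe untouched
  prg.shaders = prg.shaders.filter(s => s !== sh);
}
function linkProgram(prg) {
  // The only operation that replaces the executable. Assume the link
  // always succeeds in this toy model.
  prg.exe = { linkedFrom: prg.shaders.slice() };
  if (state.currentProgram === prg) state.currentExe = prg.exe;
}
function useProgram(prg) {
  state.currentProgram = prg;
  state.currentExe = prg.exe; // install whatever the last link produced
}

// The sequence from the StackOverflow question:
const p = new Program();
attachShader(p, 'someVertexShader');
attachShader(p, 'someFragmentShader');
linkProgram(p);
detachShader(p, 'someVertexShader');
detachShader(p, 'someFragmentShader');
useProgram(p);

console.log(state.currentExe !== null); // true: the executable survives detaching
```

Under the reading Rafael argues for, a conformance test would assert exactly this: after link and detach, useProgram still installs a valid executable.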
URL: From pya...@ Wed Dec 3 02:42:08 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Wed, 3 Dec 2014 11:42:08 +0100 Subject: [Public WebGL] WEBGL_debug_renderer_info In-Reply-To: References: Message-ID:

On Tue, Dec 2, 2014 at 10:26 PM, Frank Olivier wrote:
> FWIW, there has been some legislative activity wrt fingerprinting recently:
> http://www.theguardian.com/technology/2014/nov/28/europe-privacy-war-websites-silently-tracking-users
> [Insert cavalcade of caveats wrt 'obtain the valid consent of the user' here]

You can have any 2 of these 3 things, but not all 3 together:

- A heterogeneous computing device and software ecosystem
- Good applications (that is, applications which are well adjusted to run on the given population of hardware)
- Privacy

If you pick a heterogeneous ecosystem and you want good applications, then you have to give applications a lot of information about where they run.

If you pick a heterogeneous ecosystem and privacy, you cannot have good applications, because everybody is just stabbing in the dark.

If you pick privacy and good applications, you cannot have a heterogeneous ecosystem, because you need to fix the platform to prevent app developers from stabbing in the dark (aka the iOS way).

Pushing for privacy, noble as it may seem, is directly sabotaging the quality of applications you will be able to get. Make your pick.

> http://en.wikipedia.org/wiki/Device_fingerprint
> You could perhaps reduce diversity here by only exposing exact renderer info for older/problematic* GPUs/drivers.
>
> *As reported by content creators.

It's not about older/problematic devices per se. This topic has been discussed extensively on this ML before.

- Support handling for users of your app that have problems. Even if they don't call you personally, or open a support ticket, you can still detect when they have an issue, and note down the GPU. Statistically speaking this helps a *lot*.
Because let's say one user contacts you to say that your app didn't work for him. You'd have to tell him how to find out his GPU, which is difficult. But now what: you have the GPU, and one user with a problem. You know that, say, 20% of your users have some problem, but which ones? Is it the same ones with that GPU? Is it something specific to that GPU? Should you file a conformance test for it? Should you contact a vendor? Should you get one of these GPUs and devise a workaround? How would you know? You don't. It could be that the bug has nothing to do with that specific GPU, that it's just coincidence, and that of, say, 10'000 people who use your app and get issues, exactly one person has that specific GPU and the other 9'999 have a different one. Great, now you've just wasted a ton of work to help exactly one user. Not that anything's inherently wrong with that, it's just not an efficient use of your support and development resources.
- Performance estimation: Do you offer a user the HD default or the SD default? Do they come with a GTX 980? HD default: you can allocate hundreds of megabytes of VRAM without a problem, and draw millions of triangles. Intel HD 4000? SD default, naturally, and cut back on everything. It's not a substitute for measuring actual performance and adjusting as necessary, and it isn't a substitute for letting the user make adjustments. But it is a good way to get some default that's least disappointing.
- And a bunch of other things.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From thu...@ Wed Dec 3 03:16:16 2014 From: thu...@ (Ben Adams) Date: Wed, 3 Dec 2014 11:16:16 +0000 Subject: [Public WebGL] WEBGL_debug_renderer_info In-Reply-To: References: Message-ID:

I set a cookie (and inform the user I set cookies if in Europe); can I now get the debug renderer?
If the user wants to block it, give them the option (with cookie privacy) to switch it off, informing them it may worsen the experience. Perhaps display a generic renderer in incognito/private mode.

Cross-domain messaging is a bigger privacy issue than the debug renderer, and you can't really prevent it short of also stopping OAuth/OpenID and crippling federated logins.

On 3 December 2014 at 10:42, Florian Bösch wrote:
> [...]
-------------- next part -------------- An HTML attachment was scrubbed...
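The "HD default or SD default" heuristic Florian describes could be sketched like this. The renderer strings and the tier table below are assumptions purely for illustration; in a real app the string would come from the WEBGL_debug_renderer_info extension where the browser exposes it, and the lists would be tuned against real support data.

```javascript
// Hedged sketch: pick a quality preset from an (unmasked) renderer string.
// In a real app the string might be obtained roughly like:
//   const ext = gl.getExtension('WEBGL_debug_renderer_info');
//   const renderer = ext ? gl.getParameter(ext.UNMASKED_RENDERER_WEBGL) : '';
function pickQualityPreset(renderer) {
  const r = (renderer || '').toLowerCase();
  // Crude example allow-lists; the real lists are an app-specific judgment.
  const fastHints = ['gtx', 'rtx', 'radeon r9', 'quadro'];
  const slowHints = ['intel hd', 'mali-4', 'adreno 2', 'powervr sgx'];
  if (fastHints.some(h => r.includes(h))) return 'hd';
  if (slowHints.some(h => r.includes(h))) return 'sd';
  return 'sd'; // unknown or masked renderer: least-disappointing default
}

console.log(pickQualityPreset('NVIDIA GeForce GTX 980')); // "hd"
console.log(pickQualityPreset('Intel HD Graphics 4000')); // "sd"
console.log(pickQualityPreset(''));                       // "sd" (extension unavailable)
```

As the thread notes, this is only a starting default, not a substitute for measuring actual frame times or for a user-facing quality setting.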
URL: From thu...@ Wed Dec 3 03:17:37 2014 From: thu...@ (Ben Adams) Date: Wed, 3 Dec 2014 11:17:37 +0000 Subject: [Public WebGL] WEBGL_debug_renderer_info In-Reply-To: References: Message-ID:

Oh, and iframes, redirects, XHR and general links.

On 3 December 2014 at 11:16, Ben Adams wrote:
> [...]
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From jsc...@ Wed Dec 3 03:19:37 2014 From: jsc...@ (Johannes Schmid) Date: Wed, 03 Dec 2014 12:19:37 +0100 Subject: [Public WebGL] WEBGL_debug_renderer_info In-Reply-To: References: Message-ID: <547EF1C9.3050105@esri.com>

This idea has probably been suggested/discussed before; just throwing it in in case it hasn't: when an app queries WebGL for sensitive information, the browser presents the user with a dialog box where the user can prevent the query from returning such information. Apps can fall back to conservative settings if the user declines, so everybody gets the choice between privacy and optimized performance.

Such dialog boxes already exist, e.g. for camera access. In this case, coming up with an easy-to-understand and concise formulation of the pros and cons may be quite hard, though.
Best,
Joe

On 03.12.2014 11:42, Florian Bösch wrote:
> [...]
From pya...@ Wed Dec 3 04:11:11 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Wed, 3 Dec 2014 13:11:11 +0100 Subject: [Public WebGL] WEBGL_debug_renderer_info In-Reply-To: <547EF1C9.3050105@esri.com> References: <547EF1C9.3050105@esri.com> Message-ID:

If you do such a dialog box, you can kiss browser statistics goodbye, and you won't get most statistics, even from your own site, on your own domain, without cross message posting. Sounds like fun wasting time meandering around in the dark. Enjoy your future.

On Wed, Dec 3, 2014 at 12:19 PM, Johannes Schmid wrote:
> [...]

-------------- next part -------------- An HTML attachment was scrubbed...
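Ben's "generic renderer in private mode" suggestion from earlier in the thread amounts to a masking layer in front of the real string. A hypothetical sketch; `maskedRenderer` and its consent inputs are invented names for illustration, not any browser's API:

```javascript
// Hypothetical masking policy: only reveal the real renderer string when the
// user has opted in and is not in private browsing.
function maskedRenderer(realRenderer, { userConsented, privateMode }) {
  if (privateMode || !userConsented) return 'Generic Renderer';
  return realRenderer;
}

console.log(maskedRenderer('AMD Radeon R9 290', { userConsented: true,  privateMode: false })); // "AMD Radeon R9 290"
console.log(maskedRenderer('AMD Radeon R9 290', { userConsented: true,  privateMode: true  })); // "Generic Renderer"
console.log(maskedRenderer('AMD Radeon R9 290', { userConsented: false, privateMode: false })); // "Generic Renderer"
```

The trade-off is exactly the one being argued: apps degrade to conservative defaults for masked users, and aggregate GPU statistics become incomplete.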
URL: From pya...@ Wed Dec 3 04:12:50 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Wed, 3 Dec 2014 13:12:50 +0100 Subject: [Public WebGL] WEBGL_debug_renderer_info In-Reply-To: References: <547EF1C9.3050105@esri.com> Message-ID:

In fact, while we're at it: why not suggest that the user-agent string shouldn't be sent, by default, unless the user clicks "yes" on every page? That sounds like a good idea, doesn't it?

On Wed, Dec 3, 2014 at 1:11 PM, Florian Bösch wrote:
> [...]

-------------- next part -------------- An HTML attachment was scrubbed...
From thu...@ Wed Dec 3 04:16:20 2014
From: thu...@ (Ben Adams)
Date: Wed, 3 Dec 2014 12:16:20 +0000
Subject: [Public WebGL] WEBGL_debug_renderer_info
In-Reply-To: <547EF1C9.3050105@esri.com>
References: <547EF1C9.3050105@esri.com>
Message-ID:

Camera access (perspective-wise, since that is often used in 3D) is way, way on the other side of the spectrum: way past confirming a user name and password, way past proving you have access to an email address, way past proving you know the numbers on the back and front of a credit card and so may be the account holder, way past DNA left at a scene. It's a stand-up-in-criminal-court level of evidence of who you are, what you are wearing and what you are doing, so yes, it should ask you!

Debug renderer info is more on the side of user agent, screen resolution and language; perhaps clean up the user agent so it doesn't have all the weirdness in it?

On 3 December 2014 at 11:19, Johannes Schmid wrote:
>
> This idea has probably been suggested/discussed before, just throwing it in in case it hasn't:
> When an app queries WebGL for sensitive information, the browser presents the user with a dialog box where the user can prevent the query from returning such information. Apps can fall back to conservative settings if they do, so everybody gets the choice between privacy and optimized performance.
>
> Such dialog boxes already exist, e.g. for camera access. In this case, coming up with an easy-to-understand and concise formulation of the pros and cons may be quite hard, though.
> Best,
> Joe
>
> On 03.12.2014 11:42, Florian Bösch wrote:
>> [...]
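Ben's aside about user-agent "weirdness" is easy to illustrate: extracting something as simple as a device model from today's user-agent strings takes fragile regex work. The sketch below is an illustration of that fragility; the `deviceModelFromUA` helper and its patterns are hypothetical and deliberately incomplete, not a robust parser.

```javascript
// Sketch: the kind of regex hack needed to pull a device model out of a
// user-agent string today. Patterns are illustrative, not exhaustive.
function deviceModelFromUA(ua) {
  // Android UAs embed the model between "Android x.y; " and ")" or " Build/".
  const android = ua.match(/Android [\d.]+; ([^);]+?)(?: Build\/|\))/);
  if (android) return android[1].trim();
  // iOS UAs only name the device family, not the exact model.
  const ios = ua.match(/\((iPhone|iPad|iPod)[;,]/);
  if (ios) return ios[1];
  return null; // unrecognized UA shape
}

const nexus5 =
  'Mozilla/5.0 (Linux; Android 4.4.4; Nexus 5 Build/KTU84P) AppleWebKit/537.36';
console.log(deviceModelFromUA(nexus5)); // "Nexus 5"
```

Every new UA shape that ships means another pattern, which is exactly the kind of maintenance burden a cleaned-up user agent would remove.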
From pya...@ Wed Dec 3 04:19:26 2014
From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=)
Date: Wed, 3 Dec 2014 13:19:26 +0100
Subject: [Public WebGL] WEBGL_debug_renderer_info
In-Reply-To:
References: <547EF1C9.3050105@esri.com>
Message-ID:

On Wed, Dec 3, 2014 at 1:16 PM, Ben Adams wrote:
> Debug renderer info is more on the side of user agent, screen resolution and language; perhaps clean up the user agent so it doesn't have all the weirdness in it?

Oh yes, please. Parsing user agents is a nightmare. Though it'd take a long time for such a change to percolate through the browsers, in 10-20 years' time we wouldn't have to do all those regex parsing hacks.

From jsc...@ Wed Dec 3 04:50:22 2014
From: jsc...@ (Johannes Schmid)
Date: Wed, 03 Dec 2014 13:50:22 +0100
Subject: [Public WebGL] WEBGL_debug_renderer_info
In-Reply-To:
References: <547EF1C9.3050105@esri.com>
Message-ID: <547F070E.4010503@esri.com>

I'm not suggesting that such a dialog box should be introduced to guard any information that is already exposed. As a developer of a high-performance app, I completely agree that the lack of information about the host has severe impacts on the UX at the moment. If more detailed information can be made always available (or perhaps opt-out, as Ben Adams suggested), that would be great in my opinion. But if there are strong arguments against making more information available, I would much prefer an opt-in solution to no solution at all.

Joe

On 03.12.2014 13:11, Florian Bösch wrote:
> If you do such a dialog box, you can kiss browser statistics goodbye, and you won't get most statistics, even from your own site, on your own domain, without cross message posting. Sounds like fun wasting time meandering around in the dark. Enjoy your future.
> On Wed, Dec 3, 2014 at 12:19 PM, Johannes Schmid wrote:
> [...]
From pya...@ Wed Dec 3 05:15:41 2014
From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=)
Date: Wed, 3 Dec 2014 14:15:41 +0100
Subject: [Public WebGL] WEBGL_debug_renderer_info
In-Reply-To: <547F070E.4010503@esri.com>
References: <547EF1C9.3050105@esri.com> <547F070E.4010503@esri.com>
Message-ID:

I don't care if the WebGL debug renderer info holdouts come up with an "opt-in" solution. I don't care because it's basically the same as no solution. Nobody opts in. Not because they're against it, but because nobody wants to click popups or go search out some obscure menu sub-entry somewhere. So in reality nobody will do it, and so it's the same as no solution.

On Wed, Dec 3, 2014 at 1:50 PM, Johannes Schmid wrote:
>
> I'm not suggesting that such a dialog box should be introduced to guard any information that is already exposed. As a developer of a high-performance app, I completely agree that the lack of information about the host has severe impacts on the UX at the moment.
> If more detailed information can be made always available (or perhaps opt-out, as Ben Adams suggested), that would be great in my opinion. But if there are strong arguments against making more information available, I would much prefer an opt-in solution to no solution at all.
>
> Joe
>
> On 03.12.2014 13:11, Florian Bösch wrote:
>> [...]
>> Best,
>> Joe
>>
>> On 03.12.2014 11:42, Florian Bösch wrote:
>>> [...]
From jsc...@ Wed Dec 3 06:32:26 2014
From: jsc...@ (Johannes Schmid)
Date: Wed, 03 Dec 2014 15:32:26 +0100
Subject: [Public WebGL] WEBGL_debug_renderer_info
In-Reply-To:
References: <547EF1C9.3050105@esri.com> <547F070E.4010503@esri.com>
Message-ID: <547F1EFA.8080900@esri.com>

I can see that happening on the browser statistics side of things, but I disagree from the point of view of a single application. If the app tells the user it is slow or ugly because they didn't click yes in the little popup window, I'm pretty sure that most users will reload the page and click yes instead. A downside is that apps could simply refuse to run unless the user agrees to extended profiling, and thus get most people to agree, effectively making the "guard" useless. And then users may want to start spoofing the info instead, and history repeats.

Again, if there can be a consensus on making more information accessible without any such measures, I'm all for it!

Joe

On 03.12.2014 14:15, Florian Bösch wrote:
> I don't care if the WebGL debug renderer info holdouts come up with an "opt-in" solution. I don't care because it's basically the same as no solution. Nobody opts in. Not because they're against it, but because nobody wants to click popups or go search out some obscure menu sub-entry somewhere. So in reality nobody will do it, and so it's the same as no solution.
> On Wed, Dec 3, 2014 at 1:50 PM, Johannes Schmid wrote:
> [...]
From ash...@ Wed Dec 3 08:13:28 2014
From: ash...@ (Ashley Gullen)
Date: Wed, 3 Dec 2014 16:13:28 +0000
Subject: [Public WebGL] WEBGL_debug_renderer_info
In-Reply-To: <547F1EFA.8080900@esri.com>
References: <547EF1C9.3050105@esri.com> <547F070E.4010503@esri.com> <547F1EFA.8080900@esri.com>
Message-ID:

I'm not sure I understand the privacy objections. It's already effective to fingerprint users with the existing information provided by browsers, such as the user agent string, screen dimensions, HTTP Accept header, time zone, and APIs like WebStorage. This is demonstrated by Panopticlick (https://panopticlick.eff.org/), which doesn't even use WebGL at the moment.

There's an argument not to make fingerprinting even more effective than it already is, but WebGL already provides a lot of hardware/driver/implementation-specific parameters through things like the set of supported WebGL extensions, maximum varyings/uniforms/constants, maximum renderbuffer/texture/cube-map size, point size range, and even performance profiling. All of this is already available to improve the effectiveness of fingerprinting. Although I haven't seen any tests examining precisely how effective this is, there is a lot of diversity out there, as shown by sites like webglstats.com, so I think it is likely to be effective, especially when combined with the other information available.

It is difficult to see how browsers can make this less effective, since the only option to reduce its accuracy is to round some parameters down, and then you can't use the full capability of the hardware. I'd guess that analysis of all those parameters, combined with the other information available, would allow you to make a very good educated guess at the hardware in use anyway. For example, certain parameters on a desktop system might correlate to certain models of nVidia graphics card, and so on. On systems like iOS it is particularly easy, since the screen size and whether it's an iPad or iPhone can often exactly identify the graphics chip in use. On other mobile systems the device model is often in the user agent, so it can also be identified easily: "Nexus 5" appearing in the Nexus 5's user agent string precisely identifies the fact that it is using an Adreno 330 GPU, despite any efforts of the browser to mask the WebGL renderer string.

So to me it seems like trying to hide this information doesn't actually improve anyone's privacy, while it impedes legitimate use cases of working around hardware-specific problems.

On 3 December 2014 at 14:32, Johannes Schmid wrote:
>
> I can see that happening on the browser statistics side of things, but I disagree from the point of view of a single application. If the app tells the user it is slow or ugly because they didn't click yes in the little popup window, I'm pretty sure that most users will reload the page and click yes instead. A downside is that apps could simply refuse to run unless the user agrees to extended profiling, and thus get most people to agree, effectively making the "guard" useless.
And then users may want to start > spoofing the info instead, and history repeats. > > Again, if there can be a consent of making more information accessible > without any such measures, I'm all for it! > > Joe > > > On 03.12.2014 14:15, Florian B?sch wrote: > >> I don't care if the webgl debug renderer info holdouts come up with an >> "opt-in" solution. I don't care because, it's basically the same as no >> solution. Nobody opts in. Not because they're against it, but because >> nobody wants to click popups or go search out some obscure menu sub >> entry somewhere. So in reality, nobody will do it, and so it's the same >> as no solution. >> >> >> >> >> >> On Wed, Dec 3, 2014 at 1:50 PM, Johannes Schmid > > wrote: >> >> >> I'm not suggesting that such a dialog box should be introduced to >> guard any information that is already exposed. As a developer of a >> high-performance app, I completely agree that the lack of >> information about the host has severe impacts on the UX at the >> moment. If more detailed information can be made always vailable (or >> perhaps opt-out, as Ben Adams suggested), that would be great in my >> opinion. But if there are strong arguments against making more >> information available, I would much prefer an opt-in solution to no >> solution at all. >> >> Joe >> >> On 03.12.2014 13 :11, Florian B?sch wrote: >> >> If you do such a dialog box, you can kiss browser statistics >> goodbye, >> and you won't get most statistics, even from your own site, on >> your own >> domain, without cross message posting. Sounds like fun wasting >> time >> meandering around in the dark. Enjoy your future. 
>> >> On Wed, Dec 3, 2014 at 12:19 PM, Johannes Schmid >> >> >> wrote: >> >> >> This idea has probably been suggested/discussed before, just >> throwing it in in case it hasn't: >> When an app queries WebGL for sensitive information, the >> browser >> presents the user with a dialog box where the user can >> prevent the >> query from returning such information. Apps can fall back to >> conservative settings if he does, so everybody gets the >> choice >> between privacy and optimized performance. >> >> Such dialog boxes already exist e.g. for camera access. In >> this >> case, coming up with a easy to understand and concise >> formulation of >> pros and cons may be quite hard, though. >> >> Best, >> Joe >> >> On 03.12.2014 11 >> :42, Florian B?sch wrote: >> >> On Tue, Dec 2, 2014 at 10:26 PM, Frank Olivier >> > >> > > >> > __micros__oft.com >> > >>> wrote: >> >> FWIW, there has been some legislative activity wrt >> fingerprinting >> recently: >> http://www.theguardian.com/____technology/2014/nov/28/ >> europe-____privacy-war-websites-__silently-__tracking-users >> > __privacy-war-websites-silently-__tracking-users> >> >> >> > __privacy-war-websites-silently-__tracking-users >> > privacy-war-websites-silently-tracking-users>> >> [Insert cavalcade of caveats wrt 'obtain the valid >> consent >> of the >> user' here] >> >> >> You can have any 2 of these 3 things, but not all 3 >> together: >> >> * A heterogeneous computing device and software >> ecosystem >> * Good applications (that is, applications which are >> well >> adjusted to >> run on the given population of hardware) >> * Privacy >> >> If you pick a heterogenous ecosystem and you want good >> applications, >> then you have to give applications a lot of information >> about >> where they >> run. >> >> If you pick a heterogenous ecosystem and privacy, you >> cannot >> have good >> applications, because everybody is just stabbing in the >> dark. 
>> >> If you pick privacy and good applications, you cannot have a >> heterogeneous ecosystem, because you need to fix the platform to prevent >> app developers from stabbing in the dark (aka the iOS way) >> >> Pushing for privacy, noble as it may seem, is directly sabotaging the >> quality of applications you will be able to get. Make your pick. >> >> http://en.wikipedia.org/wiki/Device_fingerprint >> >> You could perhaps reduce diversity here by only exposing exact >> renderer info for older/problematic* GPUs/drivers. >> >> *As reported by content creators. >> >> It's not about older/problematic devices per se. This topic has been >> discussed extensively on this ML before. >> >> * Support handling for users of your app that have problems. Even if >> they don't call you personally, or open a support ticket, you can >> still detect when they have an issue, and note down the GPU. >> Statistically speaking this helps a *lot*. Because let's say 1 user >> contacts you because your app didn't work for him. You'd have to tell >> him to tell you the GPU, which is difficult. But now you have the GPU, >> and one user with a problem. You know that say 20% of your users have >> some problem, but which ones? Is it the same ones with that GPU? Is it >> something specific to that GPU? Should you file a conformance test for >> it? Should you contact a vendor? Should you get one of these GPUs and >> devise a workaround? How would you know? You don't. It could be that >> the bug has nothing to do with that specific GPU, that's just >> coincidence, and of say, 10'000 people who use your app and get >> issues, exactly one person has that specific GPU, the other 9'999 have >> a different one. Great, now you've just wasted a ton of work to help >> exactly one user.
Not >> that anything's inherently wrong with that, it's just not an efficient >> use of your support and development resources. >> * Performance estimation: Do you offer a user the HD default or the SD >> default? Do they come with a GTX-980? HD default, you can allocate >> hundreds of megabytes of VRAM without a problem, and draw millions of >> triangles. Intel HD 4000? SD default naturally, and cut back on >> everything. It's not a substitute for measuring actual performance, >> and adjusting as necessary, and it isn't a substitute for letting the >> user make adjustments. But it is a good way to get some default that's >> least disappointing. >> * And a bunch of other things. > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From vod...@ Wed Dec 3 11:35:13 2014 From: vod...@ (Michal Vodicka) Date: Wed, 3 Dec 2014 20:35:13 +0100 Subject: [Public WebGL] DeleteShader and DetachShader spec ambiguity and implementation divergence In-Reply-To: References: Message-ID: OpenGL section 2.10.3 is clear to me in this case. After going step by step again through the OpenGL spec: 1) create program 2) attach shaders and stuff 3) link program 4) use program *and just for you to know, the sentence "While a valid program object is in use ... " means you are free to do steps 2 and 3 over and over now, because the executable has no connection to the shaders once it is generated*. Generate is exactly the word used in spec section 2.10, Vertex Shaders, which implies it has no connection to the skeleton program unless the spec says the opposite later (which it doesn't). I don't think "program object is in use" is ambiguous, because it is said in the useProgram description part. About the sentence: "The only function that affects the executable code of the program object is LinkProgram" I'm not the expert, but I wouldn't say that. Once executable code is generated, it can't be affected by any OpenGL/WebGL function. LinkProgram and DeleteProgram might cause losing the reference to the executable code. UseProgram will install the executable code as a part of the current rendering state. Please stop me if I'm wrong. The Stack Overflow OP might only be confused because it didn't work in Firefox. So far as I know, the PlayCanvas engine always deletes shaders after linking the program and I didn't find any report about FF malfunction. I guess the reporter might have made a mistake while testing it in FF. So I suggest we get some info from Safari developers and if they point out something unclear in the spec, then it can be improved. I don't feel confused this time, but others might.
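[Editorial aside: the four steps above can be sketched in WebGL JavaScript. This is a sketch, not code from the thread; the `gl` context and the compiled shader objects `vs`/`fs` are assumed to already exist, and `buildProgram` is an illustrative name.]

```javascript
// Editorial sketch of steps 1-4 above. Assumes a WebGL context `gl` and
// already-compiled shader objects `vs` and `fs`; buildProgram is illustrative.
function buildProgram(gl, vs, fs) {
  const p = gl.createProgram();                       // 1) create program
  gl.attachShader(p, vs);                             // 2) attach shaders
  gl.attachShader(p, fs);
  gl.linkProgram(p);                                  // 3) link program
  if (!gl.getProgramParameter(p, gl.LINK_STATUS)) {
    throw new Error(gl.getProgramInfoLog(p));
  }
  // Steps 2 and 3 may now be repeated on `p`; under Michal's reading, the
  // installed executable is only replaced by another successful link.
  gl.useProgram(p);                                   // 4) use program
  return p;
}
```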
Gregg, I vote for improving the spec, I just wouldn't use exactly the sentence you did unless my previous statement is wrong :) Regards Michal On Wed, Dec 3, 2014 at 3:14 AM, Rafael Cintron wrote: > Before the paragraph in question, there were several paragraphs that > talked about attaching and detaching shaders and linking programs. > Nowhere in those paragraphs was there mention of those operations only > mattering or behaving differently when the program is in use. > > Here is the beginning of the previous paragraph: > > If a valid executable is created, it can be made part of the current rendering > state with the command > > void UseProgram( uint program ); > > This command will install the executable code as part of current rendering state if > the program object program contains valid *executable code, i.e. has been linked successfully* > > The part in bold implies to me that linking is what creates the executable > code. Subsequent paragraphs also talk about linking being the operation > that causes executable code to be created for a program, or nullified on an > unsuccessful link. Attach/Detach shaders only serve as inputs to the link > process. Whether a program is in use only factors into what executable > code is set in the render state. > > Now all that being said, we know of at least one implementation that > differs from the rest. In addition, you were confused by the wording of > the spec, as were people on Stack Overflow. That tells me the spec should be > clarified in this regard. Your statement: "The only function that affects > the executable code of the program object is LinkProgram" is a great > suggestion. I would also clarify the confusing sentence to read: "*Regardless > of whether a program is in use or not*, applications are free to modify > attached shader objects, compile attached shader objects, attach additional > shader objects, and detach shader objects".
This way, the subsequent > sentence will also apply to used or unused programs: "These operations do > not affect the link status or executable code of the program object." > > --Rafael > > *From:* Gregg Tavares [mailto:khronos...@] > *Sent:* Tuesday, December 2, 2014 3:35 PM > *To:* Rafael Cintron > *Cc:* Gregg Tavares; public_webgl...@ > *Subject:* Re: [Public WebGL] DeleteShader and DetachShader spec > ambiguity and implementation divergence > > That's still not clear. While I agree that's the way it "should be", the > spec is not clear enough IMO > > On Tue, Dec 2, 2014 at 12:44 PM, Rafael Cintron < > Rafael.Cintron...@> wrote: > > I agree the "While a valid program is in use . . ." phrase in the spec > is confusing. > > The conformance test that comes closest to testing the behavior in the > StackOverflow question is > https://www.khronos.org/registry/webgl/sdk/tests/conformance/programs/program-test.html. > Unfortunately, it first calls useProgram before performing the questionable > operations so it doesn't directly test this scenario. We should add > conformance tests to clarify. > > Reading the entire section of the OpenGL spec as a whole, it's clear that > LinkProgram creates an "executable" out of the current state of the > program. Calling UseProgram merely installs the executable as the current > rendering state of the context if the program has been successfully > linked. The only thing that can change the installed executable from that > point forward is calling UseProgram with a different program or performing > a successful relink of the current program. If the in-use program is > unsuccessfully re-linked, you can still render with the installed "good" > executable. But the moment you replace the current program with a > different program, then you've essentially lost that good executable > forever. > > The issue isn't what UseProgram does. That part is clear.
The issue is > what AttachShader, DetachShader, etc. do when a program is not "in use". > > Conceptually a program is something like this > > class Program { > Shader* shaders[]; // all attached shaders > Exe* exe; // the last good executable > } > > And useProgram conceptually works something like this > > class GLState { > Program* currentProgram; > Exe* currentExe; > } > > glUseProgram(id) { > Program* prg = getProgramById(id); > state.currentProgram = prg; > state.currentExe = prg->exe; > } > > With that mental model, the only thing that invalidates or changes the exe > reference inside of Program is LinkProgram, which is apparently supposed to > be implemented like this > > glLinkProgram(id) { > Program* prg = getProgramById(id); > prg->exe = Link(prg->shaders); // Good or bad this will set exe > > // If the link was good and this program is "in use" > if (linkWasGood && state.currentProgram == prg) { > state.currentExe = prg->exe; // use the new exe. > } > } > > It's also clear that calling DeleteShader is valid because the shader won't > actually be deleted until it is detached from the Program > > What's not clear is the behavior of DetachShader, AttachShader, etc. > According to the spec this would be a valid implementation > > glDetachShader(prgId, sh) { > Program* prg = getProgramById(prgId); > prg->RemoveShaderFromArrayOfShaders(sh); > > * // If this program is NOT in use we can muck with the executable of this program* > if (state.currentProgram != prg) { > prg->exe = NULL; // <<==---------------------- > } > } > > The spec only claims the executable code of the program is not affected *IF > THE PROGRAM IS IN USE*. And "in use" is clearly defined as the program > most recently passed to UseProgram. Programs not "in use" are not covered > by the paragraph mentioning Detaching and Attaching shaders.
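[Editorial aside: the mental model above can be made runnable. The following JavaScript rendition is a sketch, not real GL API; all names mirror the pseudocode and are illustrative. It demonstrates the "friendly" reading under which detaching shaders after a successful link leaves the executable reference intact.]

```javascript
// Editorial, runnable rendition of the mental model above.
// Program/state/linkProgram mirror the pseudocode; none of this is real GL API.
class Program {
  constructor() {
    this.shaders = []; // all attached shaders
    this.exe = null;   // the last good executable
  }
}

const state = { currentProgram: null, currentExe: null };

function linkProgram(prg) {
  // Linking snapshots the currently attached shaders into a new executable.
  prg.exe = { sources: prg.shaders.slice() };
  if (state.currentProgram === prg) state.currentExe = prg.exe;
}

function detachShader(prg, sh) {
  // Under the "friendly" reading, detaching never touches prg.exe,
  // whether or not the program is "in use".
  prg.shaders = prg.shaders.filter(s => s !== sh);
}

function useProgram(prg) {
  state.currentProgram = prg;
  state.currentExe = prg.exe;
}

// The disputed sequence: link, detach everything, then use.
const p = new Program();
p.shaders.push('vs', 'fs');
linkProgram(p);
detachShader(p, 'vs');
detachShader(p, 'fs');
useProgram(p);
// p.exe survived the detaches, so the rendering state is still valid.
```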
> > > > Rather, it seems like the spec should say something that effectively > means: > > The only function that affects the executable code of the program > object is LinkProgram > > Note the OpenGL Wiki claims this behavior is expected. > > The attachShader, detachShader, and linkProgram APIs all take program and > shader objects as parameters. That means you can perform these operations > even though the program you pass to the functions is not the currently "in > use" program set via useProgram. In fact, the program you pass to these > functions can be completely different than the one currently "in use" via > useProgram. So I think the sentence "While a valid program is in use ..." is > meant to clarify you can change the shaders of the program independent of > whether it is in use or not. Otherwise, attachShader, detachShader and > linkProgram would not have taken programs as their first argument in the > first place. > > I tried all three of the examples in the Stack Overflow question in IE, > Chrome and Firefox on Windows. All browsers agree 'prog1' can be rendered > with at the end of each example. Since the last link of the program is a > successful link, the executable created as a result of the link is the one > that is used for rendering. Unless I am missing something in my > understanding, I think this is the correct behavior according to the > spec.
> > > > --Rafael > > *From:* owners-public_webgl...@ [mailto: > owners-public_webgl...@] *On Behalf Of *Gregg Tavares > *Sent:* Tuesday, December 2, 2014 12:41 AM > *To:* public_webgl...@ > *Subject:* [Public WebGL] DeleteShader and DetachShader spec ambiguity > and implementation divergence > > So this stackoverflow question came up > > http://stackoverflow.com/questions/27237696/webgl-detach-and-delete-shaders-after-linking > > The problem is 2-fold > > #1) The browsers are not consistent in their behavior > > #2) The OpenGL ES 2.0 spec doesn't seem particularly clear on what's > supposed to happen. > > Basically the poster is claiming you should be able to do this > > p = gl.createProgram(); > > gl.attachShader(p, someVertexShader); > > gl.attachShader(p, someFragmentShader); > > gl.linkProgram(p); > > gl.detachShader(p, someVertexShader); > > gl.detachShader(p, someFragmentShader); > > gl.deleteShader(someVertexShader); > > gl.deleteShader(someFragmentShader); > > gl.useProgram(p); > > And now 'p' should still point to a valid program that you can call > 'gl.useProgram' on or > `gl.getUniformLocation` etc. This works in Chrome. It fails in Safari. > According to the OP > it also fails in Firefox but it seemed to work for me. > > But, reading the spec it's not clear what's supposed to happen. AFAICT the > spec says > > While a valid program object is in use, applications are free to modify > attached shader objects, compile attached shader objects, attach additional shader > objects, and detach shader objects. These operations do not affect the link status > or executable code of the program object. > > But "program object is in use" is ambiguous.
It could be interpreted as > the "currentProgram" as in the one currently installed with gl.useProgram > > So > > (a) WebGL needs to make it clear what the correct behavior is and > > (b) a conformance test needs to be written to check that behavior, which > includes calling useProgram with the program in question, getting locations > from it and trying to render with it, and checking for success or failure, > whichever is decided is the correct behavior. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Wed Dec 3 13:29:59 2014 From: khr...@ (Gregg Tavares) Date: Wed, 3 Dec 2014 13:29:59 -0800 Subject: [Public WebGL] DeleteShader and DetachShader spec ambiguity and implementation divergence In-Reply-To: References: Message-ID: On Wed, Dec 3, 2014 at 11:35 AM, Michal Vodicka wrote: > OpenGL section 2.10.3 is clear to me in this case. > > After going step by step again through the OpenGL spec: > 1) create program > 2) attach shaders and stuff > 3) link program > 4) use program *and just for you to know, the sentence "While a > valid program object is in use ... " means you are free to do steps 2 > and 3 over and over now, because the executable has no connection to the shaders > once it is generated*. Generate is exactly the word used in spec section 2.10, > Vertex Shaders, which implies it has no connection to the skeleton program > unless the spec says the opposite later (which it doesn't). > > I don't think "program object is in use" is ambiguous, because it is said > in the useProgram description part. About the sentence: > "The only function that affects the executable code of the program object > is LinkProgram" > I'm not the expert, but I wouldn't say that. Once executable code is > generated, it can't be affected by any OpenGL/WebGL function. LinkProgram > and DeleteProgram might cause losing the reference to the executable code. > UseProgram will install the executable code as a part of the current rendering > state.
Please stop me if I'm wrong. > The question isn't what affects the executable code. Executable code is immutable. The question is what affects the program object's *reference to* executable code. The problem is: "While a valid program object is in use" you can do X, Y, & Z. But what about when a program is not in use? "In use" is clearly defined as the program in use through UseProgram. Once a program is no longer "in use", those rules don't apply. I agree those rules should apply regardless of whether or not a program is "in use", but the fact that the spec calls out that only programs that are "in use" get those rules suggests that the rules don't apply when the program is not "in use". > The Stack Overflow OP might only be confused because it didn't work in > Firefox. So far as I know, the PlayCanvas engine always deletes shaders after > linking the program and I didn't find any report about FF malfunction. I guess > the reporter might have made a mistake while testing it in FF. So I suggest we get some > info from Safari developers and if they point out something unclear in the > spec, then it can be improved. > > I don't feel confused this time, but others might. Gregg, I vote for > improving the spec, I just wouldn't use exactly the sentence you did unless > my previous statement is wrong :) > > Regards Michal > > On Wed, Dec 3, 2014 at 3:14 AM, Rafael Cintron < > Rafael.Cintron...@> wrote: > >> Before the paragraph in question, there were several paragraphs that >> talked about attaching and detaching shaders and linking programs. >> Nowhere in those paragraphs was there mention of those operations only >> mattering or behaving differently when the program is in use.
>> >> >> Here is the beginning of the previous paragraph: >> >> If a valid executable is created, it can be made part of the current >> rendering state with the command >> >> void UseProgram( uint program ); >> >> This command will install the executable code as part of current >> rendering state if >> the program object program contains valid *executable code, i.e. has >> been linked successfully* >> >> The part in bold implies to me that linking is what creates the >> executable code. Subsequent paragraphs also talk about linking being the >> operation that causes executable code to be created for a program, or >> nullified on an unsuccessful link. Attach/Detach shaders only serve as >> inputs to the link process. Whether a program is in use only factors into >> what executable code is set in the render state. >> >> Now all that being said, we know of at least one implementation that >> differs from the rest. In addition, you were confused by the wording of >> the spec, as were people on Stack Overflow. That tells me the spec should be >> clarified in this regard. Your statement: "The only function that affects >> the executable code of the program object is LinkProgram" is a great >> suggestion. I would also clarify the confusing sentence to read: "*Regardless >> of whether a program is in use or not*, applications are free to modify >> attached shader objects, compile attached shader objects, attach additional >> shader objects, and detach shader objects". This way, the subsequent >> sentence will also apply to used or unused programs: "These operations do >> not affect the link status or executable code of the program object."
>> >> >> --Rafael >> >> *From:* Gregg Tavares [mailto:khronos...@] >> *Sent:* Tuesday, December 2, 2014 3:35 PM >> *To:* Rafael Cintron >> *Cc:* Gregg Tavares; public_webgl...@ >> *Subject:* Re: [Public WebGL] DeleteShader and DetachShader spec >> ambiguity and implementation divergence >> >> That's still not clear. While I agree that's the way it "should be", the >> spec is not clear enough IMO >> >> On Tue, Dec 2, 2014 at 12:44 PM, Rafael Cintron < >> Rafael.Cintron...@> wrote: >> >> I agree the "While a valid program is in use . . ." phrase in the spec >> is confusing. >> >> The conformance test that comes closest to testing the behavior in the >> StackOverflow question is >> https://www.khronos.org/registry/webgl/sdk/tests/conformance/programs/program-test.html. >> Unfortunately, it first calls useProgram before performing the questionable >> operations so it doesn't directly test this scenario. We should add >> conformance tests to clarify. >> >> Reading the entire section of the OpenGL spec as a whole, it's clear that >> LinkProgram creates an "executable" out of the current state of the >> program. Calling UseProgram merely installs the executable as the current >> rendering state of the context if the program has been successfully >> linked. The only thing that can change the installed executable from that >> point forward is calling UseProgram with a different program or performing >> a successful relink of the current program. If the in-use program is >> unsuccessfully re-linked, you can still render with the installed "good" >> executable. But the moment you replace the current program with a >> different program, then you've essentially lost that good executable >> forever. >> >> The issue isn't what UseProgram does. That part is clear. The issue is >> what AttachShader, DetachShader, etc. do when a program is not "in use".
>> >> >> Conceptually a program is something like this >> >> class Program { >> Shader* shaders[]; // all attached shaders >> Exe* exe; // the last good executable >> } >> >> And useProgram conceptually works something like this >> >> class GLState { >> Program* currentProgram; >> Exe* currentExe; >> } >> >> glUseProgram(id) { >> Program* prg = getProgramById(id); >> state.currentProgram = prg; >> state.currentExe = prg->exe; >> } >> >> With that mental model, the only thing that invalidates or changes the >> exe reference inside of Program is LinkProgram, which is apparently supposed >> to be implemented like this >> >> glLinkProgram(id) { >> Program* prg = getProgramById(id); >> prg->exe = Link(prg->shaders); // Good or bad this will set exe >> >> // If the link was good and this program is "in use" >> if (linkWasGood && state.currentProgram == prg) { >> state.currentExe = prg->exe; // use the new exe. >> } >> } >> >> It's also clear that calling DeleteShader is valid because the shader won't >> actually be deleted until it is detached from the Program >> >> What's not clear is the behavior of DetachShader, AttachShader, etc. >> According to the spec this would be a valid implementation >> >> glDetachShader(prgId, sh) { >> Program* prg = getProgramById(prgId); >> prg->RemoveShaderFromArrayOfShaders(sh); >> >> * // If this program is NOT in use we can muck with the executable >> of this program* >> if (state.currentProgram != prg) { >> prg->exe = NULL; // <<==---------------------- >> } >> } >> >> The spec only claims the executable code of the program is not affected *IF >> THE PROGRAM IS IN USE*. And "in use" is clearly defined as the program >> most recently passed to UseProgram. Programs not "in use" are not covered >> by the paragraph mentioning Detaching and Attaching shaders.
>> >> >> Rather, it seems like the spec should say something that effectively >> means: >> >> The only function that affects the executable code of the program >> object is LinkProgram >> >> Note the OpenGL Wiki claims this behavior is expected. >> >> The attachShader, detachShader, and linkProgram APIs all take program and >> shader objects as parameters. That means you can perform these operations >> even though the program you pass to the functions is not the currently "in >> use" program set via useProgram. In fact, the program you pass to these >> functions can be completely different than the one currently "in use" via >> useProgram. So I think the sentence "While a valid program is in use ..." is >> meant to clarify you can change the shaders of the program independent of >> whether it is in use or not. Otherwise, attachShader, detachShader and >> linkProgram would not have taken programs as their first argument in the >> first place. >> >> I tried all three of the examples in the Stack Overflow question in IE, >> Chrome and Firefox on Windows. All browsers agree 'prog1' can be rendered >> with at the end of each example. Since the last link of the program is a >> successful link, the executable created as a result of the link is the one >> that is used for rendering. Unless I am missing something in my >> understanding, I think this is the correct behavior according to the >> spec.
>> >> >> --Rafael >> >> *From:* owners-public_webgl...@ [mailto: >> owners-public_webgl...@] *On Behalf Of *Gregg Tavares >> *Sent:* Tuesday, December 2, 2014 12:41 AM >> *To:* public_webgl...@ >> *Subject:* [Public WebGL] DeleteShader and DetachShader spec ambiguity >> and implementation divergence >> >> So this stackoverflow question came up >> >> http://stackoverflow.com/questions/27237696/webgl-detach-and-delete-shaders-after-linking >> >> The problem is 2-fold >> >> #1) The browsers are not consistent in their behavior >> >> #2) The OpenGL ES 2.0 spec doesn't seem particularly clear on what's >> supposed to happen. >> >> Basically the poster is claiming you should be able to do this >> >> p = gl.createProgram(); >> >> gl.attachShader(p, someVertexShader); >> >> gl.attachShader(p, someFragmentShader); >> >> gl.linkProgram(p); >> >> gl.detachShader(p, someVertexShader); >> >> gl.detachShader(p, someFragmentShader); >> >> gl.deleteShader(someVertexShader); >> >> gl.deleteShader(someFragmentShader); >> >> gl.useProgram(p); >> >> And now 'p' should still point to a valid program that you can call >> 'gl.useProgram' on or >> `gl.getUniformLocation` etc. This works in Chrome. It fails in Safari. >> According to the OP >> it also fails in Firefox but it seemed to work for me. >> >> But, reading the spec it's not clear what's supposed to happen. AFAICT >> the spec says >> >> While a valid program object is in use, applications are free to modify >> attached shader objects, compile attached shader objects, attach additional shader >> objects, and detach shader objects. These operations do not affect the link status >> or executable code of the program object. >> >> But "program object is in use" is ambiguous.
It could be interpreted as >> the "currentProgram" as in the one currently installed with gl.useProgram >> >> >> >> >> >> So >> >> >> >> (a) WebGL needs to make it clear what the correct behavior is and >> >> >> >> (b) a conformance test needs to be written to check that behavior which >> includes calling useProgram with the program in question, getting locations >> from it and trying to render with it and checking for success or failure >> whichever is decided is the correct behavior. >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Wed Dec 3 13:46:31 2014 From: khr...@ (Gregg Tavares) Date: Wed, 3 Dec 2014 13:46:31 -0800 Subject: [Public WebGL] WEBGL_debug_renderer_info In-Reply-To: References: <547EF1C9.3050105@esri.com> <547F070E.4010503@esri.com> <547F1EFA.8080900@esri.com> Message-ID: On Wed, Dec 3, 2014 at 8:13 AM, Ashley Gullen wrote: > I'm not sure I understand the privacy objections. It's already effective > to fingerprint users with the existing information provided by browsers, > such as the user agent string, screen dimensions, HTTP Accept header, time > zone, and APIs like WebStorage. This is demonstrated by Panopticlick ( > https://panopticlick.eff.org/), which doesn't even use WebGL at the > moment. > > There's an argument not to make fingerprinting even more effective than it > already is, but WebGL already provides a lot of > hardware/driver/implementation-specific parameters through things like the > various supported WebGL extensions, maximum varyings/uniforms/constants, > maximum renderbuffer/texture/cube size, point size range, and even > performance profiling. All this is already available to improve the > effectiveness of fingerprinting. 
Although I haven't seen any tests > examining precisely how effective this is, there is a lot of diversity out > there as shown by sites like webglstats.com, so I think it is likely to > be effective, especially when combined with the other information available. > It is difficult to see how browsers can make this less effective, since the > only option to reduce its accuracy is to round some parameters down, and > then you can't use the full capability of the hardware. > > I'd guess that analysis of all those parameters combined with the other > information available would allow you to make a very good educated guess at > the hardware in use anyway. For example certain parameters on a desktop > system might correlate to certain models of NVIDIA graphics card and so on. > On systems like iOS it is particularly easy, since the screen size and > whether it's an iPad or iPhone alone can often exactly identify the > graphics chip in use. On other mobile systems often the device model is in > the user agent so it can also be identified easily, such as "Nexus 5" > appearing in the Nexus 5's user agent string, precisely identifying the > fact it is using an Adreno 330 GPU despite any efforts of the browser to > mask the WebGL renderer string. > Sounds like you don't need WEBGL_debug_renderer_info then. Just make your DB using the existing info and see if it correlates with the info you need. If it does, you're done. I'd really like to see someone at least try this. The Web shouldn't be full of if (device == "NVidia 9600") { } else if (device == "AMD 1456") { } else if (device == "NVidia Quadro 456") { } else if (device == "Qualcomm 923") { } else if (device == "AMD Radeon 9865") { } else if (device == "Intel 4400") { } else if (device == "Intel 5000") { } else if (device == "Imagination Technologies 523") { } etc. That way lies madness and even more broken websites.
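[Editorial aside: the "make your DB using the existing info" suggestion could start from something like the sketch below. The function name `glCapabilityKey` is illustrative; the parameters and queries it uses (`MAX_TEXTURE_SIZE`, `MAX_VERTEX_UNIFORM_VECTORS`, `getSupportedExtensions`, etc.) are standard WebGL 1 API already exposed without WEBGL_debug_renderer_info.]

```javascript
// Editorial sketch: build a capability key from already-exposed WebGL 1
// parameters, suitable for correlating against a hardware database offline.
// glCapabilityKey is an illustrative name; it accepts any WebGL 1 context.
function glCapabilityKey(gl) {
  const params = [
    gl.getParameter(gl.MAX_TEXTURE_SIZE),
    gl.getParameter(gl.MAX_CUBE_MAP_TEXTURE_SIZE),
    gl.getParameter(gl.MAX_RENDERBUFFER_SIZE),
    gl.getParameter(gl.MAX_VERTEX_UNIFORM_VECTORS),
    gl.getParameter(gl.MAX_FRAGMENT_UNIFORM_VECTORS),
    gl.getParameter(gl.MAX_VARYING_VECTORS),
  ];
  // Extension lists vary per driver; sort for a stable key.
  const exts = (gl.getSupportedExtensions() || []).slice().sort();
  return params.join('|') + '|' + exts.join(',');
}
```

An app would then key HD/SD defaults off measured capabilities (and profiled performance) rather than a renderer-name switch like the one above.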
> > So to me it seems like trying to hide this information doesn't actually > help improve anyone's privacy, while impeding legitimate use cases of > trying to work around hardware-specific problems. > > On 3 December 2014 at 14:32, Johannes Schmid wrote: > >> I can see that happening on the browser statistics side of things, but I >> disagree from the point of view of a single application. If the app tells >> the user it is slow or ugly because he didn't click yes in the little popup >> window, I'm pretty sure that most users will reload the page and click yes >> instead. A downside of it is that apps could simply refuse to run without >> the user agreeing to extended profiling and thus get most people to agree, >> effectively making the "guard" useless. And then users may want to start >> spoofing the info instead, and history repeats. >> >> Again, if there can be a consensus on making more information accessible >> without any such measures, I'm all for it! >> >> Joe >> >> On 03.12.2014 14:15, Florian Bösch wrote: >> >>> I don't care if the webgl debug renderer info holdouts come up with an >>> "opt-in" solution. I don't care because it's basically the same as no >>> solution. Nobody opts in. Not because they're against it, but because >>> nobody wants to click popups or go search out some obscure menu sub >>> entry somewhere. So in reality, nobody will do it, and so it's the same >>> as no solution. >>> >>> On Wed, Dec 3, 2014 at 1:50 PM, Johannes Schmid wrote: >>> >>> I'm not suggesting that such a dialog box should be introduced to >>> guard any information that is already exposed. As a developer of a >>> high-performance app, I completely agree that the lack of >>> information about the host has severe impacts on the UX at the >>> moment. If more detailed information can be made always available (or >>> perhaps opt-out, as Ben Adams suggested), that would be great in my >>> opinion.
>>> But if there are strong arguments against making more information
>>> available, I would much prefer an opt-in solution to no solution at
>>> all.
>>>
>>> Joe
>>>
>>> On 03.12.2014 13:11, Florian Bösch wrote:
>>>
>>> If you do such a dialog box, you can kiss browser statistics goodbye,
>>> and you won't get most statistics, even from your own site, on your
>>> own domain, without cross message posting. Sounds like fun wasting
>>> time meandering around in the dark. Enjoy your future.
>>>
>>> On Wed, Dec 3, 2014 at 12:19 PM, Johannes Schmid wrote:
>>>
>>> This idea has probably been suggested/discussed before, just throwing
>>> it in in case it hasn't:
>>> When an app queries WebGL for sensitive information, the browser
>>> presents the user with a dialog box where the user can prevent the
>>> query from returning such information. Apps can fall back to
>>> conservative settings if the user does, so everybody gets the choice
>>> between privacy and optimized performance.
>>>
>>> Such dialog boxes already exist, e.g. for camera access. In this
>>> case, coming up with an easy-to-understand and concise formulation of
>>> the pros and cons may be quite hard, though.
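The fall-back idea in the quoted proposal above is easy to illustrate against the real WEBGL_debug_renderer_info extension: if the browser (or the user) withholds the extension, degrade to conservative defaults. Only the extension name and the UNMASKED_RENDERER_WEBGL query below are actual API; the 'sd'/'hd' tiers and the high-end regex are made-up placeholders for the sketch.

```javascript
// Sketch: ask for the unmasked renderer string if the browser exposes
// it, otherwise fall back to conservative settings. The 'sd'/'hd' tiers
// and the isHighEnd heuristic are hypothetical placeholders.
function chooseSettings(gl) {
  const ext = gl.getExtension('WEBGL_debug_renderer_info');
  if (!ext) {
    // Extension withheld: assume the weakest plausible hardware.
    return { tier: 'sd', renderer: null };
  }
  const renderer = gl.getParameter(ext.UNMASKED_RENDERER_WEBGL);
  const isHighEnd = /geforce gtx|radeon r9/i.test(renderer); // illustrative
  return { tier: isHighEnd ? 'hd' : 'sd', renderer };
}
```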
>>> Best,
>>> Joe
>>>
>>> On 03.12.2014 11:42, Florian Bösch wrote:
>>>
>>> On Tue, Dec 2, 2014 at 10:26 PM, Frank Olivier wrote:
>>>
>>> FWIW, there has been some legislative activity wrt fingerprinting
>>> recently:
>>> http://www.theguardian.com/technology/2014/nov/28/europe-privacy-war-websites-silently-tracking-users
>>> [Insert cavalcade of caveats wrt 'obtain the valid consent of the
>>> user' here]
>>>
>>> You can have any 2 of these 3 things, but not all 3 together:
>>>
>>> * A heterogeneous computing device and software ecosystem
>>> * Good applications (that is, applications which are well adjusted to
>>> run on the given population of hardware)
>>> * Privacy
>>>
>>> If you pick a heterogeneous ecosystem and you want good applications,
>>> then you have to give applications a lot of information about where
>>> they run.
>>>
>>> If you pick a heterogeneous ecosystem and privacy, you cannot have
>>> good applications, because everybody is just stabbing in the dark.
>>>
>>> If you pick privacy and good applications, you cannot have a
>>> heterogeneous ecosystem, because you need to fix the platform to
>>> prevent app developers from stabbing in the dark (aka the iOS way).
>>>
>>> Pushing for privacy, noble as it may seem, is directly sabotaging the
>>> quality of applications you will be able to get. Make your pick.
>>>
>>> http://en.wikipedia.org/wiki/Device_fingerprint
>>>
>>> You could perhaps reduce diversity here by only exposing exact
>>> renderer info for older/problematic* GPUs/drivers.
>>>
>>> *As reported by content creators.
>>>
>>> It's not about older/problematic devices per se.
>>> This topic has been discussed extensively on this ML before.
>>>
>>> * Support handling for users of your app that have problems. Even if
>>> they don't call you personally, or open a support ticket, you can
>>> still detect when they have an issue, and note down the GPU.
>>> Statistically speaking this helps a *lot*. Because let's say one user
>>> contacts you that your app didn't work for him. You'd have to tell
>>> him to tell you the GPU, which is difficult. But now what, you have
>>> the GPU, and one user with a problem. But you know that say 20% of
>>> your users have some problem, but which ones? Is it the same ones
>>> with that GPU? Is it something specific to that GPU? Should you file
>>> a conformance test for it? Should you contact a vendor? Should you
>>> get one of these GPUs and devise a workaround? How would you know?
>>> You don't. It could be that the bug has nothing to do with that
>>> specific GPU, that's just coincidence, and of, say, 10'000 people who
>>> use your app and get issues, exactly one person has that specific
>>> GPU, and the other 9'999 have a different one. Great, now you've just
>>> wasted a ton of work to help exactly one user. Not that anything's
>>> inherently wrong with that, it's just not an efficient use of your
>>> support and development resources.
>>> * Performance estimation: Do you offer a user the HD default or the
>>> SD default? Do they come with a GTX-980? HD default, you can allocate
>>> hundreds of megabytes of VRAM without a problem, and draw millions
>>> of triangles. Intel HD 4000? SD default naturally, and cut back on
>>> everything. It's not a substitute for measuring actual performance,
>>> and adjusting as necessary, and it isn't a substitute for letting
>>> the user make adjustments.
>>> But it is a good way to get some default that's least disappointing.
>>> * And a bunch of other things.
>>>
>>> -----------------------------------------------------------
>>> You are currently subscribed to public_webgl...@
>>> To unsubscribe, send an email to majordomo...@ with
>>> the following command in the body of your email:
>>> unsubscribe public_webgl
>>> -----------------------------------------------------------

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pya...@ Wed Dec 3 20:23:02 2014
From: pya...@ (Florian Bösch)
Date: Thu, 4 Dec 2014 05:23:02 +0100
Subject: [Public WebGL] WEBGL_debug_renderer_info
In-Reply-To: 
References: <547EF1C9.3050105@esri.com> <547F070E.4010503@esri.com> <547F1EFA.8080900@esri.com>
Message-ID: 

On Wed, Dec 3, 2014 at 10:46 PM, Gregg Tavares wrote:

> Sounds like you don't need WEBGL_debug_info then. Just make your DB using
> the existing info and see if it correlates with the info you need. If it
> does you're done.
> I'd really like to see someone at least try this.

The parameters only fuzzily correlate to GPUs.
That doesn't help you really in the usecases where you have issues, or try
to assess performance or limitations not expressed in parameters up front.
I've not measured this, but I'm sure that if I did, it'd amount to exactly
that: a guess.

> The Web shouldn't be full of
>
>     if (device == "NVidia 9600") {
>     } else if (device == "AMD 1456") {
>     } else if (device == "NVidia Quadro 456") {
>     } else if (device == "Qualcomm 923") {
>     } else if (device == "AMD Radeon 9865") {
>     } else if (device == "Intel 4400") {
>     } else if (device == "Intel 5000") {
>     } else if (device == "Imagination Technologies 523") {
>     }
>
> etc. That way lies madness and even more broken websites.

Nobody wants to do that, and that's not what you'd do most of the time
anyway. Need I remind you that it was yer own google maps team who stated
that this information helps them to provide a better google maps; in case
you forgot, we went over all of this years ago.

Unless browsers/vendors/etc. address these two issues, the GPU string is
the only thing that solves it:

1. How to assess the statistical relevance of a driver/GPU bug that your
application exercises, so you can appropriately prioritize and deal with
this issue and appropriately apply your (limited) developer and support
staff time.
2. How to assess the performance characteristic of a given piece of
hardware, for your usecase, where the performance of (by parameters)
identical-looking GPUs can be apart by up to 3 orders of magnitude.

Previous discussions on providing such features have failed, and because
they have failed, GPU string it is, unless you can solve it, in which case
I think everybody'd be happy to stop asking for GPU strings. Except of
course the solution would also have to involve figuring out which GPU,
because it is quite handy to be able to test a bug before submitting bug
reports to driver/browser/OS vendors...

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pya...@ Thu Dec 4 00:14:50 2014
From: pya...@ (Florian Bösch)
Date: Thu, 4 Dec 2014 09:14:50 +0100
Subject: [Public WebGL] WEBGL_debug_renderer_info
In-Reply-To: 
References: <547EF1C9.3050105@esri.com> <547F070E.4010503@esri.com> <547F1EFA.8080900@esri.com>
Message-ID: 

On Thu, Dec 4, 2014 at 5:23 AM, Florian Bösch wrote:

> On Wed, Dec 3, 2014 at 10:46 PM, Gregg Tavares wrote:
>
>> Sounds like you don't need WEBGL_debug_info then. Just make your DB
>> using the existing info and see if it correlates with the info you
>> need. If it does you're done.
>> I'd really like to see someone at least try this.
>
> The parameters only fuzzily correlate to GPUs. That doesn't help you
> really in the usecases where you have issues, or try to assess
> performance or limitations not expressed in parameters up front. I've
> not measured this, but I'm sure that if I did, it'd amount to exactly
> that: a guess.

I've run through data from yesterday's log.

- Total entries in the log: 386046
- Total entries considered (have webgl and have debug renderer info): 247704
- Unique parameter sets: 176
- Combined unique webgl characteristics: 581
- Unique renderers count before stripping: 3389
- Unique renderers count after stripping: 1292

The prediction methodology is to take the most populous renderer after
stripping/lowercasing and space substitution (which improves accuracy
because it cuts out directx version and string formatting and whatnot).
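The stripping/lowercasing step and the "most populous renderer" prediction described above might look roughly like the sketch below. The exact normalization rules aren't given in the message, so the regexes and function names here are guesses:

```javascript
// Sketch of the methodology described above: normalize renderer strings,
// then predict the most frequent renderer for each parameter-set key.
function stripRenderer(r) {
  return r
    .toLowerCase()
    .replace(/\s*\(.*direct3d.*\)\s*/g, '') // drop Direct3D/ANGLE suffixes
    .replace(/\s+/g, ' ')                   // collapse whitespace
    .trim();
}

// entries: [{key: <parameter-set string>, renderer: <raw string>}, ...]
// Returns a Map from each key to its most populous normalized renderer.
function buildPredictor(entries) {
  const counts = new Map();
  for (const { key, renderer } of entries) {
    const r = stripRenderer(renderer);
    if (!counts.has(key)) counts.set(key, new Map());
    const c = counts.get(key);
    c.set(r, (c.get(r) || 0) + 1);
  }
  const predictor = new Map();
  for (const [key, c] of counts) {
    let best = null;
    let bestN = -1;
    for (const [r, n] of c) {
      if (n > bestN) { best = r; bestN = n; }
    }
    predictor.set(key, best);
  }
  return predictor;
}
```

Accuracy would then be measured by predicting `predictor.get(key)` for a held-out day of log entries and comparing against the actual stripped renderer.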
Considering only parameters (because extensions, extension capabilities
and precisions are strongly UA dependent):

- avg renderers per parameter: 15.8
- min renderers per parameter: 1
- max renderers per parameter: 602
- prediction accuracy: 35.1%

Most frequent wrong guessing sample:

guess: nvidia geforce gtx 760
was: nvidia geforce gtx 770
was: nvidia geforce gtx 660
was: amd radeon r9 200 series
was: amd radeon hd 7800 series
was: amd radeon hd 7700 series
guess: intel(r) hd graphics
was: intel(r) hd graphics 4000
was: intel(r) hd graphics family
was: intel(r) hd graphics 4600
was: intel(r) hd graphics 3000
was: intel(r) hd graphics 4400
guess: intel(r) hd graphics family
was: mobile intel(r) 4 series express chipset family
was: intel(r) g41 express chipset
was: intel(r) hd graphics
was: intel(r) q45/q43 express chipset
was: intel(r) 4 series internal chipset
guess: nvidia geforce 210
was: ati radeon hd 4200
was: nvidia geforce gt 240
was: nvidia geforce gt 220
was: nvidia geforce 310m
was: ati radeon hd 4800 series
guess: microsoft basic render driver
was: intel(r) hd graphics
was: intel(r) hd graphics 4000
was: intel(r) hd graphics 4600
was: intel(r) hd graphics family
was: intel(r) hd graphics 4400
guess: ati mobility radeon hd 5470
was: ati mobility radeon hd 5650
was: ati radeon hd 5450
was: amd radeon r9 200 series
was: amd radeon hd 7700 series
was: amd radeon hd 7800 series
guess: intel hd graphics 4000
was: intel hd graphics 5000
was: intel iris
was: intel iris pro
was: chromium
guess: nvidia geforce gtx 660
was: nvidia geforce gtx 650
was: nvidia geforce gtx 760
was: nvidia geforce gtx 550 ti
was: nvidia geforce gtx 770
was: nvidia geforce gt 630
guess: nvidia geforce 9800 gt
was: nvidia geforce 9500 gt
was: nvidia geforce gts 250
was: nvidia geforce gtx 260
was: nvidia geforce 8400 gs
was: nvidia geforce 9600 gt
guess: adreno (tm) 330
was: adreno (tm) 320
was: adreno (tm) 305
was: adreno (tm) 306
was: chromium

Considering all characteristics:

- avg renderers per unique characteristic: 7.7
- min renderers per unique characteristic: 1
- max renderers per unique characteristic: 596
- prediction accuracy: 38%

Most frequent wrong guessing sample:

guess: nvidia geforce gtx 760
was: nvidia geforce gtx 770
was: nvidia geforce gtx 660
was: amd radeon r9 200 series
was: amd radeon hd 7800 series
was: amd radeon hd 7700 series
guess: intel(r) hd graphics
was: intel(r) hd graphics 4000
was: intel(r) hd graphics family
was: intel(r) hd graphics 4600
was: intel(r) hd graphics 3000
was: intel(r) hd graphics 4400
guess: microsoft basic render driver
was: intel(r) hd graphics
was: intel(r) hd graphics 4000
was: intel(r) hd graphics 4600
was: intel(r) hd graphics family
was: intel(r) hd graphics 4400
guess: nvidia geforce 210
was: ati radeon hd 4200
was: nvidia geforce gt 240
was: nvidia geforce gt 220
was: nvidia geforce 310m
was: ati radeon hd 4800 series
guess: mobile intel(r) 4 series express chipset family
was: intel(r) g41 express chipset
was: intel(r) q45/q43 express chipset
was: intel(r) 4 series internal chipset
was: mobile intel(r) 965 express chipset family
was: intel(r) g45/g43 express chipset
guess: ati mobility radeon hd 5470
was: ati mobility radeon hd 5650
was: ati radeon hd 5450
was: amd radeon r9 200 series
was: amd radeon hd 6450
was: amd radeon hd 7800 series
guess: intel hd graphics 4000
was: intel hd graphics 5000
was: intel iris
was: intel iris pro
guess: nvidia geforce gtx 660
was: nvidia geforce gtx 650
was: nvidia geforce gtx 760
was: nvidia geforce gtx 550 ti
was: nvidia geforce gtx 770
was: nvidia geforce gt 630
guess: nvidia geforce 9500 gt
was: nvidia geforce 9800 gt
was: nvidia geforce gts 250
was: nvidia geforce gtx 260
was: nvidia geforce 8400 gs
was: nvidia geforce 9600 gt
guess: intel(r) hd graphics family
was: intel(r) hd graphics
was: intel(r) graphics media accelerator hd
was: mobile intel(r) hd graphics
was: intel(r) hd graphics br-1004-01y1
was: intel(r) hd graphics 4000

*Conclusion*
There is some correlation between parameters and GPUs, but it's weak. It
bunches together graphics hardware like the geforce 9500 gt (134 gflops)
with the geforce gtx 260 (715 gflops), or the geforce 770 (3213 gflops)
with the radeon hd 7700 (~500-1000 gflops). Worse yet, it mixes and
mashes together vendors, which will not help you with identifying
priorities in addressing bugs with your limited developer/support staff
because it doesn't pin a model/vendor.

In conclusion: QED

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pya...@ Thu Dec 4 03:22:25 2014
From: pya...@ (Florian Bösch)
Date: Thu, 4 Dec 2014 12:22:25 +0100
Subject: [Public WebGL] WEBGL_debug_renderer_info
In-Reply-To: 
References: <547EF1C9.3050105@esri.com> <547F070E.4010503@esri.com> <547F1EFA.8080900@esri.com>
Message-ID: 

Of course this kind of guessing is only possible because the information
is exposed in the first place. And the guessing accuracy will deteriorate
over time.

Statistics built from 4 days with roughly a million entries in November
(18th, 19th, 20th and 21st) and then used to categorize yesterday's
(December 3rd) data yields a guessing precision of 40.0%. The same days
from October yield 39.8%, September 14.4%, and so forth.

On the other hand, using, say, the last 8 days and categorizing yesterday
yields a guessing accuracy of 43%. These are not practical guesses for
app developers (I'd rather know if something was a radeon r9 and not a
gtx 760). But it shows that nearly half of the (uh scary, privacy) bits
are already contained in the webgl parameters. And it's only going to get
better with WebGL 2.0, because that contains twice as many parameters to
query. So as is so often the case, privacy handwaving is again
obstructing those who want to do productive work, while it doesn't
significantly impede those who will find unethical uses for the data. Can
we please stop that? A prime example of this would be querying how many
CPU cores a machine has.
This can already be shimmed, there's a perfectly ready-made shim for it,
but some vendors refuse to expose the parameter because "uh scary"
privacy.

On Thu, Dec 4, 2014 at 9:14 AM, Florian Bösch wrote:

> On Thu, Dec 4, 2014 at 5:23 AM, Florian Bösch wrote:
>
>> On Wed, Dec 3, 2014 at 10:46 PM, Gregg Tavares wrote:
>>
>>> Sounds like you don't need WEBGL_debug_info then. Just make your DB
>>> using the existing info and see if it correlates with the info you
>>> need. If it does you're done.
>>> I'd really like to see someone at least try this.
>>
>> The parameters only fuzzily correlate to GPUs. [snip]
>
> I've run through data from yesterday's log. [snip]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From khr...@ Thu Dec 4 11:42:38 2014
From: khr...@ (Gregg Tavares)
Date: Thu, 4 Dec 2014 11:42:38 -0800
Subject: [Public WebGL] WEBGL_debug_renderer_info
In-Reply-To: 
References: <547EF1C9.3050105@esri.com> <547F070E.4010503@esri.com> <547F1EFA.8080900@esri.com>
Message-ID: 

I'm not disagreeing with the need for something ... but it's not just
privacy that's at issue.

Adding a GPU id could lead to this:

http://webaim.org/blog/user-agent-string-history/

Sure, amazing programmers like you would never only turn on some feature
if they saw a certain GPU ID but lesser programmers will, and then there
will be pressure to lie about GPU ids and ...

Maybe that will never happen. But we have user agent history to show us
one possible downside.

On Thu, Dec 4, 2014 at 3:22 AM, Florian Bösch wrote:

> Of course this kind of guessing is only possible because the information
> is exposed in the first place. And the guessing accuracy will
> deteriorate over time.
> [snip]
>
> On Thu, Dec 4, 2014 at 9:14 AM, Florian Bösch wrote:
>
>> I've run through data from yesterday's log.
>> [snip]
>>
>> In conclusion: QED

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pya...@ Thu Dec 4 11:51:05 2014
From: pya...@ (Florian Bösch)
Date: Thu, 4 Dec 2014 20:51:05 +0100
Subject: [Public WebGL] WEBGL_debug_renderer_info
In-Reply-To: 
References: <547EF1C9.3050105@esri.com> <547F070E.4010503@esri.com> <547F1EFA.8080900@esri.com>
Message-ID: 

Even if you ignore the need to provide support, and just concentrate on
quantifying device performance differences, the last time we tried, it
didn't work.
It didn't work because nobody could agree on what that should be, or how
it's going to be relevant to a particular usecase, or how it should work
across browsers.

The reason GPU strings are different is because you don't need them for
capabilities. You can already feature detect anything you'd like without
the GPU string, that's fine. You'd be extraordinarily stupid if you'd
prefer parsing the GPU strings (of which there are thousands) to infer a
feature which the GL api will readily tell you about. Nobody is going to
do that, it just doesn't make sense.

There are, however, some cases where the GPU string is very useful. And
they have nothing to do with features, and everything to do with bugs and
performance. And that's why it was added to Chrome and IE. That's why it
should also come to Safari and Mozilla. That, and because the support
usecase is still important, and it's simply not going to happen that
anybody will come up with a usable device performance characteristic
metric that works for everyone.

On Thu, Dec 4, 2014 at 8:42 PM, Gregg Tavares wrote:

> I'm not disagreeing with the need for something ... but it's not just
> privacy that's at issue.
>
> Adding a GPU id could lead to this:
>
> http://webaim.org/blog/user-agent-string-history/
>
> Sure, amazing programmers like you would never only turn on some feature
> if they saw a certain GPU ID but lesser programmers will, and then there
> will be pressure to lie about GPU ids and ...
>
> Maybe that will never happen. But we have user agent history to show us
> one possible downside.
>
> On Thu, Dec 4, 2014 at 3:22 AM, Florian Bösch wrote:
>
>> Of course this kind of guessing is only possible because the
>> information is exposed in the first place. And the guessing accuracy
>> will deteriorate over time.
>>
>> Statistics built from 4 days with roughly a million entries in November
>> (18th, 19th, 20th and 21st) and then used to categorize yesterday's
>> (December 3rd) data yields a guessing precision of 40.0%.
The same days >> from October 39.8%, September 14.4%, and so forth. >> >> On the other hand, using say the last 8 days and categorizing yesterday >> yields a guessing accuracy 43%. It's not practical guesses for app >> developers (i'd rather know if something was a radeon r9 and not a gtx >> 760). But it shows that nearly half of the (uh scary, privacy) bits are >> already contained in the webgl parameters. And it's only going to get >> better with WebGL 2.0, because that contains twice as many parameters to >> query. So as is so often the case, privacy handwaving is again obstructing >> those who want to do productive work, while it doesn't significantly impede >> those who will find unethical uses for the data. Can we please stop that? A >> prime example of this would be querying how many CPU cores a machine has. >> This can already be shimmed, there's a perfectly ready made shim for it, >> but some vendors refuse to expose the parameter because "uh scary" privacy. >> >> On Thu, Dec 4, 2014 at 9:14 AM, Florian B?sch wrote: >> >>> On Thu, Dec 4, 2014 at 5:23 AM, Florian B?sch wrote: >>> >>>> On Wed, Dec 3, 2014 at 10:46 PM, Gregg Tavares >>>> wrote: >>>> >>>>> Sounds like you don't need WEBGL_debug_info then. Just make your DB >>>>> using the existing info and see if it correlates with the info you need. If >>>>> it does you're done. >>>>> >>>> I'd really like to see someone at least try this. >>>>> >>>> The parameters only fuzzily correlate to GPUs. That doesn't help you >>>> really in the usecases where you have issues, or try to assess performance >>>> or limitations not expressed in parameters up front. I've not measured >>>> this, but I'm sure, if I would, It'd amount to exactly that guess. >>>> >>> >>> I've ran trough data from yesterdays log. 
>>> >>> - Total entries in the log: 386046 >>> - Total entries considered (have webgl and have debug renderer >>> info): 247704 >>> - Unique parameter sets: 176 >>> - Combined unique webgl characteristics: 581 >>> - Unique renderers count before stripping: 3389 >>> - Unique renderers count after stripping: 1292 >>> >>> The prediction methodology is to take the most populous renderer after >>> stripping/lowercasing and space substitution (which improves accuracy >>> because it cuts out directx version and string formatting and whatnot). >>> >>> Considering only parameters (because extensions, extension capabilities >>> and precisions are strongly UA dependent): >>> >>> - avg renderers per parameter: 15.8 >>> - min renderers per parameter: 1 >>> - max renderers per parameter: 602 >>> - prediction accuracy: 35.1% >>> >>> Most frequent wrong guessing sample: >>> >>> guess: nvidia geforce gtx 760 >>> was: nvidia geforce gtx 770 >>> was: nvidia geforce gtx 660 >>> was: amd radeon r9 200 series >>> was: amd radeon hd 7800 series >>> was: amd radeon hd 7700 series >>> guess: intel(r) hd graphics >>> was: intel(r) hd graphics 4000 >>> was: intel(r) hd graphics family >>> was: intel(r) hd graphics 4600 >>> was: intel(r) hd graphics 3000 >>> was: intel(r) hd graphics 4400 >>> guess: intel(r) hd graphics family >>> was: mobile intel(r) 4 series express chipset family >>> was: intel(r) g41 express chipset >>> was: intel(r) hd graphics >>> was: intel(r) q45/q43 express chipset >>> was: intel(r) 4 series internal chipset >>> guess: nvidia geforce 210 >>> was: ati radeon hd 4200 >>> was: nvidia geforce gt 240 >>> was: nvidia geforce gt 220 >>> was: nvidia geforce 310m >>> was: ati radeon hd 4800 series >>> guess: microsoft basic render driver >>> was: intel(r) hd graphics >>> was: intel(r) hd graphics 4000 >>> was: intel(r) hd graphics 4600 >>> was: intel(r) hd graphics family >>> was: intel(r) hd graphics 4400 >>> guess: ati mobility radeon hd 5470 >>> was: ati mobility radeon hd 
5650 >>> was: ati radeon hd 5450 >>> was: amd radeon r9 200 series >>> was: amd radeon hd 7700 series >>> was: amd radeon hd 7800 series >>> guess: intel hd graphics 4000 >>> was: intel hd graphics 5000 >>> was: intel iris >>> was: intel iris pro >>> was: chromium >>> guess: nvidia geforce gtx 660 >>> was: nvidia geforce gtx 650 >>> was: nvidia geforce gtx 760 >>> was: nvidia geforce gtx 550 ti >>> was: nvidia geforce gtx 770 >>> was: nvidia geforce gt 630 >>> guess: nvidia geforce 9800 gt >>> was: nvidia geforce 9500 gt >>> was: nvidia geforce gts 250 >>> was: nvidia geforce gtx 260 >>> was: nvidia geforce 8400 gs >>> was: nvidia geforce 9600 gt >>> guess: adreno (tm) 330 >>> was: adreno (tm) 320 >>> was: adreno (tm) 305 >>> was: adreno (tm) 306 >>> was: chromium >>> >>> Considering all characteristics: >>> >>> - avg renderers per unique characteristic: 7.7 >>> - min renderers per unique characteristic: 1 >>> - max renderers per unique characteristic: 596 >>> - prediction accuracy: 38% >>> >>> Most frequent wrong guessing sample: >>> >>> guess: nvidia geforce gtx 760 >>> was: nvidia geforce gtx 770 >>> was: nvidia geforce gtx 660 >>> was: amd radeon r9 200 series >>> was: amd radeon hd 7800 series >>> was: amd radeon hd 7700 series >>> guess: intel(r) hd graphics >>> was: intel(r) hd graphics 4000 >>> was: intel(r) hd graphics family >>> was: intel(r) hd graphics 4600 >>> was: intel(r) hd graphics 3000 >>> was: intel(r) hd graphics 4400 >>> guess: microsoft basic render driver >>> was: intel(r) hd graphics >>> was: intel(r) hd graphics 4000 >>> was: intel(r) hd graphics 4600 >>> was: intel(r) hd graphics family >>> was: intel(r) hd graphics 4400 >>> guess: nvidia geforce 210 >>> was: ati radeon hd 4200 >>> was: nvidia geforce gt 240 >>> was: nvidia geforce gt 220 >>> was: nvidia geforce 310m >>> was: ati radeon hd 4800 series >>> guess: mobile intel(r) 4 series express chipset family >>> was: intel(r) g41 express chipset >>> was: intel(r) q45/q43 express chipset 
>>> was: intel(r) 4 series internal chipset >>> was: mobile intel(r) 965 express chipset family >>> was: intel(r) g45/g43 express chipset >>> guess: ati mobility radeon hd 5470 >>> was: ati mobility radeon hd 5650 >>> was: ati radeon hd 5450 >>> was: amd radeon r9 200 series >>> was: amd radeon hd 6450 >>> was: amd radeon hd 7800 series >>> guess: intel hd graphics 4000 >>> was: intel hd graphics 5000 >>> was: intel iris >>> was: intel iris pro >>> guess: nvidia geforce gtx 660 >>> was: nvidia geforce gtx 650 >>> was: nvidia geforce gtx 760 >>> was: nvidia geforce gtx 550 ti >>> was: nvidia geforce gtx 770 >>> was: nvidia geforce gt 630 >>> guess: nvidia geforce 9500 gt >>> was: nvidia geforce 9800 gt >>> was: nvidia geforce gts 250 >>> was: nvidia geforce gtx 260 >>> was: nvidia geforce 8400 gs >>> was: nvidia geforce 9600 gt >>> guess: intel(r) hd graphics family >>> was: intel(r) hd graphics >>> was: intel(r) graphics media accelerator hd >>> was: mobile intel(r) hd graphics >>> was: intel(r) hd graphics br-1004-01y1 >>> was: intel(r) hd graphics 4000 >>> >>> *Conclusion* >>> >>> There is some correlation between parameters and GPUs, but it's weak. It >>> bunches together graphics hardware like the the geforce 9500 gt (134 >>> gflops) with the geforce gtx 260 (715 gflops), or the geforce 770 (3213 >>> gflops) with the radeon hd 7700 (~500-100gflops). Worse yet, it >>> mix&mashes together vendors, which will not help you with identifying >>> priorities in addressing bugs with your limited developer/support staff >>> because it doesn't pin a model/vendor. >>> >>> In conclusion: QED >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From khr...@ Thu Dec 4 13:55:59 2014 From: khr...@ (Gregg Tavares) Date: Thu, 4 Dec 2014 13:55:59 -0800 Subject: [Public WebGL] WEBGL_debug_renderer_info In-Reply-To: References: <547EF1C9.3050105@esri.com> <547F070E.4010503@esri.com> <547F1EFA.8080900@esri.com> Message-ID: On Thu, Dec 4, 2014 at 11:51 AM, Florian Bösch wrote: > Even if you ignore the need to provide support, and just concentrate on > quantifying device performance differences, the last time we tried, it > didn't work. It didn't work because nobody could agree on what that should > be, or how it's going to relevant to a particular usecase, or how it should > work across browsers. > > The reason GPU strings are different is because you don't need them for > capabilities. You can already feature detect anything you'd like without > the GPU string, that's fine. You'd be extraordinarily stupid if you'd > prefer parsing the GPU strings (of which there's thousands) to infer a > feature, which the GL api will readily tell you about. > Agreed > Nobody is going to do that, it just doesn't make sense. > Provably false. Look at the number of questions and answers on the internet about looking at the user agent instead of using feature detection. Most developers will do the first thing that occurs to them. That's often to identify the thing (browser/GPU), not to feature-detect. It's a constant battle to get people to stop using the user agent and start feature detecting. I expect it will be the same for GPU ids. There are all kinds of worst practices that are propagated essentially forever. All it has to do is appear in one or two popular websites and then it's copied ad infinitum. See http://learningwebgl.com which is still propagating bad practices since day 1. So is http://threejs.org They never go away. Even if those websites changed, people would be copying the older examples forever. So there's a certain argument that adding more ways for devs to hang themselves is not a good thing. 
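The sniffing-versus-detection contrast discussed in this thread can be sketched as follows. This is an illustrative example, not code from any of the emails; the extension name, the regex allow-list, and the function names are assumptions chosen for the sketch.

```javascript
// Sniffing: branch on an identifier string (the UA-string anti-pattern
// applied to GPU ids). Any renderer string the allow-list has never
// seen -- and there are thousands -- falls through to the wrong path.
function useFloatTexturesBySniffing(rendererString) {
  return /geforce gtx/.test(rendererString.toLowerCase());
}

// Feature detection: ask the API directly for the capability you need.
// `gl` is any object exposing the standard WebGL getExtension method;
// in a browser it would come from canvas.getContext("webgl").
function useFloatTexturesByDetection(gl) {
  return gl.getExtension("OES_texture_float") !== null;
}
```

The sniffing version misclassifies, for example, a capable "Intel(R) HD Graphics 4000", while the detection version gives the right answer on any hardware, which is the point both sides of the thread agree on for capabilities.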
> > There are however some cases where the GPU string is very useful. And they > have nothing todo with features, and everything with bugs and performance. > And that's why it was added to Chrome and IE. That's why it should also > come to Safari and Mozilla. That, and because the support usecase is still > important, and it's simply not going to happen that anybody will come up > with a usable device performance characteristic metric that works for > everyone. > > > > On Thu, Dec 4, 2014 at 8:42 PM, Gregg Tavares > wrote: > >> I'm not disagreeing with the need for something ... but it's not just >> privacy that's at issue >> >> Adding a GPU id could lead to this: >> >> http://webaim.org/blog/user-agent-string-history/ >> >> Sure, amazing programmers like you would never only turn on some feature >> if they saw a certain GPU ID but lesser programmers will and then there >> will be pressure to lie about GPU ids and ... >> >> Maybe that will never happen. But we have user agent history to show us >> one possible downside. >> >> >> On Thu, Dec 4, 2014 at 3:22 AM, Florian B?sch wrote: >> >>> Of course this kind of guessing is only possible because the information >>> is exposed in the first place. And the guessing accuracy will deteriorate >>> over time. >>> >>> Statistics built from 4 days with roughly a million entries in November >>> (18th, 19th, 20th and 21st) and then used to categorize yesterdays >>> (december 3rd) data yields a guessing precision of 40.0%. The same days >>> from October 39.8%, September 14.4%, and so forth. >>> >>> On the other hand, using say the last 8 days and categorizing yesterday >>> yields a guessing accuracy 43%. It's not practical guesses for app >>> developers (i'd rather know if something was a radeon r9 and not a gtx >>> 760). But it shows that nearly half of the (uh scary, privacy) bits are >>> already contained in the webgl parameters. 
And it's only going to get >>> better with WebGL 2.0, because that contains twice as many parameters to >>> query. So as is so often the case, privacy handwaving is again obstructing >>> those who want to do productive work, while it doesn't significantly impede >>> those who will find unethical uses for the data. Can we please stop that? A >>> prime example of this would be querying how many CPU cores a machine has. >>> This can already be shimmed, there's a perfectly ready made shim for it, >>> but some vendors refuse to expose the parameter because "uh scary" privacy. >>> >>> On Thu, Dec 4, 2014 at 9:14 AM, Florian B?sch wrote: >>> >>>> On Thu, Dec 4, 2014 at 5:23 AM, Florian B?sch wrote: >>>> >>>>> On Wed, Dec 3, 2014 at 10:46 PM, Gregg Tavares >>>>> wrote: >>>>> >>>>>> Sounds like you don't need WEBGL_debug_info then. Just make your DB >>>>>> using the existing info and see if it correlates with the info you need. If >>>>>> it does you're done. >>>>>> >>>>> I'd really like to see someone at least try this. >>>>>> >>>>> The parameters only fuzzily correlate to GPUs. That doesn't help you >>>>> really in the usecases where you have issues, or try to assess performance >>>>> or limitations not expressed in parameters up front. I've not measured >>>>> this, but I'm sure, if I would, It'd amount to exactly that guess. >>>>> >>>> >>>> I've ran trough data from yesterdays log. >>>> >>>> - Total entries in the log: 386046 >>>> - Total entries considered (have webgl and have debug renderer >>>> info): 247704 >>>> - Unique parameter sets: 176 >>>> - Combined unique webgl characteristics: 581 >>>> - Unique renderers count before stripping: 3389 >>>> - Unique renderers count after stripping: 1292 >>>> >>>> The prediction methodology is to take the most populous renderer after >>>> stripping/lowercasing and space substitution (which improves accuracy >>>> because it cuts out directx version and string formatting and whatnot). 
>>>> >>>> Considering only parameters (because extensions, extension capabilities >>>> and precisions are strongly UA dependent): >>>> >>>> - avg renderers per parameter: 15.8 >>>> - min renderers per parameter: 1 >>>> - max renderers per parameter: 602 >>>> - prediction accuracy: 35.1% >>>> >>>> Most frequent wrong guessing sample: >>>> >>>> guess: nvidia geforce gtx 760 >>>> was: nvidia geforce gtx 770 >>>> was: nvidia geforce gtx 660 >>>> was: amd radeon r9 200 series >>>> was: amd radeon hd 7800 series >>>> was: amd radeon hd 7700 series >>>> guess: intel(r) hd graphics >>>> was: intel(r) hd graphics 4000 >>>> was: intel(r) hd graphics family >>>> was: intel(r) hd graphics 4600 >>>> was: intel(r) hd graphics 3000 >>>> was: intel(r) hd graphics 4400 >>>> guess: intel(r) hd graphics family >>>> was: mobile intel(r) 4 series express chipset family >>>> was: intel(r) g41 express chipset >>>> was: intel(r) hd graphics >>>> was: intel(r) q45/q43 express chipset >>>> was: intel(r) 4 series internal chipset >>>> guess: nvidia geforce 210 >>>> was: ati radeon hd 4200 >>>> was: nvidia geforce gt 240 >>>> was: nvidia geforce gt 220 >>>> was: nvidia geforce 310m >>>> was: ati radeon hd 4800 series >>>> guess: microsoft basic render driver >>>> was: intel(r) hd graphics >>>> was: intel(r) hd graphics 4000 >>>> was: intel(r) hd graphics 4600 >>>> was: intel(r) hd graphics family >>>> was: intel(r) hd graphics 4400 >>>> guess: ati mobility radeon hd 5470 >>>> was: ati mobility radeon hd 5650 >>>> was: ati radeon hd 5450 >>>> was: amd radeon r9 200 series >>>> was: amd radeon hd 7700 series >>>> was: amd radeon hd 7800 series >>>> guess: intel hd graphics 4000 >>>> was: intel hd graphics 5000 >>>> was: intel iris >>>> was: intel iris pro >>>> was: chromium >>>> guess: nvidia geforce gtx 660 >>>> was: nvidia geforce gtx 650 >>>> was: nvidia geforce gtx 760 >>>> was: nvidia geforce gtx 550 ti >>>> was: nvidia geforce gtx 770 >>>> was: nvidia geforce gt 630 >>>> guess: nvidia 
geforce 9800 gt >>>> was: nvidia geforce 9500 gt >>>> was: nvidia geforce gts 250 >>>> was: nvidia geforce gtx 260 >>>> was: nvidia geforce 8400 gs >>>> was: nvidia geforce 9600 gt >>>> guess: adreno (tm) 330 >>>> was: adreno (tm) 320 >>>> was: adreno (tm) 305 >>>> was: adreno (tm) 306 >>>> was: chromium >>>> >>>> Considering all characteristics: >>>> >>>> - avg renderers per unique characteristic: 7.7 >>>> - min renderers per unique characteristic: 1 >>>> - max renderers per unique characteristic: 596 >>>> - prediction accuracy: 38% >>>> >>>> Most frequent wrong guessing sample: >>>> >>>> guess: nvidia geforce gtx 760 >>>> was: nvidia geforce gtx 770 >>>> was: nvidia geforce gtx 660 >>>> was: amd radeon r9 200 series >>>> was: amd radeon hd 7800 series >>>> was: amd radeon hd 7700 series >>>> guess: intel(r) hd graphics >>>> was: intel(r) hd graphics 4000 >>>> was: intel(r) hd graphics family >>>> was: intel(r) hd graphics 4600 >>>> was: intel(r) hd graphics 3000 >>>> was: intel(r) hd graphics 4400 >>>> guess: microsoft basic render driver >>>> was: intel(r) hd graphics >>>> was: intel(r) hd graphics 4000 >>>> was: intel(r) hd graphics 4600 >>>> was: intel(r) hd graphics family >>>> was: intel(r) hd graphics 4400 >>>> guess: nvidia geforce 210 >>>> was: ati radeon hd 4200 >>>> was: nvidia geforce gt 240 >>>> was: nvidia geforce gt 220 >>>> was: nvidia geforce 310m >>>> was: ati radeon hd 4800 series >>>> guess: mobile intel(r) 4 series express chipset family >>>> was: intel(r) g41 express chipset >>>> was: intel(r) q45/q43 express chipset >>>> was: intel(r) 4 series internal chipset >>>> was: mobile intel(r) 965 express chipset family >>>> was: intel(r) g45/g43 express chipset >>>> guess: ati mobility radeon hd 5470 >>>> was: ati mobility radeon hd 5650 >>>> was: ati radeon hd 5450 >>>> was: amd radeon r9 200 series >>>> was: amd radeon hd 6450 >>>> was: amd radeon hd 7800 series >>>> guess: intel hd graphics 4000 >>>> was: intel hd graphics 5000 >>>> was: intel 
iris >>>> was: intel iris pro >>>> guess: nvidia geforce gtx 660 >>>> was: nvidia geforce gtx 650 >>>> was: nvidia geforce gtx 760 >>>> was: nvidia geforce gtx 550 ti >>>> was: nvidia geforce gtx 770 >>>> was: nvidia geforce gt 630 >>>> guess: nvidia geforce 9500 gt >>>> was: nvidia geforce 9800 gt >>>> was: nvidia geforce gts 250 >>>> was: nvidia geforce gtx 260 >>>> was: nvidia geforce 8400 gs >>>> was: nvidia geforce 9600 gt >>>> guess: intel(r) hd graphics family >>>> was: intel(r) hd graphics >>>> was: intel(r) graphics media accelerator hd >>>> was: mobile intel(r) hd graphics >>>> was: intel(r) hd graphics br-1004-01y1 >>>> was: intel(r) hd graphics 4000 >>>> >>>> *Conclusion* >>>> >>>> There is some correlation between parameters and GPUs, but it's weak. >>>> It bunches together graphics hardware like the the geforce 9500 gt (134 >>>> gflops) with the geforce gtx 260 (715 gflops), or the geforce 770 (3213 >>>> gflops) with the radeon hd 7700 (~500-100gflops). Worse yet, it >>>> mix&mashes together vendors, which will not help you with identifying >>>> priorities in addressing bugs with your limited developer/support staff >>>> because it doesn't pin a model/vendor. >>>> >>>> In conclusion: QED >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Thu Dec 4 23:00:51 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Fri, 5 Dec 2014 08:00:51 +0100 Subject: [Public WebGL] WEBGL_debug_renderer_info In-Reply-To: References: <547EF1C9.3050105@esri.com> <547F070E.4010503@esri.com> <547F1EFA.8080900@esri.com> Message-ID: I understand the analogy you're trying to evoke, but I maintain it's a completely false equivalency. 
Before I go into thoroughly debunking this latest boogeyman, let me point out that it's odd that, now that the privacy boogeyman is mostly debunked (because nearly half the bits that would be added by a GPU string are already contained in the WebGL parameters), you shift the debate from the privacy boogeyman to the UA-switching boogeyman. The reason UA switching became somewhat infamously popular has to do with a couple of key facts about browsers.

- There weren't (and never were) a whole lot of them: 2-3 at the beginning, and we're still at not many more than half a dozen these days. This makes them an attractive target for switching, because the switching code will be relatively precise and will not deteriorate for a long time (this is where I have to give Microsoft ***HUGE*** credit for retiring MSIE from their UA, kudos guys).
- Browsers routinely try to out-joust each other by adding a plethora of custom features to distinguish themselves from the competition. Combined with the next point, this shows why UA switching enjoys popularity.
- Authoring web content that used the custom features was attractive and easy (because it made your stuff look cool), but using it in browsers that didn't support it made your content look broken. And because there weren't good ways to provide client-side fallbacks, UA switching was often the only way to author web content that used new features yet still produced non-broken content for other UAs.
- Browsers did not need to adhere to any standard of compatibility, and especially in the early days, much of their functionality was never covered by any sort of conformance test.

Despite the best efforts of UAs to destroy the web (a-ha-ha), however, the web hasn't died, and contrary to popular belief, it's not filled with pages that only work on one UA. For sure, sometimes this phenomenon crops up, but everybody involved usually quickly figures out the error of their ways and the problem goes away one way or another. 
WebGL and GPU strings differ from browsers in several very important ways, ways which ensure that UA-style GPU switching is never going to be popular.

- There are thousands of GPUs. Just a one-day sample that's been sanitized contains over 1200 renderers. I could probably find close to 10'000 different GPUs in use were I to go over a year's worth of logs. Switching on GPUs would be pretty hard because there are just so goddamn many of them.
- While GPUs might try adding features to out-joust each other, that doesn't matter much in WebGL, because all features on offer need to go through the standard API (of core and extension functionality), which is the only way to get at those features. There are no hidden features you'll be able to use because you know the GPU.
- Authoring GPU-specific content that exercises a particular performance benefit of a particular card is going to be insanely difficult and unattractive. It's difficult because writing renderers that exercise different renderpaths is a huge challenge that's not for the faint of heart, and it's unattractive because custom tailoring to a super-specific GPU ensures that all that work you put in is basically wasted on 0.0001% of your visitors.
- Authoring WebGL content, you are given full access to any feature you will want to use. You're doing it exclusively through a programming language (not through a markup and styling language), and that programming language not only can, but is required to, talk to the API for defining features in order to work correctly.
- Any super-specific GPU code that somebody would promulgate as a bad practice, and which you'd insinuate would "live forever", is in fact doomed to die pretty quickly. The reason is that GPUs are a boiling pot of changing hardware, and the second you codify some GPU detection in concrete, it starts becoming decrepit and a source of broken code. 
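With thousands of distinct renderer strings in the wild, the statistical use discussed in this thread has to bucket them first. A rough sketch of the stripping/lowercasing/space-substitution methodology described earlier follows; the thread does not spell out the exact rules, so the specific patterns here are assumptions for illustration.

```javascript
// Hypothetical normalization that collapses renderer-string variants
// into one bucket, in the spirit of the methodology described in the
// thread (lowercase, strip backend/formatting noise, unify spacing).
function normalizeRenderer(renderer) {
  let s = renderer.toLowerCase();
  // Unwrap ANGLE-style strings such as
  // "ANGLE (NVIDIA GeForce GTX 760 Direct3D11 vs_5_0 ps_5_0)"
  const angle = s.match(/^angle \((.*)\)$/);
  if (angle) s = angle[1];
  // Drop backend/shader-model suffixes like "direct3d11 vs_5_0 ps_5_0"
  s = s.replace(/\s+direct3d.*$/, "");
  // Collapse whitespace so spacing differences don't split buckets
  return s.replace(/\s+/g, " ").trim();
}
```

Under these assumed rules, an ANGLE-wrapped Direct3D string and a bare GL string for the same card land in the same bucket, which is what makes counting "unique renderers after stripping" meaningful.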
The legitimate and attractive uses of GPU strings, the ones that people would actually use, are in the realm of statistical fuzzy matching and error handling. Other uses might be attempted, but they're quickly going to die off by themselves for the reasons mentioned above. Equating GPU strings to UAs in terms of pitfalls just doesn't work as an analogy. On Thu, Dec 4, 2014 at 10:55 PM, Gregg Tavares wrote: > > > On Thu, Dec 4, 2014 at 11:51 AM, Florian Bösch wrote: > >> Even if you ignore the need to provide support, and just concentrate on >> quantifying device performance differences, the last time we tried, it >> didn't work. It didn't work because nobody could agree on what that should >> be, or how it's going to relevant to a particular usecase, or how it should >> work across browsers. >> >> The reason GPU strings are different is because you don't need them for >> capabilities. You can already feature detect anything you'd like without >> the GPU string, that's fine. You'd be extraordinarily stupid if you'd >> prefer parsing the GPU strings (of which there's thousands) to infer a >> feature, which the GL api will readily tell you about. >> > > Agreed > > >> Nobody is going to do that, it just doesn't make sense. >> > > Provably false. Look at the number of questions and answers on the > internet about looking at the user agent instead of using feature > detection. Most developers will do the first thing the occurs to them. > That's often identify the thing (browser/gpu) not feature detect. It's a > constant battle to get people to stop using user agent and start feature > detecting. I expect it will be the same for GPU id. > > There's all kinds of worst practices that are propagated essentially > forever. All it has to do is appear in one or 2 popular websites and then > it's copied ad infinitum. See http://learningwebgl.com which is still > propogating bad practices since day 1. So is http://threejs.org > > They never go away. 
Even if those websites changed people will be copying > the older examples forever. > > So there's a certain argument that adding more ways for devs to hang > themselves is not a good thing. > > > >> >> There are however some cases where the GPU string is very useful. And >> they have nothing todo with features, and everything with bugs and >> performance. And that's why it was added to Chrome and IE. That's why it >> should also come to Safari and Mozilla. That, and because the support >> usecase is still important, and it's simply not going to happen that >> anybody will come up with a usable device performance characteristic metric >> that works for everyone. >> >> >> >> On Thu, Dec 4, 2014 at 8:42 PM, Gregg Tavares >> wrote: >> >>> I'm not disagreeing with the need for something ... but it's not just >>> privacy that's at issue >>> >>> Adding a GPU id could lead to this: >>> >>> http://webaim.org/blog/user-agent-string-history/ >>> >>> Sure, amazing programmers like you would never only turn on some feature >>> if they saw a certain GPU ID but lesser programmers will and then there >>> will be pressure to lie about GPU ids and ... >>> >>> Maybe that will never happen. But we have user agent history to show us >>> one possible downside. >>> >>> >>> On Thu, Dec 4, 2014 at 3:22 AM, Florian B?sch wrote: >>> >>>> Of course this kind of guessing is only possible because the >>>> information is exposed in the first place. And the guessing accuracy will >>>> deteriorate over time. >>>> >>>> Statistics built from 4 days with roughly a million entries in November >>>> (18th, 19th, 20th and 21st) and then used to categorize yesterdays >>>> (december 3rd) data yields a guessing precision of 40.0%. The same days >>>> from October 39.8%, September 14.4%, and so forth. >>>> >>>> On the other hand, using say the last 8 days and categorizing yesterday >>>> yields a guessing accuracy 43%. 
It's not practical guesses for app >>>> developers (i'd rather know if something was a radeon r9 and not a gtx >>>> 760). But it shows that nearly half of the (uh scary, privacy) bits are >>>> already contained in the webgl parameters. And it's only going to get >>>> better with WebGL 2.0, because that contains twice as many parameters to >>>> query. So as is so often the case, privacy handwaving is again obstructing >>>> those who want to do productive work, while it doesn't significantly impede >>>> those who will find unethical uses for the data. Can we please stop that? A >>>> prime example of this would be querying how many CPU cores a machine has. >>>> This can already be shimmed, there's a perfectly ready made shim for it, >>>> but some vendors refuse to expose the parameter because "uh scary" privacy. >>>> >>>> On Thu, Dec 4, 2014 at 9:14 AM, Florian B?sch wrote: >>>> >>>>> On Thu, Dec 4, 2014 at 5:23 AM, Florian B?sch >>>>> wrote: >>>>> >>>>>> On Wed, Dec 3, 2014 at 10:46 PM, Gregg Tavares >>>>>> wrote: >>>>>> >>>>>>> Sounds like you don't need WEBGL_debug_info then. Just make your DB >>>>>>> using the existing info and see if it correlates with the info you need. If >>>>>>> it does you're done. >>>>>>> >>>>>> I'd really like to see someone at least try this. >>>>>>> >>>>>> The parameters only fuzzily correlate to GPUs. That doesn't help you >>>>>> really in the usecases where you have issues, or try to assess performance >>>>>> or limitations not expressed in parameters up front. I've not measured >>>>>> this, but I'm sure, if I would, It'd amount to exactly that guess. >>>>>> >>>>> >>>>> I've ran trough data from yesterdays log. 
>>>>> >>>>> - Total entries in the log: 386046 >>>>> - Total entries considered (have webgl and have debug renderer >>>>> info): 247704 >>>>> - Unique parameter sets: 176 >>>>> - Combined unique webgl characteristics: 581 >>>>> - Unique renderers count before stripping: 3389 >>>>> - Unique renderers count after stripping: 1292 >>>>> >>>>> The prediction methodology is to take the most populous renderer after >>>>> stripping/lowercasing and space substitution (which improves accuracy >>>>> because it cuts out directx version and string formatting and whatnot). >>>>> >>>>> Considering only parameters (because extensions, extension >>>>> capabilities and precisions are strongly UA dependent): >>>>> >>>>> - avg renderers per parameter: 15.8 >>>>> - min renderers per parameter: 1 >>>>> - max renderers per parameter: 602 >>>>> - prediction accuracy: 35.1% >>>>> >>>>> Most frequent wrong guessing sample: >>>>> >>>>> guess: nvidia geforce gtx 760 >>>>> was: nvidia geforce gtx 770 >>>>> was: nvidia geforce gtx 660 >>>>> was: amd radeon r9 200 series >>>>> was: amd radeon hd 7800 series >>>>> was: amd radeon hd 7700 series >>>>> guess: intel(r) hd graphics >>>>> was: intel(r) hd graphics 4000 >>>>> was: intel(r) hd graphics family >>>>> was: intel(r) hd graphics 4600 >>>>> was: intel(r) hd graphics 3000 >>>>> was: intel(r) hd graphics 4400 >>>>> guess: intel(r) hd graphics family >>>>> was: mobile intel(r) 4 series express chipset family >>>>> was: intel(r) g41 express chipset >>>>> was: intel(r) hd graphics >>>>> was: intel(r) q45/q43 express chipset >>>>> was: intel(r) 4 series internal chipset >>>>> guess: nvidia geforce 210 >>>>> was: ati radeon hd 4200 >>>>> was: nvidia geforce gt 240 >>>>> was: nvidia geforce gt 220 >>>>> was: nvidia geforce 310m >>>>> was: ati radeon hd 4800 series >>>>> guess: microsoft basic render driver >>>>> was: intel(r) hd graphics >>>>> was: intel(r) hd graphics 4000 >>>>> was: intel(r) hd graphics 4600 >>>>> was: intel(r) hd graphics family 
>>>>> was: intel(r) hd graphics 4400 >>>>> guess: ati mobility radeon hd 5470 >>>>> was: ati mobility radeon hd 5650 >>>>> was: ati radeon hd 5450 >>>>> was: amd radeon r9 200 series >>>>> was: amd radeon hd 7700 series >>>>> was: amd radeon hd 7800 series >>>>> guess: intel hd graphics 4000 >>>>> was: intel hd graphics 5000 >>>>> was: intel iris >>>>> was: intel iris pro >>>>> was: chromium >>>>> guess: nvidia geforce gtx 660 >>>>> was: nvidia geforce gtx 650 >>>>> was: nvidia geforce gtx 760 >>>>> was: nvidia geforce gtx 550 ti >>>>> was: nvidia geforce gtx 770 >>>>> was: nvidia geforce gt 630 >>>>> guess: nvidia geforce 9800 gt >>>>> was: nvidia geforce 9500 gt >>>>> was: nvidia geforce gts 250 >>>>> was: nvidia geforce gtx 260 >>>>> was: nvidia geforce 8400 gs >>>>> was: nvidia geforce 9600 gt >>>>> guess: adreno (tm) 330 >>>>> was: adreno (tm) 320 >>>>> was: adreno (tm) 305 >>>>> was: adreno (tm) 306 >>>>> was: chromium >>>>> >>>>> Considering all characteristics: >>>>> >>>>> - avg renderers per unique characteristic: 7.7 >>>>> - min renderers per unique characteristic: 1 >>>>> - max renderers per unique characteristic: 596 >>>>> - prediction accuracy: 38% >>>>> >>>>> Most frequent wrong guessing sample: >>>>> >>>>> guess: nvidia geforce gtx 760 >>>>> was: nvidia geforce gtx 770 >>>>> was: nvidia geforce gtx 660 >>>>> was: amd radeon r9 200 series >>>>> was: amd radeon hd 7800 series >>>>> was: amd radeon hd 7700 series >>>>> guess: intel(r) hd graphics >>>>> was: intel(r) hd graphics 4000 >>>>> was: intel(r) hd graphics family >>>>> was: intel(r) hd graphics 4600 >>>>> was: intel(r) hd graphics 3000 >>>>> was: intel(r) hd graphics 4400 >>>>> guess: microsoft basic render driver >>>>> was: intel(r) hd graphics >>>>> was: intel(r) hd graphics 4000 >>>>> was: intel(r) hd graphics 4600 >>>>> was: intel(r) hd graphics family >>>>> was: intel(r) hd graphics 4400 >>>>> guess: nvidia geforce 210 >>>>> was: ati radeon hd 4200 >>>>> was: nvidia geforce gt 240 >>>>> 
was: nvidia geforce gt 220 >>>>> was: nvidia geforce 310m >>>>> was: ati radeon hd 4800 series >>>>> guess: mobile intel(r) 4 series express chipset family >>>>> was: intel(r) g41 express chipset >>>>> was: intel(r) q45/q43 express chipset >>>>> was: intel(r) 4 series internal chipset >>>>> was: mobile intel(r) 965 express chipset family >>>>> was: intel(r) g45/g43 express chipset >>>>> guess: ati mobility radeon hd 5470 >>>>> was: ati mobility radeon hd 5650 >>>>> was: ati radeon hd 5450 >>>>> was: amd radeon r9 200 series >>>>> was: amd radeon hd 6450 >>>>> was: amd radeon hd 7800 series >>>>> guess: intel hd graphics 4000 >>>>> was: intel hd graphics 5000 >>>>> was: intel iris >>>>> was: intel iris pro >>>>> guess: nvidia geforce gtx 660 >>>>> was: nvidia geforce gtx 650 >>>>> was: nvidia geforce gtx 760 >>>>> was: nvidia geforce gtx 550 ti >>>>> was: nvidia geforce gtx 770 >>>>> was: nvidia geforce gt 630 >>>>> guess: nvidia geforce 9500 gt >>>>> was: nvidia geforce 9800 gt >>>>> was: nvidia geforce gts 250 >>>>> was: nvidia geforce gtx 260 >>>>> was: nvidia geforce 8400 gs >>>>> was: nvidia geforce 9600 gt >>>>> guess: intel(r) hd graphics family >>>>> was: intel(r) hd graphics >>>>> was: intel(r) graphics media accelerator hd >>>>> was: mobile intel(r) hd graphics >>>>> was: intel(r) hd graphics br-1004-01y1 >>>>> was: intel(r) hd graphics 4000
>>>>>
>>>>> *Conclusion*
>>>>>
>>>>> There is some correlation between parameters and GPUs, but it's weak. It bunches together graphics hardware like the geforce 9500 gt (134 gflops) with the geforce gtx 260 (715 gflops), or the geforce gtx 770 (3213 gflops) with the radeon hd 7700 (~500-1000 gflops). Worse yet, it mixes and mashes together vendors, which will not help you identify priorities in addressing bugs with your limited developer/support staff, because it doesn't pin down a model/vendor.
>>>>> In conclusion: QED

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From oet...@ Fri Dec 5 06:34:18 2014 From: oet...@ (Olli Etuaho) Date: Fri, 5 Dec 2014 14:34:18 +0000 Subject: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 Message-ID:

Hi all,

I've started digging into the handling of constant expressions and constant-index-expressions in WebGL, and have found some issues in this area. Consider this example:

uniform float a[10];

for (int i = 1; i < 2; ++i) {
  b += a[i + int(sin(0.0))];
}

IE rejects a shader including this code; ANGLE in Chrome does not. The condition for rejection in IE seems to be that there's a built-in function call in the index expression. This other version is okay according to both browsers:

uniform float a[10];

for (int i = 1; i < 2; ++i) {
  b += a[i + int(1.0+2.0)];
}

It doesn't seem to be clear from the spec whether these shaders should be allowed or not: there's actually an open question in the ESSL 1.00 spec's issues section about whether integer division or floating-point operations should be allowed in index expressions. I think these questions need to be addressed somehow for WebGL 1.0.

My suggestion is to allow these operations, for the sake of avoiding regressions in current implementations and to pave the way for WebGL 2.0, where dynamic indexing is currently in the spec as a valuable new feature. There can be a slight risk from allowing floating-point operations in addition to integer operations, in that they can generate NaN or Inf on some platforms, but it seems even these can be clamped if required, as long as they are converted through the integer type before clamping (float(int(x))). Browsers that wish to optimize index expressions by not adding clamping can still do so for the subset of expressions where that is safe.

All browsers also seem to have other bugs related to constant expressions and indexing; I think I've found 4 different ones so far.
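As a rough illustration of the float(int(x)) clamping idea above: converting through the integer type first collapses NaN/Inf to an ordinary integer, which can then be clamped into bounds. This is a hedged sketch only; `clampIndex` is a hypothetical helper, not spec text or any browser's actual implementation.

```javascript
// Hedged sketch of the float(int(x)) clamping idea: convert the index
// expression through the integer type first, so NaN/Inf collapse to an
// ordinary integer, then clamp into the array's bounds.
function clampIndex(x, arrayLength) {
  const i = Math.trunc(x) | 0; // emulate GLSL int(x); NaN/Inf become 0 here
  return Math.min(Math.max(i, 0), arrayLength - 1);
}

// A float-valued index expression still yields a safe index:
console.log(clampIndex(Math.pow(2.0, 1.5), 10)); // 2
console.log(clampIndex(0 / 0, 10));              // 0 (NaN collapsed)
```

A browser that can prove an expression stays in bounds would simply skip the clamp, as suggested above.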
But these seem to be issues that only require new tests and bugfixes, rather than spec changes. - Regards, Olli -------------- next part -------------- An HTML attachment was scrubbed... URL: From dko...@ Fri Dec 5 07:13:33 2014 From: dko...@ (Daniel Koch) Date: Fri, 5 Dec 2014 15:13:33 +0000 Subject: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 Message-ID: I believe something like e.g. sin(0.0), is actually supposed to be a constant expression as it can be evaluated at compile-time. See issue 10.11 Constant Expressions in the ESSL 1.00 spec. Should built-in functions be allowed in constant expressions? e.g. const float a = sin(1.0); RESOLUTION: Yes, allow built-in functions to be included in constant expressions. Redefinition of built-in functions is now prohibited. User-defined functions are not allowed in constant expressions. So in your examples below, "int(sin(0.0))" and "int(1.0+2.0)" should really be getting the same handling. -Daniel On 2014-12-05 9:34 AM, "Olli Etuaho" > wrote: Hi all, I've started digging into handling of constant expressions and constant-index-expressions in WebGL, and have found some issues in this area. Consider this example: uniform float a[10]; for (int i = 1; i < 2; ++i) { b += a[i + int(sin(0.0))]; } IE rejects a shader including this code, ANGLE in Chrome does not. The condition for rejection in IE seems to be that there's a built-in function call in the index expression, this other version is okay according to both browsers: uniform float a[10]; for (int i = 1; i < 2; ++i) { b += a[i + int(1.0+2.0)]; } It doesn't seem to be clear based on the spec whether these shaders should be allowed or not: there's actually an open question in the ESSL 1.00 spec's issues section whether integer division or floating point operations should be allowed in index expressions. I think these questions need to be addressed somehow for WebGL 1.0. 
My suggestion is to allow these operations for the sake of avoiding regressing current implementations, and to pave way for WebGL 2.0 where dynamic indexing is currently in the spec as a valuable new feature. There can be a slight risk from allowing floating-point operations in addition to integer operations, in that they can generate NaN or Inf on some platforms, but it seems even these can be clamped if required as long as they are converted through the integer type before clamping (float(int(x))). Browsers that wish to optimize index expressions by not adding clamping can still do so for the subset of expressions where that is safe. All browsers also seem to have other bugs related to constant expressions and indexing, I think I've found 4 different ones so far. But these seem to be issues that only require new tests and bugfixes, rather than spec changes. - Regards, Olli -------------- next part -------------- An HTML attachment was scrubbed... URL: From oet...@ Fri Dec 5 07:20:58 2014 From: oet...@ (Olli Etuaho) Date: Fri, 5 Dec 2014 15:20:58 +0000 Subject: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 In-Reply-To: References: Message-ID: <8386e9c618f34c54ac672fbddf96e465@UKMAIL101.nvidia.com> Ah, yes, maybe I should have picked a more complex example. Something like a[int(pow(2.0, i))] illustrates the class of problematic expressions from the point of view of the spec better. -Olli ________________________________ From: Daniel Koch Sent: Friday, December 5, 2014 5:13 PM To: Olli Etuaho; public webgl Subject: Re: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 I believe something like e.g. sin(0.0), is actually supposed to be a constant expression as it can be evaluated at compile-time. See issue 10.11 Constant Expressions in the ESSL 1.00 spec. Should built-in functions be allowed in constant expressions? e.g. const float a = sin(1.0); RESOLUTION: Yes, allow built-in functions to be included in constant expressions. 
Redefinition of built-in functions is now prohibited. User-defined functions are not allowed in constant expressions. So in your examples below, "int(sin(0.0))" and "int(1.0+2.0)" should really be getting the same handling. -Daniel On 2014-12-05 9:34 AM, "Olli Etuaho" > wrote: Hi all, I've started digging into handling of constant expressions and constant-index-expressions in WebGL, and have found some issues in this area. Consider this example: uniform float a[10]; for (int i = 1; i < 2; ++i) { b += a[i + int(sin(0.0))]; } IE rejects a shader including this code, ANGLE in Chrome does not. The condition for rejection in IE seems to be that there's a built-in function call in the index expression, this other version is okay according to both browsers: uniform float a[10]; for (int i = 1; i < 2; ++i) { b += a[i + int(1.0+2.0)]; } It doesn't seem to be clear based on the spec whether these shaders should be allowed or not: there's actually an open question in the ESSL 1.00 spec's issues section whether integer division or floating point operations should be allowed in index expressions. I think these questions need to be addressed somehow for WebGL 1.0. My suggestion is to allow these operations for the sake of avoiding regressing current implementations, and to pave way for WebGL 2.0 where dynamic indexing is currently in the spec as a valuable new feature. There can be a slight risk from allowing floating-point operations in addition to integer operations, in that they can generate NaN or Inf on some platforms, but it seems even these can be clamped if required as long as they are converted through the integer type before clamping (float(int(x))). Browsers that wish to optimize index expressions by not adding clamping can still do so for the subset of expressions where that is safe. All browsers also seem to have other bugs related to constant expressions and indexing, I think I've found 4 different ones so far. 
But these seem to be issues that only require new tests and bugfixes, rather than spec changes. - Regards, Olli -------------- next part -------------- An HTML attachment was scrubbed... URL: From ash...@ Fri Dec 5 08:22:53 2014 From: ash...@ (Ashley Gullen) Date: Fri, 5 Dec 2014 16:22:53 +0000 Subject: [Public WebGL] WEBGL_debug_renderer_info In-Reply-To: References: <547EF1C9.3050105@esri.com> <547F070E.4010503@esri.com> <547F1EFA.8080900@esri.com> Message-ID: > Sounds like you don't need WEBGL_debug_info then. Just make your DB using the existing info and see if it correlates with the info you need. If it does you're done. My point was that it's already possible, but with a lot of boring bureaucratic overhead of maintaining a database of how these parameters correlate, and exposing the string directly would save all this effort. It looks like Florian showed guesses aren't that accurate after all, but that could change with WebGL 2. Also remember all this information can be combined with what Panopticlick already uses for fingerprinting, which already appears to be pretty accurate even without any WebGL information. Regarding avoiding a user-agent-string type situation, I think it's important to point out that while user agent strings are obviously used in all sorts of crazy and ridiculous ways, they are actually useful. In practice from time to time you'll run into some kind of browser bug or spec deviation without any good way to feature-detect it, so you simply have to look at the user agent string to determine the browser or rendering engine in use and run an alternative codepath. There are right and wrong ways to use the user-agent information, but there are still right ways. Without that, you either have to sit tight and let everyone on a certain platform run into a bug (which is often simply untenable), or you have to try to find another way to detect the platform by the other information available to you in a browser context.
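The "last resort" pattern described above might look like the sketch below. This is purely illustrative: the function name, the engine token tested, and the bug it stands in for are all assumptions, not a recommendation of any specific check.

```javascript
// Hedged sketch of UA-based workaround selection, used only when a bug
// cannot be feature-detected. The engine tokens and the hypothetical
// rendering bug are illustrative assumptions.
function needsHypotheticalWorkaround(userAgent) {
  // Suppose some rendering bug ships only in Trident-based browsers and
  // there is no way to feature-detect it: inspecting the user agent
  // string is then the only way to scope the alternative codepath.
  return /Trident\/|MSIE /.test(userAgent);
}

const ua = "Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko";
console.log(needsHypotheticalWorkaround(ua)); // true
```

The point of the argument above is that such a check can be scoped narrowly to the affected engine, instead of giving every user the cautious fallback.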
If the detection is inaccurate, you have to take the cautious approach and give everyone (or an unnecessarily large subset of users) the inferior workaround; then even users with perfectly capable, unaffected systems end up running hacky workaround code because there was no way to identify who is really affected. This is not necessarily a better result. In short, I don't think user agents are an inherently bad idea, and the fact people misuse tools is not a reason to withhold them. I think this is perfectly analogous to the debug_renderer_info WebGL extension. On 3 December 2014 at 21:46, Gregg Tavares wrote:
>
> On Wed, Dec 3, 2014 at 8:13 AM, Ashley Gullen wrote:
>
>> I'm not sure I understand the privacy objections. It's already effective to fingerprint users with the existing information provided by browsers, such as the user agent string, screen dimensions, HTTP Accept header, time zone, and APIs like WebStorage. This is demonstrated by Panopticlick ( https://panopticlick.eff.org/ ), which doesn't even use WebGL at the moment.
>>
>> There's an argument not to make fingerprinting even more effective than it already is, but WebGL already provides a lot of hardware/driver/implementation-specific parameters through things like the various supported WebGL extensions, maximum varyings/uniforms/constants, maximum renderbuffer/texture/cube size, point size range, and even performance profiling. All this is already available to improve the effectiveness of fingerprinting. Although I haven't seen any tests examining precisely how effective this is, there is a lot of diversity out there as shown by sites like webglstats.com, so I think it is likely to be effective especially when combined with the other information available.
>> It is difficult to see how browsers can make this less effective, since the only option to reduce its accuracy is to round some parameters down, and then you can't use the full capability of the hardware.
>>
>> I'd guess that analysis of all those parameters combined with the other information available would allow you to make a very good educated guess at the hardware in use anyway. For example, certain parameters on a desktop system might correlate to certain models of nVidia graphics card and so on. On systems like iOS it is particularly easy, since the screen size and whether it's an iPad or iPhone alone can often exactly identify the graphics chip in use. On other mobile systems often the device model is in the user agent so it can also be identified easily, such as "Nexus 5" appearing in the Nexus 5's user agent string, precisely identifying the fact it is using an Adreno 330 GPU despite any efforts of the browser to mask the WebGL renderer string.
>
> Sounds like you don't need WEBGL_debug_info then. Just make your DB using the existing info and see if it correlates with the info you need. If it does you're done.
>
> I'd really like to see someone at least try this.
>
> The Web shouldn't be full of
>
> if (device == "NVidia 9600") {
> } else if (device == "AMD 1456") {
> } else if (device == "NVidia Quadro 456") {
> } else if (device == "Qualcomm 923") {
> } else if (device == "AMD Radeon 9865") {
> } else if (device == "Intel 4400") {
> } else if (device == "Intel 5000") {
> } else if (device == "Imagination Technologies 523") {
> }
>
> etc. That way lies madness and even more broken websites.
>
>> So to me it seems like trying to hide this information doesn't actually help improve anyone's privacy, while impeding legitimate use cases of trying to work around hardware-specific problems.
>> On 3 December 2014 at 14:32, Johannes Schmid wrote:
>>
>>> I can see that happening on the browser statistics side of things, but I disagree from the point of view of a single application. If the app tells the user it is slow or ugly because he didn't click yes in the little popup window, I'm pretty sure that most users will reload the page and click yes instead. A downside of it is that apps could simply refuse to run without the user agreeing to extended profiling and thus get most people to agree, effectively making the "guard" useless. And then users may want to start spoofing the info instead, and history repeats.
>>>
>>> Again, if there can be a consensus on making more information accessible without any such measures, I'm all for it!
>>>
>>> Joe
>>>
>>> On 03.12.2014 14:15, Florian Bösch wrote:
>>>
>>>> I don't care if the webgl debug renderer info holdouts come up with an "opt-in" solution. I don't care because it's basically the same as no solution. Nobody opts in. Not because they're against it, but because nobody wants to click popups or go search out some obscure menu sub-entry somewhere. So in reality, nobody will do it, and so it's the same as no solution.
>>>>
>>>> On Wed, Dec 3, 2014 at 1:50 PM, Johannes Schmid wrote:
>>>>
>>>> I'm not suggesting that such a dialog box should be introduced to guard any information that is already exposed. As a developer of a high-performance app, I completely agree that the lack of information about the host has severe impacts on the UX at the moment. If more detailed information can be made always available (or perhaps opt-out, as Ben Adams suggested), that would be great in my opinion. But if there are strong arguments against making more information available, I would much prefer an opt-in solution to no solution at all.
>>>> Joe
>>>>
>>>> On 03.12.2014 13:11, Florian Bösch wrote:
>>>>
>>>> If you do such a dialog box, you can kiss browser statistics goodbye, and you won't get most statistics, even from your own site, on your own domain, without cross message posting. Sounds like fun wasting time meandering around in the dark. Enjoy your future.
>>>>
>>>> On Wed, Dec 3, 2014 at 12:19 PM, Johannes Schmid wrote:
>>>>
>>>> This idea has probably been suggested/discussed before, just throwing it in in case it hasn't: When an app queries WebGL for sensitive information, the browser presents the user with a dialog box where the user can prevent the query from returning such information. Apps can fall back to conservative settings if he does, so everybody gets the choice between privacy and optimized performance.
>>>>
>>>> Such dialog boxes already exist e.g. for camera access. In this case, coming up with an easy-to-understand and concise formulation of pros and cons may be quite hard, though.
>>>> Best,
>>>> Joe
>>>>
>>>> On 03.12.2014 11:42, Florian Bösch wrote:
>>>>
>>>> On Tue, Dec 2, 2014 at 10:26 PM, Frank Olivier wrote:
>>>>
>>>> FWIW, there has been some legislative activity wrt fingerprinting recently: http://www.theguardian.com/technology/2014/nov/28/europe-privacy-war-websites-silently-tracking-users [Insert cavalcade of caveats wrt 'obtain the valid consent of the user' here]
>>>>
>>>> You can have any 2 of these 3 things, but not all 3 together:
>>>>
>>>> * A heterogeneous computing device and software ecosystem
>>>> * Good applications (that is, applications which are well adjusted to run on the given population of hardware)
>>>> * Privacy
>>>>
>>>> If you pick a heterogeneous ecosystem and you want good applications, then you have to give applications a lot of information about where they run.
>>>>
>>>> If you pick a heterogeneous ecosystem and privacy, you cannot have good applications, because everybody is just stabbing in the dark.
>>>>
>>>> If you pick privacy and good applications, you cannot have a heterogeneous ecosystem, because you need to fix the platform to prevent app developers from stabbing in the dark (aka the iOS way).
>>>>
>>>> Pushing for privacy, noble as it may seem, is directly sabotaging the quality of applications you will be able to get. Make your pick.
>>>>
>>>> http://en.wikipedia.org/wiki/Device_fingerprint
>>>>
>>>> You could perhaps reduce diversity here by only exposing exact renderer info for older/problematic* GPUs/drivers.
>>>> *As reported by content creators.
>>>>
>>>> It's not about older/problematic devices per-se. This topic has been discussed extensively on this ML before.
>>>>
>>>> * Support handling for users of your app that have problems. Even if they don't call you personally, or open a support ticket, you can still detect when they have an issue, and note down the GPU. Statistically speaking this helps a *lot*. Because let's say 1 user contacts you that your app didn't work for him. You'd have to tell him to tell you the GPU, which is difficult. But now what, you have the GPU, and one user with a problem. But you know that say 20% of your users have some problem, but which ones? Is it the same ones with that GPU? Is it something specific to that GPU? Should you file a conformance test for it? Should you contact a vendor? Should you get one of these GPUs and devise a workaround? How would you know? You don't. It could be that the bug has nothing to do with that specific GPU, that's just coincidence, and of say, 10'000 people who use your app and get issues, exactly one person has that specific GPU, and the other 9'999 have a different one. Great, now you've just wasted a ton of work to help exactly one user. Not that anything's inherently wrong with that, it's just not an efficient use of your support and development resources.
>>>> * Performance estimation: Do you offer a user the HD default or the SD default? Do they come with a GTX-980? HD default, you can allocate hundreds of megabytes of VRAM without a problem, and draw millions of triangles. Intel HD 4000? SD default naturally, and cut back on everything.
It's not a substitute for measuring actual performance, and adjusting as necessary, and it isn't a substitute for letting the user make adjustments. But it is a good way to get some default that's least disappointing.
>>>> * And a bunch of other things.

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From pya...@ Fri Dec 5 08:36:18 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Fri, 5 Dec 2014 17:36:18 +0100 Subject: [Public WebGL] WEBGL_debug_renderer_info In-Reply-To: References: <547EF1C9.3050105@esri.com> <547F070E.4010503@esri.com> <547F1EFA.8080900@esri.com> Message-ID: On Fri, Dec 5, 2014 at 5:22 PM, Ashley Gullen wrote: > My point was that it's already possible, but with a lot of boring > bureaucratic overhead of maintaining a database of how these parameters > correlate, and exposing the string directly would save all this effort. It > looks like Florian showed guesses aren't that accurate after all, > They aren't accurate enough for practical use for important usecases that GPU strings would enable, such as more accurate user support handling, better conformance test targeting, better use of developer/support staff time, better statistical assessment of issues, and so forth. They are accurate enough to give people who want to fingerprint devices for unethical purposes most of the bits they'd want from a GPU string. > but that could change with WebGL 2. Also remember all this information can > be combined with what Panopticlick already uses for fingerprinting, which > already appears to be pretty accurate even without any WebGL information. > WebGL 2.0 will probably improve the guessing accuracy. But I don't think it'd go beyond 60% or so. Most of the "accuracy" is the result of picking the statistical winners of GPUs that happen most often with a certain combination of parameters. It doesn't matter if the guessing accuracy is 40%, or 60%, it's not satisfactory for ethical, productive usecases. It is very useful for unethical ones already, which is why the privacy debate is a red herring. In short I don't think user agents are an inherently bad idea, and the fact > people mis-use tools is not a reason to withhold them. I think this is > perfectly analogous to the debug_renderer_info WebGL extension. 
> I obviously agree with this, but I'd like to emphasize, GPU strings may be in use similar to UAs. But they are a completely different beast than UAs because:
>
> - There's many more of them than UAs
> - They don't determine features, and they don't give access to special features
> - Interacting with features is always programmatic in JS, never implicit through markup/styling
> - They are also more fuzzy than UAs (because of various platform idiosyncrasies)
> - They're less structured than UAs
> - They become decrepit/outdated quicker than a UA detection

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From pya...@ Fri Dec 5 08:39:29 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Fri, 5 Dec 2014 17:39:29 +0100 Subject: [Public WebGL] WEBGL_debug_renderer_info In-Reply-To: References: <547EF1C9.3050105@esri.com> <547F070E.4010503@esri.com> <547F1EFA.8080900@esri.com> Message-ID:

Ah, and one more bullet point on the different nature of UAs and GPU strings:

- The ones most likely to be targeted for special treatment (because of high performance) are also the least attractive to target, because the fewest users possess that pricey hardware. A situation that's not present as such with UAs, because UAs are essentially free. Graphics cards differ a lot and can cost anything from nearly free (integrated Intel) to $1000+.

On Fri, Dec 5, 2014 at 5:36 PM, Florian Bösch wrote: > On Fri, Dec 5, 2014 at 5:22 PM, Ashley Gullen wrote: > >> My point was that it's already possible, but with a lot of boring >> bureaucratic overhead of maintaining a database of how these parameters >> correlate, and exposing the string directly would save all this effort.
It >> looks like Florian showed guesses aren't that accurate after all, >> > They aren't accurate enough for practical use for important usecases that > GPU strings would enable, such as more accurate user support handling, > better conformance test targeting, better use of developer/support staff > time, better statistical assessment of issues, and so forth. > > They are accurate enough to give people who want to fingerprint devices > for unethical purposes most of the bits they'd want from a GPU string. > > >> but that could change with WebGL 2. Also remember all this information >> can be combined with what Panopticlick already uses for fingerprinting, >> which already appears to be pretty accurate even without any WebGL >> information. >> > WebGL 2.0 will probably improve the guessing accuracy. But I don't think > it'd go beyond 60% or so. Most of the "accuracy" is the result of picking > the statistical winners of GPUs that happen most often with a certain > combination of parameters. It doesn't matter if the guessing accuracy is > 40%, or 60%, it's not satisfactory for ethical, productive usecases. It is > very useful for unethical ones already, which is why the privacy debate is > a red herring. > > > In short I don't think user agents are an inherently bad idea, and the >> fact people mis-use tools is not a reason to withhold them. I think this is >> perfectly analogous to the debug_renderer_info WebGL extension. >> > I obviously agree with this, but I'd like to emphasize, GPU strings may be > in use similar to UAs. 
But they are a completely different beast than UAs > because: > > - There's many more of em than UAs > - They don't determine features, and they don't give access to special > features > - Interacting with features is always programmatic in JS, never > implicit trough markup/styling > - They are also more fuzzy than UAs (because of various platform > idiosyncracies) > - They're less structured than UAs > - They become decrepit/outdated quicker than a UA detection > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Fri Dec 5 13:08:33 2014 From: kbr...@ (Kenneth Russell) Date: Fri, 5 Dec 2014 13:08:33 -0800 Subject: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 In-Reply-To: <8386e9c618f34c54ac672fbddf96e465@UKMAIL101.nvidia.com> References: <8386e9c618f34c54ac672fbddf96e465@UKMAIL101.nvidia.com> Message-ID: Olli, It looks like you've found an underspecified area of ESSL 1.0. While "Indexing of Arrays, Vectors and Matrices" in Appendix A of GLSL ES 1.0.17 doesn't explicitly mention built-in functions, it also doesn't explicitly forbid them. It also doesn't say anything about calling user-defined functions, as section 10.11 does. I'm not sure what the best path here is. If WebGL explicitly starts forbidding the use of these builtins, sophisticated content like ShaderToy and Maps might break. If it specifies that they're allowed, a significant number of tests will have to be added, which are likely to fail on at least some drivers. Does the drawElements test suite that's part of the ES 3.0 conformance suite cover this area at all? See https://android.googlesource.com/platform/external/deqp/ . It would help to know that because that'd indicate how robust ES 3.0 implementations are to these sorts of constructs. I'm inclined to suggest not making any spec changes to WebGL 1.0 in this area because it'd be a big distraction to getting WebGL 2.0 implementations out the door. 
-Ken On Fri, Dec 5, 2014 at 7:20 AM, Olli Etuaho wrote: > Ah, yes, maybe I should have picked a more complex example. Something like > > > a[int(pow(2.0, i))] > > > illustrates the class of problematic expressions from the point of view of > the spec better. > > > -Olli > > ________________________________ > From: Daniel Koch > Sent: Friday, December 5, 2014 5:13 PM > To: Olli Etuaho; public webgl > Subject: Re: [Public WebGL] Ambiguity in indexing restriction spec in WebGL > 1.0 > > I believe something like e.g. sin(0.0), is actually supposed to be a > constant expression as it can be evaluated at compile-time. > See issue 10.11 Constant Expressions in the ESSL 1.00 spec. > > > Should built-in functions be allowed in constant expressions? e.g. > > const float a = sin(1.0); > > RESOLUTION: Yes, allow built-in functions to be included in constant > expressions. Redefinition of > > built-in functions is now prohibited. User-defined functions are not allowed > in constant expressions. > > > So in your examples below, "int(sin(0.0))" and "int(1.0+2.0)" should really > be getting the same handling. > > -Daniel > > > On 2014-12-05 9:34 AM, "Olli Etuaho" wrote: > > Hi all, > > > I've started digging into handling of constant expressions and > constant-index-expressions in WebGL, and have found some issues in this > area. Consider this example: > > > uniform float a[10]; > > > for (int i = 1; i < 2; ++i) { > > b += a[i + int(sin(0.0))]; > > } > > > IE rejects a shader including this code, ANGLE in Chrome does not. 
The > condition for rejection in IE seems to be that there's a built-in function > call in the index expression; this other version is okay according to both > browsers: > > > uniform float a[10]; > > > for (int i = 1; i < 2; ++i) { > > b += a[i + int(1.0+2.0)]; > > } > > > It doesn't seem to be clear based on the spec whether these shaders should > be allowed or not: there's actually an open question in the ESSL 1.00 spec's > issues section whether integer division or floating-point operations should > be allowed in index expressions. I think these questions need to be > addressed somehow for WebGL 1.0. > > > My suggestion is to allow these operations for the sake of avoiding > regressing current implementations, and to pave the way for WebGL 2.0, where > dynamic indexing is currently in the spec as a valuable new feature. There > can be a slight risk from allowing floating-point operations in addition to > integer operations, in that they can generate NaN or Inf on some platforms, > but it seems even these can be clamped if required as long as they are > converted through the integer type before clamping (float(int(x))). Browsers > that wish to optimize index expressions by not adding clamping can still do > so for the subset of expressions where that is safe. > > > All browsers also seem to have other bugs related to constant expressions > and indexing; I think I've found 4 different ones so far. But these seem to > be issues that only require new tests and bugfixes, rather than spec > changes.
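[Editor's note: the clamping described above can be modeled outside of GLSL. A minimal JavaScript sketch — hypothetical helper name, not part of any browser — of clamping a float-derived index through the integer type:]

```javascript
// Hypothetical model of the clamping described above: the float index
// expression is first converted through the integer type (GLSL int()
// truncates toward zero), then clamped to the declared array bounds.
// NaN/Inf handling is platform-dependent in GLSL; here they map to 0.
function clampIndex(x, arraySize) {
  const i = Number.isFinite(x) ? Math.trunc(x) : 0;
  return Math.min(Math.max(i, 0), arraySize - 1);
}

console.log(clampIndex(3.9, 10)); // 3
console.log(clampIndex(NaN, 10)); // 0
console.log(clampIndex(1e9, 10)); // 9
```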
> > - > > Regards, Olli ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Fri Dec 5 23:44:57 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Sat, 6 Dec 2014 08:44:57 +0100 Subject: [Public WebGL] WEBGL_subscribe_uniform extension Message-ID: A new proposal was introduced on the 13th of October to provide a mechanism for late transfer of input values to uniforms: https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_subscribe_uniform/ The idea to provide a mechanism for this problem is good because it cuts down on the latency between input and display. The latency arises because the input value is provided as early as (or earlier than) the start of the frame, but rendering might not happen until the end of the frame. To summarize how it's supposed to work in this extension: create a value buffer, bind a value buffer, subscribe the value buffer for setup. Then later, bind that value buffer again, call a populate function and then send it to a quasi uniform setter (not using the value buffer at all). I think there are several issues with that extension. *= Semantic complexity =* There's no inherent reason to use the semantic it has. The purpose is to transfer a value that might be more up to date to a uniform at the time things are actually drawn. This only needs a way to indicate "this value -> this uniform". In the example of mouse handling, the most common way to do it today would be to pass the X and Y value of the mouse to a uniform, either as uniform2f(mouseX, mouseY) or uniform2fv(mousePosition); Ideally the extension would be structured in a way that makes it easy to support this semantic, without having to change the underlying code.
This would allow for providing a transparent fallback without dealing with all the complexity. I believe a semantic like this would be much more agreeable: // at initialization mousePosition = mouse.getLateValueProxy2f(); // at rendering gl.uniform2fv(yourUniformLocation, mousePosition); Because then you can switch the code about how to treat the mouse position, without having to change your renderer. *= Typedness =* The extension does not deal with the typedness of uniforms. Uniforms can have any of a variety of types (float, vec2, vec3, vec4 etc.) and it's usually good practice to precisely match the type the shader expects with the type you're passing in. *= Other input value sources =* The extension states that only the mouse is supported, and introduces a constant (MOUSE_POSITION) to deal with that case. This means that for any other event source, a new extension needs to be introduced, and the array of "older" WEBGL_subscribe_uniform1, WEBGL_subscribe_uniform2, WEBGL_subscribe_uniform3 etc. extensions are still hanging about. Other event sources could be things like: - gamepads (see gamepad API) - WebVR HMD values - tablet input - touch input - etc. It seems to be another instance of a mismatch between what extensions are for and what browsers can define. *= Transforms =* It is not always the case that values are put into a shader unmodified, even if you want to reduce latency. Input devices can be noisy and you might want to apply some smoothing, depending on context. You might want to switch the severity of the smoothing, and you might want to vary the time-dependent component. It's also the case that for high-frequency event sources, even if the value is the same as in the previous frame, some movement has happened in the interim. In cases like this you'd capture every event as it arrives, and apply some intra-frame smoothing to the values so that you will not lose signals between frames.
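[Editor's note: a rough sketch of that capture-and-smooth pattern — hypothetical names, with an exponential moving average standing in for whatever smoothing fits the context:]

```javascript
// Hypothetical sketch of intra-frame smoothing: every input sample is
// captured as it arrives (from the event handler), then folded into a
// single value per frame, so events between frames are never dropped.
function makeSmoother(alpha) {
  let value = null;
  return {
    push(sample) { // called from the input event handler
      value = value === null ? sample : alpha * sample + (1 - alpha) * value;
    },
    read() { return value; } // called once per frame, before drawing
  };
}

const s = makeSmoother(0.5);
[0, 4, 8].forEach(v => s.push(v)); // three events within one frame
console.log(s.read()); // 5
```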
The extension as proposed cannot satisfy this use case, whereas event callbacks can. *= Alternative proposal =* I propose that for this functionality, no extension is needed, and I will show how that can be smoothly accomplished. 1. The browser can provide an interface to obtain device value proxies, such as mouse.getLateValueProxy2f(), gamepad.someAxis.getLateValueProxy2f() etc. 2. These proxies behave as lists of their specified value sizes, i.e. myproxy[0] etc. if you want to access the values from them in JS (the value in them would obviously be non-late, but it's still useful) 3. The proxies, when passed to a vector uniform, are recognized by the browser to be late value bindings and are properly late populated and transferred. Since the browser implements the value proxy interface, it's supposed to recognize that a late binding value wish is expressed by the developer here. But in case the WebGL part of that behavior is not implemented, a late value binding proxy would also behave as an ordinary list, and so would still work (albeit without the late binding). 4. Late value proxies support an event handler, as in myProxy.addEventHandler('change-transform', function(){ ... }) which gets called with an input value event that carries a precise timestamp of this event (expressed in harmony with performance.now()). The event would have the members timestamp, inValues and outValues (analogous to the Web Audio API's filter function), and the event handler could provide transformations in that way. The default transformation is no transformation. Structuring it this way solves several problems: - The semantic is greatly simplified - The browser can specify and offer additional event sources as desired without having to respecify an extension every time it happens. A specification on the behavior, event handlers, event and semantic of late value binding proxies can be provided and revised independently of WebGL.
- The user can provide transparent and automatic fallback for late value binding. - WebGL support for late value binding is not a prerequisite (but naturally strongly recommended) - Typedness is properly handled - No extension is needed (and no change to the specification is introduced) -------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Sun Dec 7 05:55:11 2014 From: khr...@ (Mark Callow) Date: Sun, 7 Dec 2014 22:55:11 +0900 Subject: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 In-Reply-To: References: Message-ID: <9EA301E5-9CE8-4EEC-94EB-18EC0F8FDDE4@callow.im> > On Dec 6, 2014, at 12:13 AM, Daniel Koch wrote: > > I believe something like e.g. sin(0.0), is actually supposed to be a constant expression as it can be evaluated at compile-time. > See issue 10.11 Constant Expressions in the ESSL 1.00 spec. > > Should built-in functions be allowed in constant expressions? e.g. > const float a = sin(1.0); > RESOLUTION: Yes, allow built-in functions to be included in constant expressions. Redefinition of > built-in functions is now prohibited. User-defined functions are not allowed in constant expressions. > > So in your examples below, "int(sin(0.0))" and "int(1.0+2.0)" should really be getting the same handling. > > Implementations are only required to support constant-index expressions, not constant expressions, in for loops, in fragment-shader for loops at least. I thought the spec and grammar was quite clear on the definition of a constant-index-expression, but IANACE. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail URL: From oet...@ Mon Dec 8 07:07:02 2014 From: oet...@ (Olli Etuaho) Date: Mon, 8 Dec 2014 15:07:02 +0000 Subject: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 In-Reply-To: <9EA301E5-9CE8-4EEC-94EB-18EC0F8FDDE4@callow.im> References: ,<9EA301E5-9CE8-4EEC-94EB-18EC0F8FDDE4@callow.im> Message-ID: <635e45c2469d406b8ca3a169020468e2@UKMAIL101.nvidia.com> Mark: the first thing the spec says about constant-index expressions is "Definition: constant-index expressions are a superset of constant-expressions". The unclear part is what the spec means when it says "Expressions composed of both of the above", referring to constant expressions and loop indices. Composed how? Operators or calls are also implicitly needed to compose an expression, but which operations are allowed is not mentioned. It could be interpreted so that as soon as you operate on a loop index, it becomes a constant-index-expression and the intermediate value can't be operated on further to form another constant-index-expression. Alternatively, it could be interpreted so that multiple intermediate operations involving indices and constant expressions are allowed. I agree that this is a slight distraction from WebGL 2 work, but it's always a problem when IE and ANGLE differ in their interpretations of the spec. Of course regressing compatibility with content is a concern, but between browsers the compatibility is already broken. Maybe the IE team could comment on how frequently they see shaders failing because of this? The most complex indexing expression I found by sampling 20 featured/popular shaders on ShaderToy was [i + 1], so I don't think there would be that much breakage even if the rules are tightened to only allow integer addition and multiplication, for example. 
It will also help with fixing other problems related to constant expressions if this is clearly specified, since some of the code is shared between processing constant-index-expressions and constant expressions. -Olli ________________________________ From: Mark Callow Sent: Sunday, December 7, 2014 3:55 PM To: Daniel Koch Cc: Olli Etuaho; public webgl Subject: Re: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 * PGP Signed by an unknown key On Dec 6, 2014, at 12:13 AM, Daniel Koch > wrote: I believe something like e.g. sin(0.0), is actually supposed to be a constant expression as it can be evaluated at compile-time. See issue 10.11 Constant Expressions in the ESSL 1.00 spec. Should built-in functions be allowed in constant expressions? e.g. const float a = sin(1.0); RESOLUTION: Yes, allow built-in functions to be included in constant expressions. Redefinition of built-in functions is now prohibited. User-defined functions are not allowed in constant expressions. So in your examples below, "int(sin(0.0))" and "int(1.0+2.0)" should really be getting the same handling. Implementations are only required to support constant-index expressions, not constant expressions, in for loops, in fragment-shader for loops at least. I thought the spec and grammar was quite clear on the definition of a constant-index-expression, but IANACE. Regards -Mark * Unknown Key * 0x3F001063 -------------- next part -------------- An HTML attachment was scrubbed... URL: From org...@ Mon Dec 8 09:57:06 2014 From: org...@ (Owen Glofcheski) Date: Mon, 8 Dec 2014 09:57:06 -0800 Subject: [Public WebGL] WEBGL_subscribe_uniform extension In-Reply-To: References: Message-ID: > > *= Typedness =* > > The extension does not deal with typedness of uniforms. Uniforms can have > any of a variety of types (float, vec2, vec3, vec4 etc.) and it's usually > good practice to precisely match the type that the shader expects, with the > type you're sticking in. 
> The typedness of the uniforms is inherent to the target they are using to populate the uniform (e.g. MOUSE_POSITION is a vec2 or ivec2). The user has to match a valid type, and the browser enforces correct usage. *= Other input value sources =* > > The extension states that only the mouse is supported, and introduces a > constant (MOUSE_POSITION) to deal with that case. This means that for any > other event source, a new extension needs to be introduced, and the array > of "older" WEBGL_subscribe_uniform1, WEBGL_subscribe_uniform2, > WEBGL_subscribe_uniform3 etc. extensions are still hanging about. > > Other event sources could be things like: > > - gamepads (see gamepad API) > - WebVR HMD values > - tablet input > - touch input > - etc. > > It seems to be another instance of a mismatch of what extensions are for, > and what browsers can define. > I was under the impression extension specifications could be versioned > based on revision. Is there a reason we don't allow this? > The supported constants would be defined by which revision is specified > in the active WebGL spec. *= Transforms =* > > It is not always the case that values are put into a shader unmodified, > even if you want to reduce latency. Input devices can be noisy and you > might want to apply some smoothing, depending on context. You might want to > switch the severity of the smoothing, and you might want to vary the time > dependent component. > > It's also the case that for high frequency event sources, even if the > value is the same as from the previous frame, that some movement has > happened in the interrim. In cases like this you'd capture every event as > it arrives, and apply some intra-frame smoothing to the values so that you > will not lose signals between frames. > > The extension as proposed cannot satisfy this usecase, whereas event > callbacks can.
> Technically you could still perform the operation using multiple buffers in time, however, this isn't a very compelling use of the extension. You're right tho, the ability to retrieve values is one of the sore points of the extension. *= Alternative proposal =* > > I propose that for this functionality, no extension is needed and I will > show how that can be smoothly acomplished. > > 1. The browser can provide an interface to obtain device value > proxies, such as mouse.getLateValueProxy2f(), > gamepad.someAxis.getLateValueProxy2f() etc. > 2. These proxies behave as lists of their specified value sizes, i.e. > myproxy[0] etc. if you want to access the values from them in JS (the value > in them would obviously be non-late, but it's still useful) > 3. The proxies, when passed to a vector uniform, are recognized by the > browser to be late value bindings and are properly late populated and > transferred. Since the browser implements the value proxy interface, it's > supposed to recognize that a late binding value wish is expressed by the > developer here. But just in case that the WebGL part of that behavior is > not implemented, a late value binding proxy would also behave as an > ordinary list, and so would still work (albeit without the late binding). > 4. Late value proxies support an event handler, as in > myProxy.addEventHandler('change-transform', function(){ ... }) which gets > called with an input value event that expresses a precise timestamp of this > event (expressed in harmony of performance.now()). The event would have two > members, timestap and inValues and outValues (analogous to the Web Audio > Data APIs filter function), and the event handler could provide > transformations in that way. The default transformation is no > transformation. 
> > Structuring it this way solves several problems: > > - The semantic is greatly simplified > - The browser can specify and offer additional event sources as > desired without having to respecify an extension everytime it happens. A > specification on the behavior, event handlers, event and semantic of late > value binding proxies can be provided and revisioned independently of WebGL. > - The user can provide transparent and automatic fallback for late > value binding. > - WebGL support for late value binding is not a prerequisite (but > naturally strongly recommended) > - Typedness is properly handled > - No extension is needed (and no change to the specification is > introduced) > > Would the specification not need to be updated to specify uniform[1-4][iv]v behavior when giving a deferred value? One main impetus for driving this forward as a WebGL extension is to demonstrate value to driver vendors so they might potentially implement similar functionality in the future (decreasing perceived latency further). On Fri, Dec 5, 2014 at 11:44 PM, Florian B?sch wrote: > A new proposal was introduced on the 13th of October to provide a > mechanism for late transfer of input values to uniforms: > https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_subscribe_uniform/ > > The idea to provide a mechanism for this problem is good because it cuts > down on the latency between input and display. The latency arises because > the input value is provided from as early (or earlier) than the start of > the frame, but rendering might not happen until at the end of the frame. > > To summarize how it's supposed to work in this extension: Create a value > buffer, bind a value buffer, subscribe the value buffer for setup. Then > later, bind that value buffer again, call a populate function and then send > it to a quasi uniform setter (not using the value buffer at all). > > I think there's several issues with that extension. 
> > *= Semantic complexity =* > > There's no inherent reason to use the semantic it has. The purpose is to > transfer a value that might be more up to date to a uniform at the time > things are actually drawn. This only needs a way to indicate "this value -> > this uniform". > > On the example of mouse handling, the most common way to do it today would > be to pass the X and Y value of the mouse to a uniform, either as > uniform2f(mouseX, mouseY) or uiform2fv(mousePosition); > > Ideally the extension would be structured in a way, that makes it easy to > support this semantic, without having to change the underlying code. This > would allow for providing a transparent fallback without dealing with all > the complexity. > > I believe a semantic like this would be much more agreeable: > > // at initialization > mousePosition = mouse.getLateValueProxy2f(); > > // at rendering > gl.uniform2fv(yourUniformLocation, mousePosition); > > Because then you can switch the code about how to treat mouse position, > without having to change your renderer. > > *= Typedness =* > > The extension does not deal with typedness of uniforms. Uniforms can have > any of a variety of types (float, vec2, vec3, vec4 etc.) and it's usually > good practice to precisely match the type that the shader expects, with the > type you're sticking in. > > *= Other input value sources =* > > The extension states that only the mouse is supported, and introduces a > constant (MOUSE_POSITION) to deal with that case. This means that for any > other event source, a new extension needs to be introduced, and the array > of "older" WEBGL_subscribe_uniform1, WEBGL_subscribe_uniform2, > WEBGL_subscribe_uniform3 etc. extensions are still hanging about. > > Other event sources could be things like: > > - gamepads (see gamepad API) > - WebVR HMD values > - tablet input > - touch input > - etc. > > It seems to be another instance of a mismatch of what extensions are for, > and what browsers can define. 
> > *= Transforms =* > > It is not always the case that values are put into a shader unmodified, > even if you want to reduce latency. Input devices can be noisy and you > might want to apply some smoothing, depending on context. You might want to > switch the severity of the smoothing, and you might want to vary the time > dependent component. > > It's also the case that for high frequency event sources, even if the > value is the same as from the previous frame, that some movement has > happened in the interrim. In cases like this you'd capture every event as > it arrives, and apply some intra-frame smoothing to the values so that you > will not lose signals between frames. > > The extension as proposed cannot satisfy this usecase, whereas event > callbacks can. > > *= Alternative proposal =* > > I propose that for this functionality, no extension is needed and I will > show how that can be smoothly acomplished. > > 1. The browser can provide an interface to obtain device value > proxies, such as mouse.getLateValueProxy2f(), > gamepad.someAxis.getLateValueProxy2f() etc. > 2. These proxies behave as lists of their specified value sizes, i.e. > myproxy[0] etc. if you want to access the values from them in JS (the value > in them would obviously be non-late, but it's still useful) > 3. The proxies, when passed to a vector uniform, are recognized by the > browser to be late value bindings and are properly late populated and > transferred. Since the browser implements the value proxy interface, it's > supposed to recognize that a late binding value wish is expressed by the > developer here. But just in case that the WebGL part of that behavior is > not implemented, a late value binding proxy would also behave as an > ordinary list, and so would still work (albeit without the late binding). > 4. Late value proxies support an event handler, as in > myProxy.addEventHandler('change-transform', function(){ ... 
}) which gets > called with an input value event that expresses a precise timestamp of this > event (expressed in harmony of performance.now()). The event would have two > members, timestap and inValues and outValues (analogous to the Web Audio > Data APIs filter function), and the event handler could provide > transformations in that way. The default transformation is no > transformation. > > Structuring it this way solves several problems: > > - The semantic is greatly simplified > - The browser can specify and offer additional event sources as > desired without having to respecify an extension everytime it happens. A > specification on the behavior, event handlers, event and semantic of late > value binding proxies can be provided and revisioned independently of WebGL. > - The user can provide transparent and automatic fallback for late > value binding. > - WebGL support for late value binding is not a prerequisite (but > naturally strongly recommended) > - Typedness is properly handled > - No extension is needed (and no change to the specification is > introduced) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben...@ Mon Dec 8 10:42:07 2014 From: ben...@ (Ben Constable) Date: Mon, 8 Dec 2014 18:42:07 +0000 Subject: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 In-Reply-To: <635e45c2469d406b8ca3a169020468e2@UKMAIL101.nvidia.com> References: ,<9EA301E5-9CE8-4EEC-94EB-18EC0F8FDDE4@callow.im> <635e45c2469d406b8ca3a169020468e2@UKMAIL101.nvidia.com> Message-ID: I don't have a large amount of data on how frequently shaders fail because of this. From a cursory glance playing with shaders, Chrome and IE both fail to compile this: const float n = sin(1.0); And according to 10.11, they should. So we at least have a bug here. But we don't have any test failures that I know of here, so I think that we have a test gap as well.
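[Editor's note: issue 10.11's resolution amounts to saying a compiler may fold such builtin calls at compile time. A rough JavaScript model of that folding — hypothetical, not real compiler code:]

```javascript
// Hypothetical model of constant folding for ESSL builtins: if the
// function is a known builtin and every argument is already a numeric
// constant, the call collapses to a literal at "compile" time.
const foldableBuiltins = { sin: Math.sin, cos: Math.cos, pow: Math.pow };

function foldConstantCall(name, args) {
  if (!(name in foldableBuiltins) || !args.every(Number.isFinite)) {
    return null; // not a constant expression; leave for runtime
  }
  return foldableBuiltins[name](...args);
}

// const float a = sin(1.0); -> a becomes a literal
console.log(foldConstantCall('sin', [1.0]));
// user-defined functions are not foldable (10.11 forbids them)
console.log(foldConstantCall('myFunc', [1.0])); // null
```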
Section 5.10 is pretty clear about builtin functions (including which ones are not usable in constant expressions). When it comes to constant-index-expressions, my interpretation is that it is largely the same as constant-expression, but with loop index variables for well defined loops able to be used as if they were a const variable. From: owners-public_webgl...@ [mailto:owners-public_webgl...@] On Behalf Of Olli Etuaho Sent: Monday, December 8, 2014 7:07 AM To: Mark Callow; Daniel Koch Cc: public webgl Subject: RE: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 Mark: the first thing the spec says about constant-index expressions is "Definition: constant-index expressions are a superset of constant-expressions". The unclear part is what the spec means when it says "Expressions composed of both of the above", referring to constant expressions and loop indices. Composed how? Operators or calls are also implicitly needed to compose an expression, but which operations are allowed is not mentioned. It could be interpreted so that as soon as you operate on a loop index, it becomes a constant-index-expression and the intermediate value can't be operated on further to form another constant-index-expression. Alternatively, it could be interpreted so that multiple intermediate operations involving indices and constant expressions are allowed. I agree that this is a slight distraction from WebGL 2 work, but it's always a problem when IE and ANGLE differ in their interpretations of the spec. Of course regressing compatibility with content is a concern, but between browsers the compatibility is already broken. Maybe the IE team could comment on how frequently they see shaders failing because of this? The most complex indexing expression I found by sampling 20 featured/popular shaders on ShaderToy was [i + 1], so I don't think there would be that much breakage even if the rules are tightened to only allow integer addition and multiplication, for example. 
It will also help with fixing other problems related to constant expressions if this is clearly specified, since some of the code is shared between processing constant-index-expressions and constant expressions. -Olli ________________________________ From: Mark Callow > Sent: Sunday, December 7, 2014 3:55 PM To: Daniel Koch Cc: Olli Etuaho; public webgl Subject: Re: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 * PGP Signed by an unknown key On Dec 6, 2014, at 12:13 AM, Daniel Koch > wrote: I believe something like e.g. sin(0.0), is actually supposed to be a constant expression as it can be evaluated at compile-time. See issue 10.11 Constant Expressions in the ESSL 1.00 spec. Should built-in functions be allowed in constant expressions? e.g. const float a = sin(1.0); RESOLUTION: Yes, allow built-in functions to be included in constant expressions. Redefinition of built-in functions is now prohibited. User-defined functions are not allowed in constant expressions. So in your examples below, "int(sin(0.0))" and "int(1.0+2.0)" should really be getting the same handling. Implementations are only required to support constant-index expressions, not constant expressions, in for loops, in fragment-shader for loops at least. I thought the spec and grammar was quite clear on the definition of a constant-index-expression, but IANACE. Regards -Mark * Unknown Key * 0x3F001063 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Mon Dec 8 11:01:15 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Mon, 8 Dec 2014 20:01:15 +0100 Subject: [Public WebGL] WEBGL_subscribe_uniform extension In-Reply-To: References: Message-ID: On Mon, Dec 8, 2014 at 6:57 PM, Owen Glofcheski wrote: > > The typedness of the uniforms are inherent to the target they are using to > populate the uniform, (e.g. MOUSE_POSITION is a vec2 or ivec2). > The user has to match a valid type, and the browser enforces correct usage. 
> In principle that's correct, but it gets murkier if you've got more than 2 or 4 axes to manage. Each uniform is 4-word aligned, so if you pass a vec2, you've consumed one vec4 slot. For this reason it's often the case that you'd want to pack things, because uniform slots are limited. Not sure if you should leave it up to the user how to pack it. > I was under the impression extension specifications could be versioned > based on revision. Is there a reason we don't allow this? > The supported constants would be defined by which reversion is specified > in the active WebGL spec. > Afaik you can't version extensions, for several reasons. - It's unprecedented; ES and GL define separate extensions if an extension needs an update, as in ARB_internal_format_query, ARB_internal_format_query2, ARB_transform_feedback, ARB_transform_feedback2, ARB_transform_feedback3, EXT_sparse_texture, EXT_sparse_texture2 etc. - In WebGL (or ES/GL) there's no mechanism to request a specific extension by version. It's gl.getExtension('name') - In WebGL (or ES/GL) there's no mechanism to discover the version of an extension you've gotten. As in, extension.version isn't a standard. - The goal of an extension is to get ratified by Khronos. It's my understanding that this sets it in stone. It can never be removed, nor can it be updated. > Technically you could still perform the operation using multiple buffers > in time, however, this isn't a very compelling use of the extension. > You're right tho, the ability to retrieve values is one of the sore points > of the extension. > If you're enforcing unmodified input pass-through, you're basically limiting it to a very narrow use case that's mostly just VR-HMDs, which are missing from the extension. > Would the specification not need to be updated to specify > uniform[1-4][iv]v behavior when giving a deferred value? > Maybe.
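[Editor's note: as an aside on the packing point above (a vec2 uniform still consumes a full vec4 slot), a hypothetical sketch of sharing one vec4 slot between two vec2 inputs:]

```javascript
// Hypothetical packing sketch: uniforms are vec4-aligned, so mouse x/y
// and one gamepad stick can share a single vec4 slot instead of
// occupying two slots as separate vec2 uniforms.
const packed = new Float32Array(4); // one vec4 slot

function packInputs(mouseX, mouseY, stickX, stickY) {
  packed[0] = mouseX;  // .xy = mouse position
  packed[1] = mouseY;
  packed[2] = stickX;  // .zw = gamepad stick
  packed[3] = stickY;
  return packed;       // would be passed to gl.uniform4fv(location, packed)
}

console.log(Array.from(packInputs(10, 20, -1, 1))); // [10, 20, -1, 1]
```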
But even if it's required, you can write an extension that's basically just modifying the uniform methods to accept a deferred value. You don't need to specify where the browser gets that value from, what you can do with it otherwise, or what it means, which enables you to have a separate deferred-value specification that can be updated without having to update the extension. > One main impetus for driving this forward as a WebGL extension is to > demonstrate value to driver vendors so they might potentially implement > similar functionality in the future (decreasing perceived latency further). > I understand that motivation, and it's a good thing to do. My feedback aims to: - Prevent extension proliferation (let's face it, nobody likes extensions terribly much) - Inspire better semantics which are easier to use. - Make it more flexible and future-proof - Cover more use cases -------------- next part -------------- An HTML attachment was scrubbed... URL: From oet...@ Mon Dec 8 11:54:09 2014 From: oet...@ (Olli Etuaho) Date: Mon, 8 Dec 2014 19:54:09 +0000 Subject: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 In-Reply-To: References: ,<9EA301E5-9CE8-4EEC-94EB-18EC0F8FDDE4@callow.im> <635e45c2469d406b8ca3a169020468e2@UKMAIL101.nvidia.com>, Message-ID: <798d0073e77e4f15919d1a34ed4779d2@UKMAIL101.nvidia.com> That issue with assigning constant expressions containing calls to built-ins to constant floats is one thing that I was planning to add a test for. Ben, is your interpretation of constant-index-expressions also what IE is moving towards in the WebGL implementation? I could try to come up with more data to see what the impact of disallowing complex index expressions would be for existing shaders.
________________________________ From: Ben Constable Sent: Monday, December 8, 2014 8:42 PM To: Olli Etuaho; Mark Callow; Daniel Koch Cc: public webgl Subject: RE: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 I don't have a large amount of data on how frequently shaders fail because of this. From a cursory glance playing with shaders, Chrome and IE both fail to compile this: const float n = sin(1.0); And according to issue 10.11, they should accept it. So we at least have a bug here. But we don't have any test failures that I know of here, so I think that we have a test gap as well. Section 5.10 is pretty clear about builtin functions (including which ones are not usable in constant expressions). When it comes to constant-index-expressions, my interpretation is that it is largely the same as constant-expression, but with loop index variables for well-defined loops able to be used as if they were a const variable. From: owners-public_webgl...@ [mailto:owners-public_webgl...@] On Behalf Of Olli Etuaho Sent: Monday, December 8, 2014 7:07 AM To: Mark Callow; Daniel Koch Cc: public webgl Subject: RE: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 Mark: the first thing the spec says about constant-index expressions is "Definition: constant-index expressions are a superset of constant-expressions". The unclear part is what the spec means when it says "Expressions composed of both of the above", referring to constant expressions and loop indices. Composed how? Operators or calls are also implicitly needed to compose an expression, but which operations are allowed is not mentioned. It could be interpreted so that as soon as you operate on a loop index, it becomes a constant-index-expression and the intermediate value can't be operated on further to form another constant-index-expression. Alternatively, it could be interpreted so that multiple intermediate operations involving indices and constant expressions are allowed.
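For concreteness, the constructs under discussion can be written out in an illustrative GLSL ES 1.00 fragment (assumed example code composed for this thread; which of these lines an implementation must accept is exactly the open question):

```glsl
uniform float u_data[8];

// Issue 10.11: built-ins are allowed in constant expressions, so this
// declaration should compile (Chrome and IE rejected it at the time):
const float n = sin(1.0);

float sumData() {
    float sum = 0.0;
    for (int i = 0; i < 4; ++i) {
        sum += u_data[i];         // bare loop index: clearly a constant-index-expression
        sum += u_data[i + 1];     // index plus constant: allowed under both readings
        sum += u_data[2 * i + 1]; // operating further on an intermediate value:
                                  // allowed only under the liberal reading
    }
    return sum;
}
```

The `[i + 1]` form is the most complex one Olli found in sampled ShaderToy content; the strict reading would reject the `[2 * i + 1]` form.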
I agree that this is a slight distraction from WebGL 2 work, but it's always a problem when IE and ANGLE differ in their interpretations of the spec. Of course regressing compatibility with content is a concern, but between browsers the compatibility is already broken. Maybe the IE team could comment on how frequently they see shaders failing because of this? The most complex indexing expression I found by sampling 20 featured/popular shaders on ShaderToy was [i + 1], so I don't think there would be that much breakage even if the rules are tightened to only allow integer addition and multiplication, for example. It will also help with fixing other problems related to constant expressions if this is clearly specified, since some of the code is shared between processing constant-index-expressions and constant expressions. -Olli ________________________________ From: Mark Callow > Sent: Sunday, December 7, 2014 3:55 PM To: Daniel Koch Cc: Olli Etuaho; public webgl Subject: Re: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 * PGP Signed by an unknown key On Dec 6, 2014, at 12:13 AM, Daniel Koch > wrote: I believe something like, e.g., sin(0.0) is actually supposed to be a constant expression as it can be evaluated at compile-time. See issue 10.11 Constant Expressions in the ESSL 1.00 spec. Should built-in functions be allowed in constant expressions? e.g. const float a = sin(1.0); RESOLUTION: Yes, allow built-in functions to be included in constant expressions. Redefinition of built-in functions is now prohibited. User-defined functions are not allowed in constant expressions. So in your examples below, "int(sin(0.0))" and "int(1.0+2.0)" should really be getting the same handling. Implementations are only required to support constant-index-expressions, not constant expressions, in for loops (in fragment-shader for loops at least). I thought the spec and grammar were quite clear on the definition of a constant-index-expression, but IANACE.
Regards -Mark * Unknown Key * 0x3F001063 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben...@ Mon Dec 8 17:08:28 2014 From: ben...@ (Ben Constable) Date: Tue, 9 Dec 2014 01:08:28 +0000 Subject: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 In-Reply-To: <798d0073e77e4f15919d1a34ed4779d2@UKMAIL101.nvidia.com> References: ,<9EA301E5-9CE8-4EEC-94EB-18EC0F8FDDE4@callow.im> <635e45c2469d406b8ca3a169020468e2@UKMAIL101.nvidia.com>, <798d0073e77e4f15919d1a34ed4779d2@UKMAIL101.nvidia.com> Message-ID: Well, I wrote most of what we have in our implementation so my interpretation explains what we have, as well as what we will move towards :) I cannot make assertions about when we can fix things but in general, we support having exhaustive tests that support all elements of the specification and we will make every effort to pass said tests. At this point I would think that the specification is very liberal and therefore that there could be content out there relying on the behavior that you describing clamping down on. One case that might be interesting is the output of an intrinsic function when given a constant-index-expression. Is that also considered a constant-index-expression? We can work on the language of the spec but I think that tests document the intent of things far better. From: Olli Etuaho [mailto:oetuaho...@] Sent: Monday, December 8, 2014 11:54 AM To: Ben Constable; Mark Callow; Daniel Koch Cc: public webgl Subject: RE: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 That issue with assigning constant expressions containing calls to built-ins to constant floats is one thing that I was planning to add a test for. Ben, is your interpretation of constant-index-expressions also what IE is moving towards in the WebGL implementation? I could try to come up with more data to see what the impact of disallowing complex index expressions would be for existing shaders. 
________________________________ From: Ben Constable > Sent: Monday, December 8, 2014 8:42 PM To: Olli Etuaho; Mark Callow; Daniel Koch Cc: public webgl Subject: RE: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 I don't have a large amount of data on how frequently shaders fail because of this. From a cursory glance playing with shaders, Chrome and IE both fail to compile this: const float n = sin(1.0); And according to 10.11, they should. So we at least have a bug here. But we don't have any test failures that I know of here, so I think that we have a test gap as well. Section 5.10 is pretty clear about builtin functions (including which ones are not usable in constant expressions). When it comes to constant-index-expressions, my interpretation is that it is largely the same as constant-expression, but with loop index variables for well defined loops able to be used as if they were a const variable. From: owners-public_webgl...@ [mailto:owners-public_webgl...@] On Behalf Of Olli Etuaho Sent: Monday, December 8, 2014 7:07 AM To: Mark Callow; Daniel Koch Cc: public webgl Subject: RE: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 Mark: the first thing the spec says about constant-index expressions is "Definition: constant-index expressions are a superset of constant-expressions". The unclear part is what the spec means when it says "Expressions composed of both of the above", referring to constant expressions and loop indices. Composed how? Operators or calls are also implicitly needed to compose an expression, but which operations are allowed is not mentioned. It could be interpreted so that as soon as you operate on a loop index, it becomes a constant-index-expression and the intermediate value can't be operated on further to form another constant-index-expression. Alternatively, it could be interpreted so that multiple intermediate operations involving indices and constant expressions are allowed.
I agree that this is a slight distraction from WebGL 2 work, but it's always a problem when IE and ANGLE differ in their interpretations of the spec. Of course regressing compatibility with content is a concern, but between browsers the compatibility is already broken. Maybe the IE team could comment on how frequently they see shaders failing because of this? The most complex indexing expression I found by sampling 20 featured/popular shaders on ShaderToy was [i + 1], so I don't think there would be that much breakage even if the rules are tightened to only allow integer addition and multiplication, for example. It will also help with fixing other problems related to constant expressions if this is clearly specified, since some of the code is shared between processing constant-index-expressions and constant expressions. -Olli ________________________________ From: Mark Callow > Sent: Sunday, December 7, 2014 3:55 PM To: Daniel Koch Cc: Olli Etuaho; public webgl Subject: Re: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 * PGP Signed by an unknown key On Dec 6, 2014, at 12:13 AM, Daniel Koch > wrote: I believe something like e.g. sin(0.0), is actually supposed to be a constant expression as it can be evaluated at compile-time. See issue 10.11 Constant Expressions in the ESSL 1.00 spec. Should built-in functions be allowed in constant expressions? e.g. const float a = sin(1.0); RESOLUTION: Yes, allow built-in functions to be included in constant expressions. Redefinition of built-in functions is now prohibited. User-defined functions are not allowed in constant expressions. So in your examples below, "int(sin(0.0))" and "int(1.0+2.0)" should really be getting the same handling. Implementations are only required to support constant-index expressions, not constant expressions, in for loops, in fragment-shader for loops at least. I thought the spec and grammar was quite clear on the definition of a constant-index-expression, but IANACE. 
Regards -Mark * Unknown Key * 0x3F001063 -------------- next part -------------- An HTML attachment was scrubbed... URL: From org...@ Tue Dec 9 15:25:52 2014 From: org...@ (Owen Glofcheski) Date: Tue, 9 Dec 2014 15:25:52 -0800 Subject: [Public WebGL] WEBGL_subscribe_uniform extension In-Reply-To: References: Message-ID: > > I understand that motivation, and it's a good thing to do. My feedback > aims to: > > - Prevent extension proliferation (let's face it, nobody likes > extensions terribly much) > - Inspire better semantics which are easier to use. > - Make it more flexible and future proof > - Cover more usecases > > Ah yes, I agree that these points should be addressed. I have a working proof-of-concept implementation of the proposal (this was brought up in a conference call and a go-ahead was given to create a proof-of-concept implementation despite it not being in the draft stage), so I think in the short term it's best to stick with the current implementation until such a time that its usefulness is proven. If the functionality garners sufficient interest, we should revisit your ideas to potentially simplify and revise the API. It's also worth noting that other browser vendors don't benefit as much from this extension as their pipelines aren't deferred. It would be nice to know which implementation they felt would impact them the least. With an extension they can choose not to implement it and users can query for the extension. I'm not sure if there is a similar mechanism JS side (e.g. can users query if lateValueProxy is supported?). On Mon, Dec 8, 2014 at 11:01 AM, Florian Bösch wrote: > > > On Mon, Dec 8, 2014 at 6:57 PM, Owen Glofcheski > wrote: >> >> The typedness of the uniforms is inherent to the target they are using >> to populate the uniform (e.g. MOUSE_POSITION is a vec2 or ivec2). >> The user has to match a valid type, and the browser enforces correct >> usage.
>> > In principle that's correct, but it's gets kinda more murky if you've got > more than 2 or 4 axes to manage. Each uniform is 4-word aligned, so if you > pass a vec2, you've consumed one vec4 slot. For this reason it's often the > case that you'd want to pack things, because uniform slots are limited. Not > sure if you should leave it up to the user how to pack it. > > >> I was under the impression extension specifications could be versioned >> based on revision. Is there a reason we don't allow this? >> The supported constants would be defined by which reversion is specified >> in the active WebGL spec. >> > Afaik you can't revision extensions, for several reasons. > > - It's unprecedented, ES and GL define separate extensions if an > extension needs an update, as in ARB_internal_format_query, > ARB_internal_format_query2, ARB_transform_feedback, > ARB_transform_feedback2, ARB_transform_feedback3, EXT_sparse_texture, > EXT_sparse_texture2 etc. > - In WebGL (or ES/GL) there's no mechanism to request a specific > extension by version. It's gl.getExtension('name') > - In WebGL (or ES/GL) there's no mechanism to discover the version of > an extension you've gotten. As in extension.version isn't a standard. > - The goal of an extension is to get Khronos ratified. It's my > understanding that this sets it in stone. It can never be removed, nor can > it be updated. > > > >> Technically you could still perform the operation using multiple buffers >> in time, however, this isn't a very compelling use of the extension. >> You're right tho, the ability to retrieve values is one of the sore >> points of the extension. >> > If you're enforcing unmodified input pass trough, you're basically > limiting it to a very narrow usecase that's mostly just VR-HMDs, which are > missing from the extension. > > >> Would the specification not need to be updated to specify >> uniform[1-4][iv]v behavior when giving a deferred value? >> > Maybe. 
But even if it's required, you can write an extension that's > basically just modifying the uniform methods to accept a deferred value. > You don't need to specify where from the browser that value comes from, or > what you can do with it otherwise, or what it means. Which enables you to > have a separate deferred value specification that can be updated, without > having to update the extension. > > >> One main impetus for driving this forward as a WebGL extension is to >> demonstrate value to driver vendors so they might potentially implement >> similar functionality in the future (decreasing perceived latency further). >> > I understand that motivation, and it's a good thing to do. My feedback > aims to: > > - Prevent extension proliferation (let's face it, nobody likes > extensions terribly much) > - Inspire better semantics which are easier to use. > - Make it more flexible and future proof > - Cover more usecases > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Raf...@ Tue Dec 9 16:02:59 2014 From: Raf...@ (Rafael Cintron) Date: Wed, 10 Dec 2014 00:02:59 +0000 Subject: [Public WebGL] WEBGL_subscribe_uniform extension In-Reply-To: References: Message-ID: In general, minimizing delay between the generation of WebGL commands and when the product of the commands is shown to the end user is highly desirable. Many variables contribute to the final rendering besides mouse position. Owen, do you have example pages that exhibit the lag that you are trying to minimize with the extension? I am interested in taking a look at these so we can reduce the overall lag as much as possible. --Rafael From: owners-public_webgl...@ [mailto:owners-public_webgl...@] On Behalf Of Owen Glofcheski Sent: Tuesday, December 9, 2014 3:26 PM To: Florian Bösch Cc: public webgl Subject: Re: [Public WebGL] WEBGL_subscribe_uniform extension I understand that motivation, and it's a good thing to do. My feedback aims to:
- Prevent extension proliferation (let's face it, nobody likes extensions terribly much) - Inspire better semantics which are easier to use. - Make it more flexible and future-proof - Cover more use cases Ah yes, I agree that these points should be addressed. I have a working proof-of-concept implementation of the proposal (this was brought up in a conference call and a go-ahead was given to create a proof-of-concept implementation despite it not being in the draft stage), so I think in the short term it's best to stick with the current implementation until such a time that its usefulness is proven. If the functionality garners sufficient interest, we should revisit your ideas to potentially simplify and revise the API. It's also worth noting that other browser vendors don't benefit as much from this extension as their pipelines aren't deferred. It would be nice to know which implementation they felt would impact them the least. With an extension they can choose not to implement it and users can query for the extension. I'm not sure if there is a similar mechanism JS side (e.g. can users query if lateValueProxy is supported?). On Mon, Dec 8, 2014 at 11:01 AM, Florian Bösch > wrote: On Mon, Dec 8, 2014 at 6:57 PM, Owen Glofcheski > wrote: The typedness of the uniforms is inherent to the target they are using to populate the uniform (e.g. MOUSE_POSITION is a vec2 or ivec2). The user has to match a valid type, and the browser enforces correct usage. In principle that's correct, but it gets murkier if you've got more than 2 or 4 axes to manage. Each uniform is 4-word aligned, so if you pass a vec2, you've consumed one vec4 slot. For this reason it's often the case that you'd want to pack things, because uniform slots are limited. Not sure if you should leave it up to the user how to pack it. I was under the impression extension specifications could be versioned based on revision. Is there a reason we don't allow this?
The supported constants would be defined by which revision is specified in the active WebGL spec. Afaik you can't revision extensions, for several reasons. * It's unprecedented; ES and GL define separate extensions if an extension needs an update, as in ARB_internal_format_query, ARB_internal_format_query2, ARB_transform_feedback, ARB_transform_feedback2, ARB_transform_feedback3, EXT_sparse_texture, EXT_sparse_texture2, etc. * In WebGL (or ES/GL) there's no mechanism to request a specific extension by version. It's gl.getExtension('name'). * In WebGL (or ES/GL) there's no mechanism to discover the version of an extension you've gotten. As in, extension.version isn't a standard. * The goal of an extension is to get it Khronos-ratified. It's my understanding that this sets it in stone. It can never be removed, nor can it be updated. Technically you could still perform the operation using multiple buffers in time, however, this isn't a very compelling use of the extension. You're right tho, the ability to retrieve values is one of the sore points of the extension. If you're enforcing unmodified input pass-through, you're basically limiting it to a very narrow use case that's mostly just VR HMDs, which are missing from the extension. Would the specification not need to be updated to specify uniform[1-4][iv]v behavior when giving a deferred value? Maybe. But even if it's required, you can write an extension that's basically just modifying the uniform methods to accept a deferred value. You don't need to specify where that value comes from in the browser, or what you can do with it otherwise, or what it means, which enables you to have a separate deferred-value specification that can be updated without having to update the extension. One main impetus for driving this forward as a WebGL extension is to demonstrate value to driver vendors so they might potentially implement similar functionality in the future (decreasing perceived latency further).
I understand that motivation, and it's a good thing to do. My feedback aims to: * Prevent extension proliferation (let's face it, nobody likes extensions terribly much) * Inspire better semantics which are easier to use. * Make it more flexible and future-proof * Cover more use cases -------------- next part -------------- An HTML attachment was scrubbed... URL: From org...@ Tue Dec 9 16:46:20 2014 From: org...@ (Owen Glofcheski) Date: Tue, 9 Dec 2014 16:46:20 -0800 Subject: [Public WebGL] WEBGL_subscribe_uniform extension In-Reply-To: References: Message-ID: Hi Rafael, We've been using Artillery as our primary use case. Since they're creating an RTS in the browser, they require responsive mouse input (e.g. for box selection). They're currently trying out the extension in Canary; unfortunately I'm not sure if their game is cross-browser compatible at the moment, and it's currently in closed beta =/ Any webpage that has a significant response to a user's mouse movement is a good candidate (such as dragging in a map application). As an aside back to an earlier point, it's been brought to my attention that JavaScript callbacks from the GPU service probably won't fly. On Tue, Dec 9, 2014 at 4:02 PM, Rafael Cintron wrote: > In general, minimizing delay between the generation of WebGL commands > and when the product of the commands is shown to the end user is highly > desirable. Many variables contribute to the final rendering besides mouse > position. > > > > Owen, do you have example pages that exhibit the lag that you are trying > to minimize with the extension? I am interested in taking a look at these > so we can reduce the overall lag as much as possible.
> > > --Rafael > > > > *From:* owners-public_webgl...@ [mailto: > owners-public_webgl...@] *On Behalf Of *Owen Glofcheski > *Sent:* Tuesday, December 9, 2014 3:26 PM > *To:* Florian Bösch > *Cc:* public webgl > *Subject:* Re: [Public WebGL] WEBGL_subscribe_uniform extension > > > > I understand that motivation, and it's a good thing to do. My feedback > aims to: > > - Prevent extension proliferation (let's face it, nobody likes > extensions terribly much) > > - Inspire better semantics which are easier to use. > > - Make it more flexible and future proof > > - Cover more usecases > > Ah yes, I agree that these points should be addressed. > > > > I have a working proof-of-concept implementation of the proposal (this was > brought up in a conference call and a go-ahead was given to create a proof > of concept implementation despite it not being in the draft stage), so I > think in the short term it's best to stick with the current implementation > until such a time that its usefulness is proven. > > > > If the functionality garners sufficient interest, we should revisit your > ideas to potentially simplify and revise the API. > > > > It's also worth noting that other browser vendors don't benefit as much > from this extension as their pipelines aren't deferred. > > It would be nice to know which implementation they felt would impact them > the least. > > With an extension they can choose to not implement it and users can query > for the extension. I'm not sure if there is a similar mechanism JS side > (e.g. can users query if lateValueProxy is supported?). > > > > On Mon, Dec 8, 2014 at 11:01 AM, Florian Bösch wrote: > > > > > > On Mon, Dec 8, 2014 at 6:57 PM, Owen Glofcheski > wrote: > > The typedness of the uniforms are inherent to the target they are using > to populate the uniform, (e.g. MOUSE_POSITION is a vec2 or ivec2). > > The user has to match a valid type, and the browser enforces correct usage.
> > In principle that's correct, but it's gets kinda more murky if you've > got more than 2 or 4 axes to manage. Each uniform is 4-word aligned, so if > you pass a vec2, you've consumed one vec4 slot. For this reason it's often > the case that you'd want to pack things, because uniform slots are limited. > Not sure if you should leave it up to the user how to pack it. > > > > I was under the impression extension specifications could be versioned > based on revision. Is there a reason we don't allow this? > > The supported constants would be defined by which reversion is specified > in the active WebGL spec. > > Afaik you can't revision extensions, for several reasons. > > - It's unprecedented, ES and GL define separate extensions if an > extension needs an update, as in ARB_internal_format_query, > ARB_internal_format_query2, ARB_transform_feedback, > ARB_transform_feedback2, ARB_transform_feedback3, EXT_sparse_texture, > EXT_sparse_texture2 etc. > - In WebGL (or ES/GL) there's no mechanism to request a specific > extension by version. It's gl.getExtension('name') > - In WebGL (or ES/GL) there's no mechanism to discover the version of > an extension you've gotten. As in extension.version isn't a standard. > - The goal of an extension is to get Khronos ratified. It's my > understanding that this sets it in stone. It can never be removed, nor can > it be updated. > > > > Technically you could still perform the operation using multiple buffers > in time, however, this isn't a very compelling use of the extension. > > You're right tho, the ability to retrieve values is one of the sore points > of the extension. > > If you're enforcing unmodified input pass trough, you're basically > limiting it to a very narrow usecase that's mostly just VR-HMDs, which are > missing from the extension. > > > > Would the specification not need to be updated to specify > uniform[1-4][iv]v behavior when giving a deferred value? > > Maybe. 
But even if it's required, you can write an extension that's > basically just modifying the uniform methods to accept a deferred value. > You don't need to specify where from the browser that value comes from, or > what you can do with it otherwise, or what it means. Which enables you to > have a separate deferred value specification that can be updated, without > having to update the extension. > > > > One main impetus for driving this forward as a WebGL extension is to > demonstrate value to driver vendors so they might potentially implement > similar functionality in the future (decreasing perceived latency further). > > I understand that motivation, and it's a good thing to do. My feedback > aims to: > > - Prevent extension proliferation (let's face it, nobody likes > extensions terribly much) > - Inspire better semantics which are easier to use. > - Make it more flexible and future proof > - Cover more usecases > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Wed Dec 10 22:37:00 2014 From: kbr...@ (Kenneth Russell) Date: Wed, 10 Dec 2014 22:37:00 -0800 Subject: [Public WebGL] DeleteShader and DetachShader spec ambiguity and implementation divergence In-Reply-To: References: Message-ID: On Wed, Dec 3, 2014 at 1:29 PM, Gregg Tavares wrote: > > > On Wed, Dec 3, 2014 at 11:35 AM, Michal Vodicka > wrote: > >> OpenGL section 2.10.3 is clear for me in this case. >> >> After going step by step again through the OpenGL spec: >> 1) create program >> 2) attach shaders and stuff >> 3) link program >> 4) use program *and just for you to know, with the sentence "While a >> valid program object is in use ... " it is meant that you are free to do steps 2 >> and 3 over and over now, because the executable has no connection to shaders >> once it is generated*. Generate is exactly the word used in spec 2.10 >> Vertex shaders, which implies it has no connection to the skeleton program >> unless the spec says the opposite later (which it doesn't).
>> >> I don't think "program object is in use" is ambiguous, because it is said >> in the useProgram description part. About the sentence: >> "The only function that affects the executable code of the program >> object is LinkProgram" >> I'm not the expert, but I wouldn't say that. Once executable code is >> generated, it can't be affected by any OpenGL/WebGL function. LinkProgram >> and DeleteProgram might cause losing the reference to executable code. >> UseProgram will install executable code as part of the current rendering >> state. Please stop me if I'm wrong. >> > > The question isn't what affects the executable code. Executable code is > immutable. The question is what affects the program object's *reference > to* executable code. The problem is "While a valid program object is in > use" you can do X, Y, & Z. But what about when a program is not in use? "In > use" is clearly defined as the program put in use through UseProgram. Once a > program is no longer "in use" those rules don't apply. > > I agree those rules should apply regardless of whether or not a program is > "in use" but the fact that the spec calls it out that only programs that > are "in use" get those rules suggests that the rules don't apply when the > program is not "in use". > > Sorry for the slow reply. After reading through section 2.10.3 "Program Objects" of the OpenGL ES 2.0.25 spec again, the intent does seem to be that LinkProgram is the only API call which affects the program object's link status and its internal pointer to executable code. The intent of the three paragraphs starting with "While a valid program object is in use" and ending with "After such a program is removed from use, it can not be made part of the current rendering state until it is successfully re-linked" seems to be to special-case the situation where the active program object is re-linked successfully.
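A minimal executable model of that reading (illustrative JavaScript with hypothetical class names, not any browser's implementation) shows the property at stake: detaching shaders after a successful link must leave the linked executable intact, because only link() touches it.

```javascript
// Minimal model of GL program-object semantics under the reading above:
// the executable snapshot is produced by link() and is untouched by
// attach/detach afterwards.
class Shader {
  constructor(source) { this.source = source; }
}

class Program {
  constructor() { this.shaders = []; this.exe = null; }
  attach(s) { this.shaders.push(s); }
  detach(s) { this.shaders = this.shaders.filter((x) => x !== s); }
  link() {
    // Snapshot the currently attached sources. Good or bad, this is the
    // only operation that replaces `exe`.
    this.exe = this.shaders.map((s) => s.source).join('\n');
  }
}

const vs = new Shader('vertex');
const fs = new Shader('fragment');
const prog = new Program();
prog.attach(vs);
prog.attach(fs);
prog.link();
prog.detach(vs);  // detaching after a successful link...
prog.detach(fs);
// ...must not invalidate the linked executable:
console.log(prog.exe); // still "vertex\nfragment"
```

The divergent implementation discussed in the thread behaves as if detach nulled `exe` when the program is not in use, which is exactly what the proposed spec clarification rules out.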
I think the OpenGL ES and OpenGL specs' definitions of LinkProgram and UseProgram should be clarified per the suggestions earlier in this thread. In the interim, please review this clarification to the WebGL spec: https://github.com/KhronosGroup/WebGL/pull/804 . Let's add more tests to conformance/programs/program-test.html covering this area, so that the difference in behavior Safari exhibits in http://stackoverflow.com/questions/27237696/webgl-detach-and-delete-shaders-after-linking will be tested. If anyone is interested in contributing them please submit a pull request. -Ken > > >> The StackOverflow OP might just be confused because it didn't work in >> Firefox. So far as I know, the PlayCanvas engine always deletes shaders after >> linking the program and I didn't find any report about FF malfunction. I guess >> the reporter might have made a mistake while testing it in FF. So I suggest we get some >> info from Safari developers and if they point out something unclear in the >> spec, then it can be improved. >> >> I don't feel confused this time, but the others might. Gregg, I vote for >> improving the spec, I just wouldn't use exactly the sentence you did unless >> my previous statement is wrong :) >> >> Regards Michal >> >> On Wed, Dec 3, 2014 at 3:14 AM, Rafael Cintron < >> Rafael.Cintron...@> wrote: >> >>> Before the paragraph in question, there were several paragraphs that >>> talked about attaching and detaching shaders and linking programs. >>> Nowhere in those paragraphs was there mention of those operations only >>> mattering or behaving differently when the program is in use. >>> >>> >>> >>> Here is the beginning of the previous paragraph: >>> >>> If a valid executable is created, it can be made part of the current >>> rendering >>> >>> state with the command >>> >>> void UseProgram( uint program ); >>> >>> This command will install the executable code as part of current >>> rendering state if >>> >>> the program object program contains valid *executable code, i.e.
has >>> been linked* >>> >>> *successfully* >>> >>> >>> >>> The part in bold implies to me that linking is what creates the >>> executable code. Subsequent paragraphs also talk about linking being the >>> operation that causes executable code to be created for a program, or >>> nullified on an unsuccessful link. Attach/Detach shaders only serve as >>> inputs to the link process. Whether a program is in use only factors into >>> what executable code is set in the render state. >>> >>> >>> >>> >>> >>> Now all that being said, we know of at least one implementation that >>> differs from the rest. In addition, you were confused by the wording of >>> the spec as were people on StackOverflow. That tells me the spec should be >>> clarified in this regard. Your statement: "The only function that affects >>> the executable code of the program object is LinkProgram" is a great >>> suggestion. I would also clarify the confusing sentence to read: "*Regardless >>> of whether a program is in use or not*, applications are free to modify >>> attached shader objects, compile attached shader objects, attach additional >>> shader objects, and detach shader objects". This way, the subsequent >>> sentence will also apply to used or unused programs: "These operations do >>> not affect the link status or executable code of the program object." >>> >>> >>> >>> --Rafael >>> >>> >>> >>> *From:* Gregg Tavares [mailto:khronos...@] >>> *Sent:* Tuesday, December 2, 2014 3:35 PM >>> *To:* Rafael Cintron >>> *Cc:* Gregg Tavares; public_webgl...@ >>> *Subject:* Re: [Public WebGL] DeleteShader and DetachShader spec >>> ambiguity and implementation divergence >>> >>> >>> >>> That's still not clear. While I agree that's the way it "should be" the >>> spec is not clear enough IMO >>> >>> >>> >>> On Tue, Dec 2, 2014 at 12:44 PM, Rafael Cintron < >>> Rafael.Cintron...@> wrote: >>> >>> I agree the "While a valid program is in use . . ." phrase in the spec >>> is confusing.
>>> >>> >>> The conformance test that comes closest to testing the behavior in the >>> StackOverflow question is >>> https://www.khronos.org/registry/webgl/sdk/tests/conformance/programs/program-test.html. >>> Unfortunately, it first calls useProgram before performing the questionable >>> operations so it doesn't directly test this scenario. We should add >>> conformance tests to clarify. >>> >>> >>> >>> Reading the entire section of the OpenGL spec as a whole, it's clear >>> that LinkProgram creates an "executable" out of the current state of the >>> program. Calling UseProgram merely installs the executable as the current >>> rendering state of the context if the program has been successfully >>> linked. The only thing that can change the installed executable from that >>> point forward is calling UseProgram with a different program or performing >>> a successful relink of the current program. If the in-use program is >>> unsuccessfully re-linked, you can still render with the installed "good" >>> executable. But the moment you replace the current program with a >>> different program, then you've essentially lost that good executable >>> forever. >>> >>> >>> >>> The issue isn't what UseProgram does. That part is clear. The issue is >>> what do AttachShader, DetachShader, etc. do when a program is not "in use".
>>> >>> >>> Conceptually a program is something like this >>> >>> >>> >>> class Program { >>> >>> Shader* shaders[]; // all attached shaders >>> >>> Exe* exe; // the last good executable >>> >>> } >>> >>> >>> >>> And useProgram conceptually works something like this >>> >>> >>> >>> class GLState { >>> >>> Program* currentProgram; >>> >>> Exe* currentExe; >>> >>> } >>> >>> >>> >>> glUseProgram(id) { >>> >>> Program* prg = getProgramById(id); >>> >>> state.currentProgram = prg; >>> >>> state.currentExe = prg->exe; >>> >>> } >>> >>> >>> >>> With that mental model, the only thing that invalidates or changes the >>> exe reference inside of Program is LinkProgram which is apparently supposed >>> to be implemented like this >>> >>> >>> >>> glLinkProgram(id) { >>> >>> Program* prg = getProgramById(id); >>> >>> prg->exe = Link(prg->shaders); // Good or bad this will set exe >>> >>> >>> >>> // If the link was good and this program is "in use" >>> >>> if (linkWasGood && state.currentProgram == prg) { >>> >>> state.currentExe = prg->exe; // use the new exe. >>> >>> } >>> >>> } >>> >>> >>> >>> It's also clear that calling DeleteShader is valid because the shader won't >>> actually be deleted until it is detached from the Program >>> >>> >>> >>> What's not clear is the behavior of DetachShader, AttachShader, etc. >>> According to the spec this would be a valid implementation >>> >>> >>> >>> glDetachShader(id, sh) { >>> >>> Program* prg = getProgramById(id); >>> >>> prg->RemoveShaderFromArrayOfShaders(sh); >>> >>> >>> >>> * // If this program is NOT in use we can muck with the executable >>> of this program* >>> >>> if (state.currentProgram != prg) { >>> >>> prg->exe = NULL; // <<==---------------------- >>> >>> } >>> >>> } >>> >>> >>> >>> The spec only claims the executable code of the program is not affected *IF >>> THE PROGRAM IS IN USE*. And "in use" is clearly defined as the program >>> most recently passed to UseProgram.
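This mental model can also be written out as a small runnable sketch. The following is plain JavaScript with no real GL in it; the names (`Program`, `Exe`, `state`) simply mirror the pseudocode above and are not WebGL API:

```javascript
// Minimal model of the *intended* semantics under discussion:
// LinkProgram is the only call that touches a program's executable.
class Exe {}

class Program {
  constructor() {
    this.shaders = new Set(); // all attached shaders
    this.exe = null;          // the last good executable
  }
}

const state = { currentProgram: null, currentExe: null };

function linkProgram(prg) {
  prg.exe = new Exe(); // assume the link succeeds for this sketch
  if (state.currentProgram === prg) {
    state.currentExe = prg.exe; // an in-use program picks up the new exe
  }
}

function useProgram(prg) {
  state.currentProgram = prg;
  state.currentExe = prg.exe;
}

function detachShader(prg, sh) {
  prg.shaders.delete(sh); // does NOT touch prg.exe, in use or not
}

// The StackOverflow sequence: link, detach everything, then use.
const p = new Program();
p.shaders.add('vs').add('fs');
linkProgram(p);
detachShader(p, 'vs');
detachShader(p, 'fs');
useProgram(p);
console.log(state.currentExe !== null); // true: the exe survives detaching
```

Under the competing reading of the spec, `detachShader` would additionally be allowed to null out `prg.exe` when `state.currentProgram !== prg`, which is exactly the behavior difference the thread is about.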
Programs not "in use" are not covered >>> by the paragraph mentioning Detaching and Attaching shaders. >>> >>> >>> >>> Rather, it seems like the spec should say something that effectively >>> means: >>> >>> >>> >>> The only function that affects the executable code of the program >>> object is LinkProgram >>> >>> >>> >>> Note the OpenGL Wiki claims this behavior is expected >>> . >>> >>> >>> >>> >>> >>> >>> >>> The attachShader, detachShader, and linkProgram APIs all take program >>> and shader objects as parameters. That means you can perform these >>> operations even though the program you pass to the functions is not the >>> currently "in use" program set via useProgram. In fact, the program you >>> pass to these functions can be completely different from the one currently >>> "in use" via useProgram. So I think the sentence "While a valid program is >>> in use ..." is meant to clarify that you can change the shaders of the program >>> independent of whether it is in use or not. Otherwise, attachShader, >>> detachShader and linkProgram would not have taken programs as their first >>> argument in the first place. >>> >>> >>> >>> I tried all three of the examples in the StackOverflow question in IE, >>> Chrome and Firefox on Windows. All browsers agree "prog1" can be rendered >>> with at the end of each example. Since the last link of the program is a >>> successful link, the executable created as a result of the link is the one >>> that is used for rendering. Unless I am missing something in my >>> understanding, I think this is the correct behavior according to the >>> spec.
>>> >>> >>> --Rafael >>> >>> >>> >>> >>> >>> *From:* owners-public_webgl...@ [mailto: >>> owners-public_webgl...@] *On Behalf Of *Gregg Tavares >>> *Sent:* Tuesday, December 2, 2014 12:41 AM >>> *To:* public_webgl...@ >>> *Subject:* [Public WebGL] DeleteShader and DetachShader spec ambiguity >>> and implementation divergence >>> >>> >>> >>> So this stackoverflow question came up >>> >>> >>> >>> >>> http://stackoverflow.com/questions/27237696/webgl-detach-and-delete-shaders-after-linking >>> >>> >>> >>> The problem is 2-fold >>> >>> >>> >>> #1) The browsers are not consistent in their behavior >>> >>> >>> >>> #2) The OpenGL ES 2.0 spec doesn't seem particularly clear on what's >>> supposed to happen. >>> >>> >>> >>> Basically the poster is claiming you should be able to do this >>> >>> >>> >>> p = gl.createProgram(); >>> >>> gl.attachShader(p, someVertexShader); >>> >>> gl.attachShader(p, someFragmentShader); >>> >>> gl.linkProgram(p); >>> >>> gl.detachShader(p, someVertexShader); >>> >>> gl.detachShader(p, someFragmentShader); >>> >>> gl.deleteShader(someVertexShader); >>> >>> gl.deleteShader(someFragmentShader); >>> >>> gl.useProgram(p); >>> >>> >>> >>> And now 'p' should still point to a valid program that you can call >>> 'gl.useProgram' on or >>> >>> `gl.getUniformLocation` etc. This works in Chrome. It fails in Safari. >>> According to the OP >>> >>> it also fails in Firefox but it seemed to work for me. >>> >>> >>> >>> But, reading the spec it's not clear what's supposed to happen. AFAICT >>> the spec says
It could be interpreted as >>> the "currentProgram" as in the one currently installed with gl.useProgram >>> >>> >>> >>> >>> >>> So >>> >>> >>> >>> (a) WebGL needs to make it clear what the correct behavior is and >>> >>> >>> >>> (b) a conformance test needs to be written to check that behavior which >>> includes calling useProgram with the program in question, getting locations >>> from it and trying to render with it and checking for success or failure >>> whichever is decided is the correct behavior. >>> >>> >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Thu Dec 11 03:39:16 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Thu, 11 Dec 2014 12:39:16 +0100 Subject: [Public WebGL] HDR image format Message-ID: A recurring problem in rendering, particularly of the PBR based variety is that you will require HDR imagery for various things (materials, environments etc.) There is a variety of HDR formats, all of which have one thing in common. They are all quite large. Large files are a problem for WebGL because everything needs to be downloaded. An alternative to large files is JPEG, but JPEG cannot store HDR images (attempting to do so will lead to severe artifacts). Recently a new format has been proposed http://bellard.org/bpg/ . This format is fine for replacing JPEG on the web (smaller size, better precision in the normal range). However, this format is equally not able to express HDR imagery like any other lossy format. I don't have a working knowledge of image compression, but it seems to me that it should be possible to define a lossy format that's suitable for HDR imagery. 
I would like to propose the following things: - Browser vendors start talking about a suitable alternative lossy compression format for HDR imagery - The resulting format should have better compression than JPEG and keep pace with BPG - The format needs to be well behaved for use-cases that use repeating texture modes (another issue of JPEG compression) - The format should be standardized by some standards body and be open to use for any browser vendor - It should be implemented efficiently, preferably with hardware acceleration of some variety or other. It's important to solve this problem because the lack of a usable, fast and versatile lossy HDR imagery format complicates and holds back important use-cases for WebGL. For instance Sketchfab is adding PBR to their viewer ( https://labs.sketchfab.com/siggraph2014/) and the size of the texture downloads is quite substantial using non-lossy HDR imagery schemes, leading to undesirable tradeoffs. -------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Fri Dec 12 02:25:40 2014 From: khr...@ (Mark Callow) Date: Fri, 12 Dec 2014 19:25:40 +0900 Subject: [Public WebGL] HDR image format In-Reply-To: References: Message-ID: <09701B58-A210-4FB0-BFC2-B5543874CE32@callow.im> > On Dec 11, 2014, at 8:39 PM, Florian Bösch wrote: > > ... > It's important to solve this problem because the lack of a usable, fast and versatile lossy HDR imagery format complicates and holds back important use-cases for WebGL. For instance Sketchfab is adding PBR to their viewer (https://labs.sketchfab.com/siggraph2014/ ) and the size of the texture downloads is quite substantial using non-lossy HDR imagery schemes, leading to undesirable tradeoffs. > What about ASTC? You can use KTX as a file format for the ASTC data. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail URL: From kri...@ Fri Dec 12 04:40:07 2014 From: kri...@ (Kristian Sons) Date: Fri, 12 Dec 2014 13:40:07 +0100 Subject: [Public WebGL] HDR image format In-Reply-To: References: Message-ID: <548AE227.5010906@dfki.de> One has to distinguish between Texture Compression and Image Compression. Image Compression formats will always beat TC formats, because they use variable bit rates and do global entropy encoding, RLE etc. ASTC supports HDR but will be too large for many use-cases anyway. A common approach (also for LDR) would be to transfer the data in an image compression format and reencode it into a Texture Compression format. Doing this in JavaScript is kind of cumbersome (the better the compression, the more complex the prediction stuff you need to implement). I think what would really help would be to have an API for that. * HDR Texture Compression Format => Fine with ASTC (wait for support), probably too large to transfer * HDR Image Compression Format => OpenEXR, TIFF, whatever you want to use; you can transfer it with binary XHR. Maybe something more efficient would be nice, but that is IMHO out of scope for WebGL, unless an API for decoding is considered * Reencoding in the client: unsolved. Another approach is to compress already compressed textures (e.g. compress an ASTC texture further for transfer). This has been done for instance for ETC: http://www.jacobstrom.com/publications/StromWennerstenHPG2011.pdf Just my two cents, Kristian Am 11.12.2014 um 12:39 schrieb Florian Bösch: > A recurring problem in rendering, particularly of the PBR based > variety is that you will require HDR imagery for various things > (materials, environments etc.) > > There is a variety of HDR formats, all of which have one thing in > common. They are all quite large. > > Large files are a problem for WebGL because everything needs to be > downloaded.
> > An alternative to large files is JPEG, but JPEG cannot store HDR > images (attempting to do so will lead to severe artifacts). > > Recently a new format has been proposed http://bellard.org/bpg/ . > > This format is fine for replacing JPEG on the web (smaller size, > better precision in the normal range). However, this format is equally > not able to express HDR imagery like any other lossy format. > > I don't have a working knowledge of image compression, but it seems to > me that it should be possible to define a lossy format that's suitable > for HDR imagery. I would like to propose the following things: > > * Browser vendors start talking about a suitable alternative lossy > compression format for HDR imagery > * The resulting format should have better compression than JPEG and > keep pace with bqg > * The format needs to be well behaved for use-cases that use > repeating texture modes (another issue of JPEG compression) > * The format should be standardized by some standards body and be > open to use for any browser vendor > * It should be implemented efficiently, preferably with hardware > acceleration of some variety or other. > > It's important to solve this problem because the lack of a usable, > fast and versatile lossy HDR imagery format complicates and holds back > important use-cases for WebGL. For instance Sketchfab is adding PBR to > their viewer (https://labs.sketchfab.com/siggraph2014/) and the size > of the texture downloads is quite substantial using non-lossy HDR > imagery schemes, leading to undesirable tradeoffs. > -- _______________________________________________________________________________ Kristian Sons Deutsches Forschungszentrum f?r K?nstliche Intelligenz GmbH, DFKI Agenten und Simulierte Realit?t Campus, Geb. D 3 2, Raum 0.77 66123 Saarbr?cken, Germany Phone: +49 681 85775-3833 Phone: +49 681 302-3833 Fax: +49 681 85775?2235 kristian.sons...@ http://www.xml3d.org Gesch?ftsf?hrung: Prof. Dr. Dr. h.c. mult. 
Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Amtsgericht Kaiserslautern, HRB 2313 _______________________________________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Fri Dec 12 04:49:17 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Fri, 12 Dec 2014 13:49:17 +0100 Subject: [Public WebGL] HDR image format In-Reply-To: <548AE227.5010906@dfki.de> References: <548AE227.5010906@dfki.de> Message-ID: On Fri, Dec 12, 2014 at 1:40 PM, Kristian Sons wrote: > > One has to distinguish between Texture Compression and Image Compression. > Correct, and I'm specifically addressing image compression. It's assumed that paths will exist to upload a compressed image to a variety of texture formats (as they exist today for JPEG, PNG, GIF, etc.). The first hurdle to get content to the client is that the bytes make it through the network without making the user wait a long time. As I've said, for non-HDR content there are viable formats (JPEG, BPG, WebP, etc.). It's for HDR content that the lossy compression schemes break down, because their assumptions about gamma, colorspace, human perception, compression ratio and expected artifacts are no longer true for HDR content. That's the problem that needs to be addressed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Fri Dec 12 18:33:43 2014 From: khr...@ (Mark Callow) Date: Sat, 13 Dec 2014 11:33:43 +0900 Subject: [Public WebGL] HDR image format In-Reply-To: <548AE227.5010906@dfki.de> References: <548AE227.5010906@dfki.de> Message-ID: <8C94DCEF-BFFD-40CB-A1BA-F52C0BCF3038@callow.im> > On Dec 12, 2014, at 9:40 PM, Kristian Sons wrote: > > One has to distinguish between Texture Compression and Image Compression. I am aware of that. (I don't think you were responding to my message but...)
Florian mentioned hardware support, which is why I suggested a texture compression format. > because they use variable bit rates and do global entropy encoding, RLE etc. ASTC supports variable bit rates from 8-bits per texel down to 0.89 bits per texel. > ... > > * HDR Texture Compression Format => Fine with ASTC (wait for support), probably to large to transfer ASTC is already available on some GPUs and I expect support will spread quite quickly. Given that it supports variable bit rates, it is likely possible to produce HDR textures of a not unreasonable size for transmission. > > Another approach is to compress already compressed textures (e.g compress an ASTC texture further for transfer). This has been done for instance for ETC: > http://www.jacobstrom.com/publications/StromWennerstenHPG2011.pdf It has also been done for DXTC. An approach like this is probably the best for ultimate compression. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail URL: From khr...@ Sun Dec 14 22:01:59 2014 From: khr...@ (Mark Callow) Date: Mon, 15 Dec 2014 15:01:59 +0900 Subject: [Public WebGL] HDR image format In-Reply-To: <8C94DCEF-BFFD-40CB-A1BA-F52C0BCF3038@callow.im> References: <548AE227.5010906@dfki.de> <8C94DCEF-BFFD-40CB-A1BA-F52C0BCF3038@callow.im> Message-ID: <9D5BA1CB-FD38-4396-A64F-43DBA5F91ED0@callow.im> > On Dec 13, 2014, at 11:33 AM, Mark Callow wrote: > > ASTC supports variable bit rates from 8-bits per texel down to 0.89 bits per texel. I should point out that the lower bit rates are unlikely to be applicable to HDR. Another longer term approach may be diffusion-curve images. A paper at Siggraph this year showed how to randomly access them, which means you can sample them for texturing. 
At present there is no hardware support, so custom shaders are required. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail URL: From kri...@ Mon Dec 15 06:57:38 2014 From: kri...@ (Kristian Sons) Date: Mon, 15 Dec 2014 15:57:38 +0100 Subject: [Public WebGL] HDR image format In-Reply-To: <8C94DCEF-BFFD-40CB-A1BA-F52C0BCF3038@callow.im> References: <548AE227.5010906@dfki.de> <8C94DCEF-BFFD-40CB-A1BA-F52C0BCF3038@callow.im> Message-ID: <548EF6E2.3020500@dfki.de> Hi, > >> because they use variable bit rates and do global entropy encoding, >> RLE etc. > > ASTC supports variable bit rates from 8-bits per texel down to 0.89 > bits per texel. Yes, ASTC supports a wide choice of bit rates at authoring time, which is great progress. However, once selected, the rate stays fixed through the whole image, which is one of the differences from e.g. JPEG. > >> ... >> >> * HDR Texture Compression Format => Fine with ASTC (wait for >> support), probably too large to transfer > > ASTC is already available on some GPUs and I expect support will > spread quite quickly. Given that it supports variable bit rates, it is > likely possible to produce HDR textures of a not unreasonable size for > transmission. I didn't find an extension or extension proposal for ASTC. Sorry, I'm a noob: is there a specific reason for that? > >> >> Another approach is to compress already compressed textures (e.g. >> compress an ASTC texture further for transfer). This has been done >> for instance for ETC: >> http://www.jacobstrom.com/publications/StromWennerstenHPG2011.pdf > > It has also been done for DXTC. An approach like this is probably the > best for ultimate compression. Yes.
If someone is interested: http://mrelusive.com/publications/papers/Real-Time-Texture-Streaming-&-Decompression.pdf Kristian > > Regards > > -Mark > > > -- _______________________________________________________________________________ Kristian Sons Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, DFKI Agenten und Simulierte Realität Campus, Geb. D 3 2, Raum 0.77 66123 Saarbrücken, Germany Phone: +49 681 85775-3833 Phone: +49 681 302-3833 Fax: +49 681 85775-2235 kristian.sons...@ http://www.xml3d.org Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Amtsgericht Kaiserslautern, HRB 2313 _______________________________________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Mon Dec 15 22:43:08 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Tue, 16 Dec 2014 07:43:08 +0100 Subject: [Public WebGL] HDR image format In-Reply-To: <9D5BA1CB-FD38-4396-A64F-43DBA5F91ED0@callow.im> References: <548AE227.5010906@dfki.de> <8C94DCEF-BFFD-40CB-A1BA-F52C0BCF3038@callow.im> <9D5BA1CB-FD38-4396-A64F-43DBA5F91ED0@callow.im> Message-ID: On Mon, Dec 15, 2014 at 7:01 AM, Mark Callow wrote: > > Another longer term approach may be diffusion-curve images. A paper at > Siggraph this year showed how to randomly access them, which means you can > sample them for texturing. At present there is no hardware support, so > custom shaders are required. > That's an interesting idea. As far as I understand it, all DCT-based algorithms and their variants (wavelets etc.), which afaik covers all of the existing formats (JPEG, WebP, HEVC etc.), are badly suited to HW-accelerated decoding because they use a non-fixed-size table of coefficients per patch to achieve compression (by eliminating entries from the full table that are low in energy). This is effectively not parallelizable.
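The coefficient-elimination point can be illustrated with a toy 1-D DCT-II. This is an illustrative sketch, not code from any codec: for a smooth signal, nearly all of the energy lands in the first few coefficients, which is what a variable-length coefficient table (and entropy coding over it) exploits, and also what makes decode work data-dependent:

```javascript
// Orthonormal 1-D DCT-II over an 8-sample block, to show energy
// compaction: a smooth ramp concentrates its energy in 2 coefficients.
function dct8(x) {
  const N = x.length;
  return x.map((_, k) => {
    let s = 0;
    for (let n = 0; n < N; n++) {
      s += x[n] * Math.cos((Math.PI / N) * (n + 0.5) * k);
    }
    return s * (k === 0 ? Math.sqrt(1 / N) : Math.sqrt(2 / N));
  });
}

const ramp = [0, 1, 2, 3, 4, 5, 6, 7];
const c = dct8(ramp);
const total = c.reduce((a, v) => a + v * v, 0);    // == sum of ramp[i]^2
const firstTwo = c[0] * c[0] + c[1] * c[1];
console.log(firstTwo / total > 0.95); // true: >95% of the energy in 2 coeffs
```

A lossy coder keeps those two coefficients and drops or coarsely quantizes the rest; how many survive varies per block, which is the "non-fixed-size table" Florian refers to.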
-------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Tue Dec 16 01:03:37 2014 From: khr...@ (Gregg Tavares) Date: Tue, 16 Dec 2014 01:03:37 -0800 Subject: [Public WebGL] WebGL drawing directly to the backbuffer Message-ID: I just wanted to throw this idea out there to the browser vendors (FYI: I haven't been at Google for a year and a half so I'm not currently in a position to make this happen or implement it) One of the issues with WebGL, in particular on mobile, is that it loses a lot of performance because it has to be composited. What I mean is WebGL renders to a texture. That texture is then drawn into the backbuffer in the correct order with other HTML elements. This effectively means a fullscreen WebGL app is drawing an entire extra screen's worth of pixels vs a native app. Many mobile GPUs (and even some desktop GPUs) have a low fill rate. So low they can only draw 1 or 2 screens' worth of pixels before they can no longer maintain 60fps. With WebGL adding at least 1 screen's worth of pixels of drawing, that means WebGL apps are at a big disadvantage compared to native apps. So, here's 2 ideas. The first is that it would be nice if WebGL could request to draw directly to the backbuffer. The second is that it would be nice if there was some way to tell the browser when it's safe to composite. In other words, tell the browser "you can't composite until I say so". So, with both of those options on (manual compositing and WebGL drawing directly to the backbuffer), * if the webpage is fullscreen * and if the canvas is the back-most element * and if the canvas covers the entire background (its width and height match the screen) * and if it's opaque (ALPHA=false so it doesn't have to be blended with any CSS background style) * and if preserveDrawingBuffer is false Then, let the canvas draw directly to the backbuffer.
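The checklist above is mechanical enough to sketch as a predicate. This is plain JavaScript over a made-up description of the page; none of these fields are a real browser API, they just restate the proposal's conditions:

```javascript
// Hypothetical eligibility check for drawing straight to the backbuffer,
// mirroring the conditions in the proposal. `page` is an illustrative
// stand-in object, not anything a browser actually exposes.
function canDrawDirectToBackbuffer(page) {
  const c = page.canvas;
  return page.fullscreen &&
         c.isBackMostElement &&
         c.width === page.screenWidth &&
         c.height === page.screenHeight &&
         c.contextAttribs.alpha === false &&
         c.contextAttribs.preserveDrawingBuffer === false;
}

const page = {
  fullscreen: true,
  screenWidth: 1920, screenHeight: 1080,
  canvas: {
    isBackMostElement: true, width: 1920, height: 1080,
    contextAttribs: { alpha: false, preserveDrawingBuffer: false },
  },
};
console.log(canDrawDirectToBackbuffer(page)); // true
page.canvas.contextAttribs.alpha = true;      // needs CSS blending now
console.log(canDrawDirectToBackbuffer(page)); // false
```

The point of writing it this way is that every condition is statically checkable by the browser at context-creation or layout time, so the fast path could be entered or abandoned without the page's cooperation.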
When the webpage signals that it's okay to composite the browser can draw whatever other HTML elements directly on top of the backbuffer and then flip buffers. Note: Why does the canvas have to be fullscreen and match the size of the screen? Because if you're drawing directly to the backbuffer then gl.scissor, gl.viewport, and especially gl_FragCoord won't work correctly if you're not drawing to the entire buffer. Manual compositing sounds like something many non WebGL apps might like. They could update multiple HTML elements across multiple events and only when they're sure everything is ready they could say "okay, it's safe to composite". So, I suspect that would have to be a web standard as in argued on the whatwg or w3 mailing list WebGL requesting drawing to the backbuffer is fully something Khronos could specify though without manual compositing it would be useless. It might also be possible, given those settings to even support a non-fullscreen canvas but it would either require some unknown driver extensions or else emulating gl_FragCoord so that sounds like too much work. So, for now it seems like just supporting fullscreen would go a long way to getting some perf back since many WebGL apps want to go fullscreen. It would also probably be very useful for Web VR stuff. Anyway it's just an idea. Maybe it will inspire some others. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Tue Dec 16 01:34:58 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Tue, 16 Dec 2014 10:34:58 +0100 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: Message-ID: I like this idea. Of course there's some drawbacks (no longer using HTML as UI). On Tue, Dec 16, 2014 at 10:03 AM, Gregg Tavares wrote: > > * and if preserveDrawingBuffer is false > "preserving" the drawing buffer is an important part of many applications (for instance half-life). 
One of the issues you will run into frequently is pixel flickering for a variety of reasons (t-junctions in the content, low vertex shader precision etc.). One of the most popular ways to deal with that issue is to clear depth but not color, and then draw the scene first, and the background (z-tested) afterwards. I've recently been forced to resort to this hack again because CSG.js produces a soup of t-junctions and eliminating the junctions for several tens of thousands of triangles wasn't feasible in JS. -------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Tue Dec 16 08:36:17 2014 From: khr...@ (Gregg Tavares) Date: Tue, 16 Dec 2014 08:36:17 -0800 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: Message-ID: On Tue, Dec 16, 2014 at 1:34 AM, Florian Bösch wrote: > > I like this idea. Of course there are some drawbacks (no longer using HTML > as UI). > The proposal addresses that. You can still use HTML as your UI. That's why you need to be able to manually control compositing AND why you need preserveDrawingBuffer false. Manual compositing tells the browser when you're done drawing to the backbuffer so it can then draw any HTML above the canvas on top of what you've previously drawn. PreserveDrawingBuffer = false lets the browser clear the new backbuffer once the buffers have been swapped (since HTML has been drawn on top). > > On Tue, Dec 16, 2014 at 10:03 AM, Gregg Tavares > wrote: >> >> * and if preserveDrawingBuffer is false >> > > "preserving" the drawing buffer is an important part of many applications > (for instance half-life). One of the issues you will run into frequently is > pixel flickering for a variety of reasons (t-junctions in the content, low > vertex shader precision etc.). One of the most popular ways to deal with > that issue is to clear depth but not color, and then draw the scene first, > and the background (z-tested) afterwards.
> Okay, so maybe if preserveDrawingBuffer = true then you have the added restriction that there can only be 1 element, the canvas. In that case you couldn't use HTML for UI. That's probably fine for games, especially native ports since they almost all render their UI themselves. > > I've recently been forced to resort to this hack again because CSG.js > produces a soup of t-junctions and eliminating the junctions for several > tenthousand triangles wasn't feasible in JS. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oet...@ Tue Dec 16 08:40:29 2014 From: oet...@ (Olli Etuaho) Date: Tue, 16 Dec 2014 16:40:29 +0000 Subject: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 In-Reply-To: References: ,<9EA301E5-9CE8-4EEC-94EB-18EC0F8FDDE4@callow.im> <635e45c2469d406b8ca3a169020468e2@UKMAIL101.nvidia.com>, <798d0073e77e4f15919d1a34ed4779d2@UKMAIL101.nvidia.com>, Message-ID: <18b422eb536c47089f46c324bd6ae303@UKMAIL101.nvidia.com> I asked the maintainers of shadertoy.com about how indexing typically gets used, and they claimed they've only seen constant expressions as indices. However, I couldn't obtain a database of shader code that could be used to verify if complex index expressions really are non-existent. So it's hard to say what the exact impact of making the spec stricter would be. Ben, in your earlier message you gave an interpretation of the spec where intrinsic (built-in) functions would definitely be allowed in constant-index-expressions: "with loop index variables for well defined loops able to be used as if they were a const variable" This agrees with what ANGLE currently implements. And since the risks from making the spec stricter are unclear, it would seem like a reasonable option to make this interpretation explicit in the WebGL spec and tests. 
However, your latest message sounds like you have some doubts about this, and it would be in the hands of IE team to make the actual behavior consistent across browsers. So can you confirm whether you still think your initial interpretation is the right one? -Olli ________________________________ From: Ben Constable Sent: Tuesday, December 9, 2014 3:08 AM To: Olli Etuaho; Mark Callow; Daniel Koch Cc: public webgl Subject: RE: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 Well, I wrote most of what we have in our implementation so my interpretation explains what we have, as well as what we will move towards :) I cannot make assertions about when we can fix things but in general, we support having exhaustive tests that support all elements of the specification and we will make every effort to pass said tests. At this point I would think that the specification is very liberal and therefore that there could be content out there relying on the behavior that you describing clamping down on. One case that might be interesting is the output of an intrinsic function when given a constant-index-expression. Is that also considered a constant-index-expression? We can work on the language of the spec but I think that tests document the intent of things far better. From: Olli Etuaho [mailto:oetuaho...@] Sent: Monday, December 8, 2014 11:54 AM To: Ben Constable; Mark Callow; Daniel Koch Cc: public webgl Subject: RE: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 That issue with assigning constant expressions containing calls to built-ins to constant floats is one thing that I was planning to add a test for. Ben, is your interpretation of constant-index-expressions also what IE is moving towards in the WebGL implementation? I could try to come up with more data to see what the impact of disallowing complex index expressions would be for existing shaders. 
________________________________ From: Ben Constable > Sent: Monday, December 8, 2014 8:42 PM To: Olli Etuaho; Mark Callow; Daniel Koch Cc: public webgl Subject: RE: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 I don't have a large amount of data on how frequently shaders fail because of this. From a cursory glance playing with shaders, Chrome and IE both fail to compile this: const float n = sin(1.0); And according to issue 10.11, they should accept it. So we at least have a bug here. But we don't have any test failures that I know of here, so I think that we have a test gap as well. Section 5.10 is pretty clear about built-in functions (including which ones are not usable in constant expressions). When it comes to constant-index-expressions, my interpretation is that it is largely the same as constant-expression, but with loop index variables for well-defined loops able to be used as if they were a const variable. From: owners-public_webgl...@ [mailto:owners-public_webgl...@] On Behalf Of Olli Etuaho Sent: Monday, December 8, 2014 7:07 AM To: Mark Callow; Daniel Koch Cc: public webgl Subject: RE: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 Mark: the first thing the spec says about constant-index expressions is "Definition: constant-index expressions are a superset of constant-expressions". The unclear part is what the spec means when it says "Expressions composed of both of the above", referring to constant expressions and loop indices. Composed how? Operators or calls are also implicitly needed to compose an expression, but which operations are allowed is not mentioned. It could be interpreted so that as soon as you operate on a loop index, it becomes a constant-index-expression and the intermediate value can't be operated on further to form another constant-index-expression. Alternatively, it could be interpreted so that multiple intermediate operations involving indices and constant expressions are allowed.
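To make the two readings concrete, here is an illustrative ESSL 1.00 fragment-shader sketch (the uniform name and loop are invented for the example; which of these lines a given compiler accepts as a constant-index-expression is exactly the open question):

```glsl
uniform vec4 u_data[8];

void main() {
  vec4 acc = vec4(0.0);
  for (int i = 0; i < 8; i++) {
    acc += u_data[i];         // plainly a constant-index-expression
    acc += u_data[i + 1 - 1]; // loop index composed with constant
                              // expressions: fine under the loose reading,
                              // unclear under the strict one
    int j = i * 2;
    acc += u_data[j / 2];     // an intermediate value derived from the
                              // loop index: the contested case
  }
  gl_FragColor = acc;
}
```

Under the strict reading only the first indexing form is guaranteed portable; under the loose reading all three are.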
I agree that this is a slight distraction from WebGL 2 work, but it's always a problem when IE and ANGLE differ in their interpretations of the spec. Of course regressing compatibility with content is a concern, but between browsers the compatibility is already broken. Maybe the IE team could comment on how frequently they see shaders failing because of this? The most complex indexing expression I found by sampling 20 featured/popular shaders on ShaderToy was [i + 1], so I don't think there would be that much breakage even if the rules are tightened to only allow integer addition and multiplication, for example. It will also help with fixing other problems related to constant expressions if this is clearly specified, since some of the code is shared between processing constant-index-expressions and constant expressions. -Olli ________________________________ From: Mark Callow Sent: Sunday, December 7, 2014 3:55 PM To: Daniel Koch Cc: Olli Etuaho; public webgl Subject: Re: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 * PGP Signed by an unknown key On Dec 6, 2014, at 12:13 AM, Daniel Koch wrote: I believe something like sin(0.0) is actually supposed to be a constant expression, as it can be evaluated at compile time. See issue 10.11, Constant Expressions, in the ESSL 1.00 spec. Should built-in functions be allowed in constant expressions? e.g. const float a = sin(1.0); RESOLUTION: Yes, allow built-in functions to be included in constant expressions. Redefinition of built-in functions is now prohibited. User-defined functions are not allowed in constant expressions. So in your examples below, "int(sin(0.0))" and "int(1.0+2.0)" should really be getting the same handling. Implementations are only required to support constant-index expressions, not constant expressions, in for loops (in fragment-shader for loops, at least). I thought the spec and grammar were quite clear on the definition of a constant-index-expression, but IANACE.
Regards -Mark * Unknown Key * 0x3F001063 From khr...@ Tue Dec 16 08:47:11 2014 From: khr...@ (Gregg Tavares) Date: Tue, 16 Dec 2014 08:47:11 -0800 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: Message-ID: On Tue, Dec 16, 2014 at 8:36 AM, Gregg Tavares wrote: > > > > On Tue, Dec 16, 2014 at 1:34 AM, Florian Bösch wrote: >> >> I like this idea. Of course there's some drawbacks (no longer using HTML >> as UI). >> > > The proposal addresses that. You can still use HTML as your UI. That's why > you need to be able to manually control compositing AND why you need > preserveDrawingBuffer = false. Manual compositing tells the browser when > you're done drawing to the backbuffer so it can then draw any HTML above > the canvas on top of what you've previously drawn. preserveDrawingBuffer = > false lets the browser clear the new backbuffer once the buffers have been > swapped (since HTML has been drawn on top). > > > >> >> On Tue, Dec 16, 2014 at 10:03 AM, Gregg Tavares >> wrote: >>> >>> * and if preserveDrawingBuffer is false >>> >> >> "preserving" the drawing buffer is an important part of many applications >> (for instance half-life). One of the issues you will run into frequently is >> pixel flickering for a variety of reasons (t-junctions in the content, low >> vertex shader precision etc.). One of the most popular ways to deal with >> that issue is to clear depth but not color, and then draw the scene first, >> and the background (z-tested) afterwards. >> > > Okay, so maybe if preserveDrawingBuffer = true then you have the added > restriction that there can only be 1 element, the canvas. In that case you > couldn't use HTML for UI. That's probably fine for games, especially > native ports since they almost all render their UI themselves.
> The problem you're going to have with this (and maybe the other proposal) is that the browser sometimes needs to draw UI. For example, the "you've gone fullscreen, press ESC to exit" or the "This app wants to access your location" messages. Of course the browser would be free to switch out of letting you draw directly to the backbuffer. From the app's POV nothing changes. It's really only a hint to help get perf. So, if the browser for some reason needs to start doing normal compositing, it can copy the current backbuffer to a texture and then do things the normal way. Once it's okay to start drawing to the backbuffer again, it could switch back. Although one issue with preserveDrawingBuffer when drawing to the backbuffer is that it might be impossible to totally preserve? For example if the user presses Alt-Tab or ⌘-Tab; I don't remember what happens on each OS. Maybe we need a new flag, dontClearDrawingBufferUnlessYouHaveTo or something, that suggests that normally it won't be cleared but it might be (for example if the user switches in/out of fullscreen). The current canvas, being stored in a texture, doesn't lose contents switching in and out of fullscreen unless you change the size of the canvas. > > > > >> >> I've recently been forced to resort to this hack again because CSG.js >> produces a soup of t-junctions and eliminating the junctions for several >> tens of thousands of triangles wasn't feasible in JS. >> > From dav...@ Tue Dec 16 08:55:18 2014 From: dav...@ (David Elahee) Date: Tue, 16 Dec 2014 17:55:18 +0100 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: Message-ID: I humbly like it very much, we can sort of do that with the old Flash runtime with wmode direct and it drastically changes the framerate... sorry if I am off topic :) On 16 Dec 2014 at 10:35, "Florian Bösch" wrote: > I like this idea.
Of course there's some drawbacks (no longer using HTML > as UI). > > On Tue, Dec 16, 2014 at 10:03 AM, Gregg Tavares > wrote: >> >> * and if preserveDrawingBuffer is false >> > > "preserving" the drawing buffer is an important part of many applications > (for instance half-life). One of the issues you will run into frequently is > pixel flickering for a variety of reasons (t-junctions in the content, low > vertex shader precision etc.). One of the most popular ways to deal with > that issue is to clear depth but not color, and then draw the scene first, > and the background (z-tested) afterwards. > > I've recently been forced to resort to this hack again because CSG.js > produces a soup of t-junctions and eliminating the junctions for several > tens of thousands of triangles wasn't feasible in JS. > From juj...@ Tue Dec 16 09:37:18 2014 From: juj...@ (Jukka Jylänki) Date: Tue, 16 Dec 2014 19:37:18 +0200 Subject: Fwd: Re: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: Message-ID: Oops, forwarding to whole list, accidentally sent to single recipient only. ---------- Forwarded message ---------- From: "Jukka Jylänki" Date: Dec 16, 2014 7:35 PM Subject: Re: [Public WebGL] WebGL drawing directly to the backbuffer To: "David Elahee" Cc: I've been battling with the mobile per-pixel fillrate issue for months, so I've seen this problem first hand. Because of the heavy compositing (in Firefox it is also tiled for other reasons), there is a considerable GPU performance difference between a native GL app on Android and an Emscripten-compiled version of the same app in a mobile browser.
To me it sounds like this problem could be avoided by detecting a set of magic conditions (CSS style included) inside the browser compositor when the canvas is covering the whole screen (Emscripten apps don't typically care about the HTML-as-UI option), but to my knowledge, no browser implements this. If something like this happened explicitly at the spec level, it would probably be very useful. On Dec 16, 2014 6:56 PM, "David Elahee" wrote: > I humbly like it very much, we can sort of do that with the old Flash > runtime with wmode direct and it drastically changes the framerate... sorry > if I am off topic :) > On 16 Dec 2014 at 10:35, "Florian Bösch" wrote: > >> I like this idea. Of course there's some drawbacks (no longer using HTML >> as UI). >> >> On Tue, Dec 16, 2014 at 10:03 AM, Gregg Tavares >> wrote: >>> >>> * and if preserveDrawingBuffer is false >>> >> >> "preserving" the drawing buffer is an important part of many applications >> (for instance half-life). One of the issues you will run into frequently is >> pixel flickering for a variety of reasons (t-junctions in the content, low >> vertex shader precision etc.). One of the most popular ways to deal with >> that issue is to clear depth but not color, and then draw the scene first, >> and the background (z-tested) afterwards. >> >> I've recently been forced to resort to this hack again because CSG.js >> produces a soup of t-junctions and eliminating the junctions for several >> tens of thousands of triangles wasn't feasible in JS. >> > From thi...@ Tue Dec 16 09:46:21 2014 From: thi...@ (Thibaut Despoulain) Date: Tue, 16 Dec 2014 09:46:21 -0800 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: Message-ID: Strong thumbs up from Artillery. I believe this could improve performance on the desktop too. I'd rather have an explicit API than a bunch of magic HTML/CSS conditions though.
Not being able to use HTML in that mode might not be that much of a problem (at least for us). On Dec 16, 2014 9:38 AM, "David Elahee" wrote: > I humbly like it very much, we can sort of do that with the old Flash > runtime with wmode direct and it drastically changes the framerate... sorry > if I am off topic :) > On 16 Dec 2014 at 10:35, "Florian Bösch" wrote: > >> I like this idea. Of course there's some drawbacks (no longer using HTML >> as UI). >> >> On Tue, Dec 16, 2014 at 10:03 AM, Gregg Tavares >> wrote: >>> >>> * and if preserveDrawingBuffer is false >>> >> >> "preserving" the drawing buffer is an important part of many applications >> (for instance half-life). One of the issues you will run into frequently is >> pixel flickering for a variety of reasons (t-junctions in the content, low >> vertex shader precision etc.). One of the most popular ways to deal with >> that issue is to clear depth but not color, and then draw the scene first, >> and the background (z-tested) afterwards. >> >> I've recently been forced to resort to this hack again because CSG.js >> produces a soup of t-junctions and eliminating the junctions for several >> tens of thousands of triangles wasn't feasible in JS. >> > From baj...@ Tue Dec 16 10:02:30 2014 From: baj...@ (Brandon Jones) Date: Tue, 16 Dec 2014 18:02:30 +0000 Subject: [Public WebGL] WebGL drawing directly to the backbuffer References: Message-ID: I very much like this proposal, and have been considering something similar for a while. There are some significant technical hurdles involved, and I have no idea what kind of timeframe would be involved for implementing it, but I think the core idea is a really good one. On the subject of an explicit flag vs. magic conditions: for a variety of reasons, some of which Gregg already pointed out, I don't think this could work at all outside of fullscreen.
A really simple example is that on several platforms (at least Windows and Linux) the backbuffer contains not only the page contents but the browser chrome as well. Fullscreen is the only mode where we wouldn't have to worry about the browser UI being stomped on. As such, at least some combination of magic conditions will be required, and I think that meeting said conditions AND requiring an explicit flag is probably overkill. Plus, if there's no flag there's a possibility for existing apps to automatically get faster with no input from the developers. --Brandon From khr...@ Tue Dec 16 10:12:31 2014 From: khr...@ (Gregg Tavares) Date: Tue, 16 Dec 2014 10:12:31 -0800 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: Message-ID: On Tue, Dec 16, 2014 at 10:02 AM, Brandon Jones wrote: > > I very much like this proposal, and have been considering something > similar for a while. There's some significant technical hurdles involved, > and I have no idea what kind of timeframe would be involved for > implementing it, but I think the core idea is a really good one. > > On the subject of an explicit flag vs. magic conditions: For a variety of > reasons, some of which Gregg already pointed out, I don't think this could > work at all outside of fullscreen. A really simple example is that on > several platforms (at least Windows and Linux) the backbuffer contains not > only the page contents but the browser chrome as well. Fullscreen is the > only mode where we wouldn't have to worry about the browser UI being > stomped on. As such, at least some combination of magic conditions will be > required and I think that meeting said conditions AND requiring an explicit > flag is probably overkill. Plus, if there's no flag there's a possibility > for existing apps to automatically get faster with no input from the > developers.
> > If you can figure out a way without an explicit flag, great, but I couldn't > see a way IF you still want to allow HTML to be composited on top of the > canvas. If all you want is a canvas with preserveDrawingBuffer = false and > nothing else then yea, maybe you can do it without extra flags, but that > seems like it's really going against the platform (as in you're telling the > developer to not use any of the platform, ie HTML5). Heck, as a simple > example, even IF I eventually will have only a canvas I'd often like to > print some debug information on top of that (say like using stats.js) > --Brandon > From pya...@ Tue Dec 16 10:27:41 2014 From: pya...@ (Florian Bösch) Date: Tue, 16 Dec 2014 19:27:41 +0100 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: Message-ID: WebVR is a very good use case for not having any HTML on top because it doesn't work anyway. There's some work by Mozilla to use CSS transforms and rendering the DOM for each eye to make it feasible to have WebVR UIs. But it's never going to look really good and it'll be limited to HUDs. The most outstanding examples of UIs in VR so far have been the ones that meld the 3D content and the UI, and that's not something you'll get done with HTML. Detecting if there's no HTML on top of the full-screened canvas would be straightforward and wouldn't require a flag. Btw. why don't you just have the browser draw on top of the backbuffer once the rAF is through? On Tue, Dec 16, 2014 at 7:12 PM, Gregg Tavares wrote: > > > > On Tue, Dec 16, 2014 at 10:02 AM, Brandon Jones > wrote: >> >> I very much like this proposal, and have been considering something >> similar for a while. There's some significant technical hurdles involved, >> and I have no idea what kind of timeframe would be involved for >> implementing it, but I think the core idea is a really good one.
>> >> On the subject of an explicit flag vs. magic conditions: For a variety of >> reasons, some of which Gregg already pointed out, I don't think this could >> work at all outside of fullscreen. A really simple example is that on >> several platforms (at least Windows and Linux) the backbuffer contains not >> only the page contents but the browser chrome as well. Fullscreen is the >> only mode where we wouldn't have to worry about the browser UI being >> stomped on. As such, at least some combination of magic conditions will be >> required and I think that meeting said conditions AND requiring an explicit >> flag is probably overkill. Plus, if there's no flag there's a possibility >> for existing apps to automatically get faster with no input from the >> developers. >> >> > If you can figure out a way without an explicit flag, great, but I couldn't > see a way IF you still want to allow HTML to be composited on top of the > canvas. If all you want is a canvas with preserveDrawingBuffer = false and > nothing else then yea, maybe you can do it without extra flags but that > seems like it's really going against the platform (as in you're telling the > developer to not use any of the platform, ie HTML5). Heck, as a simple > example, even IF I eventually will have only a canvas I'd often like to > print some debug information on top of that (say like using stats.js) > > > > > >> --Brandon >> > From ash...@ Tue Dec 16 10:32:53 2014 From: ash...@ (Ashley Gullen) Date: Tue, 16 Dec 2014 18:32:53 +0000 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: Message-ID: This would be good for Scirra's HTML5 game engine, which uses WebGL for 2D game rendering (WebGL has significant performance and feature advantages over canvas2d even for 2D).
However, I want to raise that for 2D games a common reason we don't have a canvas the size of the backbuffer is letterbox mode, which is a good way to easily get a 2D game running on any size screen. The background is black and the canvas is scaled, preserving aspect ratio, to fill the screen with black bars down the side. This also applies in fullscreen mode. However in this case it would be acceptable if the browser in fact used a backbuffer the size of the canvas, since there is never any need to draw anything outside the canvas area and it's left black. So maybe it's worth considering controlling the backbuffer dimensions and offset for this feature, particularly if it's using an explicit API. It's possible to implement letterbox mode with a canvas the size of the window, but it can be difficult to do this without increasing the fillrate (e.g. by over-drawing black bars, contradicting the purpose of the feature) or facing the same problems the browser would with gl_FragCoord and such. Ashley On 16 December 2014 at 18:02, Brandon Jones wrote: > > I very much like this proposal, and have been considering something > similar for a while. There's some significant technical hurdles involved, > and I have no idea what kind of timeframe would be involved for > implementing it, but I think the core idea is a really good one. > > On the subject of an explicit flag vs. magic conditions: For a variety of > reasons, some of which Gregg already pointed out, I don't think this could > work at all outside of fullscreen. A really simple example is that on > several platforms (at least Windows and Linux) the backbuffer contains not > only the page contents but the browser chrome as well. Fullscreen is the > only mode where we wouldn't have to worry about the browser UI being > stomped on. As such, at least some combination of magic conditions will be > required and I think that meeting said conditions AND requiring an explicit > flag is probably overkill.
Plus, if there's no flag there's a possibility > for existing apps to automatically get faster with no input from the > developers. > > --Brandon > From baj...@ Tue Dec 16 11:01:26 2014 From: baj...@ (Brandon Jones) Date: Tue, 16 Dec 2014 19:01:26 +0000 Subject: [Public WebGL] WebGL drawing directly to the backbuffer References: Message-ID: Even if you did have to "manually" letterbox your content in this mode it should be a net fillrate win. You're effectively cutting the total fillrate for the canvas by half, so some additional fill for the letterboxing bars would be negligible. It would still be nice to have a way of specifying canvases smaller than the native screen resolution, though. (Effectively trigger a ChangeDisplaySettings or equivalent.) That *would* probably require an explicit flag. And to repeat Florian: WebVR would be a perfect poster child for content like this. :) A lot of ASM.js content would also benefit. On Tue Dec 16 2014 at 10:32:53 AM Ashley Gullen wrote: > This would be good for Scirra's HTML5 game engine, which uses WebGL for 2D > game rendering (WebGL has significant performance and feature advantages > over canvas2d even for 2D). However I want to raise that for 2D games a > common reason we don't have a canvas the size of the backbuffer is for > letterbox mode, which is a good way to easily get a 2D game running on any > size screen. The background is black and the canvas is aspect-preserving > scaled to fill the screen with black bars down the side. This also applies > in fullscreen mode. However in this case it would be acceptable if the > browser in fact used a backbuffer the size of the canvas, since there is > never any need to draw anything outside the canvas area and it's left > black. So maybe it's worth considering controlling the backbuffer > dimensions and offset for this feature, particularly if it's using an > explicit API.
> > It's possible to implement letterbox mode with a canvas the size of the > window, but it can be difficult to do this without increasing the fillrate > (e.g. by over-drawing black bars, contradicting the purpose of the feature) > or facing the same problems the browser would with gl_FragCoord and such. > > Ashley > > > > On 16 December 2014 at 18:02, Brandon Jones wrote: >> >> I very much like this proposal, and have been considering something >> similar for a while. There's some significant technical hurdles involved, >> and I have no idea what kind of timeframe would be involved for >> implementing it, but I think the core idea is a really good one. >> >> On the subject of an explicit flag vs. magic conditions: For a variety of >> reasons, some of which Gregg already pointed out, I don't think this could >> work at all outside of fullscreen. A really simple example is that on >> several platforms (at least Windows and Linux) the backbuffer contains not >> only the page contents but the browser chrome as well. Fullscreen is the >> only mode where we wouldn't have to worry about the browser UI being >> stomped on. As such, at least some combination of magic conditions will be >> required and I think that meeting said conditions AND requiring an explicit >> flag is probably overkill. Plus, if there's no flag there's a possibility >> for existing apps to automatically get faster with no input from the >> developers. >> >> --Brandon >> > From khr...@ Tue Dec 16 11:06:31 2014 From: khr...@ (Gregg Tavares) Date: Tue, 16 Dec 2014 11:06:31 -0800 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: Message-ID: On Tue, Dec 16, 2014 at 10:27 AM, Florian Bösch wrote: > > WebVR is a very good use case for not having any HTML on top because it > doesn't work anyway.
There's some work by Mozilla to use CSS transforms and > rendering the DOM for each eye to make it feasible to have WebVR UIs. But > it's never going to look really good and it'll be limited to HUDs. The most > outstanding examples of UIs in VR so far have been the ones that meld the > 3d content and the UI, and that's not something you'll get done with HTML. > > Detecting if there's no HTML on top of the full-screened canvas would be > straightforward and wouldn't require a flag. > > Btw. why don't you just have the browser draw on top of the backbuffer > once the rAF is through? > Because there are tons of HTML events (mousedown, mouseup, touchstart, touchend, setTimeout, click, keydown, keyup). Any one of those events can change HTML content, at which point under the current browser model the browser needs to recomposite. When the canvas is in a texture that's easy. It can recomposite anytime it wants to. If the canvas is drawing directly to the backbuffer though, the browser can't draw whenever it wants to. > > On Tue, Dec 16, 2014 at 7:12 PM, Gregg Tavares > wrote: >> >> >> >> On Tue, Dec 16, 2014 at 10:02 AM, Brandon Jones >> wrote: >>> >>> I very much like this proposal, and have been considering something >>> similar for a while. There's some significant technical hurdles involved, >>> and I have no idea what kind of timeframe would be involved for >>> implementing it, but I think the core idea is a really good one. >>> >>> On the subject of an explicit flag vs. magic conditions: For a variety >>> of reasons, some of which Gregg already pointed out, I don't think this >>> could work at all outside of fullscreen. A really simple example is that on >>> several platforms (at least Windows and Linux) the backbuffer contains not >>> only the page contents but the browser chrome as well. Fullscreen is the >>> only mode where we wouldn't have to worry about the browser UI being >>> stomped on.
As such, at least some combination of magic conditions will be >>> required and I think that meeting said conditions AND requiring an explicit >>> flag is probably overkill. Plus, if there's no flag there's a possibility >>> for existing apps to automatically get faster with no input from the >>> developers. >>> >>> >> If you can figure out a way without an explicit flag, great, but I couldn't >> see a way IF you still want to allow HTML to be composited on top of the >> canvas. If all you want is a canvas with preserveDrawingBuffer = false and >> nothing else then yea, maybe you can do it without extra flags but that >> seems like it's really going against the platform (as in you're telling the >> developer to not use any of the platform, ie HTML5). Heck, as a simple >> example, even IF I eventually will have only a canvas I'd often like to >> print some debug information on top of that (say like using stats.js) >> >> >> >> >> >>> --Brandon >>> >> From khr...@ Tue Dec 16 11:10:09 2014 From: khr...@ (Gregg Tavares) Date: Tue, 16 Dec 2014 11:10:09 -0800 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: Message-ID: On Tue, Dec 16, 2014 at 10:32 AM, Ashley Gullen wrote: > > This would be good for Scirra's HTML5 game engine, which uses WebGL for 2D > game rendering (WebGL has significant performance and feature advantages > over canvas2d even for 2D). However I want to raise that for 2D games a > common reason we don't have a canvas the size of the backbuffer is for > letterbox mode, which is a good way to easily get a 2D game running on any > size screen. The background is black and the canvas is aspect-preserving > scaled to fill the screen with black bars down the side. This also applies > in fullscreen mode.
However in this case it would be acceptable if the > browser in fact used a backbuffer the size of the canvas, since there is > never any need to draw anything outside the canvas area and it's left > black. So maybe it's worth considering controlling the backbuffer > dimensions and offset for this feature, particularly if it's using an > explicit API. > > It's possible to implement letterbox mode with a canvas the size of the > window, but it can be difficult to do this without increasing the fillrate > (e.g. by over-drawing black bars, contradicting the purpose of the feature) > or facing the same problems the browser would with gl_FragCoord and such. > Unfortunately you're back to the same issue, namely that if the canvas is smaller than the screen, gl.scissor, gl.viewport, and gl_FragCoord will not work correctly. You can do this yourself, though: make a fullscreen canvas and set gl.scissor and gl.viewport to the "letterbox" you want to render to. That way gl.scissor, gl.viewport, and gl_FragCoord will still work exactly as specified and it will be up to you to deal with it. > > Ashley > > > > On 16 December 2014 at 18:02, Brandon Jones wrote: >> >> I very much like this proposal, and have been considering something >> similar for a while. There's some significant technical hurdles involved, >> and I have no idea what kind of timeframe would be involved for >> implementing it, but I think the core idea is a really good one. >> >> On the subject of an explicit flag vs. magic conditions: For a variety of >> reasons, some of which Gregg already pointed out, I don't think this could >> work at all outside of fullscreen. A really simple example is that on >> several platforms (at least Windows and Linux) the backbuffer contains not >> only the page contents but the browser chrome as well. Fullscreen is the >> only mode where we wouldn't have to worry about the browser UI being >> stomped on.
As such, at least some combination of magic conditions will be >> required and I think that meeting said conditions AND requiring an explicit >> flag is probably overkill. Plus, if there's no flag there's a possibility >> for existing apps to automatically get faster with no input from the >> developers. >> >> --Brandon >> > From pya...@ Tue Dec 16 11:10:28 2014 From: pya...@ (Florian Bösch) Date: Tue, 16 Dec 2014 20:10:28 +0100 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: Message-ID: On Tue, Dec 16, 2014 at 8:06 PM, Gregg Tavares wrote: > > Because there are tons of HTML events (mousedown, mouseup, touchstart, > touchend, setTimeout, click, keydown, keyup). Any one of those events can > change HTML content, at which point under the current browser model the > browser needs to recomposite.
When the canvas is in a texture that's easy. >> It can recomposite anytime it wants to. If the canvas is drawing directly >> to the backbuffer though, the browser can't draw whenever it wants to. >> > If you keep a texture that contains the (alpha-blended) contents of > whatever's on top of the canvas, and draw it on the canvas once (at up to > 60x/second), that should be sufficient, shouldn't it? > No: because the browser doesn't know when it's safe to draw that texture on top of your content. setTimeout(function() { someHTMLElement.innerHTML = Date.now(); }, 1234); What is the browser supposed to do now? someHTMLElement needs to be re-rendered and re-composited. So it is rendered into this texture you mentioned. Now that texture needs to be drawn on top of your canvas (on top of the backbuffer). When does that happen? There's no guarantee I'm using RaF at all. The browser can't divine that someday I'll do a RaF. Even if I do a RaF there's no guarantee I'm going to draw to the canvas on every RaF, nor is there a guarantee I don't have more than one RaF. From pya...@ Tue Dec 16 11:29:18 2014 From: pya...@ (Florian Bösch) Date: Tue, 16 Dec 2014 20:29:18 +0100 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: Message-ID: So the flag should ensure that the browser only overdraws the backbuffer with its overlay texture or whatever, once it's "safe" to do so. But that means that without JS communicating to the browser the safeness of overdrawing, you'll get nothing drawn whatsoever. I think it'd be perfectly legitimate to ignore setTimeout as a method to draw to the backbuffer. It's been known that it doesn't work well, and most demos that did it broke anyway. Which leaves you with requestAnimationFrame, and no matter if you have one or multiple, they are going to complete and then a frame is finished.
After that it's safe to draw on top of the backbuffer. And if no further request animation frame is called, you'll get the same effect as when you'd not indicate "compositing safeness", you get a stall, nothing gets drawn. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Tue Dec 16 11:43:02 2014 From: khr...@ (Gregg Tavares) Date: Tue, 16 Dec 2014 11:43:02 -0800 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: Message-ID: > think it'd be perfectly legitimate to ignore setTimeout as a method to draw to the backbuffer I'm not even sure what that means? Does it mean you can't draw (get an error if you try?) or does it mean your results are undefined if you do? The first requires lots of overhead to make every function fail. The second one has undefined behavior which often works by accident which leads to pages that work in some cases and not others. RaF isn't a "special" event. Most (all?) events are the same as far as the browser is concerned. The only difference is what triggers them. RaF doesn't cause a composite. Changing content causes a composite. requestAnimationFrame(function() {}); does not cause any compositing This is perfectly legit drawing under the current model. someHTMLElement.addEventListener('mousemove', function(event) { moveCameraBasedOnMousePosition(event); drawSceneToCanvasWithWebGL(); }, false); On Tue, Dec 16, 2014 at 11:29 AM, Florian B?sch wrote: > > So the flag should ensure that the browser only overdraws the backbuffer > with its overlay texture or whatever, once it's "safe" to do so. > > But that means that without JS communicating to the browser the safeness > of overdrawing, that you'll get nothing drawn whatsoever. > > I think it'd be perfectly legitimate to ignore setTimeout as a method to > draw to the backbuffer. It's been known that it doesn't work well, and most > demos that did it broke anyway. 
Which leaves you with > requestAnimationFrame, and no matter if you have one or multiple, they are > going to complete and then a frame is finished. After that it's safe to > draw on top of the backbuffer. And if no further request animation frame is > called, you'll get the same effect as when you'd not indicate "compositing > safeness", you get a stall, nothing gets drawn. > >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Tue Dec 16 11:46:37 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Tue, 16 Dec 2014 20:46:37 +0100 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: Message-ID: Well then it seems to me that regardless of a flag or whatever, you're stuck with not being able to draw to the backbuffer and so the suggestion to do so is moot. On Tue, Dec 16, 2014 at 8:43 PM, Gregg Tavares wrote: > > > think it'd be perfectly legitimate to ignore setTimeout as a method to > draw to the backbuffer > > I'm not even sure what that means? Does it mean you can't draw (get an > error if you try?) or does it mean your results are undefined if you do? > The first requires lots of overhead to make every function fail. The second > one has undefined behavior which often works by accident which leads to > pages that work in some cases and not others. > > RaF isn't a "special" event. Most (all?) events are the same as far as the > browser is concerned. The only difference is what triggers them. > > RaF doesn't cause a composite. Changing content causes a composite. > > requestAnimationFrame(function() {}); > > does not cause any compositing > > This is perfectly legit drawing under the current model. 
> > someHTMLElement.addEventListener('mousemove', function(event) { > moveCameraBasedOnMousePosition(event); > drawSceneToCanvasWithWebGL(); > }, false); > > > > > > On Tue, Dec 16, 2014 at 11:29 AM, Florian Bösch wrote: >> >> So the flag should ensure that the browser only overdraws the backbuffer >> with its overlay texture or whatever, once it's "safe" to do so. >> >> But that means that without JS communicating to the browser the safeness >> of overdrawing, that you'll get nothing drawn whatsoever. >> >> I think it'd be perfectly legitimate to ignore setTimeout as a method to >> draw to the backbuffer. It's been known that it doesn't work well, and most >> demos that did it broke anyway. Which leaves you with >> requestAnimationFrame, and no matter if you have one or multiple, they are >> going to complete and then a frame is finished. After that it's safe to >> draw on top of the backbuffer. And if no further request animation frame is >> called, you'll get the same effect as when you'd not indicate "compositing >> safeness", you get a stall, nothing gets drawn. >> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Tue Dec 16 11:51:46 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Tue, 16 Dec 2014 20:51:46 +0100 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: Message-ID: If the following behavior is observed I'd think it's safe to overdraw the backbuffer with the overlay, and it's safe to sync compositing to drawing: - A canvas is fullscreen - Its context's alpha setting is false - It initiates a RAF on DOMContentLoaded - The RAF initiates another RAF before it's finished If the code doesn't do that, you don't let it draw to the backbuffer and fall back to the slower composited path.
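Those four conditions amount to a predicate the browser could evaluate before choosing a rendering path. A minimal sketch of that heuristic as a pure function follows; the state object and all field names are hypothetical (a real browser would track these internally rather than expose them):

```javascript
// Sketch of the proposed heuristic: take the direct-to-backbuffer path only
// when all four conditions listed above hold. Illustrative only, not an API.
function isBackbufferEligible(state) {
  return state.canvasIsFullscreen === true &&    // the canvas is fullscreen
         state.contextAlpha === false &&         // context created with alpha: false
         state.rafStartedOnLoad === true &&      // RAF chain began on DOMContentLoaded
         state.rafRequeuedBeforeFinish === true; // each RAF requests the next one
}
```

Code failing the predicate would simply fall back to the slower composited path, as suggested above.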
On Tue, Dec 16, 2014 at 8:46 PM, Florian B?sch wrote: > > Well then it seems to me that regardless of a flag or whatever, you're > stuck with not being able to draw to the backbuffer and so the suggestion > to do so is moot. > > On Tue, Dec 16, 2014 at 8:43 PM, Gregg Tavares > wrote: >> >> > think it'd be perfectly legitimate to ignore setTimeout as a method >> to draw to the backbuffer >> >> I'm not even sure what that means? Does it mean you can't draw (get an >> error if you try?) or does it mean your results are undefined if you do? >> The first requires lots of overhead to make every function fail. The second >> one has undefined behavior which often works by accident which leads to >> pages that work in some cases and not others. >> >> RaF isn't a "special" event. Most (all?) events are the same as far as >> the browser is concerned. The only difference is what triggers them. >> >> RaF doesn't cause a composite. Changing content causes a composite. >> >> requestAnimationFrame(function() {}); >> >> does not cause any compositing >> >> This is perfectly legit drawing under the current model. >> >> someHTMLElement.addEventListener('mousemove', function(event) { >> moveCameraBasedOnMousePosition(event); >> drawSceneToCanvasWithWebGL(); >> }, false); >> >> >> >> >> >> On Tue, Dec 16, 2014 at 11:29 AM, Florian B?sch wrote: >>> >>> So the flag should ensure that the browser only overdraws the backbuffer >>> with its overlay texture or whatever, once it's "safe" to do so. >>> >>> But that means that without JS communicating to the browser the safeness >>> of overdrawing, that you'll get nothing drawn whatsoever. >>> >>> I think it'd be perfectly legitimate to ignore setTimeout as a method to >>> draw to the backbuffer. It's been known that it doesn't work well, and most >>> demos that did it broke anyway. Which leaves you with >>> requestAnimationFrame, and no matter if you have one or multiple, they are >>> going to complete and then a frame is finished. 
After that it's safe to >>> draw on top of the backbuffer. And if no further request animation frame is >>> called, you'll get the same effect as when you'd not indicate "compositing >>> safeness", you get a stall, nothing gets drawn. >>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Tue Dec 16 12:09:40 2014 From: khr...@ (Gregg Tavares) Date: Tue, 16 Dec 2014 12:09:40 -0800 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: Message-ID: On Tue, Dec 16, 2014 at 11:51 AM, Florian Bösch wrote: > > If the following behavior is observed I'd think it's safe to overdraw the > backbuffer with the overlay, and it's safe to sync compositing to drawing: > > - A canvas is fullscreen > - Its context's alpha setting is false > - It initiates a RAF on DOMContentLoaded > - The RAF initiates another RAF before it's finished > > If the code doesn't do that, you don't let it draw to the backbuffer and > fall back to the slower composited path. > You're still assuming only RaF and only one RaF. In your case above, what happens if I draw to the canvas during mousemove? What happens if I change the content of an HTML element in a keydown event? What happens if I have 2 RaF requests? > > On Tue, Dec 16, 2014 at 8:46 PM, Florian Bösch wrote: >> >> Well then it seems to me that regardless of a flag or whatever, you're >> stuck with not being able to draw to the backbuffer and so the suggestion >> to do so is moot. >> >> On Tue, Dec 16, 2014 at 8:43 PM, Gregg Tavares >> wrote: >>> >>> > think it'd be perfectly legitimate to ignore setTimeout as a method >>> to draw to the backbuffer >>> >>> I'm not even sure what that means? Does it mean you can't draw (get an >>> error if you try?) or does it mean your results are undefined if you do? >>> The first requires lots of overhead to make every function fail.
The second >>> one has undefined behavior which often works by accident which leads to >>> pages that work in some cases and not others. >>> >>> RaF isn't a "special" event. Most (all?) events are the same as far as >>> the browser is concerned. The only difference is what triggers them. >>> >>> RaF doesn't cause a composite. Changing content causes a composite. >>> >>> requestAnimationFrame(function() {}); >>> >>> does not cause any compositing >>> >>> This is perfectly legit drawing under the current model. >>> >>> someHTMLElement.addEventListener('mousemove', function(event) { >>> moveCameraBasedOnMousePosition(event); >>> drawSceneToCanvasWithWebGL(); >>> }, false); >>> >>> >>> >>> >>> >>> On Tue, Dec 16, 2014 at 11:29 AM, Florian B?sch >>> wrote: >>>> >>>> So the flag should ensure that the browser only overdraws the >>>> backbuffer with its overlay texture or whatever, once it's "safe" to do so. >>>> >>>> But that means that without JS communicating to the browser the >>>> safeness of overdrawing, that you'll get nothing drawn whatsoever. >>>> >>>> I think it'd be perfectly legitimate to ignore setTimeout as a method >>>> to draw to the backbuffer. It's been known that it doesn't work well, and >>>> most demos that did it broke anyway. Which leaves you with >>>> requestAnimationFrame, and no matter if you have one or multiple, they are >>>> going to complete and then a frame is finished. After that it's safe to >>>> draw on top of the backbuffer. And if no further request animation frame is >>>> called, you'll get the same effect as when you'd not indicate "compositing >>>> safeness", you get a stall, nothing gets drawn. >>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pya...@ Tue Dec 16 12:17:37 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Tue, 16 Dec 2014 21:17:37 +0100 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: Message-ID: On Tue, Dec 16, 2014 at 9:09 PM, Gregg Tavares wrote: > > you're still assuming only RaF > Because trying to do realtime drawing as synchronized as possible is usually not done with setTimeout > and only one RaF. In your case above what happens if I draw to the canvas > during mousemove? > You don't do that, it's considered bad form. It's considered bad form because you can easily get dozens of mousemoves between frames. > What happens if I change the content of an HTML element in a keydown > event? > At the next animation frame you get to overdraw the backbuffer. > What happens if I have 2 RaF requests? > Both will finish, and the frame is done, you can then overdraw the backbuffer. You don't initiate the next callback to any RAF initiator while the last frame hasn't been presented, and so all new RAFs queue for the next frame until you present (and overdraw the backbuffer). > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Tue Dec 16 13:22:38 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Tue, 16 Dec 2014 22:22:38 +0100 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: Message-ID: How about this. At WebGL context creation you need to pass a callback, as in: var gl = canvas.getContext({draw:myfunction}) If this is done, it would mean that: - You can only draw to the frontbuffer (as in no FBO is bound) inside that callback. All other draws (any other event) will be ignored. - If you don't draw in that function, you will get nothing (probably black). - Drawing order when not in fullscreen is the usual composited order, but in fullscreen it's myFunction(); and then the browser draws on the backbuffer.
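The contract being proposed here can be modelled in plain JavaScript. The following is only a toy simulation under the proposal's assumptions: no shipped getContext ever accepted a draw callback, and the "browser" below is a stand-in class, not a real implementation:

```javascript
// Toy model of the single-draw-callback contract: the browser owns the
// frame loop and invokes the one registered callback to fill the
// backbuffer. Not drawing inside the callback yields black, matching the
// "if you don't draw in that function, you will get nothing" rule above.
class FakeBrowserFrameLoop {
  constructor(drawCallback) {
    this.drawCallback = drawCallback;  // the sole draw entry point per context
  }
  compositeFrame() {
    let backbuffer = 'black';          // default when nothing is drawn
    const present = (pixels) => { backbuffer = pixels; };
    if (this.drawCallback) this.drawCallback(present);
    return backbuffer;                 // browser then overdraws its UI on top
  }
}

const loop = new FakeBrowserFrameLoop((present) => present('scene'));
```

The point of the mock is that event-driven draws disappear from the model entirely: only what happens inside the registered callback ever reaches the backbuffer.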
On Tue, Dec 16, 2014 at 9:17 PM, Florian B?sch wrote: > > On Tue, Dec 16, 2014 at 9:09 PM, Gregg Tavares > wrote: >> >> you're still assuming only RaF >> > Because trying to do realtime drawing as synchronized as possible is > usually not done with setTimeout > > >> and only one RaF. In your case above what happens if I draw to the canvas >> during mousemove? >> > You don't do that, it's considered bad form. It's considered bad form > because you can easily get dozens of mousemoves between frames. > > >> That happens if I change the content of an HTML element in a keydown >> event? >> > At the next animation frame you get to overdraw the backbuffer. > > >> That happens if I have 2 RaFs requests? >> > Both will finish, and the frame is done, you can then overdraw the > backbuffer. You're not initating the next call back to any RAF initiator > before the last frame that hasn't been presented, isn't presented, and so > all new RAFs queue on the next frame until you present (and overdraw the > backbuffer). > >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jgi...@ Tue Dec 16 17:24:01 2014 From: jgi...@ (Jeff Gilbert) Date: Tue, 16 Dec 2014 17:24:01 -0800 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: Message-ID: Drawing directly into the backbuffer (for fullscreen elements) is something we want to investigate. -Jeff On Tue, Dec 16, 2014 at 1:22 PM, Florian B?sch wrote: > > How about this. At WebGL context creation you need to pass a callback, as > in: > > var gl = canvas.getContext({draw:myfunction}) > > If this is done, it would mean that: > > - You can only draw to the frontbuffer (as in no FBO is bound) inside > that callback. All other draws (any other event) will be ignored. > - If you don't draw in that function, you will get nothing (probably > black). 
> - Drawing order when not in fullscreen is the usual composited order, but in > fullscreen it's myFunction(); and then the browser draws on the backbuffer. > > On Tue, Dec 16, 2014 at 9:17 PM, Florian Bösch wrote: >> >> On Tue, Dec 16, 2014 at 9:09 PM, Gregg Tavares >> wrote: >>> >>> you're still assuming only RaF >>> >> Because trying to do realtime drawing as synchronized as possible is >> usually not done with setTimeout >> >> >>> and only one RaF. In your case above what happens if I draw to the >>> canvas during mousemove? >>> >> You don't do that, it's considered bad form. It's considered bad form >> because you can easily get dozens of mousemoves between frames. >> >> >>> What happens if I change the content of an HTML element in a keydown >>> event? >>> >> At the next animation frame you get to overdraw the backbuffer. >> >> >>> What happens if I have 2 RaF requests? >>> >> Both will finish, and the frame is done, you can then overdraw the >> backbuffer. You don't initiate the next callback to any RAF initiator >> while the last frame hasn't been presented, and so >> all new RAFs queue for the next frame until you present (and overdraw the >> backbuffer). >> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Wed Dec 17 00:32:31 2014 From: khr...@ (Gregg Tavares) Date: Wed, 17 Dec 2014 00:32:31 -0800 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: Message-ID: On Tue, Dec 16, 2014 at 12:17 PM, Florian Bösch wrote: > > On Tue, Dec 16, 2014 at 9:09 PM, Gregg Tavares > wrote: >> >> you're still assuming only RaF >> > Because trying to do realtime drawing as synchronized as possible is > usually not done with setTimeout > I think you're missing my point (or maybe I'm missing yours). If you don't tell the browser that you want this special behavior it has no way to divine it.
It doesn't know whether or not you're going to try to draw in every event or only in RaF events. Drawing in every event is fully HTML5 compliant, so if you want it to stop behaving compliantly then you need some way to tell it "Hey, stop doing the compliant behavior and do this new thing (manual compositing + backbuffer rendering)". It can't guess; otherwise it will break compliant apps. > > >> and only one RaF. In your case above what happens if I draw to the canvas >> during mousemove? >> > You don't do that, it's considered bad form. It's considered bad form > because you can easily get dozens of mousemoves between frames. > >> What happens if I change the content of an HTML element in a keydown >> event? >> > At the next animation frame you get to overdraw the backbuffer. > >> What happens if I have 2 RaF requests? >> > Both will finish, and the frame is done, you can then overdraw the > backbuffer. You don't initiate the next callback to any RAF initiator > while the last frame hasn't been presented, and so > all new RAFs queue for the next frame until you present (and overdraw the > backbuffer). > >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Wed Dec 17 00:39:05 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Wed, 17 Dec 2014 09:39:05 +0100 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: Message-ID: I get the argument you're making against divining compliant behavior. That's why I presented the second suggestion: you define one draw callback for a WebGL context. There will only be one (so it's not a subscribable event). And because it's the sole and only callback for the context, if you don't draw in it you get nothing: it's "do your drawing when it's called, or get black". That'd be a perfectly defined and predictable behavior.
The draw callback would be called by the browser to populate whatever buffer it needs to populate before it does its own drawing. All the issues of events, poof, gone. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben...@ Thu Dec 18 18:50:03 2014 From: ben...@ (Ben Constable) Date: Fri, 19 Dec 2014 02:50:03 +0000 Subject: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 In-Reply-To: <18b422eb536c47089f46c324bd6ae303@UKMAIL101.nvidia.com> References: ,<9EA301E5-9CE8-4EEC-94EB-18EC0F8FDDE4@callow.im> <635e45c2469d406b8ca3a169020468e2@UKMAIL101.nvidia.com>, <798d0073e77e4f15919d1a34ed4779d2@UKMAIL101.nvidia.com>, <18b422eb536c47089f46c324bd6ae303@UKMAIL101.nvidia.com> Message-ID: Well, what I was trying to convey was that the language for constant expressions was not ambiguous and that we had a test gap there. The language for constant index expressions is not quite as conclusive to me (and that is subjective). I think that working on making the spec more explicit, and having tests backing it up, would be something good to do. If that means that you can take the cosine of a constant index expression to make another constant index expression, that seems like the most likely interpretation. Does that answer your question? My doubts were more about how clear the language was than the intent. It seems like the intent was to make constant loop indices into things where the min and max value could be calculated as hard constants. Once the language of the spec is updated and the tests to match, I am fine working to get IE compliant. Giving a timeline to that compliance is the tricky part though - it depends both on the cost of doing the work, and the relative priority of that work internally, and the release schedule that the work would go with. 
Those are too many variables for me to track and give a prediction, even if I was allowed to publicly give answers to all of those things :) From: Olli Etuaho [mailto:oetuaho...@] Sent: Tuesday, December 16, 2014 8:40 AM To: Ben Constable; Mark Callow; Daniel Koch Cc: public webgl Subject: RE: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 I asked the maintainers of shadertoy.com about how indexing typically gets used, and they claimed they've only seen constant expressions as indices. However, I couldn't obtain a database of shader code that could be used to verify if complex index expressions really are non-existent. So it's hard to say what the exact impact of making the spec stricter would be. Ben, in your earlier message you gave an interpretation of the spec where intrinsic (built-in) functions would definitely be allowed in constant-index-expressions: "with loop index variables for well defined loops able to be used as if they were a const variable" This agrees with what ANGLE currently implements. And since the risks from making the spec stricter are unclear, it would seem like a reasonable option to make this interpretation explicit in the WebGL spec and tests. However, your latest message sounds like you have some doubts about this, and it would be in the hands of IE team to make the actual behavior consistent across browsers. So can you confirm whether you still think your initial interpretation is the right one? 
-Olli ________________________________ From: Ben Constable > Sent: Tuesday, December 9, 2014 3:08 AM To: Olli Etuaho; Mark Callow; Daniel Koch Cc: public webgl Subject: RE: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 Well, I wrote most of what we have in our implementation so my interpretation explains what we have, as well as what we will move towards :) I cannot make assertions about when we can fix things but in general, we support having exhaustive tests that support all elements of the specification and we will make every effort to pass said tests. At this point I would think that the specification is very liberal and therefore that there could be content out there relying on the behavior that you describing clamping down on. One case that might be interesting is the output of an intrinsic function when given a constant-index-expression. Is that also considered a constant-index-expression? We can work on the language of the spec but I think that tests document the intent of things far better. From: Olli Etuaho [mailto:oetuaho...@] Sent: Monday, December 8, 2014 11:54 AM To: Ben Constable; Mark Callow; Daniel Koch Cc: public webgl Subject: RE: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 That issue with assigning constant expressions containing calls to built-ins to constant floats is one thing that I was planning to add a test for. Ben, is your interpretation of constant-index-expressions also what IE is moving towards in the WebGL implementation? I could try to come up with more data to see what the impact of disallowing complex index expressions would be for existing shaders. ________________________________ From: Ben Constable > Sent: Monday, December 8, 2014 8:42 PM To: Olli Etuaho; Mark Callow; Daniel Koch Cc: public webgl Subject: RE: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 I don't have a large amount of data on how frequently shaders fail because of this. 
From a cursory glance playing with shaders, Chrome and IE both fail to compile this: const float n = sin(1.0); And according to 10.11, they should. So we at least have a bug here. But we don't have any test failures that I know of here, so I think that we have a test gap as well. Section 5.10 is pretty clear about builtin functions (including which ones are not usable in constant expressions). When it comes to constant-index-expressions, my interpretation is that it is largely the same as constant-expression, but with loop index variables for well defined loops able to be used as if they were a const variable. From: owners-public_webgl...@ [mailto:owners-public_webgl...@] On Behalf Of Olli Etuaho Sent: Monday, December 8, 2014 7:07 AM To: Mark Callow; Daniel Koch Cc: public webgl Subject: RE: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 Mark: the first thing the spec says about constant-index expressions is "Definition: constant-index expressions are a superset of constant-expressions". The unclear part is what the spec means when it says "Expressions composed of both of the above", referring to constant expressions and loop indices. Composed how? Operators or calls are also implicitly needed to compose an expression, but which operations are allowed is not mentioned. It could be interpreted so that as soon as you operate on a loop index, it becomes a constant-index-expression and the intermediate value can't be operated on further to form another constant-index-expression. Alternatively, it could be interpreted so that multiple intermediate operations involving indices and constant expressions are allowed. I agree that this is a slight distraction from WebGL 2 work, but it's always a problem when IE and ANGLE differ in their interpretations of the spec. Of course regressing compatibility with content is a concern, but between browsers the compatibility is already broken.
Maybe the IE team could comment on how frequently they see shaders failing because of this? The most complex indexing expression I found by sampling 20 featured/popular shaders on ShaderToy was [i + 1], so I don't think there would be that much breakage even if the rules are tightened to only allow integer addition and multiplication, for example. It will also help with fixing other problems related to constant expressions if this is clearly specified, since some of the code is shared between processing constant-index-expressions and constant expressions. -Olli ________________________________ From: Mark Callow > Sent: Sunday, December 7, 2014 3:55 PM To: Daniel Koch Cc: Olli Etuaho; public webgl Subject: Re: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 * PGP Signed by an unknown key On Dec 6, 2014, at 12:13 AM, Daniel Koch > wrote: I believe something like e.g. sin(0.0), is actually supposed to be a constant expression as it can be evaluated at compile-time. See issue 10.11 Constant Expressions in the ESSL 1.00 spec. Should built-in functions be allowed in constant expressions? e.g. const float a = sin(1.0); RESOLUTION: Yes, allow built-in functions to be included in constant expressions. Redefinition of built-in functions is now prohibited. User-defined functions are not allowed in constant expressions. So in your examples below, "int(sin(0.0))" and "int(1.0+2.0)" should really be getting the same handling. Implementations are only required to support constant-index expressions, not constant expressions, in for loops, in fragment-shader for loops at least. I thought the spec and grammar was quite clear on the definition of a constant-index-expression, but IANACE. Regards -Mark * Unknown Key * 0x3F001063 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oet...@ Fri Dec 19 06:52:09 2014 From: oet...@ (Olli Etuaho) Date: Fri, 19 Dec 2014 14:52:09 +0000 Subject: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 In-Reply-To: References: ,<9EA301E5-9CE8-4EEC-94EB-18EC0F8FDDE4@callow.im> <635e45c2469d406b8ca3a169020468e2@UKMAIL101.nvidia.com>, <798d0073e77e4f15919d1a34ed4779d2@UKMAIL101.nvidia.com>, <18b422eb536c47089f46c324bd6ae303@UKMAIL101.nvidia.com>, Message-ID: If being able to calculate min and max value as hard constants is a requirement, that actually requires a stricter version of the spec. Consider this expression which only uses integer addition and subtraction at runtime: (int(pow(2.0, 16.0)) + i) - int(pow(2.0, 16.0)). This can result in integer overflow and thus an undefined value with i >= 1 on GLES2.0, so there's already a problem. If integer division is added in, that has undefined rounding, and floating point operation precision isn't defined either. I imagine at least floating point operations would result in problematic corner cases even if only DirectX backends were considered. This seems to suggest that clamping is the only way to implement even very limited forms of indexing so that the indices as evaluated by the hw can be guaranteed to be on the valid range. In this light the option of opening up the spec and practically requiring implementations to do ANGLE-style clamping does seem like the preferable option. However, I took a look at the drawElements test suite, and unfortunately it doesn't have very comprehensive coverage of indexing - the only tests I could find were testing trivial expressions, nothing with math operations at all. So we can't be sure that all devices actually support the expressions that would be required and are currently allowed by ANGLE. There is the option of unrolling loops and evaluating all indices at compile-time, but that's costly to implement and can result in other problems. 
So like Ken already said earlier, this is a lot to sort out - it's hard to make the spec much stricter, since that might break content, but it's also hard to make the spec looser, since some drivers might be based on a stricter interpretation. In both cases it's a lot of effort to find out what the issues actually are. So I think I'll leave this be for now, to be looked at again when there's more time and data. ________________________________ From: Ben Constable Sent: Friday, December 19, 2014 4:50 AM To: Olli Etuaho; Mark Callow; Daniel Koch Cc: public webgl Subject: RE: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 Well, what I was trying to convey was that the language for constant expressions was not ambiguous and that we had a test gap there. The language for constant index expressions is not quite as conclusive to me (and that is subjective). I think that working on making the spec more explicit, and having tests backing it up, would be something good to do. If that means that you can take the cosine of a constant index expression to make another constant index expression, that seems like the most likely interpretation. Does that answer your question? My doubts were more about how clear the language was than the intent. It seems like the intent was to make constant loop indices into things where the min and max value could be calculated as hard constants. Once the language of the spec is updated and the tests to match, I am fine working to get IE compliant. Giving a timeline to that compliance is the tricky part though - it depends both on the cost of doing the work, and the relative priority of that work internally, and the release schedule that the work would go with.
Those are too many variables for me to track and give a prediction, even if I was allowed to publicly give answers to all of those things :) From: Olli Etuaho [mailto:oetuaho...@] Sent: Tuesday, December 16, 2014 8:40 AM To: Ben Constable; Mark Callow; Daniel Koch Cc: public webgl Subject: RE: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0 I asked the maintainers of shadertoy.com about how indexing typically gets used, and they claimed they've only seen constant expressions as indices. However, I couldn't obtain a database of shader code that could be used to verify if complex index expressions really are non-existent. So it's hard to say what the exact impact of making the spec stricter would be. Ben, in your earlier message you gave an interpretation of the spec where intrinsic (built-in) functions would definitely be allowed in constant-index-expressions: "with loop index variables for well defined loops able to be used as if they were a const variable" This agrees with what ANGLE currently implements. And since the risks from making the spec stricter are unclear, it would seem like a reasonable option to make this interpretation explicit in the WebGL spec and tests. However, your latest message sounds like you have some doubts about this, and it would be in the hands of IE team to make the actual behavior consistent across browsers. So can you confirm whether you still think your initial interpretation is the right one? 
-Olli
________________________________
From: Ben Constable
Sent: Tuesday, December 9, 2014 3:08 AM
To: Olli Etuaho; Mark Callow; Daniel Koch
Cc: public webgl
Subject: RE: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0

Well, I wrote most of what we have in our implementation so my interpretation explains what we have, as well as what we will move towards :) I cannot make assertions about when we can fix things but in general, we support having exhaustive tests that cover all elements of the specification and we will make every effort to pass said tests. At this point I would think that the specification is very liberal and therefore that there could be content out there relying on the behavior that you're describing clamping down on.

One case that might be interesting is the output of an intrinsic function when given a constant-index-expression. Is that also considered a constant-index-expression? We can work on the language of the spec but I think that tests document the intent of things far better.

From: Olli Etuaho [mailto:oetuaho...@]
Sent: Monday, December 8, 2014 11:54 AM
To: Ben Constable; Mark Callow; Daniel Koch
Cc: public webgl
Subject: RE: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0

That issue with assigning constant expressions containing calls to built-ins to constant floats is one thing that I was planning to add a test for. Ben, is your interpretation of constant-index-expressions also what IE is moving towards in the WebGL implementation? I could try to come up with more data to see what the impact of disallowing complex index expressions would be for existing shaders.
________________________________
From: Ben Constable
Sent: Monday, December 8, 2014 8:42 PM
To: Olli Etuaho; Mark Callow; Daniel Koch
Cc: public webgl
Subject: RE: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0

I don't have a large amount of data on how frequently shaders fail because of this.
From a cursory glance playing with shaders, Chrome and IE both fail to compile this:

const float n = sin(1.0);

And according to issue 10.11, they should accept it. So we at least have a bug here. But we don't have any test failures that I know of here, so I think that we have a test gap as well. Section 5.10 is pretty clear about built-in functions (including which ones are not usable in constant expressions). When it comes to constant-index-expressions, my interpretation is that it is largely the same as constant-expression, but with loop index variables for well defined loops able to be used as if they were a const variable.

From: owners-public_webgl...@ [mailto:owners-public_webgl...@] On Behalf Of Olli Etuaho
Sent: Monday, December 8, 2014 7:07 AM
To: Mark Callow; Daniel Koch
Cc: public webgl
Subject: RE: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0

Mark: the first thing the spec says about constant-index expressions is "Definition: constant-index expressions are a superset of constant-expressions". The unclear part is what the spec means when it says "Expressions composed of both of the above", referring to constant expressions and loop indices. Composed how? Operators or calls are also implicitly needed to compose an expression, but which operations are allowed is not mentioned. It could be interpreted so that as soon as you operate on a loop index, it becomes a constant-index-expression and the intermediate value can't be operated on further to form another constant-index-expression. Alternatively, it could be interpreted so that multiple intermediate operations involving indices and constant expressions are allowed.

I agree that this is a slight distraction from WebGL 2 work, but it's always a problem when IE and ANGLE differ in their interpretations of the spec. Of course regressing compatibility with content is a concern, but between browsers the compatibility is already broken.
Maybe the IE team could comment on how frequently they see shaders failing because of this? The most complex indexing expression I found by sampling 20 featured/popular shaders on ShaderToy was [i + 1], so I don't think there would be that much breakage even if the rules are tightened to only allow integer addition and multiplication, for example. It will also help with fixing other problems related to constant expressions if this is clearly specified, since some of the code is shared between processing constant-index-expressions and constant expressions.

-Olli
________________________________
From: Mark Callow
Sent: Sunday, December 7, 2014 3:55 PM
To: Daniel Koch
Cc: Olli Etuaho; public webgl
Subject: Re: [Public WebGL] Ambiguity in indexing restriction spec in WebGL 1.0

* PGP Signed by an unknown key

On Dec 6, 2014, at 12:13 AM, Daniel Koch wrote:

I believe something like e.g. sin(0.0) is actually supposed to be a constant expression, as it can be evaluated at compile-time. See issue 10.11 Constant Expressions in the ESSL 1.00 spec:

Should built-in functions be allowed in constant expressions? e.g. const float a = sin(1.0); RESOLUTION: Yes, allow built-in functions to be included in constant expressions. Redefinition of built-in functions is now prohibited. User-defined functions are not allowed in constant expressions.

So in your examples below, "int(sin(0.0))" and "int(1.0+2.0)" should really be getting the same handling. Implementations are only required to support constant-index expressions, not constant expressions, in for loops (in fragment-shader for loops at least). I thought the spec and grammar were quite clear on the definition of a constant-index-expression, but IANACE.

Regards

-Mark

* Unknown Key
* 0x3F001063
-------------- next part --------------
An HTML attachment was scrubbed...
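The two readings of "constant-index-expression" debated in this thread can be sketched with a toy classifier. This is purely a hypothetical illustration (the function names and regexes are invented for the example, and this is not ANGLE's or IE's actual validation logic):

```javascript
// Strict reading: an index may only combine the loop index "i" with
// integer constants and arithmetic, e.g. "i + 1".
function isStrictConstantIndex(expr) {
  // loop index "i", digits, integer operators, parentheses, whitespace
  return /^[i0-9+*()\s-]+$/.test(expr);
}

// Loose reading: additionally admits results of built-in calls on
// otherwise-constant operands, e.g. "int(sin(float(i)))".
function isLooseConstantIndex(expr) {
  return isStrictConstantIndex(expr) ||
         /^int\(\s*\w+\(.*\)\s*\)$/.test(expr);
}

// "[i + 1]" (the most complex form found on ShaderToy) passes under
// both readings; a built-in call only passes under the loose one.
console.log(isStrictConstantIndex("i + 1"));              // true
console.log(isLooseConstantIndex("int(sin(float(i)))"));  // true
console.log(isStrictConstantIndex("int(sin(float(i)))")); // false
```

The point of the sketch is only to show that the two interpretations agree on the simple indices observed in the wild and diverge exactly on expressions that route a loop index through a built-in function.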
URL:

From kbr...@ Fri Dec 19 17:10:59 2014
From: kbr...@ (Kenneth Russell)
Date: Fri, 19 Dec 2014 17:10:59 -0800
Subject: [Public WebGL] WebGL drawing directly to the backbuffer
In-Reply-To: References: Message-ID:

First, this is a really good idea and one which is sorely needed in order for games and VR apps to reach native performance. I'm eager to prototype this and see what it can do for WebVR in particular.

On the form of the API, I think an opt-in context creation attribute is the best approach, rather than specifying a draw callback. There are three basic scenarios for why the browser needs to draw to the screen:

1. The developer's JavaScript code makes a change requiring a redraw (drawing with the WebGLRenderingContext, etc.).
2. Browser-driven operations like CSS animations update the positions of HTML elements on the page.
3. A security warning like "fullscreen was entered" must be drawn.

If it weren't for (3) I'd have said that a draw callback is the way to go. The proposal being discussed here makes (2) no longer update what's on the screen in this new mode; only (1) will do so. However, (3) is required for security purposes, and in these situations the browser can't rely on the user's code to render anything. I'm not sure what the UI would look like for this case -- maybe something like Windows' UAC where it dims the screen and draws an overlay -- but it has to be handled. (There might be heuristics like "if the user's code triggers a redraw within 1/10 of a second, the overlay will be presented on top of it" -- that can be figured out later.)

This has the added benefit that no matter how the developer's current WebGL rendering code is structured, it'll continue to work. It'd be difficult to enforce a restriction like "all WebGL rendering must be done within this particular callback".

-Ken

On Wed, Dec 17, 2014 at 12:39 AM, Florian Bösch wrote:
> I get the argument you're making against divining compliant behavior.
That's why
> I presented the second suggestion: you define one draw callback for a webgl
> context. There will only be one (so it's not a subscribable event). And
> because it's the sole and only draw callback for the context, if you don't
> draw in it, it's "do your drawing when it's called, or get black". That'd be a
> perfectly defined and predictable behavior. The draw callback would be
> called by the browser to populate whatever buffer it needs to populate
> before it does its own drawing. All the issues of events, poof, gone.

From thi...@ Fri Dec 19 17:46:26 2014
From: thi...@ (Thibaut Despoulain)
Date: Fri, 19 Dec 2014 17:46:26 -0800
Subject: [Public WebGL] WebGL drawing directly to the backbuffer
In-Reply-To: References: Message-ID:

I agree with Ken on this. I'll also add that when you're dealing with a complex rendering pipeline, the single callback approach would be too constraining and awkward to work with.

On Dec 19, 2014 5:11 PM, "Kenneth Russell" wrote:
> First, this is a really good idea and one which is sorely needed in
> order for games and VR apps to reach native performance. I'm eager to
> prototype this and see what it can do for WebVR in particular.
>
> On the form of the API, I think an opt-in context creation attribute
> is the best approach, rather than specifying a draw callback. There
> are three basic scenarios for why the browser needs to draw to the
> screen:
>
> 1. The developer's JavaScript code makes a change requiring a redraw
> (drawing with the WebGLRenderingContext, etc.).
> 2. Browser-driven operations like CSS animations update the positions
> of HTML elements on the page.
> 3. A security warning like "fullscreen was entered" must be drawn.
> > If it weren't for (3) I'd have said that a draw callback is the way to > go. The proposal being discussed here makes (2) no longer update > what's on the screen in this new mode; only (1) will do so. However, > (3) is required for security purposes, and in these situations the > browser can't rely on the user's code to render anything. I'm not sure > what the UI would look like for this case -- maybe something like > Windows' UAC where it dims the screen and draws an overlay -- but it > has to be handled. (There might be heuristics like "if the user's code > triggers a redraw within 1/10 of a second, the overlay will be > presented on top of it" -- that can be figured out later.) > > This has the added benefit that no matter how the developer's current > WebGL rendering code is structured, it'll continue to work. It'd be > difficult to enforce a restriction like "all WebGL rendering must be > done within this particular callback". > > -Ken > > > > On Wed, Dec 17, 2014 at 12:39 AM, Florian B?sch wrote: > > I get argument you're making against divining compliant behavior. That's > why > > I presented the second suggestion, you define one draw callback for a > webgl > > context. There will only be one (so it's not a subscribable event). And > > because it's the sole and only call drawback for the context, if you > don't > > draw in it, it's do your drawing when its called, or get black. That'd > be a > > perfectly defined and predictable behavior. The draw callback would be > > called by the browser to populate whatever buffer it needs to populate > > before it does its own drawing. All the issues of events, poof, gone. > -------------- next part -------------- An HTML attachment was scrubbed... 
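The "draw in it when called, or get black" contract Florian describes can be simulated with plain mock objects. This is a sketch of the proposed semantics only: `draw` and `present` are hypothetical names, and nothing here is part of any shipped WebGL API.

```javascript
// Mock of a context obeying the single-draw-callback contract:
// the browser invokes the one registered callback whenever it needs
// to present a frame; if no callback drew anything, the frame is black.
function makeMockContext() {
  return {
    draw: null,            // the one and only draw callback
    frontBuffer: "black",
    // what the "browser" does each time it presents a frame:
    present: function () {
      if (this.draw) {
        this.frontBuffer = this.draw(); // draw now, or...
      } else {
        this.frontBuffer = "black";     // ...get black
      }
      return this.frontBuffer;
    }
  };
}

var mock = makeMockContext();
console.log(mock.present());            // "black": nothing registered
mock.draw = function () { return "frame rendered"; };
console.log(mock.present());            // "frame rendered"
```

The design point being argued is visible in the mock: there is exactly one place where presentation can happen, so the browser always knows whether a frame was produced before it composites.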
URL:

From pya...@ Fri Dec 19 22:42:16 2014
From: pya...@ (Florian Bösch)
Date: Sat, 20 Dec 2014 07:42:16 +0100
Subject: [Public WebGL] WebGL drawing directly to the backbuffer
In-Reply-To: References: Message-ID:

A flag doesn't address the problems of actually making things happen. Let's say you set a flag like canvas.getContext({backbuffer:true}); now what?

1. How's the browser to know when it's safe to composite?
2. How's the WebGL user to know when he has to perform a draw, because the browser would like to composite?

If this flag can automagically do the job, why not always set it to true? Why not make true the default, and let the automagic happen by itself? No, you can't, because deducing the aforementioned conditions cannot be easily done (I believe Greg argued forcefully towards that conclusion).

If you want to make strong guarantees and easily predictable behavior about both 1) and 2) you need to:

1. Have a way to indicate to the WebGL user that he has to draw now (meaning right now, do it or lose it). Callbacks are how you do that.
2. Have a way to determine when the WebGL user has finished drawing. You can do this either:
   1. By ensuring there is only one draw callback, or
   2. By adding an additional method (like, say, gl.frameEnd()) to indicate end of drawing (which leaves you open to ambiguity, because what if you have multiple draws and multiple end of frames and multiple draw callbacks: which one is the one that counts?)

On Sat, Dec 20, 2014 at 2:46 AM, Thibaut Despoulain wrote:
> I agree with Ken on this. I'll also add that when you're dealing with a
> complex rendering pipeline, the single callback approach would be too
> constraining and awkward to work with.
> On Dec 19, 2014 5:11 PM, "Kenneth Russell" wrote:
>
>> First, this is a really good idea and one which is sorely needed in
>> order for games and VR apps to reach native performance. I'm eager to
>> prototype this and see what it can do for WebVR in particular.
>> >> On the form of the API, I think an opt-in context creation attribute >> is the best approach, rather than specifying a draw callback. There >> are three basic scenarios for why the browser needs to draw to the >> screen: >> >> 1. The developer's JavaScript code makes a change requiring a redraw >> (drawing with the WebGLRenderingContext, etc.). >> 2. Browser-driven operations like CSS animations update the positions >> of HTML elements on the page. >> 3. A security warning like "fullscreen was entered" must be drawn. >> >> If it weren't for (3) I'd have said that a draw callback is the way to >> go. The proposal being discussed here makes (2) no longer update >> what's on the screen in this new mode; only (1) will do so. However, >> (3) is required for security purposes, and in these situations the >> browser can't rely on the user's code to render anything. I'm not sure >> what the UI would look like for this case -- maybe something like >> Windows' UAC where it dims the screen and draws an overlay -- but it >> has to be handled. (There might be heuristics like "if the user's code >> triggers a redraw within 1/10 of a second, the overlay will be >> presented on top of it" -- that can be figured out later.) >> >> This has the added benefit that no matter how the developer's current >> WebGL rendering code is structured, it'll continue to work. It'd be >> difficult to enforce a restriction like "all WebGL rendering must be >> done within this particular callback". >> >> -Ken >> >> >> >> On Wed, Dec 17, 2014 at 12:39 AM, Florian B?sch wrote: >> > I get argument you're making against divining compliant behavior. >> That's why >> > I presented the second suggestion, you define one draw callback for a >> webgl >> > context. There will only be one (so it's not a subscribable event). And >> > because it's the sole and only call drawback for the context, if you >> don't >> > draw in it, it's do your drawing when its called, or get black. 
That'd
>> be a
>> > perfectly defined and predictable behavior. The draw callback would be
>> > called by the browser to populate whatever buffer it needs to populate
>> > before it does its own drawing. All the issues of events, poof, gone.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pya...@ Fri Dec 19 22:47:59 2014
From: pya...@ (Florian Bösch)
Date: Sat, 20 Dec 2014 07:47:59 +0100
Subject: [Public WebGL] WebGL drawing directly to the backbuffer
In-Reply-To: References: Message-ID:

On Sat, Dec 20, 2014 at 2:10 AM, Kenneth Russell wrote:
> 3. A security warning like "fullscreen was entered" must be drawn.
>
> If it weren't for (3) I'd have said that a draw callback is the way to
> go.

That's not how it'd work. You're coming to a wrong conclusion. If you have a draw callback, what it means is that:

- There is only one
- It gets called automatically whenever a frame needs to be presented
- The browser is going to draw over the backbuffer regardless of what the draw callback did. If it didn't draw, then the screen is black, with on top of it whatever the browser wants to draw.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pya...@ Fri Dec 19 22:52:03 2014
From: pya...@ (Florian Bösch)
Date: Sat, 20 Dec 2014 07:52:03 +0100
Subject: [Public WebGL] WebGL drawing directly to the backbuffer
In-Reply-To: References: Message-ID:

It's not awkward.

Currently you do:

var draw = function(){
}
requestAnimationFrame(draw);

With a callback you'd do either:

ctx = canvas.getContext({draw: draw}); or
ctx.draw = draw;

That's how every WebGL application out there works that wants to draw every frame. And even those that don't: it's been (forcefully) argued by the community that drawing in mousemove/etc. is not a good idea (there can be, worst case, hundreds of mousemoves between frames). And obviously setTimeout is hopelessly outdated.
And if you're registering multiple requestAnimationFrame callbacks to draw your application, what does that even mean? You can't know the order in which they're called, so... you end up with garbage on screen half of the time? On Sat, Dec 20, 2014 at 2:46 AM, Thibaut Despoulain wrote: > I agree with Ken on this. I'll also add that when you're dealing with a > complex rendering pipeline, the single callback approach would be too > constraining and awkward to work with. > On Dec 19, 2014 5:11 PM, "Kenneth Russell" wrote: > >> First, this is a really good idea and one which is sorely needed in >> order for games and VR apps to reach native performance. I'm eager to >> prototype this and see what it can do for WebVR in particular. >> >> On the form of the API, I think an opt-in context creation attribute >> is the best approach, rather than specifying a draw callback. There >> are three basic scenarios for why the browser needs to draw to the >> screen: >> >> 1. The developer's JavaScript code makes a change requiring a redraw >> (drawing with the WebGLRenderingContext, etc.). >> 2. Browser-driven operations like CSS animations update the positions >> of HTML elements on the page. >> 3. A security warning like "fullscreen was entered" must be drawn. >> >> If it weren't for (3) I'd have said that a draw callback is the way to >> go. The proposal being discussed here makes (2) no longer update >> what's on the screen in this new mode; only (1) will do so. However, >> (3) is required for security purposes, and in these situations the >> browser can't rely on the user's code to render anything. I'm not sure >> what the UI would look like for this case -- maybe something like >> Windows' UAC where it dims the screen and draws an overlay -- but it >> has to be handled. (There might be heuristics like "if the user's code >> triggers a redraw within 1/10 of a second, the overlay will be >> presented on top of it" -- that can be figured out later.) 
>> >> This has the added benefit that no matter how the developer's current >> WebGL rendering code is structured, it'll continue to work. It'd be >> difficult to enforce a restriction like "all WebGL rendering must be >> done within this particular callback". >> >> -Ken >> >> >> >> On Wed, Dec 17, 2014 at 12:39 AM, Florian B?sch wrote: >> > I get argument you're making against divining compliant behavior. >> That's why >> > I presented the second suggestion, you define one draw callback for a >> webgl >> > context. There will only be one (so it's not a subscribable event). And >> > because it's the sole and only call drawback for the context, if you >> don't >> > draw in it, it's do your drawing when its called, or get black. That'd >> be a >> > perfectly defined and predictable behavior. The draw callback would be >> > called by the browser to populate whatever buffer it needs to populate >> > before it does its own drawing. All the issues of events, poof, gone. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Fri Dec 19 23:09:16 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Sat, 20 Dec 2014 08:09:16 +0100 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: Message-ID: And a couple more things: Providing a new semantic when to draw to the frontbuffer (whatever that means back or frontbuffer) will not break existing applications (because they'd need to opt in) By making draws to the frontbuffer (when gl.bindFramebuffer(null)) have no effect unless inside the new draw callback, you can migrate people over to what's a best practice anyway (FFS there's a whole chapter in the new WebGL insights book that teaches you how not to draw in other event handlers). 
If you're so much in love with calling requestAnimationFrame you can always do this:

var rafs = [];
window.requestAnimationFrame = function(callback){ rafs.push(callback); };
ctx.draw = function(){
  for(var i=0; i<rafs.length; i++){ rafs[i](); }
};

On Sat, Dec 20, 2014 at 7:52 AM, Florian Bösch wrote:
> It's not awkward.
>
> Currently you do:
>
> var draw = function(){
> }
> requestAnimationFrame(draw);
>
> With a callback you'd do either:
>
> ctx = canvas.getContext({draw: draw}); or
> ctx.draw = draw;
>
> That's how every WebGL application out there works that wants to draw
> every frame. And even those that don't, it's been (forcefully) argued by
> the community that drawing in mousemove/etc. is not a good idea (there can
> be worst case hundreds of mousemoves between frames). And obviously
> setTimeout is hopelessly outdated.
>
> And if you're registering multiple requestAnimationFrame callbacks to draw
> your application, what does that even mean? You can't know the order in
> which they're called, so... you end up with garbage on screen half of the
> time?
>
> On Sat, Dec 20, 2014 at 2:46 AM, Thibaut Despoulain wrote:
>
>> I agree with Ken on this. I'll also add that when you're dealing with a
>> complex rendering pipeline, the single callback approach would be too
>> constraining and awkward to work with.
>> On Dec 19, 2014 5:11 PM, "Kenneth Russell" wrote:
>>
>>> First, this is a really good idea and one which is sorely needed in
>>> order for games and VR apps to reach native performance. I'm eager to
>>> prototype this and see what it can do for WebVR in particular.
>>>
>>> On the form of the API, I think an opt-in context creation attribute
>>> is the best approach, rather than specifying a draw callback. There
>>> are three basic scenarios for why the browser needs to draw to the
>>> screen:
>>>
>>> 1. The developer's JavaScript code makes a change requiring a redraw
>>> (drawing with the WebGLRenderingContext, etc.).
>>> 2. Browser-driven operations like CSS animations update the positions
>>> of HTML elements on the page.
>>> 3.
A security warning like "fullscreen was entered" must be drawn. >>> >>> If it weren't for (3) I'd have said that a draw callback is the way to >>> go. The proposal being discussed here makes (2) no longer update >>> what's on the screen in this new mode; only (1) will do so. However, >>> (3) is required for security purposes, and in these situations the >>> browser can't rely on the user's code to render anything. I'm not sure >>> what the UI would look like for this case -- maybe something like >>> Windows' UAC where it dims the screen and draws an overlay -- but it >>> has to be handled. (There might be heuristics like "if the user's code >>> triggers a redraw within 1/10 of a second, the overlay will be >>> presented on top of it" -- that can be figured out later.) >>> >>> This has the added benefit that no matter how the developer's current >>> WebGL rendering code is structured, it'll continue to work. It'd be >>> difficult to enforce a restriction like "all WebGL rendering must be >>> done within this particular callback". >>> >>> -Ken >>> >>> >>> >>> On Wed, Dec 17, 2014 at 12:39 AM, Florian B?sch >>> wrote: >>> > I get argument you're making against divining compliant behavior. >>> That's why >>> > I presented the second suggestion, you define one draw callback for a >>> webgl >>> > context. There will only be one (so it's not a subscribable event). And >>> > because it's the sole and only call drawback for the context, if you >>> don't >>> > draw in it, it's do your drawing when its called, or get black. That'd >>> be a >>> > perfectly defined and predictable behavior. The draw callback would be >>> > called by the browser to populate whatever buffer it needs to populate >>> > before it does its own drawing. All the issues of events, poof, gone. >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
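The requestAnimationFrame shim Florian sketches earlier in the thread can be fleshed out into a runnable form. This is a sketch only: `ctx.draw` is the hypothetical single-callback API under discussion, so a plain object stands in for a real context, and the shimmed `requestAnimationFrame` replaces (rather than wraps) the browser one.

```javascript
// Queue rAF callbacks and flush them from the single draw callback.
var rafs = [];
function requestAnimationFrame(callback) { // shim, not the browser API
  rafs.push(callback);
}

var ctx = {}; // stands in for the WebGL context in this sketch
ctx.draw = function () {
  var pending = rafs;
  rafs = [];                     // callbacks re-register for the next frame
  for (var i = 0; i < pending.length; i++) {
    pending[i]();
  }
};

// Usage: an app that wants a frame every tick keeps re-registering,
// exactly as with the real requestAnimationFrame.
var frames = 0;
function tick() { frames++; requestAnimationFrame(tick); }
requestAnimationFrame(tick);

ctx.draw();          // "browser" presents a frame
ctx.draw();          // and another
console.log(frames); // -> 2
```

Note the queue is swapped out before flushing, so a callback that re-registers itself runs once per presented frame rather than looping forever within one frame.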
URL:

From khr...@ Sun Dec 21 02:34:27 2014
From: khr...@ (Mark Callow)
Date: Sun, 21 Dec 2014 19:34:27 +0900
Subject: [Public WebGL] HDR image format
In-Reply-To: <548EF6E2.3020500@dfki.de>
References: <548AE227.5010906@dfki.de> <8C94DCEF-BFFD-40CB-A1BA-F52C0BCF3038@callow.im> <548EF6E2.3020500@dfki.de>
Message-ID:

> On Dec 15, 2014, at 11:57 PM, Kristian Sons wrote:
>
> Hi,
>>
>>> because they use variable bit rates and do global entropy encoding, RLE etc.
>>
>> ASTC supports variable bit rates from 8 bits per texel down to 0.89 bits per texel.
> Yes, ASTC supports a wide choice of bit rates at authoring time, which is great progress. However, once selected, it stays fixed through the whole image, which is one of the differences from e.g. JPEG.

The number of texels per block, and thus the bit rate, remains fixed, but because the endpoint encodings can vary by block and the texels in a block can be partitioned into sets with similar colors, it is possible to get acceptable quality at a lower bit rate than with other schemes, IIUC.

>>
>>> ...
>>>
>>> * HDR Texture Compression Format => Fine with ASTC (wait for support), probably too large to transfer
>>
>> ASTC is already available on some GPUs and I expect support will spread quite quickly. Given that it supports variable bit rates, it is likely possible to produce HDR textures of a not unreasonable size for transmission.
> I didn't find an extension or extension proposal for ASTC. Sorry, I'm a noob: Is there a specific reason for that?
>
The extension spec is at https://www.opengl.org/registry/specs/KHR/texture_compression_astc_hdr.txt . "Specific reason for" what?

Regards

-Mark
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
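The bit-rate range quoted above follows directly from ASTC's fixed 128-bit block size: the rate per texel is simply 128 divided by the block footprint. A quick sketch (block dimensions as defined in the KHR ASTC extension, from 4x4 up to 12x12):

```javascript
// ASTC encodes every block in 128 bits regardless of footprint,
// so bits per texel = 128 / (block width * block height).
function astcBitsPerTexel(blockW, blockH) {
  return 128 / (blockW * blockH);
}

console.log(astcBitsPerTexel(4, 4));               // 8 bits/texel
console.log(astcBitsPerTexel(12, 12).toFixed(2));  // "0.89" bits/texel
```

This is why the bit rate "stays fixed through the whole image": the footprint is chosen once per texture, and only the per-block encoding (endpoints, partitions) varies within it.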
Name: signature.asc
Type: application/pgp-signature
Size: 495 bytes
Desc: Message signed with OpenPGP using GPGMail
URL:

From khr...@ Sun Dec 21 03:07:08 2014
From: khr...@ (Mark Callow)
Date: Sun, 21 Dec 2014 20:07:08 +0900
Subject: [Public WebGL] WebGL drawing directly to the backbuffer
In-Reply-To: References: Message-ID: <2480A041-304A-4ADE-869F-5E9F21E9B26F@callow.im>

> On Dec 20, 2014, at 3:52 PM, Florian Bösch wrote:
>
> It's not awkward.
>
> Currently you do:
>
> var draw = function(){
> }
> requestAnimationFrame(draw);
>
> With a callback you'd do either:
>
> ctx = canvas.getContext({draw: draw}); or
> ctx.draw = draw;
>
I'm struggling to spot any distinction between rAF and the callback you are proposing. Isn't the function passed to rAF a callback? Isn't a WebGL application expected to draw something when that function is called?

If the browser can control the ordering of WebGL and other rendering, which is the intent of the callback, then I don't think any further API is necessary. The browser already has all the information it needs to determine if the circumstances are suitable to allow WebGL rendering to go directly to the back buffer. Doing so is a browser optimization and the exact circumstances probably vary according to browser architecture. If you were to implement a browser using something like NV_path_rendering then you'd probably draw everything directly to the back-buffer and any API providing hints would be unnecessary.

Regards

-Mark
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 495 bytes
Desc: Message signed with OpenPGP using GPGMail
URL:

From pya...@ Sun Dec 21 03:27:17 2014
From: pya...@ (Florian Bösch)
Date: Sun, 21 Dec 2014 12:27:17 +0100
Subject: [Public WebGL] WebGL drawing directly to the backbuffer
In-Reply-To: <2480A041-304A-4ADE-869F-5E9F21E9B26F@callow.im>
References: <2480A041-304A-4ADE-869F-5E9F21E9B26F@callow.im>
Message-ID:

On Sun, Dec 21, 2014 at 12:07 PM, Mark Callow wrote:

> I'm struggling to spot any distinction between rAF and the callback you
> are proposing. Isn't the function passed to rAF a callback? Isn't a WebGL
> application expected to draw something when that function is called?
>
In principle I agree with that statement. I don't see a compelling reason for either a flag or a singular callback, because the browser can "divine" when all drawing of a frame has been done, and it can issue animation frames as needed. But Greg has pointed out that this is messy, and Kenneth agrees.

The main difference between requestAnimationFrame, setTimeout, setInterval, document.addEventListener('foobar', ...) etc. and a singular callback is that it's not an event, but a singular callback. There can only be one on a context, and this one callback *has* to do the drawing when called (and no other callback would have any effect on the front/back buffer). This has the advantage over any other semantic proposed here that it's 100% predictable, it's 100% defined when drawing should happen, and it's 100% compatible with any compositing (or compositing-avoidance) scheme. It's also easy to implement correctly without any ambiguity.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jda...@ Wed Dec 24 09:59:09 2014
From: jda...@ (John Davis)
Date: Wed, 24 Dec 2014 12:59:09 -0500
Subject: [Public WebGL] OES 3.1 Programming Guide
Message-ID:

Anyone know when this is coming out?
(i.e. compute shader support)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pya...@ Wed Dec 24 10:20:49 2014
From: pya...@ (Florian Bösch)
Date: Wed, 24 Dec 2014 19:20:49 +0100
Subject: [Public WebGL] OES 3.1 Programming Guide
In-Reply-To: References: Message-ID:

OpenGL ES 3.1 is specified here: https://www.khronos.org/registry/gles/specs/3.1/es_spec_3.1.pdf . The specification was released in October 2014. WebGL 2.0, which is an implementation of OpenGL ES 3.0, is not yet released and the specification is not yet finalized. Some discussion has ensued on this mailing list as to how fast WebGL 2.1 would arrive after WebGL 2.0, with the more optimistic estimates putting it quickly after good WebGL 2.0 support. I am not that optimistic about WebGL 2.1, because there are a lot of new features in OpenGL ES 3.1.

Regardless of when these API revisions arrive in browsers, it will take a substantial amount of time (years) until sufficient hardware/driver support exists for OpenGL ES 3.0, let alone OpenGL ES 3.1. Regarding the arrival in browsers, it's anybody's guess, but these corner dates should give you an idea:

- OpenGL ES 2.0 specification released 2007
- WebGL 1.0 prototypes as early as 2007
- WebGL 1.0 specification released 2011
- WebGL 1.0 implementations publicly released 2011
- OpenGL ES 3.0 specification released 2012
- WebGL 2.0 prototypes as early as 2012
- WebGL 2.0 specification released 2014
- OpenGL ES 3.1 specification released 2014
- WebGL 2.0 implementations publicly released: TBD, estimated 2015
- WebGL 2.1 specification released: TBD, estimated 2017
- WebGL 2.1 implementations publicly released: TBD, estimated 2018

(Note that WebGL 2.1 is not the canonical name, but a placeholder for when the committee has made up its mind about what to call it)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From pya...@ Mon Dec 29 00:25:42 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Mon, 29 Dec 2014 09:25:42 +0100 Subject: [Public WebGL] early warning: sharp dropoff in google chrome OSX webgl support levels Message-ID: Over the last week or two chrome has shown a sharp dropoff in WebGL support levels for OSX. This is most unfortunate since chrome makes up about 50% of OSX browsers. - Range for google chrome during 2012, 2013 and most of 2014: 90% - 95% - Average for November 2014: 95% - Average for December 2014: 75% - Estimated landing point: 50% (well below Firefox and Opera, and soon below Safari) - Impact on OSX support levels across all browsers: down to 63% from 67%, estimated landing point at 59%. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zmo...@ Mon Dec 29 10:39:16 2014 From: zmo...@ (Zhenyao Mo) Date: Mon, 29 Dec 2014 10:39:16 -0800 Subject: [Public WebGL] early warning: sharp dropoff in google chrome OSX webgl support levels In-Reply-To: References: Message-ID: That chrome's WebGL support dropped on MacOSX is a surprise. We definitely didn't do anything that could lead to this (not to the best of my knowledge). Do you have further information, like the OS version, etc., where this dropoff manifests? On Mon, Dec 29, 2014 at 12:25 AM, Florian Bösch wrote: > Over the last week or two chrome has shown a sharp dropoff in WebGL support > levels for OSX. This is most unfortunate since chrome makes about 50% of OSX > browsers.
From pya...@ Mon Dec 29 11:29:25 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Mon, 29 Dec 2014 20:29:25 +0100 Subject: [Public WebGL] early warning: sharp dropoff in google chrome OSX webgl support levels In-Reply-To: References: Message-ID: Crunching the data on this will take some time, I'll get back with numbers as they become available. On Mon, Dec 29, 2014 at 7:39 PM, Zhenyao Mo wrote: > That's a surprise that chrome's WebGL support dropped on MacOSX. We > definitely didn't do anything that could lead to this (not to the best of my > knowledge). Do you have further information, like the OS version, etc. > where this dropoff manifests? -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Mon Dec 29 13:41:23 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Mon, 29 Dec 2014 22:41:23 +0100 Subject: [Public WebGL] early warning: sharp dropoff in google chrome OSX webgl support levels In-Reply-To: References: Message-ID: I'm currently running analysis on the last 40 days.
Some numbers for the last 12 days are available on this google docs sheet; more detailed analysis will be added there today and tomorrow: https://docs.google.com/spreadsheets/d/1d5jqPkrUxV72-Pwqgw3I7sXE3RK987ZxxOE7USGa1a0/edit?usp=sharing On Mon, Dec 29, 2014 at 8:29 PM, Florian Bösch wrote: > Crunching the data on this will take some time, I'll get back with numbers > as they become available. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Mon Dec 29 14:41:03 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Mon, 29 Dec 2014 23:41:03 +0100 Subject: [Public WebGL] early warning: sharp dropoff in google chrome OSX webgl support levels In-Reply-To: References: Message-ID: The dropoff is not related to the OSX version. It also seems to be more severe than estimated, with a low at around 37%. There is a correlation with the browser version, but it doesn't seem to make any sense. The anomaly starts around December 13th or 14th with the appearance of a chrome version "23".
This chrome version does not seem to support WebGL, but it is rapidly gaining share on that platform, up to about 60% today. Other chrome versions are not affected by the anomaly. This doesn't make sense to me, because 23 would be quite an old version and would have fallen out of use long ago. On Mon, Dec 29, 2014 at 10:41 PM, Florian Bösch wrote: > I'm currently running analysis on the last 40 days. Some numbers for the > last 12 days are available on this google docs sheet: > https://docs.google.com/spreadsheets/d/1d5jqPkrUxV72-Pwqgw3I7sXE3RK987ZxxOE7USGa1a0/edit?usp=sharing -------------- next part -------------- An HTML attachment was scrubbed...
URL: From zmo...@ Mon Dec 29 14:45:38 2014 From: zmo...@ (Zhenyao Mo) Date: Mon, 29 Dec 2014 14:45:38 -0800 Subject: [Public WebGL] early warning: sharp dropoff in google chrome OSX webgl support levels In-Reply-To: References: Message-ID: Thanks for the data. From internal data, for the past 7 days, Chrome stable on MacOSX has a WebGL allowed rate of 92% - 93%, i.e., we don't blacklist it. Of course, WebGL context creation can still fail. I still have no idea why. Has anyone's WebGL support suddenly stopped during the holiday season? On Mon, Dec 29, 2014 at 2:41 PM, Florian Bösch wrote: > The dropoff is not related to the OSX version. It also seems to be more > severe than estimated, with a low at around 37%. There is a correlation with > the browser version, but it doesn't seem to make any sense. The anomaly > starts around December 13th or 14th with the appearance of a chrome version > "23". From khr...@ Mon Dec 29 15:14:20 2014 From: khr...@ (Gregg Tavares) Date: Mon, 29 Dec 2014 15:14:20 -0800 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: <2480A041-304A-4ADE-869F-5E9F21E9B26F@callow.im> References: <2480A041-304A-4ADE-869F-5E9F21E9B26F@callow.im> Message-ID: The reason it won't work without any flags or anything else is that the browser needs to re-composite for many reasons. Example: a canvas overlaid by a div, say <div id="thisoverlaysthecanvas">hello</div>. If in JavaScript I then change the content of that div, document.getElementById("thisoverlaysthecanvas").innerHTML = "foo"; the browser now has to re-composite the page. Normally it does this by first drawing the canvas's texture, as it contains whatever the user previously put in it. Then it draws the div's contents over it. But, if you were drawing directly to the backbuffer, there is no canvas texture.
There would be no way for the browser to re-composite the page, because it doesn't have a copy of the canvas's content. On Sun, Dec 21, 2014 at 3:07 AM, Mark Callow wrote: > > > On Dec 20, 2014, at 3:52 PM, Florian Bösch wrote: > > > > It's not awkward. > > > > Currently you do: > > > > var draw = function(){ > > } > > requestAnimationFrame(draw); > > > > With a callback you'd do either: > > > > ctx = canvas.getContext(draw:draw); or > > ctx.draw = draw; > > > > I'm struggling to spot any distinction between rAF and the callback you > are proposing. Isn't the function passed to rAF a callback? Isn't a WebGL > application expected to draw something when that function is called? > > If the browser can control the ordering of WebGL and other rendering, > which is the intent of the callback, then I don't think any further API is > necessary. The browser already has all the information it needs to > determine if the circumstances are suitable to allow WebGL rendering to go > directly to the back buffer. Doing so is a browser optimization, and the exact > circumstances probably vary according to browser architecture. > > If you were to implement a browser using something like NV_path_rendering > then you'd probably draw everything directly to the back-buffer and any API > providing hints would be unnecessary. > > Regards > > -Mark > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Mon Dec 29 15:54:07 2014 From: khr...@ (Gregg Tavares) Date: Mon, 29 Dec 2014 15:54:07 -0800 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: <2480A041-304A-4ADE-869F-5E9F21E9B26F@callow.im> Message-ID: So another idea, which is a little more generic. I know it's got issues, but just brainstorming / blue-skying: what if we had a `composite` event? Right now you can change content at any time (raf, timeout, interval, mousemove, keydown). The browser composites whenever it wants to. The two are not related.
Even RAF is not directly related to compositing. All RAF is is a request to get called at the refresh rate of the monitor. RAF doesn't cause a composite and is not triggered by a composite either. What if we added `oncomposite`, as in someElement.addEventListener('composite', someFunction, ...); which would get called just before "someElement" is drawn. This would let JavaScript change the content of that element. I know that's problematic, since changing the content of an element might require re-flow, but here are some benefits to WebGL at least. If there was some way for WebGL to request drawing to the backbuffer, you could still support multiple canvases and WebGL wouldn't have to be the bottom element. The browser could draw stuff behind the canvas, then call that canvas's composite events, then draw more elements over the results. This would require a few things off the top of my head: * If the canvas is not the bottom element and is not the full size of the backbuffer, then implementations would need to emulate gl_FragCoord and adjust gl.viewport, gl.scissor, gl.readPixels and gl.copyTexSubImage2D. Of course an implementation would not be required to do this. It's just that ones that do might see faster speeds than ones that don't. In other words, a simple implementation would allow this only when the canvas is the backmost element and is the same size as the backbuffer (i.e. fullscreen). A complex implementation would re-write shaders so that it could allow this in more situations. * If alpha = true, then readPixels and copyTex(Sub)Image2D would generate security exceptions (otherwise you'd be able to read from the backbuffer stuff you don't own). I don't know if that's making any sense. I don't think it would be that hard to re-write shaders with gl_FragCoord. It would eat one uniform on browsers that implement it. I don't know if there are any new ES 3.0 or ES 3.1 features that would have similar issues as gl_FragCoord.
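The compositing order sketched above (draw elements behind the canvas, fire the canvas's composite listeners, draw elements over it) can be simulated outside a browser; `FakeElement` below is a stand-in, since no browser implements a 'composite' event and all of these names are invented for illustration:

```javascript
// Simulation of the hypothetical per-element 'composite' event.
// FakeElement stands in for a canvas so the ordering is observable.
class FakeElement {
  constructor() {
    this.compositeListeners = [];
  }
  addEventListener(type, fn) {
    if (type === 'composite') this.compositeListeners.push(fn);
  }
  fireComposite() {
    for (const fn of this.compositeListeners) fn();
  }
}

const order = [];
const canvas = new FakeElement();
canvas.addEventListener('composite', () => order.push('app draws WebGL frame'));

// One frame of "browser" compositing: elements behind the canvas first,
// then the canvas (its composite listeners fire just before it is drawn),
// then elements stacked over it.
order.push('draw elements behind canvas');
canvas.fireComposite();
order.push('draw elements over canvas');

console.log(order.join(' -> '));
// draw elements behind canvas -> app draws WebGL frame -> draw elements over canvas
```

The only point of the simulation is the ordering: the app's drawing happens exactly once per frame, sandwiched between the browser's own compositing work behind and in front of the canvas.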
As for how you request drawing to the backbuffer, my suggestion would be something like "preserveDrawingBuffer: never" (yea, that's not going to work), or maybe ("clearAlways: true"), which would basically mean that if you don't draw every frame your content is lost immediately. This is so that even if the implementation does NOT let you draw directly to the backbuffer, you at least have consistent behavior across browsers. Problems with this idea off the top of my head: * There's nothing preventing JavaScript from affecting other elements in a `composite` event. In other words, in 'someCanvas.addEventListener('composite', someFunc, ...)', someFunc could change the contents of other HTML elements, and it could issue draw commands on other WebGL contexts. I'm not sure what the rules should be there. * The reflow issue mentioned above. Maybe this could be solved just by saying reflow doesn't happen until a composite is finished. * Is a scissored gl.clear performant on mobile GPUs? For non-fullscreen canvases you'd need to clear the sub-rect of the backbuffer the canvas represents. I believe some GPUs achieve this by drawing a quad, which would kind of remove any perf benefits for the non-fullscreen case. * Compositing is complex, and calling JS while it's happening might be a non-starter for certain browsers that have multi-threaded compositors, etc. Anyway, whether or not this is a viable solution, I like that it's somewhat generic. It supports multiple canvases of any size (not just fullscreen) and they don't have to be the bottom element. One other idea is to let the app opt into this using RAF, where RAF becomes the on-composite event. Change the RAF signature from requestAnimationFrame(callback, opt_element) to requestAnimationFrame(callback, opt_element, opt_callOnComposite) If "true" is passed in as the 3rd parameter, then when opt_element is composited, callback will be called. The advantage to that is no code has to change.
A WebGL app doing things the current way will still work with no changes. Opting in just requires the WebGL app to supply both the opt_element parameter and pass "true" for opt_callOnComposite, and also set some context creation parameter (clearAlways: true). An app that has added those will still work in a browser that implements none of this. -g -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Mon Dec 29 16:02:33 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Tue, 30 Dec 2014 01:02:33 +0100 Subject: [Public WebGL] early warning: sharp dropoff in google chrome OSX webgl support levels In-Reply-To: References: Message-ID: I'm not sure at this point whether what I'm seeing is real or some error in my collection/analysis that's somehow confined just to OSX (I don't see the dropoff effect for other platforms). What the numbers tell me is that some mysterious chrome version 23 is rapidly replacing all other chromes on OSX. I could not observe this on my macbook, and Jeff Gilbert couldn't either. It'd be helpful if this effect could be corroborated or disproven by others on their webserver logs, by grepping out OSX and chrome and the version and seeing if something odd shows up. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Mon Dec 29 16:17:30 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Tue, 30 Dec 2014 01:17:30 +0100 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: <2480A041-304A-4ADE-869F-5E9F21E9B26F@callow.im> Message-ID: Those ideas are basically synonymous with a canvas.draw callback, with the difference that they go through eventing. And so there's no restriction against having multiple composite event listeners or multiple RAFs. Which sounds to me like it's ye olde "events are messy and we can't make it work automatically" story.
On any account, the browser would be required not to issue a composite/draw/event/specialRAF/whatever call more than once a frame. Otherwise you'll end up in a situation where draw is called many times between frames, and your performance goes straight to the crapper. It also doesn't address what happens if draw commands are emitted by JS to the backbuffer when it's not inside this special composite/draw/event/specialRAF/whatever call, because that's undefined (the browser can't let it draw over the backbuffer willy-nilly). Hence, as I've mentioned, draw commands WebGL would emit that affect the backbuffer, when this behavior is enabled and the callback is not running, need to be disallowed and need to throw an exception informing the user they're doing something they shouldn't. Likewise the composite/whatever callback needs to prohibit DOM modifications, of course. On Tue, Dec 30, 2014 at 12:54 AM, Gregg Tavares wrote: > So another idea which is a little more generic. I know it's got issues but > just brainstorming / blue skying > > what if we had a `composite` event? -------------- next part -------------- An HTML attachment was scrubbed...
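The restriction Florian asks for — backbuffer-affecting draw commands throwing when issued outside the designated draw callback — could be simulated roughly like this; `FakeContext` and its method names are invented for illustration and are not a real API:

```javascript
// Sketch of "draws are only legal inside the draw callback" semantics.
// FakeContext stands in for a direct-to-backbuffer WebGL context.
class FakeContext {
  constructor(drawCallback) {
    this.drawCallback = drawCallback;
    this.inDrawCallback = false;
    this.commands = [];
  }
  clear() {
    if (!this.inDrawCallback) {
      throw new Error('backbuffer draw outside the draw callback');
    }
    this.commands.push('clear');
  }
  // The "browser" invokes this exactly once per composited frame.
  runFrame() {
    this.inDrawCallback = true;
    try {
      this.drawCallback(this);
    } finally {
      this.inDrawCallback = false;
    }
  }
}

const ctx = new FakeContext((gl) => gl.clear());
ctx.runFrame();      // legal: drawing happens inside the callback
let threw = false;
try {
  ctx.clear();       // illegal: outside the callback
} catch (e) {
  threw = true;
}
console.log(ctx.commands.length, threw); // 1 true
```

The flag-around-the-callback pattern is the whole idea: the context knows whether the compositor is currently asking it to draw, and rejects everything else.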
URL: From pya...@ Tue Dec 30 01:03:30 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Tue, 30 Dec 2014 10:03:30 +0100 Subject: [Public WebGL] early warning: sharp dropoff in google chrome OSX webgl support levels In-Reply-To: References: Message-ID: I've run some more analysis and found a further correlation for the source of the anomaly. - The chrome version causing it is 23.0.1271.97 - Out of 11 referrers that refer traffic to me for that version, only one shows an unusual rise of this version (I've contacted this referrer to analyze the problem with them) To summarize, the problem is confined to OSX, a specific (but popular) referrer and a specific (probably bogus) chrome version. More details will be posted when they become available. On Tue, Dec 30, 2014 at 1:02 AM, Florian Bösch wrote: > I'm not sure at this point what I'm seeing is real or some error in my > collection/analysis that's somehow confined just to OSX (I don't see the > dropoff effect for other platforms). -------------- next part -------------- An HTML attachment was scrubbed...
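For anyone wanting to corroborate this against their own webserver logs, a rough version-tallying sketch in the spirit of the grep suggested earlier; the user-agent strings below are illustrative samples, not real log data:

```javascript
// Tally Chrome major versions seen in Mac OS X user-agent strings.
function chromeMacMajor(ua) {
  if (!ua.includes('Mac OS X')) return null;
  const m = ua.match(/Chrome\/(\d+)\./);
  return m ? m[1] : null;
}

const sampleLog = [
  'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36',
  'Mozilla/5.0 (Macintosh); Intel Mac OS X 10_7_5) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11',
  'Mozilla/5.0 (Macintosh); Intel Mac OS X 10_7_5) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11',
];

const counts = {};
for (const ua of sampleLog) {
  const v = chromeMacMajor(ua);
  if (v !== null) counts[v] = (counts[v] || 0) + 1;
}
console.log(counts); // { '23': 2, '39': 1 }
```

An abnormal spike for a single ancient major version, as in the sample here, is the signature being discussed in this thread.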
URL: From pya...@ Tue Dec 30 01:29:39 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Tue, 30 Dec 2014 10:29:39 +0100 Subject: [Public WebGL] early warning: sharp dropoff in google chrome OSX webgl support levels In-Reply-To: References: Message-ID: Further inquiry into the data has shown that the user-agent for that chrome version is malformed: Mozilla/5.0 (Macintosh*)*; Intel Mac OS X 10_7_5) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.97 Safari/537.11 And that most requests bearing the bogus data come from a narrow range of IPv4 subnets (always leading with the A-net 54). The reported OSX version mostly seems to be 10_7_5, and a lot of capabilities are always identically missing (fullscreen, pointerlock, gamepads, etc.). On Tue, Dec 30, 2014 at 10:03 AM, Florian Bösch wrote: > I've run some more analysis and found a further correlation for the source > of the anomaly. > - The chrome version causing it is 23.0.1271.97 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Tue Dec 30 01:32:34 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Tue, 30 Dec 2014 10:32:34 +0100 Subject: [Public WebGL] early warning: sharp dropoff in google chrome OSX webgl support levels In-Reply-To: References: Message-ID: A reverse lookup on the addresses reveals that they all seem to originate from amazonaws.com. I believe the mystery is lifting; it seems somebody is using amazonaws.com to crawl this referrer's site aggressively. On Tue, Dec 30, 2014 at 10:29 AM, Florian Bösch wrote: > Further inquiry into the data has shown that the user-agent for that > chrome version is malformed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Tue Dec 30 02:23:51 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Tue, 30 Dec 2014 11:23:51 +0100 Subject: [Public WebGL] early warning: sharp dropoff in google chrome OSX webgl support levels In-Reply-To: References: Message-ID: I've added an IP-range filtering mechanism to the http://webglstats.com/ analysis that checks against the IP-ranges of amazon-aws, https://ip-ranges.amazonaws.com/ip-ranges.json . This should eliminate the anomaly in the new year. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Tue Dec 30 02:29:06 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Tue, 30 Dec 2014 11:29:06 +0100 Subject: [Public WebGL] early warning: sharp dropoff in google chrome OSX webgl support levels In-Reply-To: References: Message-ID: It'd be useful if google and microsoft could publish ip-range lists for their compute services at a canonical up-to-date URL too. On Tue, Dec 30, 2014 at 11:23 AM, Florian Bösch wrote: > I've added an IP-range filtering mechanism to the http://webglstats.com/ > analysis that checks against the IP-ranges of amazon-aws, > https://ip-ranges.amazonaws.com/ip-ranges.json . This should eliminate > the anomaly in the new year. -------------- next part -------------- An HTML attachment was scrubbed...
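The kind of IP-range filter described above boils down to a CIDR membership check against the published prefix list; the two prefixes below are illustrative placeholders, not AWS's actual ranges from ip-ranges.json:

```javascript
// Check whether an IPv4 address falls inside any of a set of CIDR prefixes,
// as a log filter against cloud-provider traffic would do.
function ipToInt(ip) {
  return ip.split('.').reduce((n, o) => (n << 8) + parseInt(o, 10), 0) >>> 0;
}

function inCidr(ip, cidr) {
  const [base, bits] = cidr.split('/');
  const mask = bits === '0' ? 0 : (~0 << (32 - Number(bits))) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}

function isCloudTraffic(ip, prefixes) {
  return prefixes.some((p) => inCidr(ip, p));
}

// Illustrative prefixes only; a real filter would load the provider's
// published list (e.g. the "prefixes" array in AWS's ip-ranges.json).
const awsPrefixes = ['54.64.0.0/11', '52.0.0.0/10'];

console.log(isCloudTraffic('54.80.12.34', awsPrefixes)); // true
console.log(isCloudTraffic('8.8.8.8', awsPrefixes));     // false
```

Visits matching the list would simply be excluded before aggregation, which is why blocked crawler traffic no longer drags the support percentages down.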
URL: From pya...@ Tue Dec 30 04:01:53 2014 From: pya...@ (Florian Bösch) Date: Tue, 30 Dec 2014 13:01:53 +0100 Subject: [Public WebGL] early warning: sharp dropoff in google chrome OSX webgl support levels In-Reply-To: References: Message-ID: I'm regenerating the statistics starting from December 11th with the ip-block in place. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Tue Dec 30 04:56:56 2014 From: pya...@ (Florian Bösch) Date: Tue, 30 Dec 2014 13:56:56 +0100 Subject: [Public WebGL] early warning: sharp dropoff in google chrome OSX webgl support levels In-Reply-To: References: Message-ID: The statistics are regenerated and the anomaly is now no longer present. Changes resulting from the filtering are: - A total of 100'000 fewer visits since the 11th - All of those were categorized as OSX/Chrome, which is now down from 52% of that platform's share to 46% (but it's still the most used browser on OSX) - Overall WebGL activation success is up 1.1% from 82.6% to 83.7%. - OSX WebGL activation success is up 8% from 63% to 71%. - Other platforms and browsers were not affected by the anomaly. Sorry for the hubbub around this. I'm hopeful the measures I put in place to block cloud provider IP-ranges will help prevent a recurrence of such an event. These measures are on top of other already existing mechanisms to filter out bots, high single-source traffic and erroneous-looking traffic (malformed JSON, UAs, parameters, etc.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From baj...@ Tue Dec 30 08:56:41 2014 From: baj...@ (Brandon Jones) Date: Tue, 30 Dec 2014 16:56:41 +0000 Subject: [Public WebGL] early warning: sharp dropoff in google chrome OSX webgl support levels References: Message-ID: Thanks for digging deep into this! It's good to know that this is truly anomalous data and not indicative of real-world regressions.
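[Editor's note: the cloud-IP filter described in the messages above can be sketched roughly as follows. This is an illustrative reconstruction, not the actual webglstats.com code; the sample prefixes mirror the shape of the published ip-ranges.json, and all function names here are invented for the example.]

```javascript
// Sketch of a cloud-provider IP filter of the kind described above.
// The prefix list mimics the structure of
// https://ip-ranges.amazonaws.com/ip-ranges.json (sample entries only).
const awsRanges = {
  prefixes: [
    { ip_prefix: '54.64.0.0/13' },
    { ip_prefix: '54.72.0.0/13' }
  ]
};

// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer.
function ipToInt(ip) {
  return ip.split('.')
           .reduce((acc, octet) => (acc << 8) | parseInt(octet, 10), 0) >>> 0;
}

// True if `ip` falls inside the CIDR block `prefix` (e.g. '54.64.0.0/13').
function inCidr(ip, prefix) {
  const [base, bits] = prefix.split('/');
  const mask = bits === '0' ? 0 : (~0 << (32 - Number(bits))) >>> 0;
  return ((ipToInt(ip) & mask) >>> 0) === ((ipToInt(base) & mask) >>> 0);
}

// A visit is discarded from the statistics if its source address
// belongs to any published cloud-provider range.
function isCloudVisitor(ip) {
  return awsRanges.prefixes.some(p => inCidr(ip, p.ip_prefix));
}
```

In practice the prefix list would be refreshed periodically from the provider's URL rather than hard-coded, which is presumably why a canonical, up-to-date URL per provider (as requested of Google and Microsoft above) matters.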
Even with that being the case, though, I'm glad you brought it up early. Imperfect though they may be, your stats remain one of the best signals we have for how well WebGL is working outside of a lab, and I'd rather have a few false positive scares than miss a real issue. --Brandon On Tue Dec 30 2014 at 4:57:29 AM Florian Bösch wrote: > The statistics are regenerated and the anomaly is now no longer present. > > Changes resulting from the filtering are: > > - A total of 100'000 fewer visits since the 11th > - All of those were categorized as OSX/Chrome, which is now down from > 52% of that platform's share to 46% (but it's still the most used browser on > OSX) > - Overall WebGL activation success is up 1.1% from 82.6% to 83.7%. > - OSX WebGL activation success is up 8% from 63% to 71%. > - Other platforms and browsers were not affected by the anomaly. > > Sorry for the hubbub around this. I'm hopeful the measures I put in place > to block cloud provider IP-ranges will help prevent a recurrence of such an > event. These measures are on top of other already existing mechanisms to > filter out bots, high single-source traffic and erroneous-looking traffic > (malformed JSON, UAs, parameters, etc.) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zmo...@ Tue Dec 30 10:20:53 2014 From: zmo...@ (Zhenyao Mo) Date: Tue, 30 Dec 2014 10:20:53 -0800 Subject: [Public WebGL] early warning: sharp dropoff in google chrome OSX webgl support levels In-Reply-To: References: Message-ID: Thanks for getting to the bottom of this, although 71% on OSX is still lower than what I expected. On Tue, Dec 30, 2014 at 4:56 AM, Florian Bösch wrote: > The statistics are regenerated and the anomaly is now no longer present.
> > Changes resulting from the filtering are: > > A total of 100'000 fewer visits since the 11th > All of those were categorized as OSX/Chrome, which is now down from 52% of > that platform's share to 46% (but it's still the most used browser on OSX) > Overall WebGL activation success is up 1.1% from 82.6% to 83.7%. > OSX WebGL activation success is up 8% from 63% to 71%. > Other platforms and browsers were not affected by the anomaly. > > Sorry for the hubbub around this. I'm hopeful the measures I put in place to > block cloud provider IP-ranges will help prevent a recurrence of such an > event. These measures are on top of other already existing mechanisms to > filter out bots, high single-source traffic and erroneous-looking traffic > (malformed JSON, UAs, parameters, etc.) ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Tue Dec 30 10:29:25 2014 From: pya...@ (Florian Bösch) Date: Tue, 30 Dec 2014 19:29:25 +0100 Subject: [Public WebGL] early warning: sharp dropoff in google chrome OSX webgl support levels In-Reply-To: References: Message-ID: It's 71% OSX overall, 95.6% for Chrome. On Tue, Dec 30, 2014 at 7:20 PM, Zhenyao Mo wrote: > Thanks for getting to the bottom of this, although 71% on OSX is still > lower than what I expected. > > On Tue, Dec 30, 2014 at 4:56 AM, Florian Bösch wrote: > > The statistics are regenerated and the anomaly is now no longer present.
> > > > Changes resulting from the filtering are: > > > > A total of 100'000 fewer visits since the 11th > > All of those were categorized as OSX/Chrome, which is now down from 52% > of > > that platform's share to 46% (but it's still the most used browser on OSX) > > Overall WebGL activation success is up 1.1% from 82.6% to 83.7%. > > OSX WebGL activation success is up 8% from 63% to 71%. > > Other platforms and browsers were not affected by the anomaly. > > > > Sorry for the hubbub around this. I'm hopeful the measures I put in > place to > > block cloud provider IP-ranges will help prevent a recurrence of such an > > event. These measures are on top of other already existing mechanisms to > > filter out bots, high single-source traffic and erroneous-looking > traffic > > (malformed JSON, UAs, parameters, etc.) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zmo...@ Tue Dec 30 10:34:30 2014 From: zmo...@ (Zhenyao Mo) Date: Tue, 30 Dec 2014 10:34:30 -0800 Subject: [Public WebGL] early warning: sharp dropoff in google chrome OSX webgl support levels In-Reply-To: References: Message-ID: Ah, thanks for the clarification. That matches our expectation. Cheers On Tue, Dec 30, 2014 at 10:29 AM, Florian Bösch wrote: > It's 71% OSX overall, 95.6% for Chrome. > > On Tue, Dec 30, 2014 at 7:20 PM, Zhenyao Mo wrote: >> >> Thanks for getting to the bottom of this, although 71% on OSX is still >> lower than what I expected. >> >> On Tue, Dec 30, 2014 at 4:56 AM, Florian Bösch wrote: >> > The statistics are regenerated and the anomaly is now no longer present. >> > >> > Changes resulting from the filtering are: >> > >> > A total of 100'000 fewer visits since the 11th >> > All of those were categorized as OSX/Chrome, which is now down from 52% >> > of >> > that platform's share to 46% (but it's still the most used browser on OSX) >> > Overall WebGL activation success is up 1.1% from 82.6% to 83.7%.
>> > OSX WebGL activation success is up 8% from 63% to 71%. >> > Other platforms and browsers were not affected by the anomaly. >> > >> > Sorry for the hubbub around this. I'm hopeful the measures I put in >> > place to >> > block cloud provider IP-ranges will help prevent a recurrence of such an >> > event. These measures are on top of other already existing mechanisms to >> > filter out bots, high single-source traffic and erroneous-looking >> > traffic >> > (malformed JSON, UAs, parameters, etc.) From khr...@ Tue Dec 30 16:39:32 2014 From: khr...@ (Gregg Tavares) Date: Tue, 30 Dec 2014 16:39:32 -0800 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: <2480A041-304A-4ADE-869F-5E9F21E9B26F@callow.im> Message-ID: On Mon, Dec 29, 2014 at 4:17 PM, Florian Bösch wrote: > Those ideas are basically synonymous with a canvas.draw callback, with the > difference that they go through eventing. And so there's no restriction of > having multiple composite event listeners or multiple RAFs. Which sounds to > me like it's ye olde "events are messy and we can't make it work > automatically" story. > The problem with the canvas callback is it's not generic. It can't be used for anything but WebGL. You can't have more than one, which makes it not usable by libraries. Also any app using it will stop running on a browser that doesn't support it without some convoluted logic. > > On any account, the browser would be required not to issue a > composit/draw/event/specialRAF/whatever call more than once a frame. > Otherwise you'll end up in a situation where draw is called many times > between frames, and your performance goes straight to the crappers.
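[Editor's note: the "array/list of function objects" listener bookkeeping discussed in this exchange can be sketched as below. This is a hypothetical illustration of the proposed `composite` event's dispatch cost, not a real or proposed browser API; the class and method names are invented for the example.]

```javascript
// Minimal sketch of composite-event listener bookkeeping: the per-composite
// cost is just walking an array of callbacks, as argued in the reply below.
class CompositeTarget {
  constructor() {
    this.listeners = [];
  }
  addEventListener(type, fn) {
    if (type === 'composite') this.listeners.push(fn);
  }
  removeEventListener(type, fn) {
    if (type === 'composite') {
      this.listeners = this.listeners.filter(l => l !== fn);
    }
  }
  // Called by the (hypothetical) compositor just before this element is drawn.
  dispatchComposite() {
    for (const fn of this.listeners) fn();
  }
}

// Usage: an app's main draw plus an overlay (e.g. a stats widget) both
// register; the compositor fires them in registration order each frame.
const canvasTarget = new CompositeTarget();
canvasTarget.addEventListener('composite', () => { /* draw scene */ });
canvasTarget.addEventListener('composite', () => { /* draw stats */ });
```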
> I think you're overestimating the overhead of a callback. In a simple implementation it's just an array/list of function objects to call. And besides, no one is going to put thousands of composite events in their app and expect any kind of perf. But I can certainly imagine a few of them. For example a lib like stats.js > > It also doesn't address what'll happen if draw commands are emitted by JS > to the backbuffer when it's not in this > special/composit/draw/event/specialextraRAF/whatever, because that's > undefined (the browser can't let it draw over the backbuffer willy nilly). > Hence, as I've mentioned, draw commands WebGL would emit that affect the > backbuffer, when this behavior is enabled, need to be defined as undefined > and need to throw an exception informing the user they're doing something > they shouldn't. > Let's explore that. Maybe they don't? Maybe this idea that each HTML element is a world unto itself should be discarded in this mode. You basically just get a secure way to render to the backbuffer bounded by the canvas size. Because gl.scissor, gl.viewport are overridden and gl.readPixels, gl.copyTexImage2D, toDataURL are checked, you can't access stuff you shouldn't be able to. So, maybe you should just be able to render anytime. Note I'm just thinking out loud. I'm not suggesting this is the right way to do things, just exploring the idea. What would be the drawback? Sure you could splat pixels on top of other elements, but it seems like they'd still be rendered consistently across browsers. Another idea: if you set "clearAlways: true" then the behavior could be defined as the canvas being effectively cleared on the first draw call in any event. The current spec says when "preserveDrawingBuffer" is false the canvas is cleared after a composite, but that could be modified so that if "clearAlways" is true the canvas is cleared on the first draw call in any event.
That would effectively prevent drawing to the same canvas across multiple events in this mode because you'd be discarding the results of everything but the last event for a particular canvas, so there'd be no point. > Likewise the compositwhatever needs to prohibit DOM modifications of > course. > I don't believe it needs to prohibit DOM modifications. It just needs to specify what happens when they're modified. Possibly they don't take effect until the next composite. Anyway, I think it's worth discussing the possibilities and options, pluses and minuses. -g > > On Tue, Dec 30, 2014 at 12:54 AM, Gregg Tavares > wrote: > >> So another idea which is a little more generic. I know it's got issues >> but just brainstorming / blue-skying >> >> what if we had a `composite` event? >> >> Right now you can change content at any time (raf, timeout, interval, >> mousemove, keydown). The browser composites whenever it wants to. The 2 are >> not related. Even RAF is not directly related to compositing. All RAF is is >> a request to get called at the refresh rate of the monitor. RAF doesn't >> cause a composite and is not triggered by a composite either. >> >> What if we added `oncomposite` as in >> >> someElement.addEventListener('composite', someFunction, ...); >> >> Which would get called just before "someElement" is drawn. This would let >> JavaScript change the content of that element. I know that's problematic >> since changing the content of an element might require re-flow but here are >> some benefits to WebGL at least >> >> If there was some way for WebGL to request drawing to the backbuffer you >> could still support multiple canvases and WebGL wouldn't have to be the >> bottom element. The browser could draw stuff behind the canvas, then call >> that canvas's composite events, then draw more elements over the results.
>> >> This would require a few things off the top of my head >> >> * If the canvas is not the bottom element and is not the full size of >> the backbuffer then implementations would need to emulate gl_FragCoord and >> adjust gl.viewport and gl.scissor, gl.readPixels and gl.copyTexSubImage2D >> >> Of course an implementation would not be required to do this. It's just >> that ones that did might see faster speeds than ones that don't. In other words >> a simple implementation would allow this only when the canvas is the >> backmost element and is the same size as the backbuffer (ie, fullscreen). >> A complex implementation would re-write shaders so that it could allow this >> in more situations. >> >> * If alpha = true then readPixels, copyTex(Sub)Image2D, would generate >> security exceptions (otherwise you'd be able to read from the backbuffer >> stuff you don't own) >> >> I don't know if that's making any sense. I don't think it would be that >> hard to re-write shaders with gl_FragCoord. It would eat one uniform on >> browsers that implement it. I don't know if there are any new ES 3.0 or ES >> 3.1 features that would have similar issues as gl_FragCoord. >> >> As for how you request drawing to the backbuffer my suggestion would be >> something like "preserveDrawingBuffer: never" (yea, that's not going to >> work), maybe ("clearAlways: true") which would basically mean that >> effectively if you don't draw every frame your content is lost immediately. >> This is so even if the implementation does NOT let you draw directly to the >> backbuffer you at least have consistent behavior across browsers. >> >> Problems with this idea off the top of my head. >> >> * There's nothing preventing JavaScript from affecting other elements in >> a `composite` event. >> >> In other words `someCanvas.addEventListener('composite', someFunc, ...)` >> someFunc could change the contents of other HTML elements, it could issue >> draw commands on other WebGL contexts.
I'm not sure what the rules should >> be there. >> >> * The reflow issue mentioned above >> >> Maybe this could be solved just by saying reflow doesn't happen until a >> composite is finished. >> >> * Is a scissored gl.clear performant on mobile GPUs? >> >> For non-fullscreen canvases you'd need to clear the sub rect of the >> backbuffer the canvas represents. I believe some GPUs achieve this by >> drawing a quad, which would kind of remove any perf benefits for the >> non-fullscreen case. >> >> * compositing is complex and calling JS while it's happening might be a >> non-starter for certain browsers that have multi-threaded compositors etc.? >> >> Anyway, whether or not this is a viable solution, I like that it's >> somewhat generic. It supports multiple canvases of any size (not just >> fullscreen) and they don't have to be the bottom element. >> >> >> >> One other idea is to let the app opt into this using RAF where RAF >> becomes the on composite event. Change the RAF signature from >> >> requestAnimationFrame(callback, opt_element) >> >> to >> >> requestAnimationFrame(callback, opt_element, opt_callOnComposite) >> >> If "true" is passed in as the 3rd parameter then when opt_element is >> composited, callback will be called. >> >> The advantage to that is no code has to change. A WebGL app doing things >> the current way will still work with no changes. Opting in just requires >> the WebGL app to supply both the opt_element parameter and pass "true" for >> opt_callOnComposite and also set some context creation parameter >> (clearAlways: true). An app that has added those will still work in a >> browser that implements none of this.
URL: From pya...@ Wed Dec 31 00:58:47 2014 From: pya...@ (Florian Bösch) Date: Wed, 31 Dec 2014 09:58:47 +0100 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: <2480A041-304A-4ADE-869F-5E9F21E9B26F@callow.im> Message-ID: On Wed, Dec 31, 2014 at 1:39 AM, Gregg Tavares wrote: > > On Mon, Dec 29, 2014 at 4:17 PM, Florian Bösch wrote: > >> Those ideas are basically synonymous with a canvas.draw callback, with >> the difference that they go through eventing. And so there's no restriction >> of having multiple composite event listeners or multiple RAFs. Which sounds >> to me like it's ye olde "events are messy and we can't make it work >> automatically" story. >> > > The problem with the canvas callback is it's not generic. It can't be used > for anything but WebGL. > Drawing to the backbuffer is something you can only do if you have a canvas. Having a composit/draw/whatever callback on canvas will work with any context you'd derive from canvas (2D or WebGL context). > You can't have more than one, which makes it not usable by libraries. > It's not any worse than requestAnimationFrame, which you cannot safely do multiple times for a single canvas either. Every library in existence exposes a method you call to make it perform its drawing, so a user can hook it into their drawing loop in an orderly way. > Also any app using it will stop running on a browser that doesn't support > it without some convoluted logic. > It's opt in, nothing will stop running. if ('draw' in canvas) { canvas.draw = myDraw; } else { var draw = function() { myDraw(); requestAnimationFrame(draw); }; requestAnimationFrame(draw); } > I think you're overestimating the overhead of a callback. In a simple > implementation it's just an array/list of function objects to call. And > besides, no one is going to put thousands of composite events in their app > and expect any kind of perf. But I can certainly imagine a few of them.
For > example a lib like stats.js > As I've pointed out now multiple times, it doesn't make sense to have multiple RAFs to draw to a canvas (an argument you rejected, but now you're contradicting yourself). And it's not about people attaching a zillion events, it's about: raf, composit, composit, composit, composit, composit, raf, raf, composit, composit, composit, composit. That's something that should not happen. Because if you draw in composit, and composit is issued multiple times between rafs, like with mouse events, or the newfangled animation stuff from W3C, you'll just get incredibly bad performance because suddenly you have 2x, 3x, 4x or Nx the load per frame just because you touched the DOM or got a mouse-hover. > Let's explore that. Maybe they don't? Maybe this idea that each HTML > element is a world unto itself should be discarded in this mode. You > basically just get a secure way to render to the backbuffer bounded by the > canvas size. Because gl.scissor, gl.viewport are overridden and > gl.readPixels, gl.copyTexImage2D, toDataURL are checked, you can't access > stuff you shouldn't be able to. So, maybe you should just be able to render > anytime. Note I'm just thinking out loud. I'm not suggesting this is the > right way to do things, just exploring the idea. What would be the > drawback? Sure you could splat pixels on top of other elements, but it seems > like they'd still be rendered consistently across browsers. > > Another idea: if you set "clearAlways: true" then the behavior could be > defined as the canvas being effectively cleared on the first draw call in > any event. The current spec says when "preserveDrawingBuffer" is false the > canvas is cleared after a composite, but that could be modified so that if > "clearAlways" is true the canvas is cleared on the first draw call in any > event.
That would effectively prevent drawing to the same canvas across > multiple events in this mode because you'd be discarding the results of > everything but the last event for a particular canvas, so there'd be no > point. > > > >> Likewise the compositwhatever needs to prohibit DOM modifications of >> course. >> > > I don't believe it needs to prohibit DOM modifications. It just needs to > specify what happens when they're modified. Possibly they don't take effect > until the next composite. > Now I'm confused. You argue that you can let JS do anything, without restriction, use any event, without restriction, and draw anytime, and you can *still* make it work with the backbuffer? Well, why don't we just do that then? Skip this whole flag/extra-event/callback/whatever, and make it the bloody default behavior? -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Wed Dec 31 01:34:30 2014 From: pya...@ (Florian Bösch) Date: Wed, 31 Dec 2014 10:34:30 +0100 Subject: [Public WebGL] WebGL drawing directly to the backbuffer In-Reply-To: References: <2480A041-304A-4ADE-869F-5E9F21E9B26F@callow.im> Message-ID: Let's have a more fundamental discussion here. requestAnimationFrame is a bad API. - It causes trouble for use cases (such as the one we're discussing) where the browser needs to orderly kick off drawing - It breaks if you move the canvas to another window (like a popup) and you now need to detach your drawing from one window's raf, and switch to another raf (yes, I've run into that problem) - It's inconvenient to operate, if you want to repeatedly have it called (because you need to call it again every frame). - It's got an unspecified behavior of what happens if you run multiple at the same time, and use them to draw to the same canvas (in what order do they get called?)
- It gets called, even if the canvas is not in the current viewport or is obscured by another element completely (because it's not attached to the canvas), which necessitates yet another API (the Visibility API) so a user can figure that out. This is gunk, and it's not how anybody else orders their drawing logic, in any other application. - GLUT uses glutDisplayFunc - GLFW uses while(1){ poll(); draw(); swap(); } - pyglet uses @window.event on_draw - Direct3D uses while(1){ poll(); draw(); present(); } - processing uses a draw() callback In order to handle drawing inside a page (with different compositing needs) properly you will need: - a draw callback (or event, if you prefer, and you can call it composit, present, or whatever) - This needs to be attached to a canvas, so that no matter where the canvas goes (this window or another window, in the page or in fullscreen) it continues to be synced to that canvas, and no other canvas or window. - the draw callback needs to be called when it becomes necessary to fill the canvas for whatever reason, but no more often than absolutely necessary. - if you have a draw callback, that's the only section of code that can affect the frontbuffer, because everything else is undefined and probably undesirable (you can still render to whatever bound framebuffer object, just not to the frontbuffer) - Since the compositor may clear at its discretion, you are required to draw in the draw callback; if you do not, you will lose that frame (to black). Above is how it works everywhere else but browsers. Above is how it should work in browsers too. Above presents a problem with at least one flag on the context: preserveDrawingBuffer: true It's therefore obvious that in order to work with such an event/callback, preserveDrawingBuffer needs to be false. Additionally, you will need to indicate your wish to change to this semantic, which necessitates a flag of some kind.
The easiest flag to catch is somebody making use of that functionality; that's when you switch. But you can also introduce it explicitly somehow, don't matter jack. It's opt in, and only then you'll get new behavior. -------------- next part -------------- An HTML attachment was scrubbed... URL: From juj...@ Wed Dec 31 06:02:02 2014 From: juj...@ (Jukka Jylänki) Date: Wed, 31 Dec 2014 16:02:02 +0200 Subject: [Public WebGL] Losing GL context during rAF execution? Message-ID: Hi there, Is it possible to lose the WebGL context *during* the execution of a rAF handler (or for that matter, during some other JS code)? That is, if I have the following code: function rAF() { if (gl.isContextLost()) { /* handle loss */ return; } gl.xxx(); gl.yyy(); window.requestAnimationFrame(rAF); } is it allowed that the GL context gets lost during the GL calls in the middle? Thanks, Jukka -------------- next part -------------- An HTML attachment was scrubbed... URL:
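[Editor's note: a sketch of defensive context-loss handling for the rAF pattern asked about above. Per the WebGL specification, the webglcontextlost event is delivered asynchronously and calls on a lost context are silent no-ops rather than exceptions, so a mid-frame loss does not crash the loop; the function and parameter names below are illustrative.]

```javascript
// Defensive context-loss handling around a rAF draw loop.
// `drawFrame(gl)` is assumed to issue the per-frame GL calls.
function setupContextLossHandling(canvas, gl, drawFrame) {
  let rafId = 0;
  canvas.addEventListener('webglcontextlost', function (e) {
    e.preventDefault();           // signal that we intend to handle restoration
    cancelAnimationFrame(rafId);  // stop drawing; GL resources are now invalid
  });
  canvas.addEventListener('webglcontextrestored', function () {
    // Recreate all GL resources (shaders, buffers, textures) here,
    // then resume the loop.
    rafId = requestAnimationFrame(loop);
  });
  function loop() {
    drawFrame(gl);                // calls on a lost context silently no-op
    rafId = requestAnimationFrame(loop);
  }
  rafId = requestAnimationFrame(loop);
}
```

The isContextLost() check at the top of a handler, as in the code quoted above, is therefore an optimization rather than a safety requirement: even if loss occurs in the middle of the GL calls, they degrade to no-ops and the event handlers take over on the next turn of the event loop.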