From bja...@ Sat Oct 1 15:54:53 2011
From: bja...@ (Benoit Jacob)
Date: Sat, 01 Oct 2011 18:54:53 -0400
Subject: [Public WebGL] Specify more reasonable minimum values for MAX_TEXTURE_SIZE and friends
Message-ID: <4E879A3D.405@mozilla.com>

Hi list,

The WebGL 1.0 specification does not change the minimum values specified in the OpenGL ES 2.0 spec for the system-dependent MAX_xxx constants, and some of those minimums are absurdly low, while even the least powerful current mobile hardware supports much higher values. Specifically:

GL_MAX_TEXTURE_SIZE           minimum value is 64
GL_MAX_CUBE_MAP_TEXTURE_SIZE  minimum value is 16
GL_MAX_RENDERBUFFER_SIZE      minimum value is 1

Would you agree to have the WebGL 1.0.1 spec explicitly override them as follows?

MAX_TEXTURE_SIZE           minimum value is 1024
MAX_CUBE_MAP_TEXTURE_SIZE  minimum value is 512
MAX_RENDERBUFFER_SIZE      minimum value is 1024

Some background:

* Firefox is implementing a minimum-capabilities mode to help developers ensure their pages will work everywhere: https://bugzilla.mozilla.org/show_bug.cgi?id=686732
* The Intel 945 from 10 years ago supports 2D textures of size 2048 and cube map textures of size 1024, see: http://software.intel.com/en-us/articles/intel-gma-3000-and-x3000-developers-guide/
* Likewise, the mobile GPUs I've seen, like the Tegra and PowerVR, support 2D textures of size 2048.

The value I'm least sure about is MAX_RENDERBUFFER_SIZE. Do you think that 1024 might be too optimistic?
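For reference, a page that wants to guard against low limits today can compare what the implementation reports against these proposed floors. A minimal sketch in plain JavaScript: the floor values are the proposal above, the `belowProposedMinimums` helper is invented for illustration, and in a real page the `limits` object would be filled from `gl.getParameter(gl.MAX_TEXTURE_SIZE)` and friends:

```javascript
// Proposed WebGL 1.0.1 floors from this message (not yet normative).
const PROPOSED_MINIMUMS = {
  MAX_TEXTURE_SIZE: 1024,
  MAX_CUBE_MAP_TEXTURE_SIZE: 512,
  MAX_RENDERBUFFER_SIZE: 1024,
};

// Returns the names of the limits that fall below the proposed floors.
// `limits` maps parameter names to the values the implementation reports,
// e.g. { MAX_TEXTURE_SIZE: gl.getParameter(gl.MAX_TEXTURE_SIZE), ... }
function belowProposedMinimums(limits) {
  return Object.keys(PROPOSED_MINIMUMS)
    .filter((name) => limits[name] < PROPOSED_MINIMUMS[name]);
}

// An implementation reporting only the old ES 2.0 spec minimums
// would fail all three checks:
const es2SpecMinimums = {
  MAX_TEXTURE_SIZE: 64,
  MAX_CUBE_MAP_TEXTURE_SIZE: 16,
  MAX_RENDERBUFFER_SIZE: 1,
};
console.log(belowProposedMinimums(es2SpecMinimums)); // all three names
```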
Cheers,
Benoit

-----------------------------------------------------------
You are currently subscribed to public_webgl...@
To unsubscribe, send an email to majordomo...@ with
the following command in the body of your email:
unsubscribe public_webgl
-----------------------------------------------------------
From bja...@ Tue Oct 4 13:19:09 2011
From: bja...@ (Benoit Jacob)
Date: Tue, 04 Oct 2011 16:19:09 -0400
Subject: [Public WebGL] preserveDrawingBuffer questions
Message-ID: <4E8B6A3D.5030908@mozilla.com>

Hi,

Section 2.2 has a non-normative block saying "While it is sometimes desirable to preserve the drawing buffer, it can cause significant performance loss on some platforms." Could someone please point me to the corresponding discussion, which I can't find? What were these platforms, and why is it slow on them?

Another question: implementing the default semantics (preserveDrawingBuffer=false) on a multi-process browser seems tricky to do efficiently: as far as I can see, this requires double buffering?

Cheers,
Benoit

From vla...@ Tue Oct 4 13:28:30 2011
From: vla...@ (Vladimir Vukicevic)
Date: Tue, 4 Oct 2011 16:28:30 -0400
Subject: [Public WebGL] preserveDrawingBuffer questions
In-Reply-To: <4E8B6A3D.5030908@mozilla.com>
References: <4E8B6A3D.5030908@mozilla.com>
Message-ID:

On Tue, Oct 4, 2011 at 4:19 PM, Benoit Jacob wrote:
> Section 2.2 has a non-normative block saying "While it is sometimes
> desirable to preserve the drawing buffer, it can cause significant
> performance loss on some platforms." Could someone please point me to the
> corresponding discussion, which I can't find? What were these platforms
> and why is it slow on them?
Mobile devices in particular can benefit from this, especially if they're already using a compositor to get GL content to the screen. In that case, you hand off a buffer/texture to be drawn to the screen, and if preserve=true, you would need to make a copy of it before doing so in order to keep drawing on top of the same scene.

> Another question: implementing the default semantics
> (preserveDrawingBuffer=false) on a multi-process browser seems tricky to do
> efficiently: as far as I can see, this requires double buffering?

A multiprocess implementation is actually precisely when preserve=false can provide a speedup. You have to double buffer no matter what -- when you're ready to composite, if you're multiprocess, you have to hand off buffer A for drawing to the screen while you draw into buffer B. If preserve=false, you just swap A and B, clear B, and draw into it while A is being composited. If preserve=true, you have to copy A->B before handing off A and moving drawing to B.

- Vlad

From zhe...@ Wed Oct 5 11:38:29 2011
From: zhe...@ (Mo, Zhenyao)
Date: Wed, 5 Oct 2011 11:38:29 -0700
Subject: [Public WebGL] Proposals for two new WebGL extensions
Message-ID:

I wrote up drafts for two possible WebGL extensions: WEBGL_debug_shaders and WEBGL_debug_gpu_info.

https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/extensions/proposals/WEBGL_debug_gpu_info.html
https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/extensions/proposals/WEBGL_debug_shaders.html

WEBGL_debug_shaders exposes the translated shader source. WEBGL_debug_gpu_info exposes the unmasked VENDOR and RENDERER strings from the underlying graphics driver.
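If the drafts ship, querying the two extensions from privileged code might look like the sketch below. Since this is a mailing-list proposal rather than a finished spec, `gl` here is a tiny mock standing in for a real WebGLRenderingContext so the call shape is self-contained; the `UNMASKED_*` token names and `getTranslatedShaderSource` entry point follow the drafts but should be treated as illustrative, and the vendor/renderer strings are placeholders:

```javascript
// Mock context illustrating the call shape of the two draft extensions.
// A real page would call these methods on a WebGLRenderingContext.
const gl = {
  getExtension(name) {
    const extensions = {
      WEBGL_debug_gpu_info: {
        // Hypothetical token values; the draft defines the real ones.
        UNMASKED_VENDOR_WEBGL: 0x9245,
        UNMASKED_RENDERER_WEBGL: 0x9246,
      },
      WEBGL_debug_shaders: {
        // Returns the source as translated for the underlying driver.
        getTranslatedShaderSource: (shader) => shader.translatedSource,
      },
    };
    return extensions[name] || null; // null when unsupported or unprivileged
  },
  getParameter(token) {
    const values = {
      0x9245: 'Example Vendor',   // placeholder, not real driver data
      0x9246: 'Example Renderer',
    };
    return values[token];
  },
};

// Both extensions may be unavailable to unprivileged content, so
// always null-check the result of getExtension().
const info = gl.getExtension('WEBGL_debug_gpu_info');
if (info) {
  console.log(gl.getParameter(info.UNMASKED_VENDOR_WEBGL));
  console.log(gl.getParameter(info.UNMASKED_RENDERER_WEBGL));
}
```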
I believe these two extensions provide important information to WebGL developers. However, as stated in the drafts, both extensions should only be available to privileged code in a browser, not to regular content, due to user privacy concerns.

Comments are welcome.

Mo

From bja...@ Wed Oct 5 12:08:37 2011
From: bja...@ (Benoit Jacob)
Date: Wed, 05 Oct 2011 15:08:37 -0400
Subject: [Public WebGL] Proposals for two new WebGL extensions
In-Reply-To:
References:
Message-ID: <4E8CAB35.8050300@mozilla.com>

On 05/10/11 02:38 PM, Mo, Zhenyao wrote:
> I wrote up drafts for two possible WebGL extensions: WEBGL_debug_shaders
> and WEBGL_debug_gpu_info.
> [...]
> WEBGL_debug_shaders exposes the translated shader source.
> WEBGL_debug_gpu_info exposes the unmasked VENDOR and RENDERER strings
> from the underlying graphics driver.

I'm OK with the two extensions as currently drafted; just a couple of remarks:

* WEBGL_debug_gpu_info might be better named WEBGL_debug_renderer_info (or, see below, WEBGL_privileged_renderer_info)? Also, it says that this info should not be exposed to unprivileged content, so should the WebGL spec also be updated to be consistent with that? Currently the WebGL spec does not mention the concern about these strings. I also wonder if PRIVILEGED would be a better word than UNMASKED, as it would state more explicitly and neutrally how these strings differ from the ones the spec currently provides.
Similarly, the extension might be better named WEBGL_privileged_renderer_info?

* WEBGL_debug_shaders might not be a specific enough name? How about WEBGL_get_translated_shader_source or some such. The text says that this should not be exposed to unprivileged content because it could be used to identify the GPU. Personally, my concern is a bit different. I'm not much concerned about that particular privacy issue, as it doesn't seem to expose a lot more information than we already expose (through getShaderInfoLog + getParameter + the UA string), and it doesn't make that information more convenient to obtain. What concerns me more is that it exposes precisely which workarounds we use, so if an attacker were fuzzing our ANGLE workarounds to find corner cases where we miss a workaround, that could be handy.

Cheers,
Benoit

From gma...@ Wed Oct 5 12:56:34 2011
From: gma...@ (Gregg Tavares (wrk))
Date: Wed, 5 Oct 2011 12:56:34 -0700
Subject: [Public WebGL] Proposals for two new WebGL extensions
In-Reply-To: <4E8CAB35.8050300@mozilla.com>
References: <4E8CAB35.8050300@mozilla.com>
Message-ID:

On Wed, Oct 5, 2011 at 12:08 PM, Benoit Jacob wrote:
> [...]
> What concerns me more is that it exposes precisely which workarounds
> we use, so if an attacker were fuzzing our ANGLE workarounds to find
> corner cases where we miss a workaround, that could be handy.

How is that any different from today?
If an attacker wants to find out which workarounds we use, at least for Firefox and Chrome they can just download the source and find out. Yes, this makes it slightly easier (they don't have to compile the browser themselves and add a single printf), but it doesn't expose anything they couldn't already get.

From bja...@ Wed Oct 5 16:22:25 2011
From: bja...@ (Benoit Jacob)
Date: Wed, 05 Oct 2011 19:22:25 -0400
Subject: [Public WebGL] Proposals for two new WebGL extensions
In-Reply-To:
References: <4E8CAB35.8050300@mozilla.com>
Message-ID: <4E8CE6B1.6080002@mozilla.com>

On 05/10/11 03:56 PM, Gregg Tavares (wrk) wrote:
> How is that any different from today? If an attacker wants to find out
> which workarounds we use, at least for Firefox and Chrome they can just
> download the source and find out. Yes, this makes it slightly easier
> (they don't have to compile the browser themselves and add a single
> printf), but it doesn't expose anything they couldn't already get.

So that's true for the two browsers that currently ship WebGL enabled by default. How much we can rely on that, as opposed to it being just an 'accident', I don't know.
But it surely isn't my job to advocate security-by-obscurity, so I won't push this point further :-)

Benoit

From kbr...@ Wed Oct 5 16:57:41 2011
From: kbr...@ (Kenneth Russell)
Date: Wed, 5 Oct 2011 16:57:41 -0700
Subject: [Public WebGL] Proposals for two new WebGL extensions
In-Reply-To: <4E8CE6B1.6080002@mozilla.com>
References: <4E8CAB35.8050300@mozilla.com> <4E8CE6B1.6080002@mozilla.com>
Message-ID:

On Wed, Oct 5, 2011 at 4:22 PM, Benoit Jacob wrote:
>> * WEBGL_debug_gpu_info might be better named WEBGL_debug_renderer_info
>> [...] Also, it says that this info should not be exposed to unprivileged
>> content, so should the WebGL spec also be updated to be consistent with
>> that? Currently the WebGL spec does not mention the concern about these
>> strings.

Whether to report the real RENDERER and VENDOR strings in the core WebGL API is still an active area of discussion -- or at least there have been discussions very recently on the topic. For this reason I think we should leave that spec as is for now. However, it might be worth adding some clarifying documentation to WEBGL_debug_gpu_info. Maybe modify the Overview:

    WebGL implementations might mask the RENDERER and VENDOR strings of the
    underlying graphics driver for privacy reasons. This extension exposes
    new tokens to query this information in a guaranteed manner for
    debugging purposes.

>> So that's true for the two browsers that currently ship WebGL enabled by
>> default. [...] But it surely isn't my job to advocate
>> security-by-obscurity, so I won't push this point further :-)

The availability of this extension definitely makes it much easier to query the post-translation source code, and it isn't currently available to JavaScript, so I think it's safer to expose it only when explicitly enabled.

Regarding the naming convention, Ben Vanik suggested in an off-list email that we could expose more extensions with the WEBGL_debug_ prefix, which may be useful in implementing tools like the WebGL Inspector. This sounds like a really good idea, so I'd vote to keep the current convention.

-Ken
>> >> To unsubscribe, send an email to majordomo...@ >> with >> >> the following command in the body of your email: >> unsubscribe public_webgl >> ------------------------------**__----------------------------**- >> >> >> > > ------------------------------**----------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ------------------------------**----------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben...@ Wed Oct 5 17:02:13 2011 From: ben...@ (Ben Vanik) Date: Wed, 5 Oct 2011 17:02:13 -0700 Subject: [Public WebGL] Proposals for two new WebGL extensions In-Reply-To: References: <4E8CAB35.8050300@mozilla.com> <4E8CE6B1.6080002@mozilla.com> Message-ID: semantics, but I'd like to second Benoit's suggestion to use 'renderer' instead of 'gpu' - there may not always be a real GPU behind an implementation :) On Wed, Oct 5, 2011 at 4:57 PM, Kenneth Russell wrote: > On Wed, Oct 5, 2011 at 4:22 PM, Benoit Jacob wrote: > >> >> On 05/10/11 03:56 PM, Gregg Tavares (wrk) wrote: >> >>> >>> >>> On Wed, Oct 5, 2011 at 12:08 PM, Benoit Jacob >> > wrote: >>> >>> >>> On 05/10/11 02:38 PM, Mo, Zhenyao wrote: >>> >>> I wrote up drafts for two possible WebGL extensions: >>> WEBGL_debug_shaders >>> and WEBGL_debug_gpu_info. >>> >>> https://cvs.khronos.org/svn/__**repos/registry/trunk/public/__** >>> webgl/extensions/proposals/__**WEBGL_debug_gpu_info.html >>> >> webgl/extensions/proposals/**WEBGL_debug_gpu_info.html >>> > >>> https://cvs.khronos.org/svn/__**repos/registry/trunk/public/__** >>> webgl/extensions/proposals/__**WEBGL_debug_shaders.html >>> >>> >> webgl/extensions/proposals/**WEBGL_debug_shaders.html >>> > >>> >>> WEBGL_debug_shaders exposes the translated shader source. 
>>> WEBGL_debug_gpu_info exposes the unmasked VENDOR and RENDERER >>> strings >>> from underlying graphics driver. >>> >>> I believe these two extensions provide important information to >>> WebGL >>> developers. However, as stated in the drafts, both extensions >>> should >>> only be available to privileged code in a browser, not the regular >>> content due to user privacy concerns. >>> >>> >>> I'm OK with the two extensions as currently drafted; just a couple >>> of remarks: >>> >>> *WEBGL_debug_gpu_info might be better named >>> WEBGL_debug_renderer_info (or see below, >>> WEBGL_privileged_renderer___**info) ? Also, it says that that info >>> >>> should not be exposed to unprivileged content, so should the WebGL >>> spec also be updated to be consistent with that? Currently the WebGL >>> spec does not mention the concern about these strings. >>> >> > Whether to report the real RENDERER and VENDOR strings in the core WebGL > API is still an active area of discussion -- or at least there have been > discussions very recently on the topic. For this reason I think we should > leave that spec as is for now. However, it might be worth adding some > clarifying documentation to WEBGL_debug_gpu_info. Maybe modify the Overview: > > WebGL implementations might mask the RENDERER and VENDOR strings of the > underlying graphics driver for privacy reasons. This extension exposes new > tokens to query this information in a guaranteed manner for debugging > purposes. > > > Also, I >>> wonder if PRIVILEGED would be a better word than UNMASKED, so it >>> would tell in a more explicit and neutral way what the difference is >>> with the current strings from the spec. Similarly, the extension >>> might be better named WEBGL_privileged_renderer___**info? >> >> >>> >>> * WEBGL_debug_shaders might not be a specific enough name? How >>> about WEBGL_get_translated_shader___**source or some such. 
The text >>> >>> says that this should not be exposed to unprivileged content because >>> this could be used to identify the GPU. Personally, my concern is a >>> bit different. I'm not that much concerned about this particular >>> privacy issue as it doesn't seem to expose a lot more information >>> than we already expose (through getShaderInfoLog + getParameter + UA >>> string), and doesn't make it more convenient to obtain. What I'm >>> more concerned with is that it exposes precisely which workarounds >>> we use, so if an attacker was fuzzing our ANGLE workarounds to find >>> corner cases where we miss a workaround, that could be handy. >>> >>> >>> How is that any different from today? If an attacker wants to find out >>> which workarounds we use, at least for Firefox and Chrome they can just >>> download the source and find out. Yes this makes it slightly easier, >>> they don't have to compile themselves and add a single printf, but it >>> doesn't expose anything they couldn't already get. >>> >> >> So that's true for the two browsers that currently ship WebGL enabled by >> default. How much is that something that we can rely on, as opposed to just >> an 'accident', I don't know. But it surely isn't my job to advocate >> security-by-obscurity so I won't push this point further :-) >> > > The availability of this extension definitely makes it much easier to query > the post-translation source code, and it isn't currently available to > JavaScript, so I think it's safer to expose it only when explicitly enabled. > > Regarding the naming convention, Ben Vanik suggested in an off-list email > that we could expose more extensions with the WEBGL_debug_ prefix which may > be useful in implementing tools like the WebGL Inspector. This sounds like a > really good idea, so I'd vote to keep the current convention. > > -Ken > > >> Benoit >> >> >>> >>> >>> Cheers, >>> Benoit >>> >>> >>> Comments are welcome. 
>>> >>> Mo >>> >>> >>> >>> ------------------------------**__----------------------------**- >>> >>> You are currently subscribed to public_webgl...@ >>> >. >>> >>> To unsubscribe, send an email to majordomo...@ >>> with >>> >>> the following command in the body of your email: >>> unsubscribe public_webgl >>> ------------------------------**__----------------------------**- >>> >>> >>> >> >> ------------------------------**----------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ------------------------------**----------------------------- >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Wed Oct 5 17:07:04 2011 From: kbr...@ (Kenneth Russell) Date: Wed, 5 Oct 2011 17:07:04 -0700 Subject: [Public WebGL] Proposals for two new WebGL extensions In-Reply-To: References: <4E8CAB35.8050300@mozilla.com> <4E8CE6B1.6080002@mozilla.com> Message-ID: Fair point. On Wed, Oct 5, 2011 at 5:02 PM, Ben Vanik wrote: > semantics, but I'd like to second Benoit's suggestion to use 'renderer' > instead of 'gpu' - there may not always be a real GPU behind an > implementation :) > > > On Wed, Oct 5, 2011 at 4:57 PM, Kenneth Russell wrote: > >> On Wed, Oct 5, 2011 at 4:22 PM, Benoit Jacob wrote: >> >>> >>> On 05/10/11 03:56 PM, Gregg Tavares (wrk) wrote: >>> >>>> >>>> >>>> On Wed, Oct 5, 2011 at 12:08 PM, Benoit Jacob >>> > wrote: >>>> >>>> >>>> On 05/10/11 02:38 PM, Mo, Zhenyao wrote: >>>> >>>> I wrote up drafts for two possible WebGL extensions: >>>> WEBGL_debug_shaders >>>> and WEBGL_debug_gpu_info. 
>>>> >>>> https://cvs.khronos.org/svn/__**repos/registry/trunk/public/__** >>>> webgl/extensions/proposals/__**WEBGL_debug_gpu_info.html >>>> >>> webgl/extensions/proposals/**WEBGL_debug_gpu_info.html >>>> > >>>> https://cvs.khronos.org/svn/__**repos/registry/trunk/public/__** >>>> webgl/extensions/proposals/__**WEBGL_debug_shaders.html >>>> >>>> >>> webgl/extensions/proposals/**WEBGL_debug_shaders.html >>>> > >>>> >>>> WEBGL_debug_shaders exposes the translated shader source. >>>> WEBGL_debug_gpu_info exposes the unmasked VENDOR and RENDERER >>>> strings >>>> from underlying graphics driver. >>>> >>>> I believe these two extensions provide important information to >>>> WebGL >>>> developers. However, as stated in the drafts, both extensions >>>> should >>>> only be available to privileged code in a browser, not the >>>> regular >>>> content due to user privacy concerns. >>>> >>>> >>>> I'm OK with the two extensions as currently drafted; just a couple >>>> of remarks: >>>> >>>> *WEBGL_debug_gpu_info might be better named >>>> WEBGL_debug_renderer_info (or see below, >>>> WEBGL_privileged_renderer___**info) ? Also, it says that that info >>>> >>>> should not be exposed to unprivileged content, so should the WebGL >>>> spec also be updated to be consistent with that? Currently the WebGL >>>> spec does not mention the concern about these strings. >>>> >>> >> Whether to report the real RENDERER and VENDOR strings in the core WebGL >> API is still an active area of discussion -- or at least there have been >> discussions very recently on the topic. For this reason I think we should >> leave that spec as is for now. However, it might be worth adding some >> clarifying documentation to WEBGL_debug_gpu_info. Maybe modify the Overview: >> >> WebGL implementations might mask the RENDERER and VENDOR strings of the >> underlying graphics driver for privacy reasons. This extension exposes new >> tokens to query this information in a guaranteed manner for debugging >> purposes. 
>>>> Also, I wonder if PRIVILEGED would be a better word than UNMASKED, so it
>>>> would tell in a more explicit and neutral way what the difference is
>>>> with the current strings from the spec. Similarly, the extension
>>>> might be better named WEBGL_privileged_renderer_info?
>>>>
>>>> * WEBGL_debug_shaders might not be a specific enough name? How
>>>> about WEBGL_get_translated_shader_source or some such. The text
>>>> says that this should not be exposed to unprivileged content because
>>>> it could be used to identify the GPU. Personally, my concern is a
>>>> bit different. I'm not that concerned about this particular
>>>> privacy issue, as it doesn't seem to expose much more information
>>>> than we already expose (through getShaderInfoLog + getParameter + UA
>>>> string), and it doesn't make that information more convenient to
>>>> obtain. What I'm more concerned about is that it exposes precisely
>>>> which workarounds we use, so if an attacker were fuzzing our ANGLE
>>>> workarounds to find corner cases where we miss a workaround, that
>>>> could be handy.
>>>>
>>>> How is that any different from today? If an attacker wants to find out
>>>> which workarounds we use, at least for Firefox and Chrome they can just
>>>> download the source and find out. Yes, this makes it slightly easier --
>>>> they don't have to compile it themselves and add a single printf -- but
>>>> it doesn't expose anything they couldn't already get.
>>>
>>> So that's true for the two browsers that currently ship WebGL enabled by
>>> default. How much is that something we can rely on, as opposed to just
>>> an 'accident'? I don't know.
>>> But it surely isn't my job to advocate
>>> security-by-obscurity, so I won't push this point further :-)
>>
>> The availability of this extension definitely makes it much easier to
>> query the post-translation source code, and that isn't currently available
>> to JavaScript, so I think it's safer to expose it only when explicitly
>> enabled.
>>
>> Regarding the naming convention, Ben Vanik suggested in an off-list email
>> that we could expose more extensions with the WEBGL_debug_ prefix, which
>> may be useful in implementing tools like the WebGL Inspector. This sounds
>> like a really good idea, so I'd vote to keep the current convention.
>>
>> -Ken
>>
>>> Benoit

From cal...@ Wed Oct 5 23:26:19 2011 From: cal...@ (Mark Callow) Date: Thu, 06 Oct 2011 15:26:19 +0900 Subject: [Public WebGL] preserveDrawingBuffer questions In-Reply-To: References: <4E8B6A3D.5030908@mozilla.com> Message-ID: <4E8D4A0B.4070909@hicorp.co.jp>

What Vlad says is true, but the issue goes deeper than that.
On a tiling GPU -- the majority architecture in the mobile/low-power space --
if preserve=true, the area of the previous frame's render buffer
corresponding to the tile being worked on has to be copied into the
high-speed tile memory.

Regards
-Mark

On 2011/10/05 5:28, Vladimir Vukicevic wrote:
> On Tue, Oct 4, 2011 at 4:19 PM, Benoit Jacob wrote:
>> Hi,
>>
>> Section 2.2 has a non-normative block saying "While it is sometimes
>> desirable to preserve the drawing buffer, it can cause significant
>> performance loss on some platforms." Could someone please point me to the
>> corresponding discussion, which I can't find? What were these platforms,
>> and why is that slow on them?
> Mobile devices in particular can benefit from this, especially if
> they're already using a compositor to get GL content to the screen.
> In that case, you hand off a buffer/texture to be drawn to the screen,
> and if preserve=true, then you need to make a copy of it before
> doing so in order to keep drawing on top of the same scene.
>
>> Another question: implementing the default semantics
>> (preserveDrawingBuffer=false) in a multi-process browser seems tricky to
>> do efficiently: as far as I can see, this requires double buffering?
> A multi-process implementation is actually precisely when
> preserve=false can provide a speedup. You have to double buffer no
> matter what -- when you're ready to composite, if you're multi-process,
> you have to hand off buffer A for drawing to the screen while you draw
> into buffer B. If preserve=false, you just swap A and B, clear B, and
> draw into it while A is being composited. If preserve=true, you have
> to copy A->B before handing off A and moving drawing to B.
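Vlad's swap-versus-copy distinction can be modeled with plain arrays standing in for buffers A and B. This is a toy sketch, not browser code; the `handOff` helper and its copy counter are invented here for illustration only:

```javascript
// Toy model of per-frame compositor hand-off in a multi-process browser.
// state.drawing is the buffer WebGL renders into; state.composited is the
// buffer currently handed to the compositor; copies counts full-buffer copies.
function handOff(state, preserve) {
  if (preserve) {
    // preserve=true: copy A->B before handing off A, so the page can keep
    // drawing on top of the same scene -- one full-buffer copy per frame.
    return {
      composited: state.drawing,
      drawing: state.drawing.slice(), // the copy
      copies: state.copies + 1,
    };
  }
  // preserve=false: just swap A and B and clear the new drawing buffer;
  // no copy is needed, but the previous frame's contents are gone.
  return {
    composited: state.drawing,
    drawing: state.composited.map(() => 0), // cleared
    copies: state.copies,
  };
}
```

With preserve=false the previous contents vanish after the swap, which is exactly why content that accumulates into the drawing buffer across frames has to request preserveDrawingBuffer at context creation and pay the copy cost.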
>
> - Vlad

From kbr...@ Mon Oct 10 16:08:15 2011 From: kbr...@ (Kenneth Russell) Date: Mon, 10 Oct 2011 16:08:15 -0700 Subject: [Public WebGL] Proposals for two new WebGL extensions In-Reply-To: References: <4E8CAB35.8050300@mozilla.com> <4E8CE6B1.6080002@mozilla.com> Message-ID:

Mo, could you also rename UNMASKED_VENDOR and UNMASKED_RENDERER to
UNMASKED_VENDOR_WEBGL and UNMASKED_RENDERER_WEBGL? We've been using
the _WEBGL suffix for new, WebGL-specific enums.

Once that's done I can help you get values assigned to them --
assuming we're all in agreement.

-Ken

On Wed, Oct 5, 2011 at 5:07 PM, Kenneth Russell wrote:
> Fair point.
>
> On Wed, Oct 5, 2011 at 5:02 PM, Ben Vanik wrote:
>> semantics, but I'd like to second Benoit's suggestion to use 'renderer'
>> instead of 'gpu' - there may not always be a real GPU behind an
>> implementation :)
From zhe...@ Tue Oct 11 11:01:24 2011 From: zhe...@ (Mo, Zhenyao) Date: Tue, 11 Oct 2011 11:01:24 -0700 Subject: [Public WebGL] Proposals for two new WebGL extensions In-Reply-To: References: <4E8CAB35.8050300@mozilla.com> <4E8CE6B1.6080002@mozilla.com> Message-ID:

Thanks for all the feedback. I revised the draft accordingly.
1) Renamed WEBGL_debug_gpu_info to WEBGL_debug_renderer_info.
2) Renamed UNMASKED_VENDOR/UNMASKED_RENDERER to
UNMASKED_VENDOR_WEBGL/UNMASKED_RENDERER_WEBGL.
3) Revised the overview of WEBGL_debug_renderer_info to explain why we need
this extension.

If there is no further feedback, I will add the extensions to the WebGL
extension registry.

Mo

On Mon, Oct 10, 2011 at 4:08 PM, Kenneth Russell wrote:
> Mo, could you also rename UNMASKED_VENDOR and UNMASKED_RENDERER to
> UNMASKED_VENDOR_WEBGL and UNMASKED_RENDERER_WEBGL? We've been using
> the _WEBGL suffix for new, WebGL-specific enums.
>
> Once that's done I can help you get values assigned to them --
> assuming we're all in agreement.
>
> -Ken
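The WEBGL_debug_shaders draft under discussion exposes the post-translation shader source. As a sketch of how privileged code might consume it: the `getEffectiveShaderSource` helper and its fallback to the original source are my own illustration, not part of the draft; only `getTranslatedShaderSource` comes from the proposal.

```javascript
// Illustrative helper: return the driver-level (translated) source for a
// shader when the privileged WEBGL_debug_shaders extension is available,
// e.g. the HLSL that ANGLE generated from GLSL input. Falls back to the
// source the page originally supplied. `gl` is a WebGLRenderingContext.
function getEffectiveShaderSource(gl, shader) {
  const ext = gl.getExtension('WEBGL_debug_shaders');
  if (ext) {
    return ext.getTranslatedShaderSource(shader);
  }
  return gl.getShaderSource(shader); // unprivileged content sees only this
}
```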
From pya...@ Tue Oct 11 12:32:30 2011 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Tue, 11 Oct 2011 21:32:30 +0200 Subject: [Public WebGL] Proposals for two new WebGL extensions In-Reply-To: References: <4E8CAB35.8050300@mozilla.com> <4E8CE6B1.6080002@mozilla.com> Message-ID:

On Tue, Oct 11, 2011 at 8:01 PM, Mo, Zhenyao wrote:
> If there is no further feedback, I will add the extensions to the WebGL
> extension registry.

Since this is intended for privileged use only, I think it's of minimal to
no use. Most people won't be writing their experiment/game/app as a browser
extension.

From bja...@ Tue Oct 11 12:43:07 2011 From: bja...@ (Benoit Jacob) Date: Tue, 11 Oct 2011 15:43:07 -0400 Subject: [Public WebGL] Proposals for two new WebGL extensions In-Reply-To: References: <4E8CAB35.8050300@mozilla.com> <4E8CE6B1.6080002@mozilla.com> Message-ID: <4E949C4B.7080102@mozilla.com>

On 11/10/11 03:32 PM, Florian Bösch wrote:
> Since this is intended for privileged use only, I think it's of minimal
> to no use. Most people won't be writing their experiment/game/app as a
> browser extension.

Certainly browser extensions are a smaller audience than web pages; but
they do exist, and I welcome any effort to also standardize the privileged
interfaces that they use.
Benoit

From pya...@ Tue Oct 11 13:04:12 2011 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Tue, 11 Oct 2011 22:04:12 +0200 Subject: [Public WebGL] Proposals for two new WebGL extensions In-Reply-To: <4E949C4B.7080102@mozilla.com> References: <4E8CAB35.8050300@mozilla.com> <4E8CE6B1.6080002@mozilla.com> <4E949C4B.7080102@mozilla.com> Message-ID:

On Tue, Oct 11, 2011 at 9:43 PM, Benoit Jacob wrote:
> Certainly browser extensions are a smaller audience than web pages; but
> they do exist, and I welcome any effort to also standardize the privileged
> interfaces that they use.

I just think it's ironic that this information should be sacrificed to
privacy concerns. The User-Agent string includes everything and the kitchen
sink of information, and from a technical point of view none of those bits
are of any practical use in making sites work better. Yet the two bits (GPU
vendor and model) that would *actually* help make WebGL apps run better are
withheld. If somebody is going through the trouble of identifying people by
their user agent, they'll likely also sniff out WebGL support and
capabilities, Flash version, ad-block settings, cookie settings,
local-storage settings, and so on. For all intents and purposes, people are
already uniquely identifiable, GPU brand/model or no.
From bja...@ Tue Oct 11 13:12:05 2011 From: bja...@ (Benoit Jacob) Date: Tue, 11 Oct 2011 16:12:05 -0400 Subject: [Public WebGL] Proposals for two new WebGL extensions In-Reply-To: References: <4E8CAB35.8050300@mozilla.com> <4E8CE6B1.6080002@mozilla.com> <4E949C4B.7080102@mozilla.com> Message-ID: <4E94A315.3060702@mozilla.com>

On 11/10/11 04:04 PM, Florian Bösch wrote:
> I just think it's ironic that this information should be sacrificed to
> privacy concerns. The User-Agent includes everything and the kitchen sink
> of information.

Firefox's UA string, in stable releases, only gives the OS, CPU
architecture, and the Firefox version. The rest is frozen to arbitrary,
meaningless values. For example:

Mozilla/5.0 (Windows NT 5.0; WOW64; rv:6.0) Gecko/20100101 Firefox/6.0

Thus it doesn't give any information that couldn't be easily inferred by
other means, except perhaps 1 bit for the CPU architecture (32/64-bit).

Benoit

From zhe...@ Tue Oct 11 13:19:47 2011 From: zhe...@ (Mo, Zhenyao) Date: Tue, 11 Oct 2011 13:19:47 -0700 Subject: [Public WebGL] Proposals for two new WebGL extensions In-Reply-To: References: <4E8CAB35.8050300@mozilla.com> <4E8CE6B1.6080002@mozilla.com> Message-ID:

On Tue, Oct 11, 2011 at 12:32 PM, Florian Bösch wrote:
>> Since this is intended for privileged use only, I think it's of minimal
>> to no use. Most people won't be writing their experiment/game/app as a
>> browser extension.

The purpose of the extensions is to provide a way to access this
information when it is critical to an app. At least there is an option.

From pya...@ Tue Oct 11 13:23:08 2011 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Tue, 11 Oct 2011 22:23:08 +0200 Subject: [Public WebGL] Proposals for two new WebGL extensions In-Reply-To: <4E94A315.3060702@mozilla.com> References: <4E8CAB35.8050300@mozilla.com> <4E8CE6B1.6080002@mozilla.com> <4E949C4B.7080102@mozilla.com> <4E94A315.3060702@mozilla.com> Message-ID:

The day will come that companies start doing WebGL apps and selling them.
The day will come when they have a support forum and an engineering team
tasked with making the application as widely accessible as possible despite
the heterogeneous GPU landscape. The day will come on which you will wake
up to an unpleasant and widely popularized blog post by some graphics/game
producing bigwig slamming WebGL in the most colorful of phrases. The day
will come, and I've told you.

From kbr...@ Wed Oct 12 14:24:46 2011 From: kbr...@ (Kenneth Russell) Date: Wed, 12 Oct 2011 14:24:46 -0700 Subject: [Public WebGL] Proposals for two new WebGL extensions In-Reply-To: References: <4E8CAB35.8050300@mozilla.com> <4E8CE6B1.6080002@mozilla.com> Message-ID:

I've tentatively assigned, out of the WebGL enum block,
UNMASKED_VENDOR_WEBGL to 0x9245 and UNMASKED_RENDERER_WEBGL to 0x9246.
These will be final once they've been added to the OpenGL registry; I've
filed a request for that.
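Under the names and tentative enum values above, a privileged caller's query might look like the following sketch. The `getRendererInfo` helper and its fallback to the core (possibly masked) VENDOR/RENDERER strings are illustrative assumptions, not text from the draft:

```javascript
// Illustrative helper: return the unmasked VENDOR/RENDERER strings when the
// privileged WEBGL_debug_renderer_info extension is available, falling back
// to the core strings (which implementations may mask) otherwise.
function getRendererInfo(gl) {
  const ext = gl.getExtension('WEBGL_debug_renderer_info');
  if (ext) {
    // Tentatively assigned from the WebGL enum block:
    // UNMASKED_VENDOR_WEBGL = 0x9245, UNMASKED_RENDERER_WEBGL = 0x9246.
    return {
      vendor: gl.getParameter(ext.UNMASKED_VENDOR_WEBGL),
      renderer: gl.getParameter(ext.UNMASKED_RENDERER_WEBGL),
    };
  }
  return {
    vendor: gl.getParameter(gl.VENDOR),
    renderer: gl.getParameter(gl.RENDERER),
  };
}
```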
I also renamed the file to WEBGL_debug_renderer_info.html to match the new
name in the text, so the new URL is
http://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_debug_renderer_info.html .
The name in the IDL needs to be updated to WEBGL_debug_renderer_info.
Issue 1 can now be resolved.

On Tue, Oct 11, 2011 at 11:01 AM, Mo, Zhenyao wrote:
> Thanks for all the feedback. I revised the draft accordingly.
> 1) Renamed WEBGL_debug_gpu_info to WEBGL_debug_renderer_info.
> 2) Renamed UNMASKED_VENDOR/UNMASKED_RENDERER to
> UNMASKED_VENDOR_WEBGL/UNMASKED_RENDERER_WEBGL.
> 3) Revised the overview of WEBGL_debug_renderer_info to explain why we
> need this extension.
> If there is no further feedback, I will add the extensions to the WebGL
> extension registry.
> Mo
?------------------------------__----------------------------- >> >>>>> >> >>>>> >> >>>> >> >>>> >> >>>> ----------------------------------------------------------- >> >>>> You are currently subscribed to public_webgl...@ >> >>>> To unsubscribe, send an email to majordomo...@ with >> >>>> the following command in the body of your email: >> >>>> unsubscribe public_webgl >> >>>> ----------------------------------------------------------- >> >>>> >> >>> >> >> >> > >> > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From zhe...@ Wed Oct 12 16:04:36 2011 From: zhe...@ (Mo, Zhenyao) Date: Wed, 12 Oct 2011 16:04:36 -0700 Subject: [Public WebGL] Proposals for two new WebGL extensions In-Reply-To: References: <4E8CAB35.8050300@mozilla.com> <4E8CE6B1.6080002@mozilla.com> Message-ID: Thanks Ken. I've updated the draft for the wrong name in the IDL and the issue resolved. So I added the two extensions to the WebGL extension registry: WEBGL_debug_renderer_info: extension #6 WEBGL_debug_shaders: extension #7 https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/extensions/WEBGL_debug_renderer_info/index.html https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/extensions/WEBGL_debug_shaders/index.html They are ready to be implemented. Thanks for all the feedbacks. Mo On Wed, Oct 12, 2011 at 2:24 PM, Kenneth Russell wrote: > I've tentatively assigned, out of the WebGL enum block, > UNMASKED_VENDOR_WEBGL to 0x9245 and UNMASKED_RENDERER_WEBGL to 0x9246 > . These will be final once they've been added to the OpenGL registry; > I've filed a request for that. 
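Once the extension is implemented, privileged code could query the new tokens roughly as follows. This is a hedged sketch, not code from the draft: the helper name `getRendererInfo` and the fallback behavior are my own, and only the tentatively assigned enum values above are taken from the thread.

```javascript
// Sketch: query the unmasked strings where the (privileged-only)
// WEBGL_debug_renderer_info extension is available, and fall back to the
// ordinary, possibly masked VENDOR/RENDERER strings otherwise.
// getRendererInfo is a hypothetical helper name, not part of any spec.
function getRendererInfo(gl) {
  var ext = gl.getExtension('WEBGL_debug_renderer_info');
  if (ext) {
    return {
      vendor: gl.getParameter(ext.UNMASKED_VENDOR_WEBGL),
      renderer: gl.getParameter(ext.UNMASKED_RENDERER_WEBGL),
      unmasked: true
    };
  }
  // 0x1F00 / 0x1F01 are the core GL VENDOR / RENDERER enums.
  return {
    vendor: gl.getParameter(0x1F00),
    renderer: gl.getParameter(0x1F01),
    unmasked: false
  };
}
```

The point of the fallback is that unprivileged pages, where the extension is absent, still get the masked strings rather than an error.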
> I also renamed the file to WEBGL_debug_renderer_info.html to match the
> new name in the text, so the new URL is
>
> http://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_debug_renderer_info.html
>
> The name in the IDL needs to be updated to WEBGL_debug_renderer_info.
> Issue 1 can now be resolved.
>
> Looks
>
> On Tue, Oct 11, 2011 at 11:01 AM, Mo, Zhenyao wrote:
> > Thanks for all the feedback. I revised the draft accordingly.
> > 1) renamed WEBGL_debug_gpu_info to WEBGL_debug_renderer_info
> > 2) renamed UNMASKED_VENDOR/UNMASKED_RENDERER to
> > UNMASKED_VENDOR_WEBGL/UNMASKED_RENDERER_WEBGL
> > 3) revised the overview of WEBGL_debug_renderer_info to explain why we
> > need this extension.
> > If there is no further feedback, I will add the extensions to the WebGL
> > extension registry.
> > Mo
> >
> > On Mon, Oct 10, 2011 at 4:08 PM, Kenneth Russell wrote:
> >> Mo, could you also rename UNMASKED_VENDOR and UNMASKED_RENDERER to
> >> UNMASKED_VENDOR_WEBGL and UNMASKED_RENDERER_WEBGL? We've been using
> >> the _WEBGL suffix for new, WebGL-specific enums.
> >>
> >> Once that's done I can help you get values assigned to them --
> >> assuming we're all in agreement.
> >>
> >> -Ken
> >>
> >> On Wed, Oct 5, 2011 at 5:07 PM, Kenneth Russell wrote:
> >> > Fair point.
> >>
> >> On Wed, Oct 5, 2011 at 5:02 PM, Ben Vanik wrote:
> >>> semantics, but I'd like to second Benoit's suggestion to use
> >>> 'renderer' instead of 'gpu' - there may not always be a real GPU
> >>> behind an implementation :)
> >>>
> >>> On Wed, Oct 5, 2011 at 4:57 PM, Kenneth Russell wrote:
> >>>> On Wed, Oct 5, 2011 at 4:22 PM, Benoit Jacob wrote:
> >>>>> On 05/10/11 03:56 PM, Gregg Tavares (wrk) wrote:
> >>>>>> On Wed, Oct 5, 2011 at 12:08 PM, Benoit Jacob wrote:
> >>>>>>> On 05/10/11 02:38 PM, Mo, Zhenyao wrote:
> >>>>>>>> I wrote up drafts for two possible WebGL extensions:
> >>>>>>>> WEBGL_debug_shaders and WEBGL_debug_gpu_info.
> >>>>>>>>
> >>>>>>>> https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/extensions/proposals/WEBGL_debug_gpu_info.html
> >>>>>>>> https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/extensions/proposals/WEBGL_debug_shaders.html
> >>>>>>>>
> >>>>>>>> WEBGL_debug_shaders exposes the translated shader source.
> >>>>>>>> WEBGL_debug_gpu_info exposes the unmasked VENDOR and RENDERER
> >>>>>>>> strings from the underlying graphics driver.
> >>>>>>>>
> >>>>>>>> I believe these two extensions provide important information to
> >>>>>>>> WebGL developers. However, as stated in the drafts, both
> >>>>>>>> extensions should only be available to privileged code in a
> >>>>>>>> browser, not to regular content, due to user privacy concerns.
> >>>>> [...]

-----------------------------------------------------------
You are currently subscribed to public_webgl...@
To unsubscribe, send an email to majordomo...@ with
the following command in the body of your email:
unsubscribe public_webgl
-----------------------------------------------------------

From gma...@ Wed Oct 12 18:25:32 2011
From: gma...@ (Gregg Tavares (wrk))
Date: Wed, 12 Oct 2011 18:25:32 -0700
Subject: [Public WebGL] Handling context lost in WebGL
Message-ID: 

I just got through updating all the demos on the Khronos WebGL SDK to
handle context lost. I wrote up some of the issues I ran into and their
solutions and put them on the WebGL Wiki here:

http://www.khronos.org/webgl/wiki/HandlingContextLost

I hope that's useful.

-gregg

-------------- next part --------------
An HTML attachment was scrubbed...
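The core pattern the wiki page documents can be sketched like this. A hedged sketch, not the wiki's exact code: the function name and the callback parameters are my own; only the two event names and the `preventDefault()` requirement come from the standard WebGL context-loss machinery.

```javascript
// Sketch: stop rendering when the context is lost, and rebuild all GL
// resources and resume when it is restored. Calling preventDefault() on
// the lost event signals that we want the restored event to be delivered.
function setupContextHandling(canvas, init, startRendering, stopRendering) {
  canvas.addEventListener('webglcontextlost', function (event) {
    event.preventDefault();   // allow the context to be restored later
    stopRendering();          // e.g. cancel the pending requestAnimationFrame
  }, false);
  canvas.addEventListener('webglcontextrestored', function () {
    init();                   // recreate textures, buffers, shaders, ...
    startRendering();
  }, false);
}
```

The important consequence, discussed further down the thread, is that `init()` must be safe to run more than once.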
URL: 

From bja...@ Wed Oct 12 18:46:23 2011
From: bja...@ (Benoit Jacob)
Date: Wed, 12 Oct 2011 21:46:23 -0400
Subject: [Public WebGL] Handling context lost in WebGL
Message-ID: <4E9642EF.4060306@mozilla.com>

On 12/10/11 09:25 PM, Gregg Tavares (wrk) wrote:
> I just got through updating all the demos on the Khronos WebGL SDK
> to handle context lost.
>
> I wrote up some of the issues I ran into and their solutions and put
> them on the WebGL Wiki here:
>
> http://www.khronos.org/webgl/wiki/HandlingContextLost
>
> I hope that's useful.

It is useful. Firefox will soon start sending these events based on
ARB_robustness / EGL_CONTEXT_LOST, and will soon have the
WEBKIT_lose_context extension too.

Benoit

From cal...@ Wed Oct 12 19:33:22 2011
From: cal...@ (Mark Callow)
Date: Thu, 13 Oct 2011 11:33:22 +0900
Subject: [Public WebGL] Handling context lost in WebGL
Message-ID: <4E964DF2.6030907@hicorp.co.jp>

In

  function init() {
    canvas.removeEventListener('mousemove', mouseMoveHandler, false);
    ...
    mouseUniformLocation = gl.getUniformLocation(program, "uMouse");
    ...
    function mouseMoveHandler(event) {
      gl.uniform2f(mouseIniformLocation, event.x, event.y);
    }
    canvas.addEventListener('mousemove', mouseMoveHandler, false);
    ...

do you really need the first line? Since the handler is no longer an
anonymous function, it should use the current value of
mouseUniformLocation when called.

I think this is a very useful document. Thanks for preparing it.
Regards

-Mark

On 13/10/2011 10:25, Gregg Tavares (wrk) wrote:
> I just got through updating all the demos on the Khronos WebGL SDK
> to handle context lost.
>
> I wrote up some of the issues I ran into and their solutions and put
> them on the WebGL Wiki here:
>
> http://www.khronos.org/webgl/wiki/HandlingContextLost
>
> I hope that's useful.
>
> -gregg

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kbr...@ Wed Oct 12 20:01:32 2011
From: kbr...@ (Kenneth Russell)
Date: Wed, 12 Oct 2011 20:01:32 -0700
Subject: [Public WebGL] Handling context lost in WebGL
Message-ID: 

Thanks, this is great! I added a link to it from the front page of the
wiki.

-Ken

From gma...@ Wed Oct 12 22:37:44 2011
From: gma...@ (Gregg Tavares (wrk))
Date: Wed, 12 Oct 2011 22:37:44 -0700
Subject: [Public WebGL] Handling context lost in WebGL
In-Reply-To: <4E964DF2.6030907@hicorp.co.jp>
References: <4E964DF2.6030907@hicorp.co.jp>
Message-ID: 

On Wed, Oct 12, 2011 at 7:33 PM, Mark Callow wrote:
> In
>
>   function init() {
>     canvas.removeEventListener('mousemove', mouseMoveHandler, false);
>     ...
>     var mouseUniformLocation = gl.getUniformLocation(program, "uMouse");
>     ...
>     function mouseMoveHandler(event) {
>       gl.uniform2f(mouseUniformLocation, event.x, event.y);
>     }
>     canvas.addEventListener('mousemove', mouseMoveHandler, false);
>     ...
>
> do you really need the first line?

Yes. I fixed a typo above (and on the wiki). Maybe that makes it clearer?
There were two issues:

1) init is called more than once (at the beginning and each time the
context is restored). That means a new mouseUniformLocation is created
each time. If you made mouseUniformLocation a global instead of a local,
it would work, for example.

2) When you use an anonymous function as an event listener, there is no
way to remove it, since you need to pass it to removeEventListener.

> Since the handler is no longer an anonymous function, it should use the
> current value of mouseUniformLocation when called.
>
> I think this is a very useful document. Thanks for preparing it.
>
> Regards
>
> -Mark

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From shr...@ Fri Oct 14 05:28:17 2011
From: shr...@ (Shropshire, Andrew A)
Date: Fri, 14 Oct 2011 08:28:17 -0400
Subject: [Public WebGL] Stereoscopic monitors
Message-ID: <6BC169D8374E26408B53658874112798F6222741@VNAX.gsi.grci.com>

If I write WebGL and my website has WebGL content, will it appear in 3D
on a stereoscopic monitor (3D monitor) if I purchase one? I.e., is
stereoscopic monitor support a benefit of using WebGL?

Andrew Shropshire
AT&T Government Solutions, Inc.
703-506-5708
shropshire...@

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

-------------- next part --------------
A non-text attachment was scrubbed...
From ste...@ Fri Oct 14 08:44:01 2011
From: ste...@ (Steve Baker)
Date: Fri, 14 Oct 2011 10:44:01 -0500
Subject: [Public WebGL] Stereoscopic monitors
In-Reply-To: <6BC169D8374E26408B53658874112798F6222741@VNAX.gsi.grci.com>
References: <6BC169D8374E26408B53658874112798F6222741@VNAX.gsi.grci.com>
Message-ID: <1f171e4b1e6afd10bf9630d38eabaf9c.squirrel@webmail.sjbaker.org>

No. Unless the browser had some kind of special support, WebGL will not
appear any different from normal 2D images. I doubt such support is
likely anytime soon, because it would imply massive changes to at least
the canvas subsystem - and likely throughout all of HTML. Stereo monitors
just aren't popular enough to make the effort that this would entail
remotely worthwhile.

Technically: to use stereoscopic displays, you have to render the entire
scene twice, once from the left-eye perspective and again from the right
eye. These two images then have to be overlaid or combined or written
into two separate rendering buffers. There is support for doing this kind
of thing in OpenGL via various quad-buffer extensions and such, but none
of that is present in WebGL (AFAICT). Even if the extensions were
available, the whole question of how the compositing pipeline would work
in stereo has not been considered at all.

Also, IMHO, stereoscopic monitors are a complete waste of money. Except
in very niche applications, stereoscopic 3D is highly problematic. Issues
of dynamic depth of focus mean that, barring some pretty stunning
technological leaps, these technologies will always cause people to
suffer headaches and other nasty symptoms - just as they do with 3D
televisions. To avoid this, the 3D-ness of the scene and the positioning
of the camera and set/lighting design have to be carefully considered.
It's not just a matter of displaying the material correctly.
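The "render the scene twice" idea described above comes down to offsetting the camera once per eye. A minimal sketch of just that camera math, under stated assumptions: the function name and the simple symmetric-offset model are my own, and real stereo renderers typically also skew the projection frustum per eye rather than only moving the eye point.

```javascript
// Sketch: compute left/right eye positions by offsetting the camera eye
// point by half the interocular distance along the camera's right vector.
// Vectors are plain [x, y, z] arrays; stereoEyePositions is a hypothetical
// helper, not a WebGL API.
function stereoEyePositions(eye, right, interocular) {
  var h = interocular / 2;
  function offset(sign) {
    return [eye[0] + sign * h * right[0],
            eye[1] + sign * h * right[1],
            eye[2] + sign * h * right[2]];
  }
  // Each position would be used to build one view matrix, and the scene
  // drawn once per eye into its own buffer or viewport half.
  return { left: offset(-1), right: offset(+1) };
}
```

Each returned position feeds one of the two render passes; the two resulting images are then combined however the display requires (side-by-side, interleaved, or quad-buffered).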
Shropshire, Andrew A wrote:
> If I write WebGL and my website has WebGL content, will it appear in 3D
> on a stereoscopic monitor (3D monitor) if I purchase one? I.e., is
> stereoscopic monitor support a benefit of using WebGL?

-- Steve

From shr...@ Fri Oct 14 08:59:50 2011
From: shr...@ (Shropshire, Andrew A)
Date: Fri, 14 Oct 2011 11:59:50 -0400
Subject: [Public WebGL] Stereoscopic monitors
In-Reply-To: <1f171e4b1e6afd10bf9630d38eabaf9c.squirrel@webmail.sjbaker.org>
References: <6BC169D8374E26408B53658874112798F6222741@VNAX.gsi.grci.com>
 <1f171e4b1e6afd10bf9630d38eabaf9c.squirrel@webmail.sjbaker.org>
Message-ID: <6BC169D8374E26408B53658874112798F6222745@VNAX.gsi.grci.com>

Ok, thanks for the information. Perhaps some non-stereoscopic 3D display
will come along in the future that will be cheap. If I were designing the
3D APIs, I would anticipate this and make the projection part of the
pipeline removable. Also, it might be helpful to have more text-handling
routines and font support, like SVG, to facilitate drawing of text in 3D,
so that mundane work like improving the look of buttons, scrollbars, etc.
in business applications could be improved. Maybe this is what Windows 8
will do.

Andrew Shropshire
AT&T Government Solutions, Inc.
703-506-5708
shropshire...@

-----Original Message-----
From: Steve Baker [mailto:steve...@]
Sent: Friday, October 14, 2011 11:44 AM
To: Shropshire, Andrew A
Cc: 'public_webgl...@'
Subject: Re: [Public WebGL] Stereoscopic monitors

[...]

-------------- next part --------------
A non-text attachment was scrubbed...
From won...@ Fri Oct 14 09:26:16 2011
From: won...@ (Won Chun)
Date: Fri, 14 Oct 2011 12:26:16 -0400
Subject: [Public WebGL] Stereoscopic monitors
In-Reply-To: <6BC169D8374E26408B53658874112798F6222745@VNAX.gsi.grci.com>
Message-ID: 

On Fri, Oct 14, 2011 at 11:59 AM, Shropshire, Andrew A wrote:
> Ok, thanks for the information. Perhaps some non-stereoscopic 3D display
> will come along in the future that will be cheap. If I were designing
> the 3D APIs, I would anticipate this and make the projection part of the
> pipeline removable.

Modern games actually don't do the "render left/right" approach, because
they don't have the budget. They use a reprojection technique that takes
a single depth/color image and makes left/right views off of that. In
theory, WebGL could do that during compositing into HTML, although there
is the sticking issue of how to deal with disocclusion (i.e. the parts of
the scene visible from one eye but not in the source render), which tends
to be application-specific. And there's the question of whether it is
worth it: I've spent 5 years developing 3-D displays and don't really see
much progress yet; it's one of those things that is perpetually an
"emerging technology."

> Also, it might be helpful to have more text-handling routines and font
> support, like SVG, to facilitate drawing of text in 3D, so that mundane
> work like improving the look of buttons, scrollbars, etc. in business
> applications could be improved. Maybe this is what Windows 8 will do.

WebGL is in the OpenGL spirit of being fairly low-level and minimal.
This is the kind of thing that you expect to see in a library written on
top of WebGL.
One popular library is three.js:

http://www.aerotwist.com/lab/getting-started-with-three-js/

-Won

From ily...@ Fri Oct 14 10:03:48 2011
From: ily...@ (Ilyes Gouta)
Date: Fri, 14 Oct 2011 18:03:48 +0100
Subject: [Public WebGL] Stereoscopic monitors
Message-ID: 

Hi Won,

Could you please tell a little bit more about the reprojection technique
that's used in games to recreate the depth perception from just one
buffer?

Thanks,

-Ilyes

From won...@ Fri Oct 14 10:25:54 2011
From: won...@ (Won Chun)
Date: Fri, 14 Oct 2011 13:25:54 -0400
Subject: [Public WebGL] Stereoscopic monitors
Message-ID: 

> Hi Won,
>
> Could you please tell a little bit more about the reprojection technique
> that's used in games to recreate the depth perception from just one
> buffer?

http://advances.realtimerendering.com/s2011/index.html

The "Cars 2" and the "CryENGINE 3" talks both mention it, but they were
targeting game consoles. To be clear, I wasn't suggesting implementing
this using WebGL; in particular, accessing the depth buffer is a
challenge. I was suggesting that it could be done in the browser as a
post-process before the scene is composited. And this is all speculation
on my part; I don't actually work on Chrome GPU.

-Won
It's not just a >>> matter of displaying the material correctly. >>> >>> Shropshire, Andrew A wrote: >>> > If I write WebGl and my website has WebGl content, will it appear in 3D on >>> > a >>> > stereoscopic monitor (3D monitor), if I purchase one? Ie is stereoscopic >>> > monitor support a benefit of using WebGl? >>> > >>> > Andrew Shropshire >>> > >>> > AT&T Government Solutions, Inc. >>> > >>> > 703-506-5708 >>> > >>> > shropshire...@ >>> >>> -- Steve From ily...@ Fri Oct 14 10:41:05 2011 From: ily...@ (Ilyes Gouta) Date: Fri, 14 Oct 2011 18:41:05 +0100 Subject: [Public WebGL] Stereoscopic monitors In-Reply-To: References: <6BC169D8374E26408B53658874112798F6222741@VNAX.gsi.grci.com> <1f171e4b1e6afd10bf9630d38eabaf9c.squirrel@webmail.sjbaker.org> <6BC169D8374E26408B53658874112798F6222745@VNAX.gsi.grci.com> Message-ID: Hi Won, Yes, I'm mostly interested in the technique itself for 3DTV applications. But then a browser implementation is definitely plausible, as the depth buffer could be a native surface attachment to the rendering FBO, as is done in the WebKit/Cairo implementation. -Ilyes On Oct 14, 2011 6:25 PM, "Won Chun" wrote: > Hi Won, >> >> Could you please tell a little bit more on the reprojection technique >> that's used in games to recreate the depth perception from just one buffer? >> >> Thanks, >> >> -Ilyes >> > > http://advances.realtimerendering.com/s2011/index.html > > The "Cars 2" and the "CryENGINE 3" talks both mention it, but they were > targeting game consoles. To be clear, I wasn't suggesting to implement this > using WebGL; in particular, accessing the depth buffer is a challenge. I was > suggesting that it could be done in the browser as a post-process before the > scene is composited. > > And this is all speculation on my part; I don't actually work on Chrome > GPU.
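The post-process reprojection Won describes boils down to a per-pixel horizontal shift derived from depth. The sketch below shows only that per-pixel math; the names, sign convention, and pixelScale factor are my own assumptions, and disocclusion filling (the hard, application-specific part Won mentions) is left out entirely.

```javascript
// Depth-based reprojection: synthesize left/right views from a single
// color+depth image by shifting each pixel horizontally by a disparity
// derived from its depth. Only the per-pixel math is shown here; the
// names and the pixelScale convention are illustrative assumptions.
function disparity(depth, eyeSep, convergence, pixelScale) {
  // Zero at the convergence plane, negative (pops out of the screen)
  // in front of it, positive (recedes) behind it.
  return pixelScale * eyeSep * (1 / convergence - 1 / depth);
}

// Where each source pixel x lands in the two synthesized eye views.
function reprojectRow(depthRow, eyeSep, convergence, pixelScale) {
  return depthRow.map((z, x) => {
    const d = disparity(z, eyeSep, convergence, pixelScale);
    return { leftX: x - d / 2, rightX: x + d / 2 };
  });
}
```

In a real implementation this shift would run in a fragment shader over the full frame, which is why access to the depth buffer at compositing time is the sticking point raised in the thread.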
>>>> >>>> Shropshire, Andrew A wrote: >>>> > If I write WebGl and my website has WebGl content, will it appear in >>>> 3D on >>>> > a >>>> > stereoscopic monitor (3D monitor), if I purchase one? Ie is >>>> stereoscopic >>>> > monitor support a benefit of using WebGl? >>>> > >>>> > >>>> > >>>> > Andrew Shropshire >>>> > >>>> > >>>> > >>>> > AT&T Government Solutions, Inc. >>>> > >>>> > 703-506-5708 >>>> > >>>> > shropshire...@ >>>> > >>>> > >>>> > >>>> > >>>> >>>> >>>> -- Steve >>>> >>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Fri Oct 14 10:54:26 2011 From: gma...@ (Gregg Tavares (wrk)) Date: Fri, 14 Oct 2011 10:54:26 -0700 Subject: [Public WebGL] Stereoscopic monitors In-Reply-To: <6BC169D8374E26408B53658874112798F6222745@VNAX.gsi.grci.com> References: <6BC169D8374E26408B53658874112798F6222741@VNAX.gsi.grci.com> <1f171e4b1e6afd10bf9630d38eabaf9c.squirrel@webmail.sjbaker.org> <6BC169D8374E26408B53658874112798F6222745@VNAX.gsi.grci.com> Message-ID: If you'd like to see stereoscopic support the #1 thing you need to do is get the GPU vendors to provide an API that meets the needs of a browser. All of their APIs to date assume there will be 1 (or very few) 3d areas. In other words they were designed for 1 game that takes over the screen or 1 video that is player. That's not a limitation a browser can live with IMO I can have multiple browser windows open and they can all have multiple 3d areas If one of the GPU/3d display vendors provided APIs that were flexible enough I'd certainly consider trying to support it. On Fri, Oct 14, 2011 at 8:59 AM, Shropshire, Andrew A wrote: > Ok thanks for the information. Perhaps some non-stereoscopic 3D display > will come along in the future that will be cheap. If I were designing the > 3D apis, I would anticipate this and make the projection part of the > pipeline removable. 
> To avoid this, the 3D-ness of the scene and the positioning of the camera > and set/lighting design has to be carefully considered. It's not just a > matter of displaying the material correctly. > > Shropshire, Andrew A wrote: > > If I write WebGl and my website has WebGl content, will it appear in 3D > on > > a > > stereoscopic monitor (3D monitor), if I purchase one? Ie is stereoscopic > > monitor support a benefit of using WebGl? > > > > > > > > Andrew Shropshire > > > > > > > > AT&T Government Solutions, Inc. > > > > 703-506-5708 > > > > shropshire...@ > > > > > > > > > > > -- Steve > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shr...@ Fri Oct 14 11:17:57 2011 From: shr...@ (Shropshire, Andrew A) Date: Fri, 14 Oct 2011 14:17:57 -0400 Subject: [Public WebGL] Stereoscopic monitors In-Reply-To: References: <6BC169D8374E26408B53658874112798F6222741@VNAX.gsi.grci.com> <1f171e4b1e6afd10bf9630d38eabaf9c.squirrel@webmail.sjbaker.org> <6BC169D8374E26408B53658874112798F6222745@VNAX.gsi.grci.com> Message-ID: <6BC169D8374E26408B53658874112798F6222748@VNAX.gsi.grci.com> Thanks for your comments. I was impressed with the idea that some sort of layer cake API approach like the TCP/IP protocol stack might apply to graphics so that as a developer working at the top layer, my stuff would work out-of-the box with not only any browser, but various hardware such as a stereoscopic monitor, a regular monitor, or perhaps some future 3D-swept display or laser-voxel display, or whatever. Seems this area is very active and such abstractions do not exist. Would also be nice if the same api could be used for both 3d and 2d. I could then draw a room and put 2D applications on the walls for example, and all this would work in any 3D display device. I could also apply all the baked in physics and lighting stuff to basically 2D interfaces too - ie instead of faking 3D with gradients and fake shadows, have a more realistic 2D interface. 
Also an API that could be bare bones and minimalist, but also have stuff built on that to handle many routine tasks to make things efficient for development. I wouldn't want to write say a spreadsheet application front-end using OpenGL. However, with the convergence of 3d and 2d, I do think there is a need for an api that I could use for a spreadsheet as well as, say, 3d graphing or virtual reality. Andrew Shropshire AT&T Government Solutions, Inc. 703-506-5708 shropshire...@ From: Gregg Tavares (wrk) [mailto:gman...@] Sent: Friday, October 14, 2011 1:54 PM To: Shropshire, Andrew A Cc: steve...@; public_webgl...@ Subject: Re: [Public WebGL] Stereoscopic monitors If you'd like to see stereoscopic support the #1 thing you need to do is get the GPU vendors to provide an API that meets the needs of a browser. All of their APIs to date assume there will be 1 (or very few) 3d areas. In other words they were designed for 1 game that takes over the screen or 1 video that is player. That's not a limitation a browser can live with IMO I can have multiple browser windows open and they can all have multiple 3d areas If one of the GPU/3d display vendors provided APIs that were flexible enough I'd certainly consider trying to support it. On Fri, Oct 14, 2011 at 8:59 AM, Shropshire, Andrew A wrote: Ok thanks for the information. Perhaps some non-stereoscopic 3D display will come along in the future that will be cheap. If I were designing the 3D apis, I would anticipate this and make the projection part of the pipeline removable. Also it might be helpful to have more text handling routines and font support like SVG, to facilitate drawing of text in 3D so that mundane work like improving the look of buttons, scrollbars etc in business applications could be improved. Maybe this is what Windows 8 will do. Andrew Shropshire AT&T Government Solutions, Inc. 
> > 703-506-5708 > > shropshire...@ > > > > -- Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 2281 bytes Desc: not available URL: From ily...@ Fri Oct 14 13:45:28 2011 From: ily...@ (Ilyes Gouta) Date: Fri, 14 Oct 2011 21:45:28 +0100 Subject: [Public WebGL] Stereoscopic monitors In-Reply-To: References: <6BC169D8374E26408B53658874112798F6222741@VNAX.gsi.grci.com> <1f171e4b1e6afd10bf9630d38eabaf9c.squirrel@webmail.sjbaker.org> <6BC169D8374E26408B53658874112798F6222745@VNAX.gsi.grci.com> Message-ID: Hi Gregg, This is doable to some extent for the embedded systems via drawing libraries such as DirectFB, where this one offers to middle-ware such as WebKit a stereo surface w/ separate left and right buffers where rendering can happen (so render to a pixmap). Such a surface can be composited with other classic 2D content before being written into the primary surface (which is stereo too) that serves for final display on screen. So what's really lacking is stereo window management which can be provided either by a dedicated toolkit back-end (in WebKit) or by WebKit itself via a compositing layer that supports stereoscopy (which we could map then to the corresponding toolkit capability). This really can be done like it's almost there. Having the WebGL post-processing to have the left and right buffers ready plus the compositing support would definitely help. -Ilyes On Fri, Oct 14, 2011 at 6:54 PM, Gregg Tavares (wrk) wrote: > If you'd like to see stereoscopic support the #1 thing you need to do is get > the GPU vendors to provide an API that meets the needs of a browser. All of > their APIs to date assume there will be 1 (or very few) 3d areas. 
From joe...@ Sat Oct 15 16:35:58 2011 From: joe...@ (Joe D Williams) Date: Sat, 15 Oct 2011 16:35:58 -0700 Subject: [Public WebGL] Stereoscopic monitors Message-ID: Also see Web3D.org X3D and Khronos.org Collada for some work on techniques and applications you mention.
Best Wishes, Joe From cal...@ Sun Oct 16 19:42:50 2011 From: cal...@ (Mark Callow) Date: Mon, 17 Oct 2011 11:42:50 +0900 Subject: [Public WebGL] Stereoscopic monitors In-Reply-To: References: <6BC169D8374E26408B53658874112798F6222741@VNAX.gsi.grci.com> <1f171e4b1e6afd10bf9630d38eabaf9c.squirrel@webmail.sjbaker.org> <6BC169D8374E26408B53658874112798F6222745@VNAX.gsi.grci.com> Message-ID: <4E9B962A.3010407@hicorp.co.jp> On 15/10/2011 01:26, Won Chun wrote: > On Fri, Oct 14, 2011 at 11:59 AM, Shropshire, Andrew A > > wrote: > > Ok thanks for the information. Perhaps some non-stereoscopic 3D > display > will come along in the future that will be cheap. If I were > designing the > 3D apis, I would anticipate this and make the projection part of the > pipeline removable. > > > Modern games actually don't do the "render left/right" approach > because they don't have the budget. They use a reprojection technique > that takes a single depth/color image and makes left/right views off > of that. Actually autostereo, a.k.a. glasses-free, displays require more than 2 views. For the parallax barrier or lenticular lens type typically somewhere between 5 - 12 are needed. For a holographic display a very very large number of views is needed. Not sure if or how reprojection could be applied to these types of display. > I've spent 5 years developing 3-D displays, and don't really see much > progress yet; it's one of those things that is perpetually an > "emerging technology." Yes. It does seem like that. However at the recent CEATEC show in Tokyo I saw a system called "Holo Table" that projects a 3D image that you can walk around and see with perfect clarity.
It was displaying a model of a convertible sports car with the top down and a young girl getting out of the passenger seat. Except for the table underneath with its spinning disc, it is the closest thing I've seen yet to the holy grail of the Princess Leia effect. Unfortunately the model was not itself animated. The people showing it said the table is capable of displaying 24-30 fps but they do not yet have any suitable content. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From cma...@ Mon Oct 17 15:45:52 2011 From: cma...@ (Chris Marrin) Date: Mon, 17 Oct 2011 15:45:52 -0700 Subject: [Public WebGL] Proposals for two new WebGL extensions In-Reply-To: References: <4E8CAB35.8050300@mozilla.com> <4E8CE6B1.6080002@mozilla.com> Message-ID: <22487EA6-DE55-47CE-946A-2B4CC83EA70C@apple.com> On Oct 12, 2011, at 4:04 PM, Mo, Zhenyao wrote: > Thanks Ken. I've updated the draft for the wrong name in the IDL and the issue resolved. > > So I added the two extensions to the WebGL extension registry: > > WEBGL_debug_renderer_info: extension #6 > WEBGL_debug_shaders: extension #7 > > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/extensions/WEBGL_debug_renderer_info/index.html > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/extensions/WEBGL_debug_shaders/index.html > > They are ready to be implemented. Thanks for all the feedbacks. I'm glad these have been added to the registry. It will put to rest the debate about real vs. obfuscated renderer info. But I think the specs should make more clear what is meant by "privileged code in the browser". This isn't a concept that exists from a standard's standpoint. Perhaps a sentence like: The extension shall be available only to content determined to be privileged by some user agent specific means. would make it a bit more clear? 
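For implementors following along, using the new extension from script would look roughly like this. The wrapper function is a hypothetical helper of mine; the extension name and its UNMASKED_* enums come from the registry entries linked above. Per the privileged-content wording under discussion, unprivileged pages should expect getExtension() to return null, so callers must handle that case.

```javascript
// Query the unmasked GPU vendor/renderer strings through the
// WEBGL_debug_renderer_info extension. Hypothetical helper: it accepts
// any object with the WebGLRenderingContext getExtension/getParameter
// shape, so it can also be exercised against a mock.
function getUnmaskedRendererInfo(gl) {
  const ext = gl.getExtension("WEBGL_debug_renderer_info");
  if (ext === null) {
    // Content the user agent does not consider privileged sees no extension.
    return null;
  }
  return {
    vendor: gl.getParameter(ext.UNMASKED_VENDOR_WEBGL),
    renderer: gl.getParameter(ext.UNMASKED_RENDERER_WEBGL),
  };
}
```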
----- ~Chris cmarrin...@ From cma...@ Mon Oct 17 15:47:39 2011 From: cma...@ (Chris Marrin) Date: Mon, 17 Oct 2011 15:47:39 -0700 Subject: [Public WebGL] Handling context lost in WebGL In-Reply-To: References: Message-ID: <393A3E3A-2ABD-4D9C-85AC-437434D6B818@apple.com> On Oct 12, 2011, at 6:25 PM, Gregg Tavares (wrk) wrote: > I just got through updating all the demos on the Khronos WebGL SDK to handle context lost > > I wrote up some of the issues I ran into and their solutions and put them on the WebGL Wiki here > > http://www.khronos.org/webgl/wiki/HandlingContextLost > > I hope that's useful. Is the flowery prose at the beginning of the page really necessary? I think everyone already thinks WebGL is A Good Thing (tm) :-) ----- ~Chris cmarrin...@ From zhe...@ Tue Oct 18 09:44:13 2011 From: zhe...@ (Mo, Zhenyao) Date: Tue, 18 Oct 2011 09:44:13 -0700 Subject: [Public WebGL] Proposals for two new WebGL extensions In-Reply-To: <22487EA6-DE55-47CE-946A-2B4CC83EA70C@apple.com> References: <4E8CAB35.8050300@mozilla.com> <4E8CE6B1.6080002@mozilla.com> <22487EA6-DE55-47CE-946A-2B4CC83EA70C@apple.com> Message-ID: On Mon, Oct 17, 2011 at 3:45 PM, Chris Marrin wrote: > > On Oct 12, 2011, at 4:04 PM, Mo, Zhenyao wrote: > > > Thanks Ken. I've updated the draft for the wrong name in the IDL and the issue is resolved.
> > > > So I added the two extensions to the WebGL extension registry: > > > > WEBGL_debug_renderer_info: extension #6 > > WEBGL_debug_shaders: extension #7 > > > > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/extensions/WEBGL_debug_renderer_info/index.html > > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/extensions/WEBGL_debug_shaders/index.html > > > > They are ready to be implemented. Thanks for all the feedback. > I'm glad these have been added to the registry. It will put to rest the debate about real vs. obfuscated renderer info. But I think the specs should make more clear what is meant by "privileged code in the browser". This isn't a concept that exists from a standards standpoint. > Perhaps a sentence like: > The extension shall be available only to content determined to be privileged by some user agent specific means. > would make it a bit more clear? Thanks for the advice. I've updated the specs. > ----- > ~Chris > cmarrin...@ From RTr...@ Thu Oct 20 10:10:44 2011 From: RTr...@ (Robert Tray) Date: Thu, 20 Oct 2011 10:10:44 -0700 Subject: [Public WebGL] Issues with WebGL Conformance 1.0.1b Message-ID: The current webgl-conformance-tests.html fails a few tests that I'm hoping can be remedied with test changes. 1) Context/context-attributes-alpha-depth-stencil-antialias.html: The test makes assumptions about the location of attributes 'pos' and 'colorIn'. The test is missing either a bindAttribLocation before the program link or a getAttribLocation after. 2) Glsl/functions/glsl-function-mod-gentype.html: The test is enhanced over the GLES2 version of conformance by using two different modulo divisors that are more unique than the '1.0' used in GLES2 conformance. That's a good thing. However the hardware I'm testing on has a 1 bit lsb difference between the built-in and the emulated shader image results.
The 1 bit lsb delta happens at the top and bottom extremes for the 'y' component. I would like to request a 'tolerance:1,' parameter be used for glsl-function-mod-gentype.html so that it will pass.

3) Glsl/functions/glsl-function-normalize.html: The test is enhanced over the GLES2 version of conformance by extending the range of the vector components. This seems like a good thing too. But I am seeing a 1 bit lsb difference for a few pixels when comparing the results for the built-in versus emulated shaders. I would like to request a 'tolerance:1,' parameter for the glsl-function-normalize.html test too.

Robert Tray

From gma...@ Thu Oct 20 11:06:06 2011
From: gma...@ (Gregg Tavares (wrk))
Date: Thu, 20 Oct 2011 11:06:06 -0700
Subject: [Public WebGL] Issues with WebGL Conformance 1.0.1b
In-Reply-To:
References:
Message-ID:

Thank you for reviewing these.

On Thu, Oct 20, 2011 at 10:10 AM, Robert Tray wrote:

> The current webgl-conformance-tests.html fails a few tests that I'm hoping can be remedied with test changes.
>
> 1) Context/context-attributes-alpha-depth-stencil-antialias.html: The test makes assumptions about the locations of the attributes 'pos' and 'colorIn'. The test is missing either a bindAttribLocation before the program link or a getAttribLocation after.

this test actually does call bindAttribLocation in the utility function createProgram

> 2) Glsl/functions/glsl-function-mod-gentype.html: The test is enhanced over the GLES2 version of conformance by using two different modulo divisors that are more distinctive than the '1.0' used in GLES2 conformance. That's a good thing. However, the hardware I'm testing on has a 1 bit lsb difference between the built-in and the emulated shader image results. The 1 bit lsb delta happens at the top and bottom extremes for the 'y' component. I would like to request a 'tolerance:1,'
> parameter be used for glsl-function-mod-gentype.html so that it will pass.

done

> 3) Glsl/functions/glsl-function-normalize.html: The test is enhanced over the GLES2 version of conformance by extending the range of the vector components. This seems like a good thing too. But I am seeing a 1 bit lsb difference for a few pixels when comparing the results for the built-in versus emulated shaders. I would like to request a 'tolerance:1,' parameter for the glsl-function-normalize.html test too.

done

> Robert Tray

From RTr...@ Thu Oct 20 11:12:15 2011
From: RTr...@ (Robert Tray)
Date: Thu, 20 Oct 2011 11:12:15 -0700
Subject: [Public WebGL] Issues with WebGL Conformance 1.0.1b
In-Reply-To:
References:
Message-ID:

Thanks Gregg.

> this test actually does call bindAttribLocation in the utility function createProgram

You're right. I missed that. Now that I think about it, there was an unrelated problem fixed after I debugged that test that would explain what I saw.

Thanks for the quick action.

Robert Tray

From cma...@ Fri Oct 21 08:20:26 2011
From: cma...@ (Chris Marrin)
Date: Fri, 21 Oct 2011 08:20:26 -0700
Subject: [Public WebGL] ARB_robustness and array bounds checking
Message-ID: <3AF7E570-2BE9-4D25-BAA5-2F4EA30CD801@apple.com>

Currently the WebKit implementation of WebGL keeps around the current index array so it can be used to do bounds checking on draw calls. This is mandated in section 6.4 of the spec. The ARB_robustness extension requires that out-of-bounds array accesses be forbidden, but by my reading it doesn't require the behavior mandated in the WebGL spec. It simply guarantees that no fetches outside the array will happen. Compliant behavior would be to simply return 0 values for these accesses. So bounds checking on the WebGL side would still be required.
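The section 6.4 bookkeeping Chris describes is a small amount of work per draw call. A minimal sketch in plain JavaScript (the helper names are hypothetical, not the actual WebKit code):

```javascript
// Sketch of the WebGL-side bounds check mandated by spec section 6.4
// (hypothetical helper names, not the actual WebKit implementation):
// the implementation remembers the element array buffer's contents so a
// drawElements call can be rejected before any out-of-range fetch
// reaches the driver.
function maxIndexInRange(indices, offset, count) {
  // Scan only the indices this draw call would consume.
  let max = -1;
  for (let i = offset; i < offset + count; ++i) {
    if (indices[i] > max) max = indices[i];
  }
  return max;
}

function drawCallInBounds(indices, offset, count, vertexCount) {
  // Safe only if every referenced index fits within the smallest
  // enabled vertex attribute array (vertexCount vertices).
  return maxIndexInRange(indices, offset, count) < vertexCount;
}

const indices = new Uint16Array([0, 1, 2, 2, 3, 9]);
drawCallInBounds(indices, 0, 3, 4); // true: first triangle uses indices 0..2
drawCallInBounds(indices, 3, 3, 4); // false: index 9 is out of range
```

A real implementation would cache the maximum per buffer and invalidate it on bufferData/bufferSubData rather than rescanning on every draw call.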
If this is the case, I think we should make changes so drivers implementing this extension can avoid the WebGL-side bounds checking. I'm not sure it's practical to change the ARB_robustness extension at this point. So I would be in favor of changing section 6.4 of the spec to match the behavior required in the extension.

-----
~Chris
cmarrin...@

From cma...@ Fri Oct 21 08:37:10 2011
From: cma...@ (Chris Marrin)
Date: Fri, 21 Oct 2011 08:37:10 -0700
Subject: [Public WebGL] Conformance tests for
Message-ID:

I don't see any conformance tests for the Appendix A, section 4 "Control Flow" requirements. This section requires that for loops be constrained so "the maximum number of iterations can easily be determined at compile time". Are they there and I haven't yet found them, or are they not there yet?

Also, does ANGLE currently validate for these constraints?

~Chris
cmarrin...@

From gma...@ Fri Oct 21 10:02:48 2011
From: gma...@ (Gregg Tavares (wrk))
Date: Fri, 21 Oct 2011 10:02:48 -0700
Subject: [Public WebGL] ARB_robustness and array bounds checking
In-Reply-To: <3AF7E570-2BE9-4D25-BAA5-2F4EA30CD801@apple.com>
References: <3AF7E570-2BE9-4D25-BAA5-2F4EA30CD801@apple.com>
Message-ID:

On Fri, Oct 21, 2011 at 8:20 AM, Chris Marrin wrote:
>
> Currently the WebKit implementation of WebGL keeps around the current index array so it can be used to do bounds checking on draw calls. This is mandated in section 6.4 of the spec. The ARB_robustness extension requires that out-of-bounds array accesses be forbidden. But by my reading it doesn't require the behavior mandated in the WebGL spec. They simply guarantee that no fetches outside the array will happen. Compliant behavior would be to simply return 0 values for these accesses. So bounds checking on the WebGL side would still be required.
> If this is the case, I think we should make changes so drivers implementing this extension can avoid the WebGL-side bounds checking. I'm not sure it's practical to change the ARB_robustness extension at this point. So I would be in favor of changing section 6.4 of the spec to match the behavior required in the extension.

If I understand correctly, the issue with ARB_robustness is that there is no guarantee the driver is obeying anything. It doesn't report errors for out-of-bounds access, so there is no way to test that it's actually working. For WebGL we wanted something testable, as far as I remember.

> -----
> ~Chris
> cmarrin...@

From kbr...@ Fri Oct 21 11:10:31 2011
From: kbr...@ (Kenneth Russell)
Date: Fri, 21 Oct 2011 11:10:31 -0700
Subject: [Public WebGL] ARB_robustness and array bounds checking
In-Reply-To:
References: <3AF7E570-2BE9-4D25-BAA5-2F4EA30CD801@apple.com>
Message-ID:

On Fri, Oct 21, 2011 at 10:02 AM, Gregg Tavares (wrk) wrote:
>
> On Fri, Oct 21, 2011 at 8:20 AM, Chris Marrin wrote:
>>
>> Currently the WebKit implementation of WebGL keeps around the current index array so it can be used to do bounds checking on draw calls. This is mandated in section 6.4 of the spec. The ARB_robustness extension requires that out-of-bounds array accesses be forbidden. But by my reading it doesn't require the behavior mandated in the WebGL spec. They simply guarantee that no fetches outside the array will happen. Compliant behavior would be to simply return 0 values for these accesses. So bounds checking on the WebGL side would still be required.
>>
>> If this is the case, I think we should make changes so drivers implementing this extension can avoid the WebGL-side bounds checking. I'm not sure it's practical to change the ARB_robustness extension at this point.
>> So I would be in favor of changing section 6.4 of the spec to match the behavior required in the extension.
>
> If I understand correctly, the issue with ARB_robustness is that there is no guarantee the driver is obeying anything. It doesn't report errors for out-of-bounds access, so there is no way to test that it's actually working. For WebGL we wanted something testable, as far as I remember.

This is my recollection as well. There have been discussions in some of the working groups about strengthening the guarantees in ARB_robustness' robust buffer access: for example, either clamping the access to within the buffer, or returning a specified constant value, so that the extension's behavior is testable. Otherwise the best test that can be written is one which accesses wildly out-of-range values and passes if it doesn't crash, which isn't very good.

-Ken

From kbr...@ Fri Oct 21 12:35:02 2011
From: kbr...@ (Kenneth Russell)
Date: Fri, 21 Oct 2011 12:35:02 -0700
Subject: [Public WebGL] Conformance tests for
In-Reply-To:
References:
Message-ID:

On Fri, Oct 21, 2011 at 8:37 AM, Chris Marrin wrote:
>
> I don't see any conformance tests for the Appendix A, section 4 "Control Flow" requirements. This section requires that for loops be constrained so "the maximum number of iterations can easily be determined at compile time". Are they there and I haven't yet found them, or are they not there yet?

Looking back, I thought that we had added tests for these when ANGLE was updated to validate them, but it looks like I was wrong.
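For readers following along, the Appendix A "Control Flow" constraint under discussion effectively limits a for loop's condition to a comparison against a constant expression. The toy check below illustrates the idea only; it is not ANGLE's real validator, and it only recognizes the simple `i < bound` shape:

```javascript
// Toy illustration of the WebGL 1.0 Appendix A "Control Flow" rule (NOT
// ANGLE's real validator): a for loop's iteration count must be
// determinable at compile time, so the loop condition must compare the
// index against a constant expression. This naive check only handles
// loops of the shape "for (int i = <literal>; i < <bound>; ...)".
function hasCompileTimeBoundedLoops(shaderSource) {
  const loopHeader = /for\s*\(\s*int\s+(\w+)\s*=\s*-?\d+\s*;\s*\1\s*[<>]=?\s*([^;]+);/g;
  let match;
  while ((match = loopHeader.exec(shaderSource)) !== null) {
    const bound = match[2].trim();
    if (!/^-?\d+(\.\d+)?$/.test(bound)) return false; // bound is not a literal
  }
  return true;
}

// A loop the rule accepts: the bound is a constant.
const constantBound = `void main() {
  float sum = 0.0;
  for (int i = 0; i < 8; ++i) { sum += 1.0; }
  gl_FragColor = vec4(sum);
}`;

// A loop the rule rejects: the bound is a uniform, so the iteration
// count cannot be determined at compile time (compare Ken's
// shader-with-uniform-in-loop-condition test).
const uniformBound = `uniform int count;
void main() {
  float sum = 0.0;
  for (int i = 0; i < count; ++i) { sum += 1.0; }
  gl_FragColor = vec4(sum);
}`;

hasCompileTimeBoundedLoops(constantBound); // true
hasCompileTimeBoundedLoops(uniformBound);  // false
```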
I've just committed (simple) tests for these into sdk/tests/conformance/glsl/misc/:

shader-with-limited-indexing.frag.html
shader-with-arbitrary-indexing.vert.html
shader-with-uniform-in-loop-condition.vert.html
shader-with-arbitrary-indexing.frag.html

> Also, does ANGLE currently validate for these constraints?

Yes. The above tests pass with the current ANGLE.

-Ken

From gma...@ Sat Oct 22 22:38:42 2011
From: gma...@ (Gregg Tavares (wrk))
Date: Sat, 22 Oct 2011 22:38:42 -0700
Subject: [Public WebGL] Texture Compression in WebGL
Message-ID:

If you're a developer making a WebGL app, please tell us your needs and opinions regarding WebGL supporting texture compression.

There's been some discussion of this among the various implementers of WebGL, but it would be helpful to have more input from actual developers.

Thank you

-gregg

From rko...@ Sun Oct 23 00:30:30 2011
From: rko...@ (Kornmann, Ralf)
Date: Sun, 23 Oct 2011 08:30:30 +0100
Subject: Re: [Public WebGL] Texture Compression in WebGL
In-Reply-To:
References:
Message-ID: <9B3BEF16CBC82A45900B3F159DF91596017521BD44F7@EU-MAIL-1-1.rws.ad.ea.com>

Hi Gregg,

Thank you for this opportunity. While evaluating different techniques to bring 3D games to the browser, the missing texture compression support was one of the cons for WebGL. There are three primary reasons why we would like to see compression support:

1. Reducing the download time for the players. We still have many customers with slower (~1 Mbit/s) connections.
2. Reducing the bandwidth cost.
This may not be a problem for many sites today, but if we deliver game content to millions of people it would require quite an amount of bandwidth, which is not free.
3. Improving performance for people with lower-class GFX hardware. To be honest, I am not sure if we will run into memory limitations before we hit the limit of the script engine, but as the script engines improve faster than the hardware, it is likely.

Therefore I would like to see the following features for compressed texture support:

1. Allow downloading data for different compression methods separately. We already can (and need to) handle this for different sound formats, so there is no magic behind this. A combined storage for all formats could be fine for people who don't care that much about bandwidth, but as said, we prefer to transfer only data to the client that can be used there.
2. Allow separate transfer of the different mip levels. This way we could stream the lower resolutions first.
3. Make it somewhat generic. I think the standard doesn't need to list the supported formats itself. Just a way to tell the application what is supported would be enough. This way there would be no need to update the standard if a new format shows up.

-Ralf
From tu...@ Sun Oct 23 03:13:58 2011
From: tu...@ (Thatcher Ulrich)
Date: Sun, 23 Oct 2011 06:13:58 -0400
Subject: [Public WebGL] Texture Compression in WebGL
In-Reply-To:
References:
Message-ID:

Things I would like:

* The ability to transcode from a wire format (e.g. jpg) into the GPU compressed format without blocking the main thread (i.e. an async interface, taking advantage of multicore where available).
* Some way to express channel and quality preferences when picking an available GPU format. I don't know what this interface should look like since it's a bit tricky. Part of me would like to just say "give me a color format with at least k bits of alpha", and part of me wants an enumeration of the specific formats that are available so I can write my code to choose among them, using specific knowledge of the formats.
* The ability to cache the transcoded texture via the File API and efficiently get it back into WebGL. Likewise, the ability to pull prebaked compressed assets from the network and pass them to WebGL.

-T

On Oct 23, 2011 1:38 AM, "Gregg Tavares (wrk)" wrote:

> If you're a developer making a WebGL app, please tell us your needs and opinions regarding WebGL supporting texture compression.
>
> There's been some discussion of this among the various implementers of WebGL, but it would be helpful to have more input from actual developers.
>
> Thank you
>
> -gregg

From cvi...@ Sun Oct 23 03:37:06 2011
From: cvi...@ (Cedric Vivier)
Date: Sun, 23 Oct 2011 18:37:06 +0800
Subject: [Public WebGL] Texture Compression in WebGL
In-Reply-To: <9B3BEF16CBC82A45900B3F159DF91596017521BD44F7@EU-MAIL-1-1.rws.ad.ea.com>
References: <9B3BEF16CBC82A45900B3F159DF91596017521BD44F7@EU-MAIL-1-1.rws.ad.ea.com>
Message-ID:

Hey,

On Sun, Oct 23, 2011 at 15:30, Kornmann, Ralf wrote:
> 1. Reducing the download time for the players. We still have many customers with slower (~1 Mbit/s) connections.
> 2. Reducing the bandwidth cost. This may not be a problem for many sites today, but if we deliver game content to millions of people it would require quite an amount of bandwidth, which is not free.

Please let's not be confused here.

Texture compression has not much to do with improving network download times with respect to network bandwidth; it can even be the opposite (without adding counter-measures like, say, transferring with gzip content encoding).

An uncompressed texture (i.e. loaded from a PNG or JPEG) usually has much better compression ratios than a regular ETC/DXT compressed texture (i.e. 6:1).

Of course, for _GPU memory_ bandwidth/upload time on the other hand, compressed textures help a lot.

Regards,

From won...@ Sun Oct 23 15:46:19 2011
From: won...@ (Won Chun)
Date: Sun, 23 Oct 2011 18:46:19 -0400
Subject: [Public WebGL] Texture Compression in WebGL
In-Reply-To:
References: <9B3BEF16CBC82A45900B3F159DF91596017521BD44F7@EU-MAIL-1-1.rws.ad.ea.com>
Message-ID:

1) Prefer an existing compressed texture format. The problem with texture compression formats is that they all make different size/quality tradeoffs. It isn't so much that you have to manage many versions, but that you also have to qualify them and their different kinds of artifacts. I understand this is mostly not a technical problem, but maybe the IP issues are resolvable. That being said, I would rather have to query for supported formats than rely on a standard, novel format just for WebGL.

2) +1 to Thatcher's transcoding. One possible advantage here is that if it is part of WebGL, then the transcoding could actually be hardware accelerated (this could take many forms).
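As a side note, the arithmetic behind the roughly 6:1 figure Cedric mentions is easy to sketch, assuming DXT1-style storage of 8 bytes per 4x4 texel block (the function names below are made up for illustration):

```javascript
// Rough numbers behind the fixed compression ratio under discussion
// (illustrative helpers, assuming DXT1-style storage: each 4x4 texel
// block costs 8 bytes, i.e. half a byte per texel).
function rawRgbBytes(width, height) {
  return width * height * 3; // 24-bit RGB, 3 bytes per texel
}

function dxt1Bytes(width, height) {
  // Dimensions round up to whole 4x4 blocks.
  return Math.ceil(width / 4) * Math.ceil(height / 4) * 8;
}

const raw = rawRgbBytes(1024, 1024); // 3145728 bytes
const dxt = dxt1Bytes(1024, 1024);   // 524288 bytes
raw / dxt;                           // 6, a constant ratio regardless of content
```

JPEG has no such fixed ceiling, which is why it can beat 6:1 on the wire even though it still decompresses to full-size RGB in GPU memory.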
I believe Rage has the option to use CUDA to transcode compressed (JPEG XR?) textures on-the-fly. As Thatcher points out, there are lots of details to work out, and I'm not really sure how these get exposed to the developer. For example:

* How much time should be spent transcoding? For certain formats, like DXT1, you can spend a lot of time trying to find the highest-quality encoding. I suppose a good way to do this is with hints.
* Is the expectation that the transcoding goes from RGB to RGB? You can save on color-space conversion work if you support keeping things in the encoded color space. There are more complex cases. It is possible to use DXT5's alpha as a luminance channel, and RGB as chrominance. This is generally a higher-quality format than DXT1.
* What about chroma subsampling? Or more generally, encoding different channels at different resolutions? JPEG (optionally) and WebP start off this way, and some platforms even support chroma-subsampled textures. You could give the option to encode luminance and chrominance at different resolutions by using multiple textures (e.g. luminance as full-resolution 8-bit, chrominance at quarter-resolution compressed). You could even set their texture filtering differently through texture LOD bias, or anisotropic filtering if it were supported. This last bit may be a bit aggressive, but it is what I would think of if I were writing a native app.

3) Normal map compression.

-Won

On Sun, Oct 23, 2011 at 6:37 AM, Cedric Vivier wrote:
>
> Hey,
>
> On Sun, Oct 23, 2011 at 15:30, Kornmann, Ralf wrote:
> > 1. Reducing the download time for the players. We still have many customers with slower (~1 Mbit/s) connections.
> > 2. Reducing the bandwidth cost. This may not be a problem for many sites today, but if we deliver game content to millions of people it would require quite an amount of bandwidth, which is not free.
>
> Please let's not be confused here.
>
> Texture compression has not much to do with improving network download times with respect to network bandwidth; it can even be the opposite (without adding counter-measures like, say, transferring with gzip content encoding).
>
> An uncompressed texture (i.e. loaded from a PNG or JPEG) usually has much better compression ratios than a regular ETC/DXT compressed texture (i.e. 6:1).
>
> Of course, for _GPU memory_ bandwidth/upload time on the other hand, compressed textures help a lot.
>
> Regards,

From cma...@ Mon Oct 24 09:48:09 2011
From: cma...@ (Chris Marrin)
Date: Mon, 24 Oct 2011 09:48:09 -0700
Subject: [Public WebGL] Conformance tests for
In-Reply-To:
References:
Message-ID:

On Oct 21, 2011, at 12:35 PM, Kenneth Russell wrote:
>
> On Fri, Oct 21, 2011 at 8:37 AM, Chris Marrin wrote:
>>
>> I don't see any conformance tests for the Appendix A, section 4 "Control Flow" requirements. This section requires that for loops be constrained so "the maximum number of iterations can easily be determined at compile time". Are they there and I haven't yet found them, or are they not there yet?
>
> Looking back, I thought that we had added tests for these when ANGLE was updated to validate them, but it looks like I was wrong.
>
> I've just committed (simple) tests for these into sdk/tests/conformance/glsl/misc/:
> shader-with-limited-indexing.frag.html
> shader-with-arbitrary-indexing.vert.html
> shader-with-uniform-in-loop-condition.vert.html
> shader-with-arbitrary-indexing.frag.html
>
>> Also, does ANGLE currently validate for these constraints?
>
> Yes.
> The above tests pass with the current ANGLE.

Perfect. Thanks, Ken.

-----
~Chris
cmarrin...@

From cma...@ Mon Oct 24 09:59:01 2011
From: cma...@ (Chris Marrin)
Date: Mon, 24 Oct 2011 09:59:01 -0700
Subject: [Public WebGL] Texture Compression in WebGL
In-Reply-To: <9B3BEF16CBC82A45900B3F159DF91596017521BD44F7@EU-MAIL-1-1.rws.ad.ea.com>
References: <9B3BEF16CBC82A45900B3F159DF91596017521BD44F7@EU-MAIL-1-1.rws.ad.ea.com>
Message-ID:

On Oct 23, 2011, at 12:30 AM, Kornmann, Ralf wrote:

> Hi Gregg,
>
> Thank you for this opportunity. While evaluating different techniques to bring 3D games to the browser, the missing texture compression support was one of the cons for WebGL. There are three primary reasons why we would like to see compression support:
>
> 1. Reducing the download time for the players. We still have many customers with slower (~1 Mbit/s) connections.
> 2. Reducing the bandwidth cost. This may not be a problem for many sites today, but if we deliver game content to millions of people it would require quite an amount of bandwidth, which is not free.

But ETC texture compression only gives you 6x. You can easily beat that with high quality using JPEG. Downloading of JPEG (and PNG and GIF) compressed images is supported today.

> 3. Improving performance for people with lower-class GFX hardware. To be honest, I am not sure if we will run into memory limitations before we hit the limit of the script engine, but as the script engines improve faster than the hardware, it is likely.

I understand that this is the biggest issue for some hardware. The problem I see is that I don't think there is any texture compression format supported universally.

> Therefore I would like to see the following features for compressed texture support:
>
> 1. Allow downloading data for different compression methods separately.
> We already can (and need to) handle this for different sound formats, so there is no magic behind this. A combined storage for all formats could be fine for people who don't care that much about bandwidth, but as said, we prefer to transfer only data to the client that can be used there.
> 2. Allow separate transfer of the different mip levels. This way we could stream the lower resolutions first.

Aside from the compression issue, you can download different mip levels separately today.

> 3. Make it somewhat generic. I think the standard doesn't need to list the supported formats itself. Just a way to tell the application what is supported would be enough. This way there would be no need to update the standard if a new format shows up.

There's an issue we've discussed before, but I don't think we had the benefit of game developer input. Would it be reasonable to add a hint that would say "please use texture compression if available, and I understand this may result in somewhat lower quality or performance issues for the initial runtime compression"? Or is the actual compression of the texture too finely tuned for a runtime algorithm to do a reasonable job? Because if we were able to do that, we could use the texture compression capabilities of the hardware without the author needing to deal with the details.

-----
~Chris
cmarrin...@
From cma...@ Mon Oct 24 10:05:27 2011
From: cma...@ (Chris Marrin)
Date: Mon, 24 Oct 2011 10:05:27 -0700
Subject: [Public WebGL] ARB_robustness and array bounds checking
In-Reply-To:
References: <3AF7E570-2BE9-4D25-BAA5-2F4EA30CD801@apple.com>
Message-ID:

On Oct 21, 2011, at 11:10 AM, Kenneth Russell wrote:
>
> On Fri, Oct 21, 2011 at 10:02 AM, Gregg Tavares (wrk) wrote:
>>
>> On Fri, Oct 21, 2011 at 8:20 AM, Chris Marrin wrote:
>>>
>>> Currently the WebKit implementation of WebGL keeps around the current index array so it can be used to do bounds checking on draw calls. This is mandated in section 6.4 of the spec. The ARB_robustness extension requires that out-of-bounds array accesses be forbidden. But by my reading it doesn't require the behavior mandated in the WebGL spec. They simply guarantee that no fetches outside the array will happen. Compliant behavior would be to simply return 0 values for these accesses. So bounds checking on the WebGL side would still be required.
>>>
>>> If this is the case, I think we should make changes so drivers implementing this extension can avoid the WebGL-side bounds checking. I'm not sure it's practical to change the ARB_robustness extension at this point. So I would be in favor of changing section 6.4 of the spec to match the behavior required in the extension.
>>
>> If I understand correctly, the issue with ARB_robustness is that there is no guarantee the driver is obeying anything. It doesn't report errors for out-of-bounds access, so there is no way to test that it's actually working. For WebGL we wanted something testable, as far as I remember.
>
> This is my recollection as well. There have been discussions in some of the working groups about strengthening the guarantees in ARB_robustness' robust buffer access: for example, either clamping the access to within the buffer, or returning a specified constant value, so that the extension's behavior is testable.
> Otherwise the best test that can be written is one which accesses wildly out-of-range values and passes if it doesn't crash, which isn't very good.

I'm not sure I understand the issue. ARB_robustness states that out-of-bounds accesses are forbidden. Do we care if an error is reported or not? The only thing that would help with is debugging. And we can always add a debugging extension, like the two we already have, which would do bounds checking and report errors as stated in today's spec. But it would be nice if we could avoid this overhead for hardware that complies with ARB_robustness.

-----
~Chris
cmarrin...@

From gma...@ Mon Oct 24 10:08:44 2011
From: gma...@ (Gregg Tavares (wrk))
Date: Mon, 24 Oct 2011 10:08:44 -0700
Subject: [Public WebGL] Texture Compression in WebGL
In-Reply-To:
References: <9B3BEF16CBC82A45900B3F159DF91596017521BD44F7@EU-MAIL-1-1.rws.ad.ea.com>
Message-ID:

On Mon, Oct 24, 2011 at 9:59 AM, Chris Marrin wrote:
>
> On Oct 23, 2011, at 12:30 AM, Kornmann, Ralf wrote:
>
> > Hi Gregg,
> >
> > Thank you for this opportunity. While evaluating different techniques to bring 3D games to the browser, the missing texture compression support was one of the cons for WebGL. There are three primary reasons why we would like to see compression support:
> >
> > 1. Reducing the download time for the players. We still have many customers with slower (~1 Mbit/s) connections.
> > 2. Reducing the bandwidth cost. This may not be a problem for many sites today, but if we deliver game content to millions of people it would require quite an amount of bandwidth, which is not free.
>
> But ETC texture compression only gives you 6x. You can easily beat that with high quality using JPEG. Downloading of JPEG (and PNG and GIF) compressed images is supported today.

JPEG only covers RGB.
Nearly all games need RGBA textures, and not just for color, and there is no lossy compression format on the web that covers that case. So lots of games will get a bandwidth savings from compressed textures.

> > 3. Improving performance for people with lower-class GFX hardware. To be honest, I am not sure if we will run into memory limitations before we hit the limit of the script engine, but as the script engines improve faster than the hardware, it is likely.
>
> I understand that this is the biggest issue for some hardware. The problem I see is that I don't think there is any texture compression format supported universally.
>
> > Therefore I would like to see the following features for compressed texture support:
> >
> > 1. Allow downloading data for different compression methods separately. We already can (and need to) handle this for different sound formats, so there is no magic behind this. A combined storage for all formats could be fine for people who don't care that much about bandwidth, but as said, we prefer to transfer only data to the client that can be used there.
> > 2. Allow separate transfer of the different mip levels. This way we could stream the lower resolutions first.
>
> Aside from the compression issue, you can download different mip levels separately today.
>
> > 3. Make it somewhat generic. I think the standard doesn't need to list the supported formats itself. Just a way to tell the application what is supported would be enough. This way there would be no need to update the standard if a new format shows up.
>
> There's an issue we've discussed before, but I don't think we had the benefit of game developer input. Would it be reasonable to add a hint that would say "please use texture compression if available, and I understand this may result in somewhat lower quality or performance issues for the initial runtime compression"?
Or is the actual compression of the texture too finely tuned for a runtime algorithm to do a reasonable job? Because if we were able to do that, we could use the texture compression capabilities of the hardware without the author needing to deal with the details.

----- ~Chris cmarrin...@

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From gma...@ Mon Oct 24 10:14:42 2011 From: gma...@ (Gregg Tavares (wrk)) Date: Mon, 24 Oct 2011 10:14:42 -0700 Subject: [Public WebGL] ARB_robustness and array bounds checking In-Reply-To: References: <3AF7E570-2BE9-4D25-BAA5-2F4EA30CD801@apple.com> Message-ID:

On Mon, Oct 24, 2011 at 10:05 AM, Chris Marrin wrote:
> On Oct 21, 2011, at 11:10 AM, Kenneth Russell wrote:
> > On Fri, Oct 21, 2011 at 10:02 AM, Gregg Tavares (wrk) wrote:
> > > On Fri, Oct 21, 2011 at 8:20 AM, Chris Marrin wrote:
> > > Currently the WebKit implementation of WebGL keeps around the current index array so it can be used to do bounds checking on draw calls. This is mandated in section 6.4 of the spec. The ARB_robustness extension requires that out-of-bounds array accesses be forbidden. But by my reading it doesn't require the behavior mandated in the WebGL spec. It simply guarantees that no fetches outside the array will happen. Compliant behavior would be to simply return 0 values for these accesses. So bounds checking on the WebGL side would still be required.
> > > If this is the case, I think we should make changes so drivers implementing this extension can avoid the WebGL-side bounds checking. I'm not sure it's practical to change the ARB_robustness extension at this point. So I would be in favor of changing section 6.4 of the spec to match the behavior required in the extension.
> >
> > If I understand correctly, the issue with ARB_robustness is there is no guarantee the driver is obeying anything.
It doesn't report errors for out-of-bounds access, so there is no way to test that it's actually working. For WebGL we wanted something testable, as far as I remember.
>
> This is my recollection as well. There have been discussions in some of the working groups about strengthening the guarantees in ARB_robustness' robust buffer access: for example, either clamping the access to within the buffer, or returning a specified constant value, so that the extension's behavior is testable. Otherwise the best test that can be written is one which accesses wildly out-of-range values and passes if it doesn't crash, which isn't very good.
>
> I'm not sure I understand the issue. ARB_robustness states that out-of-bounds accesses are forbidden. Do we care if an error is reported or not? The only thing that would help with is debugging. And we can always add a debugging extension, like the two we already have, which would do bounds checking and report errors as stated in today's spec. But it would be nice if we could avoid this overhead for hardware that complies with ARB_robustness.

Not to be snarky, but seriously? How many driver bugs have we found? And we want to trust that some untestable feature is actually working? If it's not testable then it's not tested. If it's not tested it's not working.

It would also be nice if the behavior were consistent, because if drivers handle this differently (some return a specific value, others return the ends of arrays, others return a different value), you can be sure that people will ship apps, see that they work on their own hardware, and only later find that they draw crazy stuff on other hardware.

> ----- > ~Chris > cmarrin...@

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From rko...@ Mon Oct 24 10:48:36 2011 From: rko...@ (Kornmann, Ralf) Date: Mon, 24 Oct 2011 18:48:36 +0100 Subject: AW: [Public WebGL] Texture Compression in WebGL In-Reply-To: References: <9B3BEF16CBC82A45900B3F159DF91596017521BD44F7@EU-MAIL-1-1.rws.ad.ea.com>, Message-ID: <9B3BEF16CBC82A45900B3F159DF91596017521BD44F8@EU-MAIL-1-1.rws.ad.ea.com>

Hi Chris,

I know that image compression formats can give better compression results, but there is another problem that we already face with our 2D HTML5 Canvas games. The number of files is always increasing, and therefore we like to pack assets (geometry data, textures, and whatever else we need) into blobs. I might have overlooked it, but so far I haven't seen an API that allows decoding compressed images that are delivered as a blob. I know there is a way by using data URLs, but this looks not nice to me. I also don't want to port JPEG/PNG reader code to JavaScript if it can be avoided. I haven't checked it yet, but I hope that binary HTTP requests will support compression like the current text-based ones, too. We already make heavy use of this for JSON and Base91-encoded data. (I know this is not your stuff, as we talk about WebGL here.)

Please don't get me wrong on the format side. As I already said, as professional game developers we are very used to working with different hardware that supports different formats. We solve such problems with asset pipelines. Therefore we would have no problem generating assets for different formats if necessary. Based on my DXT experience, artists are in general not very happy with the results of runtime compression. They prefer to do it offline with heavier calculations that provide better results. Besides this, online compression increases the startup time. Maybe not the first time, when we still need to fetch data from the network, but when you play again from the cache. In the web game business it is quite common that people play games multiple times if they like them.
We have optimized our file handling to a level where we can reach a 100% local file cache hit rate for static data without a single request to the CDN. OK, this is no magic, but we have seen many websites that don't care about this area at all. This startup time problem might occur with shaders, too. But that is another thing that might be considered when moving the standard to the next level.

So what we need is just a list of supported formats that we can use to load the correct asset from the CDN. If we don't have something that matches, we can generate an alert in the system that we need to extend the asset pipeline, and go with an uncompressed asset until it is done. That's what I was thinking about when I called it generic. Maybe you can just forward what the driver reports as supported.

-Ralf

PS: this is just from a game developer view. I am pretty sure that people who are doing other kinds of web sites/applications may have other requests.

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From kbr...@ Mon Oct 24 11:16:03 2011 From: kbr...@ (Kenneth Russell) Date: Mon, 24 Oct 2011 11:16:03 -0700 Subject: [Public WebGL] ARB_robustness and array bounds checking In-Reply-To: References: <3AF7E570-2BE9-4D25-BAA5-2F4EA30CD801@apple.com> Message-ID:

On Mon, Oct 24, 2011 at 10:14 AM, Gregg Tavares (wrk) wrote:
> On Mon, Oct 24, 2011 at 10:05 AM, Chris Marrin wrote:
>> On Oct 21, 2011, at 11:10 AM, Kenneth Russell wrote:
>> On Fri, Oct 21, 2011 at 10:02 AM, Gregg Tavares (wrk) wrote:
>> On Fri, Oct 21, 2011 at 8:20 AM, Chris Marrin wrote:
>> Currently the WebKit implementation of WebGL keeps around the current index array so it can be used to do bounds checking on draw calls. This is mandated in section 6.4 of the spec. The ARB_robustness extension requires that out-of-bounds array accesses be forbidden. But by my reading it doesn't require the behavior mandated in the WebGL spec. It simply guarantees that no fetches outside the array will happen. Compliant behavior would be to simply return 0 values for these accesses. So bounds checking on the WebGL side would still be required.
>> If this is the case, I think we should make changes so drivers implementing this extension can avoid the WebGL-side bounds checking. I'm not sure it's practical to change the ARB_robustness extension at this point. So I would be in favor of changing section 6.4 of the spec to match the behavior required in the extension.
>> If I understand correctly, the issue with ARB_robustness is there is no guarantee the driver is obeying anything. It doesn't report errors for out-of-bounds access, so there is no way to test that it's actually working. For WebGL we wanted something testable, as far as I remember.
>> This is my recollection as well.
There have been discussions in some of the working groups about strengthening the guarantees in ARB_robustness' robust buffer access: for example, either clamping the access to within the buffer, or returning a specified constant value, so that the extension's behavior is testable. Otherwise the best test that can be written is one which accesses wildly out-of-range values and passes if it doesn't crash, which isn't very good.
>>
>> I'm not sure I understand the issue. ARB_robustness states that out-of-bounds accesses are forbidden. Do we care if an error is reported or not? The only thing that would help with is debugging. And we can always add a debugging extension like the two we already have which would do bounds checking and report errors as stated in today's spec. But it would be nice if we could avoid this overhead for hardware that complies with ARB_robustness.
>
> Not to be snarky, but seriously? How many driver bugs have we found? And we want to trust that some untestable feature is actually working? If it's not testable then it's not tested. If it's not tested it's not working.
> It would also be nice if the behavior were consistent, because if drivers handle this differently (some return a specific value, others return the ends of arrays, others return a different value) you can be sure that people will ship apps, see that they work on their own hardware, and only later find that they draw crazy stuff on other hardware.

I agree with Gregg. In this key area it's crucial to be able to test that the guaranteed behavior is working. Let's push for more tightly defining the behavior for out-of-range buffer accesses. At a minimum, we should try to understand what can be supported today and is likely to be supportable on future hardware.
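To make concrete why a pinned-down out-of-range value matters for testing, here is a hypothetical sketch in plain JavaScript (a CPU-side stand-in for a GPU vertex fetch; `robustFetch` and its zero-return rule are assumptions for illustration, not anything ARB_robustness currently specifies):

```javascript
// Hypothetical model of a "robust" fetch. The zero-return rule is an
// assumption; ARB_robustness today only promises that no memory outside
// the buffer is read, not which value comes back.
function robustFetch(buffer, index) {
  return index >= 0 && index < buffer.length ? buffer[index] : 0.0;
}

// A "doesn't crash" test passes for *any* non-crashing behavior, so it
// cannot distinguish clamping, zero-fill, or arbitrary garbage values.
function weakTest(fetch, buffer) {
  try {
    fetch(buffer, 0x7fffffff); // wildly out-of-range access
    return true;               // "passed" merely by not crashing
  } catch (e) {
    return false;
  }
}

// With a specified constant-value rule, a conformance test can assert
// the actual fetched value instead of just surviving the access.
function strongTest(fetch, buffer) {
  return fetch(buffer, 0x7fffffff) === 0.0 && fetch(buffer, -1) === 0.0;
}

const data = [10, 20, 30];
console.log(weakTest(robustFetch, data));   // true
console.log(strongTest(robustFetch, data)); // true (this mock is zero-fill)
```

A clamping or wrapping implementation would also pass `weakTest` but fail `strongTest`, which is exactly the distinction a tightened spec would make testable.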
-Ken

From gma...@ Mon Oct 24 11:12:37 2011 From: gma...@ (Gregg Tavares (wrk)) Date: Mon, 24 Oct 2011 11:12:37 -0700 Subject: [Public WebGL] Texture Compression in WebGL In-Reply-To: <9B3BEF16CBC82A45900B3F159DF91596017521BD44F8@EU-MAIL-1-1.rws.ad.ea.com> References: <9B3BEF16CBC82A45900B3F159DF91596017521BD44F7@EU-MAIL-1-1.rws.ad.ea.com> <9B3BEF16CBC82A45900B3F159DF91596017521BD44F8@EU-MAIL-1-1.rws.ad.ea.com> Message-ID:

On Mon, Oct 24, 2011 at 10:48 AM, Kornmann, Ralf wrote:
> Hi Chris,
> I know that image compression formats can give better compression results, but there is another problem that we already face with our 2D HTML5 Canvas games. The number of files is always increasing, and therefore we like to pack assets (geometry data, textures, and whatever else we need) into blobs. I might have overlooked it, but so far I haven't seen an API that allows decoding compressed images that are delivered as a blob.

That should be possible. Download your data with XMLHttpRequest with responseType set to "arraybuffer". This gives you binary. Split/decompress that binary any way you want into the File API. Use File API URLs with images. (Note: I have not tried it.)

> I know there is a way by using data URLs, but this looks not nice to me. I also don't want to port JPEG/PNG reader code to JavaScript if it can be avoided. I haven't checked it yet, but I hope that binary HTTP requests will support compression like the current text-based ones, too.

They do.

> We already make heavy use of this for JSON and Base91-encoded data. (I know this is not your stuff, as we talk about WebGL here.)
> Please don't get me wrong on the format side.
As I already said, as professional game developers we are very used to working with different hardware that supports different formats. We solve such problems with asset pipelines. Therefore we would have no problem generating assets for different formats if necessary.
>
> Based on my DXT experience, artists are in general not very happy with the results of runtime compression. They prefer to do it offline with heavier calculations that provide better results. Besides this, online compression increases the startup time. Maybe not the first time, when we still need to fetch data from the network, but when you play again from the cache.

If there were in-browser compression, the results would be cached, or at least possible to cache. So there wouldn't be a startup cost the second time.

> In the web game business it is quite common that people play games multiple times if they like them. We have optimized our file handling to a level where we can reach a 100% local file cache hit rate for static data without a single request to the CDN. OK, this is no magic, but we have seen many websites that don't care about this area at all. This startup time problem might occur with shaders, too. But that is another thing that might be considered when moving the standard to the next level.
>
> So what we need is just a list of supported formats that we can use to load the correct asset from the CDN. If we don't have something that matches, we can generate an alert in the system that we need to extend the asset pipeline, and go with an uncompressed asset until it is done. That's what I was thinking about when I called it generic. Maybe you can just forward what the driver reports as supported.
>
> -Ralf
>
> PS: this is just from a game developer view.
> I am pretty sure that people who are doing other kinds of web sites/applications may have other requests.

Just FYI, I'm not arguing against getting a list of supported formats. In fact, considering the options, I personally feel that's the best way forward. I just wanted to point out that there are solutions to some of the issues you brought up that are not really arguments for or against the various texture compression options.

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From vla...@ Mon Oct 24 11:25:53 2011 From: vla...@ (Vladimir Vukicevic) Date: Mon, 24 Oct 2011 14:25:53 -0400 Subject: [Public WebGL] ARB_robustness and array bounds checking In-Reply-To: References: <3AF7E570-2BE9-4D25-BAA5-2F4EA30CD801@apple.com> Message-ID:

On Mon, Oct 24, 2011 at 1:14 PM, Gregg Tavares (wrk) wrote:
> Not to be snarky, but seriously? How many driver bugs have we found? And we want to trust that some untestable feature is actually working? If it's not testable then it's not tested. If it's not tested it's not working.
> It would also be nice if the behavior were consistent, because if drivers handle this differently (some return a specific value, others return the ends of arrays, others return a different value) you can be sure that people will ship apps, see that they work on their own hardware, and only later find that they draw crazy stuff on other hardware.

Yep, this was the issue -- from what I remember, all the vendors agreed that out-of-bounds access wouldn't read arbitrary memory, but there wasn't consensus on whether it returned a constant value, the end/beginning-of-array value, etc. We pointed out back then that this wasn't sufficient, for the consistency reasons that Gregg points out.

Having a query for what value is returned seems like a useful thing -- then, if the driver happens to do whatever the WebGL spec specifies, we can skip the bounds checks (and the WebGL spec remains testable).
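The query idea can be sketched as a tiny decision helper (a hypothetical illustration only; the behavior tokens and the notion of a driver-reported out-of-bounds behavior are invented here and exist in no real extension):

```javascript
// Invented tokens a hypothetical driver query might report for its
// out-of-bounds read behavior. These names are illustrative only.
const OOB_RETURNS_ZERO = "zero";
const OOB_CLAMPS_INDEX = "clamp";

// If the WebGL spec pinned down one required behavior, an implementation
// could skip its CPU-side index validation exactly when the driver already
// guarantees that same behavior.
function needsCpuBoundsCheck(driverReportedBehavior, specRequiredBehavior) {
  return driverReportedBehavior !== specRequiredBehavior;
}

console.log(needsCpuBoundsCheck(OOB_RETURNS_ZERO, OOB_RETURNS_ZERO)); // false
console.log(needsCpuBoundsCheck(OOB_CLAMPS_INDEX, OOB_RETURNS_ZERO)); // true
```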
- Vlad

From gma...@ Mon Oct 24 11:29:30 2011 From: gma...@ (Gregg Tavares (wrk)) Date: Mon, 24 Oct 2011 11:29:30 -0700 Subject: [Public WebGL] ARB_robustness and array bounds checking In-Reply-To: References: <3AF7E570-2BE9-4D25-BAA5-2F4EA30CD801@apple.com> Message-ID:

To be more constructive (and I think this has been suggested before), if we could come up with a short list of acceptable behaviors then we could test for them. Hopefully it would only take a short list to be acceptable to all GPU vendors. Off the top of my head:

*) Clamp to the ends of the array
As in:
    someArray[clamp(someIndex, 0, lengthOfSomeArray - 1)];

*) Return 0
As in:
    ((someIndex >= 0 && someIndex < lengthOfSomeArray) ? someArray[someIndex] : typeOfSomeArray(0.0));

*) Mod the index
As in:
    someArray[mod(abs(someIndex), lengthOfSomeArray)];

Are there any other solutions that would need to be considered? That doesn't solve the consistent-behavior problem, but it would at least solve the testing problem.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From gle...@ Mon Oct 24 11:40:20 2011 From: gle...@ (Glenn Maynard) Date: Mon, 24 Oct 2011 14:40:20 -0400 Subject: [Public WebGL] Texture Compression in WebGL In-Reply-To: <9B3BEF16CBC82A45900B3F159DF91596017521BD44F8@EU-MAIL-1-1.rws.ad.ea.com> References: <9B3BEF16CBC82A45900B3F159DF91596017521BD44F7@EU-MAIL-1-1.rws.ad.ea.com> <9B3BEF16CBC82A45900B3F159DF91596017521BD44F8@EU-MAIL-1-1.rws.ad.ea.com> Message-ID:

On Mon, Oct 24, 2011 at 1:48 PM, Kornmann, Ralf wrote:
> I know there is a way by using data URLs but this looks not nice to me.

Why? It's a clean API, and this is exactly the sort of thing it's for.
If you have problems with the URL API, you should bring them up with w3-webapps, not bypass it.

-- Glenn Maynard

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From rko...@ Mon Oct 24 11:38:59 2011 From: rko...@ (Kornmann, Ralf) Date: Mon, 24 Oct 2011 19:38:59 +0100 Subject: [Public WebGL] Texture Compression in WebGL In-Reply-To: References: <9B3BEF16CBC82A45900B3F159DF91596017521BD44F7@EU-MAIL-1-1.rws.ad.ea.com> <9B3BEF16CBC82A45900B3F159DF91596017521BD44F8@EU-MAIL-1-1.rws.ad.ea.com>, Message-ID: <9B3BEF16CBC82A45900B3F159DF91596017521BD44FC@EU-MAIL-1-1.rws.ad.ea.com>

I think we agree here. Offering more options for other use cases is always a good thing. We game developers are only a small part of the content providers for the web. Therefore I have no reason to argue against additional functions that would make life easier for other parties. Maybe they could be useful for us, too. I just wanted to make sure that a function to get a list of formats would be part of the specification. It looks like you have already considered this before.

Btw: I am not a lawyer, therefore I might be remembering wrong, but weren't there some legal issues when it comes to online texture compression? At least for some formats.

-Ralf

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From rko...@ Mon Oct 24 11:51:26 2011
From: rko...@ (Kornmann, Ralf)
Date: Mon, 24 Oct 2011 19:51:26 +0100
Subject: AW: [Public WebGL] Texture Compression in WebGL
In-Reply-To:
References: <9B3BEF16CBC82A45900B3F159DF91596017521BD44F7@EU-MAIL-1-1.rws.ad.ea.com> <9B3BEF16CBC82A45900B3F159DF91596017521BD44F8@EU-MAIL-1-1.rws.ad.ea.com>,
Message-ID: <9B3BEF16CBC82A45900B3F159DF91596017521BD44FD@EU-MAIL-1-1.rws.ad.ea.com>

It's not the data URLs themselves. They are fine for what they are designed for. I am just having a problem with wasting CPU performance to encode a binary block to BASE64, only to let the browser runtime decode it back to a binary block again. Might be a game developer problem, as we always try to reduce startup times and get 60 fps. Or at least 30 if the hardware is not good enough.

But as Gregg already noticed, the FileAPI might be a good solution here.

________________________________
From: Glenn Maynard [glenn...@]
Sent: Monday, 24 October 2011 20:40
To: Kornmann, Ralf
Cc: Chris Marrin; Gregg Tavares (wrk); public webgl
Subject: Re: [Public WebGL] Texture Compression in WebGL

On Mon, Oct 24, 2011 at 1:48 PM, Kornmann, Ralf wrote:
> I know there is a way by using data URLs but this looks not nice to me.

Why? It's a clean API, and this is exactly the sort of thing it's for. If you have problems with the URL API, you should bring them up with w3-webapps, not bypass it.

--
Glenn Maynard

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From vla...@ Mon Oct 24 11:57:29 2011
From: vla...@ (Vladimir Vukicevic)
Date: Mon, 24 Oct 2011 14:57:29 -0400
Subject: [Public WebGL] ARB_robustness and array bounds checking
In-Reply-To:
References: <3AF7E570-2BE9-4D25-BAA5-2F4EA30CD801@apple.com>
Message-ID:

On Mon, Oct 24, 2011 at 2:29 PM, Gregg Tavares (wrk) wrote:
> To be more constructive (and I think this has been suggested before), if we
> could come up with a short list of acceptable behaviors then we could test
> for them. Hopefully it would only take a short list to be acceptable to all
> GPU vendors.
> Off the top of my head:
> *) Clamp to the ends of the array
> As in:
>     someArray[clamp(someIndex, 0, lengthOfSomeArray - 1)];
> *) Return 0
> As in:
>     ((someIndex >= 0 && someIndex < lengthOfSomeArray)
>     ? someArray[someIndex] : typeOfSomeArray(0.0));
> *) Mod the index
> As in:
>     someArray[mod(abs(someIndex), lengthOfSomeArray)];
> Are there any other solutions that would need to be considered?
> That doesn't solve the consistent behavior problem but it would at least
> solve the testing problem.

Hmm -- I don't think having a short list helps WebGL itself though, as it really needs just one behaviour. It doesn't really matter if it's testable if it's not consistent, unless we're talking about just being able to test ARB_robustness++ itself, and not WebGL with that extension.
- Vlad

-----------------------------------------------------------
You are currently subscribed to public_webgl...@
To unsubscribe, send an email to majordomo...@ with
the following command in the body of your email:
unsubscribe public_webgl
-----------------------------------------------------------

From kbr...@ Mon Oct 24 11:58:56 2011
From: kbr...@ (Kenneth Russell)
Date: Mon, 24 Oct 2011 11:58:56 -0700
Subject: [Public WebGL] Texture Compression in WebGL
In-Reply-To:
References: <9B3BEF16CBC82A45900B3F159DF91596017521BD44F7@EU-MAIL-1-1.rws.ad.ea.com> <9B3BEF16CBC82A45900B3F159DF91596017521BD44F8@EU-MAIL-1-1.rws.ad.ea.com>
Message-ID:

On Mon, Oct 24, 2011 at 11:12 AM, Gregg Tavares (wrk) wrote:
>
> On Mon, Oct 24, 2011 at 10:48 AM, Kornmann, Ralf wrote:
>
>> Hi Chris,
>>
>> I know that image compression formats can give better compression results,
>> but there is another problem that we already face with our 2D HTML5 Canvas
>> games. The number of files is always increasing, and therefore we like to
>> pack assets (geometry data, textures, and whatever we need) into blobs. I might
>> have overlooked it, but so far I haven't seen an API that allows decoding
>> compressed images that are delivered as blobs.
>>
> That should be possible. Download your data with XMLHttpRequest, set
> responseType to "arraybuffer". This gives you binary. Split/decompress that
> binary any way you want into the FileAPI. Use FileAPI URLs with images.
> (note: I have not tried it)
>
>> I know there is a way by using data URLs but this looks not nice to me. I
>> also don't want to port jpeg/png reader code to JavaScript if it can
>> be avoided. I haven't checked it yet, but I hope that binary HTTP requests
>> will support compression like the current text-based ones, too.
>>
> They do.
>
>> We already make heavy use of this for JSON and BASE91-encoded data. (I
>> know this is not your stuff, as we talk about WebGL here.)
>>
>> Please don't get me wrong on the format side.
As I already said, as
>> professional game developers we are very used to working with different
>> hardware that supports different formats. We solve such problems with asset
>> pipelines. Therefore we would not have problems generating assets for
>> different formats if necessary.
>>
>> Based on my DXT experience, artists are in general not very happy with the
>> results of runtime compression. They prefer to do it offline with heavier
>> calculations that provide better results. Besides this, online compression
>> increases the startup time. Maybe not for the first time, when we still need
>> to fetch data from the network, but when you play again from the cache.
>>
> If there were in-browser compression, the results would be cached, or at least
> possible to cache. So there wouldn't be a startup cost the second time.
>
The primary issue seems to be that artists and game developers want their assets to have a guaranteed appearance after they've been compressed. That, in my mind, is one of the main arguments for directly exposing the compressed texture formats supported by the card. Trying to specify the behavior of online texture compression will be difficult, if not impossible, and will likely make the API unsuitable for high-end game developers.

-Ken

>
>> In the web game business it is quite common that people play
>> games multiple times if they like them. We have optimized our file handling
>> to a level where we can reach a 100% local file cache hit rate for static data
>> without a single request to the CDN. OK, this is no magic, but we have seen
>> many websites that don't care about this area at all. This startup time
>> problem might occur with shaders, too. But this is another thing that
>> might be considered when moving the standard to the next level.
>>
>> So what we need is just a list of supported formats that we can use to
>> load the correct asset from the CDN.
If we don't have something that matches,
>> we can generate an alert in the system that we need to extend the asset
>> pipeline and go with uncompressed assets until it is done. That's what I
>> was thinking about when I called it generic. Maybe you can just forward what
>> the driver reports as supported.
>>
>> -Ralf
>>
>> PS: this is just from a game developer's view. I am pretty sure that people
>> who are doing other kinds of web sites/applications may have other requests.
>>
>
> Just FYI, I'm not arguing against getting a list of supported formats. In
> fact, considering the options, I personally feel that's the best way forward.
> I just wanted to point out there are solutions to some of the issues you
> brought up that are not really arguments for or against various texture
> compression options.
>
>> ------------------------------
>> From: Chris Marrin [cmarrin...@]
>> Sent: Monday, 24 October 2011 18:59
>> To: Kornmann, Ralf
>> Cc: Gregg Tavares (wrk); public webgl
>> Subject: Re: [Public WebGL] Texture Compression in WebGL
>>
>> On Oct 23, 2011, at 12:30 AM, Kornmann, Ralf wrote:
>>
>> Hi Gregg,
>>
>> Thank you for this opportunity. While evaluating different techniques
>> to bring 3D games to the browser, the missing texture compression support was
>> one of the cons for WebGL. There are three primary reasons why we would like
>> to see compression support:
>> 1. Reducing the download time for the players. We still have many
>> customers with slower (~1 Mbit/s) connections.
>> 2. Reducing the bandwidth cost. This may not be a problem for many
>> sites today, but if we deliver game content to millions of people it would
>> require quite an amount of bandwidth, which is not free.
>>
>> But ETC texture compression only gives you 6x. You can easily beat that
>> with high quality using JPEG. Downloading of JPEG (and PNG and GIF)
>> compressed images is supported today.
>>
>> 3.
Improving performance for people with lower-class GFX
>> hardware. To be honest, I am not sure if we will run into memory limitations
>> before we hit the limits of the script engine. But as script engines
>> improve faster than the hardware, it is likely.
>>
>> I understand that this is the biggest issue for some hardware. The problem
>> I see is that I don't think there is any texture compression format
>> supported universally.
>>
>> Therefore I would like to see three features for the compressed texture
>> support:
>> 1. Allow downloading data for different compression methods
>> separately. We already can (and need to) handle this well for different sound
>> formats, so there is no magic behind this. A combined storage for all formats
>> could be fine for people who don't care that much about bandwidth, but as
>> said, we prefer to only transfer data to the client that can be used there.
>> 2. Allow separate transfer of the different mip levels. This way
>> we could stream the lower resolutions first.
>>
>> Aside from the compression issue, you can download different mip levels
>> separately today.
>>
>> 3. Make it somewhat generic. I think the standard doesn't need to
>> list the supported formats itself. Just a way to tell the application what
>> is supported would be enough. This way there would be no need to update the
>> standard if a new format shows up.
>>
>> There's an issue we've discussed before, but I don't think we had the
>> benefit of game developer input. Would it be reasonable to add a hint that
>> would say "please use texture compression if available, and I understand
>> this may result in somewhat lower quality or performance issues for the
>> initial runtime compression"? Or is the actual compression of the texture
>> too finely tuned for a runtime algorithm to do a reasonable job? Because if
>> we were able to do that, we could use the texture compression capabilities
>> of the hardware without the author needing to deal with the details.
>>
>> -----
>> ~Chris
>> cmarrin...@
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From gma...@ Mon Oct 24 12:40:14 2011
From: gma...@ (Gregg Tavares (wrk))
Date: Mon, 24 Oct 2011 12:40:14 -0700
Subject: [Public WebGL] ARB_robustness and array bounds checking
In-Reply-To:
References: <3AF7E570-2BE9-4D25-BAA5-2F4EA30CD801@apple.com>
Message-ID:

On Mon, Oct 24, 2011 at 11:57 AM, Vladimir Vukicevic wrote:
> On Mon, Oct 24, 2011 at 2:29 PM, Gregg Tavares (wrk) wrote:
> > To be more constructive (and I think this has been suggested before), if we
> > could come up with a short list of acceptable behaviors then we could test
> > for them. Hopefully it would only take a short list to be acceptable to all
> > GPU vendors.
> > Off the top of my head:
> > *) Clamp to the ends of the array
> > As in:
> >     someArray[clamp(someIndex, 0, lengthOfSomeArray - 1)];
> > *) Return 0
> > As in:
> >     ((someIndex >= 0 && someIndex < lengthOfSomeArray)
> >     ? someArray[someIndex] : typeOfSomeArray(0.0));
> > *) Mod the index
> > As in:
> >     someArray[mod(abs(someIndex), lengthOfSomeArray)];
> > Are there any other solutions that would need to be considered?
> > That doesn't solve the consistent behavior problem but it would at least
> > solve the testing problem.
>
> Hmm -- I don't think having a short list helps WebGL itself though, as
> it really needs just one behaviour. It doesn't really matter if it's
> testable if it's not consistent, unless we're talking about just being
> able to test ARB_robustness++ itself, and not WebGL with that
> extension.
>
There are two issues: (1) testability and (2) consistent behavior. Having a fixed set of allowed solutions solves (1) but not (2). For this particular issue, solving (1) is enough, I hope. (2) only needs to be solved for programs that are doing the wrong thing, and various WebGL wrappers or debug flags could help devs find those issues if and when they run into them.
> > - Vlad
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From gle...@ Mon Oct 24 14:36:32 2011
From: gle...@ (Glenn Maynard)
Date: Mon, 24 Oct 2011 17:36:32 -0400
Subject: [Public WebGL] Texture Compression in WebGL
In-Reply-To:
References: <9B3BEF16CBC82A45900B3F159DF91596017521BD44F7@EU-MAIL-1-1.rws.ad.ea.com> <9B3BEF16CBC82A45900B3F159DF91596017521BD44F8@EU-MAIL-1-1.rws.ad.ea.com>
Message-ID:

On Mon, Oct 24, 2011 at 2:12 PM, Gregg Tavares (wrk) wrote:
> That should be possible. Download your data with XMLHttpRequest, set
> responseType to "arraybuffer". This gives you binary. Split/decompress that
> binary any way you want into the FileAPI. Use FileAPI URLs with images.
> (note: I have not tried it)
>
Tangential, but a quick note: you're better off using "blob" than "arraybuffer" for this, at least once it's widely supported (FF6 does, stable Chrome doesn't just yet). It's easier to pass to things like FileAPI and createObjectURL without making extra copies.

--
Glenn Maynard

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From gma...@ Mon Oct 24 16:16:57 2011
From: gma...@ (Gregg Tavares (wrk))
Date: Mon, 24 Oct 2011 16:16:57 -0700
Subject: [Public WebGL] Texture Compression in WebGL
In-Reply-To: <9B3BEF16CBC82A45900B3F159DF91596017521BD44FD@EU-MAIL-1-1.rws.ad.ea.com>
References: <9B3BEF16CBC82A45900B3F159DF91596017521BD44F7@EU-MAIL-1-1.rws.ad.ea.com> <9B3BEF16CBC82A45900B3F159DF91596017521BD44F8@EU-MAIL-1-1.rws.ad.ea.com> <9B3BEF16CBC82A45900B3F159DF91596017521BD44FD@EU-MAIL-1-1.rws.ad.ea.com>
Message-ID:

On Mon, Oct 24, 2011 at 11:51 AM, Kornmann, Ralf wrote:
>
> It's not the data URLs themselves. They are fine for what they are designed
> for. I am just having a problem with wasting CPU performance to encode a
> binary block to BASE64, only to let the browser runtime decode it back to a binary
> block again.
> Might be a game developer problem, as we always try to
> reduce startup times and get 60 fps. Or at least 30 if the hardware is
> not good enough.
>
> But as Gregg already noticed, the FileAPI might be a good solution here.
>
Here's an example of using the FileAPI for storing large game assets
https://github.com/borismus/game-asset-loader
and an article about it
http://smus.com/game-asset-loader

>
> ------------------------------
> From: Glenn Maynard [glenn...@]
> Sent: Monday, 24 October 2011 20:40
> To: Kornmann, Ralf
> Cc: Chris Marrin; Gregg Tavares (wrk); public webgl
>
> Subject: Re: [Public WebGL] Texture Compression in WebGL
>
> On Mon, Oct 24, 2011 at 1:48 PM, Kornmann, Ralf wrote:
>
> I know there is a way by using data URLs but this looks not nice to me.
>
> Why? It's a clean API, and this is exactly the sort of thing it's for. If
> you have problems with the URL API, you should bring them up with
> w3-webapps, not bypass it.
>
> --
> Glenn Maynard
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From cma...@ Tue Oct 25 09:29:41 2011
From: cma...@ (Chris Marrin)
Date: Tue, 25 Oct 2011 09:29:41 -0700
Subject: [Public WebGL] ARB_robustness and array bounds checking
In-Reply-To:
References: <3AF7E570-2BE9-4D25-BAA5-2F4EA30CD801@apple.com>
Message-ID:

On Oct 24, 2011, at 10:14 AM, Gregg Tavares (wrk) wrote:
>
>> ...This is my recollection as well. There have been discussions in some
>> of the working groups about strengthening the guarantees in
>> ARB_robustness' robust buffer access: for example, either clamping the
>> access to within the buffer, or returning a specified constant value,
>> so that the extension's behavior is testable. Otherwise the best test
>> that can be written is one which accesses wildly out-of-range values
>> and passes if it doesn't crash, which isn't very good.
>
> I'm not sure I understand the issue.
ARB_robustness states that out-of-bounds accesses are forbidden. Do we care if an error is reported or not? The only thing that would help with is debugging. And we can always add a debugging extension, like the two we already have, which would do bounds checking and report errors as stated in today's spec. But it would be nice if we could avoid this overhead for hardware that complies with ARB_robustness.
>
> Not to be snarky, but seriously? How many driver bugs have we found? And we want to trust that some untestable feature is actually working? If it's not testable then it's not tested. If it's not tested it's not working.

That is very snarky and not particularly helpful. If we made "testability" a requirement, there are many driver functions that would have to change. Look at section 4.5 of the spec. It states that all shaders "must not be allowed to read or write array elements that lie outside the bounds of the array". And then it goes on to state that an implementation may generate an error or return some constant value in place of the invalid access. So why in section 6.4 do we require errors to be generated on vertex array accesses?
>
> It would also be nice if the behavior were consistent, because if drivers handle this differently (some return a specific value, others return the ends of arrays, others return a different value) you can be sure that people will ship apps, see that it works on their own hardware, and only later find that it draws crazy stuff on other hardware.

There are many error conditions that different drivers handle differently. The OpenGL specs are left purposely flexible in some areas to avoid overly constraining drivers with error handling that would be too expensive. We may be able to tighten some of the error handling requirements at the WebGL layer without adding undue overhead. But in the case of vertex array bounds checking, today's requirements add significant overhead.
-----
~Chris
cmarrin...@

From gma...@ Tue Oct 25 09:44:49 2011
From: gma...@ (Gregg Tavares (wrk))
Date: Tue, 25 Oct 2011 09:44:49 -0700
Subject: [Public WebGL] ARB_robustness and array bounds checking
In-Reply-To:
References: <3AF7E570-2BE9-4D25-BAA5-2F4EA30CD801@apple.com>
Message-ID:

On Tue, Oct 25, 2011 at 9:29 AM, Chris Marrin wrote:
>
> On Oct 24, 2011, at 10:14 AM, Gregg Tavares (wrk) wrote:
> >
> >> ...This is my recollection as well. There have been discussions in some
> >> of the working groups about strengthening the guarantees in
> >> ARB_robustness' robust buffer access: for example, either clamping the
> >> access to within the buffer, or returning a specified constant value,
> >> so that the extension's behavior is testable. Otherwise the best test
> >> that can be written is one which accesses wildly out-of-range values
> >> and passes if it doesn't crash, which isn't very good.
> >
> > I'm not sure I understand the issue. ARB_robustness states that
> out-of-bounds accesses are forbidden. Do we care if an error is reported or
> not? The only thing that would help with is debugging. And we can always add
> a debugging extension like the two we already have which would do bounds
> checking and report errors as stated in today's spec. But it would be nice
> if we could avoid this overhead for hardware that complies with
> ARB_robustness.
> >
> > Not to be snarky but seriously? How many driver bugs have we found? And
> we want to trust that some untestable feature is actually working? If it's not
> testable then it's not tested. If it's not tested it's not working.
>
> That is very snarky and not particularly helpful.
If we made "testability"
> a requirement, there are many driver functions that would have to change.
> Look at section 4.5 of the spec. It states that all shaders "must not be
> allowed to read or write array elements that lie outside the bounds of the
> array". And then it goes on to state that an implementation may generate an
> error or return some constant value in place of the invalid access. So why
> in section 6.4 do we require errors to be generated on vertex array
> accesses?
>
I'd say just the opposite. We should change section 4.5 of the spec to require one of N testable solutions. The GPU vendor can pick one of those N solutions, and we can write tests that try each of the N solutions and make sure at least one of them holds true. Same for section 6.4.

That seems like it fits both needs: the need to be flexible and the need to be testable.

>
> It would also be nice if the behavior was consistent because if drivers
> handle this differently (some return a specific value, others return the
> ends of arrays, others return a different value) you can be sure that people
> will ship apps, see that it works on their own hardware and then only later
> find that it draws crazy stuff on other hardware.
>
> There are many error conditions that different drivers handle differently.
> The OpenGL specs are left purposely flexible in some areas to avoid overly
> constraining drivers with error handling that would be too expensive. We may
> be able to tighten some of the error handling requirements at the WebGL
> layer without adding undue overhead. But in the case of vertex array bounds
> checking, today's requirements add significant overhead.
>
> -----
> ~Chris
> cmarrin...@
>
-------------- next part --------------
An HTML attachment was scrubbed...
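Gregg's one-of-N proposal can be illustrated outside of GLSL. The sketch below (hypothetical JavaScript; all function names are invented for the example, and a real conformance test would render shaders rather than call plain functions) gives reference implementations of the three allowed out-of-bounds behaviors and a conformance-style check that an implementation's result matches at least one of them:

```javascript
// Reference semantics for the three candidate out-of-bounds behaviors,
// mirroring the GLSL expressions quoted in the thread.
function clampBehavior(arr, i) { return arr[Math.min(Math.max(i, 0), arr.length - 1)]; }
function zeroBehavior(arr, i)  { return (i >= 0 && i < arr.length) ? arr[i] : 0; }
function modBehavior(arr, i)   { return arr[Math.abs(i) % arr.length]; }

// A one-of-N check: sample an out-of-range index through the implementation
// under test and accept it if the result matches at least one allowed behavior.
function matchesAllowedBehavior(sampleFn, arr, i) {
  const got = sampleFn(arr, i);
  return [clampBehavior, zeroBehavior, modBehavior].some(b => b(arr, i) === got);
}

const data = [10, 20, 30, 40];
console.log(matchesAllowedBehavior(clampBehavior, data, 7));    // true: clamp yields 40
console.log(matchesAllowedBehavior((a, i) => 12345, data, 7));  // false: arbitrary value
```

As Vlad notes, passing this check still doesn't give consistent behavior across vendors; it only makes each vendor's choice verifiable.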
URL: From cma...@ Tue Oct 25 10:08:18 2011
From: cma...@ (Chris Marrin)
Date: Tue, 25 Oct 2011 10:08:18 -0700
Subject: [Public WebGL] ARB_robustness and array bounds checking
In-Reply-To:
References: <3AF7E570-2BE9-4D25-BAA5-2F4EA30CD801@apple.com>
Message-ID: <44347A02-BCB3-4CFA-B4C8-9BE672534661@apple.com>

On Oct 25, 2011, at 9:44 AM, Gregg Tavares (wrk) wrote:
>
> On Tue, Oct 25, 2011 at 9:29 AM, Chris Marrin wrote:
>
> On Oct 24, 2011, at 10:14 AM, Gregg Tavares (wrk) wrote:
> >
> >> ...This is my recollection as well. There have been discussions in some
> >> of the working groups about strengthening the guarantees in
> >> ARB_robustness' robust buffer access: for example, either clamping the
> >> access to within the buffer, or returning a specified constant value,
> >> so that the extension's behavior is testable. Otherwise the best test
> >> that can be written is one which accesses wildly out-of-range values
> >> and passes if it doesn't crash, which isn't very good.
> >
> > I'm not sure I understand the issue. ARB_robustness states that out-of-bounds accesses are forbidden. Do we care if an error is reported or not? The only thing that would help with is debugging. And we can always add a debugging extension like the two we already have which would do bounds checking and report errors as stated in today's spec. But it would be nice if we could avoid this overhead for hardware that complies with ARB_robustness.
> >
> > Not to be snarky but seriously? How many driver bugs have we found? And we want to trust that some untestable feature is actually working? If it's not testable then it's not tested. If it's not tested it's not working.
>
> That is very snarky and not particularly helpful. If we made "testability" a requirement, there are many driver functions that would have to change. Look at section 4.5 of the spec. It states that all shaders "must not be allowed to read or write array elements that lie outside the bounds of the array".
And then it goes on to state that an implementation may generate an error or return some constant value in place of the invalid access. So why in section 6.4 do we require errors to be generated on vertex array accesses?
>
> I'd say just the opposite. We should change section 4.5 of the spec to require one of N testable solutions. The GPU vendor can pick one of those N solutions, and we can write tests that try each of the N solutions and make sure at least one of them holds true. Same for section 6.4.
>
> That seems like it fits both needs: the need to be flexible and the need to be testable.

Requiring driver vendors to use one-of-N fallback techniques will eventually require some vendor to go down a slow path or ignore your requirements. I think the way section 4.5 is written today is sufficient. We should make section 6.4 match that, which would make it match the ARB_robustness spec. I don't have a solution for your testability requirement, and I think testability is a nice goal, but in this case not at the expense of performance.

Remember that drivers stream commands. The reason it's unreasonable to require an error to be returned from, for instance, an out-of-bounds vertex array access is that there's no place to return the error. Even if you were able to stash it in glError at some later time, to which of the possibly many drawArrays calls that you made would the error correspond?
-----
~Chris
cmarrin...@

From gma...@ Tue Oct 25 10:23:02 2011
From: gma...@ (Gregg Tavares (wrk))
Date: Tue, 25 Oct 2011 10:23:02 -0700
Subject: [Public WebGL] ARB_robustness and array bounds checking
In-Reply-To: <44347A02-BCB3-4CFA-B4C8-9BE672534661@apple.com>
References: <3AF7E570-2BE9-4D25-BAA5-2F4EA30CD801@apple.com> <44347A02-BCB3-4CFA-B4C8-9BE672534661@apple.com>
Message-ID:

On Tue, Oct 25, 2011 at 10:08 AM, Chris Marrin wrote:
>
> On Oct 25, 2011, at 9:44 AM, Gregg Tavares (wrk) wrote:
> >
> > On Tue, Oct 25, 2011 at 9:29 AM, Chris Marrin wrote:
> >
> > On Oct 24, 2011, at 10:14 AM, Gregg Tavares (wrk) wrote:
> > >
> > >> ...This is my recollection as well. There have been discussions in
> some
> > >> of the working groups about strengthening the guarantees in
> > >> ARB_robustness' robust buffer access: for example, either clamping the
> > >> access to within the buffer, or returning a specified constant value,
> > >> so that the extension's behavior is testable. Otherwise the best test
> > >> that can be written is one which accesses wildly out-of-range values
> > >> and passes if it doesn't crash, which isn't very good.
> > >
> > > I'm not sure I understand the issue. ARB_robustness states that
> out-of-bounds accesses are forbidden. Do we care if an error is reported or
> not? The only thing that would help with is debugging. And we can always add
> a debugging extension like the two we already have which would do bounds
> checking and report errors as stated in today's spec. But it would be nice
> if we could avoid this overhead for hardware that complies with
> ARB_robustness.
> > >
> > > Not to be snarky but seriously?
How many driver bugs have we found? And
> we want to trust that some untestable feature is actually working? If it's not
> testable then it's not tested. If it's not tested it's not working.
> >
> > That is very snarky and not particularly helpful. If we made
> "testability" a requirement, there are many driver functions that would
> have to change. Look at section 4.5 of the spec. It states that all shaders
> "must not be allowed to read or write array elements that lie outside the
> bounds of the array". And then it goes on to state that an implementation may
> generate an error or return some constant value in place of the invalid
> access. So why in section 6.4 do we require errors to be generated on vertex
> array accesses?
> >
> > I'd say just the opposite. We should change section 4.5 of the spec to
> require one of N testable solutions. The GPU vendor can pick one of those N
> solutions, and we can write tests that try each of the N solutions and make
> sure at least one of them holds true. Same for section 6.4.
> >
> > That seems like it fits both needs: the need to be flexible and the need
> to be testable.
>
> Requiring driver vendors to use one-of-N fallback techniques will
> eventually require some vendor to go down a slow path or ignore your
> requirements. I think the way section 4.5 is written today is sufficient. We
> should make section 6.4 match that, which would make it match the
> ARB_robustness spec. I don't have a solution for your testability
> requirement, and I think testability is a nice goal, but in this case not at
> the expense of performance.
>
This is a security issue. Doesn't Apple want to be able to verify that you can't read data you're not supposed to be able to read?

>
> Remember that drivers stream commands. The reason it's unreasonable to
> require an error to be returned from, for instance, an out-of-bounds vertex
> array access is because there's no place to return the error.
Even if you > were able to stash it in glError at some later time, to which of the > possibly many drawArrays calls that you made would the error correspond? > I'm not asking for an error. I'm asking for a testable solution. I gave 3 examples of testable solutions none of which required errors. > > ----- > ~Chris > cmarrin...@ > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From RTr...@ Tue Oct 25 16:13:17 2011 From: RTr...@ (Robert Tray) Date: Tue, 25 Oct 2011 16:13:17 -0700 Subject: [Public WebGL] Issues with WebGL Conformance 1.0.1b In-Reply-To: References: Message-ID: Gregg, I have another issue with a different webgl conformance test. The limits/gl-min-textures.html test enables 8 active textures and the fragment shader sums up the texel values. The expectation is that arithmetic on the texel values in the fragment shader should be equivalent to the same arithmetic on the ubyte components from the texture definition. I am seeing 1 bit lsb differences for some particular texture results and would like to request a tolerance allowance (with a ',1' added as the last parameter to the wtu.checkCanvas call). The texel component values go through type and range conversions between definition in glTexImage2D and when they are used in the fragment shader. At glTexImage2D time the components are typically ubyte, [0..255]. In the fragment shader the texture2D returns a float per component with a range greater than [0.. 1.0]. When the color fragment is written to the destination surface there's another type and range conversion back to [0..255] per component. On lower precision hardware, if the fragment shader is doing arithmetic on the texture values, then the result could fall into a range that ends up with an unexpected 1 bit lsb difference after conversion to the framebuffer format. A simplified description would be something like [0..0xFF] gets converted to the range [0..0x100] and back to [0..0xFF]. 
The intermediate range has 'holes' that are lost in the conversion to the final range.

Robert Tray

From: Robert Tray
Sent: Thursday, October 20, 2011 12:12 PM
To: 'Gregg Tavares (wrk)'
Cc: public_webgl...@
Subject: RE: [Public WebGL] Issues with WebGL Conformance 1.0.1b

Thanks Gregg.

> this test actually does call bindAttribLocation in the utility function createProgram

You're right. I missed that. Now that I think about it, there was an unrelated problem fixed after I debugged that test that would explain what I saw. Thanks for the quick action.

Robert Tray

From: Gregg Tavares (wrk) [mailto:gman...@]
Sent: Thursday, October 20, 2011 12:06 PM
To: Robert Tray
Cc: public_webgl...@
Subject: Re: [Public WebGL] Issues with WebGL Conformance 1.0.1b

Thank you for reviewing these.

On Thu, Oct 20, 2011 at 10:10 AM, Robert Tray wrote:

The current webgl-conformance-tests.html fails a few tests that I'm hoping can be remedied with test changes.

1) Context/context-attributes-alpha-depth-stencil-antialias.html: The test makes assumptions about the location of attributes 'pos' and 'colorIn'. The test is missing either a bindAttribLocation before the program link or a getAttribLocation after.

this test actually does call bindAttribLocation in the utility function createProgram

2) Glsl/functions/glsl-function-mod-gentype.html: The test is enhanced over the GLES2 version of conformance by using two different modulo divisors that are less trivial than the '1.0' used in GLES2 conformance. That's a good thing. However, the hardware I'm testing on has a 1 bit lsb difference between the built-in and the emulated shader image results. The 1 bit lsb delta happens at the top and bottom extremes for the 'y' component. I would like to request a 'tolerance:1,' parameter be used for glsl-function-mod-gentype.html so that it will pass.

done

3) Glsl/functions/glsl-function-normalize.html: The test is enhanced over the GLES2 version of conformance by extending the range of the vector components.
This seems like a good thing too. But I am seeing a 1 bit lsb difference for a few pixels when comparing the results for the built-in versus emulated shaders. I would like to request a 'tolerance:1,' parameter for the glsl-function-normalize.html test too. done Robert Tray From hua...@ Tue Oct 25 19:00:37 2011 From: hua...@ (=?gb2312?B?u8bQwruv?=) Date: Wed, 26 Oct 2011 10:00:37 +0800 Subject: [Public WebGL] unsubscribe Message-ID: <3AA2F23B0079458AAA39E5967AADFAE4@RoyHuangTHINK> unsubscribe From cal...@ Tue Oct 25 19:35:16 2011 From: cal...@ (Mark Callow) Date: Wed, 26 Oct 2011 11:35:16 +0900 Subject: [Public WebGL] Issues with WebGL Conformance 1.0.1b In-Reply-To: References: Message-ID: <4EA771E4.5060009@hicorp.co.jp> > In the fragment shader the texture2D returns a float per component > with a range greater than [0.. 1.0]. I do not think this complies with the OpenGL specification. Section 2.1.2 of the OpenGL ES 2.0 specification describes a conversion that gives a float component with the range [0..1.0]. Regards -Mark On 26/10/2011 08:13, Robert Tray wrote: > > The texel component values go through type and range conversions > between definition in glTexImage2D and when they are used in the > fragment shader. At glTexImage2D time the components are typically > ubyte, [0..255]. In the fragment shader the texture2D returns a float > per component with a range greater than [0.. 1.0]. When the color > fragment is written to the destination surface there's another type > and range conversion back to [0..255] per component. > > On lower precision hardware, if the fragment shader is doing > arithmetic on the texture values, then the result could fall into a > range that ends up with an unexpected 1 bit lsb difference after > conversion to the framebuffer format.
> > > > A simplified description would be something like [0..0xFF] gets > converted to the range [0..0x100] and back to [0..0xFF]. The > intermediate range has 'holes' that lose on the conversion to the > final range. > > > From RTr...@ Wed Oct 26 08:37:56 2011 From: RTr...@ (Robert Tray) Date: Wed, 26 Oct 2011 08:37:56 -0700 Subject: [Public WebGL] Issues with WebGL Conformance 1.0.1b In-Reply-To: <4EA771E4.5060009@hicorp.co.jp> References: <4EA771E4.5060009@hicorp.co.jp> Message-ID: > In the fragment shader the texture2D returns a float per component with a range greater than [0.. 1.0]. > I do not think this complies with the OpenGL specification. A poor choice of words on my part. What I meant is that texture2D returns float components between [0.. 1.0]. However, the float type can extend beyond the range [0.. 1.0] so the data type has sufficient size to handle a larger range. The point being that the ranges of the data types are different and don't map perfectly. Robert Tray From: Mark Callow [mailto:callow_mark...@] Sent: Tuesday, October 25, 2011 8:35 PM To: Robert Tray Cc: 'Gregg Tavares (wrk)'; 'public_webgl...@' Subject: Re: [Public WebGL] Issues with WebGL Conformance 1.0.1b In the fragment shader the texture2D returns a float per component with a range greater than [0.. 1.0]. I do not think this complies with the OpenGL specification. Section 2.1.2 of the OpenGL ES 2.0 describes a conversion that gives a float component with the range [0..1.0] Regards -Mark On 26/10/2011 08:13, Robert Tray wrote: The texel component values go through type and range conversions between definition in glTexImage2D and when they are used in the fragment shader. At glTexImage2D time the components are typically ubyte, [0..255]. In the fragment shader the texture2D returns a float per component with a range greater than [0.. 1.0].
When the color fragment is written to the destination surface there's another type and range conversion back to [0..255] per component. On lower precision hardware, if the fragment shader is doing arithmetic on the texture values, then the result could fall into a range that ends up with an unexpected 1 bit lsb difference after conversion to the framebuffer format. A simplified description would be something like [0..0xFF] gets converted to the range [0..0x100] and back to [0..0xFF]. The intermediate range has 'holes' that lose on the conversion to the final range. From cma...@ Wed Oct 26 11:53:21 2011 From: cma...@ (Chris Marrin) Date: Wed, 26 Oct 2011 11:53:21 -0700 Subject: [Public WebGL] ARB_robustness and array bounds checking In-Reply-To: References: <3AF7E570-2BE9-4D25-BAA5-2F4EA30CD801@apple.com> <44347A02-BCB3-4CFA-B4C8-9BE672534661@apple.com> Message-ID: On Oct 25, 2011, at 10:23 AM, Gregg Tavares (wrk) wrote: > > > ...Requiring driver vendors to use one-of-n fallback techniques will eventually require some vendor to go down a slow path or ignore your requirements. I think the way section 4.5 is written today is sufficient. We should make section 6.4 match that, which would make it match the ARB_robustness spec. I don't have a solution for your testability requirement, and I think testability is a nice goal, but in this case not at the expense of performance. > > This is a security issue. Doesn't Apple want to be able to verify that you can't read data you're not supposed to be able to read? I don't believe testability is an overriding requirement of WebGL. It's a good goal, but I don't think it should be required in the spec. We should (continue to) strive for maximizing performance. This thread started with me proposing we drop the requirement in section 6.4 that an out-of-bounds array access return an error, thus requiring expensive intervention at the WebGL layer.
If the wording of section 6.4 matched that in section 4.5, then we could make WebGL implementations more performant if the driver can guarantee appropriate handling of out-of-range accesses. In particular, the wording from section 6.4: ----- If a vertex attribute is enabled as an array, a buffer is bound to that attribute, and the attribute is consumed by the current program, then calls to drawArrays and drawElements will verify that each referenced vertex lies within the storage of the bound buffer. If the range specified in drawArrays or any referenced index in drawElements lies outside the storage of the bound buffer, an INVALID_OPERATION error is generated and no geometry is drawn. ----- should be changed to: ----- If a vertex attribute is enabled as an array, a buffer is bound to that attribute, and the attribute is consumed by the current program, then calls to drawArrays and drawElements shall ensure that no accesses outside the bounds of that vertex buffer occur. If detected when the drawArrays or drawElements call is made, an INVALID_OPERATION error is generated and no geometry is drawn. Otherwise, at runtime and invalid access may return a constant value (such as 0), or the value at any valid index of the same vertex buffer. In this case (possibly incorrect) geometry would be rendered and no error would be generated. But no accesses outside the vertex buffer bounds would occur. ----- I agree that it's possible for a cleverly written test to intuit the correct behavior, possibly by knowing something about the particular hardware or driver. If so, that's fine. But I don't think we should over-specify the behavior and lose performance. 
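For concreteness, the per-draw validation that the current section 6.4 wording effectively forces on a WebGL implementation can be sketched as follows. This is an illustration only; the function name and shapes are invented, not taken from any browser's implementation.

```javascript
// Illustrative sketch: under the current section 6.4 wording, an
// implementation must scan the (shadow-copied) index data on every
// drawElements call and reject any out-of-range index with
// INVALID_OPERATION before anything is drawn.
function indicesInBounds(shadowIndices, offset, count, vertexCount) {
  for (let i = offset; i < offset + count; i++) {
    if (shadowIndices[i] >= vertexCount) {
      return false; // generate INVALID_OPERATION, draw nothing
    }
  }
  return true; // safe to pass the draw through to the driver
}

// 4 vertices are available, but the last index references vertex 7.
const indices = new Uint16Array([0, 1, 2, 2, 3, 7]);
console.log(indicesInBounds(indices, 0, 6, 4)); // false
console.log(indicesInBounds(indices, 0, 5, 4)); // true
```

Under the proposed wording, this O(count) scan could be skipped entirely whenever the driver guarantees that an out-of-range fetch returns a constant or some in-bounds value.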
----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Wed Oct 26 13:24:27 2011 From: kbr...@ (Kenneth Russell) Date: Wed, 26 Oct 2011 13:24:27 -0700 Subject: [Public WebGL] ARB_robustness and array bounds checking In-Reply-To: References: <3AF7E570-2BE9-4D25-BAA5-2F4EA30CD801@apple.com> <44347A02-BCB3-4CFA-B4C8-9BE672534661@apple.com> Message-ID: On Wed, Oct 26, 2011 at 11:53 AM, Chris Marrin wrote: > > On Oct 25, 2011, at 10:23 AM, Gregg Tavares (wrk) wrote: > > > > > > > ...Requiring driver vendors to use one-of-n fallback techniques will > eventually require some vendor to go down a slow path or ignore your > requirements. I think the way section 4.5 is written today is sufficient. We > should make section 6.4 match that, which would make it match the > ARB_robustness spec. I don't have a solution for your testability > requirement, and I think testability is a nice goal, but in this case not at > the expense of performance. > > > > This is a security issue. Doesn't Apple want to be able to verify that > you can't read data you're not supposed to be able to read? > > I don't believe testability is an overriding requirement of WebGL. It's a > good goal, but I don't think it should be required in the spec. We should > (continue to) strive for maximizing performance. > > This thread started with me proposing we drop the requirement in section > 6.4 that an out-of-bounds array access return an error, thus requiring > expensive intervention at the WebGL layer. If the wording of section 6.4 > matched that in section 4.5, then we could make WebGL implementations more > performant if the driver can guarantee appropriate handling of out-of-range > accesses. 
In particular, the wording from section 6.4: > > ----- > If a vertex attribute is enabled as an array, a buffer is bound to that > attribute, and the attribute is consumed by the current program, then calls > to drawArrays and drawElements will verify that each referenced vertex lies > within the storage of the bound buffer. If the range specified in drawArrays > or any referenced index in drawElements lies outside the storage of the > bound buffer, an INVALID_OPERATION error is generated and no geometry is > drawn. > ----- > > should be changed to: > > ----- > If a vertex attribute is enabled as an array, a buffer is bound to that > attribute, and the attribute is consumed by the current program, then calls > to drawArrays and drawElements shall ensure that no accesses outside the > bounds of that vertex buffer occur. If detected when the drawArrays or > drawElements call is made, an INVALID_OPERATION error is generated and no > geometry is drawn. Otherwise, at runtime and invalid access may return a > constant value (such as 0), or the value at any valid index of the same > vertex buffer. In this case (possibly incorrect) geometry would be rendered > and no error would be generated. But no accesses outside the vertex buffer > bounds would occur. > ----- > (typo: runtime and => runtime an) This change wouldn't allow the use of GL_ARB_robustness' robust buffer access as specified today. Robust buffer access only guarantees the following: "indices within the vertex array that lie outside the arrays defined for enabled attributes result in undefined values for the corresponding attributes, but cannot result in application failure". See http://www.opengl.org/registry/specs/ARB/robustness.txt . For example, it might fetch other random data reachable by the context (and, hopefully, produced by that context). What's the goal? 
To allow the use of robust buffer access as currently written, or to specify stronger guarantees for ARB_robustness and incorporate those into the WebGL spec? If the latter, then I think we should spec the stronger robust buffer access first, so that GPU vendors are on board, and we know what the parameters will be. From a process standpoint I'd like to wait to make a change like this until after we've snapshotted the next version of the WebGL spec, because this is a significant change requiring a good deal of consideration. -Ken > > I agree that it's possible for a cleverly written test to intuit the > correct behavior, possibly by knowing something about the particular > hardware or driver. If so, that's fine. But I don't think we should > over-specify the behavior and lose performance. > > ----- > ~Chris > cmarrin...@ > > > > > From cma...@ Wed Oct 26 15:16:18 2011 From: cma...@ (Chris Marrin) Date: Wed, 26 Oct 2011 15:16:18 -0700 Subject: [Public WebGL] ARB_robustness and array bounds checking In-Reply-To: References: <3AF7E570-2BE9-4D25-BAA5-2F4EA30CD801@apple.com> <44347A02-BCB3-4CFA-B4C8-9BE672534661@apple.com> Message-ID: On Oct 26, 2011, at 1:24 PM, Kenneth Russell wrote: > On Wed, Oct 26, 2011 at 11:53 AM, Chris Marrin wrote: > > On Oct 25, 2011, at 10:23 AM, Gregg Tavares (wrk) wrote: > > > > > > > ...Requiring driver vendors to use one-of-n fallback techniques will eventually require some vendor to go down a slow path or ignore your requirements. I think the way section 4.5 is written today is sufficient. We should make section 6.4 match that, which would make it match the ARB_robustness spec. I don't have a solution for your testability requirement, and I think testability is a nice goal, but in this case not at the expense of performance. > > > > This is a security issue.
Doesn't Apple want to be able to verify that you can't read data you're not supposed to be able to read? > > I don't believe testability is an overriding requirement of WebGL. It's a good goal, but I don't think it should be required in the spec. We should (continue to) strive for maximizing performance. > > This thread started with me proposing we drop the requirement in section 6.4 that an out-of-bounds array access return an error, thus requiring expensive intervention at the WebGL layer. If the wording of section 6.4 matched that in section 4.5, then we could make WebGL implementations more performant if the driver can guarantee appropriate handling of out-of-range accesses. In particular, the wording from section 6.4: > > ----- > If a vertex attribute is enabled as an array, a buffer is bound to that attribute, and the attribute is consumed by the current program, then calls to drawArrays and drawElements will verify that each referenced vertex lies within the storage of the bound buffer. If the range specified in drawArrays or any referenced index in drawElements lies outside the storage of the bound buffer, an INVALID_OPERATION error is generated and no geometry is drawn. > ----- > > should be changed to: > > ----- > If a vertex attribute is enabled as an array, a buffer is bound to that attribute, and the attribute is consumed by the current program, then calls to drawArrays and drawElements shall ensure that no accesses outside the bounds of that vertex buffer occur. If detected when the drawArrays or drawElements call is made, an INVALID_OPERATION error is generated and no geometry is drawn. Otherwise, at runtime and invalid access may return a constant value (such as 0), or the value at any valid index of the same vertex buffer. In this case (possibly incorrect) geometry would be rendered and no error would be generated. But no accesses outside the vertex buffer bounds would occur. 
> ----- > > (typo: runtime and => runtime an) > > This change wouldn't allow the use of GL_ARB_robustness' robust buffer access as specified today. Robust buffer access only guarantees the following: "indices within the vertex array that lie outside the arrays defined for enabled attributes result in undefined values for the corresponding attributes, but cannot result in application failure". See http://www.opengl.org/registry/specs/ARB/robustness.txt . For example, it might fetch other random data reachable by the context (and, hopefully, produced by that context). Yes, I noticed that. But I also thought the text I wrote essentially matches the DX10 behavior mandated by Microsoft. If so, this should be an achievable goal for driver vendors. > > What's the goal? To allow the use of robust buffer access as currently written, or to specify stronger guarantees for ARB_robustness and incorporate those into the WebGL spec? If the latter, then I think we should spec the stronger robust buffer access first, so that GPU vendors are on board, and we know what the parameters will be. Right. We should get with the driver vendors and understand their current, DX10-motivated, behavior. Then we can hopefully mandate that behavior in ARB_robustness. > > From a process standpoint I'd like to wait to make a change like this until after we've snapshotted the next version of the WebGL spec, because this is a significant change requiring a good deal of consideration. I can live with that. The whole point of this is to allow us to eventually, on some hardware, get rid of the shadow copy of the array indices and the need to run through them on each call to drawElements. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From kbr...@ Wed Oct 26 15:53:52 2011 From: kbr...@ (Kenneth Russell) Date: Wed, 26 Oct 2011 15:53:52 -0700 Subject: [Public WebGL] ARB_robustness and array bounds checking In-Reply-To: References: <3AF7E570-2BE9-4D25-BAA5-2F4EA30CD801@apple.com> <44347A02-BCB3-4CFA-B4C8-9BE672534661@apple.com> Message-ID: On Wed, Oct 26, 2011 at 3:16 PM, Chris Marrin wrote: > > On Oct 26, 2011, at 1:24 PM, Kenneth Russell wrote: > > On Wed, Oct 26, 2011 at 11:53 AM, Chris Marrin wrote: > >> >> On Oct 25, 2011, at 10:23 AM, Gregg Tavares (wrk) wrote: >> >> > >> > >> > ...Requiring driver vendors to use one-of-n fallback techniques will >> eventually require some vendor to go down a slow path or ignore your >> requirements. I think the way section 4.5 is written today is sufficient. We >> should make section 6.4 match that, which would make it match the >> ARB_robustness spec. I don't have a solution for your testability >> requirement, and I think testability is a nice goal, but in this case not at >> the expense of performance. >> > >> > This is a security issue. Doesn't Apple want to be able to verify that >> you can't read data you're not supposed to be able to read? >> >> I don't believe testability is an overriding requirement of WebGL. It's a >> good goal, but I don't think it should be required in the spec. We should >> (continue to) strive for maximizing performance. >> >> This thread started with me proposing we drop the requirement in section >> 6.4 that an out-of-bounds array access return an error, thus requiring >> expensive intervention at the WebGL layer. If the wording of section 6.4 >> matched that in section 4.5, then we could make WebGL implementations more >> performant if the driver can guarantee appropriate handling of out-of-range >> accesses. 
In particular, the wording from section 6.4: >> >> ----- >> If a vertex attribute is enabled as an array, a buffer is bound to that >> attribute, and the attribute is consumed by the current program, then calls >> to drawArrays and drawElements will verify that each referenced vertex lies >> within the storage of the bound buffer. If the range specified in drawArrays >> or any referenced index in drawElements lies outside the storage of the >> bound buffer, an INVALID_OPERATION error is generated and no geometry is >> drawn. >> ----- >> >> should be changed to: >> >> ----- >> If a vertex attribute is enabled as an array, a buffer is bound to that >> attribute, and the attribute is consumed by the current program, then calls >> to drawArrays and drawElements shall ensure that no accesses outside the >> bounds of that vertex buffer occur. If detected when the drawArrays or >> drawElements call is made, an INVALID_OPERATION error is generated and no >> geometry is drawn. Otherwise, at runtime and invalid access may return a >> constant value (such as 0), or the value at any valid index of the same >> vertex buffer. In this case (possibly incorrect) geometry would be rendered >> and no error would be generated. But no accesses outside the vertex buffer >> bounds would occur. >> ----- >> > > (typo: runtime and => runtime an) > > This change wouldn't allow the use of GL_ARB_robustness' robust buffer > access as specified today. Robust buffer access only guarantees the > following: "indices within the vertex array that lie outside the arrays > defined for enabled attributes result in undefined values for the > corresponding attributes, but cannot result in application failure". See > http://www.opengl.org/registry/specs/ARB/robustness.txt . For example, it > might fetch other random data reachable by the context (and, hopefully, > produced by that context). > > > Yes, I noticed that. 
But I also thought the text I wrote essentially > matches the DX10 behavior mandated by Microsoft. If so, this should be an > achievable goal for driver vendors. > > > What's the goal? To allow the use of robust buffer access as currently > written, or to specify stronger guarantees for ARB_robustness and > incorporate those into the WebGL spec? If the latter, then I think we should > spec the stronger robust buffer access first, so that GPU vendors are on > board, and we know what the parameters will be. > > > Right. We should get with the driver vendors and understand their current, > DX10 motivated, behavior. Then we can hopefully mandate that behavior in > ARB_robustness. > Sounds like a good plan. > From a process standpoint I'd like to wait to make a change like this until > after we've snapshotted the next version of the WebGL spec, because this is > a significant change requiring a good deal of consideration. > > > > I can live with that. The whole point of this is to allow us to eventually, > on some hardware, get rid of the shadow copy of the array indices and the > need to run through them on each call to drawElements. > Understood. It's likely that every WebGL implementation already does caching to avoid re-scanning indices where possible, but making it unnecessary would dramatically speed up some use cases. -Ken From RTr...@ Fri Oct 28 17:14:55 2011 From: RTr...@ (Robert Tray) Date: Fri, 28 Oct 2011 17:14:55 -0700 Subject: [Public WebGL] Conformance 1.0.1b testing name string length Message-ID: Gregg & Ken, I see that the number of tests continues to increase, which is good for ensuring robust implementations. Some of the most recent additions to the conformance suite test the max name string length of 256 (for bindAttribLocation, getAttribLocation, getUniformLocation). I'm trying to track down which spec has that info.
Some stuff comes from the webgl spec, others from gles, or glsl, or opengl2. If you could give me a clue where to look I'd appreciate it. (I need the info for code reviews.) I notice some tests mention which spec or where to find the requirement they're testing. It would be cool if some of the future tests that exercised the fringes of the spec could maybe have a comment in the source mentioning the requirement location. Most of the time it's obvious but sometimes it's hard to find. In the glsl/misc/attrib-location-length-limits.html test there may be a precedence of errors question. (I haven't debugged this so I'm not positive.) The last check in that test is that getAttribLocation should fail and the error code should be INVALID_VALUE. But since the program is not linked then INVALID_OPERATION could be returned. Does the spec list precedence of errors? Robert Tray From kbr...@ Fri Oct 28 17:51:48 2011 From: kbr...@ (Kenneth Russell) Date: Fri, 28 Oct 2011 17:51:48 -0700 Subject: [Public WebGL] Conformance 1.0.1b testing name string length In-Reply-To: References: Message-ID: On Fri, Oct 28, 2011 at 5:14 PM, Robert Tray wrote: > Gregg & Ken, > > > > I see that the number of tests continues to increase which is good for > ensuring robust implementations. > > > > Some of the most recent additions to the conformance suite test the max name > string length of 256 (for bindAttribLocation, getAttribLocation, > getUniformLocation). I'm trying to track down which spec has that info. > Some stuff comes from the webgl spec, others from gles, or glsl, or > opengl2. > > If you could give me a clue where to look I'd appreciate it. (I need the > info for code reviews.) I notice some tests mention which spec or where to > find the requirement they're testing.
It would be cool if some of the future > tests that exercised the fringes of the spec could maybe have a comment in > the source mentioning the requirement location. Most of the time it's > obvious but sometimes it's hard to find. Hi Robert, Thanks for pointing this out. I've fixed a few recently added tests to indicate where the limits come from. Please point out if I missed any. > In the glsl/misc/attrib-location-length-limits.html test there may be a > precedence of errors question. (I haven't debugged this so I'm not > positive.) The last check in that test is that getAttribLocation should > fail and the error code should be INVALID_VALUE. But since the program is > not linked then INVALID_OPERATION could be returned. > Does the spec list > precedence of errors? Thanks for pointing this out also; I've fixed the test. If you find any other errors please highlight them. -Ken > > > > > > Robert Tray From dsh...@ Mon Oct 31 14:15:23 2011 From: dsh...@ (Doug Sherk) Date: Mon, 31 Oct 2011 14:15:23 -0700 Subject: [Public WebGL] WEBKIT_ extensions Message-ID: <4EAF0FEB.1040604@mozilla.com> We have implemented the WEBKIT_lose_context extension in Firefox, but we're unsure of what the naming scheme here should be. It is written as a Khronos spec (http://www.khronos.org/registry/webgl/extensions/WEBKIT_lose_context/) but the tag implies that it is specific to WebKit. Should we retag it, perhaps as MOZ_lose_context? We have also been talking about trying to standardize a tag like "WEBGL_" for things like this that are useful for more than one vendor.
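Until the tag question is settled, a page that wants this functionality has to probe each vendor prefix. A sketch of that probing follows; the helper name is invented, and "MOZ_lose_context" here is only the hypothetical retag mentioned above, not a shipping name.

```javascript
// Sketch: probe vendor-prefixed names for the lose-context extension.
// A single agreed WEBGL_ tag would reduce this to one getExtension call.
function getLoseContextExtension(gl) {
  return (
    gl.getExtension("WEBGL_lose_context") ||
    gl.getExtension("WEBKIT_lose_context") ||
    gl.getExtension("MOZ_lose_context")   // hypothetical retag
  );
}

// Usage, assuming 'gl' is a live WebGLRenderingContext:
//   const ext = getLoseContextExtension(gl);
//   if (ext) ext.loseContext(); // later: ext.restoreContext();
```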
Regards, -- Doug From bja...@ Mon Oct 31 14:53:18 2011 From: bja...@ (Benoit Jacob) Date: Mon, 31 Oct 2011 17:53:18 -0400 Subject: [Public WebGL] WEBKIT_ extensions In-Reply-To: <4EAF0FEB.1040604@mozilla.com> References: <4EAF0FEB.1040604@mozilla.com> Message-ID: <4EAF18CE.8050401@mozilla.com> In the case of WEBKIT_lose_context, since it is so simple and useful for all browsers, I would really like it to become WEBGL_lose_context. Does anybody object to that, and what steps need to be taken to make that happen? Benoit On 31/10/11 05:15 PM, Doug Sherk wrote: > > We have implemented the WEBKIT_lose_context extension in Firefox, but > we're unsure of what the naming scheme here should be. It is written as > a Khronos spec > (http://www.khronos.org/registry/webgl/extensions/WEBKIT_lose_context/) > but the tag implies that is specific to WebKit. Should we retag it, > perhaps as MOZ_lose_context? We have also been talking about trying to > standardize tag like "WEBGL_" for things like this that are useful for > more than one vendor. > > Regards, > > -- > Doug