From jda...@ Mon Jul 7 18:22:24 2014 From: jda...@ (John Davis) Date: Mon, 7 Jul 2014 21:22:24 -0400 Subject: [Public WebGL] IE instruction count Message-ID: For those interested in vertex and fragment shader instruction counts for IE, here's the latest ... http://connect.microsoft.com/IE/feedback/details/797995/shader-size-limitation-in-webgl-is-too-small#tabs -------------- next part -------------- An HTML attachment was scrubbed... URL: From jgi...@ Thu Jul 10 17:00:24 2014 From: jgi...@ (Jeff Gilbert) Date: Thu, 10 Jul 2014 17:00:24 -0700 (PDT) Subject: [Public WebGL] Should extension-related webidl interfaces be `[NoInterfaceObject]`? In-Reply-To: <2038075810.6015116.1405036503733.JavaMail.zimbra@mozilla.com> Message-ID: <1269154715.6015861.1405036824223.JavaMail.zimbra@mozilla.com> Currently neither Chrome nor Firefox (at least) appear to expose these interfaces (such as `WebGLVertexArrayObjectOES` and `OES_vertex_array_object`), but in the specs, they are not marked as `[NoInterfaceObject]`. Should they be exposed on the `window` object, allowing for `foo instanceof WebGLVertexArrayObjectOES`? -Jeff ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From oet...@ Fri Jul 11 06:56:29 2014 From: oet...@ (Olli Etuaho) Date: Fri, 11 Jul 2014 15:56:29 +0200 Subject: [Public WebGL] Moving WEBGL_lose_context to WebGL 2 core? Message-ID: Hi all, now that WebGL 2 is still being defined, I think we could take the opportunity to make loseContext/restoreContext calls part of the core specification to reduce the number of extensions and make the functions as easily accessible to app developers as possible. Any reasons why we shouldn't do this? I imagine the extra effort required by this would be quite small. -Olli ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From jgi...@ Fri Jul 11 16:43:38 2014 From: jgi...@ (Jeff Gilbert) Date: Fri, 11 Jul 2014 16:43:38 -0700 (PDT) Subject: [Public WebGL] Moving WEBGL_lose_context to WebGL 2 core? In-Reply-To: References: Message-ID: <744653840.6195644.1405122218746.JavaMail.zimbra@mozilla.com> WEBGL_lose_context as it stands today is only useful for debugging context loss and restoration, but is not actually useful for the game once it's running on a user's machine. If anything, it would probably be fine if we moved this extension behind a development flag, since it's really only for development and conformance testing purposes. ----- Original Message ----- From: "Olli Etuaho" To: "public webgl" Sent: Friday, July 11, 2014 6:56:29 AM Subject: [Public WebGL] Moving WEBGL_lose_context to WebGL 2 core? Hi all, now that WebGL 2 is still being defined, I think we could take the opportunity to make loseContext/restoreContext calls part of the core specification to reduce the number of extensions and make the functions as easily accessible to app developers as possible. Any reasons why we shouldn't do this? I imagine the extra effort required by this would be quite small. 
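For context, a minimal sketch of how WEBGL_lose_context is typically exercised together with the context-loss events (illustrative only, not from the thread; the handler bodies stand in for whatever teardown and resource recreation an application actually does):

    var canvas = document.createElement('canvas');
    var gl = canvas.getContext('webgl');
    var ext = gl && gl.getExtension('WEBGL_lose_context'); // null if unsupported

    canvas.addEventListener('webglcontextlost', function (event) {
      event.preventDefault(); // signal that restoration will be handled
      // Stop rendering and drop references to now-invalid WebGL objects here.
      if (ext) setTimeout(function () { ext.restoreContext(); }, 0);
    });

    canvas.addEventListener('webglcontextrestored', function () {
      // Recreate shaders, buffers and textures before resuming rendering.
    });

    if (ext) {
      ext.loseContext(); // simulate a lost context, e.g. from a test harness
    }

Whether calls like these belong in the core specification, stay behind an extension, or move behind a development flag is exactly what the rest of this thread debates.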
-Olli ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From jgi...@ Fri Jul 11 16:44:37 2014 From: jgi...@ (Jeff Gilbert) Date: Fri, 11 Jul 2014 16:44:37 -0700 (PDT) Subject: [Public WebGL] Should extension-related webidl interfaces be `[NoInterfaceObject]`? In-Reply-To: <1269154715.6015861.1405036824223.JavaMail.zimbra@mozilla.com> References: <1269154715.6015861.1405036824223.JavaMail.zimbra@mozilla.com> Message-ID: <968534166.6195690.1405122277234.JavaMail.zimbra@mozilla.com> Dan Glastonbury threw together this page, which dumps the extension types for WebGL: http://jsbin.com/xikiwuzo/1/edit ----- Original Message ----- From: "Jeff Gilbert" To: "public webgl" Sent: Thursday, July 10, 2014 5:00:24 PM Subject: [Public WebGL] Should extension-related webidl interfaces be `[NoInterfaceObject]`? Currently neither Chrome nor Firefox (at least) appear to expose these interfaces (such as `WebGLVertexArrayObjectOES` and `OES_vertex_array_object`), but in the specs, they are not marked as `[NoInterfaceObject]`. Should they be exposed on the `window` object, allowing for `foo instanceof WebGLVertexArrayObjectOES`? -Jeff ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From jgi...@ Fri Jul 11 16:53:49 2014 From: jgi...@ (Jeff Gilbert) Date: Fri, 11 Jul 2014 16:53:49 -0700 (PDT) Subject: [Public WebGL] Should extension-related webidl interfaces be `[NoInterfaceObject]`? 
In-Reply-To: <968534166.6195690.1405122277234.JavaMail.zimbra@mozilla.com> References: <1269154715.6015861.1405036824223.JavaMail.zimbra@mozilla.com> <968534166.6195690.1405122277234.JavaMail.zimbra@mozilla.com> Message-ID: <703220469.6198136.1405122829800.JavaMail.zimbra@mozilla.com> To save people work, here are the results I get: Firefox Nightly: EXT_frag_depth WebGLExtensionFragDepth EXT_texture_filter_anisotropic WebGLExtensionTextureFilterAnisotropic OES_element_index_uint WebGLExtensionElementIndexUint OES_standard_derivatives WebGLExtensionStandardDerivatives OES_texture_float WebGLExtensionTextureFloat OES_texture_float_linear WebGLExtensionTextureFloatLinear OES_texture_half_float WebGLExtensionTextureHalfFloat OES_texture_half_float_linear WebGLExtensionTextureHalfFloatLinear OES_vertex_array_object WebGLExtensionVertexArray WEBGL_compressed_texture_s3tc WebGLExtensionCompressedTextureS3TC WEBGL_depth_texture WebGLExtensionDepthTexture WEBGL_lose_context WebGLExtensionLoseContext MOZ_WEBGL_lose_context WebGLExtensionLoseContext MOZ_WEBGL_compressed_texture_s3tc WebGLExtensionCompressedTextureS3TC MOZ_WEBGL_depth_texture WebGLExtensionDepthTexture Chrome Dev: ANGLE_instanced_arrays ANGLEInstancedArrays EXT_blend_minmax EXTBlendMinMax EXT_frag_depth EXTFragDepth EXT_shader_texture_lod EXTShaderTextureLOD EXT_texture_filter_anisotropic EXTTextureFilterAnisotropic WEBKIT_EXT_texture_filter_anisotropic EXTTextureFilterAnisotropic OES_element_index_uint OESElementIndexUint OES_standard_derivatives OESStandardDerivatives OES_texture_float OESTextureFloat OES_texture_float_linear OESTextureFloatLinear OES_texture_half_float OESTextureHalfFloat OES_texture_half_float_linear OESTextureHalfFloatLinear OES_vertex_array_object OESVertexArrayObject WEBGL_compressed_texture_s3tc WebGLCompressedTextureS3TC WEBKIT_WEBGL_compressed_texture_s3tc WebGLCompressedTextureS3TC WEBGL_debug_renderer_info WebGLDebugRendererInfo WEBGL_debug_shaders WebGLDebugShaders WEBGL_depth_texture WebGLDepthTexture WEBKIT_WEBGL_depth_texture WebGLDepthTexture WEBGL_draw_buffers WebGLDrawBuffers WEBGL_lose_context WebGLLoseContext WEBKIT_WEBGL_lose_context WebGLLoseContext While it probably doesn't matter much, we should probably add conformance tests to assure that these extensions match the type established in the webidl. ----- Original Message ----- From: "Jeff Gilbert" To: "public webgl" Sent: Friday, July 11, 2014 4:44:37 PM Subject: Re: [Public WebGL] Should extension-related webidl interfaces be `[NoInterfaceObject]`? Dan Glastonbury threw together this page, which dumps the extension types for WebGL: http://jsbin.com/xikiwuzo/1/edit ----- Original Message ----- From: "Jeff Gilbert" To: "public webgl" Sent: Thursday, July 10, 2014 5:00:24 PM Subject: [Public WebGL] Should extension-related webidl interfaces be `[NoInterfaceObject]`? Currently neither Chrome nor Firefox (at least) appear to expose these interfaces (such as `WebGLVertexArrayObjectOES` and `OES_vertex_array_object`), but in the specs, they are not marked as `[NoInterfaceObject]`. Should they be exposed on the `window` object, allowing for `foo instanceof WebGLVertexArrayObjectOES`? 
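A small sketch of what the question affects in practice (illustrative only; `OES_vertex_array_object` is used as the example extension): the `instanceof` test below is only possible if the interface object is exposed on `window`, i.e. if the interface is not marked `[NoInterfaceObject]`, while the extension object's own methods work either way.

    var gl = document.createElement('canvas').getContext('webgl');
    var ext = gl && gl.getExtension('OES_vertex_array_object');

    if (ext) {
      var vao = ext.createVertexArrayOES();
      ext.bindVertexArrayOES(vao);

      // Only possible when the interface object is exposed on `window`:
      if (typeof WebGLVertexArrayObjectOES !== 'undefined') {
        console.log(vao instanceof WebGLVertexArrayObjectOES);
      }

      // Works regardless of whether a named global interface exists:
      console.log(ext.isVertexArrayOES(vao)); // true
    }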
-Jeff ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From bja...@ Fri Jul 11 16:58:53 2014 From: bja...@ (Benoit Jacob) Date: Fri, 11 Jul 2014 16:58:53 -0700 (PDT) Subject: [Public WebGL] Should extension-related webidl interfaces be `[NoInterfaceObject]`? In-Reply-To: <703220469.6198136.1405122829800.JavaMail.zimbra@mozilla.com> References: <1269154715.6015861.1405036824223.JavaMail.zimbra@mozilla.com> <968534166.6195690.1405122277234.JavaMail.zimbra@mozilla.com> <703220469.6198136.1405122829800.JavaMail.zimbra@mozilla.com> Message-ID: <805171206.3815874.1405123133559.JavaMail.zimbra@mozilla.com> I believe that interfaces in WebGL extensions should be no different from interfaces in the core WebGL spec in this respect, so as to minimize the incompatibility risk as people from (WebGL version N plus extensions) to (WebGL version N+1). Benoit ----- Original Message ----- > > To save people work, here are the results I get: > Firefox Nightly: > EXT_frag_depth WebGLExtensionFragDepth > EXT_texture_filter_anisotropic WebGLExtensionTextureFilterAnisotropic > OES_element_index_uint WebGLExtensionElementIndexUint > OES_standard_derivatives WebGLExtensionStandardDerivatives > OES_texture_float WebGLExtensionTextureFloat > OES_texture_float_linear WebGLExtensionTextureFloatLinear > OES_texture_half_float WebGLExtensionTextureHalfFloat > OES_texture_half_float_linear WebGLExtensionTextureHalfFloatLinear > OES_vertex_array_object WebGLExtensionVertexArray > WEBGL_compressed_texture_s3tc WebGLExtensionCompressedTextureS3TC > WEBGL_depth_texture WebGLExtensionDepthTexture > WEBGL_lose_context WebGLExtensionLoseContext > MOZ_WEBGL_lose_context WebGLExtensionLoseContext > MOZ_WEBGL_compressed_texture_s3tc WebGLExtensionCompressedTextureS3TC > MOZ_WEBGL_depth_texture WebGLExtensionDepthTexture > > Chrome Dev: > ANGLE_instanced_arrays ANGLEInstancedArrays > EXT_blend_minmax EXTBlendMinMax > EXT_frag_depth EXTFragDepth > EXT_shader_texture_lod EXTShaderTextureLOD > EXT_texture_filter_anisotropic EXTTextureFilterAnisotropic > WEBKIT_EXT_texture_filter_anisotropic EXTTextureFilterAnisotropic > OES_element_index_uint OESElementIndexUint > OES_standard_derivatives OESStandardDerivatives > OES_texture_float OESTextureFloat > OES_texture_float_linear OESTextureFloatLinear > OES_texture_half_float OESTextureHalfFloat > OES_texture_half_float_linear OESTextureHalfFloatLinear > OES_vertex_array_object OESVertexArrayObject > WEBGL_compressed_texture_s3tc WebGLCompressedTextureS3TC > WEBKIT_WEBGL_compressed_texture_s3tc WebGLCompressedTextureS3TC > WEBGL_debug_renderer_info WebGLDebugRendererInfo > WEBGL_debug_shaders WebGLDebugShaders > WEBGL_depth_texture WebGLDepthTexture > 
WEBKIT_WEBGL_depth_texture WebGLDepthTexture > WEBGL_draw_buffers WebGLDrawBuffers > WEBGL_lose_context WebGLLoseContext > WEBKIT_WEBGL_lose_context WebGLLoseContext > > While it probably doesn't matter much, we should probably add conformance > tests to assure that these extensions match the type established in the > webidl. > > ----- Original Message ----- > From: "Jeff Gilbert" > To: "public webgl" > Sent: Friday, July 11, 2014 4:44:37 PM > Subject: Re: [Public WebGL] Should extension-related webidl interfaces be > `[NoInterfaceObject]`? > > > Dan Glastonbury threw together this page, which dumps the extension types for > WebGL: > http://jsbin.com/xikiwuzo/1/edit > > ----- Original Message ----- > From: "Jeff Gilbert" > To: "public webgl" > Sent: Thursday, July 10, 2014 5:00:24 PM > Subject: [Public WebGL] Should extension-related webidl interfaces be > `[NoInterfaceObject]`? > > > Currently neither Chrome nor Firefox (at least) appear to expose these > interfaces (such as `WebGLVertexArrayObjectOES` and > `OES_vertex_array_object`), but in the specs, they are not marked as > `[NoInterfaceObject]`. > > Should they be exposed on the `window` object, allowing for `foo instanceof > WebGLVertexArrayObjectOES`? > > -Jeff > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Sat Jul 12 03:06:14 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Sat, 12 Jul 2014 12:06:14 +0200 Subject: [Public WebGL] Moving WEBGL_lose_context to WebGL 2 core? In-Reply-To: <744653840.6195644.1405122218746.JavaMail.zimbra@mozilla.com> References: <744653840.6195644.1405122218746.JavaMail.zimbra@mozilla.com> Message-ID: On Sat, Jul 12, 2014 at 1:43 AM, Jeff Gilbert wrote: > > WEBGL_lose_context as it stands today is only useful for debugging context > loss and restoration, but is not actually useful for the game once it's > running on a user's machine. If anything, it would probably be fine if we > moved this extension behind a development flag, since it's really only for > development and conformance testing purposes. I disagree with that assertion on the grounds that it would make user-driven and/or automated and/or distributed testing of context loss handling impossible. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oet...@ Tue Jul 15 04:53:18 2014 From: oet...@ (Olli Etuaho) Date: Tue, 15 Jul 2014 13:53:18 +0200 Subject: [Public WebGL] Removing drawRangeElements from WebGL 2 Message-ID: I submitted a pull request removing drawRangeElements from WebGL 2, along with a detailed explanation why: https://github.com/KhronosGroup/WebGL/pull/624 Please raise your objections if you have any. -Olli ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Tue Jul 15 07:45:16 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Tue, 15 Jul 2014 16:45:16 +0200 Subject: [Public WebGL] Removing drawRangeElements from WebGL 2 In-Reply-To: References: Message-ID: I see the difficulty with supporting this function. Nevertheless, it is part of ES 3.0, and WebGL 2.0 is supposed to be a fully compliant implementation of ES 3.0. A problem similar to that of drawRangeElements already exists with drawElements, and although it is not solved very satisfactorily (validation can still occur), it is possible to avoid repeated validation in drawElements (by not changing the predicate upon which a check is cached). Is there some extension or core specification in other GL versions that makes guarantees about drawRangeElements? What is the interaction of this function with ARB_robustness? On Tue, Jul 15, 2014 at 1:53 PM, Olli Etuaho wrote: > > I submitted a pull request removing drawRangeElements from WebGL 2, along > with a detailed explanation why: > https://github.com/KhronosGroup/WebGL/pull/624 > > Please raise your objections if you have any. > > -Olli > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cal...@ Tue Jul 15 08:02:47 2014 From: cal...@ (Mark Callow) Date: Tue, 15 Jul 2014 08:02:47 -0700 Subject: [Public WebGL] Removing drawRangeElements from WebGL 2 In-Reply-To: References: Message-ID: <53C54297.2020805@artspark.co.jp> On 2014/07/15 4:53, Olli Etuaho wrote: > I submitted a pull request removing drawRangeElements from WebGL 2, along with a detailed explanation why: https://github.com/KhronosGroup/WebGL/pull/624 > > Please raise your objections if you have any. > There may be no performance benefit to this call but there is still a benefit to app writers in not having to change their ES 3.0 code, especially those porting apps via Emscripten. Only if the checks in drawRangeElements have a performance cost much larger than that of the similar checks in drawElements should the topic of removal be discussed. Regards -Mark -- NOTE: This electronic mail message may contain confidential and privileged information from HI Corporation. If you are not the intended recipient, any disclosure, photocopying, distribution or use of the contents of the received information is prohibited.
If you have received this e-mail in error, please notify the sender immediately and permanently delete this message and all related copies. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Tue Jul 15 08:03:05 2014 From: bja...@ (Benoit Jacob) Date: Tue, 15 Jul 2014 08:03:05 -0700 (PDT) Subject: [Public WebGL] Removing drawRangeElements from WebGL 2 In-Reply-To: References: Message-ID: <260900090.4097920.1405436585005.JavaMail.zimbra@mozilla.com> ----- Original Message ----- > > I submitted a pull request removing drawRangeElements from WebGL 2, along > with a detailed explanation why: > https://github.com/KhronosGroup/WebGL/pull/624 I would need more detail than that, to understand what the problem with drawRangeElements is. "WebGL would need to validate that the indices are in the start, end range to ensure consistent behavior..." Sure thing - and that's mostly what WebGL has to do anyway for drawElements, except more expensive: for drawElements, it has to track the maximum-element-in-any-contiguous-sub-array, and now it also has to track the minimum-element-in-any-contiguous-sub-array. "...and no performance benefits could be realized by exposing this function." I would like to understand how that conclusion follows. As far as I can see, the overhead for drawRangeElements is strictly less than 2x the overhead for drawElements, for the reason explained above. So it's still the same order of magnitude. Have you found any testcase where Firefox Nightly's drawElements validation overhead was large? I know there's https://www.khronos.org/registry/webgl/sdk/tests/extra/webgl-drawelements-validation.html which did hit one such case, but we fixed it in Firefox 32 (currently on the Aurora channel). Is there any problem left? If not, then why do you expect there to be problems with drawRangeElements? Benoit > > Please raise your objections if you have any. > > -Olli > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From oet...@ Tue Jul 15 08:42:54 2014 From: oet...@ (Olli Etuaho) Date: Tue, 15 Jul 2014 17:42:54 +0200 Subject: [Public WebGL] Removing drawRangeElements from WebGL 2 In-Reply-To: <260900090.4097920.1405436585005.JavaMail.zimbra@mozilla.com> References: ,<260900090.4097920.1405436585005.JavaMail.zimbra@mozilla.com> Message-ID: You are correct that the drawRangeElements overhead imposed by WebGL should be < 2x drawElements overhead, and if start is set to zero it can be made identical to drawElements overhead. But since the only reason for drawRangeElements to originally exist is to provide improved performance over drawElements, additional overhead could make it worthless. A WebGL implementation is either way able to use drawRangeElements under the hood to get the possible performance benefits from the driver. 
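For reference, the two call shapes under discussion, as they appear in ES 3.0 and the WebGL 2 draft (a sketch only; `gl` is assumed to be a WebGL 2 context with buffers and attributes already set up, and the remaining variables are placeholder values):

    var count = 36, byteOffset = 0, start = 0, end = 23; // placeholder values

    // Plain indexed draw: the implementation must know (or have cached) the
    // maximum index in the consumed range to validate against attribute buffers.
    gl.drawElements(gl.TRIANGLES, count, gl.UNSIGNED_SHORT, byteOffset);

    // Range variant: the caller promises every consumed index i satisfies
    // start <= i <= end, which in principle lets a driver begin fetching and
    // converting vertex data without scanning the index buffer first.
    gl.drawRangeElements(gl.TRIANGLES, start, end, count, gl.UNSIGNED_SHORT, byteOffset);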
Maybe it's not as clear-cut as I initially thought, though: calling drawRangeElements in WebGL with start index that's much greater than 0 could still provide the WebGL implementation a good hint that it is worthwhile to actually check the minimum index and call underlying drawRangeElements with the original start value. To answer Florian's comments, I don't think that other GL specs than GLES3 address this either. Removing the function from WebGL 2 would also reduce compatibility with GLES3, as Mark pointed out. I'd like to point out that providing a compatibility shim is trivial in this case, though. I'd like to still hear more comments on this, but based on the conversation so far I'm not as keen on removing the function any more. -Olli ________________________________________ From: Benoit Jacob [bjacob...@] Sent: Tuesday, July 15, 2014 6:03 PM To: Olli Etuaho Cc: public webgl Subject: Re: [Public WebGL] Removing drawRangeElements from WebGL 2 ----- Original Message ----- > > I submitted a pull request removing drawRangeElements from WebGL 2, along > with a detailed explanation why: > https://github.com/KhronosGroup/WebGL/pull/624 I would need more detail than that, to understand what the problem with drawRangeElements is. "WebGL would need to validate that the indices are in the start, end range to ensure consistent behavior..." Sure thing - and that's mostly what WebGL has to do anyway for drawElements, except more expensive: for drawElements, it has to track the maximum-element-in-any-contiguous-sub-array, and now it also has to track the minimum-element-in-any-contiguous-sub-array. "...and no performance benefits could be realized by exposing this function." I would like to understand how that conclusion follows. As far as I can see, the overhead for drawRangeElements is strictly less than 2x the overhead for drawElements, for the reason explained above. So it's still the same order of magnitude. Have you found any testcase where Firefox Nightly's drawElements validation overhead was large? I know there's https://www.khronos.org/registry/webgl/sdk/tests/extra/webgl-drawelements-validation.html which did hit one such case, but we fixed it in Firefox 32 (currently on the Aurora channel). Is there any problem left? If not, then why do you expect there to be problems with drawRangeElements? Benoit > > Please raise your objections if you have any. > > -Olli > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From thu...@ Tue Jul 15 09:42:13 2014 From: thu...@ (Ben Adams) Date: Tue, 15 Jul 2014 17:42:13 +0100 Subject: [Public WebGL] Removing drawRangeElements from WebGL 2 In-Reply-To: References: <260900090.4097920.1405436585005.JavaMail.zimbra@mozilla.com> Message-ID: I see the issue with drawRangeElements as there could be a mis-match between what the indices say and what the parameters say so it needs checking; thus being self defeating. 
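The compatibility shim Olli mentions above could look roughly like this (a hypothetical sketch, assuming `gl` is the application's rendering context): it forwards the call and drops the range hint, which would keep ported ES 3.0 / Emscripten code running if the entry point were ever absent.

    if (typeof gl.drawRangeElements !== 'function') {
      gl.drawRangeElements = function (mode, start, end, count, type, offset) {
        // The [start, end] hint is discarded; behavior matches drawElements.
        return this.drawElements(mode, count, type, offset);
      };
    }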
On a related question if drawElements is called for the full buffer once; and the index buffer data is never changed; but the other buffer's subdata is changed continuously; I would assume there would be no revalidation when drawElements is called with various different ranges e.g. for object pooling - but is this what happens? Kind regards Ben Adams @ben_adams On 15 July 2014 16:42, Olli Etuaho wrote: > > You are correct that the drawRangeElements overhead imposed by WebGL > should be < 2x drawElements overhead, and if start is set to zero it can be > made identical to drawElements overhead. But since the only reason for > drawRangeElements to originally exist is to provide improved performance > over drawElements, additional overhead could make it worthless. A WebGL > implementation is either way able to use drawRangeElements under the hood > to get the possible performance benefits from the driver. Maybe it's not as > clear-cut as I initially thought, though: calling drawRangeElements in > WebGL with start index that's much greater than 0 could still provide the > WebGL implementation a good hint that it is worthwhile to actually check > the minimum index and call underlying drawRangeElements with the original > start value. > > To answer Florian's comments, I don't think that other GL specs than GLES3 > address this either. > > Removing the function from WebGL 2 would also reduce compatibility with > GLES3, as Mark pointed out. I'd like to point out that providing a > compatibility shim is trivial in this case, though. > > I'd like to still hear more comments on this, but based on the > conversation so far I'm not as keen on removing the function any more. > > -Olli > ________________________________________ > From: Benoit Jacob [bjacob...@] > Sent: Tuesday, July 15, 2014 6:03 PM > To: Olli Etuaho > Cc: public webgl > Subject: Re: [Public WebGL] Removing drawRangeElements from WebGL 2 > > ----- Original Message ----- > > > > I submitted a pull request removing drawRangeElements from WebGL 2, along > > with a detailed explanation why: > > https://github.com/KhronosGroup/WebGL/pull/624 > > I would need more detail than that, to understand what the problem with > drawRangeElements is. > > "WebGL would need to validate that the indices are in the start, end range > to ensure consistent behavior..." > > Sure thing - and that's mostly what WebGL has to do anyway for > drawElements, except more expensive: for drawElements, it has to track the > maximum-element-in-any-contiguous-sub-array, and now it also has to track > the minimum-element-in-any-contiguous-sub-array. > > "...and no performance benefits could be realized by exposing this > function." > > I would like to understand how that conclusion follows. As far as I can > see, the overhead for drawRangeElements is strictly less than 2x the > overhead for drawElements, for the reason explained above. So it's still > the same order of magnitude. Have you found any testcase where Firefox > Nightly's drawElements validation overhead was large? I know there's > https://www.khronos.org/registry/webgl/sdk/tests/extra/webgl-drawelements-validation.html > which did hit one such case, but we fixed it in Firefox 32 (currently on > the Aurora channel). Is there any problem left? If not, then why do you > expect there to be problems with drawRangeElements? > > Benoit > > > > > Please raise your objections if you have any. 
> > > > -Olli > > ----------------------------------------------------------- > > You are currently subscribed to public_webgl...@ > > To unsubscribe, send an email to majordomo...@ with > > the following command in the body of your email: > > unsubscribe public_webgl > > ----------------------------------------------------------- > > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Tue Jul 15 11:12:22 2014 From: kbr...@ (Kenneth Russell) Date: Tue, 15 Jul 2014 11:12:22 -0700 Subject: [Public WebGL] Removing drawRangeElements from WebGL 2 In-Reply-To: References: <260900090.4097920.1405436585005.JavaMail.zimbra@mozilla.com> Message-ID: While the primary goal of the WebGL spec is to maintain compatibility with OpenGL ES, it's also true that the WebGL spec has aimed to eliminate undefined behavior throughout. If DrawRangeElements is to remain in the WebGL 2 spec, then its error behavior will have to be defined and tests will have to be written. Currently WebGL's drawElements call requires indices to be validated, resulting in performance problems in some applications. Work is underway to bring GL_ARB_robust_buffer_access_behavior to OpenGL ES. The WebGL spec could then be relaxed to no longer require index validation, but still have testable behavior. Until this happens, drawRangeElements would require the same index validation that drawElements does. Also note that an optimized WebGL implementation could translate drawElements calls to DrawRangeElements calls internally. If there's any performance benefit from calling DrawRangeElements, it can be achieved without exposing drawRangeElements to JavaScript and having to validate the indices against the start and end range. (This would still require the index validation code to be present in the WebGL implementation, to be able to rapidly query the minimum and maximum index within a given range of an element array buffer.) I'd like to hear from GPU vendors whether DrawRangeElements provides any significant speedup over DrawElements when all of the buffers are allocated on the GPU -- i.e., no client side vertices or indices are in use. If it does, then the entry point should be exposed and WebGL implementations should aim to capture all the available performance, in particular when index validation can be delegated to the GPU. If not, it should be removed, and the focus should be on making drawElements go as fast as possible. -Ken On Tue, Jul 15, 2014 at 8:42 AM, Olli Etuaho wrote: > > You are correct that the drawRangeElements overhead imposed by WebGL should be < 2x drawElements overhead, and if start is set to zero it can be made identical to drawElements overhead. But since the only reason for drawRangeElements to originally exist is to provide improved performance over drawElements, additional overhead could make it worthless. A WebGL implementation is either way able to use drawRangeElements under the hood to get the possible performance benefits from the driver. 
Maybe it's not as clear-cut as I initially thought, though: calling drawRangeElements in WebGL with start index that's much greater than 0 could still provide the WebGL implementation a good hint that it is worthwhile to actually check the minimum index and call underlying drawRangeElements with the original start value. > > To answer Florian's comments, I don't think that other GL specs than GLES3 address this either. > > Removing the function from WebGL 2 would also reduce compatibility with GLES3, as Mark pointed out. I'd like to point out that providing a compatibility shim is trivial in this case, though. > > I'd like to still hear more comments on this, but based on the conversation so far I'm not as keen on removing the function any more. > > -Olli > ________________________________________ > From: Benoit Jacob [bjacob...@] > Sent: Tuesday, July 15, 2014 6:03 PM > To: Olli Etuaho > Cc: public webgl > Subject: Re: [Public WebGL] Removing drawRangeElements from WebGL 2 > > ----- Original Message ----- >> >> I submitted a pull request removing drawRangeElements from WebGL 2, along >> with a detailed explanation why: >> https://github.com/KhronosGroup/WebGL/pull/624 > > I would need more detail than that, to understand what the problem with drawRangeElements is. > > "WebGL would need to validate that the indices are in the start, end range to ensure consistent behavior..." > > Sure thing - and that's mostly what WebGL has to do anyway for drawElements, except more expensive: for drawElements, it has to track the maximum-element-in-any-contiguous-sub-array, and now it also has to track the minimum-element-in-any-contiguous-sub-array. > > "...and no performance benefits could be realized by exposing this > function." > > I would like to understand how that conclusion follows. As far as I can see, the overhead for drawRangeElements is strictly less than 2x the overhead for drawElements, for the reason explained above. So it's still the same order of magnitude. Have you found any testcase where Firefox Nightly's drawElements validation overhead was large? I know there's https://www.khronos.org/registry/webgl/sdk/tests/extra/webgl-drawelements-validation.html which did hit one such case, but we fixed it in Firefox 32 (currently on the Aurora channel). Is there any problem left? If not, then why do you expect there to be problems with drawRangeElements? > > Benoit > >> >> Please raise your objections if you have any. 
>> >> -Olli >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> >> > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From bja...@ Tue Jul 15 11:34:29 2014 From: bja...@ (Benoit Jacob) Date: Tue, 15 Jul 2014 11:34:29 -0700 (PDT) Subject: [Public WebGL] Removing drawRangeElements from WebGL 2 In-Reply-To: References: <260900090.4097920.1405436585005.JavaMail.zimbra@mozilla.com> Message-ID: <874825933.4137605.1405449269243.JavaMail.zimbra@mozilla.com> ----- Original Message ----- > While the primary goal of the WebGL spec is to maintain compatibility > with OpenGL ES, it's also true that the WebGL spec has aimed to > eliminate undefined behavior throughout. If DrawRangeElements is to > remain in the WebGL 2 spec, then its error behavior will have to be > defined and tests will have to be written. > > Currently WebGL's drawElements call requires indices to be validated, > resulting in performance problems in some applications. Work is > underway to bring GL_ARB_robust_buffer_access_behavior to OpenGL ES. There's a mismatch of timeframes here. In the best of cases, it would be several years before a large enough majority of the installed base has GL_ARB_robust_buffer_access_behavior, for browser vendors to want to rely on it. The timeframe for shipping WebGL 2 seems much nearer than that. > The WebGL spec could then be relaxed to no longer require index > validation, but still have testable behavior. Until this happens, > drawRangeElements would require the same index validation that > drawElements does. > > Also note that an optimized WebGL implementation could translate > drawElements calls to DrawRangeElements calls internally. If there's > any performance benefit from calling DrawRangeElements, it can be > achieved without exposing drawRangeElements to JavaScript and having > to validate the indices against the start and end range. Firefox already does that optimization: http://hg.mozilla.org/mozilla-central/file/835e22069c1a/content/canvas/src/WebGLContextDraw.cpp#l316 However, that does not give all of the benefits that drawRangeElements can give, because that does not allow to take advantage of when there is a nonzero lower-bound on the indices. That only automatically gives the other half of drawRangeElements, the upper bound. > (This would > still require the index validation code to be present in the WebGL > implementation, to be able to rapidly query the minimum and maximum > index within a given range of an element array buffer.) > > I'd like to hear from GPU vendors whether DrawRangeElements provides > any significant speedup over DrawElements when all of the buffers are > allocated on the GPU -- i.e., no client side vertices or indices are > in use. 
If it does, then the entry point should be exposed and WebGL > implementations should aim to capture all the available performance, > in particular when index validation can be delegated to the GPU. If > not, it should be removed, and the focus should be on making > drawElements go as fast as possible. Some issues here: - 1. "being allocated on the GPU", by which I mean "in GPU memory", isn't the only issue or even a thing at all. Think of unified memory systems, where there is no separate video memory at all. Those are all of mobile and the majority of desktops these days (think Intel integrated graphics, which are 60% of Firefox users). - 2. What is more relevant is whether we are in the GPU's cache memory. - 3. But even that is not the full story. It's not just a question of taking a buffer and copying it into a cache. The cached vertex data is not just a copy of the data submitted to glBufferData(). The driver typically reencodes vertex data. That makes it even more important to help the driver's heuristics to determine ahead of time what vertex data is going to be used and how it is going to be used. The "how it is going to be used" part is specified in the draw[Range]Elements call. So a lot of work can't even start before the draw call is submitted. With drawElements, a lot of work still can't happen immediately, because the renderer still has to traverse the consumed part of the index buffer to be able to know the range of consumed index values. By contrast, with drawRangeElements, that work can start immediately. I recently had conversations with a GPU vendor that I can't name here, who explained some of the above to me (though some of that is my own extrapolation), so that applies to at least one vendor; I thought of these concepts as reasonably vendor-neutral reasons to think that drawRangeElements could be a perf win across various implementations. Benoit > > -Ken > > > On Tue, Jul 15, 2014 at 8:42 AM, Olli Etuaho wrote: > > > > You are correct that the drawRangeElements overhead imposed by WebGL should > > be < 2x drawElements overhead, and if start is set to zero it can be made > > identical to drawElements overhead. But since the only reason for > > drawRangeElements to originally exist is to provide improved performance > > over drawElements, additional overhead could make it worthless. A WebGL > > implementation is either way able to use drawRangeElements under the hood > > to get the possible performance benefits from the driver. Maybe it's not > > as clear-cut as I initially thought, though: calling drawRangeElements in > > WebGL with start index that's much greater than 0 could still provide the > > WebGL implementation a good hint that it is worthwhile to actually check > > the minimum index and call underlying drawRangeElements with the original > > start value. > > > > To answer Florian's comments, I don't think that other GL specs than GLES3 > > address this either. > > > > Removing the function from WebGL 2 would also reduce compatibility with > > GLES3, as Mark pointed out. I'd like to point out that providing a > > compatibility shim is trivial in this case, though. > > > > I'd like to still hear more comments on this, but based on the conversation > > so far I'm not as keen on removing the function any more. 
> > > > -Olli > > ________________________________________ > > From: Benoit Jacob [bjacob...@] > > Sent: Tuesday, July 15, 2014 6:03 PM > > To: Olli Etuaho > > Cc: public webgl > > Subject: Re: [Public WebGL] Removing drawRangeElements from WebGL 2 > > > > ----- Original Message ----- > >> > >> I submitted a pull request removing drawRangeElements from WebGL 2, along > >> with a detailed explanation why: > >> https://github.com/KhronosGroup/WebGL/pull/624 > > > > I would need more detail than that, to understand what the problem with > > drawRangeElements is. > > > > "WebGL would need to validate that the indices are in the start, end range > > to ensure consistent behavior..." > > > > Sure thing - and that's mostly what WebGL has to do anyway for > > drawElements, except more expensive: for drawElements, it has to track the > > maximum-element-in-any-contiguous-sub-array, and now it also has to track > > the minimum-element-in-any-contiguous-sub-array. > > > > "...and no performance benefits could be realized by exposing this > > function." > > > > I would like to understand how that conclusion follows. As far as I can > > see, the overhead for drawRangeElements is strictly less than 2x the > > overhead for drawElements, for the reason explained above. So it's still > > the same order of magnitude. Have you found any testcase where Firefox > > Nightly's drawElements validation overhead was large? I know there's > > https://www.khronos.org/registry/webgl/sdk/tests/extra/webgl-drawelements-validation.html > > which did hit one such case, but we fixed it in Firefox 32 (currently on > > the Aurora channel). Is there any problem left? If not, then why do you > > expect there to be problems with drawRangeElements? > > > > Benoit > > > >> > >> Please raise your objections if you have any. > >> > >> -Olli > >> ----------------------------------------------------------- > >> You are currently subscribed to public_webgl...@ > >> To unsubscribe, send an email to majordomo...@ with > >> the following command in the body of your email: > >> unsubscribe public_webgl > >> ----------------------------------------------------------- > >> > >> > > > > ----------------------------------------------------------- > > You are currently subscribed to public_webgl...@ > > To unsubscribe, send an email to majordomo...@ with > > the following command in the body of your email: > > unsubscribe public_webgl > > ----------------------------------------------------------- > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From cal...@ Tue Jul 15 11:58:00 2014 From: cal...@ (Mark Callow) Date: Tue, 15 Jul 2014 11:58:00 -0700 Subject: [Public WebGL] Removing drawRangeElements from WebGL 2 In-Reply-To: References: <260900090.4097920.1405436585005.JavaMail.zimbra@mozilla.com> Message-ID: <53C579B8.8070500@artspark.co.jp> On 2014/07/15 11:12, Kenneth Russell wrote: > While the primary goal of the WebGL spec is to maintain compatibility > with OpenGL ES, This sure doesn't come across in the rest of your message (cut for brevity) which advances fairly weak arguments for breaking compatibility and even contradicts its primary argument, by saying that implementations will likely do the min range bounds checking anyway. 
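As a rough illustration of the bounds checking being discussed (a conceptual sketch, not any browser's actual code): a single pass over the consumed slice of the index data yields both the minimum and the maximum, which is why the validation cost of drawRangeElements stays in the same ballpark as that of drawElements.

    // drawElements needs `max` (every enabled attribute array must have at least
    // max + 1 vertices); drawRangeElements additionally needs `min >= start` and
    // `max <= end`. Implementations cache results like this per element buffer.
    function consumedIndexRange(indices, first, count) {
      var min = Infinity, max = -Infinity;
      for (var i = first; i < first + count; ++i) {
        if (indices[i] < min) min = indices[i];
        if (indices[i] > max) max = indices[i];
      }
      return { min: min, max: max };
    }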
> I'd like to hear from GPU vendors whether DrawRangeElements provides > any significant speedup When this issue came up before, I asked the GPU vendors this. At least one said "yes". I don't recall the details, as it was many months ago. I will defer to Benoit's thorough response on this topic. Regards -Mark -- NOTE: This electronic mail message may contain confidential and privileged information from HI Corporation. If you are not the intended recipient, any disclosure, photocopying, distribution or use of the contents of the received information is prohibited. If you have received this e-mail in error, please notify the sender immediately and permanently delete this message and all related copies. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Tue Jul 15 17:26:40 2014 From: kbr...@ (Kenneth Russell) Date: Tue, 15 Jul 2014 17:26:40 -0700 Subject: [Public WebGL] Moving WEBGL_lose_context to WebGL 2 core? In-Reply-To: References: Message-ID: On Fri, Jul 11, 2014 at 6:56 AM, Olli Etuaho wrote: > > Hi all, > > now that WebGL 2 is still being defined, I think we could take the opportunity to make loseContext/restoreContext calls part of the core specification to reduce the number of extensions and make the functions as easily accessible to app developers as possible. Any reasons why we shouldn't do this? I imagine the extra effort required by this would be quite small. I'd be hesitant to guarantee to app developers that they can call loseContext in order to immediately discard all resources from a given WebGL context, and further, that if the context was lost from their own call to loseContext, a later call to restoreContext would restore the context. The WEBGL_lose_context extension is valuable not only for app developers to ensure their code is robust to lost context events, but also for browser developers to make sure the context loss and restoration paths work reliably. Still, putting these routines in the core spec would encourage developers to manually manage contexts' lifetimes in their apps and that's not really compatible with the automatic resource reclamation that the web's built around. (The delete* routines in WebGL reflect the reality that automatic GC can't magically handle all cleanup of expensive resources.) If any change in this area were made, I'd suggest a way to irrevocably shut down a context, but not to restore it afterward. That would be compatible with the "close()" method that resources like WebSockets support. -Ken > -Olli > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Tue Jul 15 18:18:38 2014 From: kbr...@ (Kenneth Russell) Date: Tue, 15 Jul 2014 18:18:38 -0700 Subject: [Public WebGL] Should extension-related webidl interfaces be `[NoInterfaceObject]`?
In-Reply-To: <1269154715.6015861.1405036824223.JavaMail.zimbra@mozilla.com> References: <2038075810.6015116.1405036503733.JavaMail.zimbra@mozilla.com> <1269154715.6015861.1405036824223.JavaMail.zimbra@mozilla.com> Message-ID: The extension registry's been updated to add [NoInterfaceObject] to all WebGL extensions' interfaces. I don't think there's any value to developers in exposing, specifying and testing these interface types. -Ken On Thu, Jul 10, 2014 at 5:00 PM, Jeff Gilbert wrote: > > Currently neither Chrome nor Firefox (at least) appear to expose these interfaces (such as `WebGLVertexArrayObjectOES` and `OES_vertex_array_object`), but in the specs, they are not marked as `[NoInterfaceObject]`. > > Should they be exposed on the `window` object, allowing for `foo instanceof WebGLVertexArrayObjectOES`? > > -Jeff > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From bja...@ Tue Jul 15 18:27:16 2014 From: bja...@ (Benoit Jacob) Date: Tue, 15 Jul 2014 18:27:16 -0700 (PDT) Subject: [Public WebGL] Should extension-related webidl interfaces be `[NoInterfaceObject]`? In-Reply-To: References: <2038075810.6015116.1405036503733.JavaMail.zimbra@mozilla.com> <1269154715.6015861.1405036824223.JavaMail.zimbra@mozilla.com> Message-ID: <737457807.4194216.1405474036492.JavaMail.zimbra@mozilla.com> ----- Original Message ----- > > The extension registry's been updated to add [NoInterfaceObject] to > all WebGL extensions' interfaces. I don't think there's any value to > developers in exposing, specifying and testing these interface types. Why would interfaces be treated differently depending on whether they are in the core spec or in extensions? What about the compatibility concern expressed in my previous email? Benoit > > -Ken > > > On Thu, Jul 10, 2014 at 5:00 PM, Jeff Gilbert wrote: > > > > Currently neither Chrome nor Firefox (at least) appear to expose these > > interfaces (such as `WebGLVertexArrayObjectOES` and > > `OES_vertex_array_object`), but in the specs, they are not marked as > > `[NoInterfaceObject]`. > > > > Should they be exposed on the `window` object, allowing for `foo instanceof > > WebGLVertexArrayObjectOES`? 
> > > > -Jeff > > > > ----------------------------------------------------------- > > You are currently subscribed to public_webgl...@ > > To unsubscribe, send an email to majordomo...@ with > > the following command in the body of your email: > > unsubscribe public_webgl > > ----------------------------------------------------------- > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Tue Jul 15 18:45:02 2014 From: kbr...@ (Kenneth Russell) Date: Tue, 15 Jul 2014 18:45:02 -0700 Subject: [Public WebGL] Should extension-related webidl interfaces be `[NoInterfaceObject]`? In-Reply-To: <737457807.4194216.1405474036492.JavaMail.zimbra@mozilla.com> References: <2038075810.6015116.1405036503733.JavaMail.zimbra@mozilla.com> <1269154715.6015861.1405036824223.JavaMail.zimbra@mozilla.com> <737457807.4194216.1405474036492.JavaMail.zimbra@mozilla.com> Message-ID: I doubt any developer cares about doing instanceof operations on returned extension objects. This has never been raised as a concern by any app developer in the past few years. Rather than spending time writing conformance tests for extensions' types, I think our collective time would be better spent writing conformance tests for OpenGL ES 3.0 / WebGL 2.0 functionality. On Tue, Jul 15, 2014 at 6:27 PM, Benoit Jacob wrote: > > > ----- Original Message ----- >> >> The extension registry's been updated to add [NoInterfaceObject] to >> all WebGL extensions' interfaces. I don't think there's any value to >> developers in exposing, specifying and testing these interface types. > > Why would interfaces be treated differently depending on whether they are in the core spec or in extensions? What about the compatibility concern expressed in my previous email? > > Benoit > >> >> -Ken >> >> >> On Thu, Jul 10, 2014 at 5:00 PM, Jeff Gilbert wrote: >> > >> > Currently neither Chrome nor Firefox (at least) appear to expose these >> > interfaces (such as `WebGLVertexArrayObjectOES` and >> > `OES_vertex_array_object`), but in the specs, they are not marked as >> > `[NoInterfaceObject]`. >> > >> > Should they be exposed on the `window` object, allowing for `foo instanceof >> > WebGLVertexArrayObjectOES`? 
>> > >> > -Jeff >> > >> > ----------------------------------------------------------- >> > You are currently subscribed to public_webgl...@ >> > To unsubscribe, send an email to majordomo...@ with >> > the following command in the body of your email: >> > unsubscribe public_webgl >> > ----------------------------------------------------------- >> > >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> >> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Wed Jul 16 04:15:10 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Wed, 16 Jul 2014 13:15:10 +0200 Subject: [Public WebGL] EXT_disjoint_timer_query encouraged to be implemented Message-ID: The EXT_disjoint_timer_query extension ( http://www.khronos.org/registry/webgl/extensions/EXT_disjoint_timer_query/) has been moved to draft by Kenneth 5 months ago. Tickets for implementation of the functionality are created: - Chrome: https://code.google.com/p/chromium/issues/detail?id=345227 - Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=974832 - Webkit: https://bugs.webkit.org/show_bug.cgi?id=129090 - Angle: https://code.google.com/p/angleproject/issues/detail?id=657 The extension would be very useful for a variety of performance profiling usecases and LOD determination algorithms. The extension is part of OpenGL ES 3.1 core, the WebGL version of which would probably still be a few years out. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dko...@ Wed Jul 16 06:31:08 2014 From: dko...@ (Daniel Koch) Date: Wed, 16 Jul 2014 06:31:08 -0700 Subject: [Public WebGL] EXT_disjoint_timer_query encouraged to be implemented In-Reply-To: References: Message-ID: On 2014-07-16 7:15 AM, "Florian B?sch" > wrote: The EXT_disjoint_timer_query extension (http://www.khronos.org/registry/webgl/extensions/EXT_disjoint_timer_query/) has been moved to draft by Kenneth 5 months ago. Tickets for implementation of the functionality are created: - Chrome: https://code.google.com/p/chromium/issues/detail?id=345227 - Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=974832 - Webkit: https://bugs.webkit.org/show_bug.cgi?id=129090 - Angle: https://code.google.com/p/angleproject/issues/detail?id=657 The extension would be very useful for a variety of performance profiling usecases and LOD determination algorithms. The extension is part of OpenGL ES 3.1 core, the WebGL version of which would probably still be a few years out. This last statement is incorrect. It is NOT part of OpenGL ES 3.1. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Wed Jul 16 06:34:12 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Wed, 16 Jul 2014 15:34:12 +0200 Subject: [Public WebGL] EXT_disjoint_timer_query encouraged to be implemented In-Reply-To: References: Message-ID: Oh, right, I thought it was. Weird. Anyways, makes it even more relevant to get it done. 
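A sketch of the profiling use case, following the draft EXT_disjoint_timer_query API as published in the registry (illustrative only; `drawScene` stands in for an application's actual draw calls):

    var gl = document.createElement('canvas').getContext('webgl');
    var ext = gl && gl.getExtension('EXT_disjoint_timer_query'); // null until implemented

    function drawScene() {
      gl.clear(gl.COLOR_BUFFER_BIT); // placeholder for real rendering work
    }

    if (ext) {
      var query = ext.createQueryEXT();
      ext.beginQueryEXT(ext.TIME_ELAPSED_EXT, query);
      drawScene();
      ext.endQueryEXT(ext.TIME_ELAPSED_EXT);

      // Results arrive asynchronously, typically one or more frames later.
      (function poll() {
        var available = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_AVAILABLE_EXT);
        var disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);
        if (available && !disjoint) {
          var elapsedNs = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_EXT);
          console.log('GPU time: ' + elapsedNs / 1e6 + ' ms');
        } else if (!disjoint) {
          requestAnimationFrame(poll); // keep polling; give up if a disjoint occurred
        }
      })();
    }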
On Wed, Jul 16, 2014 at 3:31 PM, Daniel Koch wrote: > On 2014-07-16 7:15 AM, "Florian B?sch" wrote: > > The EXT_disjoint_timer_query extension ( > http://www.khronos.org/registry/webgl/extensions/EXT_disjoint_timer_query/) > has been moved to draft by Kenneth 5 months ago. > > Tickets for implementation of the functionality are created: > > - Chrome: https://code.google.com/p/chromium/issues/detail?id=345227 > - Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=974832 > - Webkit: https://bugs.webkit.org/show_bug.cgi?id=129090 > - Angle: https://code.google.com/p/angleproject/issues/detail?id=657 > > The extension would be very useful for a variety of performance profiling > usecases and LOD determination algorithms. > > The extension is part of OpenGL ES 3.1 core, the WebGL version of which > would probably still be a few years out. > > > This last statement is incorrect. It is NOT part of OpenGL ES 3.1. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oet...@ Wed Jul 16 08:01:35 2014 From: oet...@ (Olli Etuaho) Date: Wed, 16 Jul 2014 17:01:35 +0200 Subject: [Public WebGL] Removing drawRangeElements from WebGL 2 In-Reply-To: References: <260900090.4097920.1405436585005.JavaMail.zimbra@mozilla.com> , Message-ID: > Currently WebGL's drawElements call requires indices to be validated, > resulting in performance problems in some applications. Work is > underway to bring GL_ARB_robust_buffer_access_behavior to OpenGL ES. > The WebGL spec could then be relaxed to no longer require index > validation, but still have testable behavior. Until this happens, > drawRangeElements would require the same index validation that > drawElements does." Are you sure that ARB_robust_buffer_access_behavior even removes the requirement to do index validation on drawRangeElements? With ARB_robust_buffer_access_behavior, access is restricted within the buffer object, not the specified range, so there's still some room for implementation-dependent behavior of drawRangeElements. Maybe this is good enough to make it testable, but not completely consistent. Re: performance, on our hardware there is no performance benefit from drawRangeElements over drawElements when using buffers allocated on the GPU. -Olli ________________________________________ From: Kenneth Russell [kbr...@] Sent: Tuesday, July 15, 2014 9:12 PM To: Olli Etuaho Cc: Benoit Jacob; public webgl Subject: Re: [Public WebGL] Removing drawRangeElements from WebGL 2 Also note that an optimized WebGL implementation could translate drawElements calls to DrawRangeElements calls internally. If there's any performance benefit from calling DrawRangeElements, it can be achieved without exposing drawRangeElements to JavaScript and having to validate the indices against the start and end range. (This would still require the index validation code to be present in the WebGL implementation, to be able to rapidly query the minimum and maximum index within a given range of an element array buffer.) I'd like to hear from GPU vendors whether DrawRangeElements provides any significant speedup over DrawElements when all of the buffers are allocated on the GPU -- i.e., no client side vertices or indices are in use. If it does, then the entry point should be exposed and WebGL implementations should aim to capture all the available performance, in particular when index validation can be delegated to the GPU. If not, it should be removed, and the focus should be on making drawElements go as fast as possible. 
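(A rough pseudo-implementation of the internal translation described above -- all names are hypothetical, since this logic would live inside the browser's WebGL implementation rather than in application code:)

    // Sketch: the implementation already tracks min/max indices per
    // element-array sub-range for validation, so a plain drawElements can be
    // forwarded to the driver's DrawRangeElements with the known range as a hint.
    function internalDrawElements(ctx, mode, count, type, offset) {
      var range = ctx.indexRangeCache.query(ctx.boundElementArrayBuffer, type, offset, count);
      if (range.max >= ctx.smallestEnabledAttribArraySize()) {
        ctx.synthesizeGLError(ctx.INVALID_OPERATION); // WebGL-mandated validation
        return;
      }
      nativeDrawRangeElements(mode, range.min, range.max, count, type, offset);
    }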
-Ken On Tue, Jul 15, 2014 at 8:42 AM, Olli Etuaho wrote: > > You are correct that the drawRangeElements overhead imposed by WebGL should be < 2x drawElements overhead, and if start is set to zero it can be made identical to drawElements overhead. But since the only reason for drawRangeElements to originally exist is to provide improved performance over drawElements, additional overhead could make it worthless. A WebGL implementation is either way able to use drawRangeElements under the hood to get the possible performance benefits from the driver. Maybe it's not as clear-cut as I initially thought, though: calling drawRangeElements in WebGL with start index that's much greater than 0 could still provide the WebGL implementation a good hint that it is worthwhile to actually check the minimum index and call underlying drawRangeElements with the original start value. > > To answer Florian's comments, I don't think that other GL specs than GLES3 address this either. > > Removing the function from WebGL 2 would also reduce compatibility with GLES3, as Mark pointed out. I'd like to point out that providing a compatibility shim is trivial in this case, though. > > I'd like to still hear more comments on this, but based on the conversation so far I'm not as keen on removing the function any more. > > -Olli > ________________________________________ > From: Benoit Jacob [bjacob...@] > Sent: Tuesday, July 15, 2014 6:03 PM > To: Olli Etuaho > Cc: public webgl > Subject: Re: [Public WebGL] Removing drawRangeElements from WebGL 2 > > ----- Original Message ----- >> >> I submitted a pull request removing drawRangeElements from WebGL 2, along >> with a detailed explanation why: >> https://github.com/KhronosGroup/WebGL/pull/624 > > I would need more detail than that, to understand what the problem with drawRangeElements is. > > "WebGL would need to validate that the indices are in the start, end range to ensure consistent behavior..." > > Sure thing - and that's mostly what WebGL has to do anyway for drawElements, except more expensive: for drawElements, it has to track the maximum-element-in-any-contiguous-sub-array, and now it also has to track the minimum-element-in-any-contiguous-sub-array. > > "...and no performance benefits could be realized by exposing this > function." > > I would like to understand how that conclusion follows. As far as I can see, the overhead for drawRangeElements is strictly less than 2x the overhead for drawElements, for the reason explained above. So it's still the same order of magnitude. Have you found any testcase where Firefox Nightly's drawElements validation overhead was large? I know there's https://www.khronos.org/registry/webgl/sdk/tests/extra/webgl-drawelements-validation.html which did hit one such case, but we fixed it in Firefox 32 (currently on the Aurora channel). Is there any problem left? If not, then why do you expect there to be problems with drawRangeElements? > > Benoit > >> >> Please raise your objections if you have any. 
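(In passing, the "trivial compatibility shim" Olli mentions above could be as small as the following hypothetical polyfill, since the start/end arguments are only optimization hints and a conforming call is otherwise equivalent to drawElements over the same indices:)

    // Hypothetical fallback if drawRangeElements were dropped from WebGL 2:
    // ignore the range hints and issue an ordinary drawElements call.
    function drawRangeElementsShim(gl, mode, start, end, count, type, offset) {
      gl.drawElements(mode, count, type, offset);
    }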
>> >> -Olli >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> >> > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From bja...@ Wed Jul 16 10:46:07 2014 From: bja...@ (Benoit Jacob) Date: Wed, 16 Jul 2014 10:46:07 -0700 (PDT) Subject: [Public WebGL] Removing drawRangeElements from WebGL 2 In-Reply-To: References: <260900090.4097920.1405436585005.JavaMail.zimbra@mozilla.com> Message-ID: <1577684777.4280106.1405532767389.JavaMail.zimbra@mozilla.com> ----- Original Message ----- > > Currently WebGL's drawElements call requires indices to be validated, > > resulting in performance problems in some applications. Work is > > underway to bring GL_ARB_robust_buffer_access_behavior to OpenGL ES. > > The WebGL spec could then be relaxed to no longer require index > > validation, but still have testable behavior. Until this happens, > > drawRangeElements would require the same index validation that > > drawElements does." > > Are you sure that ARB_robust_buffer_access_behavior even removes the > requirement to do index validation on drawRangeElements? With > ARB_robust_buffer_access_behavior, access is restricted within the buffer > object, not the specified range, so there's still some room for > implementation-dependent behavior of drawRangeElements. Maybe this is good > enough to make it testable, but not completely consistent. On the topic of doubting whether WebGL implementations would ever be able to rely 'robustness' extensions to guard element array access, here is another one, that applies also to drawElements. A while ago, playing with a Firefox patched to not do any element array access validation, running Olli's benchmark https://www.khronos.org/registry/webgl/sdk/tests/extra/webgl-drawelements-validation.html that exercises some out-of-range buffer access, on Intel linux drivers, I noticed that that was running much slower than with element array access validation. It seemed that the out-of-range accesses were somehow causing the driver to stall. That's scary as that seems like it could reasonably generalize to more than one driver and remove the feasibility of depending on drivers to handle that for us: even if they do it correctly, they might not do it fast enough. Benoit > > Re: performance, on our hardware there is no performance benefit from > drawRangeElements over drawElements when using buffers allocated on the GPU. 
> > -Olli > ________________________________________ > From: Kenneth Russell [kbr...@] > Sent: Tuesday, July 15, 2014 9:12 PM > To: Olli Etuaho > Cc: Benoit Jacob; public webgl > Subject: Re: [Public WebGL] Removing drawRangeElements from WebGL 2 > > Also note that an optimized WebGL implementation could translate > drawElements calls to DrawRangeElements calls internally. If there's > any performance benefit from calling DrawRangeElements, it can be > achieved without exposing drawRangeElements to JavaScript and having > to validate the indices against the start and end range. (This would > still require the index validation code to be present in the WebGL > implementation, to be able to rapidly query the minimum and maximum > index within a given range of an element array buffer.) > > I'd like to hear from GPU vendors whether DrawRangeElements provides > any significant speedup over DrawElements when all of the buffers are > allocated on the GPU -- i.e., no client side vertices or indices are > in use. If it does, then the entry point should be exposed and WebGL > implementations should aim to capture all the available performance, > in particular when index validation can be delegated to the GPU. If > not, it should be removed, and the focus should be on making > drawElements go as fast as possible. > > -Ken > > > On Tue, Jul 15, 2014 at 8:42 AM, Olli Etuaho wrote: > > > > You are correct that the drawRangeElements overhead imposed by WebGL should > > be < 2x drawElements overhead, and if start is set to zero it can be made > > identical to drawElements overhead. But since the only reason for > > drawRangeElements to originally exist is to provide improved performance > > over drawElements, additional overhead could make it worthless. A WebGL > > implementation is either way able to use drawRangeElements under the hood > > to get the possible performance benefits from the driver. Maybe it's not > > as clear-cut as I initially thought, though: calling drawRangeElements in > > WebGL with start index that's much greater than 0 could still provide the > > WebGL implementation a good hint that it is worthwhile to actually check > > the minimum index and call underlying drawRangeElements with the original > > start value. > > > > To answer Florian's comments, I don't think that other GL specs than GLES3 > > address this either. > > > > Removing the function from WebGL 2 would also reduce compatibility with > > GLES3, as Mark pointed out. I'd like to point out that providing a > > compatibility shim is trivial in this case, though. > > > > I'd like to still hear more comments on this, but based on the conversation > > so far I'm not as keen on removing the function any more. > > > > -Olli > > ________________________________________ > > From: Benoit Jacob [bjacob...@] > > Sent: Tuesday, July 15, 2014 6:03 PM > > To: Olli Etuaho > > Cc: public webgl > > Subject: Re: [Public WebGL] Removing drawRangeElements from WebGL 2 > > > > ----- Original Message ----- > >> > >> I submitted a pull request removing drawRangeElements from WebGL 2, along > >> with a detailed explanation why: > >> https://github.com/KhronosGroup/WebGL/pull/624 > > > > I would need more detail than that, to understand what the problem with > > drawRangeElements is. > > > > "WebGL would need to validate that the indices are in the start, end range > > to ensure consistent behavior..." 
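(For reference, the validation requirement quoted just above amounts to roughly the following -- an illustrative sketch, not any browser's actual code:)

    // Illustrative: scan the relevant slice of the element array and check
    // every index against the caller-supplied [start, end] range and against
    // the size of the smallest enabled vertex attribute array.
    function validateDrawRangeElements(indices, offset, count, start, end, numVertices) {
      for (var i = 0; i < count; i++) {
        var idx = indices[offset + i];
        if (idx < start || idx > end || idx >= numVertices) {
          return false; // implementation would generate INVALID_OPERATION
        }
      }
      return true;
    }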
> > > > Sure thing - and that's mostly what WebGL has to do anyway for > > drawElements, except more expensive: for drawElements, it has to track the > > maximum-element-in-any-contiguous-sub-array, and now it also has to track > > the minimum-element-in-any-contiguous-sub-array. > > > > "...and no performance benefits could be realized by exposing this > > function." > > > > I would like to understand how that conclusion follows. As far as I can > > see, the overhead for drawRangeElements is strictly less than 2x the > > overhead for drawElements, for the reason explained above. So it's still > > the same order of magnitude. Have you found any testcase where Firefox > > Nightly's drawElements validation overhead was large? I know there's > > https://www.khronos.org/registry/webgl/sdk/tests/extra/webgl-drawelements-validation.html > > which did hit one such case, but we fixed it in Firefox 32 (currently on > > the Aurora channel). Is there any problem left? If not, then why do you > > expect there to be problems with drawRangeElements? > > > > Benoit > > > >> > >> Please raise your objections if you have any. > >> > >> -Olli > >> ----------------------------------------------------------- > >> You are currently subscribed to public_webgl...@ > >> To unsubscribe, send an email to majordomo...@ with > >> the following command in the body of your email: > >> unsubscribe public_webgl > >> ----------------------------------------------------------- > >> > >> > > > > ----------------------------------------------------------- > > You are currently subscribed to public_webgl...@ > > To unsubscribe, send an email to majordomo...@ with > > the following command in the body of your email: > > unsubscribe public_webgl > > ----------------------------------------------------------- > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From baj...@ Wed Jul 16 11:24:31 2014 From: baj...@ (Brandon Jones) Date: Wed, 16 Jul 2014 18:24:31 +0000 Subject: [Public WebGL] Removing drawRangeElements from WebGL 2 References: <260900090.4097920.1405436585005.JavaMail.zimbra@mozilla.com> <1577684777.4280106.1405532767389.JavaMail.zimbra@mozilla.com> Message-ID: Even in the drivers handle that case horrifically slow, I don't think it's a problem as long as the driver-side validation doesn't cause serious regressions in the performance of valid draws. Out of range draws are an error, and I don't think we should have any expectation of fast performance on an error case. If you don't want the performance hit then don't submit invalid draw calls, simple as that! --Brandon On Wed Jul 16 2014 at 10:46:58 AM, Benoit Jacob wrote: > > > > ----- Original Message ----- > > > Currently WebGL's drawElements call requires indices to be validated, > > > resulting in performance problems in some applications. Work is > > > underway to bring GL_ARB_robust_buffer_access_behavior to OpenGL ES. > > > The WebGL spec could then be relaxed to no longer require index > > > validation, but still have testable behavior. Until this happens, > > > drawRangeElements would require the same index validation that > > > drawElements does." > > > > Are you sure that ARB_robust_buffer_access_behavior even removes the > > requirement to do index validation on drawRangeElements? 
With > > ARB_robust_buffer_access_behavior, access is restricted within the > buffer > > object, not the specified range, so there's still some room for > > implementation-dependent behavior of drawRangeElements. Maybe this is > good > > enough to make it testable, but not completely consistent. > > On the topic of doubting whether WebGL implementations would ever be able > to rely 'robustness' extensions to guard element array access, here is > another one, that applies also to drawElements. > > A while ago, playing with a Firefox patched to not do any element array > access validation, running Olli's benchmark https://www.khronos.org/ > registry/webgl/sdk/tests/extra/webgl-drawelements-validation.html that > exercises some out-of-range buffer access, on Intel linux drivers, I > noticed that that was running much slower than with element array access > validation. It seemed that the out-of-range accesses were somehow causing > the driver to stall. That's scary as that seems like it could reasonably > generalize to more than one driver and remove the feasibility of depending > on drivers to handle that for us: even if they do it correctly, they might > not do it fast enough. > > Benoit > > > > > Re: performance, on our hardware there is no performance benefit from > > drawRangeElements over drawElements when using buffers allocated on the > GPU. > > > > -Olli > > ________________________________________ > > From: Kenneth Russell [kbr...@] > > Sent: Tuesday, July 15, 2014 9:12 PM > > To: Olli Etuaho > > Cc: Benoit Jacob; public webgl > > Subject: Re: [Public WebGL] Removing drawRangeElements from WebGL 2 > > > > Also note that an optimized WebGL implementation could translate > > drawElements calls to DrawRangeElements calls internally. If there's > > any performance benefit from calling DrawRangeElements, it can be > > achieved without exposing drawRangeElements to JavaScript and having > > to validate the indices against the start and end range. (This would > > still require the index validation code to be present in the WebGL > > implementation, to be able to rapidly query the minimum and maximum > > index within a given range of an element array buffer.) > > > > I'd like to hear from GPU vendors whether DrawRangeElements provides > > any significant speedup over DrawElements when all of the buffers are > > allocated on the GPU -- i.e., no client side vertices or indices are > > in use. If it does, then the entry point should be exposed and WebGL > > implementations should aim to capture all the available performance, > > in particular when index validation can be delegated to the GPU. If > > not, it should be removed, and the focus should be on making > > drawElements go as fast as possible. > > > > -Ken > > > > > > On Tue, Jul 15, 2014 at 8:42 AM, Olli Etuaho wrote: > > > > > > You are correct that the drawRangeElements overhead imposed by WebGL > should > > > be < 2x drawElements overhead, and if start is set to zero it can be > made > > > identical to drawElements overhead. But since the only reason for > > > drawRangeElements to originally exist is to provide improved > performance > > > over drawElements, additional overhead could make it worthless. A WebGL > > > implementation is either way able to use drawRangeElements under the > hood > > > to get the possible performance benefits from the driver. 
Maybe it's > not > > > as clear-cut as I initially thought, though: calling drawRangeElements > in > > > WebGL with start index that's much greater than 0 could still provide > the > > > WebGL implementation a good hint that it is worthwhile to actually > check > > > the minimum index and call underlying drawRangeElements with the > original > > > start value. > > > > > > To answer Florian's comments, I don't think that other GL specs than > GLES3 > > > address this either. > > > > > > Removing the function from WebGL 2 would also reduce compatibility with > > > GLES3, as Mark pointed out. I'd like to point out that providing a > > > compatibility shim is trivial in this case, though. > > > > > > I'd like to still hear more comments on this, but based on the > conversation > > > so far I'm not as keen on removing the function any more. > > > > > > -Olli > > > ________________________________________ > > > From: Benoit Jacob [bjacob...@] > > > Sent: Tuesday, July 15, 2014 6:03 PM > > > To: Olli Etuaho > > > Cc: public webgl > > > Subject: Re: [Public WebGL] Removing drawRangeElements from WebGL 2 > > > > > > ----- Original Message ----- > > >> > > >> I submitted a pull request removing drawRangeElements from WebGL 2, > along > > >> with a detailed explanation why: > > >> https://github.com/KhronosGroup/WebGL/pull/624 > > > > > > I would need more detail than that, to understand what the problem with > > > drawRangeElements is. > > > > > > "WebGL would need to validate that the indices are in the start, end > range > > > to ensure consistent behavior..." > > > > > > Sure thing - and that's mostly what WebGL has to do anyway for > > > drawElements, except more expensive: for drawElements, it has to track > the > > > maximum-element-in-any-contiguous-sub-array, and now it also has to > track > > > the minimum-element-in-any-contiguous-sub-array. > > > > > > "...and no performance benefits could be realized by exposing this > > > function." > > > > > > I would like to understand how that conclusion follows. As far as I can > > > see, the overhead for drawRangeElements is strictly less than 2x the > > > overhead for drawElements, for the reason explained above. So it's > still > > > the same order of magnitude. Have you found any testcase where Firefox > > > Nightly's drawElements validation overhead was large? I know there's > > > https://www.khronos.org/registry/webgl/sdk/tests/ > extra/webgl-drawelements-validation.html > > > which did hit one such case, but we fixed it in Firefox 32 (currently > on > > > the Aurora channel). Is there any problem left? If not, then why do you > > > expect there to be problems with drawRangeElements? > > > > > > Benoit > > > > > >> > > >> Please raise your objections if you have any. 
> > >> > > >> -Olli > > >> ----------------------------------------------------------- > > >> You are currently subscribed to public_webgl...@ > > >> To unsubscribe, send an email to majordomo...@ with > > >> the following command in the body of your email: > > >> unsubscribe public_webgl > > >> ----------------------------------------------------------- > > >> > > >> > > > > > > ----------------------------------------------------------- > > > You are currently subscribed to public_webgl...@ > > > To unsubscribe, send an email to majordomo...@ with > > > the following command in the body of your email: > > > unsubscribe public_webgl > > > ----------------------------------------------------------- > > > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Wed Jul 16 11:31:12 2014 From: kbr...@ (Kenneth Russell) Date: Wed, 16 Jul 2014 11:31:12 -0700 Subject: [Public WebGL] Removing drawRangeElements from WebGL 2 In-Reply-To: References: <260900090.4097920.1405436585005.JavaMail.zimbra@mozilla.com> Message-ID: On Wed, Jul 16, 2014 at 8:01 AM, Olli Etuaho wrote: >> Currently WebGL's drawElements call requires indices to be validated, >> resulting in performance problems in some applications. Work is >> underway to bring GL_ARB_robust_buffer_access_behavior to OpenGL ES. >> The WebGL spec could then be relaxed to no longer require index >> validation, but still have testable behavior. Until this happens, >> drawRangeElements would require the same index validation that >> drawElements does." > > Are you sure that ARB_robust_buffer_access_behavior even removes the requirement to do index validation on drawRangeElements? With ARB_robust_buffer_access_behavior, access is restricted within the buffer object, not the specified range, so there's still some room for implementation-dependent behavior of drawRangeElements. Maybe this is good enough to make it testable, but not completely consistent. Yes, robust_buffer_access_behavior removes the requirement of manual index validation. While it would be ideal to have the same behavior on all GPUs in error conditions where indices reference data outside buffers, unfortunately this isn't the reality of current hardware. The most important thing is to prevent untrusted programs from accessing data that doesn't belong to them. robust_buffer_access_behavior defines the behavior sufficiently to both provide that guarantee, and allow it to be tested. Actually verifying that robust_buffer_access_behavior works as specified is crucial. DrawElements and family are the core of the OpenGL APIs, and their performance is paramount. It's known that WebGL's current security requirements have imposed performance penalties. To allow sophisticated apps to be written, these penalties must be minimized without compromising security of the API. This is one area of the API where slight variability in error behavior is admissible in order to achieve much better performance. > Re: performance, on our hardware there is no performance benefit from drawRangeElements over drawElements when using buffers allocated on the GPU. Thanks, that's useful feedback. 
-Ken > -Olli > ________________________________________ > From: Kenneth Russell [kbr...@] > Sent: Tuesday, July 15, 2014 9:12 PM > To: Olli Etuaho > Cc: Benoit Jacob; public webgl > Subject: Re: [Public WebGL] Removing drawRangeElements from WebGL 2 > > Also note that an optimized WebGL implementation could translate > drawElements calls to DrawRangeElements calls internally. If there's > any performance benefit from calling DrawRangeElements, it can be > achieved without exposing drawRangeElements to JavaScript and having > to validate the indices against the start and end range. (This would > still require the index validation code to be present in the WebGL > implementation, to be able to rapidly query the minimum and maximum > index within a given range of an element array buffer.) > > I'd like to hear from GPU vendors whether DrawRangeElements provides > any significant speedup over DrawElements when all of the buffers are > allocated on the GPU -- i.e., no client side vertices or indices are > in use. If it does, then the entry point should be exposed and WebGL > implementations should aim to capture all the available performance, > in particular when index validation can be delegated to the GPU. If > not, it should be removed, and the focus should be on making > drawElements go as fast as possible. > > -Ken > > > On Tue, Jul 15, 2014 at 8:42 AM, Olli Etuaho wrote: >> >> You are correct that the drawRangeElements overhead imposed by WebGL should be < 2x drawElements overhead, and if start is set to zero it can be made identical to drawElements overhead. But since the only reason for drawRangeElements to originally exist is to provide improved performance over drawElements, additional overhead could make it worthless. A WebGL implementation is either way able to use drawRangeElements under the hood to get the possible performance benefits from the driver. Maybe it's not as clear-cut as I initially thought, though: calling drawRangeElements in WebGL with start index that's much greater than 0 could still provide the WebGL implementation a good hint that it is worthwhile to actually check the minimum index and call underlying drawRangeElements with the original start value. >> >> To answer Florian's comments, I don't think that other GL specs than GLES3 address this either. >> >> Removing the function from WebGL 2 would also reduce compatibility with GLES3, as Mark pointed out. I'd like to point out that providing a compatibility shim is trivial in this case, though. >> >> I'd like to still hear more comments on this, but based on the conversation so far I'm not as keen on removing the function any more. >> >> -Olli >> ________________________________________ >> From: Benoit Jacob [bjacob...@] >> Sent: Tuesday, July 15, 2014 6:03 PM >> To: Olli Etuaho >> Cc: public webgl >> Subject: Re: [Public WebGL] Removing drawRangeElements from WebGL 2 >> >> ----- Original Message ----- >>> >>> I submitted a pull request removing drawRangeElements from WebGL 2, along >>> with a detailed explanation why: >>> https://github.com/KhronosGroup/WebGL/pull/624 >> >> I would need more detail than that, to understand what the problem with drawRangeElements is. >> >> "WebGL would need to validate that the indices are in the start, end range to ensure consistent behavior..." 
>> >> Sure thing - and that's mostly what WebGL has to do anyway for drawElements, except more expensive: for drawElements, it has to track the maximum-element-in-any-contiguous-sub-array, and now it also has to track the minimum-element-in-any-contiguous-sub-array. >> >> "...and no performance benefits could be realized by exposing this >> function." >> >> I would like to understand how that conclusion follows. As far as I can see, the overhead for drawRangeElements is strictly less than 2x the overhead for drawElements, for the reason explained above. So it's still the same order of magnitude. Have you found any testcase where Firefox Nightly's drawElements validation overhead was large? I know there's https://www.khronos.org/registry/webgl/sdk/tests/extra/webgl-drawelements-validation.html which did hit one such case, but we fixed it in Firefox 32 (currently on the Aurora channel). Is there any problem left? If not, then why do you expect there to be problems with drawRangeElements? >> >> Benoit >> >>> >>> Please raise your objections if you have any. >>> >>> -Olli >>> ----------------------------------------------------------- >>> You are currently subscribed to public_webgl...@ >>> To unsubscribe, send an email to majordomo...@ with >>> the following command in the body of your email: >>> unsubscribe public_webgl >>> ----------------------------------------------------------- >>> >>> >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Wed Jul 16 11:33:23 2014 From: kbr...@ (Kenneth Russell) Date: Wed, 16 Jul 2014 11:33:23 -0700 Subject: [Public WebGL] Removing drawRangeElements from WebGL 2 In-Reply-To: References: <260900090.4097920.1405436585005.JavaMail.zimbra@mozilla.com> <1577684777.4280106.1405532767389.JavaMail.zimbra@mozilla.com> Message-ID: Good point. Note also that even if the requirement to generate an INVALID_OPERATION error for a draw call with invalid indices is lifted, it will still be allowed to generate such an error, and the conformance tests will take that into account. This means that if a WebGL implementer decides that a given graphics driver's out-of-range index handling is poor, it can always fall back to manual index validation. On Wed, Jul 16, 2014 at 11:24 AM, Brandon Jones wrote: > Even in the drivers handle that case horrifically slow, I don't think it's a > problem as long as the driver-side validation doesn't cause serious > regressions in the performance of valid draws. Out of range draws are an > error, and I don't think we should have any expectation of fast performance > on an error case. If you don't want the performance hit then don't submit > invalid draw calls, simple as that! > > --Brandon > > On Wed Jul 16 2014 at 10:46:58 AM, Benoit Jacob wrote: >> >> >> >> >> ----- Original Message ----- >> > > Currently WebGL's drawElements call requires indices to be validated, >> > > resulting in performance problems in some applications. 
Work is >> > > underway to bring GL_ARB_robust_buffer_access_behavior to OpenGL ES. >> > > The WebGL spec could then be relaxed to no longer require index >> > > validation, but still have testable behavior. Until this happens, >> > > drawRangeElements would require the same index validation that >> > > drawElements does." >> > >> > Are you sure that ARB_robust_buffer_access_behavior even removes the >> > requirement to do index validation on drawRangeElements? With >> > ARB_robust_buffer_access_behavior, access is restricted within the >> > buffer >> > object, not the specified range, so there's still some room for >> > implementation-dependent behavior of drawRangeElements. Maybe this is >> > good >> > enough to make it testable, but not completely consistent. >> >> On the topic of doubting whether WebGL implementations would ever be able >> to rely 'robustness' extensions to guard element array access, here is >> another one, that applies also to drawElements. >> >> A while ago, playing with a Firefox patched to not do any element array >> access validation, running Olli's benchmark >> https://www.khronos.org/registry/webgl/sdk/tests/extra/webgl-drawelements-validation.html >> that exercises some out-of-range buffer access, on Intel linux drivers, I >> noticed that that was running much slower than with element array access >> validation. It seemed that the out-of-range accesses were somehow causing >> the driver to stall. That's scary as that seems like it could reasonably >> generalize to more than one driver and remove the feasibility of depending >> on drivers to handle that for us: even if they do it correctly, they might >> not do it fast enough. >> >> Benoit >> >> > >> > Re: performance, on our hardware there is no performance benefit from >> > drawRangeElements over drawElements when using buffers allocated on the >> > GPU. >> > >> > -Olli >> > ________________________________________ >> > From: Kenneth Russell [kbr...@] >> > Sent: Tuesday, July 15, 2014 9:12 PM >> > To: Olli Etuaho >> > Cc: Benoit Jacob; public webgl >> > Subject: Re: [Public WebGL] Removing drawRangeElements from WebGL 2 >> > >> > Also note that an optimized WebGL implementation could translate >> > drawElements calls to DrawRangeElements calls internally. If there's >> > any performance benefit from calling DrawRangeElements, it can be >> > achieved without exposing drawRangeElements to JavaScript and having >> > to validate the indices against the start and end range. (This would >> > still require the index validation code to be present in the WebGL >> > implementation, to be able to rapidly query the minimum and maximum >> > index within a given range of an element array buffer.) >> > >> > I'd like to hear from GPU vendors whether DrawRangeElements provides >> > any significant speedup over DrawElements when all of the buffers are >> > allocated on the GPU -- i.e., no client side vertices or indices are >> > in use. If it does, then the entry point should be exposed and WebGL >> > implementations should aim to capture all the available performance, >> > in particular when index validation can be delegated to the GPU. If >> > not, it should be removed, and the focus should be on making >> > drawElements go as fast as possible. 
>> > >> > -Ken >> > >> > >> > On Tue, Jul 15, 2014 at 8:42 AM, Olli Etuaho wrote: >> > > >> > > You are correct that the drawRangeElements overhead imposed by WebGL >> > > should >> > > be < 2x drawElements overhead, and if start is set to zero it can be >> > > made >> > > identical to drawElements overhead. But since the only reason for >> > > drawRangeElements to originally exist is to provide improved >> > > performance >> > > over drawElements, additional overhead could make it worthless. A >> > > WebGL >> > > implementation is either way able to use drawRangeElements under the >> > > hood >> > > to get the possible performance benefits from the driver. Maybe it's >> > > not >> > > as clear-cut as I initially thought, though: calling drawRangeElements >> > > in >> > > WebGL with start index that's much greater than 0 could still provide >> > > the >> > > WebGL implementation a good hint that it is worthwhile to actually >> > > check >> > > the minimum index and call underlying drawRangeElements with the >> > > original >> > > start value. >> > > >> > > To answer Florian's comments, I don't think that other GL specs than >> > > GLES3 >> > > address this either. >> > > >> > > Removing the function from WebGL 2 would also reduce compatibility >> > > with >> > > GLES3, as Mark pointed out. I'd like to point out that providing a >> > > compatibility shim is trivial in this case, though. >> > > >> > > I'd like to still hear more comments on this, but based on the >> > > conversation >> > > so far I'm not as keen on removing the function any more. >> > > >> > > -Olli >> > > ________________________________________ >> > > From: Benoit Jacob [bjacob...@] >> > > Sent: Tuesday, July 15, 2014 6:03 PM >> > > To: Olli Etuaho >> > > Cc: public webgl >> > > Subject: Re: [Public WebGL] Removing drawRangeElements from WebGL 2 >> > > >> > > ----- Original Message ----- >> > >> >> > >> I submitted a pull request removing drawRangeElements from WebGL 2, >> > >> along >> > >> with a detailed explanation why: >> > >> https://github.com/KhronosGroup/WebGL/pull/624 >> > > >> > > I would need more detail than that, to understand what the problem >> > > with >> > > drawRangeElements is. >> > > >> > > "WebGL would need to validate that the indices are in the start, end >> > > range >> > > to ensure consistent behavior..." >> > > >> > > Sure thing - and that's mostly what WebGL has to do anyway for >> > > drawElements, except more expensive: for drawElements, it has to track >> > > the >> > > maximum-element-in-any-contiguous-sub-array, and now it also has to >> > > track >> > > the minimum-element-in-any-contiguous-sub-array. >> > > >> > > "...and no performance benefits could be realized by exposing this >> > > function." >> > > >> > > I would like to understand how that conclusion follows. As far as I >> > > can >> > > see, the overhead for drawRangeElements is strictly less than 2x the >> > > overhead for drawElements, for the reason explained above. So it's >> > > still >> > > the same order of magnitude. Have you found any testcase where Firefox >> > > Nightly's drawElements validation overhead was large? I know there's >> > > >> > > https://www.khronos.org/registry/webgl/sdk/tests/extra/webgl-drawelements-validation.html >> > > which did hit one such case, but we fixed it in Firefox 32 (currently >> > > on >> > > the Aurora channel). Is there any problem left? If not, then why do >> > > you >> > > expect there to be problems with drawRangeElements? 
>> > > >> > > Benoit >> > > >> > >> >> > >> Please raise your objections if you have any. >> > >> >> > >> -Olli >> > >> ----------------------------------------------------------- >> > >> You are currently subscribed to public_webgl...@ >> > >> To unsubscribe, send an email to majordomo...@ with >> > >> the following command in the body of your email: >> > >> unsubscribe public_webgl >> > >> ----------------------------------------------------------- >> > >> >> > >> >> > > >> > > ----------------------------------------------------------- >> > > You are currently subscribed to public_webgl...@ >> > > To unsubscribe, send an email to majordomo...@ with >> > > the following command in the body of your email: >> > > unsubscribe public_webgl >> > > ----------------------------------------------------------- >> > > >> > >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From bja...@ Wed Jul 16 15:03:44 2014 From: bja...@ (Benoit Jacob) Date: Wed, 16 Jul 2014 15:03:44 -0700 (PDT) Subject: [Public WebGL] Should extension-related webidl interfaces be `[NoInterfaceObject]`? In-Reply-To: References: <2038075810.6015116.1405036503733.JavaMail.zimbra@mozilla.com> <1269154715.6015861.1405036824223.JavaMail.zimbra@mozilla.com> <737457807.4194216.1405474036492.JavaMail.zimbra@mozilla.com> Message-ID: <57913591.4339525.1405548224951.JavaMail.zimbra@mozilla.com> ----- Original Message ----- > I doubt any developer cares about doing instanceof operations on > returned extension objects. This has never been raised as a concern by > any app developer in the past few years. Of course it hasn't been: WebGL 2 isn't out yet, and I was talking about compatibility between (WebGL N plus extensions) and (WebGL N+1). Benoit > Rather than spending time > writing conformance tests for extensions' types, I think our > collective time would be better spent writing conformance tests for > OpenGL ES 3.0 / WebGL 2.0 functionality. > > > On Tue, Jul 15, 2014 at 6:27 PM, Benoit Jacob wrote: > > > > > > ----- Original Message ----- > >> > >> The extension registry's been updated to add [NoInterfaceObject] to > >> all WebGL extensions' interfaces. I don't think there's any value to > >> developers in exposing, specifying and testing these interface types. > > > > Why would interfaces be treated differently depending on whether they are > > in the core spec or in extensions? What about the compatibility concern > > expressed in my previous email? > > > > Benoit > > > >> > >> -Ken > >> > >> > >> On Thu, Jul 10, 2014 at 5:00 PM, Jeff Gilbert > >> wrote: > >> > > >> > Currently neither Chrome nor Firefox (at least) appear to expose these > >> > interfaces (such as `WebGLVertexArrayObjectOES` and > >> > `OES_vertex_array_object`), but in the specs, they are not marked as > >> > `[NoInterfaceObject]`. > >> > > >> > Should they be exposed on the `window` object, allowing for `foo > >> > instanceof > >> > WebGLVertexArrayObjectOES`? 
> >> > > >> > -Jeff > >> > > >> > ----------------------------------------------------------- > >> > You are currently subscribed to public_webgl...@ > >> > To unsubscribe, send an email to majordomo...@ with > >> > the following command in the body of your email: > >> > unsubscribe public_webgl > >> > ----------------------------------------------------------- > >> > > >> > >> ----------------------------------------------------------- > >> You are currently subscribed to public_webgl...@ > >> To unsubscribe, send an email to majordomo...@ with > >> the following command in the body of your email: > >> unsubscribe public_webgl > >> ----------------------------------------------------------- > >> > >> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From bja...@ Wed Jul 16 15:14:22 2014 From: bja...@ (Benoit Jacob) Date: Wed, 16 Jul 2014 15:14:22 -0700 (PDT) Subject: [Public WebGL] Removing drawRangeElements from WebGL 2 In-Reply-To: References: <260900090.4097920.1405436585005.JavaMail.zimbra@mozilla.com> <1577684777.4280106.1405532767389.JavaMail.zimbra@mozilla.com> Message-ID: <866543802.4341186.1405548861988.JavaMail.zimbra@mozilla.com> ----- Original Message ----- > Good point. Note also that even if the requirement to generate an > INVALID_OPERATION error for a draw call with invalid indices is > lifted, it will still be allowed to generate such an error, So now we are talking about implementation-defined behavior.... > and the > conformance tests will take that into account. ... by leaving that outside of the scope of the test. That sounds like exactly what we've consistently tried to avoid in WebGL; I don't understand why this case is different. Benoit > This means that if a > WebGL implementer decides that a given graphics driver's out-of-range > index handling is poor, it can always fall back to manual index > validation. > > > On Wed, Jul 16, 2014 at 11:24 AM, Brandon Jones wrote: > > Even in the drivers handle that case horrifically slow, I don't think it's > > a > > problem as long as the driver-side validation doesn't cause serious > > regressions in the performance of valid draws. Out of range draws are an > > error, and I don't think we should have any expectation of fast performance > > on an error case. If you don't want the performance hit then don't submit > > invalid draw calls, simple as that! > > > > --Brandon > > > > On Wed Jul 16 2014 at 10:46:58 AM, Benoit Jacob wrote: > >> > >> > >> > >> > >> ----- Original Message ----- > >> > > Currently WebGL's drawElements call requires indices to be validated, > >> > > resulting in performance problems in some applications. Work is > >> > > underway to bring GL_ARB_robust_buffer_access_behavior to OpenGL ES. > >> > > The WebGL spec could then be relaxed to no longer require index > >> > > validation, but still have testable behavior. Until this happens, > >> > > drawRangeElements would require the same index validation that > >> > > drawElements does." > >> > > >> > Are you sure that ARB_robust_buffer_access_behavior even removes the > >> > requirement to do index validation on drawRangeElements? 
With > >> > ARB_robust_buffer_access_behavior, access is restricted within the > >> > buffer > >> > object, not the specified range, so there's still some room for > >> > implementation-dependent behavior of drawRangeElements. Maybe this is > >> > good > >> > enough to make it testable, but not completely consistent. > >> > >> On the topic of doubting whether WebGL implementations would ever be able > >> to rely 'robustness' extensions to guard element array access, here is > >> another one, that applies also to drawElements. > >> > >> A while ago, playing with a Firefox patched to not do any element array > >> access validation, running Olli's benchmark > >> https://www.khronos.org/registry/webgl/sdk/tests/extra/webgl-drawelements-validation.html > >> that exercises some out-of-range buffer access, on Intel linux drivers, I > >> noticed that that was running much slower than with element array access > >> validation. It seemed that the out-of-range accesses were somehow causing > >> the driver to stall. That's scary as that seems like it could reasonably > >> generalize to more than one driver and remove the feasibility of depending > >> on drivers to handle that for us: even if they do it correctly, they might > >> not do it fast enough. > >> > >> Benoit > >> > >> > > >> > Re: performance, on our hardware there is no performance benefit from > >> > drawRangeElements over drawElements when using buffers allocated on the > >> > GPU. > >> > > >> > -Olli > >> > ________________________________________ > >> > From: Kenneth Russell [kbr...@] > >> > Sent: Tuesday, July 15, 2014 9:12 PM > >> > To: Olli Etuaho > >> > Cc: Benoit Jacob; public webgl > >> > Subject: Re: [Public WebGL] Removing drawRangeElements from WebGL 2 > >> > > >> > Also note that an optimized WebGL implementation could translate > >> > drawElements calls to DrawRangeElements calls internally. If there's > >> > any performance benefit from calling DrawRangeElements, it can be > >> > achieved without exposing drawRangeElements to JavaScript and having > >> > to validate the indices against the start and end range. (This would > >> > still require the index validation code to be present in the WebGL > >> > implementation, to be able to rapidly query the minimum and maximum > >> > index within a given range of an element array buffer.) > >> > > >> > I'd like to hear from GPU vendors whether DrawRangeElements provides > >> > any significant speedup over DrawElements when all of the buffers are > >> > allocated on the GPU -- i.e., no client side vertices or indices are > >> > in use. If it does, then the entry point should be exposed and WebGL > >> > implementations should aim to capture all the available performance, > >> > in particular when index validation can be delegated to the GPU. If > >> > not, it should be removed, and the focus should be on making > >> > drawElements go as fast as possible. > >> > > >> > -Ken > >> > > >> > > >> > On Tue, Jul 15, 2014 at 8:42 AM, Olli Etuaho wrote: > >> > > > >> > > You are correct that the drawRangeElements overhead imposed by WebGL > >> > > should > >> > > be < 2x drawElements overhead, and if start is set to zero it can be > >> > > made > >> > > identical to drawElements overhead. But since the only reason for > >> > > drawRangeElements to originally exist is to provide improved > >> > > performance > >> > > over drawElements, additional overhead could make it worthless. 
A > >> > > WebGL > >> > > implementation is either way able to use drawRangeElements under the > >> > > hood > >> > > to get the possible performance benefits from the driver. Maybe it's > >> > > not > >> > > as clear-cut as I initially thought, though: calling drawRangeElements > >> > > in > >> > > WebGL with start index that's much greater than 0 could still provide > >> > > the > >> > > WebGL implementation a good hint that it is worthwhile to actually > >> > > check > >> > > the minimum index and call underlying drawRangeElements with the > >> > > original > >> > > start value. > >> > > > >> > > To answer Florian's comments, I don't think that other GL specs than > >> > > GLES3 > >> > > address this either. > >> > > > >> > > Removing the function from WebGL 2 would also reduce compatibility > >> > > with > >> > > GLES3, as Mark pointed out. I'd like to point out that providing a > >> > > compatibility shim is trivial in this case, though. > >> > > > >> > > I'd like to still hear more comments on this, but based on the > >> > > conversation > >> > > so far I'm not as keen on removing the function any more. > >> > > > >> > > -Olli > >> > > ________________________________________ > >> > > From: Benoit Jacob [bjacob...@] > >> > > Sent: Tuesday, July 15, 2014 6:03 PM > >> > > To: Olli Etuaho > >> > > Cc: public webgl > >> > > Subject: Re: [Public WebGL] Removing drawRangeElements from WebGL 2 > >> > > > >> > > ----- Original Message ----- > >> > >> > >> > >> I submitted a pull request removing drawRangeElements from WebGL 2, > >> > >> along > >> > >> with a detailed explanation why: > >> > >> https://github.com/KhronosGroup/WebGL/pull/624 > >> > > > >> > > I would need more detail than that, to understand what the problem > >> > > with > >> > > drawRangeElements is. > >> > > > >> > > "WebGL would need to validate that the indices are in the start, end > >> > > range > >> > > to ensure consistent behavior..." > >> > > > >> > > Sure thing - and that's mostly what WebGL has to do anyway for > >> > > drawElements, except more expensive: for drawElements, it has to track > >> > > the > >> > > maximum-element-in-any-contiguous-sub-array, and now it also has to > >> > > track > >> > > the minimum-element-in-any-contiguous-sub-array. > >> > > > >> > > "...and no performance benefits could be realized by exposing this > >> > > function." > >> > > > >> > > I would like to understand how that conclusion follows. As far as I > >> > > can > >> > > see, the overhead for drawRangeElements is strictly less than 2x the > >> > > overhead for drawElements, for the reason explained above. So it's > >> > > still > >> > > the same order of magnitude. Have you found any testcase where Firefox > >> > > Nightly's drawElements validation overhead was large? I know there's > >> > > > >> > > https://www.khronos.org/registry/webgl/sdk/tests/extra/webgl-drawelements-validation.html > >> > > which did hit one such case, but we fixed it in Firefox 32 (currently > >> > > on > >> > > the Aurora channel). Is there any problem left? If not, then why do > >> > > you > >> > > expect there to be problems with drawRangeElements? > >> > > > >> > > Benoit > >> > > > >> > >> > >> > >> Please raise your objections if you have any. 
> >> > >> > >> > >> -Olli > >> > >> ----------------------------------------------------------- > >> > >> You are currently subscribed to public_webgl...@ > >> > >> To unsubscribe, send an email to majordomo...@ with > >> > >> the following command in the body of your email: > >> > >> unsubscribe public_webgl > >> > >> ----------------------------------------------------------- > >> > >> > >> > >> > >> > > > >> > > ----------------------------------------------------------- > >> > > You are currently subscribed to public_webgl...@ > >> > > To unsubscribe, send an email to majordomo...@ with > >> > > the following command in the body of your email: > >> > > unsubscribe public_webgl > >> > > ----------------------------------------------------------- > >> > > > >> > > >> > >> ----------------------------------------------------------- > >> You are currently subscribed to public_webgl...@ > >> To unsubscribe, send an email to majordomo...@ with > >> the following command in the body of your email: > >> unsubscribe public_webgl > >> ----------------------------------------------------------- > >> > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Wed Jul 16 15:18:12 2014 From: kbr...@ (Kenneth Russell) Date: Wed, 16 Jul 2014 15:18:12 -0700 Subject: [Public WebGL] Removing drawRangeElements from WebGL 2 In-Reply-To: <866543802.4341186.1405548861988.JavaMail.zimbra@mozilla.com> References: <260900090.4097920.1405436585005.JavaMail.zimbra@mozilla.com> <1577684777.4280106.1405532767389.JavaMail.zimbra@mozilla.com> <866543802.4341186.1405548861988.JavaMail.zimbra@mozilla.com> Message-ID: On Wed, Jul 16, 2014 at 3:14 PM, Benoit Jacob wrote: > > > ----- Original Message ----- >> Good point. Note also that even if the requirement to generate an >> INVALID_OPERATION error for a draw call with invalid indices is >> lifted, it will still be allowed to generate such an error, > > So now we are talking about implementation-defined behavior.... > >> and the >> conformance tests will take that into account. > > ... by leaving that outside of the scope of the test. No. The test will verify for bad draw calls that either INVALID_OPERATION is generated, or one of the behaviors defined in the robust_buffer_access_behavior spec occurs. -Ken > That sounds like exactly what we've consistently tried to avoid in WebGL; I don't understand why this case is different. > > Benoit > >> This means that if a >> WebGL implementer decides that a given graphics driver's out-of-range >> index handling is poor, it can always fall back to manual index >> validation. >> >> >> On Wed, Jul 16, 2014 at 11:24 AM, Brandon Jones wrote: >> > Even in the drivers handle that case horrifically slow, I don't think it's >> > a >> > problem as long as the driver-side validation doesn't cause serious >> > regressions in the performance of valid draws. Out of range draws are an >> > error, and I don't think we should have any expectation of fast performance >> > on an error case. If you don't want the performance hit then don't submit >> > invalid draw calls, simple as that! 
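(A sketch of the either/or check Ken describes above -- a hypothetical test helper, not an actual test from the conformance suite:)

    // After a draw with out-of-range indices, a conforming implementation may
    // either generate INVALID_OPERATION (manual validation) or complete the
    // draw with no error, relying on robust buffer access behavior to keep all
    // reads inside the bound buffers. A real test would also inspect the
    // rendered output against what the robustness spec allows.
    function checkOutOfRangeDraw(gl, issueBadDraw) {
      issueBadDraw(gl);
      var err = gl.getError();
      return err === gl.INVALID_OPERATION || err === gl.NO_ERROR;
    }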
>> > >> > --Brandon >> > >> > On Wed Jul 16 2014 at 10:46:58 AM, Benoit Jacob wrote: >> >> >> >> >> >> >> >> >> >> ----- Original Message ----- >> >> > > Currently WebGL's drawElements call requires indices to be validated, >> >> > > resulting in performance problems in some applications. Work is >> >> > > underway to bring GL_ARB_robust_buffer_access_behavior to OpenGL ES. >> >> > > The WebGL spec could then be relaxed to no longer require index >> >> > > validation, but still have testable behavior. Until this happens, >> >> > > drawRangeElements would require the same index validation that >> >> > > drawElements does." >> >> > >> >> > Are you sure that ARB_robust_buffer_access_behavior even removes the >> >> > requirement to do index validation on drawRangeElements? With >> >> > ARB_robust_buffer_access_behavior, access is restricted within the >> >> > buffer >> >> > object, not the specified range, so there's still some room for >> >> > implementation-dependent behavior of drawRangeElements. Maybe this is >> >> > good >> >> > enough to make it testable, but not completely consistent. >> >> >> >> On the topic of doubting whether WebGL implementations would ever be able >> >> to rely 'robustness' extensions to guard element array access, here is >> >> another one, that applies also to drawElements. >> >> >> >> A while ago, playing with a Firefox patched to not do any element array >> >> access validation, running Olli's benchmark >> >> https://www.khronos.org/registry/webgl/sdk/tests/extra/webgl-drawelements-validation.html >> >> that exercises some out-of-range buffer access, on Intel linux drivers, I >> >> noticed that that was running much slower than with element array access >> >> validation. It seemed that the out-of-range accesses were somehow causing >> >> the driver to stall. That's scary as that seems like it could reasonably >> >> generalize to more than one driver and remove the feasibility of depending >> >> on drivers to handle that for us: even if they do it correctly, they might >> >> not do it fast enough. >> >> >> >> Benoit >> >> >> >> > >> >> > Re: performance, on our hardware there is no performance benefit from >> >> > drawRangeElements over drawElements when using buffers allocated on the >> >> > GPU. >> >> > >> >> > -Olli >> >> > ________________________________________ >> >> > From: Kenneth Russell [kbr...@] >> >> > Sent: Tuesday, July 15, 2014 9:12 PM >> >> > To: Olli Etuaho >> >> > Cc: Benoit Jacob; public webgl >> >> > Subject: Re: [Public WebGL] Removing drawRangeElements from WebGL 2 >> >> > >> >> > Also note that an optimized WebGL implementation could translate >> >> > drawElements calls to DrawRangeElements calls internally. If there's >> >> > any performance benefit from calling DrawRangeElements, it can be >> >> > achieved without exposing drawRangeElements to JavaScript and having >> >> > to validate the indices against the start and end range. (This would >> >> > still require the index validation code to be present in the WebGL >> >> > implementation, to be able to rapidly query the minimum and maximum >> >> > index within a given range of an element array buffer.) >> >> > >> >> > I'd like to hear from GPU vendors whether DrawRangeElements provides >> >> > any significant speedup over DrawElements when all of the buffers are >> >> > allocated on the GPU -- i.e., no client side vertices or indices are >> >> > in use. 
If it does, then the entry point should be exposed and WebGL >> >> > implementations should aim to capture all the available performance, >> >> > in particular when index validation can be delegated to the GPU. If >> >> > not, it should be removed, and the focus should be on making >> >> > drawElements go as fast as possible. >> >> > >> >> > -Ken >> >> > >> >> > >> >> > On Tue, Jul 15, 2014 at 8:42 AM, Olli Etuaho wrote: >> >> > > >> >> > > You are correct that the drawRangeElements overhead imposed by WebGL >> >> > > should >> >> > > be < 2x drawElements overhead, and if start is set to zero it can be >> >> > > made >> >> > > identical to drawElements overhead. But since the only reason for >> >> > > drawRangeElements to originally exist is to provide improved >> >> > > performance >> >> > > over drawElements, additional overhead could make it worthless. A >> >> > > WebGL >> >> > > implementation is either way able to use drawRangeElements under the >> >> > > hood >> >> > > to get the possible performance benefits from the driver. Maybe it's >> >> > > not >> >> > > as clear-cut as I initially thought, though: calling drawRangeElements >> >> > > in >> >> > > WebGL with start index that's much greater than 0 could still provide >> >> > > the >> >> > > WebGL implementation a good hint that it is worthwhile to actually >> >> > > check >> >> > > the minimum index and call underlying drawRangeElements with the >> >> > > original >> >> > > start value. >> >> > > >> >> > > To answer Florian's comments, I don't think that other GL specs than >> >> > > GLES3 >> >> > > address this either. >> >> > > >> >> > > Removing the function from WebGL 2 would also reduce compatibility >> >> > > with >> >> > > GLES3, as Mark pointed out. I'd like to point out that providing a >> >> > > compatibility shim is trivial in this case, though. >> >> > > >> >> > > I'd like to still hear more comments on this, but based on the >> >> > > conversation >> >> > > so far I'm not as keen on removing the function any more. >> >> > > >> >> > > -Olli >> >> > > ________________________________________ >> >> > > From: Benoit Jacob [bjacob...@] >> >> > > Sent: Tuesday, July 15, 2014 6:03 PM >> >> > > To: Olli Etuaho >> >> > > Cc: public webgl >> >> > > Subject: Re: [Public WebGL] Removing drawRangeElements from WebGL 2 >> >> > > >> >> > > ----- Original Message ----- >> >> > >> >> >> > >> I submitted a pull request removing drawRangeElements from WebGL 2, >> >> > >> along >> >> > >> with a detailed explanation why: >> >> > >> https://github.com/KhronosGroup/WebGL/pull/624 >> >> > > >> >> > > I would need more detail than that, to understand what the problem >> >> > > with >> >> > > drawRangeElements is. >> >> > > >> >> > > "WebGL would need to validate that the indices are in the start, end >> >> > > range >> >> > > to ensure consistent behavior..." >> >> > > >> >> > > Sure thing - and that's mostly what WebGL has to do anyway for >> >> > > drawElements, except more expensive: for drawElements, it has to track >> >> > > the >> >> > > maximum-element-in-any-contiguous-sub-array, and now it also has to >> >> > > track >> >> > > the minimum-element-in-any-contiguous-sub-array. >> >> > > >> >> > > "...and no performance benefits could be realized by exposing this >> >> > > function." >> >> > > >> >> > > I would like to understand how that conclusion follows. As far as I >> >> > > can >> >> > > see, the overhead for drawRangeElements is strictly less than 2x the >> >> > > overhead for drawElements, for the reason explained above. 
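For concreteness, the index tracking being discussed amounts to scanning the relevant sub-range of the element array once and caching the result. A minimal JavaScript sketch, with made-up names (not taken from any browser's source):

    // Scan one contiguous sub-range of an element array and record the
    // smallest and largest index it contains. drawElements validation only
    // needs `max` (compared against the sizes of the bound vertex buffers);
    // drawRangeElements would additionally need `min >= start` and
    // `max <= end`, which is the extra tracking referred to above.
    function computeIndexRange(indices, first, count) {
      var min = Infinity, max = -Infinity;
      for (var i = first; i < first + count; ++i) {
        var v = indices[i];
        if (v < min) min = v;
        if (v > max) max = v;
      }
      return { min: min, max: max };
    }

An implementation would cache such ranges per element array buffer and invalidate them on bufferData/bufferSubData, so the scan is not repeated on every draw call.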
So it's >> >> > > still >> >> > > the same order of magnitude. Have you found any testcase where Firefox >> >> > > Nightly's drawElements validation overhead was large? I know there's >> >> > > >> >> > > https://www.khronos.org/registry/webgl/sdk/tests/extra/webgl-drawelements-validation.html >> >> > > which did hit one such case, but we fixed it in Firefox 32 (currently >> >> > > on >> >> > > the Aurora channel). Is there any problem left? If not, then why do >> >> > > you >> >> > > expect there to be problems with drawRangeElements? >> >> > > >> >> > > Benoit >> >> > > >> >> > >> >> >> > >> Please raise your objections if you have any. >> >> > >> >> >> > >> -Olli >> >> > >> ----------------------------------------------------------- >> >> > >> You are currently subscribed to public_webgl...@ >> >> > >> To unsubscribe, send an email to majordomo...@ with >> >> > >> the following command in the body of your email: >> >> > >> unsubscribe public_webgl >> >> > >> ----------------------------------------------------------- >> >> > >> >> >> > >> >> >> > > >> >> > > ----------------------------------------------------------- >> >> > > You are currently subscribed to public_webgl...@ >> >> > > To unsubscribe, send an email to majordomo...@ with >> >> > > the following command in the body of your email: >> >> > > unsubscribe public_webgl >> >> > > ----------------------------------------------------------- >> >> > > >> >> > >> >> >> >> ----------------------------------------------------------- >> >> You are currently subscribed to public_webgl...@ >> >> To unsubscribe, send an email to majordomo...@ with >> >> the following command in the body of your email: >> >> unsubscribe public_webgl >> >> ----------------------------------------------------------- >> >> >> > >> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From bja...@ Wed Jul 16 16:55:14 2014 From: bja...@ (Benoit Jacob) Date: Wed, 16 Jul 2014 16:55:14 -0700 (PDT) Subject: [Public WebGL] Removing drawRangeElements from WebGL 2 In-Reply-To: References: <1577684777.4280106.1405532767389.JavaMail.zimbra@mozilla.com> <866543802.4341186.1405548861988.JavaMail.zimbra@mozilla.com> Message-ID: <1863419594.4357803.1405554914489.JavaMail.zimbra@mozilla.com> Right, that's what I called "implementation-defined behavior" above. As far as WebGL is concerned, I would like the INVALID_OPERATION behavior to be mandatory. Likewise regarding webgl.drawRangeElements, I would expect INVALID_OPERATION to be mandatory if the lower or upper bound parameter is conflicting with the values in the element array, similar to what happens already with bad webgl.drawElements calls. AFAICS the only argument against that has been the concern that it might be too much overhead but I haven't seen conclusive evidence of that. Benoit ----- Original Message ----- > On Wed, Jul 16, 2014 at 3:14 PM, Benoit Jacob wrote: > > > > > > ----- Original Message ----- > >> Good point. Note also that even if the requirement to generate an > >> INVALID_OPERATION error for a draw call with invalid indices is > >> lifted, it will still be allowed to generate such an error, > > > > So now we are talking about implementation-defined behavior.... > > > >> and the > >> conformance tests will take that into account. > > > > ... 
by leaving that outside of the scope of the test. > > No. The test will verify for bad draw calls that either > INVALID_OPERATION is generated, or one of the behaviors defined in the > robust_buffer_access_behavior spec occurs. > > -Ken > > > > That sounds like exactly what we've consistently tried to avoid in WebGL; I > > don't understand why this case is different. > > > > Benoit > > > >> This means that if a > >> WebGL implementer decides that a given graphics driver's out-of-range > >> index handling is poor, it can always fall back to manual index > >> validation. > >> > >> > >> On Wed, Jul 16, 2014 at 11:24 AM, Brandon Jones > >> wrote: > >> > Even in the drivers handle that case horrifically slow, I don't think > >> > it's > >> > a > >> > problem as long as the driver-side validation doesn't cause serious > >> > regressions in the performance of valid draws. Out of range draws are an > >> > error, and I don't think we should have any expectation of fast > >> > performance > >> > on an error case. If you don't want the performance hit then don't > >> > submit > >> > invalid draw calls, simple as that! > >> > > >> > --Brandon > >> > > >> > On Wed Jul 16 2014 at 10:46:58 AM, Benoit Jacob > >> > wrote: > >> >> > >> >> > >> >> > >> >> > >> >> ----- Original Message ----- > >> >> > > Currently WebGL's drawElements call requires indices to be > >> >> > > validated, > >> >> > > resulting in performance problems in some applications. Work is > >> >> > > underway to bring GL_ARB_robust_buffer_access_behavior to OpenGL > >> >> > > ES. > >> >> > > The WebGL spec could then be relaxed to no longer require index > >> >> > > validation, but still have testable behavior. Until this happens, > >> >> > > drawRangeElements would require the same index validation that > >> >> > > drawElements does." > >> >> > > >> >> > Are you sure that ARB_robust_buffer_access_behavior even removes the > >> >> > requirement to do index validation on drawRangeElements? With > >> >> > ARB_robust_buffer_access_behavior, access is restricted within the > >> >> > buffer > >> >> > object, not the specified range, so there's still some room for > >> >> > implementation-dependent behavior of drawRangeElements. Maybe this is > >> >> > good > >> >> > enough to make it testable, but not completely consistent. > >> >> > >> >> On the topic of doubting whether WebGL implementations would ever be > >> >> able > >> >> to rely 'robustness' extensions to guard element array access, here is > >> >> another one, that applies also to drawElements. > >> >> > >> >> A while ago, playing with a Firefox patched to not do any element array > >> >> access validation, running Olli's benchmark > >> >> https://www.khronos.org/registry/webgl/sdk/tests/extra/webgl-drawelements-validation.html > >> >> that exercises some out-of-range buffer access, on Intel linux drivers, > >> >> I > >> >> noticed that that was running much slower than with element array > >> >> access > >> >> validation. It seemed that the out-of-range accesses were somehow > >> >> causing > >> >> the driver to stall. That's scary as that seems like it could > >> >> reasonably > >> >> generalize to more than one driver and remove the feasibility of > >> >> depending > >> >> on drivers to handle that for us: even if they do it correctly, they > >> >> might > >> >> not do it fast enough. 
> >> >> > >> >> Benoit > >> >> > >> >> > > >> >> > Re: performance, on our hardware there is no performance benefit from > >> >> > drawRangeElements over drawElements when using buffers allocated on > >> >> > the > >> >> > GPU. > >> >> > > >> >> > -Olli > >> >> > ________________________________________ > >> >> > From: Kenneth Russell [kbr...@] > >> >> > Sent: Tuesday, July 15, 2014 9:12 PM > >> >> > To: Olli Etuaho > >> >> > Cc: Benoit Jacob; public webgl > >> >> > Subject: Re: [Public WebGL] Removing drawRangeElements from WebGL 2 > >> >> > > >> >> > Also note that an optimized WebGL implementation could translate > >> >> > drawElements calls to DrawRangeElements calls internally. If there's > >> >> > any performance benefit from calling DrawRangeElements, it can be > >> >> > achieved without exposing drawRangeElements to JavaScript and having > >> >> > to validate the indices against the start and end range. (This would > >> >> > still require the index validation code to be present in the WebGL > >> >> > implementation, to be able to rapidly query the minimum and maximum > >> >> > index within a given range of an element array buffer.) > >> >> > > >> >> > I'd like to hear from GPU vendors whether DrawRangeElements provides > >> >> > any significant speedup over DrawElements when all of the buffers are > >> >> > allocated on the GPU -- i.e., no client side vertices or indices are > >> >> > in use. If it does, then the entry point should be exposed and WebGL > >> >> > implementations should aim to capture all the available performance, > >> >> > in particular when index validation can be delegated to the GPU. If > >> >> > not, it should be removed, and the focus should be on making > >> >> > drawElements go as fast as possible. > >> >> > > >> >> > -Ken > >> >> > > >> >> > > >> >> > On Tue, Jul 15, 2014 at 8:42 AM, Olli Etuaho > >> >> > wrote: > >> >> > > > >> >> > > You are correct that the drawRangeElements overhead imposed by > >> >> > > WebGL > >> >> > > should > >> >> > > be < 2x drawElements overhead, and if start is set to zero it can > >> >> > > be > >> >> > > made > >> >> > > identical to drawElements overhead. But since the only reason for > >> >> > > drawRangeElements to originally exist is to provide improved > >> >> > > performance > >> >> > > over drawElements, additional overhead could make it worthless. A > >> >> > > WebGL > >> >> > > implementation is either way able to use drawRangeElements under > >> >> > > the > >> >> > > hood > >> >> > > to get the possible performance benefits from the driver. Maybe > >> >> > > it's > >> >> > > not > >> >> > > as clear-cut as I initially thought, though: calling > >> >> > > drawRangeElements > >> >> > > in > >> >> > > WebGL with start index that's much greater than 0 could still > >> >> > > provide > >> >> > > the > >> >> > > WebGL implementation a good hint that it is worthwhile to actually > >> >> > > check > >> >> > > the minimum index and call underlying drawRangeElements with the > >> >> > > original > >> >> > > start value. > >> >> > > > >> >> > > To answer Florian's comments, I don't think that other GL specs > >> >> > > than > >> >> > > GLES3 > >> >> > > address this either. > >> >> > > > >> >> > > Removing the function from WebGL 2 would also reduce compatibility > >> >> > > with > >> >> > > GLES3, as Mark pointed out. I'd like to point out that providing a > >> >> > > compatibility shim is trivial in this case, though. 
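As a rough illustration of how small such a shim can be, assuming the GLES3-style drawRangeElements signature (a sketch only, not taken from any shipping implementation):

    // Emulate drawRangeElements on top of drawElements by dropping the
    // [start, end] hint. For valid input the results are identical; the
    // hint only exists to let the driver skip work.
    function drawRangeElementsShim(gl, mode, start, end, count, type, offset) {
      gl.drawElements(mode, count, type, offset);
    }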
> >> >> > > > >> >> > > I'd like to still hear more comments on this, but based on the > >> >> > > conversation > >> >> > > so far I'm not as keen on removing the function any more. > >> >> > > > >> >> > > -Olli > >> >> > > ________________________________________ > >> >> > > From: Benoit Jacob [bjacob...@] > >> >> > > Sent: Tuesday, July 15, 2014 6:03 PM > >> >> > > To: Olli Etuaho > >> >> > > Cc: public webgl > >> >> > > Subject: Re: [Public WebGL] Removing drawRangeElements from WebGL 2 > >> >> > > > >> >> > > ----- Original Message ----- > >> >> > >> > >> >> > >> I submitted a pull request removing drawRangeElements from WebGL > >> >> > >> 2, > >> >> > >> along > >> >> > >> with a detailed explanation why: > >> >> > >> https://github.com/KhronosGroup/WebGL/pull/624 > >> >> > > > >> >> > > I would need more detail than that, to understand what the problem > >> >> > > with > >> >> > > drawRangeElements is. > >> >> > > > >> >> > > "WebGL would need to validate that the indices are in the start, > >> >> > > end > >> >> > > range > >> >> > > to ensure consistent behavior..." > >> >> > > > >> >> > > Sure thing - and that's mostly what WebGL has to do anyway for > >> >> > > drawElements, except more expensive: for drawElements, it has to > >> >> > > track > >> >> > > the > >> >> > > maximum-element-in-any-contiguous-sub-array, and now it also has to > >> >> > > track > >> >> > > the minimum-element-in-any-contiguous-sub-array. > >> >> > > > >> >> > > "...and no performance benefits could be realized by exposing this > >> >> > > function." > >> >> > > > >> >> > > I would like to understand how that conclusion follows. As far as I > >> >> > > can > >> >> > > see, the overhead for drawRangeElements is strictly less than 2x > >> >> > > the > >> >> > > overhead for drawElements, for the reason explained above. So it's > >> >> > > still > >> >> > > the same order of magnitude. Have you found any testcase where > >> >> > > Firefox > >> >> > > Nightly's drawElements validation overhead was large? I know > >> >> > > there's > >> >> > > > >> >> > > https://www.khronos.org/registry/webgl/sdk/tests/extra/webgl-drawelements-validation.html > >> >> > > which did hit one such case, but we fixed it in Firefox 32 > >> >> > > (currently > >> >> > > on > >> >> > > the Aurora channel). Is there any problem left? If not, then why do > >> >> > > you > >> >> > > expect there to be problems with drawRangeElements? > >> >> > > > >> >> > > Benoit > >> >> > > > >> >> > >> > >> >> > >> Please raise your objections if you have any. 
> >> >> > >> > >> >> > >> -Olli > >> >> > >> ----------------------------------------------------------- > >> >> > >> You are currently subscribed to public_webgl...@ > >> >> > >> To unsubscribe, send an email to majordomo...@ with > >> >> > >> the following command in the body of your email: > >> >> > >> unsubscribe public_webgl > >> >> > >> ----------------------------------------------------------- > >> >> > >> > >> >> > >> > >> >> > > > >> >> > > ----------------------------------------------------------- > >> >> > > You are currently subscribed to public_webgl...@ > >> >> > > To unsubscribe, send an email to majordomo...@ with > >> >> > > the following command in the body of your email: > >> >> > > unsubscribe public_webgl > >> >> > > ----------------------------------------------------------- > >> >> > > > >> >> > > >> >> > >> >> ----------------------------------------------------------- > >> >> You are currently subscribed to public_webgl...@ > >> >> To unsubscribe, send an email to majordomo...@ with > >> >> the following command in the body of your email: > >> >> unsubscribe public_webgl > >> >> ----------------------------------------------------------- > >> >> > >> > > >> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Wed Jul 16 17:45:34 2014 From: kbr...@ (Kenneth Russell) Date: Wed, 16 Jul 2014 17:45:34 -0700 Subject: [Public WebGL] Removing drawRangeElements from WebGL 2 In-Reply-To: <1863419594.4357803.1405554914489.JavaMail.zimbra@mozilla.com> References: <1577684777.4280106.1405532767389.JavaMail.zimbra@mozilla.com> <866543802.4341186.1405548861988.JavaMail.zimbra@mozilla.com> <1863419594.4357803.1405554914489.JavaMail.zimbra@mozilla.com> Message-ID: Your desire is noted. Nonetheless effort will proceed toward eliminating the need for this validation. It has been stated by multiple developers that it imposes too high a cost for their applications, in particular when dynamically generating geometry. As more evidence of the performance cost of CPU-side index validation is amassed, a case will be made to the WebGL working group for changing the spec. On Wed, Jul 16, 2014 at 4:55 PM, Benoit Jacob wrote: > Right, that's what I called "implementation-defined behavior" above. > > As far as WebGL is concerned, I would like the INVALID_OPERATION behavior to be mandatory. > > Likewise regarding webgl.drawRangeElements, I would expect INVALID_OPERATION to be mandatory if the lower or upper bound parameter is conflicting with the values in the element array, similar to what happens already with bad webgl.drawElements calls. > > AFAICS the only argument against that has been the concern that it might be too much overhead but I haven't seen conclusive evidence of that. > > Benoit > > ----- Original Message ----- >> On Wed, Jul 16, 2014 at 3:14 PM, Benoit Jacob wrote: >> > >> > >> > ----- Original Message ----- >> >> Good point. Note also that even if the requirement to generate an >> >> INVALID_OPERATION error for a draw call with invalid indices is >> >> lifted, it will still be allowed to generate such an error, >> > >> > So now we are talking about implementation-defined behavior.... >> > >> >> and the >> >> conformance tests will take that into account. >> > >> > ... by leaving that outside of the scope of the test. 
>> >> No. The test will verify for bad draw calls that either >> INVALID_OPERATION is generated, or one of the behaviors defined in the >> robust_buffer_access_behavior spec occurs. >> >> -Ken >> >> >> > That sounds like exactly what we've consistently tried to avoid in WebGL; I >> > don't understand why this case is different. >> > >> > Benoit >> > >> >> This means that if a >> >> WebGL implementer decides that a given graphics driver's out-of-range >> >> index handling is poor, it can always fall back to manual index >> >> validation. >> >> >> >> >> >> On Wed, Jul 16, 2014 at 11:24 AM, Brandon Jones >> >> wrote: >> >> > Even in the drivers handle that case horrifically slow, I don't think >> >> > it's >> >> > a >> >> > problem as long as the driver-side validation doesn't cause serious >> >> > regressions in the performance of valid draws. Out of range draws are an >> >> > error, and I don't think we should have any expectation of fast >> >> > performance >> >> > on an error case. If you don't want the performance hit then don't >> >> > submit >> >> > invalid draw calls, simple as that! >> >> > >> >> > --Brandon >> >> > >> >> > On Wed Jul 16 2014 at 10:46:58 AM, Benoit Jacob >> >> > wrote: >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> ----- Original Message ----- >> >> >> > > Currently WebGL's drawElements call requires indices to be >> >> >> > > validated, >> >> >> > > resulting in performance problems in some applications. Work is >> >> >> > > underway to bring GL_ARB_robust_buffer_access_behavior to OpenGL >> >> >> > > ES. >> >> >> > > The WebGL spec could then be relaxed to no longer require index >> >> >> > > validation, but still have testable behavior. Until this happens, >> >> >> > > drawRangeElements would require the same index validation that >> >> >> > > drawElements does." >> >> >> > >> >> >> > Are you sure that ARB_robust_buffer_access_behavior even removes the >> >> >> > requirement to do index validation on drawRangeElements? With >> >> >> > ARB_robust_buffer_access_behavior, access is restricted within the >> >> >> > buffer >> >> >> > object, not the specified range, so there's still some room for >> >> >> > implementation-dependent behavior of drawRangeElements. Maybe this is >> >> >> > good >> >> >> > enough to make it testable, but not completely consistent. >> >> >> >> >> >> On the topic of doubting whether WebGL implementations would ever be >> >> >> able >> >> >> to rely 'robustness' extensions to guard element array access, here is >> >> >> another one, that applies also to drawElements. >> >> >> >> >> >> A while ago, playing with a Firefox patched to not do any element array >> >> >> access validation, running Olli's benchmark >> >> >> https://www.khronos.org/registry/webgl/sdk/tests/extra/webgl-drawelements-validation.html >> >> >> that exercises some out-of-range buffer access, on Intel linux drivers, >> >> >> I >> >> >> noticed that that was running much slower than with element array >> >> >> access >> >> >> validation. It seemed that the out-of-range accesses were somehow >> >> >> causing >> >> >> the driver to stall. That's scary as that seems like it could >> >> >> reasonably >> >> >> generalize to more than one driver and remove the feasibility of >> >> >> depending >> >> >> on drivers to handle that for us: even if they do it correctly, they >> >> >> might >> >> >> not do it fast enough. 
>> >> >> >> >> >> Benoit >> >> >> >> >> >> > >> >> >> > Re: performance, on our hardware there is no performance benefit from >> >> >> > drawRangeElements over drawElements when using buffers allocated on >> >> >> > the >> >> >> > GPU. >> >> >> > >> >> >> > -Olli >> >> >> > ________________________________________ >> >> >> > From: Kenneth Russell [kbr...@] >> >> >> > Sent: Tuesday, July 15, 2014 9:12 PM >> >> >> > To: Olli Etuaho >> >> >> > Cc: Benoit Jacob; public webgl >> >> >> > Subject: Re: [Public WebGL] Removing drawRangeElements from WebGL 2 >> >> >> > >> >> >> > Also note that an optimized WebGL implementation could translate >> >> >> > drawElements calls to DrawRangeElements calls internally. If there's >> >> >> > any performance benefit from calling DrawRangeElements, it can be >> >> >> > achieved without exposing drawRangeElements to JavaScript and having >> >> >> > to validate the indices against the start and end range. (This would >> >> >> > still require the index validation code to be present in the WebGL >> >> >> > implementation, to be able to rapidly query the minimum and maximum >> >> >> > index within a given range of an element array buffer.) >> >> >> > >> >> >> > I'd like to hear from GPU vendors whether DrawRangeElements provides >> >> >> > any significant speedup over DrawElements when all of the buffers are >> >> >> > allocated on the GPU -- i.e., no client side vertices or indices are >> >> >> > in use. If it does, then the entry point should be exposed and WebGL >> >> >> > implementations should aim to capture all the available performance, >> >> >> > in particular when index validation can be delegated to the GPU. If >> >> >> > not, it should be removed, and the focus should be on making >> >> >> > drawElements go as fast as possible. >> >> >> > >> >> >> > -Ken >> >> >> > >> >> >> > >> >> >> > On Tue, Jul 15, 2014 at 8:42 AM, Olli Etuaho >> >> >> > wrote: >> >> >> > > >> >> >> > > You are correct that the drawRangeElements overhead imposed by >> >> >> > > WebGL >> >> >> > > should >> >> >> > > be < 2x drawElements overhead, and if start is set to zero it can >> >> >> > > be >> >> >> > > made >> >> >> > > identical to drawElements overhead. But since the only reason for >> >> >> > > drawRangeElements to originally exist is to provide improved >> >> >> > > performance >> >> >> > > over drawElements, additional overhead could make it worthless. A >> >> >> > > WebGL >> >> >> > > implementation is either way able to use drawRangeElements under >> >> >> > > the >> >> >> > > hood >> >> >> > > to get the possible performance benefits from the driver. Maybe >> >> >> > > it's >> >> >> > > not >> >> >> > > as clear-cut as I initially thought, though: calling >> >> >> > > drawRangeElements >> >> >> > > in >> >> >> > > WebGL with start index that's much greater than 0 could still >> >> >> > > provide >> >> >> > > the >> >> >> > > WebGL implementation a good hint that it is worthwhile to actually >> >> >> > > check >> >> >> > > the minimum index and call underlying drawRangeElements with the >> >> >> > > original >> >> >> > > start value. >> >> >> > > >> >> >> > > To answer Florian's comments, I don't think that other GL specs >> >> >> > > than >> >> >> > > GLES3 >> >> >> > > address this either. >> >> >> > > >> >> >> > > Removing the function from WebGL 2 would also reduce compatibility >> >> >> > > with >> >> >> > > GLES3, as Mark pointed out. I'd like to point out that providing a >> >> >> > > compatibility shim is trivial in this case, though. 
>> >> >> > > >> >> >> > > I'd like to still hear more comments on this, but based on the >> >> >> > > conversation >> >> >> > > so far I'm not as keen on removing the function any more. >> >> >> > > >> >> >> > > -Olli >> >> >> > > ________________________________________ >> >> >> > > From: Benoit Jacob [bjacob...@] >> >> >> > > Sent: Tuesday, July 15, 2014 6:03 PM >> >> >> > > To: Olli Etuaho >> >> >> > > Cc: public webgl >> >> >> > > Subject: Re: [Public WebGL] Removing drawRangeElements from WebGL 2 >> >> >> > > >> >> >> > > ----- Original Message ----- >> >> >> > >> >> >> >> > >> I submitted a pull request removing drawRangeElements from WebGL >> >> >> > >> 2, >> >> >> > >> along >> >> >> > >> with a detailed explanation why: >> >> >> > >> https://github.com/KhronosGroup/WebGL/pull/624 >> >> >> > > >> >> >> > > I would need more detail than that, to understand what the problem >> >> >> > > with >> >> >> > > drawRangeElements is. >> >> >> > > >> >> >> > > "WebGL would need to validate that the indices are in the start, >> >> >> > > end >> >> >> > > range >> >> >> > > to ensure consistent behavior..." >> >> >> > > >> >> >> > > Sure thing - and that's mostly what WebGL has to do anyway for >> >> >> > > drawElements, except more expensive: for drawElements, it has to >> >> >> > > track >> >> >> > > the >> >> >> > > maximum-element-in-any-contiguous-sub-array, and now it also has to >> >> >> > > track >> >> >> > > the minimum-element-in-any-contiguous-sub-array. >> >> >> > > >> >> >> > > "...and no performance benefits could be realized by exposing this >> >> >> > > function." >> >> >> > > >> >> >> > > I would like to understand how that conclusion follows. As far as I >> >> >> > > can >> >> >> > > see, the overhead for drawRangeElements is strictly less than 2x >> >> >> > > the >> >> >> > > overhead for drawElements, for the reason explained above. So it's >> >> >> > > still >> >> >> > > the same order of magnitude. Have you found any testcase where >> >> >> > > Firefox >> >> >> > > Nightly's drawElements validation overhead was large? I know >> >> >> > > there's >> >> >> > > >> >> >> > > https://www.khronos.org/registry/webgl/sdk/tests/extra/webgl-drawelements-validation.html >> >> >> > > which did hit one such case, but we fixed it in Firefox 32 >> >> >> > > (currently >> >> >> > > on >> >> >> > > the Aurora channel). Is there any problem left? If not, then why do >> >> >> > > you >> >> >> > > expect there to be problems with drawRangeElements? >> >> >> > > >> >> >> > > Benoit >> >> >> > > >> >> >> > >> >> >> >> > >> Please raise your objections if you have any. 
----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From bja...@ Wed Jul 16 20:57:38 2014 From: bja...@ (Benoit Jacob) Date: Wed, 16 Jul 2014 20:57:38 -0700 (PDT) Subject: [Public WebGL] Removing drawRangeElements from WebGL 2 In-Reply-To: References: <1577684777.4280106.1405532767389.JavaMail.zimbra@mozilla.com> <866543802.4341186.1405548861988.JavaMail.zimbra@mozilla.com> <1863419594.4357803.1405554914489.JavaMail.zimbra@mozilla.com> Message-ID: <1871050472.4414446.1405569458855.JavaMail.zimbra@mozilla.com> ----- Original Message ----- > Your desire is noted. Nonetheless effort will proceed toward > eliminating the need for this validation. It has been stated by > multiple developers that it imposes too high a cost for their > applications, in particular when dynamically generating geometry. Has there been any complaint about Firefox? I asked but didn't get any positive response. My point here is that until bugs are confirmed on all browsers, we are talking about performance bugs in particular browsers. Benoit > As > more evidence of the performance cost of CPU-side index validation is > amassed, a case will be made to the WebGL working group for changing > the spec. > > > > On Wed, Jul 16, 2014 at 4:55 PM, Benoit Jacob wrote: > > Right, that's what I called "implementation-defined behavior" above. > > > > As far as WebGL is concerned, I would like the INVALID_OPERATION behavior > > to be mandatory. > > > > Likewise regarding webgl.drawRangeElements, I would expect > > INVALID_OPERATION to be mandatory if the lower or upper bound parameter is > > conflicting with the values in the element array, similar to what happens > > already with bad webgl.drawElements calls. > > > > AFAICS the only argument against that has been the concern that it might be > > too much overhead but I haven't seen conclusive evidence of that.
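To make the "either INVALID_OPERATION or robust behavior" acceptance criterion from earlier in this thread concrete, a conformance-style check could look roughly like the following. This is an illustrative sketch only; drawWithOutOfRangeIndices stands in for a draw call whose element array refers past the end of the bound vertex buffers, and a real test would also have to verify that the rendered output is well defined in the robust case.

    drawWithOutOfRangeIndices(gl);
    var err = gl.getError();
    // Accept either mandatory validation (INVALID_OPERATION) or
    // robust-buffer-access behavior (the draw completes without error and
    // all reads stay inside the bound buffers).
    var pass = (err === gl.INVALID_OPERATION) || (err === gl.NO_ERROR);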
> > > > Benoit ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Fri Jul 18 09:07:09 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Fri, 18 Jul 2014 18:07:09 +0200 Subject: [Public WebGL] EXT_disjoint_timer_query encouraged to be implemented In-Reply-To: References: Message-ID: EXT_disjoint_timer query is desperately needed. Measuring performance by way of FPS is completely unusable, as it mostly just boils down to 3 answers, 15fps, 30fps or 60fps. Programmers can't measure their WebGL programs, and hence they cannot optimize them. This is extremely bad. On Wed, Jul 16, 2014 at 3:34 PM, Florian B?sch wrote: > Oh, right, I thought it was. Weird. Anyways, makes it even more relevant > to get it done. > > > On Wed, Jul 16, 2014 at 3:31 PM, Daniel Koch wrote: > >> On 2014-07-16 7:15 AM, "Florian B?sch" wrote: >> >> The EXT_disjoint_timer_query extension ( >> http://www.khronos.org/registry/webgl/extensions/EXT_disjoint_timer_query/) >> has been moved to draft by Kenneth 5 months ago. >> >> Tickets for implementation of the functionality are created: >> >> - Chrome: https://code.google.com/p/chromium/issues/detail?id=345227 >> - Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=974832 >> - Webkit: https://bugs.webkit.org/show_bug.cgi?id=129090 >> - Angle: https://code.google.com/p/angleproject/issues/detail?id=657 >> >> The extension would be very useful for a variety of performance profiling >> usecases and LOD determination algorithms. >> >> The extension is part of OpenGL ES 3.1 core, the WebGL version of which >> would probably still be a few years out. >> >> >> This last statement is incorrect. It is NOT part of OpenGL ES 3.1. >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Fri Jul 18 10:01:43 2014 From: bja...@ (Benoit Jacob) Date: Fri, 18 Jul 2014 10:01:43 -0700 (PDT) Subject: [Public WebGL] EXT_disjoint_timer_query encouraged to be implemented In-Reply-To: References: Message-ID: <1731189349.4617282.1405702903583.JavaMail.zimbra@mozilla.com> It's complicated. Measuring the speed of GPU code execution is made hard by the fact that we have no control over what code runs on the GPU. The OpenGL commands that we issue are only a high-level description of that. The details, which are what you would measure with timer queries, are opaque and implementation-dependent. That's basically the reason why to this day most GPU's performance measuring tools are GPU-specific and only an imperfect and small minority gets standardized (like these timer queries). For now, you can get around the vsync quantization problem ("30fps or 60fps") by instead running your benchmark as measuring how much you can increase the complexity of your scene without falling below 60 FPS ("running with 123,000 dinosaurs at 60 FPS"). That will also give you more relevant performance measurements: what you really want is to stay at 60 FPS. Micro vs macro benchmarks, etc. Benoit ----- Original Message ----- > EXT_disjoint_timer query is desperately needed. Measuring performance by way > of FPS is completely unusable, as it mostly just boils down to 3 answers, > 15fps, 30fps or 60fps. Programmers can't measure their WebGL programs, and > hence they cannot optimize them. This is extremely bad. 
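A rough sketch of the calibration Benoit describes, in plain JavaScript; renderScene(n) is a placeholder for drawing a scene with n objects, the thresholds are arbitrary, and a real benchmark would warm up first and average several frames per step:

    // Grow the workload until the page can no longer hold ~60 Hz, then
    // report the largest workload that could. This sidesteps the vsync
    // quantization of raw FPS numbers, at the cost of measuring whole
    // frames rather than individual passes.
    function calibrate(renderScene, done) {
      var n = 100;
      var last = performance.now();
      function frame(now) {
        var dt = now - last;
        last = now;
        renderScene(n);
        if (dt < 17) {
          n = Math.round(n * 1.2);   // still at 60 FPS: add more work
          requestAnimationFrame(frame);
        } else {
          done(n);                   // "runs n dinosaurs at 60 FPS"
        }
      }
      requestAnimationFrame(frame);
    }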
> On Wed, Jul 16, 2014 at 3:34 PM, Florian B?sch < pyalot...@ > wrote: > > Oh, right, I thought it was. Weird. Anyways, makes it even more relevant to > > get it done. > > > On Wed, Jul 16, 2014 at 3:31 PM, Daniel Koch < dkoch...@ > wrote: > > > > On 2014-07-16 7:15 AM, "Florian B?sch" < pyalot...@ > wrote: > > > > > > > The EXT_disjoint_timer_query extension ( > > > > http://www.khronos.org/registry/webgl/extensions/EXT_disjoint_timer_query/ > > > > ) > > > > has been moved to draft by Kenneth 5 months ago. > > > > > > > > > > Tickets for implementation of the functionality are created: > > > > > > > > > > - Chrome: https://code.google.com/p/chromium/issues/detail?id=345227 > > > > > > > > > > - Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=974832 > > > > > > > > > > - Webkit: https://bugs.webkit.org/show_bug.cgi?id=129090 > > > > > > > > > > - Angle: https://code.google.com/p/angleproject/issues/detail?id=657 > > > > > > > > > > The extension would be very useful for a variety of performance > > > > profiling > > > > usecases and LOD determination algorithms. > > > > > > > > > > The extension is part of OpenGL ES 3.1 core, the WebGL version of which > > > > would > > > > probably still be a few years out. > > > > > > > > > This last statement is incorrect. It is NOT part of OpenGL ES 3.1. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Fri Jul 18 10:07:21 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Fri, 18 Jul 2014 19:07:21 +0200 Subject: [Public WebGL] EXT_disjoint_timer_query encouraged to be implemented In-Reply-To: <1731189349.4617282.1405702903583.JavaMail.zimbra@mozilla.com> References: <1731189349.4617282.1405702903583.JavaMail.zimbra@mozilla.com> Message-ID: What you describe is not a good way to measure performance, at all. First of all, it cannot catch improvements. Second, it requires instrumenting/fiddling with the code in artificial load-settings, that might or might not have anything relevant to do with how the code performs under realistic settings. Lastly, it's extremely tedious. I, and everybody else is just about at the end of their rope trying to squeeze out performance from even moderately complex WebGL applications, and what we have doesn't work. If there wasn't vsync, and if there wasn't vsync quantization, which is how native app-developers measure performance, it'd be somewhat workable. But there is vsync, and there is vsync quantization, and hence, I'm screwed, and so is everybody else. This is not a sustainable situation, and it needs to change, yesterday. And the only thing I see for that to change is EXT_disjoint_timer_query, warts and all, because it can't possibly be worse than what we have now. On Fri, Jul 18, 2014 at 7:01 PM, Benoit Jacob wrote: > It's complicated. Measuring the speed of GPU code execution is made hard > by the fact that we have no control over what code runs on the GPU. The > OpenGL commands that we issue are only a high-level description of that. > The details, which are what you would measure with timer queries, are > opaque and implementation-dependent. > > That's basically the reason why to this day most GPU's performance > measuring tools are GPU-specific and only an imperfect and small minority > gets standardized (like these timer queries). 
> > For now, you can get around the vsync quantization problem ("30fps or > 60fps") by instead running your benchmark as measuring how much you can > increase the complexity of your scene without falling below 60 FPS > ("running with 123,000 dinosaurs at 60 FPS"). > > That will also give you more relevant performance measurements: what you > really want is to stay at 60 FPS. Micro vs macro benchmarks, etc. > > Benoit > > ------------------------------ > > EXT_disjoint_timer query is desperately needed. Measuring performance by way of FPS is completely unusable, as it mostly just boils down to 3 answers, 15fps, 30fps or 60fps. Programmers can't measure their WebGL programs, and hence they cannot optimize them. This is extremely bad. > > > > On Wed, Jul 16, 2014 at 3:34 PM, Florian B?sch wrote: > >> Oh, right, I thought it was. Weird. Anyways, makes it even more relevant >> to get it done. >> >> >> On Wed, Jul 16, 2014 at 3:31 PM, Daniel Koch wrote: >> >>> On 2014-07-16 7:15 AM, "Florian B?sch" wrote: >>> >>> The EXT_disjoint_timer_query extension ( >>> http://www.khronos.org/registry/webgl/extensions/EXT_disjoint_timer_query/) >>> has been moved to draft by Kenneth 5 months ago. >>> >>> Tickets for implementation of the functionality are created: >>> >>> - Chrome: https://code.google.com/p/chromium/issues/detail?id=345227 >>> - Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=974832 >>> - Webkit: https://bugs.webkit.org/show_bug.cgi?id=129090 >>> - Angle: https://code.google.com/p/angleproject/issues/detail?id=657 >>> >>> The extension would be very useful for a variety of performance >>> profiling usecases and LOD determination algorithms. >>> >>> The extension is part of OpenGL ES 3.1 core, the WebGL version of which >>> would probably still be a few years out. >>> >>> >>> This last statement is incorrect. It is NOT part of OpenGL ES 3.1. >>> >>> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thi...@ Fri Jul 18 11:19:28 2014 From: thi...@ (thi...@) Date: Fri, 18 Jul 2014 18:19:28 +0000 Subject: [Public WebGL] =?utf-8?Q?Re:_[Public_WebGL]_EXT=5Fdisjoint=5Ftimer=5Fquery_encouraged_to?= =?utf-8?Q?_be_implemented?= In-Reply-To: References: <1731189349.4617282.1405702903583.JavaMail.zimbra@mozilla.com>, Message-ID: <53c96763.e233460a.3b6b.ffff875e@mx.google.com> I can?t help but agree with Florian here. Tools to benchmark GPU performance (especially passes/shaders) in WebGL would be invaluable to us at Artillery. Knowing which shader or pass is the bottleneck would save us a great deal of time and fiddling. Getting the most out of our renderer is not only crucial for visual fluidity but also to minimize input lag since frame drops also mean input lag in the current implementation. When you?re developing a high quality game and game engine, those two factors can make the difference between a playable and an unplayable game. In short, we would love to see more GPU performance tools and APIs be implemented. Thibaut. @BKcore @Artillery From: Florian B?sch Sent: ?Friday?, ?July? ?18?, ?2014 ?10?:?07? ?AM To: Benoit Jacob Cc: Daniel Koch, public webgl What you describe is not a good way to measure performance, at all. First of all, it cannot catch improvements. Second, it requires instrumenting/fiddling with the code in artificial load-settings, that might or might not have anything relevant to do with how the code performs under realistic settings. Lastly, it's extremely tedious. 
I, and everybody else is just about at the end of their rope trying to squeeze out performance from even moderately complex WebGL applications, and what we have doesn't work. If there wasn't vsync, and if there wasn't vsync quantization, which is how native app-developers measure performance, it'd be somewhat workable. But there is vsync, and there is vsync quantization, and hence, I'm screwed, and so is everybody else. This is not a sustainable situation, and it needs to change, yesterday. And the only thing I see for that to change is EXT_disjoint_timer_query, warts and all, because it can't possibly be worse than what we have now. On Fri, Jul 18, 2014 at 7:01 PM, Benoit Jacob wrote: It's complicated. Measuring the speed of GPU code execution is made hard by the fact that we have no control over what code runs on the GPU. The OpenGL commands that we issue are only a high-level description of that. The details, which are what you would measure with timer queries, are opaque and implementation-dependent. That's basically the reason why to this day most GPU's performance measuring tools are GPU-specific and only an imperfect and small minority gets standardized (like these timer queries). For now, you can get around the vsync quantization problem ("30fps or 60fps") by instead running your benchmark as measuring how much you can increase the complexity of your scene without falling below 60 FPS ("running with 123,000 dinosaurs at 60 FPS"). That will also give you more relevant performance measurements: what you really want is to stay at 60 FPS. Micro vs macro benchmarks, etc. Benoit EXT_disjoint_timer query is desperately needed. Measuring performance by way of FPS is completely unusable, as it mostly just boils down to 3 answers, 15fps, 30fps or 60fps. Programmers can't measure their WebGL programs, and hence they cannot optimize them. This is extremely bad. On Wed, Jul 16, 2014 at 3:34 PM, Florian B?sch wrote: Oh, right, I thought it was. Weird. Anyways, makes it even more relevant to get it done. On Wed, Jul 16, 2014 at 3:31 PM, Daniel Koch wrote: On 2014-07-16 7:15 AM, "Florian B?sch" wrote: The EXT_disjoint_timer_query extension (http://www.khronos.org/registry/webgl/extensions/EXT_disjoint_timer_query/) has been moved to draft by Kenneth 5 months ago. Tickets for implementation of the functionality are created: - Chrome: https://code.google.com/p/chromium/issues/detail?id=345227 - Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=974832 - Webkit: https://bugs.webkit.org/show_bug.cgi?id=129090 - Angle: https://code.google.com/p/angleproject/issues/detail?id=657 The extension would be very useful for a variety of performance profiling usecases and LOD determination algorithms. The extension is part of OpenGL ES 3.1 core, the WebGL version of which would probably still be a few years out. This last statement is incorrect. It is NOT part of OpenGL ES 3.1. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Fri Jul 18 11:40:39 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Fri, 18 Jul 2014 20:40:39 +0200 Subject: [Public WebGL] EXT_disjoint_timer_query encouraged to be implemented In-Reply-To: <53c96763.e233460a.3b6b.ffff875e@mx.google.com> References: <1731189349.4617282.1405702903583.JavaMail.zimbra@mozilla.com> <53c96763.e233460a.3b6b.ffff875e@mx.google.com> Message-ID: The problem that Benoit mentions is discussed in this ticket briefly http://code.google.com/p/chromium/issues/detail?id=345227 . 
It's about how the command stream from multiple virtual contexts intermingles in one actual context, and hence throws off timing. There are platforms, such as apparently on Linux, where virtual contexts aren't used, so there at least once could get something out of EXT_disjoint_timer_query. I'd assume the same holds true for Android and OSX correct? (i.e. are virtual context a D3D angle thing?). But regardless of the feasibility of EXT_disjoint_timer_query in face of virtual contexts, some way to measure performance more fine grained (than 15, 30 or 60 fps and beyond), in production situations, from your machine, and from machines of users is crucial. It's unrealistic to presume that you can write performance sensitive/hardware bound software for hardware that exhibits a spread of 100x or more difference in performance without any reliable way to measure how much a concrete change affects performance for you, for the machines you test on and for your users once you roll it out. Native app-developers simply toggle off vsync and they don't deal with vsync quantization at least on their own machines (and not on their test machines). It can't be that WebGL is "that thing you can't debug for performance". WebGL isn't a second class citizen to Desktop GL. Somebody even suggested to me I could setup the application as a native app, so that I can go test performance... -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Fri Jul 18 11:40:58 2014 From: bja...@ (Benoit Jacob) Date: Fri, 18 Jul 2014 11:40:58 -0700 (PDT) Subject: [Public WebGL] EXT_disjoint_timer_query encouraged to be implemented In-Reply-To: <53c96763.e233460a.3b6b.ffff875e@mx.google.com> References: <1731189349.4617282.1405702903583.JavaMail.zimbra@mozilla.com> <53c96763.e233460a.3b6b.ffff875e@mx.google.com> Message-ID: <372329026.4629282.1405708858376.JavaMail.zimbra@mozilla.com> Not saying that timer queries are useless or that we'll never ship them in WebGL - but they are neither as portable nor as informative as you and Florian seem to think they are. They are a portable interface to measuring _something_ that happens on your GPU; but _what_ exactly that thing is, and how it is affected by other things, is GPU-dependent. To know in depth where time is being spent on a given system, you probably need to use the performance measurement tools provided by your GPU vendor. Those will be able to tell you a great deal of detail about what is going on in your GPU. That you can do today, without waiting for WebGL implementations to provide anything. Just apply your GPU vendor profiling tools to your browser running your WebGL app. Benoit ----- Original Message ----- > I can?t help but agree with Florian here. Tools to benchmark GPU performance > (especially passes/shaders) in WebGL would be invaluable to us at Artillery. > Knowing which shader or pass is the bottleneck would save us a great deal of > time and fiddling. > Getting the most out of our renderer is not only crucial for visual fluidity > but also to minimize input lag since frame drops also mean input lag in the > current implementation. When you?re developing a high quality game and game > engine, those two factors can make the difference between a playable and an > unplayable game. > In short, we would love to see more GPU performance tools and APIs be > implemented. > Thibaut. > @BKcore > @Artillery > From: Florian B?sch > Sent: ?Friday?, ?July? ?18?, ?2014 ?10?:?07? 
?AM > To: Benoit Jacob > Cc: Daniel Koch , public webgl > What you describe is not a good way to measure performance, at all. First of > all, it cannot catch improvements. Second, it requires > instrumenting/fiddling with the code in artificial load-settings, that might > or might not have anything relevant to do with how the code performs under > realistic settings. Lastly, it's extremely tedious. I, and everybody else is > just about at the end of their rope trying to squeeze out performance from > even moderately complex WebGL applications, and what we have doesn't work. > If there wasn't vsync, and if there wasn't vsync quantization, which is how > native app-developers measure performance, it'd be somewhat workable. But > there is vsync, and there is vsync quantization, and hence, I'm screwed, and > so is everybody else. This is not a sustainable situation, and it needs to > change, yesterday. And the only thing I see for that to change is > EXT_disjoint_timer_query, warts and all, because it can't possibly be worse > than what we have now. > On Fri, Jul 18, 2014 at 7:01 PM, Benoit Jacob < bjacob...@ > wrote: > > It's complicated. Measuring the speed of GPU code execution is made hard by > > the fact that we have no control over what code runs on the GPU. The OpenGL > > commands that we issue are only a high-level description of that. The > > details, which are what you would measure with timer queries, are opaque > > and > > implementation-dependent. > > > That's basically the reason why to this day most GPU's performance > > measuring > > tools are GPU-specific and only an imperfect and small minority gets > > standardized (like these timer queries). > > > For now, you can get around the vsync quantization problem ("30fps or > > 60fps") > > by instead running your benchmark as measuring how much you can increase > > the > > complexity of your scene without falling below 60 FPS ("running with > > 123,000 > > dinosaurs at 60 FPS"). > > > That will also give you more relevant performance measurements: what you > > really want is to stay at 60 FPS. Micro vs macro benchmarks, etc. > > > Benoit > > > > EXT_disjoint_timer query is desperately needed. Measuring performance by > > > way > > > of FPS is completely unusable, as it mostly just boils down to 3 answers, > > > 15fps, 30fps or 60fps. Programmers can't measure their WebGL programs, > > > and > > > hence they cannot optimize them. This is extremely bad. > > > > > > On Wed, Jul 16, 2014 at 3:34 PM, Florian B?sch < pyalot...@ > > > > wrote: > > > > > > > Oh, right, I thought it was. Weird. Anyways, makes it even more > > > > relevant > > > > to > > > > get it done. > > > > > > > > > > On Wed, Jul 16, 2014 at 3:31 PM, Daniel Koch < dkoch...@ > > > > > wrote: > > > > > > > > > > > On 2014-07-16 7:15 AM, "Florian B?sch" < pyalot...@ > wrote: > > > > > > > > > > > > > > > > The EXT_disjoint_timer_query extension ( > > > > > > http://www.khronos.org/registry/webgl/extensions/EXT_disjoint_timer_query/ > > > > > > ) > > > > > > has been moved to draft by Kenneth 5 months ago. 
> > > > > > > > > > > > > > > > > > > > > Tickets for implementation of the functionality are created: > > > > > > > > > > > > > > > > > > > > > - Chrome: > > > > > > https://code.google.com/p/chromium/issues/detail?id=345227 > > > > > > > > > > > > > > > > > > > > > - Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=974832 > > > > > > > > > > > > > > > > > > > > > - Webkit: https://bugs.webkit.org/show_bug.cgi?id=129090 > > > > > > > > > > > > > > > > > > > > > - Angle: > > > > > > https://code.google.com/p/angleproject/issues/detail?id=657 > > > > > > > > > > > > > > > > > > > > > The extension would be very useful for a variety of performance > > > > > > profiling > > > > > > usecases and LOD determination algorithms. > > > > > > > > > > > > > > > > > > > > > The extension is part of OpenGL ES 3.1 core, the WebGL version of > > > > > > which > > > > > > would > > > > > > probably still be a few years out. > > > > > > > > > > > > > > > > > > > > This last statement is incorrect. It is NOT part of OpenGL ES 3.1. > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Fri Jul 18 12:07:24 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Fri, 18 Jul 2014 21:07:24 +0200 Subject: [Public WebGL] EXT_disjoint_timer_query encouraged to be implemented In-Reply-To: <372329026.4629282.1405708858376.JavaMail.zimbra@mozilla.com> References: <1731189349.4617282.1405702903583.JavaMail.zimbra@mozilla.com> <53c96763.e233460a.3b6b.ffff875e@mx.google.com> <372329026.4629282.1405708858376.JavaMail.zimbra@mozilla.com> Message-ID: This is where we stand. - We can't measure the time on our own machines (and no, performance tools from a vendor aren't a solution for a variety of reasons such as automation, statistics, measuring your things and not other things, and so forth) - We can't measure the time on our users machines. We can't measure how fast or slow our applications are beyond 3 answers (fast enough, too slow, way too slow). This can't go. This isn't how we will successfully be able to deploy WebGL applications that run for a lot of people. Something has to be done about it. If it isn't EXT_disjoint_timer_query, fine, but something's got to be done about it. This isn't about this vendors or that vendors tool, this is about the future of WebGL, at all. If you cannot provide a system, that allows to take the necessary metrics to figure out how fast things go on it, it'll die. It'll become this thing nobody wants to touch because it's undebuggable. Whatever needs to happen, even if it is introducing new functionality into drivers/GPUs that solve this issue, this needs to start right now. On Fri, Jul 18, 2014 at 8:40 PM, Benoit Jacob wrote: > Not saying that timer queries are useless or that we'll never ship them in > WebGL - but they are neither as portable nor as informative as you and > Florian seem to think they are. They are a portable interface to measuring > _something_ that happens on your GPU; but _what_ exactly that thing is, and > how it is affected by other things, is GPU-dependent. > > To know in depth where time is being spent on a given system, you probably > need to use the performance measurement tools provided by your GPU vendor. > Those will be able to tell you a great deal of detail about what is going > on in your GPU. > > That you can do today, without waiting for WebGL implementations to > provide anything. Just apply your GPU vendor profiling tools to your > browser running your WebGL app. 
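For reference, the draft extension being asked for is used along these lines; the names follow the EXT_disjoint_timer_query draft in the WebGL extension registry, drawPass is a placeholder for the work being measured, and results have to be polled asynchronously and discarded whenever a disjoint event occurred:

    var ext = gl.getExtension('EXT_disjoint_timer_query');
    var query = ext.createQueryEXT();
    ext.beginQueryEXT(ext.TIME_ELAPSED_EXT, query);
    drawPass(gl);                              // the pass being timed
    ext.endQueryEXT(ext.TIME_ELAPSED_EXT);

    // A few frames later, poll instead of blocking:
    var available = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_AVAILABLE_EXT);
    var disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);
    if (available && !disjoint) {
      var gpuTimeNs = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_EXT); // nanoseconds
    }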
> > Benoit > > ------------------------------ > > I can?t help but agree with Florian here. Tools to benchmark GPU > performance (especially passes/shaders) in WebGL would be invaluable to us > at Artillery. Knowing which shader or pass is the bottleneck would save us > a great deal of time and fiddling. > Getting the most out of our renderer is not only crucial for visual > fluidity but also to minimize input lag since frame drops also mean input > lag in the current implementation. When you?re developing a high quality > game and game engine, those two factors can make the difference between a > playable and an unplayable game. > > In short, we would love to see more GPU performance tools and APIs be > implemented. > > Thibaut. > @BKcore > @Artillery > > *From:* Florian B?sch > *Sent:* ?Friday?, ?July? ?18?, ?2014 ?10?:?07? ?AM > *To:* Benoit Jacob > *Cc:* Daniel Koch , public webgl > > > What you describe is not a good way to measure performance, at all. First > of all, it cannot catch improvements. Second, it requires > instrumenting/fiddling with the code in artificial load-settings, that > might or might not have anything relevant to do with how the code performs > under realistic settings. Lastly, it's extremely tedious. I, and everybody > else is just about at the end of their rope trying to squeeze out > performance from even moderately complex WebGL applications, and what we > have doesn't work. If there wasn't vsync, and if there wasn't vsync > quantization, which is how native app-developers measure performance, it'd > be somewhat workable. But there is vsync, and there is vsync quantization, > and hence, I'm screwed, and so is everybody else. This is not a sustainable > situation, and it needs to change, yesterday. And the only thing I see for > that to change is EXT_disjoint_timer_query, warts and all, because it can't > possibly be worse than what we have now. > > > On Fri, Jul 18, 2014 at 7:01 PM, Benoit Jacob wrote: > >> It's complicated. Measuring the speed of GPU code execution is made hard >> by the fact that we have no control over what code runs on the GPU. The >> OpenGL commands that we issue are only a high-level description of that. >> The details, which are what you would measure with timer queries, are >> opaque and implementation-dependent. >> >> That's basically the reason why to this day most GPU's performance >> measuring tools are GPU-specific and only an imperfect and small minority >> gets standardized (like these timer queries). >> >> For now, you can get around the vsync quantization problem ("30fps or >> 60fps") by instead running your benchmark as measuring how much you can >> increase the complexity of your scene without falling below 60 FPS >> ("running with 123,000 dinosaurs at 60 FPS"). >> >> That will also give you more relevant performance measurements: what you >> really want is to stay at 60 FPS. Micro vs macro benchmarks, etc. >> >> Benoit >> >> ------------------------------ >> >> EXT_disjoint_timer query is desperately needed. Measuring performance by way of FPS is completely unusable, as it mostly just boils down to 3 answers, 15fps, 30fps or 60fps. Programmers can't measure their WebGL programs, and hence they cannot optimize them. This is extremely bad. >> >> >> >> On Wed, Jul 16, 2014 at 3:34 PM, Florian B?sch wrote: >> >>> Oh, right, I thought it was. Weird. Anyways, makes it even more relevant >>> to get it done. 
>>> >>> >>> On Wed, Jul 16, 2014 at 3:31 PM, Daniel Koch wrote: >>> >>>> On 2014-07-16 7:15 AM, "Florian B?sch" wrote: >>>> >>>> The EXT_disjoint_timer_query extension ( >>>> http://www.khronos.org/registry/webgl/extensions/EXT_disjoint_timer_query/) >>>> has been moved to draft by Kenneth 5 months ago. >>>> >>>> Tickets for implementation of the functionality are created: >>>> >>>> - Chrome: https://code.google.com/p/chromium/issues/detail?id=345227 >>>> - Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=974832 >>>> - Webkit: https://bugs.webkit.org/show_bug.cgi?id=129090 >>>> - Angle: https://code.google.com/p/angleproject/issues/detail?id=657 >>>> >>>> The extension would be very useful for a variety of performance >>>> profiling usecases and LOD determination algorithms. >>>> >>>> The extension is part of OpenGL ES 3.1 core, the WebGL version of which >>>> would probably still be a few years out. >>>> >>>> >>>> This last statement is incorrect. It is NOT part of OpenGL ES 3.1. >>>> >>>> >>> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sin...@ Fri Jul 18 12:58:28 2014 From: sin...@ (Colin Mackenzie) Date: Fri, 18 Jul 2014 15:58:28 -0400 Subject: [Public WebGL] EXT_disjoint_timer_query encouraged to be implemented In-Reply-To: References: <1731189349.4617282.1405702903583.JavaMail.zimbra@mozilla.com> <53c96763.e233460a.3b6b.ffff875e@mx.google.com> <372329026.4629282.1405708858376.JavaMail.zimbra@mozilla.com> Message-ID: This seems a classic case of one group screaming "we need this," while another screams "we can't do this." There must be some middle ground. Florian is right that the current situation is untenable. It's no secret that graphics applications are performance sensitive, and without some (accessible!) profiling solution, producing anything more than a pretty demo (and sometimes even those!) will be very difficult or impossible. > They are a portable interface to measuring _something_ that happens on your GPU; but _what_ exactly that thing is, and how it is affected by other things, is GPU-dependent So results for one class of hardware may not translate to other hardware because different implementations apply different instructions. However, if it were available, could a creative developer not turn it on in production under strict conditions (say, if current FPS is already unacceptable) and use Google analytics or some such to collect the results? (Is there a way to query the hardware class or any GPU information at all, which would help with filtering of results?) They may not be able to test directly on the same hardware and the results might be somewhat skewed by differing implementations, but this could at least tell them (over a very long period) whether they were on the right track. If the FPS heuristic is used, it may cut out a lot of the "noise" produced by faster hardware. > use the performance measurement tools provided by your GPU vendor. Clearly the smarter option, if it is an option. Perhaps we need to work to introduce a new standard (possibly but not necessarily part of WebGL itself) that would provide a consistent API for gathering equivalent metrics across GPUs. I don't know if this would be possible, but if so I would see it as the best option. This is just me rattling thoughts off to try and find some common ground. 
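(On the parenthetical question above about querying the hardware class: one partial answer that exists today is the WEBGL_debug_renderer_info extension, where a browser chooses to expose it to page content. A minimal sketch follows; the analytics call is a placeholder, not a real API.)

    var gl = document.createElement('canvas').getContext('webgl');
    var info = gl && gl.getExtension('WEBGL_debug_renderer_info'); // may be withheld by the browser
    if (info) {
      var vendor   = gl.getParameter(info.UNMASKED_VENDOR_WEBGL);
      var renderer = gl.getParameter(info.UNMASKED_RENDERER_WEBGL);
      // Bucket whatever timing data is collected by GPU class, e.g.:
      // sendToAnalytics({ vendor: vendor, renderer: renderer, frameMs: measuredFrameMs });
      console.log(vendor, renderer);
    }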
I don't pretend to have the answers, but clearly we need to think about how to solve the problem of testability if we want to see production quality applications on more than a handful of machines. On Jul 18, 2014 3:08 PM, "Florian B?sch" wrote: > This is where we stand. > > - We can't measure the time on our own machines (and no, performance > tools from a vendor aren't a solution for a variety of reasons such as > automation, statistics, measuring your things and not other things, and so > forth) > - We can't measure the time on our users machines. > > We can't measure how fast or slow our applications are beyond 3 answers > (fast enough, too slow, way too slow). > > This can't go. This isn't how we will successfully be able to deploy WebGL > applications that run for a lot of people. > > Something has to be done about it. If it isn't EXT_disjoint_timer_query, > fine, but something's got to be done about it. This isn't about this > vendors or that vendors tool, this is about the future of WebGL, at all. If > you cannot provide a system, that allows to take the necessary metrics to > figure out how fast things go on it, it'll die. It'll become this thing > nobody wants to touch because it's undebuggable. > > Whatever needs to happen, even if it is introducing new functionality into > drivers/GPUs that solve this issue, this needs to start right now. > > > On Fri, Jul 18, 2014 at 8:40 PM, Benoit Jacob wrote: > >> Not saying that timer queries are useless or that we'll never ship them >> in WebGL - but they are neither as portable nor as informative as you and >> Florian seem to think they are. They are a portable interface to measuring >> _something_ that happens on your GPU; but _what_ exactly that thing is, and >> how it is affected by other things, is GPU-dependent. >> >> To know in depth where time is being spent on a given system, you >> probably need to use the performance measurement tools provided by your GPU >> vendor. Those will be able to tell you a great deal of detail about what is >> going on in your GPU. >> >> That you can do today, without waiting for WebGL implementations to >> provide anything. Just apply your GPU vendor profiling tools to your >> browser running your WebGL app. >> >> Benoit >> >> ------------------------------ >> >> I can?t help but agree with Florian here. Tools to benchmark GPU >> performance (especially passes/shaders) in WebGL would be invaluable to us >> at Artillery. Knowing which shader or pass is the bottleneck would save us >> a great deal of time and fiddling. >> Getting the most out of our renderer is not only crucial for visual >> fluidity but also to minimize input lag since frame drops also mean input >> lag in the current implementation. When you?re developing a high quality >> game and game engine, those two factors can make the difference between a >> playable and an unplayable game. >> >> In short, we would love to see more GPU performance tools and APIs be >> implemented. >> >> Thibaut. >> @BKcore >> @Artillery >> >> *From:* Florian B?sch >> *Sent:* ?Friday?, ?July? ?18?, ?2014 ?10?:?07? ?AM >> *To:* Benoit Jacob >> *Cc:* Daniel Koch , public webgl >> >> >> What you describe is not a good way to measure performance, at all. First >> of all, it cannot catch improvements. Second, it requires >> instrumenting/fiddling with the code in artificial load-settings, that >> might or might not have anything relevant to do with how the code performs >> under realistic settings. Lastly, it's extremely tedious. 
I, and everybody >> else is just about at the end of their rope trying to squeeze out >> performance from even moderately complex WebGL applications, and what we >> have doesn't work. If there wasn't vsync, and if there wasn't vsync >> quantization, which is how native app-developers measure performance, it'd >> be somewhat workable. But there is vsync, and there is vsync quantization, >> and hence, I'm screwed, and so is everybody else. This is not a sustainable >> situation, and it needs to change, yesterday. And the only thing I see for >> that to change is EXT_disjoint_timer_query, warts and all, because it can't >> possibly be worse than what we have now. >> >> >> On Fri, Jul 18, 2014 at 7:01 PM, Benoit Jacob wrote: >> >>> It's complicated. Measuring the speed of GPU code execution is made hard >>> by the fact that we have no control over what code runs on the GPU. The >>> OpenGL commands that we issue are only a high-level description of that. >>> The details, which are what you would measure with timer queries, are >>> opaque and implementation-dependent. >>> >>> That's basically the reason why to this day most GPU's performance >>> measuring tools are GPU-specific and only an imperfect and small minority >>> gets standardized (like these timer queries). >>> >>> For now, you can get around the vsync quantization problem ("30fps or >>> 60fps") by instead running your benchmark as measuring how much you can >>> increase the complexity of your scene without falling below 60 FPS >>> ("running with 123,000 dinosaurs at 60 FPS"). >>> >>> That will also give you more relevant performance measurements: what you >>> really want is to stay at 60 FPS. Micro vs macro benchmarks, etc. >>> >>> Benoit >>> >>> ------------------------------ >>> >>> EXT_disjoint_timer query is desperately needed. Measuring performance by way of FPS is completely unusable, as it mostly just boils down to 3 answers, 15fps, 30fps or 60fps. Programmers can't measure their WebGL programs, and hence they cannot optimize them. This is extremely bad. >>> >>> >>> >>> On Wed, Jul 16, 2014 at 3:34 PM, Florian B?sch wrote: >>> >>>> Oh, right, I thought it was. Weird. Anyways, makes it even more >>>> relevant to get it done. >>>> >>>> >>>> On Wed, Jul 16, 2014 at 3:31 PM, Daniel Koch wrote: >>>> >>>>> On 2014-07-16 7:15 AM, "Florian B?sch" wrote: >>>>> >>>>> The EXT_disjoint_timer_query extension ( >>>>> http://www.khronos.org/registry/webgl/extensions/EXT_disjoint_timer_query/) >>>>> has been moved to draft by Kenneth 5 months ago. >>>>> >>>>> Tickets for implementation of the functionality are created: >>>>> >>>>> - Chrome: https://code.google.com/p/chromium/issues/detail?id=345227 >>>>> - Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=974832 >>>>> - Webkit: https://bugs.webkit.org/show_bug.cgi?id=129090 >>>>> - Angle: https://code.google.com/p/angleproject/issues/detail?id=657 >>>>> >>>>> The extension would be very useful for a variety of performance >>>>> profiling usecases and LOD determination algorithms. >>>>> >>>>> The extension is part of OpenGL ES 3.1 core, the WebGL version of >>>>> which would probably still be a few years out. >>>>> >>>>> >>>>> This last statement is incorrect. It is NOT part of OpenGL ES 3.1. >>>>> >>>>> >>>> >>> >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jda...@ Fri Jul 18 14:01:00 2014 From: jda...@ (James Darpinian) Date: Fri, 18 Jul 2014 14:01:00 -0700 Subject: [Public WebGL] EXT_disjoint_timer_query encouraged to be implemented In-Reply-To: <372329026.4629282.1405708858376.JavaMail.zimbra@mozilla.com> References: <1731189349.4617282.1405702903583.JavaMail.zimbra@mozilla.com> <53c96763.e233460a.3b6b.ffff875e@mx.google.com> <372329026.4629282.1405708858376.JavaMail.zimbra@mozilla.com> Message-ID: > Just apply your GPU vendor profiling tools to your browser I would love if this were possible. Unfortunately, GPU vendor profiling tools do not work on WebGL apps running inside browsers. They are designed solely for games and fail or crash when pointed at WebGL apps running in a browser window. Browser vendors could help either by providing a special mode to simplify the rendering pipeline so that GPU vendor profiling tools don't crash, or by collaborating with GPU vendors to fix their tools. On Fri, Jul 18, 2014 at 11:40 AM, Benoit Jacob wrote: > Not saying that timer queries are useless or that we'll never ship them in > WebGL - but they are neither as portable nor as informative as you and > Florian seem to think they are. They are a portable interface to measuring > _something_ that happens on your GPU; but _what_ exactly that thing is, and > how it is affected by other things, is GPU-dependent. > > To know in depth where time is being spent on a given system, you probably > need to use the performance measurement tools provided by your GPU vendor. > Those will be able to tell you a great deal of detail about what is going > on in your GPU. > > That you can do today, without waiting for WebGL implementations to > provide anything. Just apply your GPU vendor profiling tools to your > browser running your WebGL app. > > Benoit > > ------------------------------ > > I can?t help but agree with Florian here. Tools to benchmark GPU > performance (especially passes/shaders) in WebGL would be invaluable to us > at Artillery. Knowing which shader or pass is the bottleneck would save us > a great deal of time and fiddling. > Getting the most out of our renderer is not only crucial for visual > fluidity but also to minimize input lag since frame drops also mean input > lag in the current implementation. When you?re developing a high quality > game and game engine, those two factors can make the difference between a > playable and an unplayable game. > > In short, we would love to see more GPU performance tools and APIs be > implemented. > > Thibaut. > @BKcore > @Artillery > > *From:* Florian B?sch > *Sent:* ?Friday?, ?July? ?18?, ?2014 ?10?:?07? ?AM > *To:* Benoit Jacob > *Cc:* Daniel Koch , public webgl > > > What you describe is not a good way to measure performance, at all. First > of all, it cannot catch improvements. Second, it requires > instrumenting/fiddling with the code in artificial load-settings, that > might or might not have anything relevant to do with how the code performs > under realistic settings. Lastly, it's extremely tedious. I, and everybody > else is just about at the end of their rope trying to squeeze out > performance from even moderately complex WebGL applications, and what we > have doesn't work. If there wasn't vsync, and if there wasn't vsync > quantization, which is how native app-developers measure performance, it'd > be somewhat workable. But there is vsync, and there is vsync quantization, > and hence, I'm screwed, and so is everybody else. 
This is not a sustainable > situation, and it needs to change, yesterday. And the only thing I see for > that to change is EXT_disjoint_timer_query, warts and all, because it can't > possibly be worse than what we have now. > > > On Fri, Jul 18, 2014 at 7:01 PM, Benoit Jacob wrote: > >> It's complicated. Measuring the speed of GPU code execution is made hard >> by the fact that we have no control over what code runs on the GPU. The >> OpenGL commands that we issue are only a high-level description of that. >> The details, which are what you would measure with timer queries, are >> opaque and implementation-dependent. >> >> That's basically the reason why to this day most GPU's performance >> measuring tools are GPU-specific and only an imperfect and small minority >> gets standardized (like these timer queries). >> >> For now, you can get around the vsync quantization problem ("30fps or >> 60fps") by instead running your benchmark as measuring how much you can >> increase the complexity of your scene without falling below 60 FPS >> ("running with 123,000 dinosaurs at 60 FPS"). >> >> That will also give you more relevant performance measurements: what you >> really want is to stay at 60 FPS. Micro vs macro benchmarks, etc. >> >> Benoit >> >> ------------------------------ >> >> EXT_disjoint_timer query is desperately needed. Measuring performance by way of FPS is completely unusable, as it mostly just boils down to 3 answers, 15fps, 30fps or 60fps. Programmers can't measure their WebGL programs, and hence they cannot optimize them. This is extremely bad. >> >> >> >> On Wed, Jul 16, 2014 at 3:34 PM, Florian B?sch wrote: >> >>> Oh, right, I thought it was. Weird. Anyways, makes it even more relevant >>> to get it done. >>> >>> >>> On Wed, Jul 16, 2014 at 3:31 PM, Daniel Koch wrote: >>> >>>> On 2014-07-16 7:15 AM, "Florian B?sch" wrote: >>>> >>>> The EXT_disjoint_timer_query extension ( >>>> http://www.khronos.org/registry/webgl/extensions/EXT_disjoint_timer_query/) >>>> has been moved to draft by Kenneth 5 months ago. >>>> >>>> Tickets for implementation of the functionality are created: >>>> >>>> - Chrome: https://code.google.com/p/chromium/issues/detail?id=345227 >>>> - Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=974832 >>>> - Webkit: https://bugs.webkit.org/show_bug.cgi?id=129090 >>>> - Angle: https://code.google.com/p/angleproject/issues/detail?id=657 >>>> >>>> The extension would be very useful for a variety of performance >>>> profiling usecases and LOD determination algorithms. >>>> >>>> The extension is part of OpenGL ES 3.1 core, the WebGL version of which >>>> would probably still be a few years out. >>>> >>>> >>>> This last statement is incorrect. It is NOT part of OpenGL ES 3.1. >>>> >>>> >>> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Fri Jul 18 14:05:21 2014 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Fri, 18 Jul 2014 23:05:21 +0200 Subject: [Public WebGL] EXT_disjoint_timer_query encouraged to be implemented In-Reply-To: References: <1731189349.4617282.1405702903583.JavaMail.zimbra@mozilla.com> <53c96763.e233460a.3b6b.ffff875e@mx.google.com> <372329026.4629282.1405708858376.JavaMail.zimbra@mozilla.com> Message-ID: Vendor GPU tools are also not available for some platforms/configurations (such as often not on Linux, not on Android, not on Some revisions of OSX, not on Android and so forth). 
Vendor tools also don't solve the issue that you don't want to take just your measurements (even if you can), you also want to take measurements from your users. For example, suppose you've got a good machine, and you're always way down in GPU usage and hence at a rock-solid 60fps. So you conclude, everything's fine. However, let's say 80% of your users barely manage to run at 60fps, but they manage. So you'd measure their FPS, and you'd conclude, everything's fine. Then you make a tiny change, something that degrades performance by say, 20%. Suddenly 80% of your userbase runs on 30fps. The monitoring guy stampedes over to you and starts talking about judgement day. Ooops. On Fri, Jul 18, 2014 at 11:01 PM, James Darpinian wrote: > > Just apply your GPU vendor profiling tools to your browser > > I would love if this were possible. Unfortunately, GPU vendor profiling > tools do not work on WebGL apps running inside browsers. They are designed > solely for games and fail or crash when pointed at WebGL apps running in a > browser window. Browser vendors could help either by providing a special > mode to simplify the rendering pipeline so that GPU vendor profiling tools > don't crash, or by collaborating with GPU vendors to fix their tools. > > > On Fri, Jul 18, 2014 at 11:40 AM, Benoit Jacob wrote: > >> Not saying that timer queries are useless or that we'll never ship them >> in WebGL - but they are neither as portable nor as informative as you and >> Florian seem to think they are. They are a portable interface to measuring >> _something_ that happens on your GPU; but _what_ exactly that thing is, and >> how it is affected by other things, is GPU-dependent. >> >> To know in depth where time is being spent on a given system, you >> probably need to use the performance measurement tools provided by your GPU >> vendor. Those will be able to tell you a great deal of detail about what is >> going on in your GPU. >> >> That you can do today, without waiting for WebGL implementations to >> provide anything. Just apply your GPU vendor profiling tools to your >> browser running your WebGL app. >> >> Benoit >> >> ------------------------------ >> >> I can?t help but agree with Florian here. Tools to benchmark GPU >> performance (especially passes/shaders) in WebGL would be invaluable to us >> at Artillery. Knowing which shader or pass is the bottleneck would save us >> a great deal of time and fiddling. >> Getting the most out of our renderer is not only crucial for visual >> fluidity but also to minimize input lag since frame drops also mean input >> lag in the current implementation. When you?re developing a high quality >> game and game engine, those two factors can make the difference between a >> playable and an unplayable game. >> >> In short, we would love to see more GPU performance tools and APIs be >> implemented. >> >> Thibaut. >> @BKcore >> @Artillery >> >> *From:* Florian B?sch >> *Sent:* ?Friday?, ?July? ?18?, ?2014 ?10?:?07? ?AM >> *To:* Benoit Jacob >> *Cc:* Daniel Koch , public webgl >> >> >> What you describe is not a good way to measure performance, at all. First >> of all, it cannot catch improvements. Second, it requires >> instrumenting/fiddling with the code in artificial load-settings, that >> might or might not have anything relevant to do with how the code performs >> under realistic settings. Lastly, it's extremely tedious. 
I, and everybody >> else is just about at the end of their rope trying to squeeze out >> performance from even moderately complex WebGL applications, and what we >> have doesn't work. If there wasn't vsync, and if there wasn't vsync >> quantization, which is how native app-developers measure performance, it'd >> be somewhat workable. But there is vsync, and there is vsync quantization, >> and hence, I'm screwed, and so is everybody else. This is not a sustainable >> situation, and it needs to change, yesterday. And the only thing I see for >> that to change is EXT_disjoint_timer_query, warts and all, because it can't >> possibly be worse than what we have now. >> >> >> On Fri, Jul 18, 2014 at 7:01 PM, Benoit Jacob wrote: >> >>> It's complicated. Measuring the speed of GPU code execution is made hard >>> by the fact that we have no control over what code runs on the GPU. The >>> OpenGL commands that we issue are only a high-level description of that. >>> The details, which are what you would measure with timer queries, are >>> opaque and implementation-dependent. >>> >>> That's basically the reason why to this day most GPU's performance >>> measuring tools are GPU-specific and only an imperfect and small minority >>> gets standardized (like these timer queries). >>> >>> For now, you can get around the vsync quantization problem ("30fps or >>> 60fps") by instead running your benchmark as measuring how much you can >>> increase the complexity of your scene without falling below 60 FPS >>> ("running with 123,000 dinosaurs at 60 FPS"). >>> >>> That will also give you more relevant performance measurements: what you >>> really want is to stay at 60 FPS. Micro vs macro benchmarks, etc. >>> >>> Benoit >>> >>> ------------------------------ >>> >>> EXT_disjoint_timer query is desperately needed. Measuring performance by way of FPS is completely unusable, as it mostly just boils down to 3 answers, 15fps, 30fps or 60fps. Programmers can't measure their WebGL programs, and hence they cannot optimize them. This is extremely bad. >>> >>> >>> >>> On Wed, Jul 16, 2014 at 3:34 PM, Florian B?sch wrote: >>> >>>> Oh, right, I thought it was. Weird. Anyways, makes it even more >>>> relevant to get it done. >>>> >>>> >>>> On Wed, Jul 16, 2014 at 3:31 PM, Daniel Koch wrote: >>>> >>>>> On 2014-07-16 7:15 AM, "Florian B?sch" wrote: >>>>> >>>>> The EXT_disjoint_timer_query extension ( >>>>> http://www.khronos.org/registry/webgl/extensions/EXT_disjoint_timer_query/) >>>>> has been moved to draft by Kenneth 5 months ago. >>>>> >>>>> Tickets for implementation of the functionality are created: >>>>> >>>>> - Chrome: https://code.google.com/p/chromium/issues/detail?id=345227 >>>>> - Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=974832 >>>>> - Webkit: https://bugs.webkit.org/show_bug.cgi?id=129090 >>>>> - Angle: https://code.google.com/p/angleproject/issues/detail?id=657 >>>>> >>>>> The extension would be very useful for a variety of performance >>>>> profiling usecases and LOD determination algorithms. >>>>> >>>>> The extension is part of OpenGL ES 3.1 core, the WebGL version of >>>>> which would probably still be a few years out. >>>>> >>>>> >>>>> This last statement is incorrect. It is NOT part of OpenGL ES 3.1. >>>>> >>>>> >>>> >>> >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fab...@ Fri Jul 18 14:21:09 2014 From: fab...@ (Fabrice Robinet) Date: Fri, 18 Jul 2014 14:21:09 -0700 Subject: [Public WebGL] EXT_disjoint_timer_query encouraged to be implemented In-Reply-To: References: <1731189349.4617282.1405702903583.JavaMail.zimbra@mozilla.com> <53c96763.e233460a.3b6b.ffff875e@mx.google.com> <372329026.4629282.1405708858376.JavaMail.zimbra@mozilla.com> Message-ID: Agreed with James suggestion, the only satisfying option for both parties is to have browser vendors and gpu vendors collaborating and making this a priority. The browser who will propose good perf tools for webgl will become the primary platform of choice for webgl developers.... Agreeing it is hard to achieve because of different implementations is one thing, telling developers they are pretty much on their own on the matter is another more questionable, especially with active members of the WG from Nvidia and AMD. Sent from my iPhone > On Jul 18, 2014, at 2:05 PM, Florian B?sch wrote: > > Vendor GPU tools are also not available for some platforms/configurations (such as often not on Linux, not on Android, not on Some revisions of OSX, not on Android and so forth). > > Vendor tools also don't solve the issue that you don't want to take just your measurements (even if you can), you also want to take measurements from your users. > > For example, suppose you've got a good machine, and you're always way down in GPU usage and hence at a rock-solid 60fps. So you conclude, everything's fine. However, let's say 80% of your users barely manage to run at 60fps, but they manage. So you'd measure their FPS, and you'd conclude, everything's fine. Then you make a tiny change, something that degrades performance by say, 20%. Suddenly 80% of your userbase runs on 30fps. The monitoring guy stampedes over to you and starts talking about judgement day. Ooops. > > >> On Fri, Jul 18, 2014 at 11:01 PM, James Darpinian wrote: >> > Just apply your GPU vendor profiling tools to your browser >> >> I would love if this were possible. Unfortunately, GPU vendor profiling tools do not work on WebGL apps running inside browsers. They are designed solely for games and fail or crash when pointed at WebGL apps running in a browser window. Browser vendors could help either by providing a special mode to simplify the rendering pipeline so that GPU vendor profiling tools don't crash, or by collaborating with GPU vendors to fix their tools. >> >> >>> On Fri, Jul 18, 2014 at 11:40 AM, Benoit Jacob wrote: >>> Not saying that timer queries are useless or that we'll never ship them in WebGL - but they are neither as portable nor as informative as you and Florian seem to think they are. They are a portable interface to measuring _something_ that happens on your GPU; but _what_ exactly that thing is, and how it is affected by other things, is GPU-dependent. >>> >>> To know in depth where time is being spent on a given system, you probably need to use the performance measurement tools provided by your GPU vendor. Those will be able to tell you a great deal of detail about what is going on in your GPU. >>> >>> That you can do today, without waiting for WebGL implementations to provide anything. Just apply your GPU vendor profiling tools to your browser running your WebGL app. >>> >>> Benoit >>> >>> I can?t help but agree with Florian here. Tools to benchmark GPU performance (especially passes/shaders) in WebGL would be invaluable to us at Artillery. 
Knowing which shader or pass is the bottleneck would save us a great deal of time and fiddling. >>> Getting the most out of our renderer is not only crucial for visual fluidity but also to minimize input lag since frame drops also mean input lag in the current implementation. When you?re developing a high quality game and game engine, those two factors can make the difference between a playable and an unplayable game. >>> >>> In short, we would love to see more GPU performance tools and APIs be implemented. >>> >>> Thibaut. >>> @BKcore >>> @Artillery >>> >>> From: Florian B?sch >>> Sent: ?Friday?, ?July? ?18?, ?2014 ?10?:?07? ?AM >>> To: Benoit Jacob >>> Cc: Daniel Koch, public webgl >>> >>> What you describe is not a good way to measure performance, at all. First of all, it cannot catch improvements. Second, it requires instrumenting/fiddling with the code in artificial load-settings, that might or might not have anything relevant to do with how the code performs under realistic settings. Lastly, it's extremely tedious. I, and everybody else is just about at the end of their rope trying to squeeze out performance from even moderately complex WebGL applications, and what we have doesn't work. If there wasn't vsync, and if there wasn't vsync quantization, which is how native app-developers measure performance, it'd be somewhat workable. But there is vsync, and there is vsync quantization, and hence, I'm screwed, and so is everybody else. This is not a sustainable situation, and it needs to change, yesterday. And the only thing I see for that to change is EXT_disjoint_timer_query, warts and all, because it can't possibly be worse than what we have now. >>> >>> >>>> On Fri, Jul 18, 2014 at 7:01 PM, Benoit Jacob wrote: >>>> It's complicated. Measuring the speed of GPU code execution is made hard by the fact that we have no control over what code runs on the GPU. The OpenGL commands that we issue are only a high-level description of that. The details, which are what you would measure with timer queries, are opaque and implementation-dependent. >>>> >>>> That's basically the reason why to this day most GPU's performance measuring tools are GPU-specific and only an imperfect and small minority gets standardized (like these timer queries). >>>> >>>> For now, you can get around the vsync quantization problem ("30fps or 60fps") by instead running your benchmark as measuring how much you can increase the complexity of your scene without falling below 60 FPS ("running with 123,000 dinosaurs at 60 FPS"). >>>> >>>> That will also give you more relevant performance measurements: what you really want is to stay at 60 FPS. Micro vs macro benchmarks, etc. >>>> >>>> Benoit >>>> >>>> EXT_disjoint_timer query is desperately needed. Measuring performance by way of FPS is completely unusable, as it mostly just boils down to 3 answers, 15fps, 30fps or 60fps. Programmers can't measure their WebGL programs, and hence they cannot optimize them. This is extremely bad. >>>> >>>> >>>>> On Wed, Jul 16, 2014 at 3:34 PM, Florian B?sch wrote: >>>>> Oh, right, I thought it was. Weird. Anyways, makes it even more relevant to get it done. >>>>> >>>>> >>>>>> On Wed, Jul 16, 2014 at 3:31 PM, Daniel Koch wrote: >>>>>> On 2014-07-16 7:15 AM, "Florian B?sch" wrote: >>>>>> >>>>>> The EXT_disjoint_timer_query extension (http://www.khronos.org/registry/webgl/extensions/EXT_disjoint_timer_query/) has been moved to draft by Kenneth 5 months ago. 
>>>>>> >>>>>> Tickets for implementation of the functionality are created: >>>>>> >>>>>> - Chrome: https://code.google.com/p/chromium/issues/detail?id=345227 >>>>>> - Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=974832 >>>>>> - Webkit: https://bugs.webkit.org/show_bug.cgi?id=129090 >>>>>> - Angle: https://code.google.com/p/angleproject/issues/detail?id=657 >>>>>> >>>>>> The extension would be very useful for a variety of performance profiling usecases and LOD determination algorithms. >>>>>> >>>>>> The extension is part of OpenGL ES 3.1 core, the WebGL version of which would probably still be a few years out. >>>>>> >>>>>> This last statement is incorrect. It is NOT part of OpenGL ES 3.1. >>>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Fri Jul 18 14:49:52 2014 From: kbr...@ (Kenneth Russell) Date: Fri, 18 Jul 2014 14:49:52 -0700 Subject: [Public WebGL] EXT_disjoint_timer_query encouraged to be implemented In-Reply-To: References: <1731189349.4617282.1405702903583.JavaMail.zimbra@mozilla.com> <53c96763.e233460a.3b6b.ffff875e@mx.google.com> <372329026.4629282.1405708858376.JavaMail.zimbra@mozilla.com> Message-ID: On Fri, Jul 18, 2014 at 2:01 PM, James Darpinian wrote: >> Just apply your GPU vendor profiling tools to your browser > > I would love if this were possible. Unfortunately, GPU vendor profiling > tools do not work on WebGL apps running inside browsers. They are designed > solely for games and fail or crash when pointed at WebGL apps running in a > browser window. Browser vendors could help either by providing a special > mode to simplify the rendering pipeline so that GPU vendor profiling tools > don't crash, or by collaborating with GPU vendors to fix their tools. Anecdotally, our group's manager Vangelis was able to get Crytek's new Renderdoc debugger to work with Chrome's GPU process. http://cryengine.com/renderdoc https://github.com/baldurk/renderdoc It's more a debugger than a profiler. Of course we would be happy to work with any GPU vendor to get their profiling tools to work with the browser, and I'm sure the same holds for any browser implementer. We've collaborated with Intel in the past and I believe their GPA tools work with Chrome. -Ken > On Fri, Jul 18, 2014 at 11:40 AM, Benoit Jacob wrote: >> >> Not saying that timer queries are useless or that we'll never ship them in >> WebGL - but they are neither as portable nor as informative as you and >> Florian seem to think they are. They are a portable interface to measuring >> _something_ that happens on your GPU; but _what_ exactly that thing is, and >> how it is affected by other things, is GPU-dependent. >> >> To know in depth where time is being spent on a given system, you probably >> need to use the performance measurement tools provided by your GPU vendor. >> Those will be able to tell you a great deal of detail about what is going on >> in your GPU. >> >> That you can do today, without waiting for WebGL implementations to >> provide anything. Just apply your GPU vendor profiling tools to your browser >> running your WebGL app. >> >> Benoit >> >> ________________________________ >> >> I can?t help but agree with Florian here. Tools to benchmark GPU >> performance (especially passes/shaders) in WebGL would be invaluable to us >> at Artillery. Knowing which shader or pass is the bottleneck would save us a >> great deal of time and fiddling. 
>> Getting the most out of our renderer is not only crucial for visual >> fluidity but also to minimize input lag since frame drops also mean input >> lag in the current implementation. When you?re developing a high quality >> game and game engine, those two factors can make the difference between a >> playable and an unplayable game. >> >> In short, we would love to see more GPU performance tools and APIs be >> implemented. >> >> Thibaut. >> @BKcore >> @Artillery >> >> From: Florian B?sch >> Sent: ?Friday?, ?July? ?18?, ?2014 ?10?:?07? ?AM >> To: Benoit Jacob >> Cc: Daniel Koch, public webgl >> >> What you describe is not a good way to measure performance, at all. First >> of all, it cannot catch improvements. Second, it requires >> instrumenting/fiddling with the code in artificial load-settings, that might >> or might not have anything relevant to do with how the code performs under >> realistic settings. Lastly, it's extremely tedious. I, and everybody else is >> just about at the end of their rope trying to squeeze out performance from >> even moderately complex WebGL applications, and what we have doesn't work. >> If there wasn't vsync, and if there wasn't vsync quantization, which is how >> native app-developers measure performance, it'd be somewhat workable. But >> there is vsync, and there is vsync quantization, and hence, I'm screwed, and >> so is everybody else. This is not a sustainable situation, and it needs to >> change, yesterday. And the only thing I see for that to change is >> EXT_disjoint_timer_query, warts and all, because it can't possibly be worse >> than what we have now. >> >> >> On Fri, Jul 18, 2014 at 7:01 PM, Benoit Jacob wrote: >>> >>> It's complicated. Measuring the speed of GPU code execution is made hard >>> by the fact that we have no control over what code runs on the GPU. The >>> OpenGL commands that we issue are only a high-level description of that. The >>> details, which are what you would measure with timer queries, are opaque and >>> implementation-dependent. >>> >>> That's basically the reason why to this day most GPU's performance >>> measuring tools are GPU-specific and only an imperfect and small minority >>> gets standardized (like these timer queries). >>> >>> For now, you can get around the vsync quantization problem ("30fps or >>> 60fps") by instead running your benchmark as measuring how much you can >>> increase the complexity of your scene without falling below 60 FPS ("running >>> with 123,000 dinosaurs at 60 FPS"). >>> >>> That will also give you more relevant performance measurements: what you >>> really want is to stay at 60 FPS. Micro vs macro benchmarks, etc. >>> >>> Benoit >>> >>> ________________________________ >>> >>> EXT_disjoint_timer query is desperately needed. Measuring performance by >>> way of FPS is completely unusable, as it mostly just boils down to 3 >>> answers, 15fps, 30fps or 60fps. Programmers can't measure their WebGL >>> programs, and hence they cannot optimize them. This is extremely bad. >>> >>> >>> >>> On Wed, Jul 16, 2014 at 3:34 PM, Florian B?sch wrote: >>>> >>>> Oh, right, I thought it was. Weird. Anyways, makes it even more relevant >>>> to get it done. >>>> >>>> >>>> On Wed, Jul 16, 2014 at 3:31 PM, Daniel Koch wrote: >>>>> >>>>> On 2014-07-16 7:15 AM, "Florian B?sch" wrote: >>>>> >>>>> The EXT_disjoint_timer_query extension >>>>> (http://www.khronos.org/registry/webgl/extensions/EXT_disjoint_timer_query/) >>>>> has been moved to draft by Kenneth 5 months ago. 
>>>>> >>>>> Tickets for implementation of the functionality are created: >>>>> >>>>> - Chrome: https://code.google.com/p/chromium/issues/detail?id=345227 >>>>> - Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=974832 >>>>> - Webkit: https://bugs.webkit.org/show_bug.cgi?id=129090 >>>>> - Angle: https://code.google.com/p/angleproject/issues/detail?id=657 >>>>> >>>>> The extension would be very useful for a variety of performance >>>>> profiling usecases and LOD determination algorithms. >>>>> >>>>> The extension is part of OpenGL ES 3.1 core, the WebGL version of which >>>>> would probably still be a few years out. >>>>> >>>>> >>>>> This last statement is incorrect. It is NOT part of OpenGL ES 3.1. >>>>> >>>> >>> >>> >> >> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From jda...@ Fri Jul 18 16:08:39 2014 From: jda...@ (James Darpinian) Date: Fri, 18 Jul 2014 16:08:39 -0700 Subject: [Public WebGL] EXT_disjoint_timer_query encouraged to be implemented In-Reply-To: References: <1731189349.4617282.1405702903583.JavaMail.zimbra@mozilla.com> <53c96763.e233460a.3b6b.ffff875e@mx.google.com> <372329026.4629282.1405708858376.JavaMail.zimbra@mozilla.com> Message-ID: I have not been able to get Intel's GPA frame analyzer to work with WebGL in recent versions of Chrome or Firefox. It would be awesome if there was some documentation about how to configure the browser and GPU profiling tools to work with each other. On Fri, Jul 18, 2014 at 2:49 PM, Kenneth Russell wrote: > On Fri, Jul 18, 2014 at 2:01 PM, James Darpinian > wrote: > >> Just apply your GPU vendor profiling tools to your browser > > > > I would love if this were possible. Unfortunately, GPU vendor profiling > > tools do not work on WebGL apps running inside browsers. They are > designed > > solely for games and fail or crash when pointed at WebGL apps running in > a > > browser window. Browser vendors could help either by providing a special > > mode to simplify the rendering pipeline so that GPU vendor profiling > tools > > don't crash, or by collaborating with GPU vendors to fix their tools. > > Anecdotally, our group's manager Vangelis was able to get Crytek's new > Renderdoc debugger to work with Chrome's GPU process. > > http://cryengine.com/renderdoc > https://github.com/baldurk/renderdoc > > It's more a debugger than a profiler. Of course we would be happy to > work with any GPU vendor to get their profiling tools to work with the > browser, and I'm sure the same holds for any browser implementer. > We've collaborated with Intel in the past and I believe their GPA > tools work with Chrome. > > -Ken > > > > On Fri, Jul 18, 2014 at 11:40 AM, Benoit Jacob > wrote: > >> > >> Not saying that timer queries are useless or that we'll never ship them > in > >> WebGL - but they are neither as portable nor as informative as you and > >> Florian seem to think they are. They are a portable interface to > measuring > >> _something_ that happens on your GPU; but _what_ exactly that thing is, > and > >> how it is affected by other things, is GPU-dependent. > >> > >> To know in depth where time is being spent on a given system, you > probably > >> need to use the performance measurement tools provided by your GPU > vendor. 
> >> Those will be able to tell you a great deal of detail about what is > going on > >> in your GPU. > >> > >> That you can do today, without waiting for WebGL implementations to > >> provide anything. Just apply your GPU vendor profiling tools to your > browser > >> running your WebGL app. > >> > >> Benoit > >> > >> ________________________________ > >> > >> I can?t help but agree with Florian here. Tools to benchmark GPU > >> performance (especially passes/shaders) in WebGL would be invaluable to > us > >> at Artillery. Knowing which shader or pass is the bottleneck would save > us a > >> great deal of time and fiddling. > >> Getting the most out of our renderer is not only crucial for visual > >> fluidity but also to minimize input lag since frame drops also mean > input > >> lag in the current implementation. When you?re developing a high quality > >> game and game engine, those two factors can make the difference between > a > >> playable and an unplayable game. > >> > >> In short, we would love to see more GPU performance tools and APIs be > >> implemented. > >> > >> Thibaut. > >> @BKcore > >> @Artillery > >> > >> From: Florian B?sch > >> Sent: ?Friday?, ?July? ?18?, ?2014 ?10?:?07? ?AM > >> To: Benoit Jacob > >> Cc: Daniel Koch, public webgl > >> > >> What you describe is not a good way to measure performance, at all. > First > >> of all, it cannot catch improvements. Second, it requires > >> instrumenting/fiddling with the code in artificial load-settings, that > might > >> or might not have anything relevant to do with how the code performs > under > >> realistic settings. Lastly, it's extremely tedious. I, and everybody > else is > >> just about at the end of their rope trying to squeeze out performance > from > >> even moderately complex WebGL applications, and what we have doesn't > work. > >> If there wasn't vsync, and if there wasn't vsync quantization, which is > how > >> native app-developers measure performance, it'd be somewhat workable. > But > >> there is vsync, and there is vsync quantization, and hence, I'm > screwed, and > >> so is everybody else. This is not a sustainable situation, and it needs > to > >> change, yesterday. And the only thing I see for that to change is > >> EXT_disjoint_timer_query, warts and all, because it can't possibly be > worse > >> than what we have now. > >> > >> > >> On Fri, Jul 18, 2014 at 7:01 PM, Benoit Jacob > wrote: > >>> > >>> It's complicated. Measuring the speed of GPU code execution is made > hard > >>> by the fact that we have no control over what code runs on the GPU. The > >>> OpenGL commands that we issue are only a high-level description of > that. The > >>> details, which are what you would measure with timer queries, are > opaque and > >>> implementation-dependent. > >>> > >>> That's basically the reason why to this day most GPU's performance > >>> measuring tools are GPU-specific and only an imperfect and small > minority > >>> gets standardized (like these timer queries). > >>> > >>> For now, you can get around the vsync quantization problem ("30fps or > >>> 60fps") by instead running your benchmark as measuring how much you can > >>> increase the complexity of your scene without falling below 60 FPS > ("running > >>> with 123,000 dinosaurs at 60 FPS"). > >>> > >>> That will also give you more relevant performance measurements: what > you > >>> really want is to stay at 60 FPS. Micro vs macro benchmarks, etc. 
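(The macro-benchmark approach described just above, growing the scene until it can no longer hold 60 FPS and reporting the largest load that did, might look something like the sketch below. It is illustrative only: 'renderScene' is a placeholder for the application's own draw function, and the 16.8 ms threshold, frame count, and growth factor are arbitrary choices.)

    function findMaxComplexity(renderScene, done) {
      var complexity = 1000;   // starting load, e.g. number of instances drawn
      var best = 0;
      function trial() {
        var frames = 0;
        var start = performance.now();
        function frame() {
          renderScene(complexity);
          if (++frames < 120) {
            requestAnimationFrame(frame);
          } else {
            var avgMs = (performance.now() - start) / frames;
            if (avgMs < 16.8) {                  // still holding roughly 60 FPS
              best = complexity;
              complexity = Math.floor(complexity * 1.5);
              trial();                           // try a heavier scene
            } else {
              done(best);                        // heaviest load that still held 60 FPS
            }
          }
        }
        requestAnimationFrame(frame);
      }
      trial();
    }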
> >>> > >>> Benoit > >>> > >>> ________________________________ > >>> > >>> EXT_disjoint_timer query is desperately needed. Measuring performance > by > >>> way of FPS is completely unusable, as it mostly just boils down to 3 > >>> answers, 15fps, 30fps or 60fps. Programmers can't measure their WebGL > >>> programs, and hence they cannot optimize them. This is extremely bad. > >>> > >>> > >>> > >>> On Wed, Jul 16, 2014 at 3:34 PM, Florian B?sch > wrote: > >>>> > >>>> Oh, right, I thought it was. Weird. Anyways, makes it even more > relevant > >>>> to get it done. > >>>> > >>>> > >>>> On Wed, Jul 16, 2014 at 3:31 PM, Daniel Koch > wrote: > >>>>> > >>>>> On 2014-07-16 7:15 AM, "Florian B?sch" wrote: > >>>>> > >>>>> The EXT_disjoint_timer_query extension > >>>>> ( > http://www.khronos.org/registry/webgl/extensions/EXT_disjoint_timer_query/ > ) > >>>>> has been moved to draft by Kenneth 5 months ago. > >>>>> > >>>>> Tickets for implementation of the functionality are created: > >>>>> > >>>>> - Chrome: https://code.google.com/p/chromium/issues/detail?id=345227 > >>>>> - Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=974832 > >>>>> - Webkit: https://bugs.webkit.org/show_bug.cgi?id=129090 > >>>>> - Angle: https://code.google.com/p/angleproject/issues/detail?id=657 > >>>>> > >>>>> The extension would be very useful for a variety of performance > >>>>> profiling usecases and LOD determination algorithms. > >>>>> > >>>>> The extension is part of OpenGL ES 3.1 core, the WebGL version of > which > >>>>> would probably still be a few years out. > >>>>> > >>>>> > >>>>> This last statement is incorrect. It is NOT part of OpenGL ES 3.1. > >>>>> > >>>> > >>> > >>> > >> > >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Fri Jul 18 22:38:42 2014 From: bja...@ (Benoit Jacob) Date: Fri, 18 Jul 2014 22:38:42 -0700 (PDT) Subject: [Public WebGL] EXT_disjoint_timer_query encouraged to be implemented In-Reply-To: References: <1731189349.4617282.1405702903583.JavaMail.zimbra@mozilla.com> <53c96763.e233460a.3b6b.ffff875e@mx.google.com> <372329026.4629282.1405708858376.JavaMail.zimbra@mozilla.com> Message-ID: <283406399.4685917.1405748322090.JavaMail.zimbra@mozilla.com> I wonder if the way that browsers are different from most applications, that is causing them to be unusual cases for GPU profilers, is that they typically don't use standard SwapBuffers functions to delimit frames - instead, typically implementing their own swap-chains passing around textures. Indeed, if a GPU profiling tool is accumulating information about the current frame and fails to understand when we finish a frame, then one can easily imagine that it would accumulate unbounded frame data... Solutions to that, if that is actually the problem, could include: 1) (long term only, outside of special OSes controlled by browser vendors) Either find a way to integrate our custom offscreen swapchains into native swapchains so that we would be using SwapBuffers like everyone else (there is work under way for that on Firefox OS, https://bugzilla.mozilla.org/show_bug.cgi?id=767484 ) 2) (possibly shorter term / more practical in general) Or standardize an arbitrary way to communicate frame boundaries to drivers other than SwapBuffers and have GPU profiling tools understand that. ( Could glDiscardFramebuffer be that thing? Would it be understood by GPU profiling tools today? Worth a try...) 
Benoit ----- Original Message ----- > I have not been able to get Intel's GPA frame analyzer to work with WebGL in > recent versions of Chrome or Firefox. It would be awesome if there was some > documentation about how to configure the browser and GPU profiling tools to > work with each other. > On Fri, Jul 18, 2014 at 2:49 PM, Kenneth Russell < kbr...@ > wrote: > > On Fri, Jul 18, 2014 at 2:01 PM, James Darpinian < jdarpinian...@ > > > wrote: > > > >> Just apply your GPU vendor profiling tools to your browser > > > > > > > > I would love if this were possible. Unfortunately, GPU vendor profiling > > > > tools do not work on WebGL apps running inside browsers. They are > > > designed > > > > solely for games and fail or crash when pointed at WebGL apps running in > > > a > > > > browser window. Browser vendors could help either by providing a special > > > > mode to simplify the rendering pipeline so that GPU vendor profiling > > > tools > > > > don't crash, or by collaborating with GPU vendors to fix their tools. > > > Anecdotally, our group's manager Vangelis was able to get Crytek's new > > > Renderdoc debugger to work with Chrome's GPU process. > > > http://cryengine.com/renderdoc > > > https://github.com/baldurk/renderdoc > > > It's more a debugger than a profiler. Of course we would be happy to > > > work with any GPU vendor to get their profiling tools to work with the > > > browser, and I'm sure the same holds for any browser implementer. > > > We've collaborated with Intel in the past and I believe their GPA > > > tools work with Chrome. > > > -Ken > > > > On Fri, Jul 18, 2014 at 11:40 AM, Benoit Jacob < bjacob...@ > > > > wrote: > > > >> > > > >> Not saying that timer queries are useless or that we'll never ship them > > >> in > > > >> WebGL - but they are neither as portable nor as informative as you and > > > >> Florian seem to think they are. They are a portable interface to > > >> measuring > > > >> _something_ that happens on your GPU; but _what_ exactly that thing is, > > >> and > > > >> how it is affected by other things, is GPU-dependent. > > > >> > > > >> To know in depth where time is being spent on a given system, you > > >> probably > > > >> need to use the performance measurement tools provided by your GPU > > >> vendor. > > > >> Those will be able to tell you a great deal of detail about what is > > >> going > > >> on > > > >> in your GPU. > > > >> > > > >> That you can do today, without waiting for WebGL implementations to > > > >> provide anything. Just apply your GPU vendor profiling tools to your > > >> browser > > > >> running your WebGL app. > > > >> > > > >> Benoit > > > >> > > > >> ________________________________ > > > >> > > > >> I can?t help but agree with Florian here. Tools to benchmark GPU > > > >> performance (especially passes/shaders) in WebGL would be invaluable to > > >> us > > > >> at Artillery. Knowing which shader or pass is the bottleneck would save > > >> us > > >> a > > > >> great deal of time and fiddling. > > > >> Getting the most out of our renderer is not only crucial for visual > > > >> fluidity but also to minimize input lag since frame drops also mean > > >> input > > > >> lag in the current implementation. When you?re developing a high quality > > > >> game and game engine, those two factors can make the difference between > > >> a > > > >> playable and an unplayable game. > > > >> > > > >> In short, we would love to see more GPU performance tools and APIs be > > > >> implemented. > > > >> > > > >> Thibaut. 
> > > >> @BKcore > > > >> @Artillery > > > >> > > > >> From: Florian B?sch > > > >> Sent: ?Friday?, ?July? ?18?, ?2014 ?10?:?07? ?AM > > > >> To: Benoit Jacob > > > >> Cc: Daniel Koch, public webgl > > > >> > > > >> What you describe is not a good way to measure performance, at all. > > >> First > > > >> of all, it cannot catch improvements. Second, it requires > > > >> instrumenting/fiddling with the code in artificial load-settings, that > > >> might > > > >> or might not have anything relevant to do with how the code performs > > >> under > > > >> realistic settings. Lastly, it's extremely tedious. I, and everybody > > >> else > > >> is > > > >> just about at the end of their rope trying to squeeze out performance > > >> from > > > >> even moderately complex WebGL applications, and what we have doesn't > > >> work. > > > >> If there wasn't vsync, and if there wasn't vsync quantization, which is > > >> how > > > >> native app-developers measure performance, it'd be somewhat workable. > > >> But > > > >> there is vsync, and there is vsync quantization, and hence, I'm screwed, > > >> and > > > >> so is everybody else. This is not a sustainable situation, and it needs > > >> to > > > >> change, yesterday. And the only thing I see for that to change is > > > >> EXT_disjoint_timer_query, warts and all, because it can't possibly be > > >> worse > > > >> than what we have now. > > > >> > > > >> > > > >> On Fri, Jul 18, 2014 at 7:01 PM, Benoit Jacob < bjacob...@ > > > >> wrote: > > > >>> > > > >>> It's complicated. Measuring the speed of GPU code execution is made > > >>> hard > > > >>> by the fact that we have no control over what code runs on the GPU. The > > > >>> OpenGL commands that we issue are only a high-level description of > > >>> that. > > >>> The > > > >>> details, which are what you would measure with timer queries, are > > >>> opaque > > >>> and > > > >>> implementation-dependent. > > > >>> > > > >>> That's basically the reason why to this day most GPU's performance > > > >>> measuring tools are GPU-specific and only an imperfect and small > > >>> minority > > > >>> gets standardized (like these timer queries). > > > >>> > > > >>> For now, you can get around the vsync quantization problem ("30fps or > > > >>> 60fps") by instead running your benchmark as measuring how much you can > > > >>> increase the complexity of your scene without falling below 60 FPS > > >>> ("running > > > >>> with 123,000 dinosaurs at 60 FPS"). > > > >>> > > > >>> That will also give you more relevant performance measurements: what > > >>> you > > > >>> really want is to stay at 60 FPS. Micro vs macro benchmarks, etc. > > > >>> > > > >>> Benoit > > > >>> > > > >>> ________________________________ > > > >>> > > > >>> EXT_disjoint_timer query is desperately needed. Measuring performance > > >>> by > > > >>> way of FPS is completely unusable, as it mostly just boils down to 3 > > > >>> answers, 15fps, 30fps or 60fps. Programmers can't measure their WebGL > > > >>> programs, and hence they cannot optimize them. This is extremely bad. > > > >>> > > > >>> > > > >>> > > > >>> On Wed, Jul 16, 2014 at 3:34 PM, Florian B?sch < pyalot...@ > > > >>> wrote: > > > >>>> > > > >>>> Oh, right, I thought it was. Weird. Anyways, makes it even more > > >>>> relevant > > > >>>> to get it done. 
> > > >>>> > > > >>>> > > > >>>> On Wed, Jul 16, 2014 at 3:31 PM, Daniel Koch < dkoch...@ > > > >>>> wrote: > > > >>>>> > > > >>>>> On 2014-07-16 7:15 AM, "Florian B?sch" < pyalot...@ > wrote: > > > >>>>> > > > >>>>> The EXT_disjoint_timer_query extension > > > >>>>> ( > > >>>>> http://www.khronos.org/registry/webgl/extensions/EXT_disjoint_timer_query/ > > >>>>> ) > > > >>>>> has been moved to draft by Kenneth 5 months ago. > > > >>>>> > > > >>>>> Tickets for implementation of the functionality are created: > > > >>>>> > > > >>>>> - Chrome: https://code.google.com/p/chromium/issues/detail?id=345227 > > > >>>>> - Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=974832 > > > >>>>> - Webkit: https://bugs.webkit.org/show_bug.cgi?id=129090 > > > >>>>> - Angle: https://code.google.com/p/angleproject/issues/detail?id=657 > > > >>>>> > > > >>>>> The extension would be very useful for a variety of performance > > > >>>>> profiling usecases and LOD determination algorithms. > > > >>>>> > > > >>>>> The extension is part of OpenGL ES 3.1 core, the WebGL version of > > >>>>> which > > > >>>>> would probably still be a few years out. > > > >>>>> > > > >>>>> > > > >>>>> This last statement is incorrect. It is NOT part of OpenGL ES 3.1. > > > >>>>> > > > >>>> > > > >>> > > > >>> > > > >> > > > >> > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Mon Jul 21 16:22:23 2014 From: kbr...@ (Kenneth Russell) Date: Mon, 21 Jul 2014 16:22:23 -0700 Subject: [Public WebGL] SIGGRAPH 2014 WebGL BOF: call for participation Message-ID: [cross-posted to webgl-dev-list] WebGL enthusiasts, Please join Khronos for the WebGL Birds of a Feather session at SIGGRAPH 2014 on Wednesday, August 13 at 4:00 PM. Detailed schedule and location at https://www.khronos.org/news/events/siggraph-vancouver-2014 . The BOF will include lightning talks and demos, updates on the WebGL specification and implementations, and Q&A and discussion. The working group would like to invite the community to show leading-edge uses of WebGL in the field. If you have a demonstration you'd like to show at the BOF, please email me directly. Looking forward to seeing you at the BOF! -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From dgl...@ Tue Jul 22 16:35:41 2014 From: dgl...@ (Dan Glastonbury) Date: Wed, 23 Jul 2014 09:35:41 +1000 Subject: [Public WebGL] WEBGL_color_buffer_float status Message-ID: <53CEF54D.1080308@mozilla.com> Is it possible to move WEBGL_color_buffer_float and EXT_color_buffer_half_float out of draft status into Community approved or Ratified? Currently WebGL has a ratified extension, OES_texture_float, that references a draft extension, WEBGL_color_buffer_float. Specifically, this language: /Implementations supporting float rendering via this extension will implicitly enable the //WEBGL_color_buffer_float //extension and follow its requirements. This ensures correct behavior when a texture with pixel type //|FLOAT|//is attached to an FBO. 
Although this feature has historically been allowed, new implementations should not implicitly support float rendering and applications should be modified to explicitly enable //WEBGL_color_buffer_float //./ Under the draft rules, we would have to expose the extension with a vendor prefix but the decision for Firefox has been to expose draft extensions via enabling a preference. This leaves us with a bit of a "chicken or egg" problem of how to handle OES_texture_float when draft extensions aren't enable. Given the specifics of these extensions, Mozilla would like to propose that they are moved out of draft. In general, we must first implement draft extensions and only after they're reasonably implemented, stable and tested should they move to community approved, but WEBGL_color_buffer_float and EXT_color_buffer_half are a special case to fix an already ratified extension. thanks - Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Tue Jul 22 16:42:49 2014 From: kbr...@ (Kenneth Russell) Date: Tue, 22 Jul 2014 16:42:49 -0700 Subject: [Public WebGL] WEBGL_color_buffer_float status In-Reply-To: <53CEF54D.1080308@mozilla.com> References: <53CEF54D.1080308@mozilla.com> Message-ID: Hi Dan, I apologize for the mess that this has caused. In an effort to allow rendering to floating-point targets early in WebGL's lifetime, enabling the OES_texture_float extension also allowed rendering to FP targets, in contradiction to the OpenGL ES 2.0 spec on which WebGL is based. Unfortunately, both spec and implementation problems were encountered while trying to implement WEBGL_color_buffer_float against OpenGL ES 2.0 (ANGLE, in particular). The issues were deep enough that we decided to abandon attempts to shoehorn this extension into WebGL 1 and focus instead on getting WebGL 2 out the door. WebGL 2, being based on OpenGL ES 3.0, will trivially allow the EXT_color_buffer_float extension to be imported from OpenGL ES 3.0 into WebGL. There was a thread about this recently but the intent is to deprecate WEBGL_color_buffer_float and never implement it. Instead EXT_color_buffer_float will be specified against WebGL 2. -Ken On Tue, Jul 22, 2014 at 4:35 PM, Dan Glastonbury wrote: > Is it possible to move WEBGL_color_buffer_float and > EXT_color_buffer_half_float out of draft status into Community approved or > Ratified? > > Currently WebGL has a ratified extension, OES_texture_float, that references > a draft extension, WEBGL_color_buffer_float. Specifically, this language: > > Implementations supporting float rendering via this extension will > implicitly enable the WEBGL_color_buffer_float extension and follow its > requirements. This ensures correct behavior when a texture with pixel type > FLOAT is attached to an FBO. Although this feature has historically been > allowed, new implementations should not implicitly support float rendering > and applications should be modified to explicitly enable > WEBGL_color_buffer_float. > > Under the draft rules, we would have to expose the extension with a vendor > prefix but the decision for Firefox has been to expose draft extensions via > enabling a preference. This leaves us with a bit of a "chicken or egg" > problem of how to handle OES_texture_float when draft extensions aren't > enable. > > Given the specifics of these extensions, Mozilla would like to propose that > they are moved out of draft. 
> > In general, we must first implement draft extensions and only after they're > reasonably implemented, stable and tested should they move to community > approved, but WEBGL_color_buffer_float and EXT_color_buffer_half are a > special case to fix an already ratified extension. > > thanks > - Dan ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From cal...@ Tue Jul 22 17:16:52 2014 From: cal...@ (Mark Callow) Date: Tue, 22 Jul 2014 17:16:52 -0700 Subject: [Public WebGL] WEBGL_color_buffer_float status In-Reply-To: References: <53CEF54D.1080308@mozilla.com> Message-ID: <53CEFEF4.2020105@artspark.co.jp> On 2014/07/22 16:42, Kenneth Russell wrote: > Hi Dan, > > Unfortunately, both spec and implementation problems were encountered > while trying to implement WEBGL_color_buffer_float against OpenGL ES > 2.0 (ANGLE, in particular). The issues were deep enough that we > decided to abandon attempts to shoehorn this extension into WebGL 1 > and focus instead on getting WebGL 2 out the door. WebGL 2, being > based on OpenGL ES 3.0, will trivially allow the > EXT_color_buffer_float extension to be imported from OpenGL ES 3.0 > into WebGL. I am happy to fix any spec issues. The only one I recall being raised is lack of an OpenGL ES 2 extension spec to base it on. But there is one: EXT_color_buffer_half_float. The only difference between this and a theoretical EXT_color_buffer_float is the set of texture and renderbuffer formats supported. The important language changes regarding clamping apply exactly the same to both half_float and float. That is why WEBGL_color_buffer_float draws on the half_float extension for this language. An ANGLE_color_color_buffer_float to enable ANGLE to expose this functionality could also draw on EXT_color_buffer_half_float and be as short as WEBGL_color_buffer_float Of course the extension can't be implemented on top of any OpenGL ES 2.0 implementation but the same is true of float rendering via OES_texture_float. As for implementation issues, since Chrome, et al, already support float rendering via OES_texture_float, it is very difficult to understand what those might be. Clearly it is can be done. The only intended differences with the broken implementations today (probably not all are broken) is that fragment shader outputs should not be clamped and neither should values set via gl.clearColor() and gl.blendColor() values. As it is, clamping makes float rendering almost useless. Regards -Mark -- ???????????????????????????????????? ???????????????????????????? ??????? ???????????????????????????????????? ????????????????????? ??. NOTE: This electronic mail message may contain confidential and privileged information from HI Corporation. If you are not the intended recipient, any disclosure, photocopying, distribution or use of the contents of the received information is prohibited. If you have received this e-mail in error, please notify the sender immediately and permanently delete this message and all related copies. -------------- next part -------------- An HTML attachment was scrubbed... 
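As an aside on the implicit behaviour discussed above: because no browser currently advertises WEBGL_color_buffer_float, applications that need float render targets generally probe for the capability directly rather than consulting the extension string. A rough sketch of that probe, using only core WebGL 1 calls plus OES_texture_float, where gl is assumed to be an existing WebGL 1 context and the 4x4 size is arbitrary:

    var hasFloatTex = gl.getExtension('OES_texture_float');
    var canRenderToFloat = false;
    if (hasFloatTex) {
      var tex = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, tex);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 4, 4, 0, gl.RGBA, gl.FLOAT, null);

      var fbo = gl.createFramebuffer();
      gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
      gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                              gl.TEXTURE_2D, tex, 0);
      // If the implementation implicitly allows float rendering, this reports
      // FRAMEBUFFER_COMPLETE even though WEBGL_color_buffer_float is absent.
      canRenderToFloat =
          gl.checkFramebufferStatus(gl.FRAMEBUFFER) === gl.FRAMEBUFFER_COMPLETE;

      gl.bindFramebuffer(gl.FRAMEBUFFER, null);
      gl.deleteFramebuffer(fbo);
      gl.deleteTexture(tex);
    }

Whether clear and blend colours are clamped, the issue Mark raises, cannot be detected this way; pinning that down is precisely what the explicit extension is meant to do.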
URL: From dgl...@ Tue Jul 22 18:16:39 2014 From: dgl...@ (Dan Glastonbury) Date: Wed, 23 Jul 2014 11:16:39 +1000 Subject: [Public WebGL] WEBGL_color_buffer_float status In-Reply-To: References: <53CEF54D.1080308@mozilla.com> Message-ID: <53CF0CF7.4050800@mozilla.com> On 23/07/2014 9:42 am, Kenneth Russell wrote: > Unfortunately, both spec and implementation problems were encountered > while trying to implement WEBGL_color_buffer_float against OpenGL ES > 2.0 (ANGLE, in particular). The issues were deep enough that we > decided to abandon attempts to shoehorn this extension into WebGL 1 > and focus instead on getting WebGL 2 out the door. WebGL 2, being > based on OpenGL ES 3.0, will trivially allow the > EXT_color_buffer_float extension to be imported from OpenGL ES 3.0 > into WebGL. Hi Ken, Does ANGLE support rendering to floating-point textures/renderbuffers at all, even if it's in a non-spec compliant way? thanks - Dan ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Tue Jul 22 19:29:55 2014 From: kbr...@ (Kenneth Russell) Date: Tue, 22 Jul 2014 19:29:55 -0700 Subject: [Public WebGL] WEBGL_color_buffer_float status In-Reply-To: <53CF0CF7.4050800@mozilla.com> References: <53CEF54D.1080308@mozilla.com> <53CF0CF7.4050800@mozilla.com> Message-ID: On Tue, Jul 22, 2014 at 6:16 PM, Dan Glastonbury wrote: > > On 23/07/2014 9:42 am, Kenneth Russell wrote: >> >> Unfortunately, both spec and implementation problems were encountered >> while trying to implement WEBGL_color_buffer_float against OpenGL ES >> 2.0 (ANGLE, in particular). The issues were deep enough that we >> decided to abandon attempts to shoehorn this extension into WebGL 1 >> and focus instead on getting WebGL 2 out the door. WebGL 2, being >> based on OpenGL ES 3.0, will trivially allow the >> EXT_color_buffer_float extension to be imported from OpenGL ES 3.0 >> into WebGL. > > Hi Ken, > > Does ANGLE support rendering to floating-point textures/renderbuffers at > all, even if it's in a non-spec compliant way? Hi Dan, Yes, it does. When the OES_texture_float extension is enabled in Chromium's WebGL implementation on Windows, it uses ANGLE's silent support of this feature. Chromium does not currently advertise the WEBGL_color_buffer_float or EXT_color_buffer_half_float extensions. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From juj...@ Wed Jul 23 10:06:44 2014 From: juj...@ (=?UTF-8?Q?Jukka_Jyl=C3=A4nki?=) Date: Wed, 23 Jul 2014 20:06:44 +0300 Subject: [Public WebGL] How to initialize a D24_UNORM or a D32_FLOAT depth buffer for the rendering context? Message-ID: The current WebGL 1 context creation params only state that with depth=true, a context with at least 16 bits of depth will be created. This is a very weak guarantee. Is there a way to request at least a D24_UNORM or a D32_FLOAT depth buffer, perhaps in the upcoming WebGL 2 specification? Jukka -------------- next part -------------- An HTML attachment was scrubbed... 
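For reference, the request Jukka describes is limited to booleans today; the WebGLContextAttributes dictionary cannot express anything about bit depth. A sketch of the most an application can currently ask for (standard API only):

    var canvas = document.createElement('canvas');
    // depth: true only guarantees "at least 16 bits"; there is no attribute
    // that can ask for a D24_UNORM or D32_FLOAT buffer specifically.
    var attribs = { depth: true, stencil: true, antialias: true };
    var gl = canvas.getContext('webgl', attribs) ||
             canvas.getContext('experimental-webgl', attribs);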
URL: From kbr...@ Wed Jul 23 10:17:26 2014 From: kbr...@ (Kenneth Russell) Date: Wed, 23 Jul 2014 10:17:26 -0700 Subject: [Public WebGL] How to initialize a D24_UNORM or a D32_FLOAT depth buffer for the rendering context? In-Reply-To: References: Message-ID: In practice, nearly every WebGL implementation already provides a 24-bit depth buffer, using the OES_packed_depth_stencil extension internally. WebGL 2 will minimally provide the new renderbuffer formats DEPTH_COMPONENT32F, DEPTH_COMPONENT24, DEPTH_COMPONENT16, DEPTH32F_STENCIL8 and DEPTH24_STENCIL8 to give the application more control over framebuffers they create. We can investigate providing more control over the canvas's depth buffer once WebGL 2 implementations are farther along. Does that satisfy your need? -Ken On Wed, Jul 23, 2014 at 10:06 AM, Jukka Jyl?nki wrote: > The current WebGL 1 context creation params only state that with depth=true, > a context with at least 16 bits of depth will be created. This is a very > weak guarantee. > > Is there a way to request at least a D24_UNORM or a D32_FLOAT depth buffer, > perhaps in the upcoming WebGL 2 specification? > > Jukka > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From juj...@ Wed Jul 23 12:01:35 2014 From: juj...@ (=?UTF-8?Q?Jukka_Jyl=C3=A4nki?=) Date: Wed, 23 Jul 2014 22:01:35 +0300 Subject: [Public WebGL] How to initialize a D24_UNORM or a D32_FLOAT depth buffer for the rendering context? In-Reply-To: References: Message-ID: Thanks, I did mean to reply to the list, that was a misclick. The two messages below were sent off the list, replying to the list to post them online. I have developed a 3D engine where I call directly to WGL, GLX, EGL and CGL to create GL contexts, and also D3D11 contexts. Sure, not all platforms offer the same features, but the depth+stencil+msaa combos are pretty basic across all and I don't think it was impossible or even that difficult to abstract over them. Do you have details of what exactly were the critical things that such an abstraction failed on? 2014-07-23 21:40 GMT+03:00 Kenneth Russell : > (Did you mean to CC: the list or not?) > > On Wed, Jul 23, 2014 at 10:31 AM, Jukka Jyl?nki wrote: > > OpenGL has a limitation that one cannot mix system-provided > > color/depth/stencil buffers with custom offscreen-created > > textures/renderbuffers in a single framebuffer. I would not like to > require > > creating custom offscreen renderbuffers to be able to do this, since it > adds > > to memory copy pressure, requiring first rendering all contents to a > > separate render target, and then blitting to screen. Even a single > > fullscreen blit per frame is a horrible performance killer on mobile. > Also, > > it would require separate code paths for native code and JS code for > > ascertaining that the application indeed does have a 24+ or 32 bit depth > > buffer available. > > > > The same observation applies for stencil buffering. Some algorithms may > > depend on that the stencil buffer has exactly 8 bits in it, e.g. with > > GL_INVERT or one of the WRAP stencil operations. I'd prefer that the > > application would explicitly be able to say "I need exact this many bits > for > > stencil buffer, or fail." and "I need exact this many bits for depth > buffer, > > or fail." 
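For framebuffers the application creates itself, the sized formats Ken lists above would allow exactly that kind of "this format or fail" request. A sketch of what it might look like, assuming WebGL 2 keeps the OpenGL ES 3.0 renderbufferStorage entry point and format names (the WebGL 2 API was not final at the time of this thread; width and height stand in for the application's render target size):

    // WebGL 2 sketch: request an exact depth format on an application FBO.
    var fbo = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);

    var depthRb = gl.createRenderbuffer();
    gl.bindRenderbuffer(gl.RENDERBUFFER, depthRb);
    gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT24, width, height);
    gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
                               gl.RENDERBUFFER, depthRb);

    // ...attach a colour target as usual, then verify:
    if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE) {
      // The exact format was refused; fall back, e.g. to DEPTH_COMPONENT16.
    }

This covers application-created framebuffers only; the canvas's default depth buffer remains whatever the implementation chooses.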
> > > > Also, I'd ideally like the same thing to apply for multisampling: "please > > give me at least this many samples, or fail." and querying "what is the > max > > samples that the impl supports?" When we get newer GLES functionality > over > > to WebGL, we start running into algorithms where users explicitly expect > > that they have a certain number of samples in the buffer they are > resolving. > > > > Right now the context creation functionality is a bit like you get > > something, but afterwards aren't even able ask what that something was > that > > you got. This is painful for mapping a native parity with Emscripten, in > > which will either have to pretend with "in practice when we tested, we > had > > this many bits in all (two) tested environments, so seems to work" but > it'd > > be important to explicitly able to say and know in code what kind of > context > > we got. > > The WebGLContextAttributes mechanism is deliberately limited. When > considering the native pixel format selection APIs in WGL, GLX, CGL > (OS X), and EGL, it is difficult to design an API that spans all of > them. One was developed in the Java binding to OpenGL (JOGL). In the > end, experience showed that it was not possible to write a > cross-platform pixel format selection algorithm. A hint was needed > from the window system about what the "preferred" pixel format was. If > one deviated from that choice, context creation was likely to fail on > at least some operating systems. > > WebGL is different because it (generally) doesn't use the window > system's provided framebuffer but creates its own FBO and attachments > internally. It would still be difficult to enumerate the supported > "pixel formats" to JavaScript and let the application choose one. We > don't want to get in the business of calling a JavaScript callback > during calls to canvas.getContext(), either. > > It should be possible to query the created context's attributes > (getContextAttributes) as well as parameters like SAMPLES and > DEPTH_BITS to know whether the created context satisfies the app's > needs. > > All of this having been said, it would be good to figure out how to > extend the context creation attributes and process for Emscripten's > needs. Could you give this some thought and add a page to > https://www.khronos.org/webgl/wiki/Other_Proposals ? > > > > 2014-07-23 20:17 GMT+03:00 Kenneth Russell : > > > >> In practice, nearly every WebGL implementation already provides a > >> 24-bit depth buffer, using the OES_packed_depth_stencil extension > >> internally. WebGL 2 will minimally provide the new renderbuffer > >> formats DEPTH_COMPONENT32F, DEPTH_COMPONENT24, DEPTH_COMPONENT16, > >> DEPTH32F_STENCIL8 and DEPTH24_STENCIL8 to give the application more > >> control over framebuffers they create. We can investigate providing > >> more control over the canvas's depth buffer once WebGL 2 > >> implementations are farther along. Does that satisfy your need? > >> > >> -Ken > >> > >> > >> On Wed, Jul 23, 2014 at 10:06 AM, Jukka Jyl?nki > wrote: > >> > The current WebGL 1 context creation params only state that with > >> > depth=true, > >> > a context with at least 16 bits of depth will be created. This is a > very > >> > weak guarantee. > >> > > >> > Is there a way to request at least a D24_UNORM or a D32_FLOAT depth > >> > buffer, > >> > perhaps in the upcoming WebGL 2 specification? > >> > > >> > Jukka > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
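The query side Ken mentions already exists in WebGL 1; nothing below is new API. A short sketch of checking what the drawing buffer actually provides, with gl assumed to be an existing context:

    var attrs = gl.getContextAttributes();   // which requests were honoured
    console.log('alpha: ' + attrs.alpha + ', stencil: ' + attrs.stencil +
                ', antialias: ' + attrs.antialias);

    // What the drawing buffer actually provides:
    console.log('DEPTH_BITS:   ' + gl.getParameter(gl.DEPTH_BITS));
    console.log('STENCIL_BITS: ' + gl.getParameter(gl.STENCIL_BITS));
    console.log('SAMPLES:      ' + gl.getParameter(gl.SAMPLES));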
URL: From kbr...@ Wed Jul 23 12:52:46 2014 From: kbr...@ (Kenneth Russell) Date: Wed, 23 Jul 2014 12:52:46 -0700 Subject: [Public WebGL] How to initialize a D24_UNORM or a D32_FLOAT depth buffer for the rendering context? In-Reply-To: References: Message-ID: On Wed, Jul 23, 2014 at 12:01 PM, Jukka Jyl?nki wrote: > Thanks, I did mean to reply to the list, that was a misclick. The two > messages below were sent off the list, replying to the list to post them > online. > > I have developed a 3D engine where I call directly to WGL, GLX, EGL and CGL > to create GL contexts, and also D3D11 contexts. Sure, not all platforms > offer the same features, but the depth+stencil+msaa combos are pretty basic > across all and I don't think it was impossible or even that difficult to > abstract over them. Do you have details of what exactly were the critical > things that such an abstraction failed on? Yes. The ordering of window creation and pixel format selection, and their interaction with OpenGL context creation, are different on the different window systems. GLX and X11 require OpenGL pixel format selection to be performed before the window is created, in order to select its visual. WGL requires the pixel format to be chosen and set against an already-created window before creating an OpenGL context. CGL/NSOpenGLContext require the pixel format to be passed in when the context is created. GLX and WGL allow enumeration of the supported pixel formats. CGL doesn't in practice (there are too many "supported" formats for the different CGL renderers available per display). EGL behaves mostly like GLX as I recall. Take a look at the Javadoc for JOGL's GLDrawableFactory [1] and related classes [2, 3]. It attempted to expose the pixel format selection algorithm to cross-platform code. While working on this library I spent quite some time attempting to write a portable pixel format selection algorithm. In the end the only approach that worked reliably was to delegate the visual/pixel format selection to the window system. Maybe you'll have more success. Based on this past experience I think the best approach for the web platform is to keep things simple. The browser should continue to be responsible for selecting the back buffer's format based on user input, not call the user back and let them choose from one of many supported formats. In order to guarantee behavior among browsers, some WebGL context creation attributes are "mandatory". For portability, WebGL also provides guarantees about supported framebuffer attachment combinations [4]. If the context creation attributes are to be expanded in, say, WebGL 2, there should continue to be testable guarantees about supported formats. -Ken [1] http://jogamp.org/deployment/jogamp-next/javadoc/jogl/javadoc/javax/media/opengl/GLDrawableFactory.html [2] http://jogamp.org/deployment/jogamp-next/javadoc/jogl/javadoc/javax/media/nativewindow/CapabilitiesChooser.html [3] http://jogamp.org/deployment/jogamp-next/javadoc/jogl/javadoc/javax/media/opengl/DefaultGLCapabilitiesChooser.html [4] http://www.khronos.org/registry/webgl/specs/latest/1.0/#6.6 -Ken > 2014-07-23 21:40 GMT+03:00 Kenneth Russell : > >> (Did you mean to CC: the list or not?) >> >> On Wed, Jul 23, 2014 at 10:31 AM, Jukka Jyl?nki wrote: >> > OpenGL has a limitation that one cannot mix system-provided >> > color/depth/stencil buffers with custom offscreen-created >> > textures/renderbuffers in a single framebuffer. 
I would not like to >> > require >> > creating custom offscreen renderbuffers to be able to do this, since it >> > adds >> > to memory copy pressure, requiring first rendering all contents to a >> > separate render target, and then blitting to screen. Even a single >> > fullscreen blit per frame is a horrible performance killer on mobile. >> > Also, >> > it would require separate code paths for native code and JS code for >> > ascertaining that the application indeed does have a 24+ or 32 bit depth >> > buffer available. >> > >> > The same observation applies for stencil buffering. Some algorithms may >> > depend on that the stencil buffer has exactly 8 bits in it, e.g. with >> > GL_INVERT or one of the WRAP stencil operations. I'd prefer that the >> > application would explicitly be able to say "I need exact this many bits >> > for >> > stencil buffer, or fail." and "I need exact this many bits for depth >> > buffer, >> > or fail." >> > >> > Also, I'd ideally like the same thing to apply for multisampling: >> > "please >> > give me at least this many samples, or fail." and querying "what is the >> > max >> > samples that the impl supports?" When we get newer GLES functionality >> > over >> > to WebGL, we start running into algorithms where users explicitly expect >> > that they have a certain number of samples in the buffer they are >> > resolving. >> > >> > Right now the context creation functionality is a bit like you get >> > something, but afterwards aren't even able ask what that something was >> > that >> > you got. This is painful for mapping a native parity with Emscripten, in >> > which will either have to pretend with "in practice when we tested, we >> > had >> > this many bits in all (two) tested environments, so seems to work" but >> > it'd >> > be important to explicitly able to say and know in code what kind of >> > context >> > we got. >> >> The WebGLContextAttributes mechanism is deliberately limited. When >> considering the native pixel format selection APIs in WGL, GLX, CGL >> (OS X), and EGL, it is difficult to design an API that spans all of >> them. One was developed in the Java binding to OpenGL (JOGL). In the >> end, experience showed that it was not possible to write a >> cross-platform pixel format selection algorithm. A hint was needed >> from the window system about what the "preferred" pixel format was. If >> one deviated from that choice, context creation was likely to fail on >> at least some operating systems. >> >> WebGL is different because it (generally) doesn't use the window >> system's provided framebuffer but creates its own FBO and attachments >> internally. It would still be difficult to enumerate the supported >> "pixel formats" to JavaScript and let the application choose one. We >> don't want to get in the business of calling a JavaScript callback >> during calls to canvas.getContext(), either. >> >> It should be possible to query the created context's attributes >> (getContextAttributes) as well as parameters like SAMPLES and >> DEPTH_BITS to know whether the created context satisfies the app's >> needs. >> >> All of this having been said, it would be good to figure out how to >> extend the context creation attributes and process for Emscripten's >> needs. Could you give this some thought and add a page to >> https://www.khronos.org/webgl/wiki/Other_Proposals ? 
>> >> >> > 2014-07-23 20:17 GMT+03:00 Kenneth Russell : >> > >> >> In practice, nearly every WebGL implementation already provides a >> >> 24-bit depth buffer, using the OES_packed_depth_stencil extension >> >> internally. WebGL 2 will minimally provide the new renderbuffer >> >> formats DEPTH_COMPONENT32F, DEPTH_COMPONENT24, DEPTH_COMPONENT16, >> >> DEPTH32F_STENCIL8 and DEPTH24_STENCIL8 to give the application more >> >> control over framebuffers they create. We can investigate providing >> >> more control over the canvas's depth buffer once WebGL 2 >> >> implementations are farther along. Does that satisfy your need? >> >> >> >> -Ken >> >> >> >> >> >> On Wed, Jul 23, 2014 at 10:06 AM, Jukka Jyl?nki >> >> wrote: >> >> > The current WebGL 1 context creation params only state that with >> >> > depth=true, >> >> > a context with at least 16 bits of depth will be created. This is a >> >> > very >> >> > weak guarantee. >> >> > >> >> > Is there a way to request at least a D24_UNORM or a D32_FLOAT depth >> >> > buffer, >> >> > perhaps in the upcoming WebGL 2 specification? >> >> > >> >> > Jukka >> >> > >> > >> > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From juj...@ Thu Jul 24 03:44:06 2014 From: juj...@ (=?UTF-8?Q?Jukka_Jyl=C3=A4nki?=) Date: Thu, 24 Jul 2014 13:44:06 +0300 Subject: [Public WebGL] How to initialize a D24_UNORM or a D32_FLOAT depth buffer for the rendering context? In-Reply-To: References: Message-ID: Thanks for the detailed reply. Are you implying that the rendering window is created far before ahead of time before the JavaScript code is executed that creates the GL context? Is it not possible to create the window at the time when the GL context is created? E.g. on Windows, I believe it's possible to create a HWND as a child control of the main browser window at any desired time and render to that? Just double-checking, are current implementations required to report DEPTH_BITS, STENCIL_BITS and SAMPLES from http://www.khronos.org/registry/webgl/specs/latest/1.0/#5.14.3 correctly according to what they actually gave to the user? Or is there something WebGL-specific allowed there? It sounds like the specification is driven by existing implementation and not the other way around. I'm not sure what to propose here that would have support. Ideally I'd like to be able to give context creation parameters of form { depthBits: 0/16/24/32, alphaBits: 0/8, stencilBits: 0/8, samples: 1/2/4/.../maxSupportedSamples() }. I'd be ok if for some platforms depthBits/samples was substituted with something else that was available, as long as I'm able to query when I get the context what that was. 2014-07-23 22:52 GMT+03:00 Kenneth Russell : > On Wed, Jul 23, 2014 at 12:01 PM, Jukka Jyl?nki wrote: > > Thanks, I did mean to reply to the list, that was a misclick. The two > > messages below were sent off the list, replying to the list to post them > > online. > > > > I have developed a 3D engine where I call directly to WGL, GLX, EGL and > CGL > > to create GL contexts, and also D3D11 contexts. Sure, not all platforms > > offer the same features, but the depth+stencil+msaa combos are pretty > basic > > across all and I don't think it was impossible or even that difficult to > > abstract over them. 
Do you have details of what exactly were the critical > > things that such an abstraction failed on? > > Yes. The ordering of window creation and pixel format selection, and > their interaction with OpenGL context creation, are different on the > different window systems. GLX and X11 require OpenGL pixel format > selection to be performed before the window is created, in order to > select its visual. WGL requires the pixel format to be chosen and set > against an already-created window before creating an OpenGL context. > CGL/NSOpenGLContext require the pixel format to be passed in when the > context is created. GLX and WGL allow enumeration of the supported > pixel formats. CGL doesn't in practice (there are too many "supported" > formats for the different CGL renderers available per display). EGL > behaves mostly like GLX as I recall. Take a look at the Javadoc for > JOGL's GLDrawableFactory [1] and related classes [2, 3]. It attempted > to expose the pixel format selection algorithm to cross-platform code. > While working on this library I spent quite some time attempting to > write a portable pixel format selection algorithm. In the end the only > approach that worked reliably was to delegate the visual/pixel format > selection to the window system. Maybe you'll have more success. > > Based on this past experience I think the best approach for the web > platform is to keep things simple. The browser should continue to be > responsible for selecting the back buffer's format based on user > input, not call the user back and let them choose from one of many > supported formats. In order to guarantee behavior among browsers, some > WebGL context creation attributes are "mandatory". For portability, > WebGL also provides guarantees about supported framebuffer attachment > combinations [4]. If the context creation attributes are to be > expanded in, say, WebGL 2, there should continue to be testable > guarantees about supported formats. > > -Ken > > > [1] > http://jogamp.org/deployment/jogamp-next/javadoc/jogl/javadoc/javax/media/opengl/GLDrawableFactory.html > [2] > http://jogamp.org/deployment/jogamp-next/javadoc/jogl/javadoc/javax/media/nativewindow/CapabilitiesChooser.html > [3] > http://jogamp.org/deployment/jogamp-next/javadoc/jogl/javadoc/javax/media/opengl/DefaultGLCapabilitiesChooser.html > [4] http://www.khronos.org/registry/webgl/specs/latest/1.0/#6.6 > > -Ken > > > > 2014-07-23 21:40 GMT+03:00 Kenneth Russell : > > > >> (Did you mean to CC: the list or not?) > >> > >> On Wed, Jul 23, 2014 at 10:31 AM, Jukka Jyl?nki > wrote: > >> > OpenGL has a limitation that one cannot mix system-provided > >> > color/depth/stencil buffers with custom offscreen-created > >> > textures/renderbuffers in a single framebuffer. I would not like to > >> > require > >> > creating custom offscreen renderbuffers to be able to do this, since > it > >> > adds > >> > to memory copy pressure, requiring first rendering all contents to a > >> > separate render target, and then blitting to screen. Even a single > >> > fullscreen blit per frame is a horrible performance killer on mobile. > >> > Also, > >> > it would require separate code paths for native code and JS code for > >> > ascertaining that the application indeed does have a 24+ or 32 bit > depth > >> > buffer available. > >> > > >> > The same observation applies for stencil buffering. Some algorithms > may > >> > depend on that the stencil buffer has exactly 8 bits in it, e.g. with > >> > GL_INVERT or one of the WRAP stencil operations. 
I'd prefer that the > >> > application would explicitly be able to say "I need exact this many > bits > >> > for > >> > stencil buffer, or fail." and "I need exact this many bits for depth > >> > buffer, > >> > or fail." > >> > > >> > Also, I'd ideally like the same thing to apply for multisampling: > >> > "please > >> > give me at least this many samples, or fail." and querying "what is > the > >> > max > >> > samples that the impl supports?" When we get newer GLES functionality > >> > over > >> > to WebGL, we start running into algorithms where users explicitly > expect > >> > that they have a certain number of samples in the buffer they are > >> > resolving. > >> > > >> > Right now the context creation functionality is a bit like you get > >> > something, but afterwards aren't even able ask what that something was > >> > that > >> > you got. This is painful for mapping a native parity with Emscripten, > in > >> > which will either have to pretend with "in practice when we tested, we > >> > had > >> > this many bits in all (two) tested environments, so seems to work" but > >> > it'd > >> > be important to explicitly able to say and know in code what kind of > >> > context > >> > we got. > >> > >> The WebGLContextAttributes mechanism is deliberately limited. When > >> considering the native pixel format selection APIs in WGL, GLX, CGL > >> (OS X), and EGL, it is difficult to design an API that spans all of > >> them. One was developed in the Java binding to OpenGL (JOGL). In the > >> end, experience showed that it was not possible to write a > >> cross-platform pixel format selection algorithm. A hint was needed > >> from the window system about what the "preferred" pixel format was. If > >> one deviated from that choice, context creation was likely to fail on > >> at least some operating systems. > >> > >> WebGL is different because it (generally) doesn't use the window > >> system's provided framebuffer but creates its own FBO and attachments > >> internally. It would still be difficult to enumerate the supported > >> "pixel formats" to JavaScript and let the application choose one. We > >> don't want to get in the business of calling a JavaScript callback > >> during calls to canvas.getContext(), either. > >> > >> It should be possible to query the created context's attributes > >> (getContextAttributes) as well as parameters like SAMPLES and > >> DEPTH_BITS to know whether the created context satisfies the app's > >> needs. > >> > >> All of this having been said, it would be good to figure out how to > >> extend the context creation attributes and process for Emscripten's > >> needs. Could you give this some thought and add a page to > >> https://www.khronos.org/webgl/wiki/Other_Proposals ? > >> > >> > >> > 2014-07-23 20:17 GMT+03:00 Kenneth Russell : > >> > > >> >> In practice, nearly every WebGL implementation already provides a > >> >> 24-bit depth buffer, using the OES_packed_depth_stencil extension > >> >> internally. WebGL 2 will minimally provide the new renderbuffer > >> >> formats DEPTH_COMPONENT32F, DEPTH_COMPONENT24, DEPTH_COMPONENT16, > >> >> DEPTH32F_STENCIL8 and DEPTH24_STENCIL8 to give the application more > >> >> control over framebuffers they create. We can investigate providing > >> >> more control over the canvas's depth buffer once WebGL 2 > >> >> implementations are farther along. Does that satisfy your need? 
> >> >> > >> >> -Ken > >> >> > >> >> > >> >> On Wed, Jul 23, 2014 at 10:06 AM, Jukka Jyl?nki > >> >> wrote: > >> >> > The current WebGL 1 context creation params only state that with > >> >> > depth=true, > >> >> > a context with at least 16 bits of depth will be created. This is a > >> >> > very > >> >> > weak guarantee. > >> >> > > >> >> > Is there a way to request at least a D24_UNORM or a D32_FLOAT depth > >> >> > buffer, > >> >> > perhaps in the upcoming WebGL 2 specification? > >> >> > > >> >> > Jukka > >> >> > > >> > > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tib...@ Tue Jul 29 14:38:43 2014 From: tib...@ (Tibor Ouden, den) Date: Tue, 29 Jul 2014 23:38:43 +0200 Subject: [Public WebGL] Location of test in conformance suite Message-ID: I would like to add a test to the conformance suite for this issue : https://code.google.com/p/angleproject/issues/detail?id=706 What is the best location under sdk/tests/conformance ? I see a glsl/bugs but this issue is not a glsl bug or is it ? There is no bug dir under conformance. Other locations ? Tibor den Ouden -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Tue Jul 29 16:27:41 2014 From: kbr...@ (Kenneth Russell) Date: Tue, 29 Jul 2014 16:27:41 -0700 Subject: [Public WebGL] Location of test in conformance suite In-Reply-To: References: Message-ID: I think glsl/bugs is the best place. It's a bug related to GLSL compilation. -Ken On Tue, Jul 29, 2014 at 2:38 PM, Tibor Ouden, den wrote: > I would like to add a test to the conformance suite for this issue : > https://code.google.com/p/angleproject/issues/detail?id=706 > > What is the best location under sdk/tests/conformance ? > > I see a glsl/bugs > but this issue is not a glsl bug or is it ? > There is no bug dir under conformance. > > Other locations ? > > Tibor den Ouden > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl -----------------------------------------------------------
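For readers wondering what such a test involves: stripped of the conformance suite's shared harness (which the real tests under sdk/tests/conformance use), a GLSL compilation regression test boils down to compiling the problematic shader and checking the result. The shader source below is only a stand-in; the actual shader that triggers ANGLE issue 706 is not reproduced here, and gl is assumed to be an existing WebGL context:

    // Bare-API shape of a GLSL compilation check; the real conformance tests
    // wrap this in the suite's js-test helpers rather than console logging.
    var fragSrc = [
      'precision mediump float;',
      'void main() {',
      '  gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);',  // placeholder shader body
      '}'
    ].join('\n');

    var shader = gl.createShader(gl.FRAGMENT_SHADER);
    gl.shaderSource(shader, fragSrc);
    gl.compileShader(shader);
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
      console.log('compile failed: ' + gl.getShaderInfoLog(shader));
    }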