From pya...@ Mon Jan 2 08:22:17 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Mon, 2 Jan 2017 17:22:17 +0100 Subject: [Public WebGL] Proposal for the WEBGL_debug extension In-Reply-To: References: <20150918143018.GA11808@localhost.cbg.collabora.co.uk> <20151001171856.GA4333@localhost.cbg.collabora.co.uk> <20151001181937.GA5572@localhost.cbg.collabora.co.uk> <20151002111439.GA4324@localhost.cbg.collabora.co.uk> <20151005103748.GA1372@localhost.cbg.collabora.co.uk> <20151005105948.GA1828@localhost.cbg.collabora.co.uk> Message-ID: What is the status of the WEBGL_debug extension? Its last update has been in 2015. - Are there specification changes required to make it D3D compatible? - Have any vendors taken a stab at implementing it? -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Mon Jan 2 08:48:17 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Mon, 2 Jan 2017 17:48:17 +0100 Subject: [Public WebGL] WEBGL_get_buffer_sub_data_async Message-ID: This extension https://www.khronos.org/registry/webgl/extensions/WEBGL_get_buffer_sub_data_async/ has been introduced and elevated to draft without any public discussion. In a nutshell it proposes a new WebGL2 function called getBufferSubDataAsync which returns a promise that will be called eventually with the buffer data. I think there are several problems: 1. The extension process states that "*Extensions move through four states during their development: proposed, draft, community approved, and Khronos ratified**"*. This extension never moved through the proposal stage. 2. The extension introduces promises to the WebGL API. This requires a more fundamental discussion. 3. A discussion if this extension is required if WebWorkers can access the same context as the main thread has not happened. This extension should be in proposal status, and the necessary discussions should happen first. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Mon Jan 2 08:51:10 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Mon, 2 Jan 2017 17:51:10 +0100 Subject: [Public WebGL] EXT_clip_cull_distance Message-ID: Status of: https://www.khronos.org/registry/webgl/extensions/proposals/EXT_clip_cull_distance/ ? 1. No change has occurred on this extension since August 2016, is the specification finalized? 2. Can it be elevated to draft? -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Mon Jan 2 08:52:59 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Mon, 2 Jan 2017 17:52:59 +0100 Subject: [Public WebGL] EXT_float_blend Message-ID: No Change has occured to https://www.khronos.org/registry/webgl/extensions/proposals/EXT_float_blend/ since April 2015 Can this extension be elevated to draft? -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Mon Jan 2 08:54:16 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Mon, 2 Jan 2017 17:54:16 +0100 Subject: [Public WebGL] EXT_texture_storage Message-ID: No change has occured on https://www.khronos.org/registry/webgl/extensions/proposals/EXT_texture_storage/ since September 2015 Can this extension be elevated to draft? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pya...@ Mon Jan 2 08:57:55 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Mon, 2 Jan 2017 17:57:55 +0100 Subject: [Public WebGL] OES_EGL_image_external Extension proposal In-Reply-To: References: Message-ID: What is the status of the https://www.khronos.org/registry/webgl/extensions/proposals/OES_EGL_image_external/ extension? Is work on the spec still going on or is this the final version that takes into account dynamic_texture concerns? On Wed, Oct 12, 2016 at 3:59 AM, Byungseon Shin wrote: > Hi Ken, > > Thanks for the feedback. > Of course, we are willing to devote time and eager to enhance WebGL video > rendering performance. > We prefer to use our proposal as a base but we could revise > dynamic_texture as well. > > Kind regards, > Byungseon Shin > > > On Wed, Oct 12, 2016 at 8:48 AM Kenneth Russell wrote: > >> Hi Byungseon, >> >> Thanks for putting together this extension proposal. >> >> In conversations with groups within Google that are trying to do 360 >> degree video rendering in WebGL, they need more information than your >> extension would provide; specifically, they would need the exact timestamp >> of the frame currently being rendered, as well as other per-frame metadata >> like the current width and height of the texture (for variable bitrate >> video streams). >> >> Mark Callow's WEBGL_dynamic_texture extension proposal >> https://www.khronos.org/registry/webgl/extensions/ >> proposals/WEBGL_dynamic_texture/ provides the controls needed. However, >> per working group discussions, the current extension's a little too >> complicated. I think if that extension were simplified a little bit that it >> would provide all of the performance benefits yours offers, and the >> controls that are known to be necessary. >> >> Do you think you'd be willing to devote some time to either extending >> your proposal (I can provide specific feedback) or editing down Mark's >> proposal to handle these use cases? I'd prefer to edit down >> WEBGL_dynamic_texture, if you're willing. >> >> Thanks, >> >> -Ken >> >> >> >> On Tue, Oct 11, 2016 at 8:27 AM, Byungseon Shin >> wrote: >> >> Hi Corentin, >> >> On Tue, Oct 11, 2016 at 11:16 PM Corentin Wallez >> wrote: >> >> The problem with using an EGL_image type of extension is that the texture >> data store becomes shared between the new texture and the object it was >> created from. This is great when the application controls both sides and >> can ensure read and writes are probably synchronized, but in the case of >> video here, one side would write without the application being able to >> control synchronization. >> >> The standard way to do this type of video / texture binding in EGL is to >> use EGL_KHR_stream >> which >> is a much more complex and might not be an improvement over what developers >> can do with the current APIs. >> >> >> As in proposal revision#3 and as Florian already mentioned, >> Current proposal support "bind once and update implicitly" concept : " >> ext.EGLImageTargetTexture2DOES(ext.TEXTURE_EXTERNAL_OES, videoElement); >> that the texture is "bound" to the video and no further calls need to be >> emitted (as would presently be the case)" >> >> In a that sense, we could abstract the details implementation into >> MediaPlayer Side like synchronizing issues. >> >> Even EGL_KHR_stream provides more details synchronization issues but >> still lot's of GPU driver and Video Decoder need to support the latest >> specification. 
>> But OES_EGL_image_external extension is already supported by most of the >> GPU drivers and Video decoders. >> >> And one of the performance bottleneck is to converting EGLImage input to >> TEXTURE_2D to handled by WebGL application. >> >> >> On Tue, Oct 11, 2016 at 6:54 AM, Byungseon Shin >> wrote: >> >> Hi Florian, Maksims, >> >> Thanks for the feedback. >> >> I have updated proposal to explain how application works. >> Please see the attached updated proposal. >> >> To render an update frame, we need to call ext.EGLImageTargetTexture2DOES >> to bind newly generated texture buffer from video decoder. >> >> By supporting OES_EGL_image_external extension, application can use >> direct texture when video driver provides EGLImage like OpenGL ES natvie >> applications. >> >> Our proposal is just focusing on extending format of texture compatible >> with OpenGL ES extension and does not conflict with previous proposals. >> >> So, we provides a simplest way to adopt OES_EGL_image_external extension. >> >> By using references implementations of browser, WebGL renders video more >> than 10 times faster than TEXTURE_2D format with Full HD resolution output. >> >> Kind regards, >> Byungseon Shin >> >> On Tue, Oct 11, 2016 at 4:51 PM Maksims Mihejevs >> wrote: >> >> Worth mentioning that in web there are multiple sources that have >> independent redraw mechanics, that includes video as well as canvas >> elements. It might make sense to have single extension for providing direct >> access to those sources without need to re-upload it. >> >> I have assumptions that internally in browsers, both video and canvas >> have their buffers, so it might lead to very similar implementation as from >> internal side as well as from webgl api side. >> >> >> On 11 Oct 2016 12:12 p.m., "Florian B?sch" wrote: >> >> Am I correct in assuming that once you've called ext.EGLImageTargetTextu >> re2DOES(ext.TEXTURE_EXTERNAL_OES, videoElement); that the texture is >> "bound" to the video and no further calls need to be emitted (as would >> presently be the case) for it to stay current? >> >> If that's the case, I think it's a good idea, it takes some work off web >> application programmers to make sure the video is updated, and it should >> also in all cases avoid a texture download/reupload. >> >> There are some other extensions that have attempted to deal with this >> problem. >> >> - https://www.khronos.org/registry/webgl/extensions/ >> rejected/WEBGL_texture_from_depth_video/ >> >> : this extension is rejected because it introduced a complex behavioral >> changes/semantics whose benefits where unclear and the champions of this >> extension stopped responding to questions and didn't offer any improvements >> on the extension specification. >> - https://www.khronos.org/registry/webgl/extensions/ >> proposals/WEBGL_dynamic_texture/ >> >> : this extension is currently in a proposal state and it covers similar >> functionality as EGL_image_external. Where it fundamentally differs is that >> it also deals with format conversions (YUV 442 to rgb etc.) and accurate >> timing for presentation of a video frame. >> >> Perhaps you could elaborate a little why OES_EGL_image_external would be >> preferrable to WEBGL_dynamic_texture, and how the issues that >> EGL_image_external does not tackle (timing, format conversion, other >> things) would be handled by the web application programmer? >> >> >> >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
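For reference, the binding pattern under discussion would look roughly like the sketch below from application code. The EGLImageTargetTexture2DOES call and the TEXTURE_EXTERNAL_OES target are taken from the proposal text quoted above; the GLSL side follows the native OES_EGL_image_external extension (samplerExternalOES); the extension string passed to getExtension is an assumption, since the WebGL proposal may still change.

    // Assumed extension string; only the call itself appears in the proposal text.
    var ext = gl.getExtension('OES_EGL_image_external');
    var videoElement = document.querySelector('video');

    var tex = gl.createTexture();
    gl.bindTexture(ext.TEXTURE_EXTERNAL_OES, tex);
    // "Bind once": after this call the texture is expected to track the video's
    // current frame implicitly, with no per-frame re-upload.
    ext.EGLImageTargetTexture2DOES(ext.TEXTURE_EXTERNAL_OES, videoElement);

    // Fragment shader sampling the external texture, as in the native ES extension:
    var fragmentSrc =
        '#extension GL_OES_EGL_image_external : require\n' +
        'precision mediump float;\n' +
        'uniform samplerExternalOES uVideo;\n' +
        'varying vec2 vUV;\n' +
        'void main() { gl_FragColor = texture2D(uVideo, vUV); }\n';
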
URL: From pya...@ Mon Jan 2 09:00:20 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Mon, 2 Jan 2017 18:00:20 +0100 Subject: [Public WebGL] WEBGL_multiview discussion Message-ID: This extension https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_multiview/ has been posted in December 2016 by Olli Etuaho and no discussion has occured on it so far. I'll suggest everybody take a look at it and hope we can move this to Draft as soon as possible. -------------- next part -------------- An HTML attachment was scrubbed... URL: From oet...@ Mon Jan 2 09:14:13 2017 From: oet...@ (Olli Etuaho) Date: Mon, 2 Jan 2017 17:14:13 +0000 Subject: [Public WebGL] RE: WEBGL_multiview discussion In-Reply-To: References: Message-ID: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> There?s been some discussion on GitHub, though the extension hasn?t been publicized much on the mailing list: https://github.com/KhronosGroup/WebGL/issues/2167 https://github.com/KhronosGroup/WebGL/pull/2176 More feedback is welcome. Some open questions that are currently up for consideration are: 1. Why not just enable WEBGL_multiview extension automatically when a stereo canvas is requested? Support for stereo canvas requires core spec changes either way. The biggest hurdle for this I see is that WEBGL_multiview requires layout qualifiers, which do not exist in core WebGL 1.0 shaders, so it would be a bad fit for WebGL 1.0. 2. Should drawing commands that don?t use multiview shaders be compatible with choosing both default framebuffer draw buffers with gl.BACK? This could be done similarly for clears as well. Prototyping the extension is also in progress, with some support already added in ANGLE ? the spec is only a small part of the overall work, and writing the implementations is expected to take more time. Prototyping the extension will also inform further changes to the spec. Cheers, Olli From: Florian B?sch [mailto:pyalot...@] Sent: maanantaina 2. tammikuuta 2017 17.00 To: public webgl ; Olli Etuaho Subject: WEBGL_multiview discussion This extension https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_multiview/ has been posted in December 2016 by Olli Etuaho and no discussion has occured on it so far. I'll suggest everybody take a look at it and hope we can move this to Draft as soon as possible. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Mon Jan 2 12:49:55 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Mon, 2 Jan 2017 21:49:55 +0100 Subject: [Public WebGL] Re: WEBGL_get_buffer_sub_data_async In-Reply-To: References: Message-ID: Upon thinking about this extension, I don't think it should exist at all. Ideally the mapBuffer semantic would be exposed. But even if it isn't, it shall not be that an extension is required to express functionality already found in the core functionality of the underlying ES specification. Furthermore, getBufferSubDataAsync does not adequately express the reality of map/flush/unmap, and hides the fact that unmap/flush are still synchronizing calls happening. However getBufferSubDataAsync obstructs appropriate code dealing with proper insertion of synchronization points. In addition, it would lead to allocating promises once or many times per frame, and since tracking would be required in some instances, would also lead to allocating a closure once or many times a call. An issue that map buffer range does not exhibit. 
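To make the difference concrete, here is a rough sketch of the two patterns, assuming the draft's getBufferSubDataAsync simply mirrors getBufferSubData but returns a promise; the fence-based alternative below uses only core WebGL 2 calls and leaves the choice of when to synchronize to the application (gl is a WebGL 2 context, w and h are the readback dimensions):

    // Read the framebuffer into a pixel pack buffer instead of client memory.
    var pbo = gl.createBuffer();
    gl.bindBuffer(gl.PIXEL_PACK_BUFFER, pbo);
    gl.bufferData(gl.PIXEL_PACK_BUFFER, w * h * 4, gl.STREAM_READ);
    gl.readPixels(0, 0, w, h, gl.RGBA, gl.UNSIGNED_BYTE, 0); // copies into the PBO

    // Draft extension (assumed signature): one promise allocated per readback.
    ext.getBufferSubDataAsync(gl.PIXEL_PACK_BUFFER, 0, new Uint8Array(w * h * 4))
        .then(function (pixels) { /* use pixels */ });

    // Fence-based alternative with core WebGL 2 only: insert a sync object,
    // poll it without blocking, and copy the buffer back once it has signaled.
    var sync = gl.fenceSync(gl.SYNC_GPU_COMMANDS_COMPLETE, 0);
    gl.flush();
    var pixels = new Uint8Array(w * h * 4);
    function poll() {
        var status = gl.clientWaitSync(sync, 0, 0); // non-blocking poll
        if (status === gl.ALREADY_SIGNALED || status === gl.CONDITION_SATISFIED) {
            gl.deleteSync(sync);
            gl.bindBuffer(gl.PIXEL_PACK_BUFFER, pbo);
            gl.getBufferSubData(gl.PIXEL_PACK_BUFFER, 0, pixels); // cheap copy now
        } else {
            requestAnimationFrame(poll); // check again next frame
        }
    }
    requestAnimationFrame(poll);
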
Due to the lack of discussion of this feature, I believe a great disservice is done to WebGL 2 by the introduction of these ideas/APIs and I strongly suggest to withdraw this from draft immediately and go back to the drawing board. On Mon, Jan 2, 2017 at 5:48 PM, Florian B?sch wrote: > This extension https://www.khronos.org/registry/webgl/extensions/ > WEBGL_get_buffer_sub_data_async/ has been introduced and elevated to > draft without any public discussion. > > In a nutshell it proposes a new WebGL2 function called > getBufferSubDataAsync which returns a promise that will be called > eventually with the buffer data. > > I think there are several problems: > > 1. The extension process states that "*Extensions move through four > states during their development: proposed, draft, community approved, and > Khronos ratified**"*. This extension never moved through the proposal > stage. > 2. The extension introduces promises to the WebGL API. This requires a > more fundamental discussion. > 3. A discussion if this extension is required if WebWorkers can access > the same context as the main thread has not happened. > > This extension should be in proposal status, and the necessary discussions > should happen first. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From max...@ Mon Jan 2 14:27:22 2017 From: max...@ (Maksims Mihejevs) Date: Mon, 2 Jan 2017 22:27:22 +0000 Subject: [Public WebGL] Re: WEBGL_get_buffer_sub_data_async In-Reply-To: References: Message-ID: Worth mentioning that promises are extremely bad for GC and real-time applications, they do not provide a developer enough control to structure logic so to avoids any allocations. Promises - are not good for real-time at all, and lead to issues with GC. Any API in WebGL that is meant to be used in real-time applications should not be based on API's that are not real-time friendly. On 2 January 2017 at 20:49, Florian B?sch wrote: > Upon thinking about this extension, I don't think it should exist at all. > Ideally the mapBuffer semantic would be exposed. But even if it isn't, it > shall not be that an extension is required to express functionality already > found in the core functionality of the underlying ES specification. > > Furthermore, getBufferSubDataAsync does not adequately express the reality > of map/flush/unmap, and hides the fact that unmap/flush are still > synchronizing calls happening. However getBufferSubDataAsync obstructs > appropriate code dealing with proper insertion of synchronization points. > > In addition, it would lead to allocating promises once or many times per > frame, and since tracking would be required in some instances, would also > lead to allocating a closure once or many times a call. An issue that map > buffer range does not exhibit. > > Due to the lack of discussion of this feature, I believe a great > disservice is done to WebGL 2 by the introduction of these ideas/APIs and I > strongly suggest to withdraw this from draft immediately and go back to the > drawing board. > > > > On Mon, Jan 2, 2017 at 5:48 PM, Florian B?sch wrote: > >> This extension https://www.khronos.org/registry/webgl/extensions/ >> WEBGL_get_buffer_sub_data_async/ has been introduced and elevated to >> draft without any public discussion. >> >> In a nutshell it proposes a new WebGL2 function called >> getBufferSubDataAsync which returns a promise that will be called >> eventually with the buffer data. >> >> I think there are several problems: >> >> 1. 
The extension process states that "*Extensions move through four >> states during their development: proposed, draft, community approved, and >> Khronos ratified**"*. This extension never moved through the proposal >> stage. >> 2. The extension introduces promises to the WebGL API. This requires >> a more fundamental discussion. >> 3. A discussion if this extension is required if WebWorkers can >> access the same context as the main thread has not happened. >> >> This extension should be in proposal status, and the necessary >> discussions should happen first. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jgi...@ Mon Jan 2 15:09:43 2017 From: jgi...@ (Jeff Gilbert) Date: Mon, 2 Jan 2017 15:09:43 -0800 Subject: [Public WebGL] EXT_float_blend In-Reply-To: References: Message-ID: Generally we only promote to draft when someone is working on an extension. It doesn't help to promote extensions no one's working on, and will likely send the wrong signals about the state of the extension spec. On Mon, Jan 2, 2017 at 8:52 AM, Florian B?sch wrote: > No Change has occured to > https://www.khronos.org/registry/webgl/extensions/proposals/EXT_float_blend/ > since April 2015 > > Can this extension be elevated to draft? ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From sun...@ Mon Jan 2 15:48:28 2017 From: sun...@ (Byungseon Shin) Date: Mon, 02 Jan 2017 23:48:28 +0000 Subject: [Public WebGL] OES_EGL_image_external Extension proposal In-Reply-To: References: Message-ID: Hi Florian, We are working on update which will cover dynamic_texture concerns. I plan to upload new version on github for reviewing by this week. Happy 2017 year! Byungseon Shin On Tue, Jan 3, 2017 at 1:58 AM Florian B?sch wrote: > What is the status of the > https://www.khronos.org/registry/webgl/extensions/proposals/OES_EGL_image_external/ > extension? Is work on the spec still going on or is this the final version > that takes into account dynamic_texture concerns? > > On Wed, Oct 12, 2016 at 3:59 AM, Byungseon Shin > wrote: > > Hi Ken, > > Thanks for the feedback. > Of course, we are willing to devote time and eager to enhance WebGL video > rendering performance. > We prefer to use our proposal as a base but we could revise > dynamic_texture as well. > > Kind regards, > Byungseon Shin > > > On Wed, Oct 12, 2016 at 8:48 AM Kenneth Russell wrote: > > Hi Byungseon, > > Thanks for putting together this extension proposal. > > In conversations with groups within Google that are trying to do 360 > degree video rendering in WebGL, they need more information than your > extension would provide; specifically, they would need the exact timestamp > of the frame currently being rendered, as well as other per-frame metadata > like the current width and height of the texture (for variable bitrate > video streams). > > Mark Callow's WEBGL_dynamic_texture extension proposal > https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_dynamic_texture/ > provides the controls needed. However, per working group discussions, the > current extension's a little too complicated. I think if that extension > were simplified a little bit that it would provide all of the performance > benefits yours offers, and the controls that are known to be necessary. 
> > Do you think you'd be willing to devote some time to either extending your > proposal (I can provide specific feedback) or editing down Mark's proposal > to handle these use cases? I'd prefer to edit down WEBGL_dynamic_texture, > if you're willing. > > Thanks, > > -Ken > > > > On Tue, Oct 11, 2016 at 8:27 AM, Byungseon Shin > wrote: > > Hi Corentin, > > On Tue, Oct 11, 2016 at 11:16 PM Corentin Wallez > wrote: > > The problem with using an EGL_image type of extension is that the texture > data store becomes shared between the new texture and the object it was > created from. This is great when the application controls both sides and > can ensure read and writes are probably synchronized, but in the case of > video here, one side would write without the application being able to > control synchronization. > > The standard way to do this type of video / texture binding in EGL is to > use EGL_KHR_stream > which > is a much more complex and might not be an improvement over what developers > can do with the current APIs. > > > As in proposal revision#3 and as Florian already mentioned, > Current proposal support "bind once and update implicitly" concept : "ext.EGLImageTargetTexture2DOES(ext.TEXTURE_EXTERNAL_OES, > videoElement); that the texture is "bound" to the video and no further > calls need to be emitted (as would presently be the case)" > > In a that sense, we could abstract the details implementation into > MediaPlayer Side like synchronizing issues. > > Even EGL_KHR_stream provides more details synchronization issues but still > lot's of GPU driver and Video Decoder need to support the latest > specification. > But OES_EGL_image_external extension is already supported by most of the > GPU drivers and Video decoders. > > And one of the performance bottleneck is to converting EGLImage input to > TEXTURE_2D to handled by WebGL application. > > > On Tue, Oct 11, 2016 at 6:54 AM, Byungseon Shin > wrote: > > Hi Florian, Maksims, > > Thanks for the feedback. > > I have updated proposal to explain how application works. > Please see the attached updated proposal. > > To render an update frame, we need to call ext.EGLImageTargetTexture2DOES > to bind newly generated texture buffer from video decoder. > > By supporting OES_EGL_image_external extension, application can use direct > texture when video driver provides EGLImage like OpenGL ES natvie > applications. > > Our proposal is just focusing on extending format of texture compatible > with OpenGL ES extension and does not conflict with previous proposals. > > So, we provides a simplest way to adopt OES_EGL_image_external extension. > > By using references implementations of browser, WebGL renders video more > than 10 times faster than TEXTURE_2D format with Full HD resolution output. > > Kind regards, > Byungseon Shin > > On Tue, Oct 11, 2016 at 4:51 PM Maksims Mihejevs > wrote: > > Worth mentioning that in web there are multiple sources that have > independent redraw mechanics, that includes video as well as canvas > elements. It might make sense to have single extension for providing direct > access to those sources without need to re-upload it. > > I have assumptions that internally in browsers, both video and canvas have > their buffers, so it might lead to very similar implementation as from > internal side as well as from webgl api side. > > > On 11 Oct 2016 12:12 p.m., "Florian B?sch" wrote: > > Am I correct in assuming that once you've called ext. 
> EGLImageTargetTexture2DOES(ext.TEXTURE_EXTERNAL_OES, videoElement); that > the texture is "bound" to the video and no further calls need to be emitted > (as would presently be the case) for it to stay current? > > If that's the case, I think it's a good idea, it takes some work off web > application programmers to make sure the video is updated, and it should > also in all cases avoid a texture download/reupload. > > There are some other extensions that have attempted to deal with this > problem. > > - > https://www.khronos.org/registry/webgl/extensions/rejected/WEBGL_texture_from_depth_video/ > : this extension is rejected because it introduced a complex behavioral > changes/semantics whose benefits where unclear and the champions of this > extension stopped responding to questions and didn't offer any improvements > on the extension specification. > - > https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_dynamic_texture/ > : this extension is currently in a proposal state and it covers similar > functionality as EGL_image_external. Where it fundamentally differs is that > it also deals with format conversions (YUV 442 to rgb etc.) and accurate > timing for presentation of a video frame. > > Perhaps you could elaborate a little why OES_EGL_image_external would be > preferrable to WEBGL_dynamic_texture, and how the issues that > EGL_image_external does not tackle (timing, format conversion, other > things) would be handled by the web application programmer? > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Mon Jan 2 17:44:01 2017 From: khr...@ (Mark Callow) Date: Tue, 3 Jan 2017 10:44:01 +0900 Subject: [Public WebGL] EXT_float_blend In-Reply-To: References: Message-ID: <76EBEB85-7668-4882-BFCE-F50D40AAA2B9@callow.im> > On Jan 3, 2017, at 8:09, Jeff Gilbert wrote: > > Generally we only promote to draft when someone is working on an > extension. It doesn't help to promote extensions no one's working on, > and will likely send the wrong signals about the state of the > extension spec. > This is very misleading. What do you mean by ?working on"? In this case this simple specification is complete and ready for implementation. According to our rules, a spec should be promoted to draft before anyone attempts to implement it. This spec should be promoted so interested parties can implement it. Promotion will send precisely the right signals. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Mon Jan 2 17:48:01 2017 From: khr...@ (Mark Callow) Date: Tue, 3 Jan 2017 10:48:01 +0900 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> Message-ID: <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> > On Jan 3, 2017, at 2:14, Olli Etuaho wrote: > > The biggest hurdle for this I see is that WEBGL_multiview requires layout qualifiers, which do not exist in core WebGL 1.0 shaders, so it would be a bad fit for WebGL 1.0. Why does this extension need to be supported on WebGL 1.0? Browsers are on the verge of removing the flags that are currently hiding their WebGL 2 implementations. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From max...@ Mon Jan 2 18:03:52 2017 From: max...@ (Maksims Mihejevs) Date: Tue, 3 Jan 2017 02:03:52 +0000 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> Message-ID: Correct me if I'm wrong, but extensions in most cases have no relationship to version of standard, but to the hardware support. Some of the extensions can be supported by WebGL 1.0 and WebGL 2.0 capable platforms, and it is up to a hardware and driver to support then, and up to WebGL (regardless of version) to expose access to them, so even after standard and WebGL core functionality of any version is released, improvements and enhancements to the platform can be implemented later by supporting extensions. WebGL 2.0 does not mean WebGL 1.0 is going away. There will be hardware where WebGL 2.0 won't be available, and it is within interest of web community to have wide support for things, with choice driven by a developer of using less supported feature with potential fallback or not. If WebGL 1.0 does not goes away, does not mean that there should be no extensions work for WebGL platform as a whole that will make WebGL 1.0 better too. Looking at webglstats.com - is great example, where developers seeing how support for certain extensions allows them to decide to work on implementations relying on them with fallbacks if needed. And features made on extensions that are available for WebGL 2.0 and WebGL 1.0 - is good example. Developer want to drive his decision about feature based on extension availability, not based on WebGL version. On 3 January 2017 at 01:48, Mark Callow wrote: > > On Jan 3, 2017, at 2:14, Olli Etuaho wrote: > > The biggest hurdle for this I see is that WEBGL_multiview requires layout > qualifiers, which do not exist in core WebGL 1.0 shaders, so it would be a > bad fit for WebGL 1.0. > > > Why does this extension need to be supported on WebGL 1.0? Browsers are on > the verge of removing the flags that are currently hiding their WebGL 2 > implementations. > > Regards > > -Mark > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Mon Jan 2 18:19:45 2017 From: khr...@ (Mark Callow) Date: Tue, 3 Jan 2017 11:19:45 +0900 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> Message-ID: <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> > On Jan 3, 2017, at 11:03, Maksims Mihejevs wrote: > > There will be hardware where WebGL 2.0 won't be available ... The question in this case is whether any of this hardware can support stereo canvases? Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Tue Jan 3 00:36:55 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Tue, 3 Jan 2017 09:36:55 +0100 Subject: [Public WebGL] EXT_float_blend In-Reply-To: References: Message-ID: Please consult "Extension Development Process": https://www.khronos.org/registry/webgl/extensions/ Extension Development Process Extensions move through four states during their development: *proposed*, *draft*, *community approved*, and *Khronos ratified*. Every extension should advance to *Khronos ratified*. If an extension cannot advance through the extension process it can be *rejected*. 
- *Proposed* extensions are intended for discussion on the public WebGL mailing list, in order to move to *draft* status; they should not be implemented, even under a vendor prefix. If consensus is reached in the community, the extension can be moved to *draft* status. - *Draft* extensions may be implemented under a vendor prefix or behind a run-time option for experimentation purposes, in order to gain experience with the extension before finalizing it. Draft extensions should not be exposed by default by WebGL implementations. Once consensus is reached in the community, the extension can be moved to *community approved* status. - *Community approved* extensions should be implemented without a vendor prefix. When a draft extension moves to community approved status, any existing implementation should immediately remove support for any vendor-prefixed extension name. Once implemented by a vendor, support should not be removed unless there is a serious issue with the extension, such as a security flaw. - *Khronos ratified* extensions are those community approved extensions which have been voted upon by the Khronos Board of Promoters. - *Rejected* extensions should never be implemented. An extension enters *rejected* status because consensus on it could not be reached at the *proposal* stage or technical difficulties arise during implementation at the *draft* stage. A *community approved* extension can only be rejected in extraordinary circumstances. A *Khronos ratified* extension cannot be rejected. ----- The extension has been in proposal since April 2015 with no change. It has to be assumed that no more specification work is going on with that extension. If that is so, then this is a finalized specification in proposal. If it is a finalized specification in proposal and you are unwilling to promote it to draft, then it is a contentious extension. If it is a contentious extension, it has to be rejected. Do you wish to reject this extension? On Tue, Jan 3, 2017 at 12:09 AM, Jeff Gilbert wrote: > Generally we only promote to draft when someone is working on an > extension. It doesn't help to promote extensions no one's working on, > and will likely send the wrong signals about the state of the > extension spec. > > On Mon, Jan 2, 2017 at 8:52 AM, Florian B?sch wrote: > > No Change has occured to > > https://www.khronos.org/registry/webgl/extensions/ > proposals/EXT_float_blend/ > > since April 2015 > > > > Can this extension be elevated to draft? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Tue Jan 3 13:10:44 2017 From: kbr...@ (Kenneth Russell) Date: Tue, 3 Jan 2017 13:10:44 -0800 Subject: [Public WebGL] EXT_float_blend In-Reply-To: <76EBEB85-7668-4882-BFCE-F50D40AAA2B9@callow.im> References: <76EBEB85-7668-4882-BFCE-F50D40AAA2B9@callow.im> Message-ID: Now would be a good time to start exposing this extension, since it requires WebGL 2.0, which is about to be enabled by default in multiple browsers. Google intends to support this extension; just filed http://crbug.com/678064 about doing so. We support moving it to draft status; Jeff (from Mozilla), any comment? -Ken On Mon, Jan 2, 2017 at 5:44 PM, Mark Callow wrote: > > On Jan 3, 2017, at 8:09, Jeff Gilbert wrote: > > Generally we only promote to draft when someone is working on an > extension. It doesn't help to promote extensions no one's working on, > and will likely send the wrong signals about the state of the > extension spec. > > > This is very misleading. 
What do *you* mean by ?working on"? > > In this case this simple specification is complete and ready for > implementation. According to our rules, a spec should be promoted to draft > before anyone attempts to implement it. This spec should be promoted so > interested parties can implement it. Promotion will send precisely the > right signals. > > Regards > > -Mark > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jgi...@ Tue Jan 3 13:53:07 2017 From: jgi...@ (Jeff Gilbert) Date: Tue, 3 Jan 2017 13:53:07 -0800 Subject: [Public WebGL] EXT_float_blend In-Reply-To: References: <76EBEB85-7668-4882-BFCE-F50D40AAA2B9@callow.im> Message-ID: No objections here. On Tue, Jan 3, 2017 at 1:10 PM, Kenneth Russell wrote: > Now would be a good time to start exposing this extension, since it requires > WebGL 2.0, which is about to be enabled by default in multiple browsers. > > Google intends to support this extension; just filed http://crbug.com/678064 > about doing so. We support moving it to draft status; Jeff (from Mozilla), > any comment? > > -Ken > > > On Mon, Jan 2, 2017 at 5:44 PM, Mark Callow wrote: >> >> >> On Jan 3, 2017, at 8:09, Jeff Gilbert wrote: >> >> Generally we only promote to draft when someone is working on an >> extension. It doesn't help to promote extensions no one's working on, >> and will likely send the wrong signals about the state of the >> extension spec. >> >> >> This is very misleading. What do you mean by ?working on"? >> >> In this case this simple specification is complete and ready for >> implementation. According to our rules, a spec should be promoted to draft >> before anyone attempts to implement it. This spec should be promoted so >> interested parties can implement it. Promotion will send precisely the right >> signals. >> >> Regards >> >> -Mark >> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Tue Jan 3 14:51:51 2017 From: kbr...@ (Kenneth Russell) Date: Tue, 3 Jan 2017 14:51:51 -0800 Subject: [Public WebGL] EXT_float_blend In-Reply-To: References: <76EBEB85-7668-4882-BFCE-F50D40AAA2B9@callow.im> Message-ID: Great. Pull request https://github.com/KhronosGroup/WebGL/pull/2230 is up promoting it to draft status. On Tue, Jan 3, 2017 at 1:53 PM, Jeff Gilbert wrote: > No objections here. > > On Tue, Jan 3, 2017 at 1:10 PM, Kenneth Russell wrote: > > Now would be a good time to start exposing this extension, since it > requires > > WebGL 2.0, which is about to be enabled by default in multiple browsers. > > > > Google intends to support this extension; just filed > http://crbug.com/678064 > > about doing so. We support moving it to draft status; Jeff (from > Mozilla), > > any comment? > > > > -Ken > > > > > > On Mon, Jan 2, 2017 at 5:44 PM, Mark Callow wrote: > >> > >> > >> On Jan 3, 2017, at 8:09, Jeff Gilbert wrote: > >> > >> Generally we only promote to draft when someone is working on an > >> extension. It doesn't help to promote extensions no one's working on, > >> and will likely send the wrong signals about the state of the > >> extension spec. > >> > >> > >> This is very misleading. What do you mean by ?working on"? > >> > >> In this case this simple specification is complete and ready for > >> implementation. 
According to our rules, a spec should be promoted to > draft > >> before anyone attempts to implement it. This spec should be promoted so > >> interested parties can implement it. Promotion will send precisely the > right > >> signals. > >> > >> Regards > >> > >> -Mark > >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zmo...@ Tue Jan 3 16:18:14 2017 From: zmo...@ (Zhenyao Mo) Date: Tue, 3 Jan 2017 16:18:14 -0800 Subject: [Public WebGL] EXT_texture_storage In-Reply-To: References: Message-ID: My main concern is we won't have full EXT_texture_storage on top of DX9, on which some WebGL1 implementations are based. To me, a better path is just to switch to WebGL2 whenever it's possible, where texture storage is part of core. On Mon, Jan 2, 2017 at 8:54 AM, Florian B?sch wrote: > No change has occured on https://www.khronos.org/ > registry/webgl/extensions/proposals/EXT_texture_storage/ since September > 2015 > > Can this extension be elevated to draft? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Tue Jan 3 21:27:54 2017 From: kbr...@ (Kenneth Russell) Date: Tue, 3 Jan 2017 21:27:54 -0800 Subject: [Public WebGL] Re: WEBGL_get_buffer_sub_data_async In-Reply-To: References: Message-ID: Apologies for not discussing this extension on public_webgl before introducing it as a draft in the WebGL extension registry. The cost of synchronous glReadPixels has been a longstanding problem in WebGL. The Chrome browser specifically has a particularly deep graphics pipeline, and draining it with a synchronous call each frame imposes a too-great performance penalty. This has forced applications to rewrite certain algorithms when porting to WebGL. getBufferSubDataAsync is a direct parallel to getBufferSubData, and solves these performance pitfalls in Chrome. We've gathered data from two test cases so far, a GPU-based picking algorithm and a GPGPU global illumination algorithm, and the results look good. We will present this data on public_webgl soon, when making a case for moving the extension forward. -Ken On Mon, Jan 2, 2017 at 2:27 PM, Maksims Mihejevs wrote: > Worth mentioning that promises are extremely bad for GC and real-time > applications, they do not provide a developer enough control to structure > logic so to avoids any allocations. > > Promises - are not good for real-time at all, and lead to issues with GC. > Any API in WebGL that is meant to be used in real-time applications should > not be based on API's that are not real-time friendly. > > On 2 January 2017 at 20:49, Florian B?sch wrote: > >> Upon thinking about this extension, I don't think it should exist at all. >> Ideally the mapBuffer semantic would be exposed. But even if it isn't, it >> shall not be that an extension is required to express functionality already >> found in the core functionality of the underlying ES specification. >> >> Furthermore, getBufferSubDataAsync does not adequately express the >> reality of map/flush/unmap, and hides the fact that unmap/flush are still >> synchronizing calls happening. However getBufferSubDataAsync obstructs >> appropriate code dealing with proper insertion of synchronization points. >> >> In addition, it would lead to allocating promises once or many times per >> frame, and since tracking would be required in some instances, would also >> lead to allocating a closure once or many times a call. An issue that map >> buffer range does not exhibit. 
>> >> Due to the lack of discussion of this feature, I believe a great >> disservice is done to WebGL 2 by the introduction of these ideas/APIs and I >> strongly suggest to withdraw this from draft immediately and go back to the >> drawing board. >> >> >> >> On Mon, Jan 2, 2017 at 5:48 PM, Florian B?sch wrote: >> >>> This extension https://www.khronos.org/registry/webgl/extensions/ >>> WEBGL_get_buffer_sub_data_async/ has been introduced and elevated to >>> draft without any public discussion. >>> >>> In a nutshell it proposes a new WebGL2 function called >>> getBufferSubDataAsync which returns a promise that will be called >>> eventually with the buffer data. >>> >>> I think there are several problems: >>> >>> 1. The extension process states that "*Extensions move through four >>> states during their development: proposed, draft, community approved, and >>> Khronos ratified**"*. This extension never moved through the >>> proposal stage. >>> 2. The extension introduces promises to the WebGL API. This requires >>> a more fundamental discussion. >>> 3. A discussion if this extension is required if WebWorkers can >>> access the same context as the main thread has not happened. >>> >>> This extension should be in proposal status, and the necessary >>> discussions should happen first. >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Tue Jan 3 21:29:29 2017 From: kbr...@ (Kenneth Russell) Date: Tue, 3 Jan 2017 21:29:29 -0800 Subject: [Public WebGL] EXT_texture_storage In-Reply-To: References: Message-ID: I agree that it'd be better to reject this extension rather than move it forward. Now that WebGL 2.0's on the verge of shipping in multiple browsers, I think we should encourage more implementations rather than continue to add extensions to WebGL 1.0. On Tue, Jan 3, 2017 at 4:18 PM, Zhenyao Mo wrote: > My main concern is we won't have full EXT_texture_storage on top of DX9, > on which some WebGL1 implementations are based. > > To me, a better path is just to switch to WebGL2 whenever it's possible, > where texture storage is part of core. > > On Mon, Jan 2, 2017 at 8:54 AM, Florian B?sch wrote: > >> No change has occured on https://www.khronos.org/regist >> ry/webgl/extensions/proposals/EXT_texture_storage/ since September 2015 >> >> Can this extension be elevated to draft? >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Wed Jan 4 03:01:23 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Wed, 4 Jan 2017 12:01:23 +0100 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> Message-ID: On Mon, Jan 2, 2017 at 6:14 PM, Olli Etuaho wrote: > > 1. Why not just enable WEBGL_multiview extension automatically when a > stereo canvas is requested? Support for stereo canvas requires core spec > changes either way. The biggest hurdle for this I see is that > WEBGL_multiview requires layout qualifiers, which do not exist in core > WebGL 1.0 shaders, so it would be a bad fit for WebGL 1.0. > I'm not sure I understand the question. How is a stereo canvas requested? 2. Should drawing commands that don?t use multiview shaders be compatible > with choosing both default framebuffer draw buffers with gl.BACK? This > could be done similarly for clears as well. 
Also not sure I understand this one, can you elaborate? On Tue, Jan 3, 2017 at 2:48 AM, Mark Callow wrote: > > Why does this extension need to be supported on WebGL 1.0? Browsers are on > the verge of removing the flags that are currently hiding their WebGL 2 > implementations. > If something can be supported in WebGL1 in principle, it's worth investigating because WebGL1 will be the only choice for a long time long after WebGL2 activation by some or other UA. That's because either other UAs haven't implemented WebGL2 yet, or because the hardware does not support it, or the hardware/drivers exhibit blacklistable flags for the WebGL2 implementation (but not WebGL1). On Tue, Jan 3, 2017 at 3:19 AM, Mark Callow wrote: > The question in this case is whether any of this hardware can support > stereo canvases? > That's irrelevant. The ES extension for multiview https://www.khronos.org/registry/gles/extensions/OVR/multiview.txt refers to OpenGL ES 3.0 as required. It doesn't matter what the hardware supports or not, unless an API revision of ES 3.0 is used (both on the front-end as well as on the backend, or something better), OVR_multiview cannot be exposed. It's unfortunate that that means OVR_multiview can't go on WebGL1, but it is what it is. And unless we're willing to break with the parent standards intentionally (please don't), that's the way it is. ---- I have some questions regarding this extension: 1. The extension definition does not follow the customary form of extensions so far defined that mirror underlying ES extensions. Can you elaborate why? 2. How does an application switch between stereo and non stereo rendering? (a common thing that web pages would do depending if the user wishes to interact with a HMD or with a monitor) 3. Does this extension integrate with WebVR in any way? -------------- next part -------------- An HTML attachment was scrubbed... URL: From oet...@ Wed Jan 4 03:53:31 2017 From: oet...@ (Olli Etuaho) Date: Wed, 4 Jan 2017 11:53:31 +0000 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> Message-ID: <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> Trying to answer the questions here: The extension document includes proposed changes to the core WebGL 1.0 spec to enable creation of stereo default framebuffers. This is required to introduce an opaque stereo framebuffer concept which can be implemented under the hood in various ways, such as side-by-side images inside a single texture, as a texture array, or as a native stereo framebuffer. This makes it possible to avoid extra copies in the display pipeline. Basically the API user would set context creation parameter ?stereo: true? and then get a stereo default framebuffer back. Support can be checked by querying the actual context creation parameters. We want stereo canvases in WebGL 1.0 so that existing WebVR applications that are built on WebGL 1.0 will be able to easily migrate to using them. It also makes the spec simpler. I would expect practically all WebVR capable hardware to eventually support WebGL 2.0, though. In the current proposal using shaders that don?t declare numViews can?t be used to render to multiple views with a single draw command. 
Whether multiple views can be cleared with a single clear() command is not clearly defined in the native OVR_multiview spec either, and needs to be either fixed there or defined in the WebGL version of the extension spec. The question is whether these things should be possible, maybe even in the core spec when using a stereo framebuffer. We?re trying to keep the WebGL extension spec close to OVR_multiview, but unfortunately OVR_multiview is quite vague in many respects and needs to be tightened for WebGL. Also the WebGL spec needs to be reasonably implementable on top of different APIs like DirectX as well, which requires some changes like the support for opaque stereo framebuffers. -Olli From: Florian B?sch [mailto:pyalot...@] Sent: keskiviikkona 4. tammikuuta 2017 11.01 To: Mark Callow Cc: Maksims Mihejevs ; Olli Etuaho ; public webgl Subject: Re: [Public WebGL] WEBGL_multiview discussion On Mon, Jan 2, 2017 at 6:14 PM, Olli Etuaho > wrote: 1. Why not just enable WEBGL_multiview extension automatically when a stereo canvas is requested? Support for stereo canvas requires core spec changes either way. The biggest hurdle for this I see is that WEBGL_multiview requires layout qualifiers, which do not exist in core WebGL 1.0 shaders, so it would be a bad fit for WebGL 1.0. I'm not sure I understand the question. How is a stereo canvas requested? 2. Should drawing commands that don?t use multiview shaders be compatible with choosing both default framebuffer draw buffers with gl.BACK? This could be done similarly for clears as well. Also not sure I understand this one, can you elaborate? On Tue, Jan 3, 2017 at 2:48 AM, Mark Callow > wrote: Why does this extension need to be supported on WebGL 1.0? Browsers are on the verge of removing the flags that are currently hiding their WebGL 2 implementations. If something can be supported in WebGL1 in principle, it's worth investigating because WebGL1 will be the only choice for a long time long after WebGL2 activation by some or other UA. That's because either other UAs haven't implemented WebGL2 yet, or because the hardware does not support it, or the hardware/drivers exhibit blacklistable flags for the WebGL2 implementation (but not WebGL1). On Tue, Jan 3, 2017 at 3:19 AM, Mark Callow > wrote: The question in this case is whether any of this hardware can support stereo canvases? That's irrelevant. The ES extension for multiview https://www.khronos.org/registry/gles/extensions/OVR/multiview.txt refers to OpenGL ES 3.0 as required. It doesn't matter what the hardware supports or not, unless an API revision of ES 3.0 is used (both on the front-end as well as on the backend, or something better), OVR_multiview cannot be exposed. It's unfortunate that that means OVR_multiview can't go on WebGL1, but it is what it is. And unless we're willing to break with the parent standards intentionally (please don't), that's the way it is. ---- I have some questions regarding this extension: 1. The extension definition does not follow the customary form of extensions so far defined that mirror underlying ES extensions. Can you elaborate why? 2. How does an application switch between stereo and non stereo rendering? (a common thing that web pages would do depending if the user wishes to interact with a HMD or with a monitor) 3. Does this extension integrate with WebVR in any way? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From max...@ Wed Jan 4 05:07:52 2017 From: max...@ (Maksims Mihejevs) Date: Wed, 4 Jan 2017 13:07:52 +0000 Subject: [Public WebGL] Re: WEBGL_get_buffer_sub_data_async In-Reply-To: References: Message-ID: >From PlayCanvas side, we express a need for async glReadPixels path too. We and our users have been using it in many ways, some of the ways: 1. GPU picking: ID encoded in unique colour, reading pixel under mouse. 2. GPU screen to world: reading pixel from depth texture, and using frustum with math reconstructing world position. 3. Render Target to another Canvas. In Editor we have thumbnail previews for materials, models, cubemaps and other assets. We render them into render target in main context and then reading pixels to create ImageData so it can be put to another canvas using putImageData. 4. Some custom algorithms to generate large amounts of computation heavy data saved into texture, then read on CPU - this depends per case. Sometimes async approach is viable there, sometimes it is not. In many cases glReadPixels is called per each frame, like for picking, and easily can drop frame rate due to blocking nature. I've noticed that PBOs are mentioned in WebGL 2.0 spec, but not much info apart of just that mention: https://www.khronos.org/registry/webgl/specs/latest/2.0/#4.2 PBOs would allow to get render target data into buffers without stalling GPU pipeline, and then read them. How does getBufferSubDataAsync relates to PBOs? Cheers, Max On 4 January 2017 at 05:27, Kenneth Russell wrote: > Apologies for not discussing this extension on public_webgl before > introducing it as a draft in the WebGL extension registry. > > The cost of synchronous glReadPixels has been a longstanding problem in > WebGL. The Chrome browser specifically has a particularly deep graphics > pipeline, and draining it with a synchronous call each frame imposes a > too-great performance penalty. This has forced applications to rewrite > certain algorithms when porting to WebGL. > > getBufferSubDataAsync is a direct parallel to getBufferSubData, and solves > these performance pitfalls in Chrome. We've gathered data from two test > cases so far, a GPU-based picking algorithm and a GPGPU global illumination > algorithm, and the results look good. We will present this data on > public_webgl soon, when making a case for moving the extension forward. > > -Ken > > > > On Mon, Jan 2, 2017 at 2:27 PM, Maksims Mihejevs > wrote: > >> Worth mentioning that promises are extremely bad for GC and real-time >> applications, they do not provide a developer enough control to structure >> logic so to avoids any allocations. >> >> Promises - are not good for real-time at all, and lead to issues with GC. >> Any API in WebGL that is meant to be used in real-time applications should >> not be based on API's that are not real-time friendly. >> >> On 2 January 2017 at 20:49, Florian B?sch wrote: >> >>> Upon thinking about this extension, I don't think it should exist at >>> all. Ideally the mapBuffer semantic would be exposed. But even if it isn't, >>> it shall not be that an extension is required to express functionality >>> already found in the core functionality of the underlying ES specification. >>> >>> Furthermore, getBufferSubDataAsync does not adequately express the >>> reality of map/flush/unmap, and hides the fact that unmap/flush are still >>> synchronizing calls happening. However getBufferSubDataAsync obstructs >>> appropriate code dealing with proper insertion of synchronization points. 
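To make the shader-side discussion concrete, here is a rough sketch of what a multiview stereo draw could look like under the current proposal. The stereo context attribute follows Olli's description of the proposed core-spec change, the layout qualifier and gl_ViewID_OVR built-in follow the native OVR_multiview extension the proposal builds on, and the extension string and attribute-query details are assumptions until the spec settles.

    // Proposed context creation parameter (name and query mechanism still open):
    var gl = canvas.getContext('webgl2', { stereo: true });
    var isStereo = gl.getContextAttributes().stereo === true;

    var ext = gl.getExtension('WEBGL_multiview'); // assumed extension string

    // Vertex shader following the native OVR_multiview GLSL: the layout qualifier
    // fixes the view count at compile time, and gl_ViewID_OVR selects the per-view
    // matrix, so a single draw call renders both eyes.
    var vertexSrc =
        '#version 300 es\n' +
        '#extension GL_OVR_multiview : require\n' +
        'layout(num_views = 2) in;\n' +
        'in vec4 aPosition;\n' +
        'uniform mat4 uViewProjection[2];\n' +
        'void main() {\n' +
        '    gl_Position = uViewProjection[gl_ViewID_OVR] * aPosition;\n' +
        '}\n';

Under the simplification Rafael suggests, the num_views layout line would be dropped and the view count would come purely from the API side.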
>>> >>> In addition, it would lead to allocating promises once or many times per >>> frame, and since tracking would be required in some instances, would also >>> lead to allocating a closure once or many times a call. An issue that map >>> buffer range does not exhibit. >>> >>> Due to the lack of discussion of this feature, I believe a great >>> disservice is done to WebGL 2 by the introduction of these ideas/APIs and I >>> strongly suggest to withdraw this from draft immediately and go back to the >>> drawing board. >>> >>> >>> >>> On Mon, Jan 2, 2017 at 5:48 PM, Florian B?sch wrote: >>> >>>> This extension https://www.khronos.org/registry/webgl/extensions/ >>>> WEBGL_get_buffer_sub_data_async/ has been introduced and elevated to >>>> draft without any public discussion. >>>> >>>> In a nutshell it proposes a new WebGL2 function called >>>> getBufferSubDataAsync which returns a promise that will be called >>>> eventually with the buffer data. >>>> >>>> I think there are several problems: >>>> >>>> 1. The extension process states that "*Extensions move through four >>>> states during their development: proposed, draft, community approved, and >>>> Khronos ratified**"*. This extension never moved through the >>>> proposal stage. >>>> 2. The extension introduces promises to the WebGL API. This >>>> requires a more fundamental discussion. >>>> 3. A discussion if this extension is required if WebWorkers can >>>> access the same context as the main thread has not happened. >>>> >>>> This extension should be in proposal status, and the necessary >>>> discussions should happen first. >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kai...@ Wed Jan 4 10:55:04 2017 From: kai...@ (Kai Ninomiya) Date: Wed, 04 Jan 2017 18:55:04 +0000 Subject: [Public WebGL] Re: WEBGL_get_buffer_sub_data_async In-Reply-To: References: Message-ID: Max, In my demo [1] there are 3 different possible readback paths: * readPixels to CPU [2] * readPixels to PBO + getBufferSubData [3] * readPixels to PBO + getBufferSubDataAsync [4] As you said, getBufferSubData(/Async) can be used for reading back any buffer data (such as transform feedback or GPGPU shader results). PBO is necessary (AFAIK) for async readback from framebuffer data (note: an async readPixels wouldn't be as useful as it would block any operation which writes to the framebuffer). -Kai [1] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ [2] https://github.com/kainino0x/getBufferSubDataAsync-Demo/blob/master/index.html#L245 [3] https://github.com/kainino0x/getBufferSubDataAsync-Demo/blob/master/index.html#L261 [4] https://github.com/kainino0x/getBufferSubDataAsync-Demo/blob/master/index.html#L280 On Wed, Jan 4, 2017 at 5:09 AM Maksims Mihejevs wrote: > From PlayCanvas side, we express a need for async glReadPixels path too. > We and our users have been using it in many ways, some of the ways: > 1. GPU picking: ID encoded in unique colour, reading pixel under mouse. > 2. GPU screen to world: reading pixel from depth texture, and using > frustum with math reconstructing world position. > 3. Render Target to another Canvas. In Editor we have thumbnail previews > for materials, models, cubemaps and other assets. We render them into > render target in main context and then reading pixels to create ImageData > so it can be put to another canvas using putImageData. > 4. Some custom algorithms to generate large amounts of computation heavy > data saved into texture, then read on CPU - this depends per case. 
> Sometimes async approach is viable there, sometimes it is not. > > In many cases glReadPixels is called per each frame, like for picking, and > easily can drop frame rate due to blocking nature. > > I've noticed that PBOs are mentioned in WebGL 2.0 spec, but not much info > apart of just that mention: > https://www.khronos.org/registry/webgl/specs/latest/2.0/#4.2 > PBOs would allow to get render target data into buffers without stalling > GPU pipeline, and then read them. > > How does getBufferSubDataAsync relates to PBOs? > > Cheers, > Max > > On 4 January 2017 at 05:27, Kenneth Russell wrote: > > Apologies for not discussing this extension on public_webgl before > introducing it as a draft in the WebGL extension registry. > > The cost of synchronous glReadPixels has been a longstanding problem in > WebGL. The Chrome browser specifically has a particularly deep graphics > pipeline, and draining it with a synchronous call each frame imposes a > too-great performance penalty. This has forced applications to rewrite > certain algorithms when porting to WebGL. > > getBufferSubDataAsync is a direct parallel to getBufferSubData, and solves > these performance pitfalls in Chrome. We've gathered data from two test > cases so far, a GPU-based picking algorithm and a GPGPU global illumination > algorithm, and the results look good. We will present this data on > public_webgl soon, when making a case for moving the extension forward. > > -Ken > > > > On Mon, Jan 2, 2017 at 2:27 PM, Maksims Mihejevs > wrote: > > Worth mentioning that promises are extremely bad for GC and real-time > applications, they do not provide a developer enough control to structure > logic so to avoids any allocations. > > Promises - are not good for real-time at all, and lead to issues with GC. > Any API in WebGL that is meant to be used in real-time applications should > not be based on API's that are not real-time friendly. > > On 2 January 2017 at 20:49, Florian B?sch wrote: > > Upon thinking about this extension, I don't think it should exist at all. > Ideally the mapBuffer semantic would be exposed. But even if it isn't, it > shall not be that an extension is required to express functionality already > found in the core functionality of the underlying ES specification. > > Furthermore, getBufferSubDataAsync does not adequately express the reality > of map/flush/unmap, and hides the fact that unmap/flush are still > synchronizing calls happening. However getBufferSubDataAsync obstructs > appropriate code dealing with proper insertion of synchronization points. > > In addition, it would lead to allocating promises once or many times per > frame, and since tracking would be required in some instances, would also > lead to allocating a closure once or many times a call. An issue that map > buffer range does not exhibit. > > Due to the lack of discussion of this feature, I believe a great > disservice is done to WebGL 2 by the introduction of these ideas/APIs and I > strongly suggest to withdraw this from draft immediately and go back to the > drawing board. > > > > On Mon, Jan 2, 2017 at 5:48 PM, Florian B?sch wrote: > > This extension > https://www.khronos.org/registry/webgl/extensions/WEBGL_get_buffer_sub_data_async/ > has been introduced and elevated to draft without any public discussion. > > In a nutshell it proposes a new WebGL2 function called > getBufferSubDataAsync which returns a promise that will be called > eventually with the buffer data. > > I think there are several problems: > > 1. 
The extension process states that "*Extensions move through four > states during their development: proposed, draft, community approved, and > Khronos ratified**"*. This extension never moved through the proposal > stage. > 2. The extension introduces promises to the WebGL API. This requires a > more fundamental discussion. > 3. A discussion if this extension is required if WebWorkers can access > the same context as the main thread has not happened. > > This extension should be in proposal status, and the necessary discussions > should happen first. > > > > > >
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4845 bytes Desc: S/MIME Cryptographic Signature URL:

From Raf...@ Wed Jan 4 14:08:23 2017 From: Raf...@ (Rafael Cintron) Date: Wed, 4 Jan 2017 22:08:23 +0000 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> Message-ID:

Some feedback: I would prefer that this extension work in WebGL 1.0 and be usable to render to multiple views with one draw command. While it is great that Chrome and Firefox have WebGL 2.0, it will not be available in all browsers in the short term. For Edge, WebVR is likely going to arrive before WebGL 2.0.

If the layout qualifier prevents the shader from working in WebGL 1.0, I think we should drop the qualifier requirement. This will allow the shader to be used in more places and remove the additional overhead/complexity of rationalizing the number of views in the qualifier with the state of the world on the API side.

We should also consider simplifying the extension API by removing FramebufferTextureMultiviewOVR and its associated parameters. If we need these for a more general extension that allows additional things to be "multi-viewed", we can add them as a separate extension.

--Rafael

From: owners-public_webgl...@ [mailto:owners-public_webgl...@] On Behalf Of Olli Etuaho Sent: Wednesday, January 4, 2017 3:54 AM To: Florian Bösch ; Mark Callow Cc: Maksims Mihejevs ; public webgl Subject: RE: [Public WebGL] WEBGL_multiview discussion

Trying to answer the questions here: The extension document includes proposed changes to the core WebGL 1.0 spec to enable creation of stereo default framebuffers. This is required to introduce an opaque stereo framebuffer concept which can be implemented under the hood in various ways, such as side-by-side images inside a single texture, as a texture array, or as a native stereo framebuffer. This makes it possible to avoid extra copies in the display pipeline. Basically the API user would set the context creation parameter "stereo: true" and then get a stereo default framebuffer back. Support can be checked by querying the actual context creation parameters. We want stereo canvases in WebGL 1.0 so that existing WebVR applications that are built on WebGL 1.0 will be able to easily migrate to using them. It also makes the spec simpler. I would expect practically all WebVR capable hardware to eventually support WebGL 2.0, though.

In the current proposal, shaders that don't declare numViews can't be used to render to multiple views with a single draw command.
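To make the intended usage concrete, a rough sketch from the application's point of view could look like the snippet below. The "stereo" context attribute is the one proposed above, the shader syntax follows the native OVR_multiview spec, and "canvas" is just some existing canvas element; none of these names or syntax should be read as final.

// canvas: an existing HTMLCanvasElement.
// Ask for a stereo default framebuffer at context creation (proposed attribute).
var gl = canvas.getContext("webgl", { stereo: true });

// Support is detected by querying the actual context creation parameters.
var stereoSupported = gl !== null && gl.getContextAttributes().stereo === true;

// Under the current proposal a multiview draw requires the shader to declare the
// number of views up front, OVR_multiview-style. This is ESSL 3.00 syntax, which
// is why the qualifier is an awkward fit for core WebGL 1.0 shaders.
var multiviewVS =
  "#version 300 es\n" +
  "#extension GL_OVR_multiview : require\n" +
  "layout(num_views = 2) in;\n" +
  "uniform mat4 u_viewMatrix[2];\n" +
  "in vec4 a_position;\n" +
  "void main() { gl_Position = u_viewMatrix[gl_ViewID_OVR] * a_position; }\n";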
Whether multiple views can be cleared with a single clear() command is not clearly defined in the native OVR_multiview spec either, and needs to be either fixed there or defined in the WebGL version of the extension spec. The question is whether these things should be possible, maybe even in the core spec when using a stereo framebuffer. We?re trying to keep the WebGL extension spec close to OVR_multiview, but unfortunately OVR_multiview is quite vague in many respects and needs to be tightened for WebGL. Also the WebGL spec needs to be reasonably implementable on top of different APIs like DirectX as well, which requires some changes like the support for opaque stereo framebuffers. -Olli From: Florian B?sch [mailto:pyalot...@] Sent: keskiviikkona 4. tammikuuta 2017 11.01 To: Mark Callow > Cc: Maksims Mihejevs >; Olli Etuaho >; public webgl > Subject: Re: [Public WebGL] WEBGL_multiview discussion On Mon, Jan 2, 2017 at 6:14 PM, Olli Etuaho > wrote: 1. Why not just enable WEBGL_multiview extension automatically when a stereo canvas is requested? Support for stereo canvas requires core spec changes either way. The biggest hurdle for this I see is that WEBGL_multiview requires layout qualifiers, which do not exist in core WebGL 1.0 shaders, so it would be a bad fit for WebGL 1.0. I'm not sure I understand the question. How is a stereo canvas requested? 2. Should drawing commands that don?t use multiview shaders be compatible with choosing both default framebuffer draw buffers with gl.BACK? This could be done similarly for clears as well. Also not sure I understand this one, can you elaborate? On Tue, Jan 3, 2017 at 2:48 AM, Mark Callow > wrote: Why does this extension need to be supported on WebGL 1.0? Browsers are on the verge of removing the flags that are currently hiding their WebGL 2 implementations. If something can be supported in WebGL1 in principle, it's worth investigating because WebGL1 will be the only choice for a long time long after WebGL2 activation by some or other UA. That's because either other UAs haven't implemented WebGL2 yet, or because the hardware does not support it, or the hardware/drivers exhibit blacklistable flags for the WebGL2 implementation (but not WebGL1). On Tue, Jan 3, 2017 at 3:19 AM, Mark Callow > wrote: The question in this case is whether any of this hardware can support stereo canvases? That's irrelevant. The ES extension for multiview https://www.khronos.org/registry/gles/extensions/OVR/multiview.txt refers to OpenGL ES 3.0 as required. It doesn't matter what the hardware supports or not, unless an API revision of ES 3.0 is used (both on the front-end as well as on the backend, or something better), OVR_multiview cannot be exposed. It's unfortunate that that means OVR_multiview can't go on WebGL1, but it is what it is. And unless we're willing to break with the parent standards intentionally (please don't), that's the way it is. ---- I have some questions regarding this extension: 1. The extension definition does not follow the customary form of extensions so far defined that mirror underlying ES extensions. Can you elaborate why? 2. How does an application switch between stereo and non stereo rendering? (a common thing that web pages would do depending if the user wishes to interact with a HMD or with a monitor) 3. Does this extension integrate with WebVR in any way? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pya...@ Wed Jan 4 14:18:42 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Wed, 4 Jan 2017 23:18:42 +0100 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> Message-ID: On Wed, Jan 4, 2017 at 11:08 PM, Rafael Cintron < Rafael.Cintron...@> wrote: > > If the layout qualifier prevents the shader from working in WebGL 1.0, I > think we should drop the qualifier requirement. This will allow the shader > to be used in more places and remove the additional overhead/complexity of > rationalizing the number of views in the qualifier with the state of the > world on the API side. > You'd effectively default to a layout of 2 views. I think that could be a mistake. It's widely recognized that a wide FOV is more immersive. At the same time very wide FOVs will require more than 2 views to be rendered to avoid excessive distortion (due to perspective projection that pinches everything in the center and blows it up towards the edges). There's at least some HMDs that come with 4 panels, and other wide FOV HMDs might use a 4-split projection to get around distortion issues. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Wed Jan 4 14:20:47 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Wed, 4 Jan 2017 23:20:47 +0100 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> Message-ID: That didn't address one of my questions: How do you switch between a single view and a stereo view while the application is running on the same canvas/context? An additional question I also have is: How do you deal with the need to render multiple views into a framebuffer object? -------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Wed Jan 4 17:20:56 2017 From: khr...@ (Mark Callow) Date: Thu, 5 Jan 2017 10:20:56 +0900 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> Message-ID: > On Jan 4, 2017, at 20:01, Florian B?sch wrote: > > On Tue, Jan 3, 2017 at 3:19 AM, Mark Callow > wrote: > The question in this case is whether any of this hardware can support stereo canvases? > That's irrelevant. > As I hope you see now from Olli?s reply, my question is not irrelevant as it is related to the proposed changes to core to enable creation of stereo default framebuffers, which are there as far as I can see only to support WebGL 1.0. However, since Rafael has stated that Edge needs to support VR via WebGL 1, my question has become moot. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From khr...@ Thu Jan 5 01:16:42 2017 From: khr...@ (Mark Callow) Date: Thu, 5 Jan 2017 18:16:42 +0900 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> Message-ID: <3141DB03-91E4-42F4-BC59-BAB338C535CE@callow.im> > On Jan 4, 2017, at 20:53, Olli Etuaho wrote: > > The extension document includes proposed changes to the core WebGL 1.0 spec to enable creation of stereo default framebuffers. Adding functionality like this means it is no longer WebGL 1.0. We would have to create a WebGL 1.1. Is there any way for an application to determine that the implementation supports a stereo/multiview default framebuffer absent attempting to create the context with one? This question is valid regardless of version numbering. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From max...@ Thu Jan 5 02:55:14 2017 From: max...@ (Maksims Mihejevs) Date: Thu, 5 Jan 2017 10:55:14 +0000 Subject: [Public WebGL] Re: WEBGL_get_buffer_sub_data_async In-Reply-To: References: Message-ID: Thank you for answer, this examples make sense. The only concern as mentioned before are: Promises. There is no a single use of promises in WebGL, and there is a reason why they should not be used in real-time applications if they can be avoided (which is the case). If possible Fence (Sync Objects) were involved in design process? Could something like this be achieved by using more generic functionality in GL such as Sync Objects? Kind Regards, Max On 4 January 2017 at 18:55, Kai Ninomiya wrote: > Max, > > In my demo [1] there are 3 different possible readback paths: > * readPixels to CPU [2] > * readPixels to PBO + getBufferSubData [3] > * readPixels to PBO + getBufferSubDataAsync [4] > > As you said, getBufferSubData(/Async) can be used for reading back any > buffer data (such as transform feedback or GPGPU shader results). PBO is > necessary (AFAIK) for async readback from framebuffer data (note: an async > readPixels wouldn't be as useful as it would block any operation which > writes to the framebuffer). > > -Kai > > [1] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ > [2] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ > blob/master/index.html#L245 > [3] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ > blob/master/index.html#L261 > [4] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ > blob/master/index.html#L280 > > On Wed, Jan 4, 2017 at 5:09 AM Maksims Mihejevs > wrote: > >> From PlayCanvas side, we express a need for async glReadPixels path too. >> We and our users have been using it in many ways, some of the ways: >> 1. GPU picking: ID encoded in unique colour, reading pixel under mouse. >> 2. GPU screen to world: reading pixel from depth texture, and using >> frustum with math reconstructing world position. >> 3. Render Target to another Canvas. In Editor we have thumbnail previews >> for materials, models, cubemaps and other assets. We render them into >> render target in main context and then reading pixels to create ImageData >> so it can be put to another canvas using putImageData. >> 4. Some custom algorithms to generate large amounts of computation heavy >> data saved into texture, then read on CPU - this depends per case. 
>> Sometimes async approach is viable there, sometimes it is not. >> >> In many cases glReadPixels is called per each frame, like for picking, >> and easily can drop frame rate due to blocking nature. >> >> I've noticed that PBOs are mentioned in WebGL 2.0 spec, but not much info >> apart of just that mention: https://www.khronos.org/registry/webgl/specs/ >> latest/2.0/#4.2 >> PBOs would allow to get render target data into buffers without stalling >> GPU pipeline, and then read them. >> >> How does getBufferSubDataAsync relates to PBOs? >> >> Cheers, >> Max >> >> On 4 January 2017 at 05:27, Kenneth Russell wrote: >> >> Apologies for not discussing this extension on public_webgl before >> introducing it as a draft in the WebGL extension registry. >> >> The cost of synchronous glReadPixels has been a longstanding problem in >> WebGL. The Chrome browser specifically has a particularly deep graphics >> pipeline, and draining it with a synchronous call each frame imposes a >> too-great performance penalty. This has forced applications to rewrite >> certain algorithms when porting to WebGL. >> >> getBufferSubDataAsync is a direct parallel to getBufferSubData, and >> solves these performance pitfalls in Chrome. We've gathered data from two >> test cases so far, a GPU-based picking algorithm and a GPGPU global >> illumination algorithm, and the results look good. We will present this >> data on public_webgl soon, when making a case for moving the extension >> forward. >> >> -Ken >> >> >> >> On Mon, Jan 2, 2017 at 2:27 PM, Maksims Mihejevs >> wrote: >> >> Worth mentioning that promises are extremely bad for GC and real-time >> applications, they do not provide a developer enough control to structure >> logic so to avoids any allocations. >> >> Promises - are not good for real-time at all, and lead to issues with GC. >> Any API in WebGL that is meant to be used in real-time applications should >> not be based on API's that are not real-time friendly. >> >> On 2 January 2017 at 20:49, Florian B?sch wrote: >> >> Upon thinking about this extension, I don't think it should exist at all. >> Ideally the mapBuffer semantic would be exposed. But even if it isn't, it >> shall not be that an extension is required to express functionality already >> found in the core functionality of the underlying ES specification. >> >> Furthermore, getBufferSubDataAsync does not adequately express the >> reality of map/flush/unmap, and hides the fact that unmap/flush are still >> synchronizing calls happening. However getBufferSubDataAsync obstructs >> appropriate code dealing with proper insertion of synchronization points. >> >> In addition, it would lead to allocating promises once or many times per >> frame, and since tracking would be required in some instances, would also >> lead to allocating a closure once or many times a call. An issue that map >> buffer range does not exhibit. >> >> Due to the lack of discussion of this feature, I believe a great >> disservice is done to WebGL 2 by the introduction of these ideas/APIs and I >> strongly suggest to withdraw this from draft immediately and go back to the >> drawing board. >> >> >> >> On Mon, Jan 2, 2017 at 5:48 PM, Florian B?sch wrote: >> >> This extension https://www.khronos.org/registry/webgl/extensions/ >> WEBGL_get_buffer_sub_data_async/ has been introduced and elevated to >> draft without any public discussion. 
>> >> In a nutshell it proposes a new WebGL2 function called >> getBufferSubDataAsync which returns a promise that will be called >> eventually with the buffer data. >> >> I think there are several problems: >> >> 1. The extension process states that "*Extensions move through four >> states during their development: proposed, draft, community approved, and >> Khronos ratified**"*. This extension never moved through the proposal >> stage. >> 2. The extension introduces promises to the WebGL API. This requires >> a more fundamental discussion. >> 3. A discussion if this extension is required if WebWorkers can >> access the same context as the main thread has not happened. >> >> This extension should be in proposal status, and the necessary >> discussions should happen first. >> >> >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Thu Jan 5 08:33:18 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Thu, 5 Jan 2017 17:33:18 +0100 Subject: [Public WebGL] Re: WEBGL_get_buffer_sub_data_async In-Reply-To: References: Message-ID: Could you link to the changesets that implement getBufferSubDataAsync in google chrome? On Wed, Jan 4, 2017 at 7:55 PM, Kai Ninomiya wrote: > Max, > > In my demo [1] there are 3 different possible readback paths: > * readPixels to CPU [2] > * readPixels to PBO + getBufferSubData [3] > * readPixels to PBO + getBufferSubDataAsync [4] > > As you said, getBufferSubData(/Async) can be used for reading back any > buffer data (such as transform feedback or GPGPU shader results). PBO is > necessary (AFAIK) for async readback from framebuffer data (note: an async > readPixels wouldn't be as useful as it would block any operation which > writes to the framebuffer). > > -Kai > > [1] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ > [2] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ > blob/master/index.html#L245 > [3] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ > blob/master/index.html#L261 > [4] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ > blob/master/index.html#L280 > > On Wed, Jan 4, 2017 at 5:09 AM Maksims Mihejevs > wrote: > >> From PlayCanvas side, we express a need for async glReadPixels path too. >> We and our users have been using it in many ways, some of the ways: >> 1. GPU picking: ID encoded in unique colour, reading pixel under mouse. >> 2. GPU screen to world: reading pixel from depth texture, and using >> frustum with math reconstructing world position. >> 3. Render Target to another Canvas. In Editor we have thumbnail previews >> for materials, models, cubemaps and other assets. We render them into >> render target in main context and then reading pixels to create ImageData >> so it can be put to another canvas using putImageData. >> 4. Some custom algorithms to generate large amounts of computation heavy >> data saved into texture, then read on CPU - this depends per case. >> Sometimes async approach is viable there, sometimes it is not. >> >> In many cases glReadPixels is called per each frame, like for picking, >> and easily can drop frame rate due to blocking nature. >> >> I've noticed that PBOs are mentioned in WebGL 2.0 spec, but not much info >> apart of just that mention: https://www.khronos.org/registry/webgl/specs/ >> latest/2.0/#4.2 >> PBOs would allow to get render target data into buffers without stalling >> GPU pipeline, and then read them. >> >> How does getBufferSubDataAsync relates to PBOs? 
>> >> Cheers, >> Max >> >> On 4 January 2017 at 05:27, Kenneth Russell wrote: >> >> Apologies for not discussing this extension on public_webgl before >> introducing it as a draft in the WebGL extension registry. >> >> The cost of synchronous glReadPixels has been a longstanding problem in >> WebGL. The Chrome browser specifically has a particularly deep graphics >> pipeline, and draining it with a synchronous call each frame imposes a >> too-great performance penalty. This has forced applications to rewrite >> certain algorithms when porting to WebGL. >> >> getBufferSubDataAsync is a direct parallel to getBufferSubData, and >> solves these performance pitfalls in Chrome. We've gathered data from two >> test cases so far, a GPU-based picking algorithm and a GPGPU global >> illumination algorithm, and the results look good. We will present this >> data on public_webgl soon, when making a case for moving the extension >> forward. >> >> -Ken >> >> >> >> On Mon, Jan 2, 2017 at 2:27 PM, Maksims Mihejevs >> wrote: >> >> Worth mentioning that promises are extremely bad for GC and real-time >> applications, they do not provide a developer enough control to structure >> logic so to avoids any allocations. >> >> Promises - are not good for real-time at all, and lead to issues with GC. >> Any API in WebGL that is meant to be used in real-time applications should >> not be based on API's that are not real-time friendly. >> >> On 2 January 2017 at 20:49, Florian B?sch wrote: >> >> Upon thinking about this extension, I don't think it should exist at all. >> Ideally the mapBuffer semantic would be exposed. But even if it isn't, it >> shall not be that an extension is required to express functionality already >> found in the core functionality of the underlying ES specification. >> >> Furthermore, getBufferSubDataAsync does not adequately express the >> reality of map/flush/unmap, and hides the fact that unmap/flush are still >> synchronizing calls happening. However getBufferSubDataAsync obstructs >> appropriate code dealing with proper insertion of synchronization points. >> >> In addition, it would lead to allocating promises once or many times per >> frame, and since tracking would be required in some instances, would also >> lead to allocating a closure once or many times a call. An issue that map >> buffer range does not exhibit. >> >> Due to the lack of discussion of this feature, I believe a great >> disservice is done to WebGL 2 by the introduction of these ideas/APIs and I >> strongly suggest to withdraw this from draft immediately and go back to the >> drawing board. >> >> >> >> On Mon, Jan 2, 2017 at 5:48 PM, Florian B?sch wrote: >> >> This extension https://www.khronos.org/registry/webgl/extensions/ >> WEBGL_get_buffer_sub_data_async/ has been introduced and elevated to >> draft without any public discussion. >> >> In a nutshell it proposes a new WebGL2 function called >> getBufferSubDataAsync which returns a promise that will be called >> eventually with the buffer data. >> >> I think there are several problems: >> >> 1. The extension process states that "*Extensions move through four >> states during their development: proposed, draft, community approved, and >> Khronos ratified**"*. This extension never moved through the proposal >> stage. >> 2. The extension introduces promises to the WebGL API. This requires >> a more fundamental discussion. >> 3. A discussion if this extension is required if WebWorkers can >> access the same context as the main thread has not happened. 
>> >> This extension should be in proposal status, and the necessary >> discussions should happen first. >> >> >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From oet...@ Thu Jan 5 09:22:53 2017 From: oet...@ (Olli Etuaho) Date: Thu, 5 Jan 2017 17:22:53 +0000 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: <3141DB03-91E4-42F4-BC59-BAB338C535CE@callow.im> References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> <3141DB03-91E4-42F4-BC59-BAB338C535CE@callow.im> Message-ID: <98c9054182464a3e9ce574c435eb8db2@ukmail101.nvidia.com> The proposed way to determine support for stereo default framebuffer is to query the actual context creation parameter. An application that wants to render to WebVR needs to create a WebGL context regardless of if the stereo attribute is supported or not, so I don't see a very strong use case for another mechanism for querying stereo support. In the current proposal stereo can't be toggled on and off after context creation, but monoscopic contexts like the browser page compositor only display the left buffer. So the application can simply elect not to render to the right buffer at times when only one buffer is being displayed. Florian, by rendering multiple views into one framebuffer, do you mean more than 2 views? The extension does include the possibility to render into texture arrays to cover some use cases like this. The default framebuffer only supports 2 buffers, though, because it's been designed with the current WebVR spec in mind, which also hard-codes 2 buffers. With regards to version numbering, I don't think that there's grounds for requiring a different context id for getContext(). Writing an entirely separate spec document for the version with stereo also seems like it would only complicate things, so I hope we can keep it in the WebGL 1 spec document. Other than that I'm happy with the version with stereo to be called either 1.0.x or 1.1. -Olli From: Mark Callow [mailto:khronos...@] Sent: torstaina 5. tammikuuta 2017 9.17 To: Olli Etuaho Cc: Florian B?sch ; Maksims Mihejevs ; public webgl Subject: Re: [Public WebGL] WEBGL_multiview discussion On Jan 4, 2017, at 20:53, Olli Etuaho > wrote: The extension document includes proposed changes to the core WebGL 1.0 spec to enable creation of stereo default framebuffers. Adding functionality like this means it is no longer WebGL 1.0. We would have to create a WebGL 1.1. Is there any way for an application to determine that the implementation supports a stereo/multiview default framebuffer absent attempting to create the context with one? This question is valid regardless of version numbering. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Thu Jan 5 16:01:57 2017 From: kbr...@ (Kenneth Russell) Date: Thu, 5 Jan 2017 16:01:57 -0800 Subject: [Public WebGL] Re: WEBGL_get_buffer_sub_data_async In-Reply-To: References: Message-ID: On Thu, Jan 5, 2017 at 2:55 AM, Maksims Mihejevs wrote: > Thank you for answer, this examples make sense. > > The only concern as mentioned before are: Promises. There is no a single > use of promises in WebGL, and there is a reason why they should not be used > in real-time applications if they can be avoided (which is the case). > > If possible Fence (Sync Objects) were involved in design process? 
Could > something like this be achieved by using more generic functionality in GL > such as Sync Objects? > Promises were chosen for a particular reason. Even if a primitive like mapBufferRange were exposed in WebGL, a typed array would still need to be allocated as the return result. There's no provision in the ECMAScript specification for "re-pointing" a typed array at a new backing store, although Chrome's V8 team proposed this a while back. Mapping a buffer's range at the OpenGL ES level returns a void*, and that would need to be exposed to ECMAScript. Passing in an already-allocated typed array wouldn't work in the current form of the APIs. Given that an allocation would be needed anyway, returning a Promise is much more web-friendly. It avoids the need for the application to deal with sync objects, and potentially inconsistent states of the returned typed array. getBufferSubDataAsync lets the application specify where it wants the returned data to be copied. -Ken > Kind Regards, > Max > > On 4 January 2017 at 18:55, Kai Ninomiya wrote: > >> Max, >> >> In my demo [1] there are 3 different possible readback paths: >> * readPixels to CPU [2] >> * readPixels to PBO + getBufferSubData [3] >> * readPixels to PBO + getBufferSubDataAsync [4] >> >> As you said, getBufferSubData(/Async) can be used for reading back any >> buffer data (such as transform feedback or GPGPU shader results). PBO is >> necessary (AFAIK) for async readback from framebuffer data (note: an async >> readPixels wouldn't be as useful as it would block any operation which >> writes to the framebuffer). >> >> -Kai >> >> [1] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >> [2] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >> blob/master/index.html#L245 >> [3] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >> blob/master/index.html#L261 >> [4] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >> blob/master/index.html#L280 >> >> On Wed, Jan 4, 2017 at 5:09 AM Maksims Mihejevs >> wrote: >> >>> From PlayCanvas side, we express a need for async glReadPixels path too. >>> We and our users have been using it in many ways, some of the ways: >>> 1. GPU picking: ID encoded in unique colour, reading pixel under mouse. >>> 2. GPU screen to world: reading pixel from depth texture, and using >>> frustum with math reconstructing world position. >>> 3. Render Target to another Canvas. In Editor we have thumbnail previews >>> for materials, models, cubemaps and other assets. We render them into >>> render target in main context and then reading pixels to create ImageData >>> so it can be put to another canvas using putImageData. >>> 4. Some custom algorithms to generate large amounts of computation heavy >>> data saved into texture, then read on CPU - this depends per case. >>> Sometimes async approach is viable there, sometimes it is not. >>> >>> In many cases glReadPixels is called per each frame, like for picking, >>> and easily can drop frame rate due to blocking nature. >>> >>> I've noticed that PBOs are mentioned in WebGL 2.0 spec, but not much >>> info apart of just that mention: https://www.khronos.o >>> rg/registry/webgl/specs/latest/2.0/#4.2 >>> PBOs would allow to get render target data into buffers without stalling >>> GPU pipeline, and then read them. >>> >>> How does getBufferSubDataAsync relates to PBOs? 
>>> >>> Cheers, >>> Max >>> >>> On 4 January 2017 at 05:27, Kenneth Russell wrote: >>> >>> Apologies for not discussing this extension on public_webgl before >>> introducing it as a draft in the WebGL extension registry. >>> >>> The cost of synchronous glReadPixels has been a longstanding problem in >>> WebGL. The Chrome browser specifically has a particularly deep graphics >>> pipeline, and draining it with a synchronous call each frame imposes a >>> too-great performance penalty. This has forced applications to rewrite >>> certain algorithms when porting to WebGL. >>> >>> getBufferSubDataAsync is a direct parallel to getBufferSubData, and >>> solves these performance pitfalls in Chrome. We've gathered data from two >>> test cases so far, a GPU-based picking algorithm and a GPGPU global >>> illumination algorithm, and the results look good. We will present this >>> data on public_webgl soon, when making a case for moving the extension >>> forward. >>> >>> -Ken >>> >>> >>> >>> On Mon, Jan 2, 2017 at 2:27 PM, Maksims Mihejevs >>> wrote: >>> >>> Worth mentioning that promises are extremely bad for GC and real-time >>> applications, they do not provide a developer enough control to structure >>> logic so to avoids any allocations. >>> >>> Promises - are not good for real-time at all, and lead to issues with >>> GC. Any API in WebGL that is meant to be used in real-time applications >>> should not be based on API's that are not real-time friendly. >>> >>> On 2 January 2017 at 20:49, Florian B?sch wrote: >>> >>> Upon thinking about this extension, I don't think it should exist at >>> all. Ideally the mapBuffer semantic would be exposed. But even if it isn't, >>> it shall not be that an extension is required to express functionality >>> already found in the core functionality of the underlying ES specification. >>> >>> Furthermore, getBufferSubDataAsync does not adequately express the >>> reality of map/flush/unmap, and hides the fact that unmap/flush are still >>> synchronizing calls happening. However getBufferSubDataAsync obstructs >>> appropriate code dealing with proper insertion of synchronization points. >>> >>> In addition, it would lead to allocating promises once or many times per >>> frame, and since tracking would be required in some instances, would also >>> lead to allocating a closure once or many times a call. An issue that map >>> buffer range does not exhibit. >>> >>> Due to the lack of discussion of this feature, I believe a great >>> disservice is done to WebGL 2 by the introduction of these ideas/APIs and I >>> strongly suggest to withdraw this from draft immediately and go back to the >>> drawing board. >>> >>> >>> >>> On Mon, Jan 2, 2017 at 5:48 PM, Florian B?sch wrote: >>> >>> This extension https://www.khronos.org/registry/webgl/extensions/ >>> WEBGL_get_buffer_sub_data_async/ has been introduced and elevated to >>> draft without any public discussion. >>> >>> In a nutshell it proposes a new WebGL2 function called >>> getBufferSubDataAsync which returns a promise that will be called >>> eventually with the buffer data. >>> >>> I think there are several problems: >>> >>> 1. The extension process states that "*Extensions move through four >>> states during their development: proposed, draft, community approved, and >>> Khronos ratified**"*. This extension never moved through the >>> proposal stage. >>> 2. The extension introduces promises to the WebGL API. This requires >>> a more fundamental discussion. >>> 3. 
A discussion if this extension is required if WebWorkers can >>> access the same context as the main thread has not happened. >>> >>> This extension should be in proposal status, and the necessary >>> discussions should happen first. >>> >>> >>> >>> >>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Thu Jan 5 16:02:12 2017 From: kbr...@ (Kenneth Russell) Date: Thu, 5 Jan 2017 16:02:12 -0800 Subject: [Public WebGL] Re: WEBGL_get_buffer_sub_data_async In-Reply-To: References: Message-ID: They're linked from http://crbug.com/616554 . -Ken On Thu, Jan 5, 2017 at 8:33 AM, Florian B?sch wrote: > Could you link to the changesets that implement getBufferSubDataAsync in > google chrome? > > On Wed, Jan 4, 2017 at 7:55 PM, Kai Ninomiya wrote: > >> Max, >> >> In my demo [1] there are 3 different possible readback paths: >> * readPixels to CPU [2] >> * readPixels to PBO + getBufferSubData [3] >> * readPixels to PBO + getBufferSubDataAsync [4] >> >> As you said, getBufferSubData(/Async) can be used for reading back any >> buffer data (such as transform feedback or GPGPU shader results). PBO is >> necessary (AFAIK) for async readback from framebuffer data (note: an async >> readPixels wouldn't be as useful as it would block any operation which >> writes to the framebuffer). >> >> -Kai >> >> [1] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >> [2] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >> blob/master/index.html#L245 >> [3] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >> blob/master/index.html#L261 >> [4] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >> blob/master/index.html#L280 >> >> On Wed, Jan 4, 2017 at 5:09 AM Maksims Mihejevs >> wrote: >> >>> From PlayCanvas side, we express a need for async glReadPixels path too. >>> We and our users have been using it in many ways, some of the ways: >>> 1. GPU picking: ID encoded in unique colour, reading pixel under mouse. >>> 2. GPU screen to world: reading pixel from depth texture, and using >>> frustum with math reconstructing world position. >>> 3. Render Target to another Canvas. In Editor we have thumbnail previews >>> for materials, models, cubemaps and other assets. We render them into >>> render target in main context and then reading pixels to create ImageData >>> so it can be put to another canvas using putImageData. >>> 4. Some custom algorithms to generate large amounts of computation heavy >>> data saved into texture, then read on CPU - this depends per case. >>> Sometimes async approach is viable there, sometimes it is not. >>> >>> In many cases glReadPixels is called per each frame, like for picking, >>> and easily can drop frame rate due to blocking nature. >>> >>> I've noticed that PBOs are mentioned in WebGL 2.0 spec, but not much >>> info apart of just that mention: https://www.khronos.o >>> rg/registry/webgl/specs/latest/2.0/#4.2 >>> PBOs would allow to get render target data into buffers without stalling >>> GPU pipeline, and then read them. >>> >>> How does getBufferSubDataAsync relates to PBOs? >>> >>> Cheers, >>> Max >>> >>> On 4 January 2017 at 05:27, Kenneth Russell wrote: >>> >>> Apologies for not discussing this extension on public_webgl before >>> introducing it as a draft in the WebGL extension registry. >>> >>> The cost of synchronous glReadPixels has been a longstanding problem in >>> WebGL. 
The Chrome browser specifically has a particularly deep graphics >>> pipeline, and draining it with a synchronous call each frame imposes a >>> too-great performance penalty. This has forced applications to rewrite >>> certain algorithms when porting to WebGL. >>> >>> getBufferSubDataAsync is a direct parallel to getBufferSubData, and >>> solves these performance pitfalls in Chrome. We've gathered data from two >>> test cases so far, a GPU-based picking algorithm and a GPGPU global >>> illumination algorithm, and the results look good. We will present this >>> data on public_webgl soon, when making a case for moving the extension >>> forward. >>> >>> -Ken >>> >>> >>> >>> On Mon, Jan 2, 2017 at 2:27 PM, Maksims Mihejevs >>> wrote: >>> >>> Worth mentioning that promises are extremely bad for GC and real-time >>> applications, they do not provide a developer enough control to structure >>> logic so to avoids any allocations. >>> >>> Promises - are not good for real-time at all, and lead to issues with >>> GC. Any API in WebGL that is meant to be used in real-time applications >>> should not be based on API's that are not real-time friendly. >>> >>> On 2 January 2017 at 20:49, Florian B?sch wrote: >>> >>> Upon thinking about this extension, I don't think it should exist at >>> all. Ideally the mapBuffer semantic would be exposed. But even if it isn't, >>> it shall not be that an extension is required to express functionality >>> already found in the core functionality of the underlying ES specification. >>> >>> Furthermore, getBufferSubDataAsync does not adequately express the >>> reality of map/flush/unmap, and hides the fact that unmap/flush are still >>> synchronizing calls happening. However getBufferSubDataAsync obstructs >>> appropriate code dealing with proper insertion of synchronization points. >>> >>> In addition, it would lead to allocating promises once or many times per >>> frame, and since tracking would be required in some instances, would also >>> lead to allocating a closure once or many times a call. An issue that map >>> buffer range does not exhibit. >>> >>> Due to the lack of discussion of this feature, I believe a great >>> disservice is done to WebGL 2 by the introduction of these ideas/APIs and I >>> strongly suggest to withdraw this from draft immediately and go back to the >>> drawing board. >>> >>> >>> >>> On Mon, Jan 2, 2017 at 5:48 PM, Florian B?sch wrote: >>> >>> This extension https://www.khronos.org/registry/webgl/extensions/ >>> WEBGL_get_buffer_sub_data_async/ has been introduced and elevated to >>> draft without any public discussion. >>> >>> In a nutshell it proposes a new WebGL2 function called >>> getBufferSubDataAsync which returns a promise that will be called >>> eventually with the buffer data. >>> >>> I think there are several problems: >>> >>> 1. The extension process states that "*Extensions move through four >>> states during their development: proposed, draft, community approved, and >>> Khronos ratified**"*. This extension never moved through the >>> proposal stage. >>> 2. The extension introduces promises to the WebGL API. This requires >>> a more fundamental discussion. >>> 3. A discussion if this extension is required if WebWorkers can >>> access the same context as the main thread has not happened. >>> >>> This extension should be in proposal status, and the necessary >>> discussions should happen first. >>> >>> >>> >>> >>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jgi...@ Thu Jan 5 17:03:59 2017 From: jgi...@ (Jeff Gilbert) Date: Thu, 5 Jan 2017 17:03:59 -0800 Subject: [Public WebGL] EXT_texture_storage In-Reply-To: References: Message-ID: Let's reject. It'd be nice to have, but its potential impact seems minimal. On Tue, Jan 3, 2017 at 9:29 PM, Kenneth Russell wrote: > I agree that it'd be better to reject this extension rather than move it > forward. Now that WebGL 2.0's on the verge of shipping in multiple browsers, > I think we should encourage more implementations rather than continue to add > extensions to WebGL 1.0. > > > On Tue, Jan 3, 2017 at 4:18 PM, Zhenyao Mo wrote: >> >> My main concern is we won't have full EXT_texture_storage on top of DX9, >> on which some WebGL1 implementations are based. >> >> To me, a better path is just to switch to WebGL2 whenever it's possible, >> where texture storage is part of core. >> >> On Mon, Jan 2, 2017 at 8:54 AM, Florian B?sch wrote: >>> >>> No change has occured on >>> https://www.khronos.org/registry/webgl/extensions/proposals/EXT_texture_storage/ >>> since September 2015 >>> >>> Can this extension be elevated to draft? >> >> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From max...@ Fri Jan 6 04:52:54 2017 From: max...@ (Maksims Mihejevs) Date: Fri, 6 Jan 2017 12:52:54 +0000 Subject: [Public WebGL] Re: WEBGL_get_buffer_sub_data_async In-Reply-To: References: Message-ID: Why not to use classic callbacks with (err, response) arguments? On 6 January 2017 at 00:01, Kenneth Russell wrote: > On Thu, Jan 5, 2017 at 2:55 AM, Maksims Mihejevs > wrote: > >> Thank you for answer, this examples make sense. >> >> The only concern as mentioned before are: Promises. There is no a single >> use of promises in WebGL, and there is a reason why they should not be used >> in real-time applications if they can be avoided (which is the case). >> >> If possible Fence (Sync Objects) were involved in design process? Could >> something like this be achieved by using more generic functionality in GL >> such as Sync Objects? >> > > Promises were chosen for a particular reason. Even if a primitive like > mapBufferRange were exposed in WebGL, a typed array would still need to be > allocated as the return result. There's no provision in the ECMAScript > specification for "re-pointing" a typed array at a new backing store, > although Chrome's V8 team proposed this a while back. Mapping a buffer's > range at the OpenGL ES level returns a void*, and that would need to be > exposed to ECMAScript. Passing in an already-allocated typed array wouldn't > work in the current form of the APIs. > > Given that an allocation would be needed anyway, returning a Promise is > much more web-friendly. It avoids the need for the application to deal with > sync objects, and potentially inconsistent states of the returned typed > array. getBufferSubDataAsync lets the application specify where it wants > the returned data to be copied. 
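(For reference, under the draft's getBufferSubData-parallel argument list the call would look roughly as sketched below. It assumes the method hangs off the extension object, and "pbo" stands in for a previously populated PIXEL_PACK_BUFFER; treat it as a sketch, not the final signature.)

var ext = gl.getExtension("WEBGL_get_buffer_sub_data_async");
var dst = new Float32Array(256);              // destination chosen by the application
gl.bindBuffer(gl.PIXEL_PACK_BUFFER, pbo);     // pbo: buffer filled earlier (placeholder)
ext.getBufferSubDataAsync(gl.PIXEL_PACK_BUFFER, 0, dst).then(function (view) {
  // resolves once the data has been copied into the destination the application supplied
});
gl.bindBuffer(gl.PIXEL_PACK_BUFFER, null);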
> > -Ken > > > >> Kind Regards, >> Max >> >> On 4 January 2017 at 18:55, Kai Ninomiya wrote: >> >>> Max, >>> >>> In my demo [1] there are 3 different possible readback paths: >>> * readPixels to CPU [2] >>> * readPixels to PBO + getBufferSubData [3] >>> * readPixels to PBO + getBufferSubDataAsync [4] >>> >>> As you said, getBufferSubData(/Async) can be used for reading back any >>> buffer data (such as transform feedback or GPGPU shader results). PBO is >>> necessary (AFAIK) for async readback from framebuffer data (note: an async >>> readPixels wouldn't be as useful as it would block any operation which >>> writes to the framebuffer). >>> >>> -Kai >>> >>> [1] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >>> [2] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >>> blob/master/index.html#L245 >>> [3] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >>> blob/master/index.html#L261 >>> [4] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >>> blob/master/index.html#L280 >>> >>> On Wed, Jan 4, 2017 at 5:09 AM Maksims Mihejevs >>> wrote: >>> >>>> From PlayCanvas side, we express a need for async glReadPixels path >>>> too. We and our users have been using it in many ways, some of the ways: >>>> 1. GPU picking: ID encoded in unique colour, reading pixel under mouse. >>>> 2. GPU screen to world: reading pixel from depth texture, and using >>>> frustum with math reconstructing world position. >>>> 3. Render Target to another Canvas. In Editor we have thumbnail >>>> previews for materials, models, cubemaps and other assets. We render them >>>> into render target in main context and then reading pixels to create >>>> ImageData so it can be put to another canvas using putImageData. >>>> 4. Some custom algorithms to generate large amounts of computation >>>> heavy data saved into texture, then read on CPU - this depends per case. >>>> Sometimes async approach is viable there, sometimes it is not. >>>> >>>> In many cases glReadPixels is called per each frame, like for picking, >>>> and easily can drop frame rate due to blocking nature. >>>> >>>> I've noticed that PBOs are mentioned in WebGL 2.0 spec, but not much >>>> info apart of just that mention: https://www.khronos.o >>>> rg/registry/webgl/specs/latest/2.0/#4.2 >>>> PBOs would allow to get render target data into buffers without >>>> stalling GPU pipeline, and then read them. >>>> >>>> How does getBufferSubDataAsync relates to PBOs? >>>> >>>> Cheers, >>>> Max >>>> >>>> On 4 January 2017 at 05:27, Kenneth Russell wrote: >>>> >>>> Apologies for not discussing this extension on public_webgl before >>>> introducing it as a draft in the WebGL extension registry. >>>> >>>> The cost of synchronous glReadPixels has been a longstanding problem in >>>> WebGL. The Chrome browser specifically has a particularly deep graphics >>>> pipeline, and draining it with a synchronous call each frame imposes a >>>> too-great performance penalty. This has forced applications to rewrite >>>> certain algorithms when porting to WebGL. >>>> >>>> getBufferSubDataAsync is a direct parallel to getBufferSubData, and >>>> solves these performance pitfalls in Chrome. We've gathered data from two >>>> test cases so far, a GPU-based picking algorithm and a GPGPU global >>>> illumination algorithm, and the results look good. We will present this >>>> data on public_webgl soon, when making a case for moving the extension >>>> forward. 
>>>> >>>> -Ken >>>> >>>> >>>> >>>> On Mon, Jan 2, 2017 at 2:27 PM, Maksims Mihejevs >>>> wrote: >>>> >>>> Worth mentioning that promises are extremely bad for GC and real-time >>>> applications, they do not provide a developer enough control to structure >>>> logic so to avoids any allocations. >>>> >>>> Promises - are not good for real-time at all, and lead to issues with >>>> GC. Any API in WebGL that is meant to be used in real-time applications >>>> should not be based on API's that are not real-time friendly. >>>> >>>> On 2 January 2017 at 20:49, Florian B?sch wrote: >>>> >>>> Upon thinking about this extension, I don't think it should exist at >>>> all. Ideally the mapBuffer semantic would be exposed. But even if it isn't, >>>> it shall not be that an extension is required to express functionality >>>> already found in the core functionality of the underlying ES specification. >>>> >>>> Furthermore, getBufferSubDataAsync does not adequately express the >>>> reality of map/flush/unmap, and hides the fact that unmap/flush are still >>>> synchronizing calls happening. However getBufferSubDataAsync obstructs >>>> appropriate code dealing with proper insertion of synchronization points. >>>> >>>> In addition, it would lead to allocating promises once or many times >>>> per frame, and since tracking would be required in some instances, would >>>> also lead to allocating a closure once or many times a call. An issue that >>>> map buffer range does not exhibit. >>>> >>>> Due to the lack of discussion of this feature, I believe a great >>>> disservice is done to WebGL 2 by the introduction of these ideas/APIs and I >>>> strongly suggest to withdraw this from draft immediately and go back to the >>>> drawing board. >>>> >>>> >>>> >>>> On Mon, Jan 2, 2017 at 5:48 PM, Florian B?sch wrote: >>>> >>>> This extension https://www.khronos.org/registry/webgl/extensions/ >>>> WEBGL_get_buffer_sub_data_async/ has been introduced and elevated to >>>> draft without any public discussion. >>>> >>>> In a nutshell it proposes a new WebGL2 function called >>>> getBufferSubDataAsync which returns a promise that will be called >>>> eventually with the buffer data. >>>> >>>> I think there are several problems: >>>> >>>> 1. The extension process states that "*Extensions move through four >>>> states during their development: proposed, draft, community approved, and >>>> Khronos ratified**"*. This extension never moved through the >>>> proposal stage. >>>> 2. The extension introduces promises to the WebGL API. This >>>> requires a more fundamental discussion. >>>> 3. A discussion if this extension is required if WebWorkers can >>>> access the same context as the main thread has not happened. >>>> >>>> This extension should be in proposal status, and the necessary >>>> discussions should happen first. >>>> >>>> >>>> >>>> >>>> >>>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Fri Jan 6 08:36:04 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Fri, 6 Jan 2017 17:36:04 +0100 Subject: [Public WebGL] Re: WEBGL_get_buffer_sub_data_async In-Reply-To: References: Message-ID: The problem isn't just the promise allocation, it's also the closure allocation. And while you can avoid that in some cases, you can't avoid it in all cases. And beyond that, the further problem of registering callbacks to do stuff would be that it lifts rendering logic out of the requestAnimationFrame loop and occurs... whenever. Which isn't a proper way to render. 
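Concretely, assuming the draft's promise-returning getBufferSubDataAsync on the extension object and a picking PBO filled via readPixels somewhere in the frame (gl, pickPBO and the buffer sizes below are placeholders), the pattern degenerates into a sketch like this:

// gl: an existing WebGL2RenderingContext.
var ext = gl.getExtension("WEBGL_get_buffer_sub_data_async");
var pickPixels = new Uint8Array(4);          // caller-allocated destination (1 RGBA pixel)
var pickResult = null;                       // filled in "whenever" the promise resolves

function queueReadback(pickPBO) {            // pickPBO: a PIXEL_PACK_BUFFER (placeholder)
  gl.bindBuffer(gl.PIXEL_PACK_BUFFER, pickPBO);
  ext.getBufferSubDataAsync(gl.PIXEL_PACK_BUFFER, 0, pickPixels).then(function (data) {
    pickResult = data;                       // the callback only stashes the result
  });
  gl.bindBuffer(gl.PIXEL_PACK_BUFFER, null);
}

function frame() {
  if (pickResult !== null) {                 // consume it at a predictable point instead
    // ... act on pickResult ...
    pickResult = null;
  }
  // ... render, readPixels into the PBO, queueReadback(...) as needed ...
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);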
So in practice you'd end up attaching a callback just to collect the result and put it in a variable you'll check for not being null at the start of the next requestAnimationFrame invocation. Of course when you do that, the entire exercise of providing a callback is moot. Since that'd be the most common usage pattern that everybody would recommend based on sound reasoning about the predictability of rendering in a requestAnimationFrame loop, that makes promises/callbacks entirely moot and the wrong thing to do. On Fri, Jan 6, 2017 at 1:52 PM, Maksims Mihejevs wrote: > Why not to use classic callbacks with (err, response) arguments? > > On 6 January 2017 at 00:01, Kenneth Russell wrote: > >> On Thu, Jan 5, 2017 at 2:55 AM, Maksims Mihejevs >> wrote: >> >>> Thank you for answer, this examples make sense. >>> >>> The only concern as mentioned before are: Promises. There is no a single >>> use of promises in WebGL, and there is a reason why they should not be used >>> in real-time applications if they can be avoided (which is the case). >>> >>> If possible Fence (Sync Objects) were involved in design process? Could >>> something like this be achieved by using more generic functionality in GL >>> such as Sync Objects? >>> >> >> Promises were chosen for a particular reason. Even if a primitive like >> mapBufferRange were exposed in WebGL, a typed array would still need to be >> allocated as the return result. There's no provision in the ECMAScript >> specification for "re-pointing" a typed array at a new backing store, >> although Chrome's V8 team proposed this a while back. Mapping a buffer's >> range at the OpenGL ES level returns a void*, and that would need to be >> exposed to ECMAScript. Passing in an already-allocated typed array wouldn't >> work in the current form of the APIs. >> >> Given that an allocation would be needed anyway, returning a Promise is >> much more web-friendly. It avoids the need for the application to deal with >> sync objects, and potentially inconsistent states of the returned typed >> array. getBufferSubDataAsync lets the application specify where it wants >> the returned data to be copied. >> >> -Ken >> >> >> >>> Kind Regards, >>> Max >>> >>> On 4 January 2017 at 18:55, Kai Ninomiya wrote: >>> >>>> Max, >>>> >>>> In my demo [1] there are 3 different possible readback paths: >>>> * readPixels to CPU [2] >>>> * readPixels to PBO + getBufferSubData [3] >>>> * readPixels to PBO + getBufferSubDataAsync [4] >>>> >>>> As you said, getBufferSubData(/Async) can be used for reading back any >>>> buffer data (such as transform feedback or GPGPU shader results). PBO is >>>> necessary (AFAIK) for async readback from framebuffer data (note: an async >>>> readPixels wouldn't be as useful as it would block any operation which >>>> writes to the framebuffer). >>>> >>>> -Kai >>>> >>>> [1] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >>>> [2] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >>>> blob/master/index.html#L245 >>>> [3] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >>>> blob/master/index.html#L261 >>>> [4] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >>>> blob/master/index.html#L280 >>>> >>>> On Wed, Jan 4, 2017 at 5:09 AM Maksims Mihejevs >>>> wrote: >>>> >>>>> From PlayCanvas side, we express a need for async glReadPixels path >>>>> too. We and our users have been using it in many ways, some of the ways: >>>>> 1. GPU picking: ID encoded in unique colour, reading pixel under mouse. >>>>> 2. 
GPU screen to world: reading pixel from depth texture, and using >>>>> frustum with math reconstructing world position. >>>>> 3. Render Target to another Canvas. In Editor we have thumbnail >>>>> previews for materials, models, cubemaps and other assets. We render them >>>>> into render target in main context and then reading pixels to create >>>>> ImageData so it can be put to another canvas using putImageData. >>>>> 4. Some custom algorithms to generate large amounts of computation >>>>> heavy data saved into texture, then read on CPU - this depends per case. >>>>> Sometimes async approach is viable there, sometimes it is not. >>>>> >>>>> In many cases glReadPixels is called per each frame, like for picking, >>>>> and easily can drop frame rate due to blocking nature. >>>>> >>>>> I've noticed that PBOs are mentioned in WebGL 2.0 spec, but not much >>>>> info apart of just that mention: https://www.khronos.o >>>>> rg/registry/webgl/specs/latest/2.0/#4.2 >>>>> PBOs would allow to get render target data into buffers without >>>>> stalling GPU pipeline, and then read them. >>>>> >>>>> How does getBufferSubDataAsync relates to PBOs? >>>>> >>>>> Cheers, >>>>> Max >>>>> >>>>> On 4 January 2017 at 05:27, Kenneth Russell wrote: >>>>> >>>>> Apologies for not discussing this extension on public_webgl before >>>>> introducing it as a draft in the WebGL extension registry. >>>>> >>>>> The cost of synchronous glReadPixels has been a longstanding problem >>>>> in WebGL. The Chrome browser specifically has a particularly deep graphics >>>>> pipeline, and draining it with a synchronous call each frame imposes a >>>>> too-great performance penalty. This has forced applications to rewrite >>>>> certain algorithms when porting to WebGL. >>>>> >>>>> getBufferSubDataAsync is a direct parallel to getBufferSubData, and >>>>> solves these performance pitfalls in Chrome. We've gathered data from two >>>>> test cases so far, a GPU-based picking algorithm and a GPGPU global >>>>> illumination algorithm, and the results look good. We will present this >>>>> data on public_webgl soon, when making a case for moving the extension >>>>> forward. >>>>> >>>>> -Ken >>>>> >>>>> >>>>> >>>>> On Mon, Jan 2, 2017 at 2:27 PM, Maksims Mihejevs >>>>> wrote: >>>>> >>>>> Worth mentioning that promises are extremely bad for GC and real-time >>>>> applications, they do not provide a developer enough control to structure >>>>> logic so to avoids any allocations. >>>>> >>>>> Promises - are not good for real-time at all, and lead to issues with >>>>> GC. Any API in WebGL that is meant to be used in real-time applications >>>>> should not be based on API's that are not real-time friendly. >>>>> >>>>> On 2 January 2017 at 20:49, Florian B?sch wrote: >>>>> >>>>> Upon thinking about this extension, I don't think it should exist at >>>>> all. Ideally the mapBuffer semantic would be exposed. But even if it isn't, >>>>> it shall not be that an extension is required to express functionality >>>>> already found in the core functionality of the underlying ES specification. >>>>> >>>>> Furthermore, getBufferSubDataAsync does not adequately express the >>>>> reality of map/flush/unmap, and hides the fact that unmap/flush are still >>>>> synchronizing calls happening. However getBufferSubDataAsync obstructs >>>>> appropriate code dealing with proper insertion of synchronization points. 
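For comparison, a sketch of explicit synchronization with WebGL 2 fence objects, where the application itself decides where the sync point goes and polls it from its own requestAnimationFrame loop (pbo, w, h and dest are assumed to be set up elsewhere):

    gl.bindBuffer(gl.PIXEL_PACK_BUFFER, pbo);
    gl.readPixels(0, 0, w, h, gl.RGBA, gl.UNSIGNED_BYTE, 0); // lands in the PBO
    var sync = gl.fenceSync(gl.SYNC_GPU_COMMANDS_COMPLETE, 0);
    gl.flush();

    // Call once per requestAnimationFrame tick until it returns true.
    function pollReadback() {
      var status = gl.clientWaitSync(sync, 0, 0); // non-blocking poll
      if (status === gl.ALREADY_SIGNALED || status === gl.CONDITION_SATISFIED) {
        gl.deleteSync(sync);
        gl.bindBuffer(gl.PIXEL_PACK_BUFFER, pbo);
        gl.getBufferSubData(gl.PIXEL_PACK_BUFFER, 0, dest); // data is ready, no stall
        return true;
      }
      return false;
    }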
>>>>> >>>>> In addition, it would lead to allocating promises once or many times >>>>> per frame, and since tracking would be required in some instances, would >>>>> also lead to allocating a closure once or many times a call. An issue that >>>>> map buffer range does not exhibit. >>>>> >>>>> Due to the lack of discussion of this feature, I believe a great >>>>> disservice is done to WebGL 2 by the introduction of these ideas/APIs and I >>>>> strongly suggest to withdraw this from draft immediately and go back to the >>>>> drawing board. >>>>> >>>>> >>>>> >>>>> On Mon, Jan 2, 2017 at 5:48 PM, Florian B?sch >>>>> wrote: >>>>> >>>>> This extension https://www.khronos.org/registry/webgl/extensions/ >>>>> WEBGL_get_buffer_sub_data_async/ has been introduced and elevated to >>>>> draft without any public discussion. >>>>> >>>>> In a nutshell it proposes a new WebGL2 function called >>>>> getBufferSubDataAsync which returns a promise that will be called >>>>> eventually with the buffer data. >>>>> >>>>> I think there are several problems: >>>>> >>>>> 1. The extension process states that "*Extensions move through >>>>> four states during their development: proposed, draft, community approved, >>>>> and Khronos ratified**"*. This extension never moved through the >>>>> proposal stage. >>>>> 2. The extension introduces promises to the WebGL API. This >>>>> requires a more fundamental discussion. >>>>> 3. A discussion if this extension is required if WebWorkers can >>>>> access the same context as the main thread has not happened. >>>>> >>>>> This extension should be in proposal status, and the necessary >>>>> discussions should happen first. >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From max...@ Fri Jan 6 09:33:41 2017 From: max...@ (Maksims Mihejevs) Date: Fri, 6 Jan 2017 17:33:41 +0000 Subject: [Public WebGL] Re: WEBGL_get_buffer_sub_data_async In-Reply-To: References: Message-ID: That is very good point too. Most common case of similar pattern is mouse, keyboard, touch input - it is in callback, and you end up collecting own state information about input, so to access it in main loop of your app in requestAnimationFrame. Developers coming from real-time applications and other platforms get confused a lot by this all the time. That is based on many users on PlayCanvas from hobbyists to commercial clients do mention it time to times. On 6 January 2017 at 16:36, Florian B?sch wrote: > The problem isn't just the promise allocation, it's also the closure > allocation. And while you can avoid that in some cases, you can't avoid it > in all cases. And beyond that, the further problem of registering callbacks > to do stuff would be that it lifts rendering logic out of the > requestAnimationFrame loop and occurs... whenever. Which isn't a proper way > to render. So in practice you'd end up attaching a callback just to collect > the result and put it in a variable you'll check for not being null at the > start of the next requestAnimationFrame invocation. Of course when you do > that, the entire exercise of providing a callback is moot. Since that'd be > the most common usage pattern that everybody would recommend based on sound > reasoning about the predictability of rendering in a requestAnimationFrame > loop, that makes promises/callbacks entirely moot and the wrong thing to do. > > On Fri, Jan 6, 2017 at 1:52 PM, Maksims Mihejevs > wrote: > >> Why not to use classic callbacks with (err, response) arguments? 
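As a sketch of the input-state pattern Maksims describes at the top of this message (names are illustrative, not from the thread): the callback only records state, and all real work happens in the requestAnimationFrame loop.

    var pointer = { x: 0, y: 0 };
    canvas.addEventListener('mousemove', function (e) {
      pointer.x = e.clientX; // collect state only; no rendering here
      pointer.y = e.clientY;
    });

    function frame() {
      pickAt(pointer.x, pointer.y); // consume the collected state
      render();
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);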
>> >> On 6 January 2017 at 00:01, Kenneth Russell wrote: >> >>> On Thu, Jan 5, 2017 at 2:55 AM, Maksims Mihejevs >>> wrote: >>> >>>> Thank you for answer, this examples make sense. >>>> >>>> The only concern as mentioned before are: Promises. There is no a >>>> single use of promises in WebGL, and there is a reason why they should not >>>> be used in real-time applications if they can be avoided (which is the >>>> case). >>>> >>>> If possible Fence (Sync Objects) were involved in design process? Could >>>> something like this be achieved by using more generic functionality in GL >>>> such as Sync Objects? >>>> >>> >>> Promises were chosen for a particular reason. Even if a primitive like >>> mapBufferRange were exposed in WebGL, a typed array would still need to be >>> allocated as the return result. There's no provision in the ECMAScript >>> specification for "re-pointing" a typed array at a new backing store, >>> although Chrome's V8 team proposed this a while back. Mapping a buffer's >>> range at the OpenGL ES level returns a void*, and that would need to be >>> exposed to ECMAScript. Passing in an already-allocated typed array wouldn't >>> work in the current form of the APIs. >>> >>> Given that an allocation would be needed anyway, returning a Promise is >>> much more web-friendly. It avoids the need for the application to deal with >>> sync objects, and potentially inconsistent states of the returned typed >>> array. getBufferSubDataAsync lets the application specify where it wants >>> the returned data to be copied. >>> >>> -Ken >>> >>> >>> >>>> Kind Regards, >>>> Max >>>> >>>> On 4 January 2017 at 18:55, Kai Ninomiya wrote: >>>> >>>>> Max, >>>>> >>>>> In my demo [1] there are 3 different possible readback paths: >>>>> * readPixels to CPU [2] >>>>> * readPixels to PBO + getBufferSubData [3] >>>>> * readPixels to PBO + getBufferSubDataAsync [4] >>>>> >>>>> As you said, getBufferSubData(/Async) can be used for reading back any >>>>> buffer data (such as transform feedback or GPGPU shader results). PBO is >>>>> necessary (AFAIK) for async readback from framebuffer data (note: an async >>>>> readPixels wouldn't be as useful as it would block any operation which >>>>> writes to the framebuffer). >>>>> >>>>> -Kai >>>>> >>>>> [1] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >>>>> [2] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >>>>> blob/master/index.html#L245 >>>>> [3] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >>>>> blob/master/index.html#L261 >>>>> [4] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >>>>> blob/master/index.html#L280 >>>>> >>>>> On Wed, Jan 4, 2017 at 5:09 AM Maksims Mihejevs >>>>> wrote: >>>>> >>>>>> From PlayCanvas side, we express a need for async glReadPixels path >>>>>> too. We and our users have been using it in many ways, some of the ways: >>>>>> 1. GPU picking: ID encoded in unique colour, reading pixel under >>>>>> mouse. >>>>>> 2. GPU screen to world: reading pixel from depth texture, and using >>>>>> frustum with math reconstructing world position. >>>>>> 3. Render Target to another Canvas. In Editor we have thumbnail >>>>>> previews for materials, models, cubemaps and other assets. We render them >>>>>> into render target in main context and then reading pixels to create >>>>>> ImageData so it can be put to another canvas using putImageData. >>>>>> 4. Some custom algorithms to generate large amounts of computation >>>>>> heavy data saved into texture, then read on CPU - this depends per case. 
>>>>>> Sometimes async approach is viable there, sometimes it is not. >>>>>> >>>>>> In many cases glReadPixels is called per each frame, like for >>>>>> picking, and easily can drop frame rate due to blocking nature. >>>>>> >>>>>> I've noticed that PBOs are mentioned in WebGL 2.0 spec, but not much >>>>>> info apart of just that mention: https://www.khronos.o >>>>>> rg/registry/webgl/specs/latest/2.0/#4.2 >>>>>> PBOs would allow to get render target data into buffers without >>>>>> stalling GPU pipeline, and then read them. >>>>>> >>>>>> How does getBufferSubDataAsync relates to PBOs? >>>>>> >>>>>> Cheers, >>>>>> Max >>>>>> >>>>>> On 4 January 2017 at 05:27, Kenneth Russell wrote: >>>>>> >>>>>> Apologies for not discussing this extension on public_webgl before >>>>>> introducing it as a draft in the WebGL extension registry. >>>>>> >>>>>> The cost of synchronous glReadPixels has been a longstanding problem >>>>>> in WebGL. The Chrome browser specifically has a particularly deep graphics >>>>>> pipeline, and draining it with a synchronous call each frame imposes a >>>>>> too-great performance penalty. This has forced applications to rewrite >>>>>> certain algorithms when porting to WebGL. >>>>>> >>>>>> getBufferSubDataAsync is a direct parallel to getBufferSubData, and >>>>>> solves these performance pitfalls in Chrome. We've gathered data from two >>>>>> test cases so far, a GPU-based picking algorithm and a GPGPU global >>>>>> illumination algorithm, and the results look good. We will present this >>>>>> data on public_webgl soon, when making a case for moving the extension >>>>>> forward. >>>>>> >>>>>> -Ken >>>>>> >>>>>> >>>>>> >>>>>> On Mon, Jan 2, 2017 at 2:27 PM, Maksims Mihejevs >>>>>> wrote: >>>>>> >>>>>> Worth mentioning that promises are extremely bad for GC and real-time >>>>>> applications, they do not provide a developer enough control to structure >>>>>> logic so to avoids any allocations. >>>>>> >>>>>> Promises - are not good for real-time at all, and lead to issues with >>>>>> GC. Any API in WebGL that is meant to be used in real-time applications >>>>>> should not be based on API's that are not real-time friendly. >>>>>> >>>>>> On 2 January 2017 at 20:49, Florian B?sch wrote: >>>>>> >>>>>> Upon thinking about this extension, I don't think it should exist at >>>>>> all. Ideally the mapBuffer semantic would be exposed. But even if it isn't, >>>>>> it shall not be that an extension is required to express functionality >>>>>> already found in the core functionality of the underlying ES specification. >>>>>> >>>>>> Furthermore, getBufferSubDataAsync does not adequately express the >>>>>> reality of map/flush/unmap, and hides the fact that unmap/flush are still >>>>>> synchronizing calls happening. However getBufferSubDataAsync obstructs >>>>>> appropriate code dealing with proper insertion of synchronization points. >>>>>> >>>>>> In addition, it would lead to allocating promises once or many times >>>>>> per frame, and since tracking would be required in some instances, would >>>>>> also lead to allocating a closure once or many times a call. An issue that >>>>>> map buffer range does not exhibit. >>>>>> >>>>>> Due to the lack of discussion of this feature, I believe a great >>>>>> disservice is done to WebGL 2 by the introduction of these ideas/APIs and I >>>>>> strongly suggest to withdraw this from draft immediately and go back to the >>>>>> drawing board. 
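For context, the fully synchronous readback path that the extension is meant to replace (WebGL 2 core API; pbo, w, h and dest assumed):

    gl.bindBuffer(gl.PIXEL_PACK_BUFFER, pbo);
    gl.readPixels(0, 0, w, h, gl.RGBA, gl.UNSIGNED_BYTE, 0);
    gl.getBufferSubData(gl.PIXEL_PACK_BUFFER, 0, dest); // blocks until the GPU catches up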
>>>>>> >>>>>> >>>>>> >>>>>> On Mon, Jan 2, 2017 at 5:48 PM, Florian B?sch >>>>>> wrote: >>>>>> >>>>>> This extension https://www.khronos.org/registry/webgl/extensions/ >>>>>> WEBGL_get_buffer_sub_data_async/ has been introduced and elevated to >>>>>> draft without any public discussion. >>>>>> >>>>>> In a nutshell it proposes a new WebGL2 function called >>>>>> getBufferSubDataAsync which returns a promise that will be called >>>>>> eventually with the buffer data. >>>>>> >>>>>> I think there are several problems: >>>>>> >>>>>> 1. The extension process states that "*Extensions move through >>>>>> four states during their development: proposed, draft, community approved, >>>>>> and Khronos ratified**"*. This extension never moved through the >>>>>> proposal stage. >>>>>> 2. The extension introduces promises to the WebGL API. This >>>>>> requires a more fundamental discussion. >>>>>> 3. A discussion if this extension is required if WebWorkers can >>>>>> access the same context as the main thread has not happened. >>>>>> >>>>>> This extension should be in proposal status, and the necessary >>>>>> discussions should happen first. >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sun...@ Fri Jan 6 11:01:01 2017 From: sun...@ (Byungseon Shin) Date: Fri, 06 Jan 2017 19:01:01 +0000 Subject: [Public WebGL] OES_EGL_image_external Extension proposal In-Reply-To: References: Message-ID: Hi WebGL working group members, We have updated proposal by adding following function : - Provides time of frame, texture width and height of HTMLVideoElement's EGLImage. Please take a look at the pull request: Kind regards, Byungseon Shin On Tue, Jan 3, 2017 at 8:48 AM Byungseon Shin wrote: > Hi Florian, > > We are working on update which will cover dynamic_texture concerns. > I plan to upload new version on github for reviewing by this week. > > Happy 2017 year! > Byungseon Shin > > On Tue, Jan 3, 2017 at 1:58 AM Florian B?sch wrote: > > What is the status of the > https://www.khronos.org/registry/webgl/extensions/proposals/OES_EGL_image_external/ > extension? Is work on the spec still going on or is this the final version > that takes into account dynamic_texture concerns? > > On Wed, Oct 12, 2016 at 3:59 AM, Byungseon Shin > wrote: > > Hi Ken, > > Thanks for the feedback. > Of course, we are willing to devote time and eager to enhance WebGL video > rendering performance. > We prefer to use our proposal as a base but we could revise > dynamic_texture as well. > > Kind regards, > Byungseon Shin > > > On Wed, Oct 12, 2016 at 8:48 AM Kenneth Russell wrote: > > Hi Byungseon, > > Thanks for putting together this extension proposal. > > In conversations with groups within Google that are trying to do 360 > degree video rendering in WebGL, they need more information than your > extension would provide; specifically, they would need the exact timestamp > of the frame currently being rendered, as well as other per-frame metadata > like the current width and height of the texture (for variable bitrate > video streams). > > Mark Callow's WEBGL_dynamic_texture extension proposal > https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_dynamic_texture/ > provides the controls needed. However, per working group discussions, the > current extension's a little too complicated. 
I think if that extension > were simplified a little bit that it would provide all of the performance > benefits yours offers, and the controls that are known to be necessary. > > Do you think you'd be willing to devote some time to either extending your > proposal (I can provide specific feedback) or editing down Mark's proposal > to handle these use cases? I'd prefer to edit down WEBGL_dynamic_texture, > if you're willing. > > Thanks, > > -Ken > > > > On Tue, Oct 11, 2016 at 8:27 AM, Byungseon Shin > wrote: > > Hi Corentin, > > On Tue, Oct 11, 2016 at 11:16 PM Corentin Wallez > wrote: > > The problem with using an EGL_image type of extension is that the texture > data store becomes shared between the new texture and the object it was > created from. This is great when the application controls both sides and > can ensure read and writes are probably synchronized, but in the case of > video here, one side would write without the application being able to > control synchronization. > > The standard way to do this type of video / texture binding in EGL is to > use EGL_KHR_stream > which > is a much more complex and might not be an improvement over what developers > can do with the current APIs. > > > As in proposal revision#3 and as Florian already mentioned, > Current proposal support "bind once and update implicitly" concept : "ext.EGLImageTargetTexture2DOES(ext.TEXTURE_EXTERNAL_OES, > videoElement); that the texture is "bound" to the video and no further > calls need to be emitted (as would presently be the case)" > > In a that sense, we could abstract the details implementation into > MediaPlayer Side like synchronizing issues. > > Even EGL_KHR_stream provides more details synchronization issues but still > lot's of GPU driver and Video Decoder need to support the latest > specification. > But OES_EGL_image_external extension is already supported by most of the > GPU drivers and Video decoders. > > And one of the performance bottleneck is to converting EGLImage input to > TEXTURE_2D to handled by WebGL application. > > > On Tue, Oct 11, 2016 at 6:54 AM, Byungseon Shin > wrote: > > Hi Florian, Maksims, > > Thanks for the feedback. > > I have updated proposal to explain how application works. > Please see the attached updated proposal. > > To render an update frame, we need to call ext.EGLImageTargetTexture2DOES > to bind newly generated texture buffer from video decoder. > > By supporting OES_EGL_image_external extension, application can use direct > texture when video driver provides EGLImage like OpenGL ES natvie > applications. > > Our proposal is just focusing on extending format of texture compatible > with OpenGL ES extension and does not conflict with previous proposals. > > So, we provides a simplest way to adopt OES_EGL_image_external extension. > > By using references implementations of browser, WebGL renders video more > than 10 times faster than TEXTURE_2D format with Full HD resolution output. > > Kind regards, > Byungseon Shin > > On Tue, Oct 11, 2016 at 4:51 PM Maksims Mihejevs > wrote: > > Worth mentioning that in web there are multiple sources that have > independent redraw mechanics, that includes video as well as canvas > elements. It might make sense to have single extension for providing direct > access to those sources without need to re-upload it. > > I have assumptions that internally in browsers, both video and canvas have > their buffers, so it might lead to very similar implementation as from > internal side as well as from webgl api side. 
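For reference, the per-frame re-upload path that such an extension is meant to avoid (standard WebGL API; videoTex, videoElement and render are assumed to exist):

    function frame() {
      gl.bindTexture(gl.TEXTURE_2D, videoTex);
      // Copies the current video frame into the texture on every tick.
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, videoElement);
      render();
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);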
> > > On 11 Oct 2016 12:12 p.m., "Florian B?sch" wrote: > > Am I correct in assuming that once you've called ext. > EGLImageTargetTexture2DOES(ext.TEXTURE_EXTERNAL_OES, videoElement); that > the texture is "bound" to the video and no further calls need to be emitted > (as would presently be the case) for it to stay current? > > If that's the case, I think it's a good idea, it takes some work off web > application programmers to make sure the video is updated, and it should > also in all cases avoid a texture download/reupload. > > There are some other extensions that have attempted to deal with this > problem. > > - > https://www.khronos.org/registry/webgl/extensions/rejected/WEBGL_texture_from_depth_video/ > : this extension is rejected because it introduced a complex behavioral > changes/semantics whose benefits where unclear and the champions of this > extension stopped responding to questions and didn't offer any improvements > on the extension specification. > - > https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_dynamic_texture/ > : this extension is currently in a proposal state and it covers similar > functionality as EGL_image_external. Where it fundamentally differs is that > it also deals with format conversions (YUV 442 to rgb etc.) and accurate > timing for presentation of a video frame. > > Perhaps you could elaborate a little why OES_EGL_image_external would be > preferrable to WEBGL_dynamic_texture, and how the issues that > EGL_image_external does not tackle (timing, format conversion, other > things) would be handled by the web application programmer? > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Fri Jan 6 11:14:48 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Fri, 6 Jan 2017 20:14:48 +0100 Subject: [Public WebGL] OES_EGL_image_external Extension proposal In-Reply-To: References: Message-ID: Unless I'm mistaken, knowing the time alone isn't going to help that much. The dynamic_texture extension attempted to solve this problem with setting a presentation time for a frame. I'm not sure how realistic it is to try to time presentation time, so that might have been overly ambitious. On Fri, Jan 6, 2017 at 8:01 PM, Byungseon Shin wrote: > Hi WebGL working group members, > > We have updated proposal by adding following function : > - Provides time of frame, texture width and height of HTMLVideoElement's > EGLImage. > > Please take a look at the pull request: > > > Kind regards, > Byungseon Shin > > > > On Tue, Jan 3, 2017 at 8:48 AM Byungseon Shin wrote: > >> Hi Florian, >> >> We are working on update which will cover dynamic_texture concerns. >> I plan to upload new version on github for reviewing by this week. >> >> Happy 2017 year! >> Byungseon Shin >> >> On Tue, Jan 3, 2017 at 1:58 AM Florian B?sch wrote: >> >> What is the status of the https://www.khronos.org/ >> registry/webgl/extensions/proposals/OES_EGL_image_external/ extension? >> Is work on the spec still going on or is this the final version that takes >> into account dynamic_texture concerns? >> >> On Wed, Oct 12, 2016 at 3:59 AM, Byungseon Shin >> wrote: >> >> Hi Ken, >> >> Thanks for the feedback. >> Of course, we are willing to devote time and eager to enhance WebGL video >> rendering performance. >> We prefer to use our proposal as a base but we could revise >> dynamic_texture as well. 
>> >> Kind regards, >> Byungseon Shin >> >> >> On Wed, Oct 12, 2016 at 8:48 AM Kenneth Russell wrote: >> >> Hi Byungseon, >> >> Thanks for putting together this extension proposal. >> >> In conversations with groups within Google that are trying to do 360 >> degree video rendering in WebGL, they need more information than your >> extension would provide; specifically, they would need the exact timestamp >> of the frame currently being rendered, as well as other per-frame metadata >> like the current width and height of the texture (for variable bitrate >> video streams). >> >> Mark Callow's WEBGL_dynamic_texture extension proposal >> https://www.khronos.org/registry/webgl/extensions/ >> proposals/WEBGL_dynamic_texture/ provides the controls needed. However, >> per working group discussions, the current extension's a little too >> complicated. I think if that extension were simplified a little bit that it >> would provide all of the performance benefits yours offers, and the >> controls that are known to be necessary. >> >> Do you think you'd be willing to devote some time to either extending >> your proposal (I can provide specific feedback) or editing down Mark's >> proposal to handle these use cases? I'd prefer to edit down >> WEBGL_dynamic_texture, if you're willing. >> >> Thanks, >> >> -Ken >> >> >> >> On Tue, Oct 11, 2016 at 8:27 AM, Byungseon Shin >> wrote: >> >> Hi Corentin, >> >> On Tue, Oct 11, 2016 at 11:16 PM Corentin Wallez >> wrote: >> >> The problem with using an EGL_image type of extension is that the texture >> data store becomes shared between the new texture and the object it was >> created from. This is great when the application controls both sides and >> can ensure read and writes are probably synchronized, but in the case of >> video here, one side would write without the application being able to >> control synchronization. >> >> The standard way to do this type of video / texture binding in EGL is to >> use EGL_KHR_stream >> which >> is a much more complex and might not be an improvement over what developers >> can do with the current APIs. >> >> >> As in proposal revision#3 and as Florian already mentioned, >> Current proposal support "bind once and update implicitly" concept : " >> ext.EGLImageTargetTexture2DOES(ext.TEXTURE_EXTERNAL_OES, videoElement); >> that the texture is "bound" to the video and no further calls need to be >> emitted (as would presently be the case)" >> >> In a that sense, we could abstract the details implementation into >> MediaPlayer Side like synchronizing issues. >> >> Even EGL_KHR_stream provides more details synchronization issues but >> still lot's of GPU driver and Video Decoder need to support the latest >> specification. >> But OES_EGL_image_external extension is already supported by most of the >> GPU drivers and Video decoders. >> >> And one of the performance bottleneck is to converting EGLImage input to >> TEXTURE_2D to handled by WebGL application. >> >> >> On Tue, Oct 11, 2016 at 6:54 AM, Byungseon Shin >> wrote: >> >> Hi Florian, Maksims, >> >> Thanks for the feedback. >> >> I have updated proposal to explain how application works. >> Please see the attached updated proposal. >> >> To render an update frame, we need to call ext.EGLImageTargetTexture2DOES >> to bind newly generated texture buffer from video decoder. >> >> By supporting OES_EGL_image_external extension, application can use >> direct texture when video driver provides EGLImage like OpenGL ES natvie >> applications. 
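A sketch of the bind-once usage this enables; the EGLImageTargetTexture2DOES call and the TEXTURE_EXTERNAL_OES constant are taken from the proposal text, while the extension name string and the surrounding setup are assumptions:

    var ext = gl.getExtension('OES_EGL_image_external'); // name assumed
    var tex = gl.createTexture();
    gl.bindTexture(ext.TEXTURE_EXTERNAL_OES, tex);
    ext.EGLImageTargetTexture2DOES(ext.TEXTURE_EXTERNAL_OES, videoElement);
    // From here on the texture tracks the video; no per-frame re-upload is needed.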
>> >> Our proposal is just focusing on extending format of texture compatible >> with OpenGL ES extension and does not conflict with previous proposals. >> >> So, we provides a simplest way to adopt OES_EGL_image_external extension. >> >> By using references implementations of browser, WebGL renders video more >> than 10 times faster than TEXTURE_2D format with Full HD resolution output. >> >> Kind regards, >> Byungseon Shin >> >> On Tue, Oct 11, 2016 at 4:51 PM Maksims Mihejevs >> wrote: >> >> Worth mentioning that in web there are multiple sources that have >> independent redraw mechanics, that includes video as well as canvas >> elements. It might make sense to have single extension for providing direct >> access to those sources without need to re-upload it. >> >> I have assumptions that internally in browsers, both video and canvas >> have their buffers, so it might lead to very similar implementation as from >> internal side as well as from webgl api side. >> >> >> On 11 Oct 2016 12:12 p.m., "Florian B?sch" wrote: >> >> Am I correct in assuming that once you've called ext.EGLImageTargetTextu >> re2DOES(ext.TEXTURE_EXTERNAL_OES, videoElement); that the texture is >> "bound" to the video and no further calls need to be emitted (as would >> presently be the case) for it to stay current? >> >> If that's the case, I think it's a good idea, it takes some work off web >> application programmers to make sure the video is updated, and it should >> also in all cases avoid a texture download/reupload. >> >> There are some other extensions that have attempted to deal with this >> problem. >> >> - https://www.khronos.org/registry/webgl/extensions/ >> rejected/WEBGL_texture_from_depth_video/ >> >> : this extension is rejected because it introduced a complex behavioral >> changes/semantics whose benefits where unclear and the champions of this >> extension stopped responding to questions and didn't offer any improvements >> on the extension specification. >> - https://www.khronos.org/registry/webgl/extensions/ >> proposals/WEBGL_dynamic_texture/ >> >> : this extension is currently in a proposal state and it covers similar >> functionality as EGL_image_external. Where it fundamentally differs is that >> it also deals with format conversions (YUV 442 to rgb etc.) and accurate >> timing for presentation of a video frame. >> >> Perhaps you could elaborate a little why OES_EGL_image_external would be >> preferrable to WEBGL_dynamic_texture, and how the issues that >> EGL_image_external does not tackle (timing, format conversion, other >> things) would be handled by the web application programmer? >> >> >> >> >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Fri Jan 6 11:34:17 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Fri, 6 Jan 2017 20:34:17 +0100 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: <98c9054182464a3e9ce574c435eb8db2@ukmail101.nvidia.com> References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> <3141DB03-91E4-42F4-BC59-BAB338C535CE@callow.im> <98c9054182464a3e9ce574c435eb8db2@ukmail101.nvidia.com> Message-ID: On Thu, Jan 5, 2017 at 6:22 PM, Olli Etuaho wrote: > In the current proposal stereo can?t be toggled on and off after context > creation, but monoscopic contexts like the browser page compositor only > display the left buffer. 
So the application can simply elect not to render > to the right buffer at times when only one buffer is being displayed. > How does an application electc not to render to the right buffer? Florian, by rendering multiple views into one framebuffer, do you mean more > than 2 views? > I mean into a framebuffer object with an attached texture. Typically in rendering you'd render a few things into ordinary framebuffers (such as shadow maps, GPGPU stuff, whatever), and some of the renders (such as stereo views) might be rendered to a framebuffer object to facilitate postprocessing of various kinds. How do you select if a framebuffer is stereo or mono? -------------- next part -------------- An HTML attachment was scrubbed... URL: From sun...@ Fri Jan 6 12:10:35 2017 From: sun...@ (Byungseon Shin) Date: Fri, 06 Jan 2017 20:10:35 +0000 Subject: [Public WebGL] OES_EGL_image_external Extension proposal In-Reply-To: References: Message-ID: First of all, I missed to reply-all, so I am adding public-webgl alias again. Second, I agree with your point. I just was trying to explain that application can control the rendering logic. We can propose alternative approach by making explicit calling of EGLImageTargetTexture2DOES. Then, application can decide if new frame will be used or not. var update = function(){ updateTexture(); // ext.EGLImageTargetTexture2DOES render(); requestAnimationFrame(update); } requestAnimationFrame(update); On Sat, Jan 7, 2017 at 4:46 AM Florian B?sch wrote: > That's not how WebGL works, this is: > > var update = function(){ > render(); > requestAnimationFrame(update); > } > requestAnimationFrame(update); > > If you block in the JS thread on purpose you achieve nothing because no > matter how long you block, the frame is only going to be presented at the > next animationFrame composit, which is synced to vsync (or g-sync, or > direct-present sync, etc.) > > You do run a risk though of missing the sync, in which case you drop a > frame, which is bad (jitter/glitching/nausea (in HMD users) etc.). > > On Fri, Jan 6, 2017 at 8:43 PM, Byungseon Shin > wrote: > > As in the proposal, application can skip calling of render function which > will call swapBuffers as following: > ---------------------------------------------------- > update(); > while (true) { > ... > // check the gap between frame time of WebGL and Video > if ( gap < allowable_delay) > render() > } > > On Sat, Jan 7, 2017 at 4:36 AM Florian B?sch wrote: > > Sure, but WebGL applications have no control over and can't influence > (beyond not being late) when they're presented on screen. They do not > control the swapBuffers command. > > On Fri, Jan 6, 2017 at 8:32 PM, Byungseon Shin > wrote: > > Hi Florian, > > To control the sync, I think, application can use the gap between Video > frame time and WebGL frame time. > > Kind regards, > Byungseon Shin > > On Sat, Jan 7, 2017 at 4:14 AM Florian B?sch wrote: > > Unless I'm mistaken, knowing the time alone isn't going to help that much. > > The dynamic_texture extension attempted to solve this problem with setting > a presentation time for a frame. I'm not sure how realistic it is to try to > time presentation time, so that might have been overly ambitious. > > On Fri, Jan 6, 2017 at 8:01 PM, Byungseon Shin > wrote: > > Hi WebGL working group members, > > We have updated proposal by adding following function : > - Provides time of frame, texture width and height of HTMLVideoElement's > EGLImage. 
> > Please take a look at the pull request: > > > Kind regards, > Byungseon Shin > > > > On Tue, Jan 3, 2017 at 8:48 AM Byungseon Shin wrote: > > Hi Florian, > > We are working on update which will cover dynamic_texture concerns. > I plan to upload new version on github for reviewing by this week. > > Happy 2017 year! > Byungseon Shin > > On Tue, Jan 3, 2017 at 1:58 AM Florian B?sch wrote: > > What is the status of the > https://www.khronos.org/registry/webgl/extensions/proposals/OES_EGL_image_external/ > extension? Is work on the spec still going on or is this the final version > that takes into account dynamic_texture concerns? > > On Wed, Oct 12, 2016 at 3:59 AM, Byungseon Shin > wrote: > > Hi Ken, > > Thanks for the feedback. > Of course, we are willing to devote time and eager to enhance WebGL video > rendering performance. > We prefer to use our proposal as a base but we could revise > dynamic_texture as well. > > Kind regards, > Byungseon Shin > > > On Wed, Oct 12, 2016 at 8:48 AM Kenneth Russell wrote: > > Hi Byungseon, > > Thanks for putting together this extension proposal. > > In conversations with groups within Google that are trying to do 360 > degree video rendering in WebGL, they need more information than your > extension would provide; specifically, they would need the exact timestamp > of the frame currently being rendered, as well as other per-frame metadata > like the current width and height of the texture (for variable bitrate > video streams). > > Mark Callow's WEBGL_dynamic_texture extension proposal > https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_dynamic_texture/ > provides the controls needed. However, per working group discussions, the > current extension's a little too complicated. I think if that extension > were simplified a little bit that it would provide all of the performance > benefits yours offers, and the controls that are known to be necessary. > > Do you think you'd be willing to devote some time to either extending your > proposal (I can provide specific feedback) or editing down Mark's proposal > to handle these use cases? I'd prefer to edit down WEBGL_dynamic_texture, > if you're willing. > > Thanks, > > -Ken > > > > On Tue, Oct 11, 2016 at 8:27 AM, Byungseon Shin > wrote: > > Hi Corentin, > > On Tue, Oct 11, 2016 at 11:16 PM Corentin Wallez > wrote: > > The problem with using an EGL_image type of extension is that the texture > data store becomes shared between the new texture and the object it was > created from. This is great when the application controls both sides and > can ensure read and writes are probably synchronized, but in the case of > video here, one side would write without the application being able to > control synchronization. > > The standard way to do this type of video / texture binding in EGL is to > use EGL_KHR_stream > which > is a much more complex and might not be an improvement over what developers > can do with the current APIs. > > > As in proposal revision#3 and as Florian already mentioned, > Current proposal support "bind once and update implicitly" concept : "ext.EGLImageTargetTexture2DOES(ext.TEXTURE_EXTERNAL_OES, > videoElement); that the texture is "bound" to the video and no further > calls need to be emitted (as would presently be the case)" > > In a that sense, we could abstract the details implementation into > MediaPlayer Side like synchronizing issues. 
> > Even EGL_KHR_stream provides more details synchronization issues but still > lot's of GPU driver and Video Decoder need to support the latest > specification. > But OES_EGL_image_external extension is already supported by most of the > GPU drivers and Video decoders. > > And one of the performance bottleneck is to converting EGLImage input to > TEXTURE_2D to handled by WebGL application. > > > On Tue, Oct 11, 2016 at 6:54 AM, Byungseon Shin > wrote: > > Hi Florian, Maksims, > > Thanks for the feedback. > > I have updated proposal to explain how application works. > Please see the attached updated proposal. > > To render an update frame, we need to call ext.EGLImageTargetTexture2DOES > to bind newly generated texture buffer from video decoder. > > By supporting OES_EGL_image_external extension, application can use direct > texture when video driver provides EGLImage like OpenGL ES natvie > applications. > > Our proposal is just focusing on extending format of texture compatible > with OpenGL ES extension and does not conflict with previous proposals. > > So, we provides a simplest way to adopt OES_EGL_image_external extension. > > By using references implementations of browser, WebGL renders video more > than 10 times faster than TEXTURE_2D format with Full HD resolution output. > > Kind regards, > Byungseon Shin > > On Tue, Oct 11, 2016 at 4:51 PM Maksims Mihejevs > wrote: > > Worth mentioning that in web there are multiple sources that have > independent redraw mechanics, that includes video as well as canvas > elements. It might make sense to have single extension for providing direct > access to those sources without need to re-upload it. > > I have assumptions that internally in browsers, both video and canvas have > their buffers, so it might lead to very similar implementation as from > internal side as well as from webgl api side. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Fri Jan 6 13:56:12 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Fri, 6 Jan 2017 22:56:12 +0100 Subject: [Public WebGL] OES_EGL_image_external Extension proposal In-Reply-To: References: Message-ID: Would it be possible to inform the decoder of a desired time so it can pick the best matching frame? Although we don't have control over when a frame is going to be presented, we have a pretty good idea of when it is going to be on screen (assuming we make the vsync), and from that we could relatively easily derive a "I'd like a video frame closest to that time". On Fri, Jan 6, 2017 at 9:10 PM, Byungseon Shin wrote: > First of all, I missed to reply-all, so I am adding public-webgl alias > again. > > Second, I agree with your point. I just was trying to explain that > application can control the rendering logic. > > We can propose alternative approach by making explicit calling of > EGLImageTargetTexture2DOES. > Then, application can decide if new frame will be used or not. 
> > var update = function(){ > updateTexture(); // ext.EGLImageTargetTexture2DOES > render(); > requestAnimationFrame(update); > } > requestAnimationFrame(update); > > > On Sat, Jan 7, 2017 at 4:46 AM Florian B?sch wrote: > >> That's not how WebGL works, this is: >> >> var update = function(){ >> render(); >> requestAnimationFrame(update); >> } >> requestAnimationFrame(update); >> >> If you block in the JS thread on purpose you achieve nothing because no >> matter how long you block, the frame is only going to be presented at the >> next animationFrame composit, which is synced to vsync (or g-sync, or >> direct-present sync, etc.) >> >> You do run a risk though of missing the sync, in which case you drop a >> frame, which is bad (jitter/glitching/nausea (in HMD users) etc.). >> >> On Fri, Jan 6, 2017 at 8:43 PM, Byungseon Shin >> wrote: >> >> As in the proposal, application can skip calling of render function which >> will call swapBuffers as following: >> ---------------------------------------------------- >> update(); >> while (true) { >> ... >> // check the gap between frame time of WebGL and Video >> if ( gap < allowable_delay) >> render() >> } >> >> On Sat, Jan 7, 2017 at 4:36 AM Florian B?sch wrote: >> >> Sure, but WebGL applications have no control over and can't influence >> (beyond not being late) when they're presented on screen. They do not >> control the swapBuffers command. >> >> On Fri, Jan 6, 2017 at 8:32 PM, Byungseon Shin >> wrote: >> >> Hi Florian, >> >> To control the sync, I think, application can use the gap between Video >> frame time and WebGL frame time. >> >> Kind regards, >> Byungseon Shin >> >> On Sat, Jan 7, 2017 at 4:14 AM Florian B?sch wrote: >> >> Unless I'm mistaken, knowing the time alone isn't going to help that much. >> >> The dynamic_texture extension attempted to solve this problem with >> setting a presentation time for a frame. I'm not sure how realistic it is >> to try to time presentation time, so that might have been overly ambitious. >> >> On Fri, Jan 6, 2017 at 8:01 PM, Byungseon Shin >> wrote: >> >> Hi WebGL working group members, >> >> We have updated proposal by adding following function : >> - Provides time of frame, texture width and height of HTMLVideoElement's >> EGLImage. >> >> Please take a look at the pull request: >> >> >> Kind regards, >> Byungseon Shin >> >> >> >> On Tue, Jan 3, 2017 at 8:48 AM Byungseon Shin >> wrote: >> >> Hi Florian, >> >> We are working on update which will cover dynamic_texture concerns. >> I plan to upload new version on github for reviewing by this week. >> >> Happy 2017 year! >> Byungseon Shin >> >> On Tue, Jan 3, 2017 at 1:58 AM Florian B?sch wrote: >> >> What is the status of the https://www.khronos.org/ >> registry/webgl/extensions/proposals/OES_EGL_image_external/ extension? >> Is work on the spec still going on or is this the final version that takes >> into account dynamic_texture concerns? >> >> On Wed, Oct 12, 2016 at 3:59 AM, Byungseon Shin >> wrote: >> >> Hi Ken, >> >> Thanks for the feedback. >> Of course, we are willing to devote time and eager to enhance WebGL video >> rendering performance. >> We prefer to use our proposal as a base but we could revise >> dynamic_texture as well. >> >> Kind regards, >> Byungseon Shin >> >> >> On Wed, Oct 12, 2016 at 8:48 AM Kenneth Russell wrote: >> >> Hi Byungseon, >> >> Thanks for putting together this extension proposal. 
>> >> In conversations with groups within Google that are trying to do 360 >> degree video rendering in WebGL, they need more information than your >> extension would provide; specifically, they would need the exact timestamp >> of the frame currently being rendered, as well as other per-frame metadata >> like the current width and height of the texture (for variable bitrate >> video streams). >> >> Mark Callow's WEBGL_dynamic_texture extension proposal >> https://www.khronos.org/registry/webgl/extensions/ >> proposals/WEBGL_dynamic_texture/ provides the controls needed. However, >> per working group discussions, the current extension's a little too >> complicated. I think if that extension were simplified a little bit that it >> would provide all of the performance benefits yours offers, and the >> controls that are known to be necessary. >> >> Do you think you'd be willing to devote some time to either extending >> your proposal (I can provide specific feedback) or editing down Mark's >> proposal to handle these use cases? I'd prefer to edit down >> WEBGL_dynamic_texture, if you're willing. >> >> Thanks, >> >> -Ken >> >> >> >> On Tue, Oct 11, 2016 at 8:27 AM, Byungseon Shin >> wrote: >> >> Hi Corentin, >> >> On Tue, Oct 11, 2016 at 11:16 PM Corentin Wallez >> wrote: >> >> The problem with using an EGL_image type of extension is that the texture >> data store becomes shared between the new texture and the object it was >> created from. This is great when the application controls both sides and >> can ensure read and writes are probably synchronized, but in the case of >> video here, one side would write without the application being able to >> control synchronization. >> >> The standard way to do this type of video / texture binding in EGL is to >> use EGL_KHR_stream >> which >> is a much more complex and might not be an improvement over what developers >> can do with the current APIs. >> >> >> As in proposal revision#3 and as Florian already mentioned, >> Current proposal support "bind once and update implicitly" concept : " >> ext.EGLImageTargetTexture2DOES(ext.TEXTURE_EXTERNAL_OES, videoElement); >> that the texture is "bound" to the video and no further calls need to be >> emitted (as would presently be the case)" >> >> In a that sense, we could abstract the details implementation into >> MediaPlayer Side like synchronizing issues. >> >> Even EGL_KHR_stream provides more details synchronization issues but >> still lot's of GPU driver and Video Decoder need to support the latest >> specification. >> But OES_EGL_image_external extension is already supported by most of the >> GPU drivers and Video decoders. >> >> And one of the performance bottleneck is to converting EGLImage input to >> TEXTURE_2D to handled by WebGL application. >> >> >> On Tue, Oct 11, 2016 at 6:54 AM, Byungseon Shin >> wrote: >> >> Hi Florian, Maksims, >> >> Thanks for the feedback. >> >> I have updated proposal to explain how application works. >> Please see the attached updated proposal. >> >> To render an update frame, we need to call ext.EGLImageTargetTexture2DOES >> to bind newly generated texture buffer from video decoder. >> >> By supporting OES_EGL_image_external extension, application can use >> direct texture when video driver provides EGLImage like OpenGL ES natvie >> applications. >> >> Our proposal is just focusing on extending format of texture compatible >> with OpenGL ES extension and does not conflict with previous proposals. 
>> >> So, we provides a simplest way to adopt OES_EGL_image_external extension. >> >> By using references implementations of browser, WebGL renders video more >> than 10 times faster than TEXTURE_2D format with Full HD resolution output. >> >> Kind regards, >> Byungseon Shin >> >> On Tue, Oct 11, 2016 at 4:51 PM Maksims Mihejevs >> wrote: >> >> Worth mentioning that in web there are multiple sources that have >> independent redraw mechanics, that includes video as well as canvas >> elements. It might make sense to have single extension for providing direct >> access to those sources without need to re-upload it. >> >> I have assumptions that internally in browsers, both video and canvas >> have their buffers, so it might lead to very similar implementation as from >> internal side as well as from webgl api side. >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From Raf...@ Sun Jan 8 14:55:26 2017 From: Raf...@ (Rafael Cintron) Date: Sun, 8 Jan 2017 22:55:26 +0000 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> , Message-ID: When you use the extension, you can query gl_ViewID_OVR in the vertex shader to determine which view the vertex will appear. As currently speced, you will only ever see two possible values in gl_ViewID_OVR due to the stereo context attribute. But in the future, we can add additional numbers of views. The layout qualifier is a hint to the compiler that the shader is intended for a certain number of views. So removing it to allow the extension to be used in WebGL 1.0 should not be too detrimental. If anything, it will give shader authors greater flexibility to use one shader for any number of views. --Rafael ________________________________ From: Florian B?sch Sent: Wednesday, January 4, 2017 2:18 PM To: Rafael Cintron Cc: Olli Etuaho; Mark Callow; Maksims Mihejevs; public webgl Subject: Re: [Public WebGL] WEBGL_multiview discussion On Wed, Jan 4, 2017 at 11:08 PM, Rafael Cintron > wrote: If the layout qualifier prevents the shader from working in WebGL 1.0, I think we should drop the qualifier requirement. This will allow the shader to be used in more places and remove the additional overhead/complexity of rationalizing the number of views in the qualifier with the state of the world on the API side. You'd effectively default to a layout of 2 views. I think that could be a mistake. It's widely recognized that a wide FOV is more immersive. At the same time very wide FOVs will require more than 2 views to be rendered to avoid excessive distortion (due to perspective projection that pinches everything in the center and blows it up towards the edges). There's at least some HMDs that come with 4 panels, and other wide FOV HMDs might use a 4-split projection to get around distortion issues. -------------- next part -------------- An HTML attachment was scrubbed... 
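For readers following the qualifier discussion, a sketch of how the two pieces appear in a vertex shader under the native OVR_multiview GLSL syntax (the exact WebGL binding is still being discussed; the uniform and attribute names are illustrative):

    var multiviewVS = [
      '#version 300 es',
      '#extension GL_OVR_multiview : require',
      'layout(num_views = 2) in;',        // the layout qualifier in question
      'uniform mat4 viewProjection[2];',  // one matrix per view
      'in vec4 position;',
      'void main() {',
      '  // gl_ViewID_OVR is 0 for the left view and 1 for the right view',
      '  gl_Position = viewProjection[gl_ViewID_OVR] * position;',
      '}'
    ].join('\n');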
URL: From oet...@ Mon Jan 9 10:18:35 2017 From: oet...@ (Olli Etuaho) Date: Mon, 9 Jan 2017 18:18:35 +0000 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> , Message-ID: Actually the layout qualifier could maybe be left out for WebGL 1, since WebGL 1 version of the extension could only support up to 2 views of the default framebuffer, and would not support multiview rendering to texture arrays with an arbitrary number of views. However, I definitely would not want to remove the layout qualifier requirement from WebGL 2 version of the extension, since it would prevent a clean implementation of the WebGL extension on top of native OVR_multiview. WebGL 1 support will make the extension spec quite a bit more complicated if we decide to do it. To answer Florian's questions: The application can render only to the left buffer of the default framebuffer by choosing glDrawBuffer(GL_BACK_LEFT); The situations where both the left and right buffers are displayed are controlled by other APIs that WebGL interacts with, such as WebVR. Displaying both halves of the stereo default framebuffer side-by-side is not in the current proposal, but this is something that an application can also implement by other means, and is mostly useful only for debugging. Regular non-multiview FBOs are also available when the extension is on - nothing changes for them, so shadow maps etc can be rendered as usual. -Olli From: Rafael Cintron [mailto:Rafael.Cintron...@] Sent: sunnuntaina 8. tammikuuta 2017 22.55 To: Florian B?sch Cc: Olli Etuaho ; Mark Callow ; Maksims Mihejevs ; public webgl Subject: Re: [Public WebGL] WEBGL_multiview discussion When you use the extension, you can query gl_ViewID_OVR in the vertex shader to determine which view the vertex will appear. As currently speced, you will only ever see two possible values in gl_ViewID_OVR due to the stereo context attribute. But in the future, we can add additional numbers of views. The layout qualifier is a hint to the compiler that the shader is intended for a certain number of views. So removing it to allow the extension to be used in WebGL 1.0 should not be too detrimental. If anything, it will give shader authors greater flexibility to use one shader for any number of views. --Rafael ________________________________ From: Florian B?sch > Sent: Wednesday, January 4, 2017 2:18 PM To: Rafael Cintron Cc: Olli Etuaho; Mark Callow; Maksims Mihejevs; public webgl Subject: Re: [Public WebGL] WEBGL_multiview discussion On Wed, Jan 4, 2017 at 11:08 PM, Rafael Cintron > wrote: If the layout qualifier prevents the shader from working in WebGL 1.0, I think we should drop the qualifier requirement. This will allow the shader to be used in more places and remove the additional overhead/complexity of rationalizing the number of views in the qualifier with the state of the world on the API side. You'd effectively default to a layout of 2 views. I think that could be a mistake. It's widely recognized that a wide FOV is more immersive. At the same time very wide FOVs will require more than 2 views to be rendered to avoid excessive distortion (due to perspective projection that pinches everything in the center and blows it up towards the edges). 
There's at least some HMDs that come with 4 panels, and other wide FOV HMDs might use a 4-split projection to get around distortion issues. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Mon Jan 9 11:28:17 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Mon, 9 Jan 2017 20:28:17 +0100 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> Message-ID: On Mon, Jan 9, 2017 at 7:18 PM, Olli Etuaho wrote: > > Regular non-multiview FBOs are also available when the extension is on ? > nothing changes for them, so shadow maps etc can be rendered as usual. > But how do you render a stereo view into a framebuffer object? How do you select if a framebuffer object is stereo or non stereo? -------------- next part -------------- An HTML attachment was scrubbed... URL: From Raf...@ Mon Jan 9 14:01:56 2017 From: Raf...@ (Rafael Cintron) Date: Mon, 9 Jan 2017 22:01:56 +0000 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> , Message-ID: Olli, besides the layout qualifier and removing things like FramebufferTextureMultiviewOVR, what other complications will we need to work through to have the extension available on WebGL 1.0? I would think this is a matter of simplifying the extension and not adding complication. What am I missing? --Rafael From: Olli Etuaho [mailto:oetuaho...@] Sent: Monday, January 9, 2017 10:19 AM To: Rafael Cintron ; Florian B?sch Cc: Mark Callow ; Maksims Mihejevs ; public webgl Subject: RE: [Public WebGL] WEBGL_multiview discussion Actually the layout qualifier could maybe be left out for WebGL 1, since WebGL 1 version of the extension could only support up to 2 views of the default framebuffer, and would not support multiview rendering to texture arrays with an arbitrary number of views. However, I definitely would not want to remove the layout qualifier requirement from WebGL 2 version of the extension, since it would prevent a clean implementation of the WebGL extension on top of native OVR_multiview. WebGL 1 support will make the extension spec quite a bit more complicated if we decide to do it. To answer Florian's questions: The application can render only to the left buffer of the default framebuffer by choosing glDrawBuffer(GL_BACK_LEFT); The situations where both the left and right buffers are displayed are controlled by other APIs that WebGL interacts with, such as WebVR. Displaying both halves of the stereo default framebuffer side-by-side is not in the current proposal, but this is something that an application can also implement by other means, and is mostly useful only for debugging. Regular non-multiview FBOs are also available when the extension is on - nothing changes for them, so shadow maps etc can be rendered as usual. -Olli From: Rafael Cintron [mailto:Rafael.Cintron...@] Sent: sunnuntaina 8. 
tammikuuta 2017 22.55 To: Florian B?sch > Cc: Olli Etuaho >; Mark Callow >; Maksims Mihejevs >; public webgl > Subject: Re: [Public WebGL] WEBGL_multiview discussion When you use the extension, you can query gl_ViewID_OVR in the vertex shader to determine which view the vertex will appear. As currently speced, you will only ever see two possible values in gl_ViewID_OVR due to the stereo context attribute. But in the future, we can add additional numbers of views. The layout qualifier is a hint to the compiler that the shader is intended for a certain number of views. So removing it to allow the extension to be used in WebGL 1.0 should not be too detrimental. If anything, it will give shader authors greater flexibility to use one shader for any number of views. --Rafael ________________________________ From: Florian B?sch > Sent: Wednesday, January 4, 2017 2:18 PM To: Rafael Cintron Cc: Olli Etuaho; Mark Callow; Maksims Mihejevs; public webgl Subject: Re: [Public WebGL] WEBGL_multiview discussion On Wed, Jan 4, 2017 at 11:08 PM, Rafael Cintron > wrote: If the layout qualifier prevents the shader from working in WebGL 1.0, I think we should drop the qualifier requirement. This will allow the shader to be used in more places and remove the additional overhead/complexity of rationalizing the number of views in the qualifier with the state of the world on the API side. You'd effectively default to a layout of 2 views. I think that could be a mistake. It's widely recognized that a wide FOV is more immersive. At the same time very wide FOVs will require more than 2 views to be rendered to avoid excessive distortion (due to perspective projection that pinches everything in the center and blows it up towards the edges). There's at least some HMDs that come with 4 panels, and other wide FOV HMDs might use a 4-split projection to get around distortion issues. -------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Mon Jan 9 17:27:42 2017 From: khr...@ (Mark Callow) Date: Tue, 10 Jan 2017 10:27:42 +0900 Subject: [Public WebGL] OES_EGL_image_external Extension proposal In-Reply-To: References: Message-ID: <926B7F74-9102-459E-A68D-7EB7DCCEC867@callow.im> > On Jan 7, 2017, at 4:01, Byungseon Shin wrote: > > Hi WebGL working group members, > > We have updated proposal by adding following function : > - Provides time of frame, texture width and height of HTMLVideoElement's EGLImage. > > Please take a look at the pull request: > > > > Kind regards, > Byungseon Shin > Thanks for the update. Unfortunately it does nothing to settle my previous concerns. On the native side EGL_image and its companion OES_EGL_image_external are designed only for static images such as something drawn with OpenVG. EGL_KHR_stream (and family) + EGL_KHR_stream_consumer_gltexture are designed for dynamic images, i.e. sequences of image frames, . The stream provides buffering necessary to accommodate different frame rates of the video decoder and the application?s GL rendering and to avoid having to lock either to ensure half-completed video frames are not rendered. In the web browser, as far as I know, the HTMLVideoElement decoder and WebGL rendering can run at different rates so a similar buffer or else locking is needed. This extension provides neither. Because of this ?time of frame of HTMLVideoElement?s EGLImage? could change at any time, even right after the function is called and before the WebGL application renders. 
Unless browsers are prepared to provide guarantees that HTMLVideoElement decoding and WebGL rendering have the same frame rate and will be synchronized, buffering is needed. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From sun...@ Mon Jan 9 18:54:41 2017 From: sun...@ (Byungseon Shin) Date: Tue, 10 Jan 2017 02:54:41 +0000 Subject: [Public WebGL] OES_EGL_image_external Extension proposal In-Reply-To: <926B7F74-9102-459E-A68D-7EB7DCCEC867@callow.im> References: <926B7F74-9102-459E-A68D-7EB7DCCEC867@callow.im> Message-ID: Hi Mark, Thanks for the feedback. I replied inline on your comments. Kind regards, Byungseon Shin On Tue, Jan 10, 2017 at 10:27 AM Mark Callow wrote: > > On Jan 7, 2017, at 4:01, Byungseon Shin wrote: > > Hi WebGL working group members, > > We have updated proposal by adding following function : > - Provides time of frame, texture width and height of HTMLVideoElement's > EGLImage. > > Please take a look at the pull request: > > > Kind regards, > Byungseon Shin > > > Thanks for the update. Unfortunately it does nothing to settle my previous > concerns. > > On the native side EGL_image and its companion OES_EGL_image_external are > designed only for static images such as something drawn with OpenVG. > I could not find such a restriction. Please see the Android Video Streaming use cases. < https://developer.android.com/reference/android/graphics/SurfaceTexture.html > EGL_KHR_stream (and family) + EGL_KHR_stream_consumer_gltexture are > designed for dynamic images, i.e. sequences of image frames, . The stream > provides buffering necessary to accommodate different frame rates of the > video decoder and the application?s GL rendering and to avoid having to > lock either to ensure half-completed video frames are not rendered. > Again, not all embedded GPU vendors support EGL_KHR_Stream. > > In the web browser, as far as I know, the HTMLVideoElement decoder and > WebGL rendering can run at different rates so a similar buffer or else > locking is needed. > We can implement locking handled by browser engine internally. > This extension provides neither. Because of this ?time of frame of > HTMLVideoElement?s EGLImage? could change at any time, even right after the > function is called and before the WebGL application renders. > In case of Media Source Extension(MSE), application control the buffering of each sources, so it could be used for matching the current video frame time. Unless browsers are prepared to provide guarantees that HTMLVideoElement > decoding and WebGL rendering have the same frame rate and will be > synchronized, buffering is needed. > I agree this function but I am not sure this extension should cover it or not. > Regards > > > -Mark > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Mon Jan 9 23:34:22 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Tue, 10 Jan 2017 08:34:22 +0100 Subject: [Public WebGL] OES_EGL_image_external Extension proposal In-Reply-To: <926B7F74-9102-459E-A68D-7EB7DCCEC867@callow.im> References: <926B7F74-9102-459E-A68D-7EB7DCCEC867@callow.im> Message-ID: On Tue, Jan 10, 2017 at 2:27 AM, Mark Callow wrote: > Thanks for the update. Unfortunately it does nothing to settle my previous > concerns. > > On the native side EGL_image and its companion OES_EGL_image_external are > designed only for static images such as something drawn with OpenVG. 
> EGL_KHR_stream (and family) + EGL_KHR_stream_consumer_gltexture are > designed for dynamic images, i.e. sequences of image frames, . The stream > provides buffering necessary to accommodate different frame rates of the > video decoder and the application?s GL rendering and to avoid having to > lock either to ensure half-completed video frames are not rendered. > > In the web browser, as far as I know, the HTMLVideoElement decoder and > WebGL rendering can run at different rates so a similar buffer or else > locking is needed. This extension provides neither. Because of this ?time > of frame of HTMLVideoElement?s EGLImage? could change at any time, even > right after the function is called and before the WebGL application renders. > > Unless browsers are prepared to provide guarantees that HTMLVideoElement > decoding and WebGL rendering have the same frame rate and will be > synchronized, buffering is needed. > If the decoder provides the video frame closest to the estimate when a webgl framebuffer shows up on screen, then it doesn't matter if "the video frame just changed" because the decoder would be lookahead to match. If that can be done implicitely by the UA, and doesn't have to be tracked by the API user, all the better. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Tue Jan 10 00:12:12 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Tue, 10 Jan 2017 09:12:12 +0100 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> Message-ID: The extension adds the following: - FRAMEBUFFER_ATTACHMENT_TEXTURE_NUM_VIEWS_OVR accepted by getFramebufferAttachmentParameter - FRAMEBUFFER_ATTACHMENT_TEXTURE_BASE_VIEW_INDEX_OVR accepted by getFramebufferAttachmentParameter - MAX_VIEWS_OVR accepted by getParameter - FRAMEBUFFER_INCOMPLETE_VIEW_TARGETS_OVR accepted by checkFramebufferStatus - BACK_LEFT_AND_RIGHT_MULTIVIEW_WEBGL accepted by drawBuffer - framebufferTextureMultiviewWEBGL(...) - context creation parameter "stereo" - gl.drawBuffer - gl.BACK_LEFT - gl.BACK_RIGHT - layout qualifier to specify the number of views a shader renders There are some issues with that extension: 1. The enums BACK_LEFT and BACK_RIGHT are added to the WebGLRenderingContextBase instead of the extension objects IDL 2. the function drawBuffer which does not exist in WebGL is added to the WebGLRenderingContextBase instead of the extension objects IDL 3. The relationship between WebGL2 gl.drawBuffer*s* and this extensions drawBuffer is unclear 4. The layout qualifier is not WebGL1 compatible, and the extension does not define WebGL1 usable gl_NumViews constant or the like 5. The extension does not contain a differentiation between WebGL1 and WebGL2, but those need to be 2 specifications, so either there's two extensions needed, or this extension needs to define differing functionality in different sections for both WebGL1 and WebGL2 6. The examples section is insufficient, examples should be added for context creation and back buffer use, framebuffer object use and selection of traditional or stereo view rendering. 7. 
Since this extension is meant to interact with the WebVR specification, and the WebVR specification so far requires the application to render to the back buffer in split-view, the interaction with WebVR is unclear and should be clarified -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Tue Jan 10 00:18:26 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Tue, 10 Jan 2017 09:18:26 +0100 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> Message-ID: Addendum: The extension also does not define a formative section on what identifiers it adds to shaders. An example of how that works is contained there: https://www.khronos.org/registry/webgl/extensions/EXT_shader_texture_lod/ Overview > When this extension is enabled: > > - ... > > > - ... > > > - ... > > On Tue, Jan 10, 2017 at 9:12 AM, Florian B?sch wrote: > The extension adds the following: > > - FRAMEBUFFER_ATTACHMENT_TEXTURE_NUM_VIEWS_OVR accepted by > getFramebufferAttachmentParameter > - FRAMEBUFFER_ATTACHMENT_TEXTURE_BASE_VIEW_INDEX_OVR accepted by > getFramebufferAttachmentParameter > - MAX_VIEWS_OVR accepted by getParameter > - FRAMEBUFFER_INCOMPLETE_VIEW_TARGETS_OVR accepted by > checkFramebufferStatus > - BACK_LEFT_AND_RIGHT_MULTIVIEW_WEBGL accepted by drawBuffer > - framebufferTextureMultiviewWEBGL(...) > - context creation parameter "stereo" > - gl.drawBuffer > - gl.BACK_LEFT > - gl.BACK_RIGHT > - layout qualifier to specify the number of views a shader renders > > There are some issues with that extension: > > 1. The enums BACK_LEFT and BACK_RIGHT are added to the > WebGLRenderingContextBase instead of the extension objects IDL > 2. the function drawBuffer which does not exist in WebGL is added to > the WebGLRenderingContextBase instead of the extension objects IDL > 3. The relationship between WebGL2 gl.drawBuffer*s* and this > extensions drawBuffer is unclear > 4. The layout qualifier is not WebGL1 compatible, and the extension > does not define WebGL1 usable gl_NumViews constant or the like > 5. The extension does not contain a differentiation between WebGL1 and > WebGL2, but those need to be 2 specifications, so either there's two > extensions needed, or this extension needs to define differing > functionality in different sections for both WebGL1 and WebGL2 > 6. The examples section is insufficient, examples should be added for > context creation and back buffer use, framebuffer object use and selection > of traditional or stereo view rendering. > 7. Since this extension is meant to interact with the WebVR > specification, and the WebVR specification so far requires the application > to render to the back buffer in split-view, the interaction with WebVR is > unclear and should be clarified > > -------------- next part -------------- An HTML attachment was scrubbed... 
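For concreteness, a minimal sketch of how the entry points and tokens enumerated above might be exercised against a stereo default framebuffer, assuming the WebGL 2.0 flavour of the proposal. The extension name string, the exact signatures, and whether drawBuffer and the BACK_LEFT/BACK_RIGHT tokens end up on the context or on the extension object are all still open questions in this thread, so this only illustrates the intended call sequence, not shipped API:

    // Sketch only: every name below is taken from the proposal text quoted above and is
    // subject to change; "ext." is used for illustration even though the current draft
    // places some of these on WebGLRenderingContextBase.
    var canvas = document.createElement('canvas');
    var gl = canvas.getContext('webgl2', { stereo: true }); // proposed context creation parameter
    var ext = gl.getExtension('WEBGL_multiview');

    // Proposed shading language additions: the layout qualifier declares how many views
    // the shader renders, and gl_ViewID_OVR selects the current view (0 = left, 1 = right).
    var vertexShaderSource = `#version 300 es
    #extension GL_OVR_multiview : require
    layout(num_views = 2) in;
    in vec4 position;
    uniform mat4 viewMatrix[2];
    void main() {
      gl_Position = viewMatrix[gl_ViewID_OVR] * position;
    }`;

    // Render to both halves of the stereo default framebuffer in one pass:
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    ext.drawBuffer(ext.BACK_LEFT_AND_RIGHT_MULTIVIEW_WEBGL);
    // ... drawArrays/drawElements issued here are broadcast to both views ...

    // Or render a single view, e.g. only the left back buffer:
    ext.drawBuffer(ext.BACK_LEFT);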
URL: From pya...@ Tue Jan 10 00:28:47 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Tue, 10 Jan 2017 09:28:47 +0100 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> Message-ID: I personally also would also find it preferable if the structure was: overview (formative) IDL examples And the rest of the description wouldn't be interspersed with it. On Tue, Jan 10, 2017 at 9:18 AM, Florian B?sch wrote: > Addendum: The extension also does not define a formative section on what > identifiers it adds to shaders. > > An example of how that works is contained there: https://www.khronos. > org/registry/webgl/extensions/EXT_shader_texture_lod/ > > Overview >> When this extension is enabled: >> >> - ... >> >> >> - ... >> >> >> - ... >> >> > On Tue, Jan 10, 2017 at 9:12 AM, Florian B?sch wrote: > >> The extension adds the following: >> >> - FRAMEBUFFER_ATTACHMENT_TEXTURE_NUM_VIEWS_OVR accepted by >> getFramebufferAttachmentParameter >> - FRAMEBUFFER_ATTACHMENT_TEXTURE_BASE_VIEW_INDEX_OVR accepted by >> getFramebufferAttachmentParameter >> - MAX_VIEWS_OVR accepted by getParameter >> - FRAMEBUFFER_INCOMPLETE_VIEW_TARGETS_OVR accepted by >> checkFramebufferStatus >> - BACK_LEFT_AND_RIGHT_MULTIVIEW_WEBGL accepted by drawBuffer >> - framebufferTextureMultiviewWEBGL(...) >> - context creation parameter "stereo" >> - gl.drawBuffer >> - gl.BACK_LEFT >> - gl.BACK_RIGHT >> - layout qualifier to specify the number of views a shader renders >> >> There are some issues with that extension: >> >> 1. The enums BACK_LEFT and BACK_RIGHT are added to the >> WebGLRenderingContextBase instead of the extension objects IDL >> 2. the function drawBuffer which does not exist in WebGL is added to >> the WebGLRenderingContextBase instead of the extension objects IDL >> 3. The relationship between WebGL2 gl.drawBuffer*s* and this >> extensions drawBuffer is unclear >> 4. The layout qualifier is not WebGL1 compatible, and the extension >> does not define WebGL1 usable gl_NumViews constant or the like >> 5. The extension does not contain a differentiation between WebGL1 >> and WebGL2, but those need to be 2 specifications, so either there's two >> extensions needed, or this extension needs to define differing >> functionality in different sections for both WebGL1 and WebGL2 >> 6. The examples section is insufficient, examples should be added for >> context creation and back buffer use, framebuffer object use and selection >> of traditional or stereo view rendering. >> 7. Since this extension is meant to interact with the WebVR >> specification, and the WebVR specification so far requires the application >> to render to the back buffer in split-view, the interaction with WebVR is >> unclear and should be clarified >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Tue Jan 10 02:05:32 2017 From: khr...@ (Mark Callow) Date: Tue, 10 Jan 2017 19:05:32 +0900 Subject: [Public WebGL] OES_EGL_image_external Extension proposal In-Reply-To: References: <926B7F74-9102-459E-A68D-7EB7DCCEC867@callow.im> Message-ID: <070A9DFF-58D1-4A9D-8C5B-AF1124151A62@callow.im> > On Jan 10, 2017, at 11:54, Byungseon Shin wrote: > > > On Tue, Jan 10, 2017 at 10:27 AM Mark Callow > wrote: > > Thanks for the update. 
Unfortunately it does nothing to settle my previous concerns. > > On the native side EGL_image and its companion OES_EGL_image_external are designed only for static images such as something drawn with OpenVG. > > I could not find such a restriction. Please see the Android Video Streaming use cases. > > I see the disconnect. My comments have been referring to the native OES_EGL_image_external OpenGL ES extension. The API added by that extension only supports binding an EGLImage to a texture. An EGLImage is a single image. You are referring to the underlying external texture object. Yes an external texture as defined by OES_EGL_image_external can be the consumer endpoint for either an EGLImage or an EGLStream or indeed Android?s SurfaceTexture. But different API functions are used to hook up EGLStreams or SufaceTextures. An external texture maps only a single image. It is really important to be clear about this. SurfaceTexture is a stream, almost certainly implemented, in most cases, using EGLStream. It says When updateTexImage() is called, the contents of the texture object specified when the SurfaceTexture was created are updated to contain the most recent image from the image stream. This may cause some frames of the stream to be skipped. updateTexImage() is to be called by the app every frame. Your WebGL version of OES_EGL_image_external modifies EGLImageTargetTexture2DOES to accept an HTMLVideoElement instead of an EGLImage. Thus you are, confusingly, changing a call that in the native extension took a single image, into one that takes a stream of images. You are stepping into the world of EGLStream but avoiding its API. The example code shows EGLImageTargetTexture2DOES being called just once at the start of the program. This is equivalent to calling SurfaceTexture.attachToGLContext. There is no step to latch a video frame when drawing a WebGL frame, no equivalent to updateTexImage. It just does WebGL rendering every frame. So you are relying on unspecified magic to ensure that the mapped frame is never being updated while the WebGL application is sampling it. You are also concealing from the app which video frame it will be sampling. I maintain that you need to stop relying on magic and hiding what?s happening. Internally here has to be a buffered stream to ensure there is always a complete frame when the WebGL app is sampling thus avoiding tearing. (The alternative is locking on a single image buffer which is likely to stall both the video decoder and the graphics pipeline causing horrendous performance.) The buffering should be exposed and put under the control of the application. The extension needs updateTexImage-like functionality. Once you have that then your new WebGLVideoFrameInfo can return information about the currently mapped frame. If you are trying to mimic SurfaceTexture then you may also need to handle cases where the video size or orientation can change between frames. This is handled in SurfaceTexture by requiring the app to query the texture transform to be used with the texture. I don?t know if such changes can occur in HTMLVideoElements. > > EGL_KHR_stream (and family) + EGL_KHR_stream_consumer_gltexture are designed for dynamic images, i.e. sequences of image frames, . The stream provides buffering necessary to accommodate different frame rates of the video decoder and the application?s GL rendering and to avoid having to lock either to ensure half-completed video frames are not rendered. > > Again, not all embedded GPU vendors support EGL_KHR_Stream. 
Do they all support Android implementing SurfaceTexture on them? If so they support something very like EGLStream. One more thing. To support this on WebGL 2 (ESSL 3.x) you will need a WebGL wrapper for OES_EGL_image_external_essl3 as well. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From oet...@ Tue Jan 10 08:20:00 2017 From: oet...@ (Olli Etuaho) Date: Tue, 10 Jan 2017 16:20:00 +0000 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> Message-ID: <7e1d0ebac1504572a2ea25c1e4e2072c@ukmail101.nvidia.com> Re: Issues in the extension: 1. BACK_LEFT and BACK_RIGHT are intended to be included in WebGLRenderingContextBase. Note the disclaimer at the top: ?These changes are written here in the extension document in order to facilitate discussion of the proposal. They are intended to be merged to the core WebGL specification and removed from here at a later date.? 2. drawBuffer is intended to be included in WebGLRenderingContextBase. 3. drawBuffer() and drawBuffers() are separate entry points, they don?t have any special relationship just because they are named similarly. The similar naming is a bit unfortunate, but it?s inherited from desktop GL. 4. See 5. 5. The current extension proposal is not compatible with WebGL 1.0, though the proposed core spec changes apply to WebGL 1.0. If a WebGL 1.0 version of the multiview extension is desired, it might make the most sense to specify that separately from the WebGL 2.0 extension. Doing that only makes sense after there?s an agreement on what goes to the core spec, though. 6. Including a very simple API usage example is a good idea, but I don?t think the extension document needs to go beyond that. The spec document is not a tutorial. 7. The interaction with WebVR needs to be specified in the WebVR spec. So far this has been discussed only informally. I?m working on fixing some of the smaller omissions you pointed out. The organization of the extension document is controlled by the XML transform for extensions, unfortunately that isn?t working that great with the current state of the document which includes the core spec changes. When the part that needs to go to the core WebGL specification is removed from the extension document, the document should become a lot clearer. Cheers, Olli From: Florian B?sch [mailto:pyalot...@] Sent: tiistaina 10. tammikuuta 2017 8.29 To: Rafael Cintron Cc: Olli Etuaho ; Mark Callow ; Maksims Mihejevs ; public webgl Subject: Re: [Public WebGL] WEBGL_multiview discussion I personally also would also find it preferable if the structure was: overview (formative) IDL examples And the rest of the description wouldn't be interspersed with it. On Tue, Jan 10, 2017 at 9:18 AM, Florian B?sch > wrote: Addendum: The extension also does not define a formative section on what identifiers it adds to shaders. An example of how that works is contained there: https://www.khronos.org/registry/webgl/extensions/EXT_shader_texture_lod/ Overview When this extension is enabled: * ... * ... * ... 
On Tue, Jan 10, 2017 at 9:12 AM, Florian B?sch > wrote: The extension adds the following: * FRAMEBUFFER_ATTACHMENT_TEXTURE_NUM_VIEWS_OVR accepted by getFramebufferAttachmentParameter * FRAMEBUFFER_ATTACHMENT_TEXTURE_BASE_VIEW_INDEX_OVR accepted by getFramebufferAttachmentParameter * MAX_VIEWS_OVR accepted by getParameter * FRAMEBUFFER_INCOMPLETE_VIEW_TARGETS_OVR accepted by checkFramebufferStatus * BACK_LEFT_AND_RIGHT_MULTIVIEW_WEBGL accepted by drawBuffer * framebufferTextureMultiviewWEBGL(...) * context creation parameter "stereo" * gl.drawBuffer * gl.BACK_LEFT * gl.BACK_RIGHT * layout qualifier to specify the number of views a shader renders There are some issues with that extension: 1. The enums BACK_LEFT and BACK_RIGHT are added to the WebGLRenderingContextBase instead of the extension objects IDL 2. the function drawBuffer which does not exist in WebGL is added to the WebGLRenderingContextBase instead of the extension objects IDL 3. The relationship between WebGL2 gl.drawBuffers and this extensions drawBuffer is unclear 4. The layout qualifier is not WebGL1 compatible, and the extension does not define WebGL1 usable gl_NumViews constant or the like 5. The extension does not contain a differentiation between WebGL1 and WebGL2, but those need to be 2 specifications, so either there's two extensions needed, or this extension needs to define differing functionality in different sections for both WebGL1 and WebGL2 6. The examples section is insufficient, examples should be added for context creation and back buffer use, framebuffer object use and selection of traditional or stereo view rendering. 7. Since this extension is meant to interact with the WebVR specification, and the WebVR specification so far requires the application to render to the back buffer in split-view, the interaction with WebVR is unclear and should be clarified -------------- next part -------------- An HTML attachment was scrubbed... URL: From sun...@ Tue Jan 10 09:16:10 2017 From: sun...@ (Byungseon Shin) Date: Tue, 10 Jan 2017 17:16:10 +0000 Subject: [Public WebGL] OES_EGL_image_external Extension proposal In-Reply-To: <070A9DFF-58D1-4A9D-8C5B-AF1124151A62@callow.im> References: <926B7F74-9102-459E-A68D-7EB7DCCEC867@callow.im> <070A9DFF-58D1-4A9D-8C5B-AF1124151A62@callow.im> Message-ID: Hi Mark, Thanks for the in-depth feedback. I agree your opinions and have changed the logic by your guide. I have created a new patch and please review it again. https://github.com/KhronosGroup/WebGL/pull/2243/commits/c8193b5f1ffb37881bb815bd98b4da91e516828d About the OES_EGL_image_external_essl3, we will follow up soon. Kind regards, Byungseon Shin On Tue, Jan 10, 2017 at 7:05 PM Mark Callow wrote: > On Jan 10, 2017, at 11:54, Byungseon Shin wrote: > > On Tue, Jan 10, 2017 at 10:27 AM Mark Callow wrote: > > > Thanks for the update. Unfortunately it does nothing to settle my previous > concerns. > > On the native side EGL_image and its companion OES_EGL_image_external are > designed only for static images such as something drawn with OpenVG. > > > I could not find such a restriction. Please see the Android Video > Streaming use cases. > < > https://developer.android.com/reference/android/graphics/SurfaceTexture.html > > > > > I see the disconnect. My comments have been referring to the native > OES_EGL_image_external OpenGL ES extension. The API added by that > extension only supports binding an EGLImage to a texture. An EGLImage is a > single image. > > You are referring to the underlying external texture object. 
Yes an > external texture as defined by OES_EGL_image_external can be the consumer > endpoint for either an EGLImage or an EGLStream or indeed Android?s > SurfaceTexture. But different API functions are used to hook up EGLStreams > or SufaceTextures. > > An external texture maps only a single image. It is really important to be > clear about this. SurfaceTexture is a stream, almost certainly > implemented, in most cases, using EGLStream. It says > > When updateTexImage() > > is called, the contents of the texture object specified when the > SurfaceTexture was created are updated to contain the most recent image > from the image stream. This may cause some frames of the stream to be > skipped. > > > updateTexImage() is to be called by the app every frame. > > Your WebGL version of OES_EGL_image_external modifies EGLImageTargetTexture2DOES > to accept an HTMLVideoElement instead of an EGLImage. Thus you are, > confusingly, changing a call that in the native extension took a single > image, into one that takes a stream of images. You are stepping into the > world of EGLStream but avoiding its API. > > The example code shows EGLImageTargetTexture2DOES being called just once > at the start of the program. This is equivalent to calling > SurfaceTexture.attachToGLContext. There is no step to latch a video frame > when drawing a WebGL frame, no equivalent to updateTexImage. It just does > WebGL rendering every frame. > > So you are relying on unspecified magic to ensure that the mapped frame is > never being updated while the WebGL application is sampling it. You are > also concealing from the app which video frame it will be sampling. > > I maintain that you need to stop relying on magic and hiding what?s > happening. Internally here has to be a buffered stream to ensure there is > always a complete frame when the WebGL app is sampling thus avoiding > tearing. (The alternative is locking on a single image buffer which is > likely to stall both the video decoder and the graphics pipeline causing > horrendous performance.) The buffering should be exposed and put under the > control of the application. > > The extension needs updateTexImage-like functionality. Once you have that > then your new WebGLVideoFrameInfo can return information about the > currently mapped frame. > > If you are trying to mimic SurfaceTexture then you may also need to handle > cases where the video size or orientation can change between frames. This > is handled in SurfaceTexture by requiring the app to query the texture > transform to be used with the texture. I don?t know if such changes can > occur in HTMLVideoElements. > > > EGL_KHR_stream (and family) + EGL_KHR_stream_consumer_gltexture are > designed for dynamic images, i.e. sequences of image frames, . The stream > provides buffering necessary to accommodate different frame rates of the > video decoder and the application?s GL rendering and to avoid having to > lock either to ensure half-completed video frames are not rendered. > > > Again, not all embedded GPU vendors support EGL_KHR_Stream. > > > Do they all support Android implementing SurfaceTexture on them? If so > they support something very like EGLStream. > > > One more thing. To support this on WebGL 2 (ESSL 3.x) you will need a > WebGL wrapper for OES_EGL_image_external_essl3 > as > well. > > Regards > > > -Mark > -------------- next part -------------- An HTML attachment was scrubbed... 
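To make the requested behaviour concrete, a rough sketch of the render-loop shape described above, assuming a WebGL context gl, a hypothetical per-frame latch call (named updateTexImage here purely by analogy with Android's SurfaceTexture) and a hypothetical query standing in for the proposal's WebGLVideoFrameInfo; neither name exists in the current draft, which only defines the one-time bind shown first:

    // Sketch only: ext.updateTexImage and ext.getVideoFrameInfo are hypothetical, named by
    // analogy with SurfaceTexture; only the TEXTURE_EXTERNAL_OES target and the
    // bind-to-video call come from the proposal text.
    var video = document.createElement('video');
    video.src = 'movie.webm';
    video.play();

    var ext = gl.getExtension('OES_EGL_image_external');
    var tex = gl.createTexture();
    gl.bindTexture(ext.TEXTURE_EXTERNAL_OES, tex);
    ext.EGLImageTargetTexture2DOES(ext.TEXTURE_EXTERNAL_OES, video); // one-time attach

    function drawFrame() {
      // Hypothetical per-frame step: latch the most recent complete video frame so the
      // sampled image cannot change under the WebGL draw calls; the buffering (or locking)
      // discussed above would live behind this call.
      ext.updateTexImage(tex);

      // Hypothetical query for metadata of the frame that was just latched,
      // e.g. { presentationTime, width, height }.
      var info = ext.getVideoFrameInfo(tex);

      // ... draw calls sampling the external texture go here ...

      requestAnimationFrame(drawFrame);
    }
    requestAnimationFrame(drawFrame);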
URL: From pya...@ Wed Jan 11 07:54:16 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Wed, 11 Jan 2017 16:54:16 +0100 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: <7e1d0ebac1504572a2ea25c1e4e2072c@ukmail101.nvidia.com> References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> <7e1d0ebac1504572a2ea25c1e4e2072c@ukmail101.nvidia.com> Message-ID: On Tue, Jan 10, 2017 at 5:20 PM, Olli Etuaho wrote: > > 1. BACK_LEFT and BACK_RIGHT are intended to be included in > WebGLRenderingContextBase. Note the disclaimer at the top: ?These changes > are written here in the extension document in order to facilitate > discussion of the proposal. They are intended to be merged to the core > WebGL specification and removed from here at a later date.? > So far no extension has followed that pattern. WebGL extensions have always defined their properties on the extension object, even if some of those symbols are later put into a standard. I don't it's consistent. > 2. drawBuffer is intended to be included in > WebGLRenderingContextBase. > Same objection as above. > 3. drawBuffer() and drawBuffers() are separate entry points, they > don?t have any special relationship just because they are named similarly. > The similar naming is a bit unfortunate, but it?s inherited from desktop GL. > drawBuffers enables writing to multiple surfaces. drawBuffer sets which views are rendered to. The framebufferTextureMultiviewWEBGL function sets up a texture to be attached to an attachment point and view (left or right). Is it correct to assume if you render to multiple render targets at the same time and if those targets are set up to be stereo with framebufferTextureMultiviewWebGL (which attaches them to the framebuffer object, right?), that all attached rendertargets get stereo rendering? What happens if one is stereo and another isn't? Is this an error, does it go silently? There clearly are interactions between drawBuffers, drawBuffer and framebufferTextureMultiviewWebGL, but this extension doesn't lay it out. > 4. See 5. > > 5. The current extension proposal is not compatible with WebGL 1.0, > though the proposed core spec changes apply to WebGL 1.0. If a WebGL 1.0 > version of the multiview extension is desired, it might make the most sense > to specify that separately from the WebGL 2.0 extension. Doing that only > makes sense after there?s an agreement on what goes to the core spec, > though. > > 6. Including a very simple API usage example is a good idea, but I > don?t think the extension document needs to go beyond that. The spec > document is not a tutorial. > It's a complex (one of the most complex extensions) in the registry. In addition it introduces a host of behavioral changes unique to WebGL. I think it deserves a bit more examples than is usual for extensions because of that. And it should have a brief example of each major usage scenario. > 7. The interaction with WebVR needs to be specified in the WebVR > spec. So far this has been discussed only informally. > That's true, but since this extension interacts with WebVR it would be good if it pointed out how. > I?m working on fixing some of the smaller omissions you pointed out. 
The > organization of the extension document is controlled by the XML transform > for extensions, unfortunately that isn?t working that great with the > current state of the document which includes the core spec changes. > It wasn't meant to. Extensions have never been a core/extension hybrid I think it's weird. ----- On the extension/core hybrid extension, I think that a much more proper way to do things was to offer the extension self-contained, and if somebody wants to write forward compatible core code, they can polyfill the feature from the extension. -------------- next part -------------- An HTML attachment was scrubbed... URL: From oet...@ Wed Jan 11 08:41:44 2017 From: oet...@ (Olli Etuaho) Date: Wed, 11 Jan 2017 16:41:44 +0000 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> <7e1d0ebac1504572a2ea25c1e4e2072c@ukmail101.nvidia.com> Message-ID: <104e30463c654e2f9b5684001ba2e894@ukmail101.nvidia.com> I previously suggested that I?d put the proposed core spec changes in a regular pull request against the spec. However, after a bit of discussion it was requested that I?d include the proposed core spec changes in the extension document instead, since the extension is related, and it?s easier to discuss them together. I agree it?s unintended usage of the extension template, but I?m not sure what a better third alternative would be. Would others have suggestions on how to organize the proposal better? Please read the spec carefully. It says that it is forbidden to call drawBuffer() when an FBO is bound. On the other hand drawBuffers() and framebufferTextureMultiview() only operate on FBOs. So there?s no unspecified interaction. Also, the native OVR_multiview spec says ?The number of views, as specified by numViews, must be the same for all framebuffer attachments points where the value of FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE is not NONE or the framebuffer is incomplete.?. I think the native spec may have a bug when it describes the effect of draw commands, it seems as if only one texture array?s layers are being toggled, but this should be quite easy to fix. -Olli From: Florian B?sch [mailto:pyalot...@] Sent: keskiviikkona 11. tammikuuta 2017 15.54 To: Olli Etuaho Cc: Rafael Cintron ; Mark Callow ; Maksims Mihejevs ; public webgl Subject: Re: [Public WebGL] WEBGL_multiview discussion On Tue, Jan 10, 2017 at 5:20 PM, Olli Etuaho > wrote: 1. BACK_LEFT and BACK_RIGHT are intended to be included in WebGLRenderingContextBase. Note the disclaimer at the top: ?These changes are written here in the extension document in order to facilitate discussion of the proposal. They are intended to be merged to the core WebGL specification and removed from here at a later date.? So far no extension has followed that pattern. WebGL extensions have always defined their properties on the extension object, even if some of those symbols are later put into a standard. I don't it's consistent. 2. drawBuffer is intended to be included in WebGLRenderingContextBase. Same objection as above. 3. drawBuffer() and drawBuffers() are separate entry points, they don?t have any special relationship just because they are named similarly. The similar naming is a bit unfortunate, but it?s inherited from desktop GL. drawBuffers enables writing to multiple surfaces. 
drawBuffer sets which views are rendered to. The framebufferTextureMultiviewWEBGL function sets up a texture to be attached to an attachment point and view (left or right). Is it correct to assume if you render to multiple render targets at the same time and if those targets are set up to be stereo with framebufferTextureMultiviewWebGL (which attaches them to the framebuffer object, right?), that all attached rendertargets get stereo rendering? What happens if one is stereo and another isn't? Is this an error, does it go silently? There clearly are interactions between drawBuffers, drawBuffer and framebufferTextureMultiviewWebGL, but this extension doesn't lay it out. 4. See 5. 5. The current extension proposal is not compatible with WebGL 1.0, though the proposed core spec changes apply to WebGL 1.0. If a WebGL 1.0 version of the multiview extension is desired, it might make the most sense to specify that separately from the WebGL 2.0 extension. Doing that only makes sense after there?s an agreement on what goes to the core spec, though. 6. Including a very simple API usage example is a good idea, but I don?t think the extension document needs to go beyond that. The spec document is not a tutorial. It's a complex (one of the most complex extensions) in the registry. In addition it introduces a host of behavioral changes unique to WebGL. I think it deserves a bit more examples than is usual for extensions because of that. And it should have a brief example of each major usage scenario. 7. The interaction with WebVR needs to be specified in the WebVR spec. So far this has been discussed only informally. That's true, but since this extension interacts with WebVR it would be good if it pointed out how. I?m working on fixing some of the smaller omissions you pointed out. The organization of the extension document is controlled by the XML transform for extensions, unfortunately that isn?t working that great with the current state of the document which includes the core spec changes. It wasn't meant to. Extensions have never been a core/extension hybrid I think it's weird. ----- On the extension/core hybrid extension, I think that a much more proper way to do things was to offer the extension self-contained, and if somebody wants to write forward compatible core code, they can polyfill the feature from the extension. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Wed Jan 11 09:59:22 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Wed, 11 Jan 2017 18:59:22 +0100 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: <104e30463c654e2f9b5684001ba2e894@ukmail101.nvidia.com> References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> <7e1d0ebac1504572a2ea25c1e4e2072c@ukmail101.nvidia.com> <104e30463c654e2f9b5684001ba2e894@ukmail101.nvidia.com> Message-ID: On Wed, Jan 11, 2017 at 5:41 PM, Olli Etuaho wrote: > I agree it?s unintended usage of the extension template, but I?m not sure > what a better third alternative would be. Would others have suggestions on > how to organize the proposal better? > Make the extension host all changes including added tokens and functions on its IDL. Make another pull request to change the WebGL specification which includes the text in the specification. 
People who want to do forward compatibility can polyfill the forward core changes from the extension object. > Please read the spec carefully. > Does not include an errors section where errors are usually included, please draft the specification more carefully. the XML processor has a directive to describe errors. The errors described in the overviews sections is... informative, but not formative in regards to errors. > It says that it is forbidden to call drawBuffer() when an FBO is bound. > See missing errors section. Also consult the XSLT for the sections: - new functions: https://github.com/KhronosGroup/WebGL/blob/master/extensions/extension.xsl#L126 - new tokens: https://github.com/KhronosGroup/WebGL/blob/master/extensions/extension.xsl#L136 - errors: https://github.com/KhronosGroup/WebGL/blob/master/extensions/extension.xsl#L146 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Wed Jan 11 10:00:30 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Wed, 11 Jan 2017 19:00:30 +0100 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> <7e1d0ebac1504572a2ea25c1e4e2072c@ukmail101.nvidia.com> <104e30463c654e2f9b5684001ba2e894@ukmail101.nvidia.com> Message-ID: It's customary to reflect the essentials of a mirrored extension if there's significant WebGL behavioral changes. On Wed, Jan 11, 2017 at 6:59 PM, Florian B?sch wrote: > On Wed, Jan 11, 2017 at 5:41 PM, Olli Etuaho wrote: > >> I agree it?s unintended usage of the extension template, but I?m not sure >> what a better third alternative would be. Would others have suggestions on >> how to organize the proposal better? >> > Make the extension host all changes including added tokens and functions > on its IDL. Make another pull request to change the WebGL specification > which includes the text in the specification. People who want to do forward > compatibility can polyfill the forward core changes from the extension > object. > > > >> Please read the spec carefully. >> > Does not include an errors section where errors are usually included, > please draft the specification more carefully. the XML processor has a > directive to describe errors. The errors described in the overviews > sections is... informative, but not formative in regards to errors. > > >> It says that it is forbidden to call drawBuffer() when an FBO is bound. >> > See missing errors section. > > Also consult the XSLT for the sections: > > > - new functions: https://github.com/KhronosGroup/WebGL/blob/ > master/extensions/extension.xsl#L126 > > - new tokens: https://github.com/KhronosGroup/WebGL/blob/ > master/extensions/extension.xsl#L136 > > - errors: https://github.com/KhronosGroup/WebGL/blob/ > master/extensions/extension.xsl#L146 > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Wed Jan 11 18:11:52 2017 From: kbr...@ (Kenneth Russell) Date: Wed, 11 Jan 2017 18:11:52 -0800 Subject: [Public WebGL] EXT_texture_storage In-Reply-To: References: Message-ID: Rejected in https://github.com/KhronosGroup/WebGL/pull/2255 . On Thu, Jan 5, 2017 at 5:03 PM, Jeff Gilbert wrote: > Let's reject. It'd be nice to have, but its potential impact seems minimal. 
> > On Tue, Jan 3, 2017 at 9:29 PM, Kenneth Russell wrote: > > I agree that it'd be better to reject this extension rather than move it > > forward. Now that WebGL 2.0's on the verge of shipping in multiple > browsers, > > I think we should encourage more implementations rather than continue to > add > > extensions to WebGL 1.0. > > > > > > On Tue, Jan 3, 2017 at 4:18 PM, Zhenyao Mo wrote: > >> > >> My main concern is we won't have full EXT_texture_storage on top of DX9, > >> on which some WebGL1 implementations are based. > >> > >> To me, a better path is just to switch to WebGL2 whenever it's possible, > >> where texture storage is part of core. > >> > >> On Mon, Jan 2, 2017 at 8:54 AM, Florian B?sch wrote: > >>> > >>> No change has occured on > >>> https://www.khronos.org/registry/webgl/extensions/ > proposals/EXT_texture_storage/ > >>> since September 2015 > >>> > >>> Can this extension be elevated to draft? > >> > >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Wed Jan 11 18:23:09 2017 From: kbr...@ (Kenneth Russell) Date: Wed, 11 Jan 2017 18:23:09 -0800 Subject: [Public WebGL] Re: WEBGL_get_buffer_sub_data_async In-Reply-To: References: Message-ID: The reason Promises were chosen for this extension is that they're the current best practice on the web platform for expressing asynchronous computation. The syntax is fluent, and it would have been difficult for me to argue with Chrome's leadership that there was a significant enough performance implication that a raw callback should be passed, contrary to the design of all other current web specs. Kai and Jaume Sanchez ( https://www.clicktorelease.com/ ) are preparing another test case based on one of Jaume's global illumination examples. Once that's ready we'll post more data. At a high level, using this asynchronous API lets Chrome's deeply pipelined WebGL implementation match the performance of single-threaded and/or single-process WebGL implementations, though the code has to be changed to know about and take advantage of the extension. -Ken On Fri, Jan 6, 2017 at 9:33 AM, Maksims Mihejevs wrote: > That is very good point too. > Most common case of similar pattern is mouse, keyboard, touch input - it > is in callback, and you end up collecting own state information about > input, so to access it in main loop of your app in requestAnimationFrame. > > Developers coming from real-time applications and other platforms get > confused a lot by this all the time. That is based on many users on > PlayCanvas from hobbyists to commercial clients do mention it time to times. > > On 6 January 2017 at 16:36, Florian B?sch wrote: > >> The problem isn't just the promise allocation, it's also the closure >> allocation. And while you can avoid that in some cases, you can't avoid it >> in all cases. And beyond that, the further problem of registering callbacks >> to do stuff would be that it lifts rendering logic out of the >> requestAnimationFrame loop and occurs... whenever. Which isn't a proper way >> to render. So in practice you'd end up attaching a callback just to collect >> the result and put it in a variable you'll check for not being null at the >> start of the next requestAnimationFrame invocation. Of course when you do >> that, the entire exercise of providing a callback is moot. 
Since that'd be >> the most common usage pattern that everybody would recommend based on sound >> reasoning about the predictability of rendering in a requestAnimationFrame >> loop, that makes promises/callbacks entirely moot and the wrong thing to do. >> >> On Fri, Jan 6, 2017 at 1:52 PM, Maksims Mihejevs >> wrote: >> >>> Why not to use classic callbacks with (err, response) arguments? >>> >>> On 6 January 2017 at 00:01, Kenneth Russell wrote: >>> >>>> On Thu, Jan 5, 2017 at 2:55 AM, Maksims Mihejevs >>>> wrote: >>>> >>>>> Thank you for answer, this examples make sense. >>>>> >>>>> The only concern as mentioned before are: Promises. There is no a >>>>> single use of promises in WebGL, and there is a reason why they should not >>>>> be used in real-time applications if they can be avoided (which is the >>>>> case). >>>>> >>>>> If possible Fence (Sync Objects) were involved in design process? >>>>> Could something like this be achieved by using more generic functionality >>>>> in GL such as Sync Objects? >>>>> >>>> >>>> Promises were chosen for a particular reason. Even if a primitive like >>>> mapBufferRange were exposed in WebGL, a typed array would still need to be >>>> allocated as the return result. There's no provision in the ECMAScript >>>> specification for "re-pointing" a typed array at a new backing store, >>>> although Chrome's V8 team proposed this a while back. Mapping a buffer's >>>> range at the OpenGL ES level returns a void*, and that would need to be >>>> exposed to ECMAScript. Passing in an already-allocated typed array wouldn't >>>> work in the current form of the APIs. >>>> >>>> Given that an allocation would be needed anyway, returning a Promise is >>>> much more web-friendly. It avoids the need for the application to deal with >>>> sync objects, and potentially inconsistent states of the returned typed >>>> array. getBufferSubDataAsync lets the application specify where it wants >>>> the returned data to be copied. >>>> >>>> -Ken >>>> >>>> >>>> >>>>> Kind Regards, >>>>> Max >>>>> >>>>> On 4 January 2017 at 18:55, Kai Ninomiya wrote: >>>>> >>>>>> Max, >>>>>> >>>>>> In my demo [1] there are 3 different possible readback paths: >>>>>> * readPixels to CPU [2] >>>>>> * readPixels to PBO + getBufferSubData [3] >>>>>> * readPixels to PBO + getBufferSubDataAsync [4] >>>>>> >>>>>> As you said, getBufferSubData(/Async) can be used for reading back >>>>>> any buffer data (such as transform feedback or GPGPU shader results). PBO >>>>>> is necessary (AFAIK) for async readback from framebuffer data (note: an >>>>>> async readPixels wouldn't be as useful as it would block any operation >>>>>> which writes to the framebuffer). >>>>>> >>>>>> -Kai >>>>>> >>>>>> [1] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >>>>>> [2] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >>>>>> blob/master/index.html#L245 >>>>>> [3] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >>>>>> blob/master/index.html#L261 >>>>>> [4] https://github.com/kainino0x/getBufferSubDataAsync-Demo/ >>>>>> blob/master/index.html#L280 >>>>>> >>>>>> On Wed, Jan 4, 2017 at 5:09 AM Maksims Mihejevs >>>>>> wrote: >>>>>> >>>>>>> From PlayCanvas side, we express a need for async glReadPixels path >>>>>>> too. We and our users have been using it in many ways, some of the ways: >>>>>>> 1. GPU picking: ID encoded in unique colour, reading pixel under >>>>>>> mouse. >>>>>>> 2. GPU screen to world: reading pixel from depth texture, and using >>>>>>> frustum with math reconstructing world position. 
>>>>>>> 3. Render Target to another Canvas. In Editor we have thumbnail >>>>>>> previews for materials, models, cubemaps and other assets. We render them >>>>>>> into render target in main context and then reading pixels to create >>>>>>> ImageData so it can be put to another canvas using putImageData. >>>>>>> 4. Some custom algorithms to generate large amounts of computation >>>>>>> heavy data saved into texture, then read on CPU - this depends per case. >>>>>>> Sometimes async approach is viable there, sometimes it is not. >>>>>>> >>>>>>> In many cases glReadPixels is called per each frame, like for >>>>>>> picking, and easily can drop frame rate due to blocking nature. >>>>>>> >>>>>>> I've noticed that PBOs are mentioned in WebGL 2.0 spec, but not much >>>>>>> info apart of just that mention: https://www.khronos.o >>>>>>> rg/registry/webgl/specs/latest/2.0/#4.2 >>>>>>> PBOs would allow to get render target data into buffers without >>>>>>> stalling GPU pipeline, and then read them. >>>>>>> >>>>>>> How does getBufferSubDataAsync relates to PBOs? >>>>>>> >>>>>>> Cheers, >>>>>>> Max >>>>>>> >>>>>>> On 4 January 2017 at 05:27, Kenneth Russell wrote: >>>>>>> >>>>>>> Apologies for not discussing this extension on public_webgl before >>>>>>> introducing it as a draft in the WebGL extension registry. >>>>>>> >>>>>>> The cost of synchronous glReadPixels has been a longstanding problem >>>>>>> in WebGL. The Chrome browser specifically has a particularly deep graphics >>>>>>> pipeline, and draining it with a synchronous call each frame imposes a >>>>>>> too-great performance penalty. This has forced applications to rewrite >>>>>>> certain algorithms when porting to WebGL. >>>>>>> >>>>>>> getBufferSubDataAsync is a direct parallel to getBufferSubData, and >>>>>>> solves these performance pitfalls in Chrome. We've gathered data from two >>>>>>> test cases so far, a GPU-based picking algorithm and a GPGPU global >>>>>>> illumination algorithm, and the results look good. We will present this >>>>>>> data on public_webgl soon, when making a case for moving the extension >>>>>>> forward. >>>>>>> >>>>>>> -Ken >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Mon, Jan 2, 2017 at 2:27 PM, Maksims Mihejevs >>>>>> > wrote: >>>>>>> >>>>>>> Worth mentioning that promises are extremely bad for GC and >>>>>>> real-time applications, they do not provide a developer enough control to >>>>>>> structure logic so to avoids any allocations. >>>>>>> >>>>>>> Promises - are not good for real-time at all, and lead to issues >>>>>>> with GC. Any API in WebGL that is meant to be used in real-time >>>>>>> applications should not be based on API's that are not real-time friendly. >>>>>>> >>>>>>> On 2 January 2017 at 20:49, Florian B?sch wrote: >>>>>>> >>>>>>> Upon thinking about this extension, I don't think it should exist at >>>>>>> all. Ideally the mapBuffer semantic would be exposed. But even if it isn't, >>>>>>> it shall not be that an extension is required to express functionality >>>>>>> already found in the core functionality of the underlying ES specification. >>>>>>> >>>>>>> Furthermore, getBufferSubDataAsync does not adequately express the >>>>>>> reality of map/flush/unmap, and hides the fact that unmap/flush are still >>>>>>> synchronizing calls happening. However getBufferSubDataAsync obstructs >>>>>>> appropriate code dealing with proper insertion of synchronization points. 
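For reference, a condensed sketch of the two PBO readback paths being weighed in this thread, assuming a WebGL 2.0 context gl and known width and height. Variant (a) uses only core WebGL 2.0 calls and makes the synchronization point explicit with a fence; variant (b) is the draft extension, whose exact signature and placement on the extension object may still change:

    // readPixels into a pixel pack buffer instead of into client memory.
    var buf = gl.createBuffer();
    gl.bindBuffer(gl.PIXEL_PACK_BUFFER, buf);
    gl.bufferData(gl.PIXEL_PACK_BUFFER, width * height * 4, gl.STREAM_READ);
    gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, 0);

    // (a) Core WebGL 2.0: insert a fence and poll it, so the blocking copy only runs
    //     once the GPU has finished producing the data.
    var sync = gl.fenceSync(gl.SYNC_GPU_COMMANDS_COMPLETE, 0);
    function poll() {
      var status = gl.clientWaitSync(sync, 0, 0);
      if (status === gl.ALREADY_SIGNALED || status === gl.CONDITION_SATISFIED) {
        gl.deleteSync(sync);
        var pixels = new Uint8Array(width * height * 4);
        gl.bindBuffer(gl.PIXEL_PACK_BUFFER, buf);
        gl.getBufferSubData(gl.PIXEL_PACK_BUFFER, 0, pixels);
        // ... use pixels ...
      } else {
        requestAnimationFrame(poll);
      }
    }
    requestAnimationFrame(poll);

    // (b) Draft extension: the same copy expressed as a promise; the application supplies
    //     the destination array, as described above.
    var ext = gl.getExtension('WEBGL_get_buffer_sub_data_async');
    var out = new Uint8Array(width * height * 4);
    gl.bindBuffer(gl.PIXEL_PACK_BUFFER, buf);
    ext.getBufferSubDataAsync(gl.PIXEL_PACK_BUFFER, 0, out).then(function () {
      // ... use out ...
    });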
>>>>>>> >>>>>>> In addition, it would lead to allocating promises once or many times >>>>>>> per frame, and since tracking would be required in some instances, would >>>>>>> also lead to allocating a closure once or many times a call. An issue that >>>>>>> map buffer range does not exhibit. >>>>>>> >>>>>>> Due to the lack of discussion of this feature, I believe a great >>>>>>> disservice is done to WebGL 2 by the introduction of these ideas/APIs and I >>>>>>> strongly suggest to withdraw this from draft immediately and go back to the >>>>>>> drawing board. >>>>>>> >>>>>>> >>>>>>> >>>>>>> On Mon, Jan 2, 2017 at 5:48 PM, Florian B?sch >>>>>>> wrote: >>>>>>> >>>>>>> This extension https://www.khronos.org/registry/webgl/extensions/ >>>>>>> WEBGL_get_buffer_sub_data_async/ has been introduced and elevated >>>>>>> to draft without any public discussion. >>>>>>> >>>>>>> In a nutshell it proposes a new WebGL2 function called >>>>>>> getBufferSubDataAsync which returns a promise that will be called >>>>>>> eventually with the buffer data. >>>>>>> >>>>>>> I think there are several problems: >>>>>>> >>>>>>> 1. The extension process states that "*Extensions move through >>>>>>> four states during their development: proposed, draft, community approved, >>>>>>> and Khronos ratified**"*. This extension never moved through the >>>>>>> proposal stage. >>>>>>> 2. The extension introduces promises to the WebGL API. This >>>>>>> requires a more fundamental discussion. >>>>>>> 3. A discussion if this extension is required if WebWorkers can >>>>>>> access the same context as the main thread has not happened. >>>>>>> >>>>>>> This extension should be in proposal status, and the necessary >>>>>>> discussions should happen first. >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Wed Jan 11 22:00:11 2017 From: khr...@ (Mark Callow) Date: Thu, 12 Jan 2017 15:00:11 +0900 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> <7e1d0ebac1504572a2ea25c1e4e2072c@ukmail101.nvidia.com> <104e30463c654e2f9b5684001ba2e894@ukmail101.nvidia.com> Message-ID: <1C8BE270-01E8-4D9F-BDB9-A0B05FE1470F@callow.im> If we modify the OVR_multiview stuff so it can work in WebGL 1.0, as suggested by Rafael, then is it really necessary to add DrawBuffer to support stereo default framebuffers? Can?t a stereo default framebuffer (i.e. canvas) be presented to the WebGL app as a texture array? Then apps will only need one way to deal with multiple views regardless of whether they are drawing to the default or an app created framebuffer and we can remove all that DrawBuffer stuff targeted for core WebGL whose interactions with OVR_multiview have so far managed to confuse quite a few very bright people. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... 
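A minimal sketch of the texture-array path referred to above, assuming the framebufferTextureMultiviewWEBGL entry point and tokens named earlier in the thread, a WebGL 2.0 context gl and per-eye dimensions; the argument order mirrors the native FramebufferTextureMultiviewOVR and is illustrative only:

    // Sketch only: one 2D texture array layer per view, attached as a multiview target.
    var ext = gl.getExtension('WEBGL_multiview');

    var colorTex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D_ARRAY, colorTex);
    gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 1, gl.RGBA8, eyeWidth, eyeHeight, 2);

    var fbo = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
    // level 0, baseViewIndex 0, numViews 2: layers 0 and 1 become views 0 and 1.
    ext.framebufferTextureMultiviewWEBGL(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                                         colorTex, 0, 0, 2);

    if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) ===
        ext.FRAMEBUFFER_INCOMPLETE_VIEW_TARGETS_OVR) {
      // numViews mismatch across attachments, per the native OVR_multiview rule
      // quoted by Olli above.
    }

    // One draw call now renders into both layers, with gl_ViewID_OVR selecting the
    // per-view transform in the shader. Under the suggestion above, a stereo canvas would
    // be presented to the application in the same way, as a two-layer array, removing the
    // need for a separate drawBuffer() path.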
URL: From khr...@ Wed Jan 11 22:43:05 2017 From: khr...@ (Mark Callow) Date: Thu, 12 Jan 2017 15:43:05 +0900 Subject: [Public WebGL] OES_EGL_image_external Extension proposal In-Reply-To: References: <926B7F74-9102-459E-A68D-7EB7DCCEC867@callow.im> <070A9DFF-58D1-4A9D-8C5B-AF1124151A62@callow.im> Message-ID: <1303E4E0-9E39-4439-8AEB-FB86F1922E67@callow.im> > On Jan 11, 2017, at 2:16, Byungseon Shin wrote: > > Hi Mark, > > Thanks for the in-depth feedback. > I agree your opinions and have changed the logic by your guide. > > I have created a new patch and please review it again. > https://github.com/KhronosGroup/WebGL/pull/2243/commits/c8193b5f1ffb37881bb815bd98b4da91e516828d > > About the OES_EGL_image_external_essl3, we will follow up soon. > Thanks updating the spec. I am not comfortable with calling this OES_EGL_image_external and especially not with reusing the function name EGLImageTargetTexture2DOES. I think you are really wrapping the OES extension NV_EGL_stream_consumer_external . This is actually what WEBGL_dynamic_texture is wrapping. I also wonder if using the same function for initial setup and for latching each frame is a good idea. The work to be done will be different so there should be different functions. I think it would be better to go back to WEBGL_dynamic_texture, simplify it a bit by changing its WDTStream class to be a bit more like Android?s SurfaceTexture and adding the essl3 support. I know at first glance WDT looks way more complex than this current proposal but that is because it documents precise detailed behavior while the current proposal still leaves a lot to the imagination. The only real functional difference is the presence of a call to set the presentation time of the drawing buffer. This function is also the area of greatest concern for browser vendors. I note though that Android?s MediaCodec class provides this capability via its releaseOutputBuffer(bufferId, timestamp) method. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Thu Jan 12 19:30:49 2017 From: khr...@ (Mark Callow) Date: Fri, 13 Jan 2017 12:30:49 +0900 Subject: [Public WebGL] OES_EGL_image_external Extension proposal In-Reply-To: <1303E4E0-9E39-4439-8AEB-FB86F1922E67@callow.im> References: <926B7F74-9102-459E-A68D-7EB7DCCEC867@callow.im> <070A9DFF-58D1-4A9D-8C5B-AF1124151A62@callow.im> <1303E4E0-9E39-4439-8AEB-FB86F1922E67@callow.im> Message-ID: > On Jan 12, 2017, at 15:43, Mark Callow wrote: > > The only real functional difference is the presence of a call to set the presentation time of the drawing buffer. This function is also the area of greatest concern for browser vendors. Oops! And the query to return the time at which the last frame was presented. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Thu Jan 12 23:33:09 2017 From: khr...@ (Mark Callow) Date: Fri, 13 Jan 2017 16:33:09 +0900 Subject: [Public WebGL] OES_EGL_image_external Extension proposal In-Reply-To: <1303E4E0-9E39-4439-8AEB-FB86F1922E67@callow.im> References: <926B7F74-9102-459E-A68D-7EB7DCCEC867@callow.im> <070A9DFF-58D1-4A9D-8C5B-AF1124151A62@callow.im> <1303E4E0-9E39-4439-8AEB-FB86F1922E67@callow.im> Message-ID: <3AAFCAFA-680B-4560-A44C-0D76C7380341@callow.im> > On Jan 12, 2017, at 15:43, Mark Callow wrote: > > I note though that Android?s MediaCodec class provides this capability via its releaseOutputBuffer(bufferId, timestamp) method. 
FWIW, it looks like the behavior of this MediaCodec method is only specified when the codec is connected to a SurfaceView. Nothing is written about the behavior when connected to a SurfaceTexture. This makes sense. Since the application will be responsible for rendering the texture image into the scene, specifying the timestamp to the codec is too early in the chain. Presentation time needs to be specified to the compositor or buffer swapper. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Sat Jan 14 00:09:35 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Sat, 14 Jan 2017 09:09:35 +0100 Subject: [Public WebGL] Add minimums to the WebGL2 specification Message-ID: I'd suggest adding a table of the minimum values for all max parameters of WebGL2 (including the parameters taken over from WebGL1) to the WebGL2 specification. Where reasonable, this table should exceed the ES 3 mandated minimums, which in practice are just never observed. It would be a convenience for WebGL2 practitioners not to have to consult the glGet page on https://www.khronos.org/opengles/sdk/docs/man3/ to figure out those minimums, and if minimums that are never seen in practice are removed, it aids practitioners' decision making about real-world values. I will be adding display parameters in WebGL2 to WebGL Stats very soon. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chr...@ Sat Jan 14 04:11:22 2017 From: chr...@ (Christophe Riccio) Date: Sat, 14 Jan 2017 13:11:22 +0100 Subject: [Public WebGL] Promote WEBGL_compressed_texture_s3tc_srgb to community approved Message-ID: Hi, We have been working with Google on WEBGL_compressed_texture_s3tc_srgb to support linear rendering without compromises either in texture quality (storing color in linear directly) or performance (doing the gamma to linear conversion in the shader). This extension is now supported by Chrome and we are looking forward to seeing other implementations, as this is a big jump in quality for us. Also, we would like to suggest that the WebGL group promote WEBGL_compressed_texture_s3tc_srgb to community approved. Thanks, Christophe -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Sat Jan 14 05:15:15 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Sat, 14 Jan 2017 14:15:15 +0100 Subject: [Public WebGL] Promote WEBGL_compressed_texture_s3tc_srgb to community approved In-Reply-To: References: Message-ID: On Sat, Jan 14, 2017 at 1:11 PM, Christophe Riccio < christophe.riccio...@> wrote: > (storing color in linear directly) > sRGB stores nonlinear color. You write linear into gl_FragColor, but what is stored is nonlinear. You just don't do the conversion yourself. The performance impact of a gamma conversion prior to writing to gl_FragColor is negligible. What isn't negligible is the effect of blending, which you don't mention. Blending in nonlinear space produces incorrect results. However, with the sRGB format the conversion is done inside the blending stage, and therefore blending is done in linear space and only after blending is the result serialized to nonlinear space. -------------- next part -------------- An HTML attachment was scrubbed...
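To make the quality/performance point concrete, here is a minimal sketch (illustrative only, not taken from the extension text) of uploading an sRGB-encoded DXT5 texture through the new extension; gl, width, height and the compressed payload dxt5Data are assumed to exist already.

var ext = gl.getExtension('WEBGL_compressed_texture_s3tc_srgb');
if (ext) {
  var tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  // The sampler performs the sRGB-to-linear decode, so shading can stay in
  // linear space without a per-fetch pow() and without storing linear data
  // (and losing precision) in the compressed texture.
  gl.compressedTexImage2D(gl.TEXTURE_2D, 0, ext.COMPRESSED_SRGB_ALPHA_S3TC_DXT5_EXT,
      width, height, 0, dxt5Data);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
} else {
  // Fall back to plain WEBGL_compressed_texture_s3tc and convert in the shader.
}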
URL: From khr...@ Sat Jan 14 05:34:29 2017 From: khr...@ (Mark Callow) Date: Sat, 14 Jan 2017 22:34:29 +0900 Subject: [Public WebGL] Promote WEBGL_compressed_texture_s3tc_srgb to community approved In-Reply-To: References: Message-ID: > On Jan 14, 2017, at 22:15, Florian B?sch wrote: > > On Sat, Jan 14, 2017 at 1:11 PM, Christophe Riccio > wrote: > (storing color in linear directly) > sRGB stores nonlinear color. You write linear into gl_FragColor, but stored is nonlinear. You just don't do the conversion. Performance impact of a gamma conversion prior to writing to gl_FragColor is neglible. What isn't neglible is the effects of blending, which you don't mention. Blending in nonlinear space produces incorrect results. However the conversion with the sRGB format is done inside the blending stage, and therefore blending is done in linear space and only after blending it's serialized to nonlinear space. I think Christophe knows all that. His comment, which admittedly made me pause for a moment too, refers to the loss of quality that occurs when you have to store linear data in your texture, i.e. what happens if only WEBGL_compressed_texture_s3tc is available. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From chr...@ Sat Jan 14 08:55:52 2017 From: chr...@ (Christophe Riccio) Date: Sat, 14 Jan 2017 17:55:52 +0100 Subject: [Public WebGL] Promote WEBGL_compressed_texture_s3tc_srgb to community approved In-Reply-To: References: Message-ID: Yes what Mark said. :) On Sat, Jan 14, 2017 at 2:34 PM, Mark Callow wrote: > > On Jan 14, 2017, at 22:15, Florian B?sch wrote: > > On Sat, Jan 14, 2017 at 1:11 PM, Christophe Riccio < > christophe.riccio...@> wrote: > >> (storing color in linear directly) >> > sRGB stores nonlinear color. You write linear into gl_FragColor, but > stored is nonlinear. You just don't do the conversion. Performance impact > of a gamma conversion prior to writing to gl_FragColor is neglible. What > isn't neglible is the effects of blending, which you don't mention. > Blending in nonlinear space produces incorrect results. However the > conversion with the sRGB format is done inside the blending stage, and > therefore blending is done in linear space and only after blending it's > serialized to nonlinear space. > > > I think Christophe knows all that. His comment, which admittedly made me > pause for a moment too, refers to the loss of quality that occurs when you > have to store linear data in your texture, i.e. what happens if only > WEBGL_compressed_texture_s3tc is available. > > Regards > > -Mark > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Sat Jan 14 13:58:00 2017 From: kbr...@ (Kenneth Russell) Date: Sat, 14 Jan 2017 13:58:00 -0800 Subject: [Public WebGL] Promote WEBGL_compressed_texture_s3tc_srgb to community approved In-Reply-To: References: Message-ID: Google supports moving this extension to community approved. Appreciate Unity pointing out its need, and pushing for its support. -Ken On Sat, Jan 14, 2017 at 8:55 AM, Christophe Riccio < christophe.riccio...@> wrote: > Yes what Mark said. :) > > On Sat, Jan 14, 2017 at 2:34 PM, Mark Callow wrote: > >> >> On Jan 14, 2017, at 22:15, Florian B?sch wrote: >> >> On Sat, Jan 14, 2017 at 1:11 PM, Christophe Riccio < >> christophe.riccio...@> wrote: >> >>> (storing color in linear directly) >>> >> sRGB stores nonlinear color. You write linear into gl_FragColor, but >> stored is nonlinear. You just don't do the conversion. 
Performance impact >> of a gamma conversion prior to writing to gl_FragColor is neglible. What >> isn't neglible is the effects of blending, which you don't mention. >> Blending in nonlinear space produces incorrect results. However the >> conversion with the sRGB format is done inside the blending stage, and >> therefore blending is done in linear space and only after blending it's >> serialized to nonlinear space. >> >> >> I think Christophe knows all that. His comment, which admittedly made me >> pause for a moment too, refers to the loss of quality that occurs when you >> have to store linear data in your texture, i.e. what happens if only >> WEBGL_compressed_texture_s3tc is available. >> >> Regards >> >> -Mark >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Sat Jan 14 14:45:40 2017 From: kbr...@ (Kenneth Russell) Date: Sat, 14 Jan 2017 14:45:40 -0800 Subject: [Public WebGL] Add minimums to the WebGL2 specification In-Reply-To: References: Message-ID: Thanks for the suggestion Florian. It's a good one and I filed https://github.com/KhronosGroup/WebGL/issues/2259 to track it. If you'd be willing to contribute a pull request that would be great. Initially we should just include the exact table from the OpenGL ES 3.0 spec, because that's all the WebGL 2.0 spec can guarantee initially. -Ken On Sat, Jan 14, 2017 at 12:09 AM, Florian B?sch wrote: > I'd suggest adding a table of the minimum values for all max parameters of > WebGL2 (including the taken over parameters from WebGL1) to the WebGL2 > specification. > > Where reasonable this table should exceed the ES 3 mandated minimums that > in practice are just never observed. > > It would be a convenience for WebGL2 practitioners to not have to consult > the glGet on https://www.khronos.org/opengles/sdk/docs/man3/ to figure > out those minimums, and if in practice never seen minimums are removed, it > aides the decision making of practitioners for real-world values. > > I will be adding display parameters in WebGL2 to WebGL Stats very soon. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Sat Jan 14 15:28:54 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Sun, 15 Jan 2017 00:28:54 +0100 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: <1C8BE270-01E8-4D9F-BDB9-A0B05FE1470F@callow.im> References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> <7e1d0ebac1504572a2ea25c1e4e2072c@ukmail101.nvidia.com> <104e30463c654e2f9b5684001ba2e894@ukmail101.nvidia.com> <1C8BE270-01E8-4D9F-BDB9-A0B05FE1470F@callow.im> Message-ID: Why are the disjoint timer query and multiview extensions mutually exclusive? Also why are they mutually exclusive upon extension activation and not on use? On Thu, Jan 12, 2017 at 7:00 AM, Mark Callow wrote: > > If we modify the OVR_multiview stuff so it can work in WebGL 1.0, as > suggested by Rafael, then is it really necessary to add DrawBuffer to > support stereo default framebuffers? Can?t a stereo default framebuffer > (i.e. canvas) be presented to the WebGL app as a texture array? 
> > Then apps will only need one way to deal with multiple views regardless of > whether they are drawing to the default or an app created framebuffer and > we can remove all that DrawBuffer stuff targeted for core WebGL whose > interactions with OVR_multiview have so far managed to confuse quite a few > very bright people. > > Regards > > -Mark > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From art...@ Mon Jan 16 08:37:29 2017 From: art...@ (Mr F) Date: Mon, 16 Jan 2017 19:37:29 +0300 Subject: [Public WebGL] Resolve MSAA depth? In-Reply-To: References: Message-ID: So it seems like ANGLE sometimes implements blit on CPU (!): https://github.com/google/angle/blob/master/src/libANGLE/renderer/d3d/d3d11/Blit11.cpp#L240 Which gives huge slowdowns without really telling why and being able to predict it. If on some platform some blit is not possible, maybe we should just throw an error, disable it, make it an extension or something? CPU blit is pure evil. On 1 December 2016 at 15:50, Mr F wrote: > OK, sorry for bothering - must be my mistake somewhere. Made it work while > preparing the test case :) > > On 30 November 2016 at 23:03, Jamie Madill wrote: > >> Yes, a reproduction / test case would be awesome, thanks. >> >> On Tue, Nov 29, 2016 at 9:20 PM, Jeff Gilbert >> wrote: >> >>> FWIW, the spec says pretty explicitly that BlitFramebuffer should be >>> functional for depth->depth blitting, as long as the formats match, >>> etc etc. >>> >>> GLES 3.0.5 p198: >>> If the >>> source formats are depth values, sample values are resolved in an >>> implementation- >>> dependent manner where the result will be between the minimum and maximum >>> depth values in the pixel. >>> >>> On Tue, Nov 29, 2016 at 8:16 PM, Kenneth Russell wrote: >>> > Could you make any test case available? Do you have any other platforms >>> > (Mac, Linux) to test on? You could also try launching Chrome with the >>> > command line argument --use-angle=gl to see the difference between >>> ANGLE's >>> > D3D11 and OpenGL backends. >>> > >>> > >>> > On Tue, Nov 29, 2016 at 12:42 PM, Mr F wrote: >>> >> >>> >> I'll try to prepare a clean test case soon. I'm on the latest Chrome >>> >> Canary/Win 7/GTX970. Both source and destination are 32F. What I mean >>> by >>> >> "blank" actually looks like a solid red color, which seems to be >>> uniform >>> >> 1,0,0, no matter how close/far I'm from any geometry.It's good to >>> know that >>> >> it is supposed to work - maybe just my mistake somewhere, although I >>> tested >>> >> it heavily. The source buffer is created and attached using >>> >> >>> >> renderbufferStorageMultisample(gl.RENDERBUFFER, 4, >>> gl.DEPTH_COMPONENT32F, >>> >> width, height) >>> >> framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, >>> >> gl.RENDERBUFFER, msaaBuffer) >>> >> >>> >> and the destination created and attached to another framebufer with >>> >> >>> >> texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT32F, width, height, 0, >>> >> gl.DEPTH_COMPONENT, gl.FLOAT, null) >>> >> framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, >>> gl.TEXTURE_2D, >>> >> tex, 0) >>> >> >>> >> The blit is performed like this: >>> >> bindFramebuffer(gl.READ_FRAMEBUFFER, msaaFBO) >>> >> bindFramebuffer(gl.DRAW_FRAMEBUFFER, resolveFBO) >>> >> blitFramebuffer(0, 0, width, height, 0, 0, width, height, >>> >> gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT, gl.NEAREST) >>> >> >>> >> On 29 November 2016 at 17:51, Jamie Madill >>> wrote: >>> >>> >>> >>> This generally should work. 
There are tests for this in dEQP's >>> >>> dEQP-GLES3.functional.fbo.msaa and also in the WebGL 2-specific >>> tests in >>> >>> gles3/fbomultisample and I think also fboinvalidate/sub. >>> >>> >>> >>> Note some of these tests are marked as failing on Intel on pretty >>> much >>> >>> all platforms: >>> >>> >>> >>> https://cs.chromium.org/chromium/src/content/test/gpu/gpu_te >>> sts/webgl2_conformance_expectations.py?q=webgl2_conform&sq= >>> package:chromium&l=118 >>> >>> >>> >>> Mr F, what platform are you on, and which vendor? >>> >>> >>> >>> On Mon, Nov 28, 2016 at 10:34 PM, Kenneth Russell >>> wrote: >>> >>>> >>> >>>> I'm pretty sure this should either work, or generate an OpenGL >>> error. >>> >>>> OpenGL ES 3.0.5 section 4.3.3 "Copying Pixels" says BlitFramebuffer >>> >>>> generates INVALID_OPERATION if the mask includes DEPTH_BUFFER_BIT or >>> >>>> STENCIL_BUFFER_BIT, and the source and destination depth and >>> stencil buffer >>> >>>> formats don't match. >>> >>>> >>> >>>> It looks like there are extensive tests of resolving multisampled >>> depth >>> >>>> buffers to single-sampled ones, but I'm not sure about resolving to >>> a depth >>> >>>> texture. Do you have a test case? What platform are you on (can you >>> provide >>> >>>> about:gpu if you're running Chrome)? >>> >>>> >>> >>>> -Ken >>> >>>> >>> >>>> >>> >>>> >>> >>>> On Sat, Nov 26, 2016 at 12:26 PM, Mr F >>> wrote: >>> >>>>> >>> >>>>> The docs are a bit unclear on that. Is it possible to use >>> >>>>> blitFramebuffer to get current MSAA depth buffer into a non-MSAA >>> depth >>> >>>>> texture? The color buffer is resolved nicely, however the depth is >>> just >>> >>>>> blank, and no error is generated - wondering if it's the expected >>> behaviour. >>> >>>>> Depth format is DEPTH_COMPONENT32F. >>> >>>> >>> >>>> >>> >>> >>> >> >>> > >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From art...@ Tue Jan 17 11:39:33 2017 From: art...@ (Mr F) Date: Tue, 17 Jan 2017 22:39:33 +0300 Subject: [Public WebGL] Resolve MSAA depth? In-Reply-To: References: Message-ID: I understand depth/stencil complications. Are you sure 32F depth blits are never CPU-bound? There's a weird issue right now with it taking large chunk of performance, although only in FF Nightly (Chrome is fine). I wonder if some older versions of ANGLE did CPU copy for D32F? On 17 January 2017 at 22:28, Jamie Madill wrote: > Blit is core in GLES 3, so we need to emulate some of the depth/stencil > blits to support GLES 3 at all. Such is the nature of what ANGLE does. > > It's something that is predictable from ANGLE's point of view, but I > understand it might be unclear from the app's point of view. Stencil bits > in particular are difficult, and multisample depth/stencil. We're working > on making this more visible - one thing is that it'll show up in Chrome's > about:gpu page. Ideally it could show up in some debugging log visible to > javascript. > > On Mon, Jan 16, 2017 at 11:37 AM, Mr F wrote: > >> So it seems like ANGLE sometimes implements blit on CPU (!): >> https://github.com/google/angle/blob/master/src/libANGL >> E/renderer/d3d/d3d11/Blit11.cpp#L240 >> >> Which gives huge slowdowns without really telling why and being able to >> predict it. If on some platform some blit is not possible, maybe we should >> just throw an error, disable it, make it an extension or something? CPU >> blit is pure evil. >> >> On 1 December 2016 at 15:50, Mr F wrote: >> >>> OK, sorry for bothering - must be my mistake somewhere. 
Made it work >>> while preparing the test case :) >>> >>> On 30 November 2016 at 23:03, Jamie Madill wrote: >>> >>>> Yes, a reproduction / test case would be awesome, thanks. >>>> >>>> On Tue, Nov 29, 2016 at 9:20 PM, Jeff Gilbert >>>> wrote: >>>> >>>>> FWIW, the spec says pretty explicitly that BlitFramebuffer should be >>>>> functional for depth->depth blitting, as long as the formats match, >>>>> etc etc. >>>>> >>>>> GLES 3.0.5 p198: >>>>> If the >>>>> source formats are depth values, sample values are resolved in an >>>>> implementation- >>>>> dependent manner where the result will be between the minimum and >>>>> maximum >>>>> depth values in the pixel. >>>>> >>>>> On Tue, Nov 29, 2016 at 8:16 PM, Kenneth Russell >>>>> wrote: >>>>> > Could you make any test case available? Do you have any other >>>>> platforms >>>>> > (Mac, Linux) to test on? You could also try launching Chrome with the >>>>> > command line argument --use-angle=gl to see the difference between >>>>> ANGLE's >>>>> > D3D11 and OpenGL backends. >>>>> > >>>>> > >>>>> > On Tue, Nov 29, 2016 at 12:42 PM, Mr F >>>>> wrote: >>>>> >> >>>>> >> I'll try to prepare a clean test case soon. I'm on the latest Chrome >>>>> >> Canary/Win 7/GTX970. Both source and destination are 32F. What I >>>>> mean by >>>>> >> "blank" actually looks like a solid red color, which seems to be >>>>> uniform >>>>> >> 1,0,0, no matter how close/far I'm from any geometry.It's good to >>>>> know that >>>>> >> it is supposed to work - maybe just my mistake somewhere, although >>>>> I tested >>>>> >> it heavily. The source buffer is created and attached using >>>>> >> >>>>> >> renderbufferStorageMultisample(gl.RENDERBUFFER, 4, >>>>> gl.DEPTH_COMPONENT32F, >>>>> >> width, height) >>>>> >> framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, >>>>> >> gl.RENDERBUFFER, msaaBuffer) >>>>> >> >>>>> >> and the destination created and attached to another framebufer with >>>>> >> >>>>> >> texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT32F, width, height, >>>>> 0, >>>>> >> gl.DEPTH_COMPONENT, gl.FLOAT, null) >>>>> >> framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, >>>>> gl.TEXTURE_2D, >>>>> >> tex, 0) >>>>> >> >>>>> >> The blit is performed like this: >>>>> >> bindFramebuffer(gl.READ_FRAMEBUFFER, msaaFBO) >>>>> >> bindFramebuffer(gl.DRAW_FRAMEBUFFER, resolveFBO) >>>>> >> blitFramebuffer(0, 0, width, height, 0, 0, width, height, >>>>> >> gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT, gl.NEAREST) >>>>> >> >>>>> >> On 29 November 2016 at 17:51, Jamie Madill >>>>> wrote: >>>>> >>> >>>>> >>> This generally should work. There are tests for this in dEQP's >>>>> >>> dEQP-GLES3.functional.fbo.msaa and also in the WebGL 2-specific >>>>> tests in >>>>> >>> gles3/fbomultisample and I think also fboinvalidate/sub. >>>>> >>> >>>>> >>> Note some of these tests are marked as failing on Intel on pretty >>>>> much >>>>> >>> all platforms: >>>>> >>> >>>>> >>> https://cs.chromium.org/chromium/src/content/test/gpu/gpu_te >>>>> sts/webgl2_conformance_expectations.py?q=webgl2_conform&sq=p >>>>> ackage:chromium&l=118 >>>>> >>> >>>>> >>> Mr F, what platform are you on, and which vendor? >>>>> >>> >>>>> >>> On Mon, Nov 28, 2016 at 10:34 PM, Kenneth Russell >>>>> wrote: >>>>> >>>> >>>>> >>>> I'm pretty sure this should either work, or generate an OpenGL >>>>> error. 
>>>>> >>>> OpenGL ES 3.0.5 section 4.3.3 "Copying Pixels" says >>>>> BlitFramebuffer >>>>> >>>> generates INVALID_OPERATION if the mask includes DEPTH_BUFFER_BIT >>>>> or >>>>> >>>> STENCIL_BUFFER_BIT, and the source and destination depth and >>>>> stencil buffer >>>>> >>>> formats don't match. >>>>> >>>> >>>>> >>>> It looks like there are extensive tests of resolving multisampled >>>>> depth >>>>> >>>> buffers to single-sampled ones, but I'm not sure about resolving >>>>> to a depth >>>>> >>>> texture. Do you have a test case? What platform are you on (can >>>>> you provide >>>>> >>>> about:gpu if you're running Chrome)? >>>>> >>>> >>>>> >>>> -Ken >>>>> >>>> >>>>> >>>> >>>>> >>>> >>>>> >>>> On Sat, Nov 26, 2016 at 12:26 PM, Mr F >>>>> wrote: >>>>> >>>>> >>>>> >>>>> The docs are a bit unclear on that. Is it possible to use >>>>> >>>>> blitFramebuffer to get current MSAA depth buffer into a non-MSAA >>>>> depth >>>>> >>>>> texture? The color buffer is resolved nicely, however the depth >>>>> is just >>>>> >>>>> blank, and no error is generated - wondering if it's the >>>>> expected behaviour. >>>>> >>>>> Depth format is DEPTH_COMPONENT32F. >>>>> >>>> >>>>> >>>> >>>>> >>> >>>>> >> >>>>> > >>>>> >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From art...@ Tue Jan 17 13:23:23 2017 From: art...@ (Mr F) Date: Wed, 18 Jan 2017 00:23:23 +0300 Subject: [Public WebGL] Resolve MSAA depth? In-Reply-To: References: Message-ID: I'm talking about multisampled blit though On 17 January 2017 at 23:03, Jamie Madill wrote: > Non-multisample depth blit by itself should use a shader.. but we'd need a > small repro case to verify. Let me know if you can pull one together and > I'll look. > > On Tue, Jan 17, 2017 at 2:39 PM, Mr F wrote: > >> I understand depth/stencil complications. Are you sure 32F depth blits >> are never CPU-bound? There's a weird issue right now with it taking large >> chunk of performance, although only in FF Nightly (Chrome is fine). I >> wonder if some older versions of ANGLE did CPU copy for D32F? >> >> On 17 January 2017 at 22:28, Jamie Madill wrote: >> >>> Blit is core in GLES 3, so we need to emulate some of the depth/stencil >>> blits to support GLES 3 at all. Such is the nature of what ANGLE does. >>> >>> It's something that is predictable from ANGLE's point of view, but I >>> understand it might be unclear from the app's point of view. Stencil bits >>> in particular are difficult, and multisample depth/stencil. We're working >>> on making this more visible - one thing is that it'll show up in Chrome's >>> about:gpu page. Ideally it could show up in some debugging log visible to >>> javascript. >>> >>> On Mon, Jan 16, 2017 at 11:37 AM, Mr F wrote: >>> >>>> So it seems like ANGLE sometimes implements blit on CPU (!): >>>> https://github.com/google/angle/blob/master/src/libANGL >>>> E/renderer/d3d/d3d11/Blit11.cpp#L240 >>>> >>>> Which gives huge slowdowns without really telling why and being able to >>>> predict it. If on some platform some blit is not possible, maybe we should >>>> just throw an error, disable it, make it an extension or something? CPU >>>> blit is pure evil. >>>> >>>> On 1 December 2016 at 15:50, Mr F wrote: >>>> >>>>> OK, sorry for bothering - must be my mistake somewhere. Made it work >>>>> while preparing the test case :) >>>>> >>>>> On 30 November 2016 at 23:03, Jamie Madill >>>>> wrote: >>>>> >>>>>> Yes, a reproduction / test case would be awesome, thanks. 
>>>>>> >>>>>> On Tue, Nov 29, 2016 at 9:20 PM, Jeff Gilbert >>>>>> wrote: >>>>>> >>>>>>> FWIW, the spec says pretty explicitly that BlitFramebuffer should be >>>>>>> functional for depth->depth blitting, as long as the formats match, >>>>>>> etc etc. >>>>>>> >>>>>>> GLES 3.0.5 p198: >>>>>>> If the >>>>>>> source formats are depth values, sample values are resolved in an >>>>>>> implementation- >>>>>>> dependent manner where the result will be between the minimum and >>>>>>> maximum >>>>>>> depth values in the pixel. >>>>>>> >>>>>>> On Tue, Nov 29, 2016 at 8:16 PM, Kenneth Russell >>>>>>> wrote: >>>>>>> > Could you make any test case available? Do you have any other >>>>>>> platforms >>>>>>> > (Mac, Linux) to test on? You could also try launching Chrome with >>>>>>> the >>>>>>> > command line argument --use-angle=gl to see the difference between >>>>>>> ANGLE's >>>>>>> > D3D11 and OpenGL backends. >>>>>>> > >>>>>>> > >>>>>>> > On Tue, Nov 29, 2016 at 12:42 PM, Mr F >>>>>>> wrote: >>>>>>> >> >>>>>>> >> I'll try to prepare a clean test case soon. I'm on the latest >>>>>>> Chrome >>>>>>> >> Canary/Win 7/GTX970. Both source and destination are 32F. What I >>>>>>> mean by >>>>>>> >> "blank" actually looks like a solid red color, which seems to be >>>>>>> uniform >>>>>>> >> 1,0,0, no matter how close/far I'm from any geometry.It's good to >>>>>>> know that >>>>>>> >> it is supposed to work - maybe just my mistake somewhere, >>>>>>> although I tested >>>>>>> >> it heavily. The source buffer is created and attached using >>>>>>> >> >>>>>>> >> renderbufferStorageMultisample(gl.RENDERBUFFER, 4, >>>>>>> gl.DEPTH_COMPONENT32F, >>>>>>> >> width, height) >>>>>>> >> framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, >>>>>>> >> gl.RENDERBUFFER, msaaBuffer) >>>>>>> >> >>>>>>> >> and the destination created and attached to another framebufer >>>>>>> with >>>>>>> >> >>>>>>> >> texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT32F, width, >>>>>>> height, 0, >>>>>>> >> gl.DEPTH_COMPONENT, gl.FLOAT, null) >>>>>>> >> framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, >>>>>>> gl.TEXTURE_2D, >>>>>>> >> tex, 0) >>>>>>> >> >>>>>>> >> The blit is performed like this: >>>>>>> >> bindFramebuffer(gl.READ_FRAMEBUFFER, msaaFBO) >>>>>>> >> bindFramebuffer(gl.DRAW_FRAMEBUFFER, resolveFBO) >>>>>>> >> blitFramebuffer(0, 0, width, height, 0, 0, width, height, >>>>>>> >> gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT, gl.NEAREST) >>>>>>> >> >>>>>>> >> On 29 November 2016 at 17:51, Jamie Madill >>>>>>> wrote: >>>>>>> >>> >>>>>>> >>> This generally should work. There are tests for this in dEQP's >>>>>>> >>> dEQP-GLES3.functional.fbo.msaa and also in the WebGL 2-specific >>>>>>> tests in >>>>>>> >>> gles3/fbomultisample and I think also fboinvalidate/sub. >>>>>>> >>> >>>>>>> >>> Note some of these tests are marked as failing on Intel on >>>>>>> pretty much >>>>>>> >>> all platforms: >>>>>>> >>> >>>>>>> >>> https://cs.chromium.org/chromium/src/content/test/gpu/gpu_te >>>>>>> sts/webgl2_conformance_expectations.py?q=webgl2_conform&sq=p >>>>>>> ackage:chromium&l=118 >>>>>>> >>> >>>>>>> >>> Mr F, what platform are you on, and which vendor? >>>>>>> >>> >>>>>>> >>> On Mon, Nov 28, 2016 at 10:34 PM, Kenneth Russell < >>>>>>> kbr...@> wrote: >>>>>>> >>>> >>>>>>> >>>> I'm pretty sure this should either work, or generate an OpenGL >>>>>>> error. 
>>>>>>> >>>> OpenGL ES 3.0.5 section 4.3.3 "Copying Pixels" says >>>>>>> BlitFramebuffer >>>>>>> >>>> generates INVALID_OPERATION if the mask includes >>>>>>> DEPTH_BUFFER_BIT or >>>>>>> >>>> STENCIL_BUFFER_BIT, and the source and destination depth and >>>>>>> stencil buffer >>>>>>> >>>> formats don't match. >>>>>>> >>>> >>>>>>> >>>> It looks like there are extensive tests of resolving >>>>>>> multisampled depth >>>>>>> >>>> buffers to single-sampled ones, but I'm not sure about >>>>>>> resolving to a depth >>>>>>> >>>> texture. Do you have a test case? What platform are you on (can >>>>>>> you provide >>>>>>> >>>> about:gpu if you're running Chrome)? >>>>>>> >>>> >>>>>>> >>>> -Ken >>>>>>> >>>> >>>>>>> >>>> >>>>>>> >>>> >>>>>>> >>>> On Sat, Nov 26, 2016 at 12:26 PM, Mr F >>>>>>> wrote: >>>>>>> >>>>> >>>>>>> >>>>> The docs are a bit unclear on that. Is it possible to use >>>>>>> >>>>> blitFramebuffer to get current MSAA depth buffer into a >>>>>>> non-MSAA depth >>>>>>> >>>>> texture? The color buffer is resolved nicely, however the >>>>>>> depth is just >>>>>>> >>>>> blank, and no error is generated - wondering if it's the >>>>>>> expected behaviour. >>>>>>> >>>>> Depth format is DEPTH_COMPONENT32F. >>>>>>> >>>> >>>>>>> >>>> >>>>>>> >>> >>>>>>> >> >>>>>>> > >>>>>>> >>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From art...@ Tue Jan 17 13:27:28 2017 From: art...@ (Mr F) Date: Wed, 18 Jan 2017 00:27:28 +0300 Subject: [Public WebGL] Resolve MSAA depth? In-Reply-To: References: Message-ID: Here's a tiny example: http://geom.io/testResolveDepth/index.html It blits MSAA R11G11B10 + 32F depth into 2 non-multisampled textures. On 18 January 2017 at 00:23, Mr F wrote: > I'm talking about multisampled blit though > > On 17 January 2017 at 23:03, Jamie Madill wrote: > >> Non-multisample depth blit by itself should use a shader.. but we'd need >> a small repro case to verify. Let me know if you can pull one together and >> I'll look. >> >> On Tue, Jan 17, 2017 at 2:39 PM, Mr F wrote: >> >>> I understand depth/stencil complications. Are you sure 32F depth blits >>> are never CPU-bound? There's a weird issue right now with it taking large >>> chunk of performance, although only in FF Nightly (Chrome is fine). I >>> wonder if some older versions of ANGLE did CPU copy for D32F? >>> >>> On 17 January 2017 at 22:28, Jamie Madill wrote: >>> >>>> Blit is core in GLES 3, so we need to emulate some of the depth/stencil >>>> blits to support GLES 3 at all. Such is the nature of what ANGLE does. >>>> >>>> It's something that is predictable from ANGLE's point of view, but I >>>> understand it might be unclear from the app's point of view. Stencil bits >>>> in particular are difficult, and multisample depth/stencil. We're working >>>> on making this more visible - one thing is that it'll show up in Chrome's >>>> about:gpu page. Ideally it could show up in some debugging log visible to >>>> javascript. >>>> >>>> On Mon, Jan 16, 2017 at 11:37 AM, Mr F wrote: >>>> >>>>> So it seems like ANGLE sometimes implements blit on CPU (!): >>>>> https://github.com/google/angle/blob/master/src/libANGL >>>>> E/renderer/d3d/d3d11/Blit11.cpp#L240 >>>>> >>>>> Which gives huge slowdowns without really telling why and being able >>>>> to predict it. If on some platform some blit is not possible, maybe we >>>>> should just throw an error, disable it, make it an extension or something? >>>>> CPU blit is pure evil. 
>>>>> >>>>> On 1 December 2016 at 15:50, Mr F wrote: >>>>> >>>>>> OK, sorry for bothering - must be my mistake somewhere. Made it work >>>>>> while preparing the test case :) >>>>>> >>>>>> On 30 November 2016 at 23:03, Jamie Madill >>>>>> wrote: >>>>>> >>>>>>> Yes, a reproduction / test case would be awesome, thanks. >>>>>>> >>>>>>> On Tue, Nov 29, 2016 at 9:20 PM, Jeff Gilbert >>>>>>> wrote: >>>>>>> >>>>>>>> FWIW, the spec says pretty explicitly that BlitFramebuffer should be >>>>>>>> functional for depth->depth blitting, as long as the formats match, >>>>>>>> etc etc. >>>>>>>> >>>>>>>> GLES 3.0.5 p198: >>>>>>>> If the >>>>>>>> source formats are depth values, sample values are resolved in an >>>>>>>> implementation- >>>>>>>> dependent manner where the result will be between the minimum and >>>>>>>> maximum >>>>>>>> depth values in the pixel. >>>>>>>> >>>>>>>> On Tue, Nov 29, 2016 at 8:16 PM, Kenneth Russell >>>>>>>> wrote: >>>>>>>> > Could you make any test case available? Do you have any other >>>>>>>> platforms >>>>>>>> > (Mac, Linux) to test on? You could also try launching Chrome with >>>>>>>> the >>>>>>>> > command line argument --use-angle=gl to see the difference >>>>>>>> between ANGLE's >>>>>>>> > D3D11 and OpenGL backends. >>>>>>>> > >>>>>>>> > >>>>>>>> > On Tue, Nov 29, 2016 at 12:42 PM, Mr F >>>>>>>> wrote: >>>>>>>> >> >>>>>>>> >> I'll try to prepare a clean test case soon. I'm on the latest >>>>>>>> Chrome >>>>>>>> >> Canary/Win 7/GTX970. Both source and destination are 32F. What I >>>>>>>> mean by >>>>>>>> >> "blank" actually looks like a solid red color, which seems to be >>>>>>>> uniform >>>>>>>> >> 1,0,0, no matter how close/far I'm from any geometry.It's good >>>>>>>> to know that >>>>>>>> >> it is supposed to work - maybe just my mistake somewhere, >>>>>>>> although I tested >>>>>>>> >> it heavily. The source buffer is created and attached using >>>>>>>> >> >>>>>>>> >> renderbufferStorageMultisample(gl.RENDERBUFFER, 4, >>>>>>>> gl.DEPTH_COMPONENT32F, >>>>>>>> >> width, height) >>>>>>>> >> framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, >>>>>>>> >> gl.RENDERBUFFER, msaaBuffer) >>>>>>>> >> >>>>>>>> >> and the destination created and attached to another framebufer >>>>>>>> with >>>>>>>> >> >>>>>>>> >> texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT32F, width, >>>>>>>> height, 0, >>>>>>>> >> gl.DEPTH_COMPONENT, gl.FLOAT, null) >>>>>>>> >> framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, >>>>>>>> gl.TEXTURE_2D, >>>>>>>> >> tex, 0) >>>>>>>> >> >>>>>>>> >> The blit is performed like this: >>>>>>>> >> bindFramebuffer(gl.READ_FRAMEBUFFER, msaaFBO) >>>>>>>> >> bindFramebuffer(gl.DRAW_FRAMEBUFFER, resolveFBO) >>>>>>>> >> blitFramebuffer(0, 0, width, height, 0, 0, width, height, >>>>>>>> >> gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT, gl.NEAREST) >>>>>>>> >> >>>>>>>> >> On 29 November 2016 at 17:51, Jamie Madill >>>>>>>> wrote: >>>>>>>> >>> >>>>>>>> >>> This generally should work. There are tests for this in dEQP's >>>>>>>> >>> dEQP-GLES3.functional.fbo.msaa and also in the WebGL 2-specific >>>>>>>> tests in >>>>>>>> >>> gles3/fbomultisample and I think also fboinvalidate/sub. 
>>>>>>>> >>> >>>>>>>> >>> Note some of these tests are marked as failing on Intel on >>>>>>>> pretty much >>>>>>>> >>> all platforms: >>>>>>>> >>> >>>>>>>> >>> https://cs.chromium.org/chromium/src/content/test/gpu/gpu_te >>>>>>>> sts/webgl2_conformance_expectations.py?q=webgl2_conform&sq=p >>>>>>>> ackage:chromium&l=118 >>>>>>>> >>> >>>>>>>> >>> Mr F, what platform are you on, and which vendor? >>>>>>>> >>> >>>>>>>> >>> On Mon, Nov 28, 2016 at 10:34 PM, Kenneth Russell < >>>>>>>> kbr...@> wrote: >>>>>>>> >>>> >>>>>>>> >>>> I'm pretty sure this should either work, or generate an OpenGL >>>>>>>> error. >>>>>>>> >>>> OpenGL ES 3.0.5 section 4.3.3 "Copying Pixels" says >>>>>>>> BlitFramebuffer >>>>>>>> >>>> generates INVALID_OPERATION if the mask includes >>>>>>>> DEPTH_BUFFER_BIT or >>>>>>>> >>>> STENCIL_BUFFER_BIT, and the source and destination depth and >>>>>>>> stencil buffer >>>>>>>> >>>> formats don't match. >>>>>>>> >>>> >>>>>>>> >>>> It looks like there are extensive tests of resolving >>>>>>>> multisampled depth >>>>>>>> >>>> buffers to single-sampled ones, but I'm not sure about >>>>>>>> resolving to a depth >>>>>>>> >>>> texture. Do you have a test case? What platform are you on >>>>>>>> (can you provide >>>>>>>> >>>> about:gpu if you're running Chrome)? >>>>>>>> >>>> >>>>>>>> >>>> -Ken >>>>>>>> >>>> >>>>>>>> >>>> >>>>>>>> >>>> >>>>>>>> >>>> On Sat, Nov 26, 2016 at 12:26 PM, Mr F >>>>>>>> wrote: >>>>>>>> >>>>> >>>>>>>> >>>>> The docs are a bit unclear on that. Is it possible to use >>>>>>>> >>>>> blitFramebuffer to get current MSAA depth buffer into a >>>>>>>> non-MSAA depth >>>>>>>> >>>>> texture? The color buffer is resolved nicely, however the >>>>>>>> depth is just >>>>>>>> >>>>> blank, and no error is generated - wondering if it's the >>>>>>>> expected behaviour. >>>>>>>> >>>>> Depth format is DEPTH_COMPONENT32F. >>>>>>>> >>>> >>>>>>>> >>>> >>>>>>>> >>> >>>>>>>> >> >>>>>>>> > >>>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Wed Jan 18 01:35:03 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Wed, 18 Jan 2017 10:35:03 +0100 Subject: [Public WebGL] WEBGL_multiview discussion In-Reply-To: References: <319100139c49460c83c02e3e99cc6eb2@ukmail101.nvidia.com> <0834B8B9-132C-483A-9D0A-074B89FF0138@callow.im> <7C2AD586-684A-4183-892D-4265F57C13C3@callow.im> <9c887fdb017b4a509d3ca012bc0aa5f4@ukmail101.nvidia.com> <7e1d0ebac1504572a2ea25c1e4e2072c@ukmail101.nvidia.com> <104e30463c654e2f9b5684001ba2e894@ukmail101.nvidia.com> <1C8BE270-01E8-4D9F-BDB9-A0B05FE1470F@callow.im> Message-ID: Issues I have with the underlying OVR_multiview extension: - no tessellation and geometry shaders: this isn't going to bother WebGL right now, but OpenGL ES 3.2 does introduce tessellation and geometry shaders. This issue should be resolved before WebGL gets to ES 3.2. That might seem like a lot of time now, but it's my observation that in the end time always runs out to fix all the things, so please plan for this now. - no timer queries: Timer queries are helpful in a variety of usecases. Since WebGL is FPS limited to whatever RAF does (60 or 90 in the case of WebVR), it's the only way to measure performance impact of the render loop before frames are being dropped. Dropped frames are extremely undesirable for VR. Although timer queries might have computational overhead (depending on driver/GPU), paying that cost can still be preferable to dropping frames. 
Given the unique importance of timer queries to WebVR, this issue should be resolved ASAP -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Wed Jan 18 04:53:29 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Wed, 18 Jan 2017 13:53:29 +0100 Subject: [Public WebGL] EXT_disjoint_timer_query implementation status Message-ID: The EXT_disjoint_timer_query extension seems a bit neglected: - Stats: http://webglstats.com/webgl/extension/EXT_disjoint_timer_query - Spec: https://www.khronos.org/registry/webgl/extensions/EXT_ disjoint_timer_query/ As of January 15th 2017 it only has an overall support of 45% with: - Desktop: 51% - Smartphone: 29% - Tablet: 5% - Game Consoles: 0% Chrome and Opera have major support for this Extension on any platform. Safari and Edge do not implement this extension at all. Firefox does not implement it on OSX and on all other platforms the extension is exposed with a very low percentage by Firefox. Tickets: - Webkit: https://bugs.webkit.org/show_bug.cgi?id=129090 - Edge: https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/10570105/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From chr...@ Wed Jan 18 06:11:10 2017 From: chr...@ (Christophe Riccio) Date: Wed, 18 Jan 2017 15:11:10 +0100 Subject: [Public WebGL] EXT_disjoint_timer_query implementation status In-Reply-To: References: Message-ID: On mobile OpenGL ES this extension essentially doesn't work so already "Smartphone: 29%" comes as a surprise. I am working on an OpenGL ES extension which hopefully should fix the design... On Wed, Jan 18, 2017 at 1:53 PM, Florian B?sch wrote: > The EXT_disjoint_timer_query extension seems a bit neglected: > > - Stats: http://webglstats.com/webgl/extension/EXT_disjoint_timer_query > - Spec: https://www.khronos.org/registry/webgl/extensions/EXT_ > disjoint_timer_query/ > > As of January 15th 2017 it only has an overall support of 45% with: > > - Desktop: 51% > - Smartphone: 29% > - Tablet: 5% > - Game Consoles: 0% > > Chrome and Opera have major support for this Extension on any platform. > > Safari and Edge do not implement this extension at all. Firefox does not > implement it on OSX and on all other platforms the extension is exposed > with a very low percentage by Firefox. > > Tickets: > > - Webkit: https://bugs.webkit.org/show_bug.cgi?id=129090 > - Edge: https://developer.microsoft.com/en-us/microsoft- > edge/platform/issues/10570105/ > > > -------------- next part -------------- An HTML attachment was scrubbed... 
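For reference, a minimal sketch of what the extension gives you where it is available (gl, drawScene() and the surrounding render loop are assumed); results arrive a few frames later and must be discarded when the GPU reports a disjoint interval.

var ext = gl.getExtension('EXT_disjoint_timer_query');
var query = null;

function frame() {
  if (ext && !query) {
    query = ext.createQueryEXT();
    ext.beginQueryEXT(ext.TIME_ELAPSED_EXT, query);
    drawScene();
    ext.endQueryEXT(ext.TIME_ELAPSED_EXT);
  } else {
    drawScene();
  }
  // Poll; the result typically becomes available a frame or two after the
  // query was issued.
  if (query && ext.getQueryObjectEXT(query, ext.QUERY_RESULT_AVAILABLE_EXT)) {
    if (!gl.getParameter(ext.GPU_DISJOINT_EXT)) {
      var gpuTimeNs = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_EXT); // nanoseconds
      // ... feed gpuTimeNs into whatever decides if the frame budget is at risk ...
    }
    ext.deleteQueryEXT(query);
    query = null;
  }
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);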
URL: From pya...@ Wed Jan 18 07:48:01 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Wed, 18 Jan 2017 16:48:01 +0100 Subject: [Public WebGL] Re: EXT_disjoint_timer_query implementation status In-Reply-To: References: Message-ID: Edge feedback link ignored since June 2015: https://wpdev.uservoice.com/forums/257854-microsoft-edge-developer/suggestions/8480995-implement-webgl-extension-ext-disjoint-timer-query Tweet: https://twitter.com/pyalot/status/821745889060458497 On Wed, Jan 18, 2017 at 1:53 PM, Florian B?sch wrote: > The EXT_disjoint_timer_query extension seems a bit neglected: > > - Stats: http://webglstats.com/webgl/extension/EXT_disjoint_timer_query > - Spec: https://www.khronos.org/registry/webgl/extensions/EXT_ > disjoint_timer_query/ > > As of January 15th 2017 it only has an overall support of 45% with: > > - Desktop: 51% > - Smartphone: 29% > - Tablet: 5% > - Game Consoles: 0% > > Chrome and Opera have major support for this Extension on any platform. > > Safari and Edge do not implement this extension at all. Firefox does not > implement it on OSX and on all other platforms the extension is exposed > with a very low percentage by Firefox. > > Tickets: > > - Webkit: https://bugs.webkit.org/show_bug.cgi?id=129090 > - Edge: https://developer.microsoft.com/en-us/microsoft- > edge/platform/issues/10570105/ > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From art...@ Wed Jan 18 13:02:44 2017 From: art...@ (Mr F) Date: Thu, 19 Jan 2017 00:02:44 +0300 Subject: [Public WebGL] Resolve MSAA depth? In-Reply-To: References: Message-ID: I've just posted a possible solution to this issue there. On 18 January 2017 at 01:08, Jamie Madill wrote: > Aha yes. > > Long story short, I couldn't get multisampled resolve (aka blit from > multisample to single-sampled textures) working with a shader. I spent a > fair bit of time on this. Sorry about this, but for now it's a slow path. > Feel free to comment on the issue I just opened: anglebug.com/1710. I > tried this a few different ways but always ended up with a blank texture > and nothing being written. > > Not sure what to suggest other than trying to use a non-depth texture for > the same effect if possible. > > On Tue, Jan 17, 2017 at 4:27 PM, Mr F wrote: > >> Here's a tiny example: http://geom.io/testResolveDepth/index.html >> It blits MSAA R11G11B10 + 32F depth into 2 non-multisampled textures. >> >> >> On 18 January 2017 at 00:23, Mr F wrote: >> >>> I'm talking about multisampled blit though >>> >>> On 17 January 2017 at 23:03, Jamie Madill wrote: >>> >>>> Non-multisample depth blit by itself should use a shader.. but we'd >>>> need a small repro case to verify. Let me know if you can pull one together >>>> and I'll look. >>>> >>>> On Tue, Jan 17, 2017 at 2:39 PM, Mr F wrote: >>>> >>>>> I understand depth/stencil complications. Are you sure 32F depth blits >>>>> are never CPU-bound? There's a weird issue right now with it taking large >>>>> chunk of performance, although only in FF Nightly (Chrome is fine). I >>>>> wonder if some older versions of ANGLE did CPU copy for D32F? >>>>> >>>>> On 17 January 2017 at 22:28, Jamie Madill >>>>> wrote: >>>>> >>>>>> Blit is core in GLES 3, so we need to emulate some of the >>>>>> depth/stencil blits to support GLES 3 at all. Such is the nature of what >>>>>> ANGLE does. >>>>>> >>>>>> It's something that is predictable from ANGLE's point of view, but I >>>>>> understand it might be unclear from the app's point of view. 
Stencil bits >>>>>> in particular are difficult, and multisample depth/stencil. We're working >>>>>> on making this more visible - one thing is that it'll show up in Chrome's >>>>>> about:gpu page. Ideally it could show up in some debugging log visible to >>>>>> javascript. >>>>>> >>>>>> On Mon, Jan 16, 2017 at 11:37 AM, Mr F wrote: >>>>>> >>>>>>> So it seems like ANGLE sometimes implements blit on CPU (!): >>>>>>> https://github.com/google/angle/blob/master/src/libANGL >>>>>>> E/renderer/d3d/d3d11/Blit11.cpp#L240 >>>>>>> >>>>>>> Which gives huge slowdowns without really telling why and being able >>>>>>> to predict it. If on some platform some blit is not possible, maybe we >>>>>>> should just throw an error, disable it, make it an extension or something? >>>>>>> CPU blit is pure evil. >>>>>>> >>>>>>> On 1 December 2016 at 15:50, Mr F wrote: >>>>>>> >>>>>>>> OK, sorry for bothering - must be my mistake somewhere. Made it >>>>>>>> work while preparing the test case :) >>>>>>>> >>>>>>>> On 30 November 2016 at 23:03, Jamie Madill >>>>>>>> wrote: >>>>>>>> >>>>>>>>> Yes, a reproduction / test case would be awesome, thanks. >>>>>>>>> >>>>>>>>> On Tue, Nov 29, 2016 at 9:20 PM, Jeff Gilbert < >>>>>>>>> jgilbert...@> wrote: >>>>>>>>> >>>>>>>>>> FWIW, the spec says pretty explicitly that BlitFramebuffer should >>>>>>>>>> be >>>>>>>>>> functional for depth->depth blitting, as long as the formats >>>>>>>>>> match, >>>>>>>>>> etc etc. >>>>>>>>>> >>>>>>>>>> GLES 3.0.5 p198: >>>>>>>>>> If the >>>>>>>>>> source formats are depth values, sample values are resolved in an >>>>>>>>>> implementation- >>>>>>>>>> dependent manner where the result will be between the minimum and >>>>>>>>>> maximum >>>>>>>>>> depth values in the pixel. >>>>>>>>>> >>>>>>>>>> On Tue, Nov 29, 2016 at 8:16 PM, Kenneth Russell >>>>>>>>>> wrote: >>>>>>>>>> > Could you make any test case available? Do you have any other >>>>>>>>>> platforms >>>>>>>>>> > (Mac, Linux) to test on? You could also try launching Chrome >>>>>>>>>> with the >>>>>>>>>> > command line argument --use-angle=gl to see the difference >>>>>>>>>> between ANGLE's >>>>>>>>>> > D3D11 and OpenGL backends. >>>>>>>>>> > >>>>>>>>>> > >>>>>>>>>> > On Tue, Nov 29, 2016 at 12:42 PM, Mr F >>>>>>>>>> wrote: >>>>>>>>>> >> >>>>>>>>>> >> I'll try to prepare a clean test case soon. I'm on the latest >>>>>>>>>> Chrome >>>>>>>>>> >> Canary/Win 7/GTX970. Both source and destination are 32F. What >>>>>>>>>> I mean by >>>>>>>>>> >> "blank" actually looks like a solid red color, which seems to >>>>>>>>>> be uniform >>>>>>>>>> >> 1,0,0, no matter how close/far I'm from any geometry.It's good >>>>>>>>>> to know that >>>>>>>>>> >> it is supposed to work - maybe just my mistake somewhere, >>>>>>>>>> although I tested >>>>>>>>>> >> it heavily. 
The source buffer is created and attached using >>>>>>>>>> >> >>>>>>>>>> >> renderbufferStorageMultisample(gl.RENDERBUFFER, 4, >>>>>>>>>> gl.DEPTH_COMPONENT32F, >>>>>>>>>> >> width, height) >>>>>>>>>> >> framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, >>>>>>>>>> >> gl.RENDERBUFFER, msaaBuffer) >>>>>>>>>> >> >>>>>>>>>> >> and the destination created and attached to another framebufer >>>>>>>>>> with >>>>>>>>>> >> >>>>>>>>>> >> texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT32F, width, >>>>>>>>>> height, 0, >>>>>>>>>> >> gl.DEPTH_COMPONENT, gl.FLOAT, null) >>>>>>>>>> >> framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, >>>>>>>>>> gl.TEXTURE_2D, >>>>>>>>>> >> tex, 0) >>>>>>>>>> >> >>>>>>>>>> >> The blit is performed like this: >>>>>>>>>> >> bindFramebuffer(gl.READ_FRAMEBUFFER, msaaFBO) >>>>>>>>>> >> bindFramebuffer(gl.DRAW_FRAMEBUFFER, resolveFBO) >>>>>>>>>> >> blitFramebuffer(0, 0, width, height, 0, 0, width, height, >>>>>>>>>> >> gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT, gl.NEAREST) >>>>>>>>>> >> >>>>>>>>>> >> On 29 November 2016 at 17:51, Jamie Madill < >>>>>>>>>> jmadill...@> wrote: >>>>>>>>>> >>> >>>>>>>>>> >>> This generally should work. There are tests for this in dEQP's >>>>>>>>>> >>> dEQP-GLES3.functional.fbo.msaa and also in the WebGL >>>>>>>>>> 2-specific tests in >>>>>>>>>> >>> gles3/fbomultisample and I think also fboinvalidate/sub. >>>>>>>>>> >>> >>>>>>>>>> >>> Note some of these tests are marked as failing on Intel on >>>>>>>>>> pretty much >>>>>>>>>> >>> all platforms: >>>>>>>>>> >>> >>>>>>>>>> >>> https://cs.chromium.org/chromium/src/content/test/gpu/gpu_te >>>>>>>>>> sts/webgl2_conformance_expectations.py?q=webgl2_conform&sq=p >>>>>>>>>> ackage:chromium&l=118 >>>>>>>>>> >>> >>>>>>>>>> >>> Mr F, what platform are you on, and which vendor? >>>>>>>>>> >>> >>>>>>>>>> >>> On Mon, Nov 28, 2016 at 10:34 PM, Kenneth Russell < >>>>>>>>>> kbr...@> wrote: >>>>>>>>>> >>>> >>>>>>>>>> >>>> I'm pretty sure this should either work, or generate an >>>>>>>>>> OpenGL error. >>>>>>>>>> >>>> OpenGL ES 3.0.5 section 4.3.3 "Copying Pixels" says >>>>>>>>>> BlitFramebuffer >>>>>>>>>> >>>> generates INVALID_OPERATION if the mask includes >>>>>>>>>> DEPTH_BUFFER_BIT or >>>>>>>>>> >>>> STENCIL_BUFFER_BIT, and the source and destination depth and >>>>>>>>>> stencil buffer >>>>>>>>>> >>>> formats don't match. >>>>>>>>>> >>>> >>>>>>>>>> >>>> It looks like there are extensive tests of resolving >>>>>>>>>> multisampled depth >>>>>>>>>> >>>> buffers to single-sampled ones, but I'm not sure about >>>>>>>>>> resolving to a depth >>>>>>>>>> >>>> texture. Do you have a test case? What platform are you on >>>>>>>>>> (can you provide >>>>>>>>>> >>>> about:gpu if you're running Chrome)? >>>>>>>>>> >>>> >>>>>>>>>> >>>> -Ken >>>>>>>>>> >>>> >>>>>>>>>> >>>> >>>>>>>>>> >>>> >>>>>>>>>> >>>> On Sat, Nov 26, 2016 at 12:26 PM, Mr F < >>>>>>>>>> arthur...@> wrote: >>>>>>>>>> >>>>> >>>>>>>>>> >>>>> The docs are a bit unclear on that. Is it possible to use >>>>>>>>>> >>>>> blitFramebuffer to get current MSAA depth buffer into a >>>>>>>>>> non-MSAA depth >>>>>>>>>> >>>>> texture? The color buffer is resolved nicely, however the >>>>>>>>>> depth is just >>>>>>>>>> >>>>> blank, and no error is generated - wondering if it's the >>>>>>>>>> expected behaviour. >>>>>>>>>> >>>>> Depth format is DEPTH_COMPONENT32F. 
>>>>>>>>>> >>>> >>>>>>>>>> >>>> >>>>>>>>>> >>> >>>>>>>>>> >> >>>>>>>>>> > >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From art...@ Thu Jan 19 05:46:45 2017 From: art...@ (Mr F) Date: Thu, 19 Jan 2017 16:46:45 +0300 Subject: [Public WebGL] Resolve MSAA depth? In-Reply-To: References: Message-ID: Hmm seems like ANGLE is not able to render MSAA R32F color as well? So I'll have to use RGBA8 encoded depth WebGL1-style again. On 19 January 2017 at 00:02, Mr F wrote: > I've just posted a possible solution to this issue there. > > On 18 January 2017 at 01:08, Jamie Madill wrote: > >> Aha yes. >> >> Long story short, I couldn't get multisampled resolve (aka blit from >> multisample to single-sampled textures) working with a shader. I spent a >> fair bit of time on this. Sorry about this, but for now it's a slow path. >> Feel free to comment on the issue I just opened: anglebug.com/1710. I >> tried this a few different ways but always ended up with a blank texture >> and nothing being written. >> >> Not sure what to suggest other than trying to use a non-depth texture for >> the same effect if possible. >> >> On Tue, Jan 17, 2017 at 4:27 PM, Mr F wrote: >> >>> Here's a tiny example: http://geom.io/testResolveDepth/index.html >>> It blits MSAA R11G11B10 + 32F depth into 2 non-multisampled textures. >>> >>> >>> On 18 January 2017 at 00:23, Mr F wrote: >>> >>>> I'm talking about multisampled blit though >>>> >>>> On 17 January 2017 at 23:03, Jamie Madill wrote: >>>> >>>>> Non-multisample depth blit by itself should use a shader.. but we'd >>>>> need a small repro case to verify. Let me know if you can pull one together >>>>> and I'll look. >>>>> >>>>> On Tue, Jan 17, 2017 at 2:39 PM, Mr F wrote: >>>>> >>>>>> I understand depth/stencil complications. Are you sure 32F depth >>>>>> blits are never CPU-bound? There's a weird issue right now with it taking >>>>>> large chunk of performance, although only in FF Nightly (Chrome is fine). I >>>>>> wonder if some older versions of ANGLE did CPU copy for D32F? >>>>>> >>>>>> On 17 January 2017 at 22:28, Jamie Madill >>>>>> wrote: >>>>>> >>>>>>> Blit is core in GLES 3, so we need to emulate some of the >>>>>>> depth/stencil blits to support GLES 3 at all. Such is the nature of what >>>>>>> ANGLE does. >>>>>>> >>>>>>> It's something that is predictable from ANGLE's point of view, but I >>>>>>> understand it might be unclear from the app's point of view. Stencil bits >>>>>>> in particular are difficult, and multisample depth/stencil. We're working >>>>>>> on making this more visible - one thing is that it'll show up in Chrome's >>>>>>> about:gpu page. Ideally it could show up in some debugging log visible to >>>>>>> javascript. >>>>>>> >>>>>>> On Mon, Jan 16, 2017 at 11:37 AM, Mr F >>>>>>> wrote: >>>>>>> >>>>>>>> So it seems like ANGLE sometimes implements blit on CPU (!): >>>>>>>> https://github.com/google/angle/blob/master/src/libANGL >>>>>>>> E/renderer/d3d/d3d11/Blit11.cpp#L240 >>>>>>>> >>>>>>>> Which gives huge slowdowns without really telling why and being >>>>>>>> able to predict it. If on some platform some blit is not possible, maybe we >>>>>>>> should just throw an error, disable it, make it an extension or something? >>>>>>>> CPU blit is pure evil. >>>>>>>> >>>>>>>> On 1 December 2016 at 15:50, Mr F wrote: >>>>>>>> >>>>>>>>> OK, sorry for bothering - must be my mistake somewhere. 
Made it >>>>>>>>> work while preparing the test case :) >>>>>>>>> >>>>>>>>> On 30 November 2016 at 23:03, Jamie Madill >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> Yes, a reproduction / test case would be awesome, thanks. >>>>>>>>>> >>>>>>>>>> On Tue, Nov 29, 2016 at 9:20 PM, Jeff Gilbert < >>>>>>>>>> jgilbert...@> wrote: >>>>>>>>>> >>>>>>>>>>> FWIW, the spec says pretty explicitly that BlitFramebuffer >>>>>>>>>>> should be >>>>>>>>>>> functional for depth->depth blitting, as long as the formats >>>>>>>>>>> match, >>>>>>>>>>> etc etc. >>>>>>>>>>> >>>>>>>>>>> GLES 3.0.5 p198: >>>>>>>>>>> If the >>>>>>>>>>> source formats are depth values, sample values are resolved in an >>>>>>>>>>> implementation- >>>>>>>>>>> dependent manner where the result will be between the minimum >>>>>>>>>>> and maximum >>>>>>>>>>> depth values in the pixel. >>>>>>>>>>> >>>>>>>>>>> On Tue, Nov 29, 2016 at 8:16 PM, Kenneth Russell >>>>>>>>>>> wrote: >>>>>>>>>>> > Could you make any test case available? Do you have any other >>>>>>>>>>> platforms >>>>>>>>>>> > (Mac, Linux) to test on? You could also try launching Chrome >>>>>>>>>>> with the >>>>>>>>>>> > command line argument --use-angle=gl to see the difference >>>>>>>>>>> between ANGLE's >>>>>>>>>>> > D3D11 and OpenGL backends. >>>>>>>>>>> > >>>>>>>>>>> > >>>>>>>>>>> > On Tue, Nov 29, 2016 at 12:42 PM, Mr F >>>>>>>>>>> wrote: >>>>>>>>>>> >> >>>>>>>>>>> >> I'll try to prepare a clean test case soon. I'm on the latest >>>>>>>>>>> Chrome >>>>>>>>>>> >> Canary/Win 7/GTX970. Both source and destination are 32F. >>>>>>>>>>> What I mean by >>>>>>>>>>> >> "blank" actually looks like a solid red color, which seems to >>>>>>>>>>> be uniform >>>>>>>>>>> >> 1,0,0, no matter how close/far I'm from any geometry.It's >>>>>>>>>>> good to know that >>>>>>>>>>> >> it is supposed to work - maybe just my mistake somewhere, >>>>>>>>>>> although I tested >>>>>>>>>>> >> it heavily. The source buffer is created and attached using >>>>>>>>>>> >> >>>>>>>>>>> >> renderbufferStorageMultisample(gl.RENDERBUFFER, 4, >>>>>>>>>>> gl.DEPTH_COMPONENT32F, >>>>>>>>>>> >> width, height) >>>>>>>>>>> >> framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, >>>>>>>>>>> >> gl.RENDERBUFFER, msaaBuffer) >>>>>>>>>>> >> >>>>>>>>>>> >> and the destination created and attached to another >>>>>>>>>>> framebufer with >>>>>>>>>>> >> >>>>>>>>>>> >> texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT32F, width, >>>>>>>>>>> height, 0, >>>>>>>>>>> >> gl.DEPTH_COMPONENT, gl.FLOAT, null) >>>>>>>>>>> >> framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, >>>>>>>>>>> gl.TEXTURE_2D, >>>>>>>>>>> >> tex, 0) >>>>>>>>>>> >> >>>>>>>>>>> >> The blit is performed like this: >>>>>>>>>>> >> bindFramebuffer(gl.READ_FRAMEBUFFER, msaaFBO) >>>>>>>>>>> >> bindFramebuffer(gl.DRAW_FRAMEBUFFER, resolveFBO) >>>>>>>>>>> >> blitFramebuffer(0, 0, width, height, 0, 0, width, height, >>>>>>>>>>> >> gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT, gl.NEAREST) >>>>>>>>>>> >> >>>>>>>>>>> >> On 29 November 2016 at 17:51, Jamie Madill < >>>>>>>>>>> jmadill...@> wrote: >>>>>>>>>>> >>> >>>>>>>>>>> >>> This generally should work. There are tests for this in >>>>>>>>>>> dEQP's >>>>>>>>>>> >>> dEQP-GLES3.functional.fbo.msaa and also in the WebGL >>>>>>>>>>> 2-specific tests in >>>>>>>>>>> >>> gles3/fbomultisample and I think also fboinvalidate/sub. 
>>>>>>>>>>> >>> >>>>>>>>>>> >>> Note some of these tests are marked as failing on Intel on >>>>>>>>>>> pretty much >>>>>>>>>>> >>> all platforms: >>>>>>>>>>> >>> >>>>>>>>>>> >>> https://cs.chromium.org/chromium/src/content/test/gpu/gpu_te >>>>>>>>>>> sts/webgl2_conformance_expectations.py?q=webgl2_conform&sq=p >>>>>>>>>>> ackage:chromium&l=118 >>>>>>>>>>> >>> >>>>>>>>>>> >>> Mr F, what platform are you on, and which vendor? >>>>>>>>>>> >>> >>>>>>>>>>> >>> On Mon, Nov 28, 2016 at 10:34 PM, Kenneth Russell < >>>>>>>>>>> kbr...@> wrote: >>>>>>>>>>> >>>> >>>>>>>>>>> >>>> I'm pretty sure this should either work, or generate an >>>>>>>>>>> OpenGL error. >>>>>>>>>>> >>>> OpenGL ES 3.0.5 section 4.3.3 "Copying Pixels" says >>>>>>>>>>> BlitFramebuffer >>>>>>>>>>> >>>> generates INVALID_OPERATION if the mask includes >>>>>>>>>>> DEPTH_BUFFER_BIT or >>>>>>>>>>> >>>> STENCIL_BUFFER_BIT, and the source and destination depth >>>>>>>>>>> and stencil buffer >>>>>>>>>>> >>>> formats don't match. >>>>>>>>>>> >>>> >>>>>>>>>>> >>>> It looks like there are extensive tests of resolving >>>>>>>>>>> multisampled depth >>>>>>>>>>> >>>> buffers to single-sampled ones, but I'm not sure about >>>>>>>>>>> resolving to a depth >>>>>>>>>>> >>>> texture. Do you have a test case? What platform are you on >>>>>>>>>>> (can you provide >>>>>>>>>>> >>>> about:gpu if you're running Chrome)? >>>>>>>>>>> >>>> >>>>>>>>>>> >>>> -Ken >>>>>>>>>>> >>>> >>>>>>>>>>> >>>> >>>>>>>>>>> >>>> >>>>>>>>>>> >>>> On Sat, Nov 26, 2016 at 12:26 PM, Mr F < >>>>>>>>>>> arthur...@> wrote: >>>>>>>>>>> >>>>> >>>>>>>>>>> >>>>> The docs are a bit unclear on that. Is it possible to use >>>>>>>>>>> >>>>> blitFramebuffer to get current MSAA depth buffer into a >>>>>>>>>>> non-MSAA depth >>>>>>>>>>> >>>>> texture? The color buffer is resolved nicely, however the >>>>>>>>>>> depth is just >>>>>>>>>>> >>>>> blank, and no error is generated - wondering if it's the >>>>>>>>>>> expected behaviour. >>>>>>>>>>> >>>>> Depth format is DEPTH_COMPONENT32F. >>>>>>>>>>> >>>> >>>>>>>>>>> >>>> >>>>>>>>>>> >>> >>>>>>>>>>> >> >>>>>>>>>>> > >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From art...@ Thu Jan 19 06:28:52 2017 From: art...@ (Mr F) Date: Thu, 19 Jan 2017 17:28:52 +0300 Subject: [Public WebGL] Resolve MSAA depth? In-Reply-To: References: Message-ID: Furthermore, rendering depth to a separate RT (using drawBuffers) doesn't make much sense, because the samples are averaged (only needs 1st sample, and that's what blit was supposed to do). On 19 January 2017 at 16:46, Mr F wrote: > Hmm seems like ANGLE is not able to render MSAA R32F color as well? So > I'll have to use RGBA8 encoded depth WebGL1-style again. > > On 19 January 2017 at 00:02, Mr F wrote: > >> I've just posted a possible solution to this issue there. >> >> On 18 January 2017 at 01:08, Jamie Madill wrote: >> >>> Aha yes. >>> >>> Long story short, I couldn't get multisampled resolve (aka blit from >>> multisample to single-sampled textures) working with a shader. I spent a >>> fair bit of time on this. Sorry about this, but for now it's a slow path. >>> Feel free to comment on the issue I just opened: anglebug.com/1710. I >>> tried this a few different ways but always ended up with a blank texture >>> and nothing being written. >>> >>> Not sure what to suggest other than trying to use a non-depth texture >>> for the same effect if possible. 
>>> >>> On Tue, Jan 17, 2017 at 4:27 PM, Mr F wrote: >>> >>>> Here's a tiny example: http://geom.io/testResolveDepth/index.html >>>> It blits MSAA R11G11B10 + 32F depth into 2 non-multisampled textures. >>>> >>>> >>>> On 18 January 2017 at 00:23, Mr F wrote: >>>> >>>>> I'm talking about multisampled blit though >>>>> >>>>> On 17 January 2017 at 23:03, Jamie Madill >>>>> wrote: >>>>> >>>>>> Non-multisample depth blit by itself should use a shader.. but we'd >>>>>> need a small repro case to verify. Let me know if you can pull one together >>>>>> and I'll look. >>>>>> >>>>>> On Tue, Jan 17, 2017 at 2:39 PM, Mr F wrote: >>>>>> >>>>>>> I understand depth/stencil complications. Are you sure 32F depth >>>>>>> blits are never CPU-bound? There's a weird issue right now with it taking >>>>>>> large chunk of performance, although only in FF Nightly (Chrome is fine). I >>>>>>> wonder if some older versions of ANGLE did CPU copy for D32F? >>>>>>> >>>>>>> On 17 January 2017 at 22:28, Jamie Madill >>>>>>> wrote: >>>>>>> >>>>>>>> Blit is core in GLES 3, so we need to emulate some of the >>>>>>>> depth/stencil blits to support GLES 3 at all. Such is the nature of what >>>>>>>> ANGLE does. >>>>>>>> >>>>>>>> It's something that is predictable from ANGLE's point of view, but >>>>>>>> I understand it might be unclear from the app's point of view. Stencil bits >>>>>>>> in particular are difficult, and multisample depth/stencil. We're working >>>>>>>> on making this more visible - one thing is that it'll show up in Chrome's >>>>>>>> about:gpu page. Ideally it could show up in some debugging log visible to >>>>>>>> javascript. >>>>>>>> >>>>>>>> On Mon, Jan 16, 2017 at 11:37 AM, Mr F >>>>>>>> wrote: >>>>>>>> >>>>>>>>> So it seems like ANGLE sometimes implements blit on CPU (!): >>>>>>>>> https://github.com/google/angle/blob/master/src/libANGL >>>>>>>>> E/renderer/d3d/d3d11/Blit11.cpp#L240 >>>>>>>>> >>>>>>>>> Which gives huge slowdowns without really telling why and being >>>>>>>>> able to predict it. If on some platform some blit is not possible, maybe we >>>>>>>>> should just throw an error, disable it, make it an extension or something? >>>>>>>>> CPU blit is pure evil. >>>>>>>>> >>>>>>>>> On 1 December 2016 at 15:50, Mr F wrote: >>>>>>>>> >>>>>>>>>> OK, sorry for bothering - must be my mistake somewhere. Made it >>>>>>>>>> work while preparing the test case :) >>>>>>>>>> >>>>>>>>>> On 30 November 2016 at 23:03, Jamie Madill >>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>>> Yes, a reproduction / test case would be awesome, thanks. >>>>>>>>>>> >>>>>>>>>>> On Tue, Nov 29, 2016 at 9:20 PM, Jeff Gilbert < >>>>>>>>>>> jgilbert...@> wrote: >>>>>>>>>>> >>>>>>>>>>>> FWIW, the spec says pretty explicitly that BlitFramebuffer >>>>>>>>>>>> should be >>>>>>>>>>>> functional for depth->depth blitting, as long as the formats >>>>>>>>>>>> match, >>>>>>>>>>>> etc etc. >>>>>>>>>>>> >>>>>>>>>>>> GLES 3.0.5 p198: >>>>>>>>>>>> If the >>>>>>>>>>>> source formats are depth values, sample values are resolved in >>>>>>>>>>>> an >>>>>>>>>>>> implementation- >>>>>>>>>>>> dependent manner where the result will be between the minimum >>>>>>>>>>>> and maximum >>>>>>>>>>>> depth values in the pixel. >>>>>>>>>>>> >>>>>>>>>>>> On Tue, Nov 29, 2016 at 8:16 PM, Kenneth Russell < >>>>>>>>>>>> kbr...@> wrote: >>>>>>>>>>>> > Could you make any test case available? Do you have any other >>>>>>>>>>>> platforms >>>>>>>>>>>> > (Mac, Linux) to test on? 
You could also try launching Chrome >>>>>>>>>>>> with the >>>>>>>>>>>> > command line argument --use-angle=gl to see the difference >>>>>>>>>>>> between ANGLE's >>>>>>>>>>>> > D3D11 and OpenGL backends. >>>>>>>>>>>> > >>>>>>>>>>>> > >>>>>>>>>>>> > On Tue, Nov 29, 2016 at 12:42 PM, Mr F >>>>>>>>>>>> wrote: >>>>>>>>>>>> >> >>>>>>>>>>>> >> I'll try to prepare a clean test case soon. I'm on the >>>>>>>>>>>> latest Chrome >>>>>>>>>>>> >> Canary/Win 7/GTX970. Both source and destination are 32F. >>>>>>>>>>>> What I mean by >>>>>>>>>>>> >> "blank" actually looks like a solid red color, which seems >>>>>>>>>>>> to be uniform >>>>>>>>>>>> >> 1,0,0, no matter how close/far I'm from any geometry.It's >>>>>>>>>>>> good to know that >>>>>>>>>>>> >> it is supposed to work - maybe just my mistake somewhere, >>>>>>>>>>>> although I tested >>>>>>>>>>>> >> it heavily. The source buffer is created and attached using >>>>>>>>>>>> >> >>>>>>>>>>>> >> renderbufferStorageMultisample(gl.RENDERBUFFER, 4, >>>>>>>>>>>> gl.DEPTH_COMPONENT32F, >>>>>>>>>>>> >> width, height) >>>>>>>>>>>> >> framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, >>>>>>>>>>>> >> gl.RENDERBUFFER, msaaBuffer) >>>>>>>>>>>> >> >>>>>>>>>>>> >> and the destination created and attached to another >>>>>>>>>>>> framebufer with >>>>>>>>>>>> >> >>>>>>>>>>>> >> texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT32F, width, >>>>>>>>>>>> height, 0, >>>>>>>>>>>> >> gl.DEPTH_COMPONENT, gl.FLOAT, null) >>>>>>>>>>>> >> framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, >>>>>>>>>>>> gl.TEXTURE_2D, >>>>>>>>>>>> >> tex, 0) >>>>>>>>>>>> >> >>>>>>>>>>>> >> The blit is performed like this: >>>>>>>>>>>> >> bindFramebuffer(gl.READ_FRAMEBUFFER, msaaFBO) >>>>>>>>>>>> >> bindFramebuffer(gl.DRAW_FRAMEBUFFER, resolveFBO) >>>>>>>>>>>> >> blitFramebuffer(0, 0, width, height, 0, 0, width, height, >>>>>>>>>>>> >> gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT, gl.NEAREST) >>>>>>>>>>>> >> >>>>>>>>>>>> >> On 29 November 2016 at 17:51, Jamie Madill < >>>>>>>>>>>> jmadill...@> wrote: >>>>>>>>>>>> >>> >>>>>>>>>>>> >>> This generally should work. There are tests for this in >>>>>>>>>>>> dEQP's >>>>>>>>>>>> >>> dEQP-GLES3.functional.fbo.msaa and also in the WebGL >>>>>>>>>>>> 2-specific tests in >>>>>>>>>>>> >>> gles3/fbomultisample and I think also fboinvalidate/sub. >>>>>>>>>>>> >>> >>>>>>>>>>>> >>> Note some of these tests are marked as failing on Intel on >>>>>>>>>>>> pretty much >>>>>>>>>>>> >>> all platforms: >>>>>>>>>>>> >>> >>>>>>>>>>>> >>> https://cs.chromium.org/chromi >>>>>>>>>>>> um/src/content/test/gpu/gpu_tests/webgl2_conformance_expecta >>>>>>>>>>>> tions.py?q=webgl2_conform&sq=package:chromium&l=118 >>>>>>>>>>>> >>> >>>>>>>>>>>> >>> Mr F, what platform are you on, and which vendor? >>>>>>>>>>>> >>> >>>>>>>>>>>> >>> On Mon, Nov 28, 2016 at 10:34 PM, Kenneth Russell < >>>>>>>>>>>> kbr...@> wrote: >>>>>>>>>>>> >>>> >>>>>>>>>>>> >>>> I'm pretty sure this should either work, or generate an >>>>>>>>>>>> OpenGL error. >>>>>>>>>>>> >>>> OpenGL ES 3.0.5 section 4.3.3 "Copying Pixels" says >>>>>>>>>>>> BlitFramebuffer >>>>>>>>>>>> >>>> generates INVALID_OPERATION if the mask includes >>>>>>>>>>>> DEPTH_BUFFER_BIT or >>>>>>>>>>>> >>>> STENCIL_BUFFER_BIT, and the source and destination depth >>>>>>>>>>>> and stencil buffer >>>>>>>>>>>> >>>> formats don't match. 
>>>>>>>>>>>> >>>> >>>>>>>>>>>> >>>> It looks like there are extensive tests of resolving >>>>>>>>>>>> multisampled depth >>>>>>>>>>>> >>>> buffers to single-sampled ones, but I'm not sure about >>>>>>>>>>>> resolving to a depth >>>>>>>>>>>> >>>> texture. Do you have a test case? What platform are you on >>>>>>>>>>>> (can you provide >>>>>>>>>>>> >>>> about:gpu if you're running Chrome)? >>>>>>>>>>>> >>>> >>>>>>>>>>>> >>>> -Ken >>>>>>>>>>>> >>>> >>>>>>>>>>>> >>>> >>>>>>>>>>>> >>>> >>>>>>>>>>>> >>>> On Sat, Nov 26, 2016 at 12:26 PM, Mr F < >>>>>>>>>>>> arthur...@> wrote: >>>>>>>>>>>> >>>>> >>>>>>>>>>>> >>>>> The docs are a bit unclear on that. Is it possible to use >>>>>>>>>>>> >>>>> blitFramebuffer to get current MSAA depth buffer into a >>>>>>>>>>>> non-MSAA depth >>>>>>>>>>>> >>>>> texture? The color buffer is resolved nicely, however the >>>>>>>>>>>> depth is just >>>>>>>>>>>> >>>>> blank, and no error is generated - wondering if it's the >>>>>>>>>>>> expected behaviour. >>>>>>>>>>>> >>>>> Depth format is DEPTH_COMPONENT32F. >>>>>>>>>>>> >>>> >>>>>>>>>>>> >>>> >>>>>>>>>>>> >>> >>>>>>>>>>>> >> >>>>>>>>>>>> > >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Thu Jan 19 06:34:29 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Thu, 19 Jan 2017 15:34:29 +0100 Subject: [Public WebGL] Resolve MSAA depth? In-Reply-To: References: Message-ID: Note that averaged (MSAA'ed) colors at the borders and a single depth value for the pixel produces noticable artifacts when shading as a deferred step (using the depth for light calculations like area of influence of a deferred light). Most deferred renderers don't do MSAA rendering for that reason (and because of performance) and use alternative approaches to AA'ing that operates on the output once all lighting calculations are done. On Thu, Jan 19, 2017 at 3:28 PM, Mr F wrote: > Furthermore, rendering depth to a separate RT (using drawBuffers) doesn't > make much sense, because the samples are averaged (only needs 1st sample, > and that's what blit was supposed to do). > > On 19 January 2017 at 16:46, Mr F wrote: > >> Hmm seems like ANGLE is not able to render MSAA R32F color as well? So >> I'll have to use RGBA8 encoded depth WebGL1-style again. >> >> On 19 January 2017 at 00:02, Mr F wrote: >> >>> I've just posted a possible solution to this issue there. >>> >>> On 18 January 2017 at 01:08, Jamie Madill wrote: >>> >>>> Aha yes. >>>> >>>> Long story short, I couldn't get multisampled resolve (aka blit from >>>> multisample to single-sampled textures) working with a shader. I spent a >>>> fair bit of time on this. Sorry about this, but for now it's a slow path. >>>> Feel free to comment on the issue I just opened: anglebug.com/1710. I >>>> tried this a few different ways but always ended up with a blank texture >>>> and nothing being written. >>>> >>>> Not sure what to suggest other than trying to use a non-depth texture >>>> for the same effect if possible. >>>> >>>> On Tue, Jan 17, 2017 at 4:27 PM, Mr F wrote: >>>> >>>>> Here's a tiny example: http://geom.io/testResolveDepth/index.html >>>>> It blits MSAA R11G11B10 + 32F depth into 2 non-multisampled textures. 
>>>>> >>>>> >>>>> On 18 January 2017 at 00:23, Mr F wrote: >>>>> >>>>>> I'm talking about multisampled blit though >>>>>> >>>>>> On 17 January 2017 at 23:03, Jamie Madill >>>>>> wrote: >>>>>> >>>>>>> Non-multisample depth blit by itself should use a shader.. but we'd >>>>>>> need a small repro case to verify. Let me know if you can pull one together >>>>>>> and I'll look. >>>>>>> >>>>>>> On Tue, Jan 17, 2017 at 2:39 PM, Mr F wrote: >>>>>>> >>>>>>>> I understand depth/stencil complications. Are you sure 32F depth >>>>>>>> blits are never CPU-bound? There's a weird issue right now with it taking >>>>>>>> large chunk of performance, although only in FF Nightly (Chrome is fine). I >>>>>>>> wonder if some older versions of ANGLE did CPU copy for D32F? >>>>>>>> >>>>>>>> On 17 January 2017 at 22:28, Jamie Madill >>>>>>>> wrote: >>>>>>>> >>>>>>>>> Blit is core in GLES 3, so we need to emulate some of the >>>>>>>>> depth/stencil blits to support GLES 3 at all. Such is the nature of what >>>>>>>>> ANGLE does. >>>>>>>>> >>>>>>>>> It's something that is predictable from ANGLE's point of view, but >>>>>>>>> I understand it might be unclear from the app's point of view. Stencil bits >>>>>>>>> in particular are difficult, and multisample depth/stencil. We're working >>>>>>>>> on making this more visible - one thing is that it'll show up in Chrome's >>>>>>>>> about:gpu page. Ideally it could show up in some debugging log visible to >>>>>>>>> javascript. >>>>>>>>> >>>>>>>>> On Mon, Jan 16, 2017 at 11:37 AM, Mr F >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> So it seems like ANGLE sometimes implements blit on CPU (!): >>>>>>>>>> https://github.com/google/angle/blob/master/src/libANGL >>>>>>>>>> E/renderer/d3d/d3d11/Blit11.cpp#L240 >>>>>>>>>> >>>>>>>>>> Which gives huge slowdowns without really telling why and being >>>>>>>>>> able to predict it. If on some platform some blit is not possible, maybe we >>>>>>>>>> should just throw an error, disable it, make it an extension or something? >>>>>>>>>> CPU blit is pure evil. >>>>>>>>>> >>>>>>>>>> On 1 December 2016 at 15:50, Mr F wrote: >>>>>>>>>> >>>>>>>>>>> OK, sorry for bothering - must be my mistake somewhere. Made it >>>>>>>>>>> work while preparing the test case :) >>>>>>>>>>> >>>>>>>>>>> On 30 November 2016 at 23:03, Jamie Madill >>>>>>>>>> > wrote: >>>>>>>>>>> >>>>>>>>>>>> Yes, a reproduction / test case would be awesome, thanks. >>>>>>>>>>>> >>>>>>>>>>>> On Tue, Nov 29, 2016 at 9:20 PM, Jeff Gilbert < >>>>>>>>>>>> jgilbert...@> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> FWIW, the spec says pretty explicitly that BlitFramebuffer >>>>>>>>>>>>> should be >>>>>>>>>>>>> functional for depth->depth blitting, as long as the formats >>>>>>>>>>>>> match, >>>>>>>>>>>>> etc etc. >>>>>>>>>>>>> >>>>>>>>>>>>> GLES 3.0.5 p198: >>>>>>>>>>>>> If the >>>>>>>>>>>>> source formats are depth values, sample values are resolved in >>>>>>>>>>>>> an >>>>>>>>>>>>> implementation- >>>>>>>>>>>>> dependent manner where the result will be between the minimum >>>>>>>>>>>>> and maximum >>>>>>>>>>>>> depth values in the pixel. >>>>>>>>>>>>> >>>>>>>>>>>>> On Tue, Nov 29, 2016 at 8:16 PM, Kenneth Russell < >>>>>>>>>>>>> kbr...@> wrote: >>>>>>>>>>>>> > Could you make any test case available? Do you have any >>>>>>>>>>>>> other platforms >>>>>>>>>>>>> > (Mac, Linux) to test on? You could also try launching Chrome >>>>>>>>>>>>> with the >>>>>>>>>>>>> > command line argument --use-angle=gl to see the difference >>>>>>>>>>>>> between ANGLE's >>>>>>>>>>>>> > D3D11 and OpenGL backends. 
>>>>>>>>>>>>> > >>>>>>>>>>>>> > >>>>>>>>>>>>> > On Tue, Nov 29, 2016 at 12:42 PM, Mr F < >>>>>>>>>>>>> arthur...@> wrote: >>>>>>>>>>>>> >> >>>>>>>>>>>>> >> I'll try to prepare a clean test case soon. I'm on the >>>>>>>>>>>>> latest Chrome >>>>>>>>>>>>> >> Canary/Win 7/GTX970. Both source and destination are 32F. >>>>>>>>>>>>> What I mean by >>>>>>>>>>>>> >> "blank" actually looks like a solid red color, which seems >>>>>>>>>>>>> to be uniform >>>>>>>>>>>>> >> 1,0,0, no matter how close/far I'm from any geometry.It's >>>>>>>>>>>>> good to know that >>>>>>>>>>>>> >> it is supposed to work - maybe just my mistake somewhere, >>>>>>>>>>>>> although I tested >>>>>>>>>>>>> >> it heavily. The source buffer is created and attached using >>>>>>>>>>>>> >> >>>>>>>>>>>>> >> renderbufferStorageMultisample(gl.RENDERBUFFER, 4, >>>>>>>>>>>>> gl.DEPTH_COMPONENT32F, >>>>>>>>>>>>> >> width, height) >>>>>>>>>>>>> >> framebufferRenderbuffer(gl.FRAMEBUFFER, >>>>>>>>>>>>> gl.DEPTH_ATTACHMENT, >>>>>>>>>>>>> >> gl.RENDERBUFFER, msaaBuffer) >>>>>>>>>>>>> >> >>>>>>>>>>>>> >> and the destination created and attached to another >>>>>>>>>>>>> framebufer with >>>>>>>>>>>>> >> >>>>>>>>>>>>> >> texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT32F, width, >>>>>>>>>>>>> height, 0, >>>>>>>>>>>>> >> gl.DEPTH_COMPONENT, gl.FLOAT, null) >>>>>>>>>>>>> >> framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, >>>>>>>>>>>>> gl.TEXTURE_2D, >>>>>>>>>>>>> >> tex, 0) >>>>>>>>>>>>> >> >>>>>>>>>>>>> >> The blit is performed like this: >>>>>>>>>>>>> >> bindFramebuffer(gl.READ_FRAMEBUFFER, msaaFBO) >>>>>>>>>>>>> >> bindFramebuffer(gl.DRAW_FRAMEBUFFER, resolveFBO) >>>>>>>>>>>>> >> blitFramebuffer(0, 0, width, height, 0, 0, width, height, >>>>>>>>>>>>> >> gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT, gl.NEAREST) >>>>>>>>>>>>> >> >>>>>>>>>>>>> >> On 29 November 2016 at 17:51, Jamie Madill < >>>>>>>>>>>>> jmadill...@> wrote: >>>>>>>>>>>>> >>> >>>>>>>>>>>>> >>> This generally should work. There are tests for this in >>>>>>>>>>>>> dEQP's >>>>>>>>>>>>> >>> dEQP-GLES3.functional.fbo.msaa and also in the WebGL >>>>>>>>>>>>> 2-specific tests in >>>>>>>>>>>>> >>> gles3/fbomultisample and I think also fboinvalidate/sub. >>>>>>>>>>>>> >>> >>>>>>>>>>>>> >>> Note some of these tests are marked as failing on Intel on >>>>>>>>>>>>> pretty much >>>>>>>>>>>>> >>> all platforms: >>>>>>>>>>>>> >>> >>>>>>>>>>>>> >>> https://cs.chromium.org/chromi >>>>>>>>>>>>> um/src/content/test/gpu/gpu_tests/webgl2_conformance_expecta >>>>>>>>>>>>> tions.py?q=webgl2_conform&sq=package:chromium&l=118 >>>>>>>>>>>>> >>> >>>>>>>>>>>>> >>> Mr F, what platform are you on, and which vendor? >>>>>>>>>>>>> >>> >>>>>>>>>>>>> >>> On Mon, Nov 28, 2016 at 10:34 PM, Kenneth Russell < >>>>>>>>>>>>> kbr...@> wrote: >>>>>>>>>>>>> >>>> >>>>>>>>>>>>> >>>> I'm pretty sure this should either work, or generate an >>>>>>>>>>>>> OpenGL error. >>>>>>>>>>>>> >>>> OpenGL ES 3.0.5 section 4.3.3 "Copying Pixels" says >>>>>>>>>>>>> BlitFramebuffer >>>>>>>>>>>>> >>>> generates INVALID_OPERATION if the mask includes >>>>>>>>>>>>> DEPTH_BUFFER_BIT or >>>>>>>>>>>>> >>>> STENCIL_BUFFER_BIT, and the source and destination depth >>>>>>>>>>>>> and stencil buffer >>>>>>>>>>>>> >>>> formats don't match. >>>>>>>>>>>>> >>>> >>>>>>>>>>>>> >>>> It looks like there are extensive tests of resolving >>>>>>>>>>>>> multisampled depth >>>>>>>>>>>>> >>>> buffers to single-sampled ones, but I'm not sure about >>>>>>>>>>>>> resolving to a depth >>>>>>>>>>>>> >>>> texture. Do you have a test case? 
What platform are you >>>>>>>>>>>>> on (can you provide >>>>>>>>>>>>> >>>> about:gpu if you're running Chrome)? >>>>>>>>>>>>> >>>> >>>>>>>>>>>>> >>>> -Ken >>>>>>>>>>>>> >>>> >>>>>>>>>>>>> >>>> >>>>>>>>>>>>> >>>> >>>>>>>>>>>>> >>>> On Sat, Nov 26, 2016 at 12:26 PM, Mr F < >>>>>>>>>>>>> arthur...@> wrote: >>>>>>>>>>>>> >>>>> >>>>>>>>>>>>> >>>>> The docs are a bit unclear on that. Is it possible to use >>>>>>>>>>>>> >>>>> blitFramebuffer to get current MSAA depth buffer into a >>>>>>>>>>>>> non-MSAA depth >>>>>>>>>>>>> >>>>> texture? The color buffer is resolved nicely, however >>>>>>>>>>>>> the depth is just >>>>>>>>>>>>> >>>>> blank, and no error is generated - wondering if it's the >>>>>>>>>>>>> expected behaviour. >>>>>>>>>>>>> >>>>> Depth format is DEPTH_COMPONENT32F. >>>>>>>>>>>>> >>>> >>>>>>>>>>>>> >>>> >>>>>>>>>>>>> >>> >>>>>>>>>>>>> >> >>>>>>>>>>>>> > >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From art...@ Thu Jan 19 06:38:21 2017 From: art...@ (Mr F) Date: Thu, 19 Jan 2017 17:38:21 +0300 Subject: [Public WebGL] Resolve MSAA depth? In-Reply-To: References: Message-ID: I know and I'm not doing deferred, but rather more simple depth-dependent effects in forward, which work quite fine with blitFramebuffer depth (except the current performance issue). If I can't write to R32F with MSAA though, I can only write encoded RGBA8 values, and these got averaged, they become useless. On 19 January 2017 at 17:34, Florian B?sch wrote: > Note that averaged (MSAA'ed) colors at the borders and a single depth > value for the pixel produces noticable artifacts when shading as a deferred > step (using the depth for light calculations like area of influence of a > deferred light). > > Most deferred renderers don't do MSAA rendering for that reason (and > because of performance) and use alternative approaches to AA'ing that > operates on the output once all lighting calculations are done. > > On Thu, Jan 19, 2017 at 3:28 PM, Mr F wrote: > >> Furthermore, rendering depth to a separate RT (using drawBuffers) doesn't >> make much sense, because the samples are averaged (only needs 1st sample, >> and that's what blit was supposed to do). >> >> On 19 January 2017 at 16:46, Mr F wrote: >> >>> Hmm seems like ANGLE is not able to render MSAA R32F color as well? So >>> I'll have to use RGBA8 encoded depth WebGL1-style again. >>> >>> On 19 January 2017 at 00:02, Mr F wrote: >>> >>>> I've just posted a possible solution to this issue there. >>>> >>>> On 18 January 2017 at 01:08, Jamie Madill wrote: >>>> >>>>> Aha yes. >>>>> >>>>> Long story short, I couldn't get multisampled resolve (aka blit from >>>>> multisample to single-sampled textures) working with a shader. I spent a >>>>> fair bit of time on this. Sorry about this, but for now it's a slow path. >>>>> Feel free to comment on the issue I just opened: anglebug.com/1710. I >>>>> tried this a few different ways but always ended up with a blank texture >>>>> and nothing being written. >>>>> >>>>> Not sure what to suggest other than trying to use a non-depth texture >>>>> for the same effect if possible. >>>>> >>>>> On Tue, Jan 17, 2017 at 4:27 PM, Mr F wrote: >>>>> >>>>>> Here's a tiny example: http://geom.io/testResolveDepth/index.html >>>>>> It blits MSAA R11G11B10 + 32F depth into 2 non-multisampled textures. 
>>>>>> >>>>>> >>>>>> On 18 January 2017 at 00:23, Mr F wrote: >>>>>> >>>>>>> I'm talking about multisampled blit though >>>>>>> >>>>>>> On 17 January 2017 at 23:03, Jamie Madill >>>>>>> wrote: >>>>>>> >>>>>>>> Non-multisample depth blit by itself should use a shader.. but we'd >>>>>>>> need a small repro case to verify. Let me know if you can pull one together >>>>>>>> and I'll look. >>>>>>>> >>>>>>>> On Tue, Jan 17, 2017 at 2:39 PM, Mr F >>>>>>>> wrote: >>>>>>>> >>>>>>>>> I understand depth/stencil complications. Are you sure 32F depth >>>>>>>>> blits are never CPU-bound? There's a weird issue right now with it taking >>>>>>>>> large chunk of performance, although only in FF Nightly (Chrome is fine). I >>>>>>>>> wonder if some older versions of ANGLE did CPU copy for D32F? >>>>>>>>> >>>>>>>>> On 17 January 2017 at 22:28, Jamie Madill >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> Blit is core in GLES 3, so we need to emulate some of the >>>>>>>>>> depth/stencil blits to support GLES 3 at all. Such is the nature of what >>>>>>>>>> ANGLE does. >>>>>>>>>> >>>>>>>>>> It's something that is predictable from ANGLE's point of view, >>>>>>>>>> but I understand it might be unclear from the app's point of view. Stencil >>>>>>>>>> bits in particular are difficult, and multisample depth/stencil. We're >>>>>>>>>> working on making this more visible - one thing is that it'll show up in >>>>>>>>>> Chrome's about:gpu page. Ideally it could show up in some debugging log >>>>>>>>>> visible to javascript. >>>>>>>>>> >>>>>>>>>> On Mon, Jan 16, 2017 at 11:37 AM, Mr F >>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>>> So it seems like ANGLE sometimes implements blit on CPU (!): >>>>>>>>>>> https://github.com/google/angle/blob/master/src/libANGL >>>>>>>>>>> E/renderer/d3d/d3d11/Blit11.cpp#L240 >>>>>>>>>>> >>>>>>>>>>> Which gives huge slowdowns without really telling why and being >>>>>>>>>>> able to predict it. If on some platform some blit is not possible, maybe we >>>>>>>>>>> should just throw an error, disable it, make it an extension or something? >>>>>>>>>>> CPU blit is pure evil. >>>>>>>>>>> >>>>>>>>>>> On 1 December 2016 at 15:50, Mr F wrote: >>>>>>>>>>> >>>>>>>>>>>> OK, sorry for bothering - must be my mistake somewhere. Made it >>>>>>>>>>>> work while preparing the test case :) >>>>>>>>>>>> >>>>>>>>>>>> On 30 November 2016 at 23:03, Jamie Madill < >>>>>>>>>>>> jmadill...@> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Yes, a reproduction / test case would be awesome, thanks. >>>>>>>>>>>>> >>>>>>>>>>>>> On Tue, Nov 29, 2016 at 9:20 PM, Jeff Gilbert < >>>>>>>>>>>>> jgilbert...@> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> FWIW, the spec says pretty explicitly that BlitFramebuffer >>>>>>>>>>>>>> should be >>>>>>>>>>>>>> functional for depth->depth blitting, as long as the formats >>>>>>>>>>>>>> match, >>>>>>>>>>>>>> etc etc. >>>>>>>>>>>>>> >>>>>>>>>>>>>> GLES 3.0.5 p198: >>>>>>>>>>>>>> If the >>>>>>>>>>>>>> source formats are depth values, sample values are resolved >>>>>>>>>>>>>> in an >>>>>>>>>>>>>> implementation- >>>>>>>>>>>>>> dependent manner where the result will be between the minimum >>>>>>>>>>>>>> and maximum >>>>>>>>>>>>>> depth values in the pixel. >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Tue, Nov 29, 2016 at 8:16 PM, Kenneth Russell < >>>>>>>>>>>>>> kbr...@> wrote: >>>>>>>>>>>>>> > Could you make any test case available? Do you have any >>>>>>>>>>>>>> other platforms >>>>>>>>>>>>>> > (Mac, Linux) to test on? 
You could also try launching >>>>>>>>>>>>>> Chrome with the >>>>>>>>>>>>>> > command line argument --use-angle=gl to see the difference >>>>>>>>>>>>>> between ANGLE's >>>>>>>>>>>>>> > D3D11 and OpenGL backends. >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > >>>>>>>>>>>>>> > On Tue, Nov 29, 2016 at 12:42 PM, Mr F < >>>>>>>>>>>>>> arthur...@> wrote: >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> >> I'll try to prepare a clean test case soon. I'm on the >>>>>>>>>>>>>> latest Chrome >>>>>>>>>>>>>> >> Canary/Win 7/GTX970. Both source and destination are 32F. >>>>>>>>>>>>>> What I mean by >>>>>>>>>>>>>> >> "blank" actually looks like a solid red color, which seems >>>>>>>>>>>>>> to be uniform >>>>>>>>>>>>>> >> 1,0,0, no matter how close/far I'm from any geometry.It's >>>>>>>>>>>>>> good to know that >>>>>>>>>>>>>> >> it is supposed to work - maybe just my mistake somewhere, >>>>>>>>>>>>>> although I tested >>>>>>>>>>>>>> >> it heavily. The source buffer is created and attached using >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> >> renderbufferStorageMultisample(gl.RENDERBUFFER, 4, >>>>>>>>>>>>>> gl.DEPTH_COMPONENT32F, >>>>>>>>>>>>>> >> width, height) >>>>>>>>>>>>>> >> framebufferRenderbuffer(gl.FRAMEBUFFER, >>>>>>>>>>>>>> gl.DEPTH_ATTACHMENT, >>>>>>>>>>>>>> >> gl.RENDERBUFFER, msaaBuffer) >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> >> and the destination created and attached to another >>>>>>>>>>>>>> framebufer with >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> >> texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT32F, width, >>>>>>>>>>>>>> height, 0, >>>>>>>>>>>>>> >> gl.DEPTH_COMPONENT, gl.FLOAT, null) >>>>>>>>>>>>>> >> framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, >>>>>>>>>>>>>> gl.TEXTURE_2D, >>>>>>>>>>>>>> >> tex, 0) >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> >> The blit is performed like this: >>>>>>>>>>>>>> >> bindFramebuffer(gl.READ_FRAMEBUFFER, msaaFBO) >>>>>>>>>>>>>> >> bindFramebuffer(gl.DRAW_FRAMEBUFFER, resolveFBO) >>>>>>>>>>>>>> >> blitFramebuffer(0, 0, width, height, 0, 0, width, height, >>>>>>>>>>>>>> >> gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT, gl.NEAREST) >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> >> On 29 November 2016 at 17:51, Jamie Madill < >>>>>>>>>>>>>> jmadill...@> wrote: >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> >>> This generally should work. There are tests for this in >>>>>>>>>>>>>> dEQP's >>>>>>>>>>>>>> >>> dEQP-GLES3.functional.fbo.msaa and also in the WebGL >>>>>>>>>>>>>> 2-specific tests in >>>>>>>>>>>>>> >>> gles3/fbomultisample and I think also fboinvalidate/sub. >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> >>> Note some of these tests are marked as failing on Intel >>>>>>>>>>>>>> on pretty much >>>>>>>>>>>>>> >>> all platforms: >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> >>> https://cs.chromium.org/chromi >>>>>>>>>>>>>> um/src/content/test/gpu/gpu_tests/webgl2_conformance_expecta >>>>>>>>>>>>>> tions.py?q=webgl2_conform&sq=package:chromium&l=118 >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> >>> Mr F, what platform are you on, and which vendor? >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> >>> On Mon, Nov 28, 2016 at 10:34 PM, Kenneth Russell < >>>>>>>>>>>>>> kbr...@> wrote: >>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> I'm pretty sure this should either work, or generate an >>>>>>>>>>>>>> OpenGL error. >>>>>>>>>>>>>> >>>> OpenGL ES 3.0.5 section 4.3.3 "Copying Pixels" says >>>>>>>>>>>>>> BlitFramebuffer >>>>>>>>>>>>>> >>>> generates INVALID_OPERATION if the mask includes >>>>>>>>>>>>>> DEPTH_BUFFER_BIT or >>>>>>>>>>>>>> >>>> STENCIL_BUFFER_BIT, and the source and destination depth >>>>>>>>>>>>>> and stencil buffer >>>>>>>>>>>>>> >>>> formats don't match. 
>>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> It looks like there are extensive tests of resolving >>>>>>>>>>>>>> multisampled depth >>>>>>>>>>>>>> >>>> buffers to single-sampled ones, but I'm not sure about >>>>>>>>>>>>>> resolving to a depth >>>>>>>>>>>>>> >>>> texture. Do you have a test case? What platform are you >>>>>>>>>>>>>> on (can you provide >>>>>>>>>>>>>> >>>> about:gpu if you're running Chrome)? >>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> -Ken >>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> On Sat, Nov 26, 2016 at 12:26 PM, Mr F < >>>>>>>>>>>>>> arthur...@> wrote: >>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>> >>>>> The docs are a bit unclear on that. Is it possible to >>>>>>>>>>>>>> use >>>>>>>>>>>>>> >>>>> blitFramebuffer to get current MSAA depth buffer into a >>>>>>>>>>>>>> non-MSAA depth >>>>>>>>>>>>>> >>>>> texture? The color buffer is resolved nicely, however >>>>>>>>>>>>>> the depth is just >>>>>>>>>>>>>> >>>>> blank, and no error is generated - wondering if it's >>>>>>>>>>>>>> the expected behaviour. >>>>>>>>>>>>>> >>>>> Depth format is DEPTH_COMPONENT32F. >>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>> >>> >>>>>>>>>>>>>> >> >>>>>>>>>>>>>> > >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Thu Jan 19 06:52:13 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Thu, 19 Jan 2017 15:52:13 +0100 Subject: [Public WebGL] Resolve MSAA depth? In-Reply-To: References: Message-ID: Can't you create a pixel pack buffer for your depth FBO attachment and then bindBuffer and texImage2D(data=null) it to a texture2D so you can read the subpixels? On Thu, Jan 19, 2017 at 3:38 PM, Mr F wrote: > I know and I'm not doing deferred, but rather more simple depth-dependent > effects in forward, which work quite fine with blitFramebuffer depth > (except the current performance issue). If I can't write to R32F with MSAA > though, I can only write encoded RGBA8 values, and these got averaged, they > become useless. > > On 19 January 2017 at 17:34, Florian B?sch wrote: > >> Note that averaged (MSAA'ed) colors at the borders and a single depth >> value for the pixel produces noticable artifacts when shading as a deferred >> step (using the depth for light calculations like area of influence of a >> deferred light). >> >> Most deferred renderers don't do MSAA rendering for that reason (and >> because of performance) and use alternative approaches to AA'ing that >> operates on the output once all lighting calculations are done. >> >> On Thu, Jan 19, 2017 at 3:28 PM, Mr F wrote: >> >>> Furthermore, rendering depth to a separate RT (using drawBuffers) >>> doesn't make much sense, because the samples are averaged (only needs 1st >>> sample, and that's what blit was supposed to do). >>> >>> On 19 January 2017 at 16:46, Mr F wrote: >>> >>>> Hmm seems like ANGLE is not able to render MSAA R32F color as well? So >>>> I'll have to use RGBA8 encoded depth WebGL1-style again. >>>> >>>> On 19 January 2017 at 00:02, Mr F wrote: >>>> >>>>> I've just posted a possible solution to this issue there. >>>>> >>>>> On 18 January 2017 at 01:08, Jamie Madill >>>>> wrote: >>>>> >>>>>> Aha yes. >>>>>> >>>>>> Long story short, I couldn't get multisampled resolve (aka blit from >>>>>> multisample to single-sampled textures) working with a shader. I spent a >>>>>> fair bit of time on this. 
Sorry about this, but for now it's a slow path. >>>>>> Feel free to comment on the issue I just opened: anglebug.com/1710. >>>>>> I tried this a few different ways but always ended up with a blank texture >>>>>> and nothing being written. >>>>>> >>>>>> Not sure what to suggest other than trying to use a non-depth texture >>>>>> for the same effect if possible. >>>>>> >>>>>> On Tue, Jan 17, 2017 at 4:27 PM, Mr F wrote: >>>>>> >>>>>>> Here's a tiny example: http://geom.io/testResolveDepth/index.html >>>>>>> It blits MSAA R11G11B10 + 32F depth into 2 non-multisampled textures. >>>>>>> >>>>>>> >>>>>>> On 18 January 2017 at 00:23, Mr F wrote: >>>>>>> >>>>>>>> I'm talking about multisampled blit though >>>>>>>> >>>>>>>> On 17 January 2017 at 23:03, Jamie Madill >>>>>>>> wrote: >>>>>>>> >>>>>>>>> Non-multisample depth blit by itself should use a shader.. but >>>>>>>>> we'd need a small repro case to verify. Let me know if you can pull one >>>>>>>>> together and I'll look. >>>>>>>>> >>>>>>>>> On Tue, Jan 17, 2017 at 2:39 PM, Mr F >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> I understand depth/stencil complications. Are you sure 32F depth >>>>>>>>>> blits are never CPU-bound? There's a weird issue right now with it taking >>>>>>>>>> large chunk of performance, although only in FF Nightly (Chrome is fine). I >>>>>>>>>> wonder if some older versions of ANGLE did CPU copy for D32F? >>>>>>>>>> >>>>>>>>>> On 17 January 2017 at 22:28, Jamie Madill >>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>>> Blit is core in GLES 3, so we need to emulate some of the >>>>>>>>>>> depth/stencil blits to support GLES 3 at all. Such is the nature of what >>>>>>>>>>> ANGLE does. >>>>>>>>>>> >>>>>>>>>>> It's something that is predictable from ANGLE's point of view, >>>>>>>>>>> but I understand it might be unclear from the app's point of view. Stencil >>>>>>>>>>> bits in particular are difficult, and multisample depth/stencil. We're >>>>>>>>>>> working on making this more visible - one thing is that it'll show up in >>>>>>>>>>> Chrome's about:gpu page. Ideally it could show up in some debugging log >>>>>>>>>>> visible to javascript. >>>>>>>>>>> >>>>>>>>>>> On Mon, Jan 16, 2017 at 11:37 AM, Mr F >>>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>>> So it seems like ANGLE sometimes implements blit on CPU (!): >>>>>>>>>>>> https://github.com/google/angle/blob/master/src/libANGL >>>>>>>>>>>> E/renderer/d3d/d3d11/Blit11.cpp#L240 >>>>>>>>>>>> >>>>>>>>>>>> Which gives huge slowdowns without really telling why and being >>>>>>>>>>>> able to predict it. If on some platform some blit is not possible, maybe we >>>>>>>>>>>> should just throw an error, disable it, make it an extension or something? >>>>>>>>>>>> CPU blit is pure evil. >>>>>>>>>>>> >>>>>>>>>>>> On 1 December 2016 at 15:50, Mr F >>>>>>>>>>>> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> OK, sorry for bothering - must be my mistake somewhere. Made >>>>>>>>>>>>> it work while preparing the test case :) >>>>>>>>>>>>> >>>>>>>>>>>>> On 30 November 2016 at 23:03, Jamie Madill < >>>>>>>>>>>>> jmadill...@> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> Yes, a reproduction / test case would be awesome, thanks. >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Tue, Nov 29, 2016 at 9:20 PM, Jeff Gilbert < >>>>>>>>>>>>>> jgilbert...@> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> FWIW, the spec says pretty explicitly that BlitFramebuffer >>>>>>>>>>>>>>> should be >>>>>>>>>>>>>>> functional for depth->depth blitting, as long as the formats >>>>>>>>>>>>>>> match, >>>>>>>>>>>>>>> etc etc. 
>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> GLES 3.0.5 p198: >>>>>>>>>>>>>>> If the >>>>>>>>>>>>>>> source formats are depth values, sample values are resolved >>>>>>>>>>>>>>> in an >>>>>>>>>>>>>>> implementation- >>>>>>>>>>>>>>> dependent manner where the result will be between the >>>>>>>>>>>>>>> minimum and maximum >>>>>>>>>>>>>>> depth values in the pixel. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Tue, Nov 29, 2016 at 8:16 PM, Kenneth Russell < >>>>>>>>>>>>>>> kbr...@> wrote: >>>>>>>>>>>>>>> > Could you make any test case available? Do you have any >>>>>>>>>>>>>>> other platforms >>>>>>>>>>>>>>> > (Mac, Linux) to test on? You could also try launching >>>>>>>>>>>>>>> Chrome with the >>>>>>>>>>>>>>> > command line argument --use-angle=gl to see the difference >>>>>>>>>>>>>>> between ANGLE's >>>>>>>>>>>>>>> > D3D11 and OpenGL backends. >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> > On Tue, Nov 29, 2016 at 12:42 PM, Mr F < >>>>>>>>>>>>>>> arthur...@> wrote: >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> >> I'll try to prepare a clean test case soon. I'm on the >>>>>>>>>>>>>>> latest Chrome >>>>>>>>>>>>>>> >> Canary/Win 7/GTX970. Both source and destination are 32F. >>>>>>>>>>>>>>> What I mean by >>>>>>>>>>>>>>> >> "blank" actually looks like a solid red color, which >>>>>>>>>>>>>>> seems to be uniform >>>>>>>>>>>>>>> >> 1,0,0, no matter how close/far I'm from any geometry.It's >>>>>>>>>>>>>>> good to know that >>>>>>>>>>>>>>> >> it is supposed to work - maybe just my mistake somewhere, >>>>>>>>>>>>>>> although I tested >>>>>>>>>>>>>>> >> it heavily. The source buffer is created and attached >>>>>>>>>>>>>>> using >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> >> renderbufferStorageMultisample(gl.RENDERBUFFER, 4, >>>>>>>>>>>>>>> gl.DEPTH_COMPONENT32F, >>>>>>>>>>>>>>> >> width, height) >>>>>>>>>>>>>>> >> framebufferRenderbuffer(gl.FRAMEBUFFER, >>>>>>>>>>>>>>> gl.DEPTH_ATTACHMENT, >>>>>>>>>>>>>>> >> gl.RENDERBUFFER, msaaBuffer) >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> >> and the destination created and attached to another >>>>>>>>>>>>>>> framebufer with >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> >> texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT32F, >>>>>>>>>>>>>>> width, height, 0, >>>>>>>>>>>>>>> >> gl.DEPTH_COMPONENT, gl.FLOAT, null) >>>>>>>>>>>>>>> >> framebufferTexture2D(gl.FRAMEBUFFER, >>>>>>>>>>>>>>> gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, >>>>>>>>>>>>>>> >> tex, 0) >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> >> The blit is performed like this: >>>>>>>>>>>>>>> >> bindFramebuffer(gl.READ_FRAMEBUFFER, msaaFBO) >>>>>>>>>>>>>>> >> bindFramebuffer(gl.DRAW_FRAMEBUFFER, resolveFBO) >>>>>>>>>>>>>>> >> blitFramebuffer(0, 0, width, height, 0, 0, width, height, >>>>>>>>>>>>>>> >> gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT, gl.NEAREST) >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> >> On 29 November 2016 at 17:51, Jamie Madill < >>>>>>>>>>>>>>> jmadill...@> wrote: >>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>> >>> This generally should work. There are tests for this in >>>>>>>>>>>>>>> dEQP's >>>>>>>>>>>>>>> >>> dEQP-GLES3.functional.fbo.msaa and also in the WebGL >>>>>>>>>>>>>>> 2-specific tests in >>>>>>>>>>>>>>> >>> gles3/fbomultisample and I think also fboinvalidate/sub. 
>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>> >>> Note some of these tests are marked as failing on Intel >>>>>>>>>>>>>>> on pretty much >>>>>>>>>>>>>>> >>> all platforms: >>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>> >>> https://cs.chromium.org/chromi >>>>>>>>>>>>>>> um/src/content/test/gpu/gpu_tests/webgl2_conformance_expecta >>>>>>>>>>>>>>> tions.py?q=webgl2_conform&sq=package:chromium&l=118 >>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>> >>> Mr F, what platform are you on, and which vendor? >>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>> >>> On Mon, Nov 28, 2016 at 10:34 PM, Kenneth Russell < >>>>>>>>>>>>>>> kbr...@> wrote: >>>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>>> >>>> I'm pretty sure this should either work, or generate an >>>>>>>>>>>>>>> OpenGL error. >>>>>>>>>>>>>>> >>>> OpenGL ES 3.0.5 section 4.3.3 "Copying Pixels" says >>>>>>>>>>>>>>> BlitFramebuffer >>>>>>>>>>>>>>> >>>> generates INVALID_OPERATION if the mask includes >>>>>>>>>>>>>>> DEPTH_BUFFER_BIT or >>>>>>>>>>>>>>> >>>> STENCIL_BUFFER_BIT, and the source and destination >>>>>>>>>>>>>>> depth and stencil buffer >>>>>>>>>>>>>>> >>>> formats don't match. >>>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>>> >>>> It looks like there are extensive tests of resolving >>>>>>>>>>>>>>> multisampled depth >>>>>>>>>>>>>>> >>>> buffers to single-sampled ones, but I'm not sure about >>>>>>>>>>>>>>> resolving to a depth >>>>>>>>>>>>>>> >>>> texture. Do you have a test case? What platform are you >>>>>>>>>>>>>>> on (can you provide >>>>>>>>>>>>>>> >>>> about:gpu if you're running Chrome)? >>>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>>> >>>> -Ken >>>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>>> >>>> On Sat, Nov 26, 2016 at 12:26 PM, Mr F < >>>>>>>>>>>>>>> arthur...@> wrote: >>>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>>> >>>>> The docs are a bit unclear on that. Is it possible to >>>>>>>>>>>>>>> use >>>>>>>>>>>>>>> >>>>> blitFramebuffer to get current MSAA depth buffer into >>>>>>>>>>>>>>> a non-MSAA depth >>>>>>>>>>>>>>> >>>>> texture? The color buffer is resolved nicely, however >>>>>>>>>>>>>>> the depth is just >>>>>>>>>>>>>>> >>>>> blank, and no error is generated - wondering if it's >>>>>>>>>>>>>>> the expected behaviour. >>>>>>>>>>>>>>> >>>>> Depth format is DEPTH_COMPONENT32F. >>>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>> > >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From art...@ Thu Jan 19 08:45:05 2017 From: art...@ (Mr F) Date: Thu, 19 Jan 2017 19:45:05 +0300 Subject: [Public WebGL] Resolve MSAA depth? In-Reply-To: References: Message-ID: I'd be surprised if it'll work, but I'll give it a try, thanks. On 19 January 2017 at 17:52, Florian B?sch wrote: > Can't you create a pixel pack buffer for your depth FBO attachment and > then bindBuffer and texImage2D(data=null) it to a texture2D so you can read > the subpixels? > > On Thu, Jan 19, 2017 at 3:38 PM, Mr F wrote: > >> I know and I'm not doing deferred, but rather more simple depth-dependent >> effects in forward, which work quite fine with blitFramebuffer depth >> (except the current performance issue). If I can't write to R32F with MSAA >> though, I can only write encoded RGBA8 values, and these got averaged, they >> become useless. 
>> >> On 19 January 2017 at 17:34, Florian B?sch wrote: >> >>> Note that averaged (MSAA'ed) colors at the borders and a single depth >>> value for the pixel produces noticable artifacts when shading as a deferred >>> step (using the depth for light calculations like area of influence of a >>> deferred light). >>> >>> Most deferred renderers don't do MSAA rendering for that reason (and >>> because of performance) and use alternative approaches to AA'ing that >>> operates on the output once all lighting calculations are done. >>> >>> On Thu, Jan 19, 2017 at 3:28 PM, Mr F wrote: >>> >>>> Furthermore, rendering depth to a separate RT (using drawBuffers) >>>> doesn't make much sense, because the samples are averaged (only needs 1st >>>> sample, and that's what blit was supposed to do). >>>> >>>> On 19 January 2017 at 16:46, Mr F wrote: >>>> >>>>> Hmm seems like ANGLE is not able to render MSAA R32F color as well? So >>>>> I'll have to use RGBA8 encoded depth WebGL1-style again. >>>>> >>>>> On 19 January 2017 at 00:02, Mr F wrote: >>>>> >>>>>> I've just posted a possible solution to this issue there. >>>>>> >>>>>> On 18 January 2017 at 01:08, Jamie Madill >>>>>> wrote: >>>>>> >>>>>>> Aha yes. >>>>>>> >>>>>>> Long story short, I couldn't get multisampled resolve (aka blit from >>>>>>> multisample to single-sampled textures) working with a shader. I spent a >>>>>>> fair bit of time on this. Sorry about this, but for now it's a slow path. >>>>>>> Feel free to comment on the issue I just opened: anglebug.com/1710. >>>>>>> I tried this a few different ways but always ended up with a blank texture >>>>>>> and nothing being written. >>>>>>> >>>>>>> Not sure what to suggest other than trying to use a non-depth >>>>>>> texture for the same effect if possible. >>>>>>> >>>>>>> On Tue, Jan 17, 2017 at 4:27 PM, Mr F wrote: >>>>>>> >>>>>>>> Here's a tiny example: http://geom.io/testResolveDepth/index.html >>>>>>>> It blits MSAA R11G11B10 + 32F depth into 2 non-multisampled >>>>>>>> textures. >>>>>>>> >>>>>>>> >>>>>>>> On 18 January 2017 at 00:23, Mr F wrote: >>>>>>>> >>>>>>>>> I'm talking about multisampled blit though >>>>>>>>> >>>>>>>>> On 17 January 2017 at 23:03, Jamie Madill >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> Non-multisample depth blit by itself should use a shader.. but >>>>>>>>>> we'd need a small repro case to verify. Let me know if you can pull one >>>>>>>>>> together and I'll look. >>>>>>>>>> >>>>>>>>>> On Tue, Jan 17, 2017 at 2:39 PM, Mr F >>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>>> I understand depth/stencil complications. Are you sure 32F depth >>>>>>>>>>> blits are never CPU-bound? There's a weird issue right now with it taking >>>>>>>>>>> large chunk of performance, although only in FF Nightly (Chrome is fine). I >>>>>>>>>>> wonder if some older versions of ANGLE did CPU copy for D32F? >>>>>>>>>>> >>>>>>>>>>> On 17 January 2017 at 22:28, Jamie Madill >>>>>>>>>>> wrote: >>>>>>>>>>> >>>>>>>>>>>> Blit is core in GLES 3, so we need to emulate some of the >>>>>>>>>>>> depth/stencil blits to support GLES 3 at all. Such is the nature of what >>>>>>>>>>>> ANGLE does. >>>>>>>>>>>> >>>>>>>>>>>> It's something that is predictable from ANGLE's point of view, >>>>>>>>>>>> but I understand it might be unclear from the app's point of view. Stencil >>>>>>>>>>>> bits in particular are difficult, and multisample depth/stencil. We're >>>>>>>>>>>> working on making this more visible - one thing is that it'll show up in >>>>>>>>>>>> Chrome's about:gpu page. 
Ideally it could show up in some debugging log >>>>>>>>>>>> visible to javascript. >>>>>>>>>>>> >>>>>>>>>>>> On Mon, Jan 16, 2017 at 11:37 AM, Mr F >>>>>>>>>>>> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> So it seems like ANGLE sometimes implements blit on CPU (!): >>>>>>>>>>>>> https://github.com/google/angle/blob/master/src/libANGL >>>>>>>>>>>>> E/renderer/d3d/d3d11/Blit11.cpp#L240 >>>>>>>>>>>>> >>>>>>>>>>>>> Which gives huge slowdowns without really telling why and >>>>>>>>>>>>> being able to predict it. If on some platform some blit is not possible, >>>>>>>>>>>>> maybe we should just throw an error, disable it, make it an extension or >>>>>>>>>>>>> something? CPU blit is pure evil. >>>>>>>>>>>>> >>>>>>>>>>>>> On 1 December 2016 at 15:50, Mr F >>>>>>>>>>>>> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> OK, sorry for bothering - must be my mistake somewhere. Made >>>>>>>>>>>>>> it work while preparing the test case :) >>>>>>>>>>>>>> >>>>>>>>>>>>>> On 30 November 2016 at 23:03, Jamie Madill < >>>>>>>>>>>>>> jmadill...@> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> Yes, a reproduction / test case would be awesome, thanks. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Tue, Nov 29, 2016 at 9:20 PM, Jeff Gilbert < >>>>>>>>>>>>>>> jgilbert...@> wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> FWIW, the spec says pretty explicitly that BlitFramebuffer >>>>>>>>>>>>>>>> should be >>>>>>>>>>>>>>>> functional for depth->depth blitting, as long as the >>>>>>>>>>>>>>>> formats match, >>>>>>>>>>>>>>>> etc etc. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> GLES 3.0.5 p198: >>>>>>>>>>>>>>>> If the >>>>>>>>>>>>>>>> source formats are depth values, sample values are resolved >>>>>>>>>>>>>>>> in an >>>>>>>>>>>>>>>> implementation- >>>>>>>>>>>>>>>> dependent manner where the result will be between the >>>>>>>>>>>>>>>> minimum and maximum >>>>>>>>>>>>>>>> depth values in the pixel. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On Tue, Nov 29, 2016 at 8:16 PM, Kenneth Russell < >>>>>>>>>>>>>>>> kbr...@> wrote: >>>>>>>>>>>>>>>> > Could you make any test case available? Do you have any >>>>>>>>>>>>>>>> other platforms >>>>>>>>>>>>>>>> > (Mac, Linux) to test on? You could also try launching >>>>>>>>>>>>>>>> Chrome with the >>>>>>>>>>>>>>>> > command line argument --use-angle=gl to see the >>>>>>>>>>>>>>>> difference between ANGLE's >>>>>>>>>>>>>>>> > D3D11 and OpenGL backends. >>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> > On Tue, Nov 29, 2016 at 12:42 PM, Mr F < >>>>>>>>>>>>>>>> arthur...@> wrote: >>>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>>> >> I'll try to prepare a clean test case soon. I'm on the >>>>>>>>>>>>>>>> latest Chrome >>>>>>>>>>>>>>>> >> Canary/Win 7/GTX970. Both source and destination are >>>>>>>>>>>>>>>> 32F. What I mean by >>>>>>>>>>>>>>>> >> "blank" actually looks like a solid red color, which >>>>>>>>>>>>>>>> seems to be uniform >>>>>>>>>>>>>>>> >> 1,0,0, no matter how close/far I'm from any >>>>>>>>>>>>>>>> geometry.It's good to know that >>>>>>>>>>>>>>>> >> it is supposed to work - maybe just my mistake >>>>>>>>>>>>>>>> somewhere, although I tested >>>>>>>>>>>>>>>> >> it heavily. 
The source buffer is created and attached >>>>>>>>>>>>>>>> using >>>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>>> >> renderbufferStorageMultisample(gl.RENDERBUFFER, 4, >>>>>>>>>>>>>>>> gl.DEPTH_COMPONENT32F, >>>>>>>>>>>>>>>> >> width, height) >>>>>>>>>>>>>>>> >> framebufferRenderbuffer(gl.FRAMEBUFFER, >>>>>>>>>>>>>>>> gl.DEPTH_ATTACHMENT, >>>>>>>>>>>>>>>> >> gl.RENDERBUFFER, msaaBuffer) >>>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>>> >> and the destination created and attached to another >>>>>>>>>>>>>>>> framebufer with >>>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>>> >> texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT32F, >>>>>>>>>>>>>>>> width, height, 0, >>>>>>>>>>>>>>>> >> gl.DEPTH_COMPONENT, gl.FLOAT, null) >>>>>>>>>>>>>>>> >> framebufferTexture2D(gl.FRAMEBUFFER, >>>>>>>>>>>>>>>> gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, >>>>>>>>>>>>>>>> >> tex, 0) >>>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>>> >> The blit is performed like this: >>>>>>>>>>>>>>>> >> bindFramebuffer(gl.READ_FRAMEBUFFER, msaaFBO) >>>>>>>>>>>>>>>> >> bindFramebuffer(gl.DRAW_FRAMEBUFFER, resolveFBO) >>>>>>>>>>>>>>>> >> blitFramebuffer(0, 0, width, height, 0, 0, width, height, >>>>>>>>>>>>>>>> >> gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT, gl.NEAREST) >>>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>>> >> On 29 November 2016 at 17:51, Jamie Madill < >>>>>>>>>>>>>>>> jmadill...@> wrote: >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> >>> This generally should work. There are tests for this in >>>>>>>>>>>>>>>> dEQP's >>>>>>>>>>>>>>>> >>> dEQP-GLES3.functional.fbo.msaa and also in the WebGL >>>>>>>>>>>>>>>> 2-specific tests in >>>>>>>>>>>>>>>> >>> gles3/fbomultisample and I think also fboinvalidate/sub. >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> >>> Note some of these tests are marked as failing on Intel >>>>>>>>>>>>>>>> on pretty much >>>>>>>>>>>>>>>> >>> all platforms: >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> >>> https://cs.chromium.org/chromi >>>>>>>>>>>>>>>> um/src/content/test/gpu/gpu_te >>>>>>>>>>>>>>>> sts/webgl2_conformance_expecta >>>>>>>>>>>>>>>> tions.py?q=webgl2_conform&sq=package:chromium&l=118 >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> >>> Mr F, what platform are you on, and which vendor? >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> >>> On Mon, Nov 28, 2016 at 10:34 PM, Kenneth Russell < >>>>>>>>>>>>>>>> kbr...@> wrote: >>>>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>>>> >>>> I'm pretty sure this should either work, or generate >>>>>>>>>>>>>>>> an OpenGL error. >>>>>>>>>>>>>>>> >>>> OpenGL ES 3.0.5 section 4.3.3 "Copying Pixels" says >>>>>>>>>>>>>>>> BlitFramebuffer >>>>>>>>>>>>>>>> >>>> generates INVALID_OPERATION if the mask includes >>>>>>>>>>>>>>>> DEPTH_BUFFER_BIT or >>>>>>>>>>>>>>>> >>>> STENCIL_BUFFER_BIT, and the source and destination >>>>>>>>>>>>>>>> depth and stencil buffer >>>>>>>>>>>>>>>> >>>> formats don't match. >>>>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>>>> >>>> It looks like there are extensive tests of resolving >>>>>>>>>>>>>>>> multisampled depth >>>>>>>>>>>>>>>> >>>> buffers to single-sampled ones, but I'm not sure about >>>>>>>>>>>>>>>> resolving to a depth >>>>>>>>>>>>>>>> >>>> texture. Do you have a test case? What platform are >>>>>>>>>>>>>>>> you on (can you provide >>>>>>>>>>>>>>>> >>>> about:gpu if you're running Chrome)? >>>>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>>>> >>>> -Ken >>>>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>>>> >>>> On Sat, Nov 26, 2016 at 12:26 PM, Mr F < >>>>>>>>>>>>>>>> arthur...@> wrote: >>>>>>>>>>>>>>>> >>>>> >>>>>>>>>>>>>>>> >>>>> The docs are a bit unclear on that. 
Is it possible to >>>>>>>>>>>>>>>> use >>>>>>>>>>>>>>>> >>>>> blitFramebuffer to get current MSAA depth buffer into >>>>>>>>>>>>>>>> a non-MSAA depth >>>>>>>>>>>>>>>> >>>>> texture? The color buffer is resolved nicely, however >>>>>>>>>>>>>>>> the depth is just >>>>>>>>>>>>>>>> >>>>> blank, and no error is generated - wondering if it's >>>>>>>>>>>>>>>> the expected behaviour. >>>>>>>>>>>>>>>> >>>>> Depth format is DEPTH_COMPONENT32F. >>>>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>>>> >>>> >>>>>>>>>>>>>>>> >>> >>>>>>>>>>>>>>>> >> >>>>>>>>>>>>>>>> > >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From juj...@ Fri Jan 20 07:46:59 2017 From: juj...@ (=?UTF-8?Q?Jukka_Jyl=C3=A4nki?=) Date: Fri, 20 Jan 2017 17:46:59 +0200 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? Message-ID: Hi, I'm working on upgrading an engine to use WebGL 2 from WebGL 1, and I find that EXT_shader_texture_lod has been added to the core WebGL 2 spec. In WebGL 2, one can write #version 100 and #version 300 es shaders, with the idea that in WebGL2, one could reuse GLSL 1.0/GLES2 shaders without having to mass re-write all of them to GLSL 3.00/GLES3 when migrating. This is great for backwards compatibility, however I find that neither Firefox or Chrome implementations of WebGL 2 expose EXT_shader_texture_lod anymore. This means that if one does run #version 100 shaders in a WebGL 2 context, they lose the ability to do textureCubeLodEXT() and so forth, which forces one to go and mass convert all shaders to GLSL 3.00/GLES3. If one wants to simultaneously target both WebGL1 and WebGL2, i.e. gracefully falling back to WebGL1 on older browsers, this means that one needs to author two versions of each shader that needs this extension, which is a lot of hassle to manage. Could implementations still carry EXT_shader_texture_lod around in WebGL 2 so that #version 100 shaders with that extension would still keep working? I know WebGL is neither backwards or forwards compatible, but it would be great to minimize these types of breaking changes where possible, so that engines that co-target both versions on the fly would have easier time. Though I do admit I'm not completely sure how this looks like in different vendors' native GLES3 implementations, and if they have this mechanism available as well for this extension. Does anyone know if there was any fundamental reason to not advertise EXT_shader_texture_lod anymore in WebGL 2, or was it just a "oh, that's in core now so no need to have it present anymore" type of thought? Thanks! Jukka -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Fri Jan 20 08:15:51 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Fri, 20 Jan 2017 17:15:51 +0100 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: source = source.replace(/textureCubeLodExt/g, 'textureCubeLod') gl.shaderSource(source); Most serious WebGL code usually does some serious preprocessing on the shaders before submitting them, I'm guessing yours is no different. There's various glsl parsers and the like you use to do this properly. 
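For illustration, a minimal fragment-shader-only sketch of that kind of rewrite (the names fragShader100To300es, webgl1_FragColor and isWebGL2 are invented here; the regexes assume the shader does not reuse those identifiers, and a production converter should run a real GLSL parser rather than string replacement):

    function fragShader100To300es(src) {
      // Drop the old version/extension directives; EXT_shader_texture_lod is core in ESSL 3.00.
      var body = src
        .replace(/#version\s+100\s*\n/, '')
        .replace(/#extension\s+GL_EXT_shader_texture_lod\s*:\s*\w+\s*\n/g, '')
        // The explicit-LOD built-ins were renamed when they moved into core.
        .replace(/\btexture2DLodEXT\b/g, 'textureLod')
        .replace(/\btextureCubeLodEXT\b/g, 'textureLod')
        // The non-LOD sampling built-ins collapsed into the overloaded texture().
        .replace(/\btexture2D\b/g, 'texture')
        .replace(/\btextureCube\b/g, 'texture')
        // Fragment-shader inputs and the output variable changed syntax.
        .replace(/\bvarying\b/g, 'in')
        .replace(/\bgl_FragColor\b/g, 'webgl1_FragColor');
      return '#version 300 es\n' +
             'out highp vec4 webgl1_FragColor;\n' +
             body;
    }

    gl.shaderSource(fragShader, isWebGL2 ? fragShader100To300es(source) : source);

The vertex-shader side needs the analogous attribute -> in and varying -> out substitutions, which is exactly why a parser-based pass is less fragile than string replacement.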
On Fri, Jan 20, 2017 at 4:46 PM, Jukka Jyl?nki wrote: > Does anyone know if there was any fundamental reason to not advertise > EXT_shader_texture_lod anymore in WebGL 2, or was it just a "oh, that's in > core now so no need to have it present anymore" type of thought? > GL/ES and WebGL have slightly different philosophies. In WebGL1, you cannot use any of the ES 1.0 fixed function functionality, it's essentially ES 2.0 core without the gunk. It stems from an observation of the horrid nature of the usual fare of oldschool/newschool mixed GL code find floating about, and how that's been a drain on GL. This philosophy of cleaning things up has carried on to WebGL 2 by not exposing extensions for which there is now core functionality, in the hope not to produce a horrible oldschool/newschool mix of WebGL code. -------------- next part -------------- An HTML attachment was scrubbed... URL: From juj...@ Fri Jan 20 08:46:13 2017 From: juj...@ (=?UTF-8?Q?Jukka_Jyl=C3=A4nki?=) Date: Fri, 20 Jan 2017 18:46:13 +0200 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: 2017-01-20 18:15 GMT+02:00 Florian B?sch : > source = source.replace(/textureCubeLodExt/g, 'textureCubeLod') > Went down that path, though unfortunately such a function does not exist. There is textureLod() in GLSL 3.0, but then like mentioned above, one has to do a bunch of other search-replaces. Got that working, but it is super hacky. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zmo...@ Fri Jan 20 10:24:13 2017 From: zmo...@ (Zhenyao Mo) Date: Fri, 20 Jan 2017 10:24:13 -0800 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: Underlying ES3 drivers don't expose the EXT_shader_texture_lod, so currently you are not getting EXT_shader_texture_lod on most mobile platforms in WebGL1 because we don't expose this if WebGL1 is implemented on top of ES3 drivers. There are other incompatibilities between WebGL1/WebGL2 shaders, like texture2D -> texture, etc. You will need to rewrite or prep-process your old shaders anyway. I don't see a strong reason to expose older semantics for texture lod in WebGL2. On Fri, Jan 20, 2017 at 7:46 AM, Jukka Jyl?nki wrote: > Hi, > > I'm working on upgrading an engine to use WebGL 2 from WebGL 1, and I find > that EXT_shader_texture_lod has been added to the core WebGL 2 spec. > > In WebGL 2, one can write #version 100 and #version 300 es shaders, with > the idea that in WebGL2, one could reuse GLSL 1.0/GLES2 shaders without > having to mass re-write all of them to GLSL 3.00/GLES3 when migrating. > > This is great for backwards compatibility, however I find that neither > Firefox or Chrome implementations of WebGL 2 expose EXT_shader_texture_lod > anymore. This means that if one does run #version 100 shaders in a WebGL 2 > context, they lose the ability to do textureCubeLodEXT() and so forth, > which forces one to go and mass convert all shaders to GLSL 3.00/GLES3. If > one wants to simultaneously target both WebGL1 and WebGL2, i.e. gracefully > falling back to WebGL1 on older browsers, this means that one needs to > author two versions of each shader that needs this extension, which is a > lot of hassle to manage. > > Could implementations still carry EXT_shader_texture_lod around in WebGL 2 > so that #version 100 shaders with that extension would still keep > working? 
I know WebGL is neither backwards or forwards compatible, but it > would be great to minimize these types of breaking changes where possible, > so that engines that co-target both versions on the fly would have easier > time. > > Though I do admit I'm not completely sure how this looks like in different > vendors' native GLES3 implementations, and if they have this mechanism > available as well for this extension. > > Does anyone know if there was any fundamental reason to not advertise > EXT_shader_texture_lod anymore in WebGL 2, or was it just a "oh, that's in > core now so no need to have it present anymore" type of thought? > > Thanks! > Jukka > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From max...@ Fri Jan 20 11:35:41 2017 From: max...@ (Maksims Mihejevs) Date: Fri, 20 Jan 2017 19:35:41 +0000 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: Zhenyao, question regarding availability of EXT_shader_texture_lod in webgl1, we've noticed at PlayCanvas lack of it on many mobile platforms, and had to come up with several workarounds for different things. One of them is image based lighting for physically based rendering. On those platforms we have to use alternative approach, and on runtime generate atlas texture with all mip levels in it, so in shader we have to do 2 samples and lerp between them, basically simulating LoD. For IBL using cube maps, you would store different prefilter levels of cubemap as mips, and then with texture_lod simply pick them. This drives big problems, and makes IBL quiet slow on those platforms, because as you guess, sampling twice same texture from very different locations (atlas), leads to constant sample block cache revalidation. And this has pretty sad impact on performance. Question is: *how much of a trouble would be actually make texture_lod still work when webgl1 is implemented on top of ES3?* Kind Regards, Max On 20 January 2017 at 18:24, Zhenyao Mo wrote: > Underlying ES3 drivers don't expose the EXT_shader_texture_lod, so > currently you are not getting EXT_shader_texture_lod on most mobile > platforms in WebGL1 because we don't expose this if WebGL1 is implemented > on top of ES3 drivers. > > There are other incompatibilities between WebGL1/WebGL2 shaders, like > texture2D -> texture, etc. You will need to rewrite or prep-process your > old shaders anyway. I don't see a strong reason to expose older semantics > for texture lod in WebGL2. > > On Fri, Jan 20, 2017 at 7:46 AM, Jukka Jyl?nki wrote: > >> Hi, >> >> I'm working on upgrading an engine to use WebGL 2 from WebGL 1, and I >> find that EXT_shader_texture_lod has been added to the core WebGL 2 spec. >> >> In WebGL 2, one can write #version 100 and #version 300 es shaders, with >> the idea that in WebGL2, one could reuse GLSL 1.0/GLES2 shaders without >> having to mass re-write all of them to GLSL 3.00/GLES3 when migrating. >> >> This is great for backwards compatibility, however I find that neither >> Firefox or Chrome implementations of WebGL 2 expose EXT_shader_texture_lod >> anymore. This means that if one does run #version 100 shaders in a WebGL 2 >> context, they lose the ability to do textureCubeLodEXT() and so forth, >> which forces one to go and mass convert all shaders to GLSL 3.00/GLES3. If >> one wants to simultaneously target both WebGL1 and WebGL2, i.e. 
gracefully >> falling back to WebGL1 on older browsers, this means that one needs to >> author two versions of each shader that needs this extension, which is a >> lot of hassle to manage. >> >> Could implementations still carry EXT_shader_texture_lod around in WebGL >> 2 so that #version 100 shaders with that extension would still keep >> working? I know WebGL is neither backwards or forwards compatible, but it >> would be great to minimize these types of breaking changes where possible, >> so that engines that co-target both versions on the fly would have easier >> time. >> >> Though I do admit I'm not completely sure how this looks like in >> different vendors' native GLES3 implementations, and if they have this >> mechanism available as well for this extension. >> >> Does anyone know if there was any fundamental reason to not advertise >> EXT_shader_texture_lod anymore in WebGL 2, or was it just a "oh, that's in >> core now so no need to have it present anymore" type of thought? >> >> Thanks! >> Jukka >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zmo...@ Fri Jan 20 14:26:28 2017 From: zmo...@ (Zhenyao Mo) Date: Fri, 20 Jan 2017 14:26:28 -0800 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: At this point, WebGL1 is in maintenance mode. With limited engineering resources, instead of exposing ES3 features to WebGL1 as extensions, our focus is to make WebGL2 great and then next generation of web graphics. Chrome/Firefox will ship WebGL2 on desktop this month, and mobile should follow shortly after. That should be the path forward. On Fri, Jan 20, 2017 at 11:35 AM, Maksims Mihejevs wrote: > Zhenyao, question regarding availability of EXT_shader_texture_lod in > webgl1, we've noticed at PlayCanvas lack of it on many mobile platforms, > and had to come up with several workarounds for different things. One of > them is image based lighting for physically based rendering. On those > platforms we have to use alternative approach, and on runtime generate > atlas texture with all mip levels in it, so in shader we have to do 2 > samples and lerp between them, basically simulating LoD. > > For IBL using cube maps, you would store different prefilter levels of > cubemap as mips, and then with texture_lod simply pick them. > > This drives big problems, and makes IBL quiet slow on those platforms, > because as you guess, sampling twice same texture from very different > locations (atlas), leads to constant sample block cache revalidation. And > this has pretty sad impact on performance. > > Question is: *how much of a trouble would be actually make texture_lod > still work when webgl1 is implemented on top of ES3?* > > Kind Regards, > Max > > On 20 January 2017 at 18:24, Zhenyao Mo wrote: > >> Underlying ES3 drivers don't expose the EXT_shader_texture_lod, so >> currently you are not getting EXT_shader_texture_lod on most mobile >> platforms in WebGL1 because we don't expose this if WebGL1 is implemented >> on top of ES3 drivers. >> >> There are other incompatibilities between WebGL1/WebGL2 shaders, like >> texture2D -> texture, etc. You will need to rewrite or prep-process your >> old shaders anyway. I don't see a strong reason to expose older semantics >> for texture lod in WebGL2. >> >> On Fri, Jan 20, 2017 at 7:46 AM, Jukka Jyl?nki wrote: >> >>> Hi, >>> >>> I'm working on upgrading an engine to use WebGL 2 from WebGL 1, and I >>> find that EXT_shader_texture_lod has been added to the core WebGL 2 spec. 
>>> >>> In WebGL 2, one can write #version 100 and #version 300 es shaders, with >>> the idea that in WebGL2, one could reuse GLSL 1.0/GLES2 shaders without >>> having to mass re-write all of them to GLSL 3.00/GLES3 when migrating. >>> >>> This is great for backwards compatibility, however I find that neither >>> Firefox or Chrome implementations of WebGL 2 expose EXT_shader_texture_lod >>> anymore. This means that if one does run #version 100 shaders in a WebGL 2 >>> context, they lose the ability to do textureCubeLodEXT() and so forth, >>> which forces one to go and mass convert all shaders to GLSL 3.00/GLES3. If >>> one wants to simultaneously target both WebGL1 and WebGL2, i.e. gracefully >>> falling back to WebGL1 on older browsers, this means that one needs to >>> author two versions of each shader that needs this extension, which is a >>> lot of hassle to manage. >>> >>> Could implementations still carry EXT_shader_texture_lod around in WebGL >>> 2 so that #version 100 shaders with that extension would still keep >>> working? I know WebGL is neither backwards or forwards compatible, but it >>> would be great to minimize these types of breaking changes where possible, >>> so that engines that co-target both versions on the fly would have easier >>> time. >>> >>> Though I do admit I'm not completely sure how this looks like in >>> different vendors' native GLES3 implementations, and if they have this >>> mechanism available as well for this extension. >>> >>> Does anyone know if there was any fundamental reason to not advertise >>> EXT_shader_texture_lod anymore in WebGL 2, or was it just a "oh, that's in >>> core now so no need to have it present anymore" type of thought? >>> >>> Thanks! >>> Jukka >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Sat Jan 21 01:03:18 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Sat, 21 Jan 2017 10:03:18 +0100 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: On Fri, Jan 20, 2017 at 11:26 PM, Zhenyao Mo wrote: > At this point, WebGL1 is in maintenance mode. > Not that I object to the sentiment of not forward porting legacy APIs to WebGL2, I do take issue with the sentiment in general. I understand why you think this is how things should be, but I think this is wrong. As of January 19th 2017, support for WebGL1 extensions is still fragmentary . 
Picking out extensions with less than 90% support where support levels by UAs that do implement them is substantially higher (and that aren't compression related) - EXT_color_buffer_float *10%, could be 98% * (introduced 2012, draft 2012, community approved 2014) - EXT_color_buffer_half_float *10%, could be 97%* (introduced 2012, draft 2012, community approved 2014) - WEBGL_debug_renderer_info *89%, could be 100%* (introduced 2011, ratified 2013) - WEBGL_depth_texture *89%, could be 91% *(introduced 2012, community approved 2013, ratified 2013) - EXT_disjoint_timer_query *45%, could be 75% *(introduced 2013, community approved 2015) - WEBGL_draw_buffers *62%, could be 75%* (introduced 2012, ratified 2014) - EXT_frag_depth *68%, could be 84%* (introduced 2012, draft 2012, community approved 2014, ratified 2015) - EXT_sRGB *64%, could be 76% *(introduced 2012, draft 2013, community approved 2014) - EXT_shader_texture_lod *77%, could be 86%* (introduced 2014, draft 2014, community approved 2014, ratified 2015) All of these extensions are older than 3 years, and the reason they're not well supported is because some or other UA didn't implement them, oftentimes citing "concentrating on other things". Chief among the reasons why other UAs didn't see a pressing need to implement these ubiquitously supportable extensions, is because *other* UAs didn't send a clear enough signal early enough that these are in everybodies interest to be implemented. WebGL1 unfortunately is going to be continued to be used for many years for a variety of reasons. I believe that WebGL Stats tracking of WebGL2 will demonstrate this in the years to come. UAs share a responsibility to make WebGL better in all regards, this includes: - Extending the maximum parameter space of all WebGL versions (Apple, I'm looking at you) - Improving the support of existing extensions - Implementing new extensions - Supporting WebGL2 (Microsoft, Apple, your move) Saying "WebGL1 is in maintenance mode" is really just saying you're abandoning it. Abandoning WebGL1 effectively means abandoning WebGL, period. You condemn developers to live with a situation for many years, that could easily have been vastly improved if UAs properly assumed their responsibilities and vigorously competed with each other to to provide the better WebGL implementation (of any version) than their competition. There will be a time when its appropriate to abandon WebGL1. That time will not be here for many years. And although you should indicate very clearly that WebGL2 is now the top of the competition, that does not mean you can slack off in every other regard. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Sat Jan 21 01:17:10 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Sat, 21 Jan 2017 10:17:10 +0100 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: Writing multiple renderpaths for multiple versions, that take full advantage of the capabilities of each version is very time-consuming and difficult. The easiest way to for developers to take full advantage of WebGL2 for the time being, would be to be able to "polyfill" features they'd like to use into WebGL1 that are found in WebGL2. By abandoning WebGL1, with a very fragmentary capability coverage to WebGL2, you're ensuring that WebGL2 adoption will be much slower than we all wish. That's because there's no easy way to "polyfill" the enticing features of WebGL2 into WebGL1. 
Although I don't think this should prompt a mass backporting effort, I do think this warrants continued effort to improve the existing WebGL1 capabilities across all UAs and maybe introduce more WebGL1 extensions for particularly sore spots (like 3D textures). -------------- next part -------------- An HTML attachment was scrubbed... URL: From max...@ Sat Jan 21 04:32:31 2017 From: max...@ (Maksims Mihejevs) Date: Sat, 21 Jan 2017 12:32:31 +0000 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: WebGL 1 will stay very important for very long. If the gap between of webgl1 and webgl2 is large, this just complicates everything. We already seeing this, and it makes engine and app developers life much harder. As they still have to support both for long time to come. And more importantly, if any implementation is done, it has to be done not using CPU path. We've seeing already many enormous issues. For example MSAA with ANGLE has CPU path, and is unusable at all, as performance drops insanely. ETC2 was another case where CPU path was pushed a lot. And if this happens, for sake of "best availability support", you then damaging performance and quality so badly, that we even had to disable certain features in webgl2 simply by checking "if ANGLE, then no MSAA". We need not just feature availability in webgl2, but done with responsibility and performance. And we need webgl1 to progress. This is extremely important to allow webgl developers to create content that truly works everywhere. Webgl2 is not a globally adopted replacement of webgl1. It is an enhancement and new features to end developer. But not a replacement by any means. On 21 Jan 2017 9:17 a.m., "Florian B?sch" wrote: > Writing multiple renderpaths for multiple versions, that take full > advantage of the capabilities of each version is very time-consuming and > difficult. > > The easiest way to for developers to take full advantage of WebGL2 for the > time being, would be to be able to "polyfill" features they'd like to use > into WebGL1 that are found in WebGL2. > > By abandoning WebGL1, with a very fragmentary capability coverage to > WebGL2, you're ensuring that WebGL2 adoption will be much slower than we > all wish. That's because there's no easy way to "polyfill" the enticing > features of WebGL2 into WebGL1. Although I don't think this should prompt a > mass backporting effort, I do think this warrants continued effort to > improve the existing WebGL1 capabilities across all UAs and maybe introduce > more WebGL1 extensions for particularly sore spots (like 3D textures). > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Sat Jan 21 04:36:16 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Sat, 21 Jan 2017 13:36:16 +0100 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: On Sat, Jan 21, 2017 at 1:32 PM, Maksims Mihejevs wrote: > And more importantly, if any implementation is done, it has to be done not > using CPU path. > We've seeing already many enormous issues. For example MSAA with ANGLE has > CPU path, and is unusable at all, as performance drops insanely. ETC2 was > another case where CPU path was pushed a lot. And if this happens, for sake > of "best availability support", you then damaging performance and quality > so badly, that we even had to disable certain features in webgl2 simply by > checking "if ANGLE, then no MSAA". 
> I believe that the failIfMajorPerformanceCaveat should indicate those CPU paths doesn't it? I've started tracking this on WebGL stats at the end of december. So at the end of January there'll be a full 30-day window for that. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Sat Jan 21 04:41:42 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Sat, 21 Jan 2017 13:41:42 +0100 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: On Sat, Jan 21, 2017 at 1:36 PM, Florian B?sch wrote: > I believe that the failIfMajorPerformanceCaveat should indicate > It would of course help if every UA implemented this flag, even if it's never failing. I test implementation status as: if (ctx.getContextAttributes().failIfMajorPerformanceCaveat != null) { So if that is null, I have to assume it's not implemented, which I signal on WebGL Stats with "Unknown". -------------- next part -------------- An HTML attachment was scrubbed... URL: From max...@ Sat Jan 21 07:37:15 2017 From: max...@ (Maksims Mihejevs) Date: Sat, 21 Jan 2017 15:37:15 +0000 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: On 21 January 2017 at 12:36, Florian B?sch wrote: > On Sat, Jan 21, 2017 at 1:32 PM, Maksims Mihejevs > wrote: > >> And more importantly, if any implementation is done, it has to be done >> not using CPU path. >> We've seeing already many enormous issues. For example MSAA with ANGLE >> has CPU path, and is unusable at all, as performance drops insanely. ETC2 >> was another case where CPU path was pushed a lot. And if this happens, for >> sake of "best availability support", you then damaging performance and >> quality so badly, that we even had to disable certain features in webgl2 >> simply by checking "if ANGLE, then no MSAA". >> > > I believe that the failIfMajorPerformanceCaveat should indicate those CPU > paths doesn't it? I've started tracking this on WebGL stats at the end of > december. So at the end of January there'll be a full 30-day window for > that. > Is this reliable? And will it include every small thing, and how friendly it is to end user to use, in order to query for those caveats? Can we rely on vendors that they will report honestly about caveats? Because it seems like this is not the case. For example on Windows, where blit is implemented on CPU, currently failIfMajorPerformanceCaveat - does not show anything about it. I bet ETC2 wouldn't be mentioned there either. I strongly believe it is better not to ship feature at all if it will perform worse than previous alternatives. Because this sends a positive message "hey, feature is here for you", but then developers spend time figuring out why it performs so badly, and then endup finding a ways to know if this feature follows "slow path", and have to explicitly "ban" it from using. We already to this in PlayCanvas, and amount of time it takes to figure this things out, is dramatically frustrating! It is complicated and hard to track those issues, and they are implemented in hidden way. It shall be clear to end developer, if feature is there, it shall work and perform reliable across the devices. Extensions - is a good way to follow, it gives optional availability of the feature, and being an extension, it is expected it to be not available on some platforms. 
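For reference, the detection Florian describes boils down to something like the sketch below. Whether the reported caveat is honest is exactly the open question here; this only shows how to ask for the flag and how to tell that a browser never implemented it at all (variable names are illustrative).

// Request a context that should fail rather than fall back to a slow path.
var canvas = document.createElement('canvas');
var gl = canvas.getContext('webgl', { failIfMajorPerformanceCaveat: true });

if (!gl) {
  // Either WebGL is unavailable, or the implementation admits a major caveat.
  console.log('no performant WebGL context available');
} else {
  // If the flag is not echoed back in the actual context attributes, the
  // browser most likely never implemented it ("Unknown" on WebGL Stats).
  var attrs = gl.getContextAttributes();
  var implemented = attrs && attrs.failIfMajorPerformanceCaveat != null;
  console.log('failIfMajorPerformanceCaveat implemented:', implemented);
}
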
When users come back asking why their simple game works on some devices great, and on some 5- fps, we wonder, and this is purely because something hits CPU path, either the whole renderer, or part of it. Or even some features like float32 textures, are actually not 32bits. There must be way more string QA in place, to prevent those cases, as they damage WebGL as a platform, for everybody, and this is not what we all want. We are building business around this technology, and we want to be reliable about dependencies - vendors and ANGLE (in particular), that they do a good job and not following some dreams of webgl-utopia. Kind Regards, Max -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Sat Jan 21 07:55:15 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Sat, 21 Jan 2017 16:55:15 +0100 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: On Sat, Jan 21, 2017 at 1:32 PM, Maksims Mihejevs wrote: > We've seeing already many enormous issues. For example MSAA with ANGLE has > CPU path, and is unusable at all, as performance drops insanely. ETC2 was > another case where CPU path was pushed a lot. > Are you sure ETC2 is software decoded and put on the GPU plain? Because if so, we've had a lengthy debate in 2016 about ETC1 that ended with Jeff Gilbert stating: Update: The WG is planning on only exposing the extension where there > is 'native' support. (Not D3D, maybe not Desktop NV?) We feel this > best matches what devs expect when they see support for a compressed > texture extension. Compressed image formats are a better delivery > mechanism than compressed texture formats, if they're going to be > decompressed anyway. https://www.khronos.org/webgl/public-mailing-list/archives/1609/msg00074.php I don't believe we'll need to have this debate all over again... do we? -------------- next part -------------- An HTML attachment was scrubbed... URL: From max...@ Sat Jan 21 13:46:15 2017 From: max...@ (Maksims Mihejevs) Date: Sat, 21 Jan 2017 21:46:15 +0000 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: Correct me if I'm wrong. But ETC2 and EAC is mandatory on OpenGL ES 3.0+ and OpenGL 4.3+ right? I could not find any information that Nvidia GPU actually supports ETC2 and EAC. On Windows, using ANGLE, there is no OpenGL involved. So why then we seeing 71% support of WEBGL_compressed_texture_etc on Desktop Windows? And where there is OpenGL, such as Linux and OSX, we see only 5% on Linux, and 0% on OSX? :) I believe, they haven't actually decided anything, and wen't their way regarding CPU path, unless there is somebody here to prove that I'm wrong. On 21 January 2017 at 15:55, Florian B?sch wrote: > On Sat, Jan 21, 2017 at 1:32 PM, Maksims Mihejevs > wrote: > >> We've seeing already many enormous issues. For example MSAA with ANGLE >> has CPU path, and is unusable at all, as performance drops insanely. ETC2 >> was another case where CPU path was pushed a lot. >> > > Are you sure ETC2 is software decoded and put on the GPU plain? Because if > so, we've had a lengthy debate in 2016 about ETC1 that ended with Jeff > Gilbert stating: > > Update: The WG is planning on only exposing the extension where there >> is 'native' support. (Not D3D, maybe not Desktop NV?) We feel this >> best matches what devs expect when they see support for a compressed >> texture extension. 
Compressed image formats are a better delivery >> mechanism than compressed texture formats, if they're going to be >> decompressed anyway. > > > https://www.khronos.org/webgl/public-mailing-list/archives/ > 1609/msg00074.php > > I don't believe we'll need to have this debate all over again... do we? > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From max...@ Sat Jan 21 13:54:04 2017 From: max...@ (Maksims Mihejevs) Date: Sat, 21 Jan 2017 21:54:04 +0000 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: This suggests they haven't done it: https://bugs.chromium.org/p/chromium/issues/detail?id=658763&desc=2 We wanted to add ETC2 support, but currently blocked by this, as it is totally unacceptable path: more VRAM (comparing to alternatives) and more Download size. On 21 January 2017 at 21:46, Maksims Mihejevs wrote: > Correct me if I'm wrong. > But ETC2 and EAC is mandatory on OpenGL ES 3.0+ and OpenGL 4.3+ right? > I could not find any information that Nvidia GPU actually supports ETC2 > and EAC. > > On Windows, using ANGLE, there is no OpenGL involved. > > So why then we seeing 71% support of WEBGL_compressed_texture_etc on > Desktop Windows? And where there is OpenGL, such as Linux and OSX, we see > only 5% on Linux, and 0% on OSX? :) > > I believe, they haven't actually decided anything, and wen't their way > regarding CPU path, unless there is somebody here to prove that I'm wrong. > > On 21 January 2017 at 15:55, Florian B?sch wrote: > >> On Sat, Jan 21, 2017 at 1:32 PM, Maksims Mihejevs >> wrote: >> >>> We've seeing already many enormous issues. For example MSAA with ANGLE >>> has CPU path, and is unusable at all, as performance drops insanely. ETC2 >>> was another case where CPU path was pushed a lot. >>> >> >> Are you sure ETC2 is software decoded and put on the GPU plain? Because >> if so, we've had a lengthy debate in 2016 about ETC1 that ended with Jeff >> Gilbert stating: >> >> Update: The WG is planning on only exposing the extension where there >>> is 'native' support. (Not D3D, maybe not Desktop NV?) We feel this >>> best matches what devs expect when they see support for a compressed >>> texture extension. Compressed image formats are a better delivery >>> mechanism than compressed texture formats, if they're going to be >>> decompressed anyway. >> >> >> https://www.khronos.org/webgl/public-mailing-list/archives/1 >> 609/msg00074.php >> >> I don't believe we'll need to have this debate all over again... do we? >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Sat Jan 21 14:01:42 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Sat, 21 Jan 2017 23:01:42 +0100 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: Well we had the lengthy debate with a clear outcome, do not expose capabilities that the hardware doesn't support, especially not when the emulation has drastically different characteristics and leads to worse outcomes than when it wasn't pretend supported. I'm guessing this then falls under "WebGL1 is in maintenance mode and we won't fix it."? On Sat, Jan 21, 2017 at 10:54 PM, Maksims Mihejevs wrote: > This suggests they haven't done it: > https://bugs.chromium.org/p/chromium/issues/detail?id=658763&desc=2 > > We wanted to add ETC2 support, but currently blocked by this, as it is > totally unacceptable path: more VRAM (comparing to alternatives) and more > Download size. 
> > On 21 January 2017 at 21:46, Maksims Mihejevs wrote: > >> Correct me if I'm wrong. >> But ETC2 and EAC is mandatory on OpenGL ES 3.0+ and OpenGL 4.3+ right? >> I could not find any information that Nvidia GPU actually supports ETC2 >> and EAC. >> >> On Windows, using ANGLE, there is no OpenGL involved. >> >> So why then we seeing 71% support of WEBGL_compressed_texture_etc on >> Desktop Windows? And where there is OpenGL, such as Linux and OSX, we see >> only 5% on Linux, and 0% on OSX? :) >> >> I believe, they haven't actually decided anything, and wen't their way >> regarding CPU path, unless there is somebody here to prove that I'm wrong. >> >> On 21 January 2017 at 15:55, Florian B?sch wrote: >> >>> On Sat, Jan 21, 2017 at 1:32 PM, Maksims Mihejevs >>> wrote: >>> >>>> We've seeing already many enormous issues. For example MSAA with ANGLE >>>> has CPU path, and is unusable at all, as performance drops insanely. ETC2 >>>> was another case where CPU path was pushed a lot. >>>> >>> >>> Are you sure ETC2 is software decoded and put on the GPU plain? Because >>> if so, we've had a lengthy debate in 2016 about ETC1 that ended with Jeff >>> Gilbert stating: >>> >>> Update: The WG is planning on only exposing the extension where there >>>> is 'native' support. (Not D3D, maybe not Desktop NV?) We feel this >>>> best matches what devs expect when they see support for a compressed >>>> texture extension. Compressed image formats are a better delivery >>>> mechanism than compressed texture formats, if they're going to be >>>> decompressed anyway. >>> >>> >>> https://www.khronos.org/webgl/public-mailing-list/archives/1 >>> 609/msg00074.php >>> >>> I don't believe we'll need to have this debate all over again... do we? >>> >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Sat Jan 21 14:03:13 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Sat, 21 Jan 2017 23:03:13 +0100 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: I'll add a disclaimer to the ETC and ETC1 extension that empathically discourages people from using these extensions. Are there any other extensions I should warn people not to use? On Sat, Jan 21, 2017 at 11:01 PM, Florian B?sch wrote: > Well we had the lengthy debate with a clear outcome, do not expose > capabilities that the hardware doesn't support, especially not when the > emulation has drastically different characteristics and leads to worse > outcomes than when it wasn't pretend supported. I'm guessing this then > falls under "WebGL1 is in maintenance mode and we won't fix it."? > > On Sat, Jan 21, 2017 at 10:54 PM, Maksims Mihejevs > wrote: > >> This suggests they haven't done it: >> https://bugs.chromium.org/p/chromium/issues/detail?id=658763&desc=2 >> >> We wanted to add ETC2 support, but currently blocked by this, as it is >> totally unacceptable path: more VRAM (comparing to alternatives) and more >> Download size. >> >> On 21 January 2017 at 21:46, Maksims Mihejevs wrote: >> >>> Correct me if I'm wrong. >>> But ETC2 and EAC is mandatory on OpenGL ES 3.0+ and OpenGL 4.3+ right? >>> I could not find any information that Nvidia GPU actually supports ETC2 >>> and EAC. >>> >>> On Windows, using ANGLE, there is no OpenGL involved. >>> >>> So why then we seeing 71% support of WEBGL_compressed_texture_etc on >>> Desktop Windows? And where there is OpenGL, such as Linux and OSX, we see >>> only 5% on Linux, and 0% on OSX? 
:) >>> >>> I believe, they haven't actually decided anything, and wen't their way >>> regarding CPU path, unless there is somebody here to prove that I'm wrong. >>> >>> On 21 January 2017 at 15:55, Florian B?sch wrote: >>> >>>> On Sat, Jan 21, 2017 at 1:32 PM, Maksims Mihejevs >>>> wrote: >>>> >>>>> We've seeing already many enormous issues. For example MSAA with ANGLE >>>>> has CPU path, and is unusable at all, as performance drops insanely. ETC2 >>>>> was another case where CPU path was pushed a lot. >>>>> >>>> >>>> Are you sure ETC2 is software decoded and put on the GPU plain? Because >>>> if so, we've had a lengthy debate in 2016 about ETC1 that ended with Jeff >>>> Gilbert stating: >>>> >>>> Update: The WG is planning on only exposing the extension where there >>>>> is 'native' support. (Not D3D, maybe not Desktop NV?) We feel this >>>>> best matches what devs expect when they see support for a compressed >>>>> texture extension. Compressed image formats are a better delivery >>>>> mechanism than compressed texture formats, if they're going to be >>>>> decompressed anyway. >>>> >>>> >>>> https://www.khronos.org/webgl/public-mailing-list/archives/1 >>>> 609/msg00074.php >>>> >>>> I don't believe we'll need to have this debate all over again... do we? >>>> >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Sat Jan 21 14:18:38 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Sat, 21 Jan 2017 23:18:38 +0100 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: Severe blinking "*Warning DO NOT USE*" added to all ETC and ETC1 extension pages, stating that: Often implemented in browsers by decompressing on the CPU and uploading > full size to GPU with severe performance, vram and quality impacts. > 1. http://webglstats.com/webgl/extension/WEBGL_compressed_texture_etc 2. http://webglstats.com/webgl/extension/WEBGL_compressed_texture_etc1 3. http://webglstats.com/webgl2/extension/WEBGL_compressed_texture_etc 4. http://webglstats.com/webgl2/extension/WEBGL_compressed_texture_etc1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From max...@ Sat Jan 21 14:18:33 2017 From: max...@ (Maksims Mihejevs) Date: Sat, 21 Jan 2017 22:18:33 +0000 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: I'm not sure ETC1 is supported by desktop either, but has almost identical support stats. It suggests that they have same code-path for both formats. OES_texture_float shows high support, but reality is that in some cases it is not "real 32bit", we've got to that problem where been packing stuff into it assuming it was 32 bits, but on some Android it was coming totally weird. So we've implemented a test for precision of it: https://github.com/playcanvas/engine/blob/master/src/graphics/device.js#L721 Bliting depth texture in WebGL 2 on Windows (only) does gets into CPU path, making performance to go below the floor. https://bugs.chromium.org/p/angleproject/issues/detail?id=1710 And I bet there is much more, of cases. The problem is profiling them is nearly impossible, and when performance is bad in complex applications, developer spends a lot of time figuring out what's the heck. And after long-long pain, things might point to one or the other thing, again, ANGLE.. *ANGLE - is great, and WebGL wouldn't be possible without it.* But there are certain ideology used, that are extremely harmful to whole WebGL platform. 
CPU paths and dishonesty of API - is one of them. Kind Regards, Max On 21 January 2017 at 22:03, Florian B?sch wrote: > I'll add a disclaimer to the ETC and ETC1 extension that empathically > discourages people from using these extensions. Are there any other > extensions I should warn people not to use? > > On Sat, Jan 21, 2017 at 11:01 PM, Florian B?sch wrote: > >> Well we had the lengthy debate with a clear outcome, do not expose >> capabilities that the hardware doesn't support, especially not when the >> emulation has drastically different characteristics and leads to worse >> outcomes than when it wasn't pretend supported. I'm guessing this then >> falls under "WebGL1 is in maintenance mode and we won't fix it."? >> >> On Sat, Jan 21, 2017 at 10:54 PM, Maksims Mihejevs >> wrote: >> >>> This suggests they haven't done it: >>> https://bugs.chromium.org/p/chromium/issues/detail?id=658763&desc=2 >>> >>> We wanted to add ETC2 support, but currently blocked by this, as it is >>> totally unacceptable path: more VRAM (comparing to alternatives) and more >>> Download size. >>> >>> On 21 January 2017 at 21:46, Maksims Mihejevs >>> wrote: >>> >>>> Correct me if I'm wrong. >>>> But ETC2 and EAC is mandatory on OpenGL ES 3.0+ and OpenGL 4.3+ right? >>>> I could not find any information that Nvidia GPU actually supports ETC2 >>>> and EAC. >>>> >>>> On Windows, using ANGLE, there is no OpenGL involved. >>>> >>>> So why then we seeing 71% support of WEBGL_compressed_texture_etc on >>>> Desktop Windows? And where there is OpenGL, such as Linux and OSX, we see >>>> only 5% on Linux, and 0% on OSX? :) >>>> >>>> I believe, they haven't actually decided anything, and wen't their way >>>> regarding CPU path, unless there is somebody here to prove that I'm wrong. >>>> >>>> On 21 January 2017 at 15:55, Florian B?sch wrote: >>>> >>>>> On Sat, Jan 21, 2017 at 1:32 PM, Maksims Mihejevs >>>>> wrote: >>>>> >>>>>> We've seeing already many enormous issues. For example MSAA with >>>>>> ANGLE has CPU path, and is unusable at all, as performance drops insanely. >>>>>> ETC2 was another case where CPU path was pushed a lot. >>>>>> >>>>> >>>>> Are you sure ETC2 is software decoded and put on the GPU plain? >>>>> Because if so, we've had a lengthy debate in 2016 about ETC1 that ended >>>>> with Jeff Gilbert stating: >>>>> >>>>> Update: The WG is planning on only exposing the extension where there >>>>>> is 'native' support. (Not D3D, maybe not Desktop NV?) We feel this >>>>>> best matches what devs expect when they see support for a compressed >>>>>> texture extension. Compressed image formats are a better delivery >>>>>> mechanism than compressed texture formats, if they're going to be >>>>>> decompressed anyway. >>>>> >>>>> >>>>> https://www.khronos.org/webgl/public-mailing-list/archives/1 >>>>> 609/msg00074.php >>>>> >>>>> I don't believe we'll need to have this debate all over again... do we? >>>>> >>>>> >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kai...@ Sat Jan 21 14:28:08 2017 From: kai...@ (Kai Ninomiya) Date: Sat, 21 Jan 2017 22:28:08 +0000 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: In Chrome, the ETC1+2 formats should be disabled on both WebGL 1 and WebGL 2 on ANGLE (i.e. Windows) unconditionally. Unfortunately I made a mistake in my first patch to disable ETC1 on ANGLE, so it won't roll out until Chrome 57 (beginning of March). ETC2 should be correct in 56 (end of January). 
So the warning should very soon become outdated, and it would be good to mention that in the warning text. I am not sure about the status in Firefox but IIRC they agreed to implement the same semantics and it's probably in the version they are releasing this week(?) The linked bug (658763) is for re-enabling it on desktop platforms which have hardware support (maybe Intel), and for ANGLE on other platforms. -Kai On Sat, Jan 21, 2017, 2:04 PM Florian B?sch wrote: I'll add a disclaimer to the ETC and ETC1 extension that empathically discourages people from using these extensions. Are there any other extensions I should warn people not to use? On Sat, Jan 21, 2017 at 11:01 PM, Florian B?sch wrote: Well we had the lengthy debate with a clear outcome, do not expose capabilities that the hardware doesn't support, especially not when the emulation has drastically different characteristics and leads to worse outcomes than when it wasn't pretend supported. I'm guessing this then falls under "WebGL1 is in maintenance mode and we won't fix it."? On Sat, Jan 21, 2017 at 10:54 PM, Maksims Mihejevs wrote: This suggests they haven't done it: https://bugs.chromium.org/p/chromium/issues/detail?id=658763&desc=2 We wanted to add ETC2 support, but currently blocked by this, as it is totally unacceptable path: more VRAM (comparing to alternatives) and more Download size. On 21 January 2017 at 21:46, Maksims Mihejevs wrote: Correct me if I'm wrong. But ETC2 and EAC is mandatory on OpenGL ES 3.0+ and OpenGL 4.3+ right? I could not find any information that Nvidia GPU actually supports ETC2 and EAC. On Windows, using ANGLE, there is no OpenGL involved. So why then we seeing 71% support of WEBGL_compressed_texture_etc on Desktop Windows? And where there is OpenGL, such as Linux and OSX, we see only 5% on Linux, and 0% on OSX? :) I believe, they haven't actually decided anything, and wen't their way regarding CPU path, unless there is somebody here to prove that I'm wrong. On 21 January 2017 at 15:55, Florian B?sch wrote: On Sat, Jan 21, 2017 at 1:32 PM, Maksims Mihejevs wrote: We've seeing already many enormous issues. For example MSAA with ANGLE has CPU path, and is unusable at all, as performance drops insanely. ETC2 was another case where CPU path was pushed a lot. Are you sure ETC2 is software decoded and put on the GPU plain? Because if so, we've had a lengthy debate in 2016 about ETC1 that ended with Jeff Gilbert stating: Update: The WG is planning on only exposing the extension where there is 'native' support. (Not D3D, maybe not Desktop NV?) We feel this best matches what devs expect when they see support for a compressed texture extension. Compressed image formats are a better delivery mechanism than compressed texture formats, if they're going to be decompressed anyway. https://www.khronos.org/webgl/public-mailing-list/archives/1609/msg00074.php I don't believe we'll need to have this debate all over again... do we? -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4845 bytes Desc: S/MIME Cryptographic Signature URL: From max...@ Sat Jan 21 14:32:56 2017 From: max...@ (Maksims Mihejevs) Date: Sat, 21 Jan 2017 22:32:56 +0000 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: Kai, thank you for the info. This is great news! 
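For anyone following along, the practical consequence for engines is unchanged: treat every compressed format as optional, pick one at load time, and keep uncompressed uploads as the last resort. A rough sketch of that selection follows; the helper name and preference order are purely illustrative, and on browser builds prior to the fix an exposed format may still be CPU-decoded.

// Pick a compressed texture format that the current context actually exposes.
function pickCompressedFormat(gl) {
  var s3tc = gl.getExtension('WEBGL_compressed_texture_s3tc');
  if (s3tc) return { name: 'dxt5', format: s3tc.COMPRESSED_RGBA_S3TC_DXT5_EXT };

  var etc = gl.getExtension('WEBGL_compressed_texture_etc');
  if (etc) return { name: 'etc2', format: etc.COMPRESSED_RGBA8_ETC2_EAC };

  var pvrtc = gl.getExtension('WEBGL_compressed_texture_pvrtc');
  if (pvrtc) return { name: 'pvrtc', format: pvrtc.COMPRESSED_RGBA_PVRTC_4BPPV1_IMG };

  return null; // upload plain gl.RGBA / gl.UNSIGNED_BYTE instead
}

// usage: var choice = pickCompressedFormat(gl);
// if (choice) gl.compressedTexImage2D(gl.TEXTURE_2D, 0, choice.format, w, h, 0, data);
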
On 21 January 2017 at 22:28, Kai Ninomiya wrote: > In Chrome, the ETC1+2 formats should be disabled on both WebGL 1 and WebGL > 2 on ANGLE (i.e. Windows) unconditionally. Unfortunately I made a mistake > in my first patch to disable ETC1 on ANGLE, so it won't roll out until > Chrome 57 (beginning of March). ETC2 should be correct in 56 (end of > January). > > So the warning should very soon become outdated, and it would be good to > mention that in the warning text. I am not sure about the status in Firefox > but IIRC they agreed to implement the same semantics and it's probably in > the version they are releasing this week(?) > > The linked bug (658763) is for re-enabling it on desktop platforms which > have hardware support (maybe Intel), and for ANGLE on other platforms. > > -Kai > > On Sat, Jan 21, 2017, 2:04 PM Florian B?sch wrote: > > I'll add a disclaimer to the ETC and ETC1 extension that empathically > discourages people from using these extensions. Are there any other > extensions I should warn people not to use? > > On Sat, Jan 21, 2017 at 11:01 PM, Florian B?sch wrote: > > Well we had the lengthy debate with a clear outcome, do not expose > capabilities that the hardware doesn't support, especially not when the > emulation has drastically different characteristics and leads to worse > outcomes than when it wasn't pretend supported. I'm guessing this then > falls under "WebGL1 is in maintenance mode and we won't fix it."? > > On Sat, Jan 21, 2017 at 10:54 PM, Maksims Mihejevs > wrote: > > This suggests they haven't done it: > https://bugs.chromium.org/p/chromium/issues/detail?id=658763&desc=2 > > We wanted to add ETC2 support, but currently blocked by this, as it is > totally unacceptable path: more VRAM (comparing to alternatives) and more > Download size. > > On 21 January 2017 at 21:46, Maksims Mihejevs wrote: > > Correct me if I'm wrong. > But ETC2 and EAC is mandatory on OpenGL ES 3.0+ and OpenGL 4.3+ right? > I could not find any information that Nvidia GPU actually supports ETC2 > and EAC. > > On Windows, using ANGLE, there is no OpenGL involved. > > So why then we seeing 71% support of WEBGL_compressed_texture_etc on > Desktop Windows? And where there is OpenGL, such as Linux and OSX, we see > only 5% on Linux, and 0% on OSX? :) > > I believe, they haven't actually decided anything, and wen't their way > regarding CPU path, unless there is somebody here to prove that I'm wrong. > > On 21 January 2017 at 15:55, Florian B?sch wrote: > > On Sat, Jan 21, 2017 at 1:32 PM, Maksims Mihejevs > wrote: > > We've seeing already many enormous issues. For example MSAA with ANGLE has > CPU path, and is unusable at all, as performance drops insanely. ETC2 was > another case where CPU path was pushed a lot. > > > Are you sure ETC2 is software decoded and put on the GPU plain? Because if > so, we've had a lengthy debate in 2016 about ETC1 that ended with Jeff > Gilbert stating: > > Update: The WG is planning on only exposing the extension where there > is 'native' support. (Not D3D, maybe not Desktop NV?) We feel this > best matches what devs expect when they see support for a compressed > texture extension. Compressed image formats are a better delivery > mechanism than compressed texture formats, if they're going to be > decompressed anyway. > > > https://www.khronos.org/webgl/public-mailing-list/archives/ > 1609/msg00074.php > > I don't believe we'll need to have this debate all over again... do we? 
> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Sat Jan 21 14:33:55 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Sat, 21 Jan 2017 23:33:55 +0100 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: Changed the text to: Often implemented in browsers by decompressing on the CPU and uploading > full size to GPU with severe performance, vram and quality impacts. Fixed > in Chrome 57 and Firefox ?? > Will remove the warning a month after the fix has landed in Chrome and Firefox public releases (Jeff please inform me which FF version). On Sat, Jan 21, 2017 at 11:28 PM, Kai Ninomiya wrote: > In Chrome, the ETC1+2 formats should be disabled on both WebGL 1 and WebGL > 2 on ANGLE (i.e. Windows) unconditionally. Unfortunately I made a mistake > in my first patch to disable ETC1 on ANGLE, so it won't roll out until > Chrome 57 (beginning of March). ETC2 should be correct in 56 (end of > January). > > So the warning should very soon become outdated, and it would be good to > mention that in the warning text. I am not sure about the status in Firefox > but IIRC they agreed to implement the same semantics and it's probably in > the version they are releasing this week(?) > > The linked bug (658763) is for re-enabling it on desktop platforms which > have hardware support (maybe Intel), and for ANGLE on other platforms. > > -Kai > > On Sat, Jan 21, 2017, 2:04 PM Florian B?sch wrote: > > I'll add a disclaimer to the ETC and ETC1 extension that empathically > discourages people from using these extensions. Are there any other > extensions I should warn people not to use? > > On Sat, Jan 21, 2017 at 11:01 PM, Florian B?sch wrote: > > Well we had the lengthy debate with a clear outcome, do not expose > capabilities that the hardware doesn't support, especially not when the > emulation has drastically different characteristics and leads to worse > outcomes than when it wasn't pretend supported. I'm guessing this then > falls under "WebGL1 is in maintenance mode and we won't fix it."? > > On Sat, Jan 21, 2017 at 10:54 PM, Maksims Mihejevs > wrote: > > This suggests they haven't done it: > https://bugs.chromium.org/p/chromium/issues/detail?id=658763&desc=2 > > We wanted to add ETC2 support, but currently blocked by this, as it is > totally unacceptable path: more VRAM (comparing to alternatives) and more > Download size. > > On 21 January 2017 at 21:46, Maksims Mihejevs wrote: > > Correct me if I'm wrong. > But ETC2 and EAC is mandatory on OpenGL ES 3.0+ and OpenGL 4.3+ right? > I could not find any information that Nvidia GPU actually supports ETC2 > and EAC. > > On Windows, using ANGLE, there is no OpenGL involved. > > So why then we seeing 71% support of WEBGL_compressed_texture_etc on > Desktop Windows? And where there is OpenGL, such as Linux and OSX, we see > only 5% on Linux, and 0% on OSX? :) > > I believe, they haven't actually decided anything, and wen't their way > regarding CPU path, unless there is somebody here to prove that I'm wrong. > > On 21 January 2017 at 15:55, Florian B?sch wrote: > > On Sat, Jan 21, 2017 at 1:32 PM, Maksims Mihejevs > wrote: > > We've seeing already many enormous issues. For example MSAA with ANGLE has > CPU path, and is unusable at all, as performance drops insanely. ETC2 was > another case where CPU path was pushed a lot. > > > Are you sure ETC2 is software decoded and put on the GPU plain? 
Because if > so, we've had a lengthy debate in 2016 about ETC1 that ended with Jeff > Gilbert stating: > > Update: The WG is planning on only exposing the extension where there > is 'native' support. (Not D3D, maybe not Desktop NV?) We feel this > best matches what devs expect when they see support for a compressed > texture extension. Compressed image formats are a better delivery > mechanism than compressed texture formats, if they're going to be > decompressed anyway. > > > https://www.khronos.org/webgl/public-mailing-list/archives/ > 1609/msg00074.php > > I don't believe we'll need to have this debate all over again... do we? > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kai...@ Sat Jan 21 14:58:53 2017 From: kai...@ (Kai Ninomiya) Date: Sat, 21 Jan 2017 22:58:53 +0000 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: FYI I checked FF beta and it is still enabled, so they probably didn't have time before their WebGL 2 launch. On Sat, Jan 21, 2017, 2:33 PM Florian B?sch wrote: > Changed the text to: > > Often implemented in browsers by decompressing on the CPU and uploading > full size to GPU with severe performance, vram and quality impacts. Fixed > in Chrome 57 and Firefox ?? > > > Will remove the warning a month after the fix has landed in Chrome and > Firefox public releases (Jeff please inform me which FF version). > > On Sat, Jan 21, 2017 at 11:28 PM, Kai Ninomiya wrote: > > In Chrome, the ETC1+2 formats should be disabled on both WebGL 1 and WebGL > 2 on ANGLE (i.e. Windows) unconditionally. Unfortunately I made a mistake > in my first patch to disable ETC1 on ANGLE, so it won't roll out until > Chrome 57 (beginning of March). ETC2 should be correct in 56 (end of > January). > > So the warning should very soon become outdated, and it would be good to > mention that in the warning text. I am not sure about the status in Firefox > but IIRC they agreed to implement the same semantics and it's probably in > the version they are releasing this week(?) > > The linked bug (658763) is for re-enabling it on desktop platforms which > have hardware support (maybe Intel), and for ANGLE on other platforms. > > -Kai > > On Sat, Jan 21, 2017, 2:04 PM Florian B?sch wrote: > > I'll add a disclaimer to the ETC and ETC1 extension that empathically > discourages people from using these extensions. Are there any other > extensions I should warn people not to use? > > On Sat, Jan 21, 2017 at 11:01 PM, Florian B?sch wrote: > > Well we had the lengthy debate with a clear outcome, do not expose > capabilities that the hardware doesn't support, especially not when the > emulation has drastically different characteristics and leads to worse > outcomes than when it wasn't pretend supported. I'm guessing this then > falls under "WebGL1 is in maintenance mode and we won't fix it."? > > On Sat, Jan 21, 2017 at 10:54 PM, Maksims Mihejevs > wrote: > > This suggests they haven't done it: > https://bugs.chromium.org/p/chromium/issues/detail?id=658763&desc=2 > > We wanted to add ETC2 support, but currently blocked by this, as it is > totally unacceptable path: more VRAM (comparing to alternatives) and more > Download size. > > On 21 January 2017 at 21:46, Maksims Mihejevs wrote: > > Correct me if I'm wrong. > But ETC2 and EAC is mandatory on OpenGL ES 3.0+ and OpenGL 4.3+ right? > I could not find any information that Nvidia GPU actually supports ETC2 > and EAC. > > On Windows, using ANGLE, there is no OpenGL involved. 
> > So why then we seeing 71% support of WEBGL_compressed_texture_etc on > Desktop Windows? And where there is OpenGL, such as Linux and OSX, we see > only 5% on Linux, and 0% on OSX? :) > > I believe, they haven't actually decided anything, and wen't their way > regarding CPU path, unless there is somebody here to prove that I'm wrong. > > On 21 January 2017 at 15:55, Florian B?sch wrote: > > On Sat, Jan 21, 2017 at 1:32 PM, Maksims Mihejevs > wrote: > > We've seeing already many enormous issues. For example MSAA with ANGLE has > CPU path, and is unusable at all, as performance drops insanely. ETC2 was > another case where CPU path was pushed a lot. > > > Are you sure ETC2 is software decoded and put on the GPU plain? Because if > so, we've had a lengthy debate in 2016 about ETC1 that ended with Jeff > Gilbert stating: > > Update: The WG is planning on only exposing the extension where there > is 'native' support. (Not D3D, maybe not Desktop NV?) We feel this > best matches what devs expect when they see support for a compressed > texture extension. Compressed image formats are a better delivery > mechanism than compressed texture formats, if they're going to be > decompressed anyway. > > > > https://www.khronos.org/webgl/public-mailing-list/archives/1609/msg00074.php > > I don't believe we'll need to have this debate all over again... do we? > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4845 bytes Desc: S/MIME Cryptographic Signature URL: From zmo...@ Mon Jan 23 09:18:24 2017 From: zmo...@ (Zhenyao Mo) Date: Mon, 23 Jan 2017 09:18:24 -0800 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: It's not abandoning WebGL1 and we understand the importance of WebGL1 to developers. EXT_shader_texture_lod is an extension that's not even exposed in ES3. In theory we can expose all ES3 functionalities to WebGL1 through emulation etc, and one can always argue the benefit of exposing this and that. I just don't see that as a good way of spending our limited engineering resources. On Sat, Jan 21, 2017 at 1:17 AM, Florian B?sch wrote: > Writing multiple renderpaths for multiple versions, that take full > advantage of the capabilities of each version is very time-consuming and > difficult. > > The easiest way to for developers to take full advantage of WebGL2 for the > time being, would be to be able to "polyfill" features they'd like to use > into WebGL1 that are found in WebGL2. > > By abandoning WebGL1, with a very fragmentary capability coverage to > WebGL2, you're ensuring that WebGL2 adoption will be much slower than we > all wish. That's because there's no easy way to "polyfill" the enticing > features of WebGL2 into WebGL1. Although I don't think this should prompt a > mass backporting effort, I do think this warrants continued effort to > improve the existing WebGL1 capabilities across all UAs and maybe introduce > more WebGL1 extensions for particularly sore spots (like 3D textures). > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Mon Jan 23 09:32:48 2017 From: kbr...@ (Kenneth Russell) Date: Mon, 23 Jan 2017 09:32:48 -0800 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: It's very difficult to emulate some of these older ES 2.0 extensions on ES 3.0. 
Taking EXT_draw_buffers / WEBGL_draw_buffers as an example: this extension only works against ESSL 1.00 shaders. When using ESSL 3.00, it's explicitly disallowed in the spec to write to gl_FragData[i] where i > 0. This means that in order to continue to expose these extensions on top of an ES 3.0 context, the browser will have to translate ESSL 1.00 to ESSL 3.00, and rename the user's variables during API calls like getUniformLocation. WebGL 2.0 represents a large feature upgrade which unifies the behavior among many types of devices. We expect that it will roll out quickly to most browsers and operating systems. At the present time we would like to focus our efforts on getting it widely deployed (i.e., into Edge and Safari) and getting the conformance suite to pass 100% on all GPU types. -Ken On Mon, Jan 23, 2017 at 9:18 AM, Zhenyao Mo wrote: > It's not abandoning WebGL1 and we understand the importance of WebGL1 to > developers. > > EXT_shader_texture_lod is an extension that's not even exposed in ES3. > > In theory we can expose all ES3 functionalities to WebGL1 through > emulation etc, and one can always argue the benefit of exposing this and > that. I just don't see that as a good way of spending our limited > engineering resources. > > > On Sat, Jan 21, 2017 at 1:17 AM, Florian B?sch wrote: > >> Writing multiple renderpaths for multiple versions, that take full >> advantage of the capabilities of each version is very time-consuming and >> difficult. >> >> The easiest way to for developers to take full advantage of WebGL2 for >> the time being, would be to be able to "polyfill" features they'd like to >> use into WebGL1 that are found in WebGL2. >> >> By abandoning WebGL1, with a very fragmentary capability coverage to >> WebGL2, you're ensuring that WebGL2 adoption will be much slower than we >> all wish. That's because there's no easy way to "polyfill" the enticing >> features of WebGL2 into WebGL1. Although I don't think this should prompt a >> mass backporting effort, I do think this warrants continued effort to >> improve the existing WebGL1 capabilities across all UAs and maybe introduce >> more WebGL1 extensions for particularly sore spots (like 3D textures). >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Mon Jan 23 09:36:10 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Mon, 23 Jan 2017 18:36:10 +0100 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: As I've said, I'm not at all for exposing legacy functionality to WebGL2. I think it's a mistake when OpenGL did it, and I think it would be a mistake if WebGL started doing it. I referred to the general sentiment of taking care of WebGL1, even now, especially now, that WebGL2 comes along. Because not doing so sabotages both WebGL1 and WebGL2. On Mon, Jan 23, 2017 at 6:18 PM, Zhenyao Mo wrote: > It's not abandoning WebGL1 and we understand the importance of WebGL1 to > developers. > > EXT_shader_texture_lod is an extension that's not even exposed in ES3. > > In theory we can expose all ES3 functionalities to WebGL1 through > emulation etc, and one can always argue the benefit of exposing this and > that. I just don't see that as a good way of spending our limited > engineering resources. > > > On Sat, Jan 21, 2017 at 1:17 AM, Florian B?sch wrote: > >> Writing multiple renderpaths for multiple versions, that take full >> advantage of the capabilities of each version is very time-consuming and >> difficult. 
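To make the shader-translation point concrete: the same two-target fragment shader has to be written differently under ESSL 1.00 (with WEBGL_draw_buffers) and ESSL 3.00, where gl_FragData is no longer available for multiple render targets. A rough, illustrative comparison of the shader sources only (not a complete program), in the form they would be passed to shaderSource():

    // ESSL 1.00 + WEBGL_draw_buffers: outputs go to gl_FragData[i].
    var fs100 =
      '#extension GL_EXT_draw_buffers : require\n' +
      'precision mediump float;\n' +
      'void main() {\n' +
      '  gl_FragData[0] = vec4(1.0, 0.0, 0.0, 1.0);\n' +
      '  gl_FragData[1] = vec4(0.0, 1.0, 0.0, 1.0);\n' +
      '}\n';

    // ESSL 3.00 (WebGL 2): gl_FragData is not available for multiple
    // targets; user-declared outputs with explicit locations replace it.
    var fs300 =
      '#version 300 es\n' +
      'precision mediump float;\n' +
      'layout(location = 0) out vec4 color0;\n' +
      'layout(location = 1) out vec4 color1;\n' +
      'void main() {\n' +
      '  color0 = vec4(1.0, 0.0, 0.0, 1.0);\n' +
      '  color1 = vec4(0.0, 1.0, 0.0, 1.0);\n' +
      '}\n';

Bridging the two automatically is exactly the kind of source rewriting (plus renaming during calls like getUniformLocation) described above.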
>> >> The easiest way to for developers to take full advantage of WebGL2 for >> the time being, would be to be able to "polyfill" features they'd like to >> use into WebGL1 that are found in WebGL2. >> >> By abandoning WebGL1, with a very fragmentary capability coverage to >> WebGL2, you're ensuring that WebGL2 adoption will be much slower than we >> all wish. That's because there's no easy way to "polyfill" the enticing >> features of WebGL2 into WebGL1. Although I don't think this should prompt a >> mass backporting effort, I do think this warrants continued effort to >> improve the existing WebGL1 capabilities across all UAs and maybe introduce >> more WebGL1 extensions for particularly sore spots (like 3D textures). >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Mon Jan 23 09:45:57 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Mon, 23 Jan 2017 18:45:57 +0100 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: On Mon, Jan 23, 2017 at 6:32 PM, Kenneth Russell wrote: > We expect that it will roll out quickly to most browsers and operating > systems. > I've crunched the numbers on that a bit, and I've found that although a Chrome and Firefox rollout will will bring the UA support to something close to 73%. Around 10% of WebGL1 contexts come with major performance caveats (meaning mostly swiftshader/cpu paths, etc.) and will be unusable, even for WebGL1. Since DX11/GL3 hardware is required for ES 3.0, I've also found that on the steam hardware stats around 15% of devices are not capable of that, incidentally, a bit more than 15% of Steam hardware survey devices are Intel integrated something or other, it's therefore logical to assume that a large chunk of the Intel integrated cards will have trouble with ES 3.0. Unfortunately, on the general web (and not on steam), these comprise around 50% of the hardware seen. Therefore the initial support level you can reasonably expect is likely to be between 30-50%. And this level will remain in place for as long as: - PC/Laptop retailers continue selling bottom of the barrel Intel cards - Users do not upgrade their bottom of the barrel Intel cards - Microsoft and Apple do not Implement WebGL2. I'd like to be wrong about this, I desperately do, but I think you've got to be realistic. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Mon Jan 23 09:52:17 2017 From: kbr...@ (Kenneth Russell) Date: Mon, 23 Jan 2017 09:52:17 -0800 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: On Sat, Jan 21, 2017 at 2:18 PM, Maksims Mihejevs wrote: > I'm not sure ETC1 is supported by desktop either, but has almost identical > support stats. It suggests that they have same code-path for both formats. > > OES_texture_float shows high support, but reality is that in some cases it > is not "real 32bit", we've got to that problem where been packing stuff > into it assuming it was 32 bits, but on some Android it was coming totally > weird. So we've implemented a test for precision of it: > https://github.com/playcanvas/engine/blob/master/ > src/graphics/device.js#L721 > That's unexpected. Would you please put up a pull request adding a test case to the WebGL conformance test sdk/tests/conformance/extensions/oes-texture-float.html in https://github.com/KhronosGroup/WebGL ? 
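For readers wondering what such a precision check can look like, here is a minimal sketch of the general idea (it is not the PlayCanvas test and not a conformance test): store a value that fits in a 32-bit float but rounds away in a 16-bit half float, sample it back in a shader, and see whether the extra mantissa bits survived. It assumes a WebGL 1 context whose OES_texture_float extension has already been enabled and a device with highp float support in fragment shaders; the function name is made up for illustration.

    function floatTextureKeepsFullPrecision(gl) {
      // 1 + 2^-20 is exact in FP32 but rounds to 1.0 in FP16 storage.
      var probe = 1.0 + Math.pow(2, -20);

      var tex = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, tex);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.FLOAT,
                    new Float32Array([probe, 0.0, 0.0, 1.0]));

      var vsSrc = 'attribute vec2 p;' +
                  'void main() { gl_Position = vec4(p, 0.0, 1.0); }';
      var fsSrc = 'precision highp float;' +
                  'uniform sampler2D t;' +
                  'void main() {' +
                  '  float v = texture2D(t, vec2(0.5)).r;' +
                  // Green if the extra bits survived, red if they were lost.
                  '  gl_FragColor = (v > 1.0) ? vec4(0.0, 1.0, 0.0, 1.0)' +
                  '                           : vec4(1.0, 0.0, 0.0, 1.0);' +
                  '}';

      var prog = gl.createProgram();
      [[gl.VERTEX_SHADER, vsSrc], [gl.FRAGMENT_SHADER, fsSrc]].forEach(
        function (s) {
          var sh = gl.createShader(s[0]);
          gl.shaderSource(sh, s[1]);
          gl.compileShader(sh);
          gl.attachShader(prog, sh);
        });
      gl.linkProgram(prog);
      gl.useProgram(prog);

      // Fullscreen triangle so pixel (0, 0) of the canvas is covered.
      var buf = gl.createBuffer();
      gl.bindBuffer(gl.ARRAY_BUFFER, buf);
      gl.bufferData(gl.ARRAY_BUFFER,
                    new Float32Array([-1, -1, 3, -1, -1, 3]), gl.STATIC_DRAW);
      var loc = gl.getAttribLocation(prog, 'p');
      gl.enableVertexAttribArray(loc);
      gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);
      gl.drawArrays(gl.TRIANGLES, 0, 3);

      var px = new Uint8Array(4);
      gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, px);
      return px[1] > px[0];  // green beats red => full 32-bit storage
    }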
It looks like https://www.khronos.org/registry/gles/extensions/OES/OES_texture_float.txt requires that the 32-bit floating-point texture formats are really 32-bit, so any loss in precision is unexpected. Note that the floating-point precision in shaders may be applying here. I haven't re-checked the ESSL 1.00 spec but you would almost certainly need to use highp precision, and I'm not sure that that guarantees full 32-bit floats. > Bliting depth texture in WebGL 2 on Windows (only) does gets into CPU > path, making performance to go below the floor. https://bugs.chromium. > org/p/angleproject/issues/detail?id=1710 > Thanks for your input on this and for your test case. I've raised the bug to high priority and we'll try to figure out what's going wrong with ANGLE's handling of multisampled depth textures. I can tell you that a couple of days were invested trying to integrate your sample code and it's not trivial to figure out why it doesn't work in the context of ANGLE. And I bet there is much more, of cases. The problem is profiling them is > nearly impossible, and when performance is bad in complex applications, > developer spends a lot of time figuring out what's the heck. And after > long-long pain, things might point to one or the other thing, again, ANGLE.. > > *ANGLE - is great, and WebGL wouldn't be possible without it.* > But there are certain ideology used, that are extremely harmful to whole > WebGL platform. CPU paths and dishonesty of API - is one of them. > Max, this is unkind to the ANGLE developers and irrelevant to the Android issue you raised above; ANGLE is not used on that platform. We want to work with you and other developers pushing the leading edge to address these issues and ensure portability, high performance, and a good feature set for WebGL. Please keep the conversations civil and please continue to point out issues you encounter. -Ken > > Kind Regards, > Max > > On 21 January 2017 at 22:03, Florian B?sch wrote: > >> I'll add a disclaimer to the ETC and ETC1 extension that empathically >> discourages people from using these extensions. Are there any other >> extensions I should warn people not to use? >> >> On Sat, Jan 21, 2017 at 11:01 PM, Florian B?sch wrote: >> >>> Well we had the lengthy debate with a clear outcome, do not expose >>> capabilities that the hardware doesn't support, especially not when the >>> emulation has drastically different characteristics and leads to worse >>> outcomes than when it wasn't pretend supported. I'm guessing this then >>> falls under "WebGL1 is in maintenance mode and we won't fix it."? >>> >>> On Sat, Jan 21, 2017 at 10:54 PM, Maksims Mihejevs >>> wrote: >>> >>>> This suggests they haven't done it: >>>> https://bugs.chromium.org/p/chromium/issues/detail?id=658763&desc=2 >>>> >>>> We wanted to add ETC2 support, but currently blocked by this, as it is >>>> totally unacceptable path: more VRAM (comparing to alternatives) and more >>>> Download size. >>>> >>>> On 21 January 2017 at 21:46, Maksims Mihejevs >>>> wrote: >>>> >>>>> Correct me if I'm wrong. >>>>> But ETC2 and EAC is mandatory on OpenGL ES 3.0+ and OpenGL 4.3+ right? >>>>> I could not find any information that Nvidia GPU actually supports >>>>> ETC2 and EAC. >>>>> >>>>> On Windows, using ANGLE, there is no OpenGL involved. >>>>> >>>>> So why then we seeing 71% support of WEBGL_compressed_texture_etc on >>>>> Desktop Windows? And where there is OpenGL, such as Linux and OSX, we see >>>>> only 5% on Linux, and 0% on OSX? 
:) >>>>> >>>>> I believe, they haven't actually decided anything, and wen't their way >>>>> regarding CPU path, unless there is somebody here to prove that I'm wrong. >>>>> >>>>> On 21 January 2017 at 15:55, Florian B?sch wrote: >>>>> >>>>>> On Sat, Jan 21, 2017 at 1:32 PM, Maksims Mihejevs >>>>> > wrote: >>>>>> >>>>>>> We've seeing already many enormous issues. For example MSAA with >>>>>>> ANGLE has CPU path, and is unusable at all, as performance drops insanely. >>>>>>> ETC2 was another case where CPU path was pushed a lot. >>>>>>> >>>>>> >>>>>> Are you sure ETC2 is software decoded and put on the GPU plain? >>>>>> Because if so, we've had a lengthy debate in 2016 about ETC1 that ended >>>>>> with Jeff Gilbert stating: >>>>>> >>>>>> Update: The WG is planning on only exposing the extension where there >>>>>>> is 'native' support. (Not D3D, maybe not Desktop NV?) We feel this >>>>>>> best matches what devs expect when they see support for a compressed >>>>>>> texture extension. Compressed image formats are a better delivery >>>>>>> mechanism than compressed texture formats, if they're going to be >>>>>>> decompressed anyway. >>>>>> >>>>>> >>>>>> https://www.khronos.org/webgl/public-mailing-list/archives/1 >>>>>> 609/msg00074.php >>>>>> >>>>>> I don't believe we'll need to have this debate all over again... do >>>>>> we? >>>>>> >>>>>> >>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Mon Jan 23 09:59:37 2017 From: kbr...@ (Kenneth Russell) Date: Mon, 23 Jan 2017 09:59:37 -0800 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: On Mon, Jan 23, 2017 at 9:45 AM, Florian B?sch wrote: > On Mon, Jan 23, 2017 at 6:32 PM, Kenneth Russell wrote: > >> We expect that it will roll out quickly to most browsers and operating >> systems. >> > > I've crunched the numbers on that a bit, and I've found that although a > Chrome and Firefox rollout will will bring the UA support to something > close to 73%. Around 10% of WebGL1 contexts come with major performance > caveats (meaning mostly swiftshader/cpu paths, etc.) and will be unusable, > even for WebGL1. Since DX11/GL3 hardware is required for ES 3.0, I've also > found that on the steam hardware stats around 15% of devices are not > capable of that, incidentally, a bit more than 15% of Steam hardware survey > devices are Intel integrated something or other, it's therefore logical to > assume that a large chunk of the Intel integrated cards will have trouble > with ES 3.0. Unfortunately, on the general web (and not on steam), these > comprise around 50% of the hardware seen. > While I don't have concrete numbers for market penetration of various Intel GPUs, Intel has been a strong contributor to the WebGL 2.0 effort, and have done a tremendous amount of work on their graphics drivers on Linux, Windows and macOS to pass the conformance suite. Linux with Intel GPUs is one of the configurations on which Google submitted WebGL 2.0 conformance results. The Intel HD 4000 (released in 2012) and later GPUs should be able to support WebGL 2.0. I think that this will reach a majority of the users likely to try using WebGL 2.0. -Ken > > Therefore the initial support level you can reasonably expect is likely to > be between 30-50%. 
And this level will remain in place for as long as: > > - PC/Laptop retailers continue selling bottom of the barrel Intel cards > - Users do not upgrade their bottom of the barrel Intel cards > - Microsoft and Apple do not Implement WebGL2. > > I'd like to be wrong about this, I desperately do, but I think you've got > to be realistic. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Tue Jan 24 01:24:05 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Tue, 24 Jan 2017 10:24:05 +0100 Subject: [Public WebGL] dual GPU setups default to integrated GPU Message-ID: Dual GPU laptops such as some brands of windows and macOS laptops, the OS has a GPU switching function that switches to the discrete GPU in graphics intensive applications (such as games, cad software etc.) However, when a browser is running, it is often the case that the integrated GPU is used, regardless of if a tab is doing something graphics intensive with WebGL or not ( https://twitter.com/grorgwork/status/823719997616701440). On windows a user can influence this by a series of complicated steps to designate the preferred GPU, but it is a machine wide-setting, and sometimes it was ignored (I don't know if that's still the case). Obviously this presents a problem for WebGL developers. Neither would we want to leech a users batteries unnecessarily, nor would we like to force a user with a discrete GPU to receive worse performance should they wish to use a graphics intensive WebGL application. In WebGL1 there was a context creation flag called "preferLowPowerToHighPerformance", but I'm not aware how widely this was implemented, and apparently it's also ignored on macOS (because it defaults to false, yet the discrete GPU is still not used). WebGL2 has no equivalent context creation flag. Questions: 1. It would seem we have a sufficient mechanism to express a GPU preference, is this a correct assessment? 2. Why was preferLowPowerToHighPerformance dropped from WebGL2? 3. Why is preferLowPowerToHighPerformance ignored for WebGL1 on some configurations where it would be most useful? 4. Should an additional mechanism be introduced so a user can switch between GPUs at his choice for tabs? -------------- next part -------------- An HTML attachment was scrubbed... URL: From max...@ Tue Jan 24 05:00:27 2017 From: max...@ (Maksims Mihejevs) Date: Tue, 24 Jan 2017 13:00:27 +0000 Subject: [Public WebGL] dual GPU setups default to integrated GPU In-Reply-To: References: Message-ID: We've recognised same problem that a lot of people with Windows laptops and dual gpu get integrated gpu as default. Although this is not the case for Edge for example. Which gives it advantage over other browsers on Windows laptops. On 24 Jan 2017 9:26 a.m., "Florian B?sch" wrote: > Dual GPU laptops such as some brands of windows and macOS laptops, the OS > has a GPU switching function that switches to the discrete GPU in graphics > intensive applications (such as games, cad software etc.) > > However, when a browser is running, it is often the case that the > integrated GPU is used, regardless of if a tab is doing something graphics > intensive with WebGL or not (https://twitter.com/grorgwork/status/ > 823719997616701440). > > On windows a user can influence this by a series of complicated steps to > designate the preferred GPU, but it is a machine wide-setting, and > sometimes it was ignored (I don't know if that's still the case). > > Obviously this presents a problem for WebGL developers. 
Neither would we > want to leech a users batteries unnecessarily, nor would we like to force a > user with a discrete GPU to receive worse performance should they wish to > use a graphics intensive WebGL application. > > In WebGL1 there was a context creation flag called " > preferLowPowerToHighPerformance", but I'm not aware how widely this was > implemented, and apparently it's also ignored on macOS (because it defaults > to false, yet the discrete GPU is still not used). > > WebGL2 has no equivalent context creation flag. > > Questions: > > 1. It would seem we have a sufficient mechanism to express a GPU > preference, is this a correct assessment? > 2. Why was preferLowPowerToHighPerformance dropped from WebGL2? > 3. Why is preferLowPowerToHighPerformance ignored for WebGL1 on some > configurations where it would be most useful? > 4. Should an additional mechanism be introduced so a user can switch > between GPUs at his choice for tabs? > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thu...@ Tue Jan 24 06:02:33 2017 From: thu...@ (Ben Adams) Date: Tue, 24 Jan 2017 14:02:33 +0000 Subject: [Public WebGL] dual GPU setups default to integrated GPU In-Reply-To: References: Message-ID: dGPU should always be used when on power; and this is only a decision that effects the choice when on battery? On 24 January 2017 at 13:00, Maksims Mihejevs wrote: > We've recognised same problem that a lot of people with Windows laptops > and dual gpu get integrated gpu as default. > > Although this is not the case for Edge for example. Which gives it > advantage over other browsers on Windows laptops. > > On 24 Jan 2017 9:26 a.m., "Florian B?sch" wrote: > >> Dual GPU laptops such as some brands of windows and macOS laptops, the OS >> has a GPU switching function that switches to the discrete GPU in graphics >> intensive applications (such as games, cad software etc.) >> >> However, when a browser is running, it is often the case that the >> integrated GPU is used, regardless of if a tab is doing something graphics >> intensive with WebGL or not (https://twitter.com/grorgwork >> /status/823719997616701440). >> >> On windows a user can influence this by a series of complicated steps to >> designate the preferred GPU, but it is a machine wide-setting, and >> sometimes it was ignored (I don't know if that's still the case). >> >> Obviously this presents a problem for WebGL developers. Neither would we >> want to leech a users batteries unnecessarily, nor would we like to force a >> user with a discrete GPU to receive worse performance should they wish to >> use a graphics intensive WebGL application. >> >> In WebGL1 there was a context creation flag called >> "preferLowPowerToHighPerformance", but I'm not aware how widely this was >> implemented, and apparently it's also ignored on macOS (because it defaults >> to false, yet the discrete GPU is still not used). >> >> WebGL2 has no equivalent context creation flag. >> >> Questions: >> >> 1. It would seem we have a sufficient mechanism to express a GPU >> preference, is this a correct assessment? >> 2. Why was preferLowPowerToHighPerformance dropped from WebGL2? >> 3. Why is preferLowPowerToHighPerformance ignored for WebGL1 on some >> configurations where it would be most useful? >> 4. Should an additional mechanism be introduced so a user can switch >> between GPUs at his choice for tabs? >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
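For concreteness, the WebGL 1 creation flag mentioned above is spelled like this. It is only a hint, its default is false, and as this thread notes it is not clear that either value actually changes which GPU gets picked; whether WebGL 2 honors an equivalent was one of the open questions here. A minimal sketch, assuming an existing canvas element (extra dictionary members are ignored by WebIDL conversion, so passing the flag to a webgl2 request is harmless even where it is not defined):

    // Non-binding hint about GPU power/performance preference.
    var attribs = { preferLowPowerToHighPerformance: false };
    var gl = canvas.getContext('webgl2', attribs) ||
             canvas.getContext('webgl', attribs);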
URL: From max...@ Tue Jan 24 06:30:45 2017 From: max...@ (Maksims Mihejevs) Date: Tue, 24 Jan 2017 14:30:45 +0000 Subject: [Public WebGL] dual GPU setups default to integrated GPU In-Reply-To: References: Message-ID: Even my home laptop by default uses integrated GPU for Chrome, regardless of battery/plug. NVIDIA Control Panel has a preset for programs, and I've seen it is set by default to Integrated. On 24 January 2017 at 14:02, Ben Adams wrote: > dGPU should always be used when on power; and this is only a decision that > effects the choice when on battery? > > > On 24 January 2017 at 13:00, Maksims Mihejevs wrote: > >> We've recognised same problem that a lot of people with Windows laptops >> and dual gpu get integrated gpu as default. >> >> Although this is not the case for Edge for example. Which gives it >> advantage over other browsers on Windows laptops. >> >> On 24 Jan 2017 9:26 a.m., "Florian B?sch" wrote: >> >>> Dual GPU laptops such as some brands of windows and macOS laptops, the >>> OS has a GPU switching function that switches to the discrete GPU in >>> graphics intensive applications (such as games, cad software etc.) >>> >>> However, when a browser is running, it is often the case that the >>> integrated GPU is used, regardless of if a tab is doing something graphics >>> intensive with WebGL or not (https://twitter.com/grorgwork >>> /status/823719997616701440). >>> >>> On windows a user can influence this by a series of complicated steps to >>> designate the preferred GPU, but it is a machine wide-setting, and >>> sometimes it was ignored (I don't know if that's still the case). >>> >>> Obviously this presents a problem for WebGL developers. Neither would we >>> want to leech a users batteries unnecessarily, nor would we like to force a >>> user with a discrete GPU to receive worse performance should they wish to >>> use a graphics intensive WebGL application. >>> >>> In WebGL1 there was a context creation flag called >>> "preferLowPowerToHighPerformance", but I'm not aware how widely this >>> was implemented, and apparently it's also ignored on macOS (because it >>> defaults to false, yet the discrete GPU is still not used). >>> >>> WebGL2 has no equivalent context creation flag. >>> >>> Questions: >>> >>> 1. It would seem we have a sufficient mechanism to express a GPU >>> preference, is this a correct assessment? >>> 2. Why was preferLowPowerToHighPerformance dropped from WebGL2? >>> 3. Why is preferLowPowerToHighPerformance ignored for WebGL1 on some >>> configurations where it would be most useful? >>> 4. Should an additional mechanism be introduced so a user can switch >>> between GPUs at his choice for tabs? >>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jgi...@ Wed Jan 25 14:00:55 2017 From: jgi...@ (Jeff Gilbert) Date: Wed, 25 Jan 2017 14:00:55 -0800 Subject: [Public WebGL] EXT_shader_texture_lod in WebGL2? In-Reply-To: References: Message-ID: Yep, my bad. I never ended up disabling these. Bug filed: https://bugzilla.mozilla.org/show_bug.cgi?id=1333930 On Sat, Jan 21, 2017 at 2:58 PM, Kai Ninomiya wrote: > FYI I checked FF beta and it is still enabled, so they probably didn't have > time before their WebGL 2 launch. > > > On Sat, Jan 21, 2017, 2:33 PM Florian B?sch wrote: >> >> Changed the text to: >> >>> Often implemented in browsers by decompressing on the CPU and uploading >>> full size to GPU with severe performance, vram and quality impacts. Fixed in >>> Chrome 57 and Firefox ?? 
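Regardless of which GPU the browser ends up selecting, a page can at least check which renderer it actually got, which helps when investigating reports like the ones in this thread. A small sketch using the WEBGL_debug_renderer_info extension (not exposed by every browser, and the strings it returns are driver- and platform-dependent):

    var canvas = document.createElement('canvas');
    var gl = canvas.getContext('webgl');
    if (gl) {
      var dbg = gl.getExtension('WEBGL_debug_renderer_info');
      if (dbg) {
        // Typically something like "ANGLE (Intel(R) HD Graphics ...)" on
        // the integrated GPU, or an NVIDIA/AMD string on the discrete one.
        console.log(gl.getParameter(dbg.UNMASKED_VENDOR_WEBGL));
        console.log(gl.getParameter(dbg.UNMASKED_RENDERER_WEBGL));
      }
    }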
>> >> >> Will remove the warning a month after the fix has landed in Chrome and >> Firefox public releases (Jeff please inform me which FF version). >> >> On Sat, Jan 21, 2017 at 11:28 PM, Kai Ninomiya wrote: >>> >>> In Chrome, the ETC1+2 formats should be disabled on both WebGL 1 and >>> WebGL 2 on ANGLE (i.e. Windows) unconditionally. Unfortunately I made a >>> mistake in my first patch to disable ETC1 on ANGLE, so it won't roll out >>> until Chrome 57 (beginning of March). ETC2 should be correct in 56 (end of >>> January). >>> >>> So the warning should very soon become outdated, and it would be good to >>> mention that in the warning text. I am not sure about the status in Firefox >>> but IIRC they agreed to implement the same semantics and it's probably in >>> the version they are releasing this week(?) >>> >>> The linked bug (658763) is for re-enabling it on desktop platforms which >>> have hardware support (maybe Intel), and for ANGLE on other platforms. >>> >>> -Kai >>> >>> >>> On Sat, Jan 21, 2017, 2:04 PM Florian B?sch wrote: >>>> >>>> I'll add a disclaimer to the ETC and ETC1 extension that empathically >>>> discourages people from using these extensions. Are there any other >>>> extensions I should warn people not to use? >>>> >>>> On Sat, Jan 21, 2017 at 11:01 PM, Florian B?sch >>>> wrote: >>>>> >>>>> Well we had the lengthy debate with a clear outcome, do not expose >>>>> capabilities that the hardware doesn't support, especially not when the >>>>> emulation has drastically different characteristics and leads to worse >>>>> outcomes than when it wasn't pretend supported. I'm guessing this then falls >>>>> under "WebGL1 is in maintenance mode and we won't fix it."? >>>>> >>>>> On Sat, Jan 21, 2017 at 10:54 PM, Maksims Mihejevs >>>>> wrote: >>>>>> >>>>>> This suggests they haven't done it: >>>>>> https://bugs.chromium.org/p/chromium/issues/detail?id=658763&desc=2 >>>>>> >>>>>> We wanted to add ETC2 support, but currently blocked by this, as it is >>>>>> totally unacceptable path: more VRAM (comparing to alternatives) and more >>>>>> Download size. >>>>>> >>>>>> On 21 January 2017 at 21:46, Maksims Mihejevs >>>>>> wrote: >>>>>>> >>>>>>> Correct me if I'm wrong. >>>>>>> But ETC2 and EAC is mandatory on OpenGL ES 3.0+ and OpenGL 4.3+ >>>>>>> right? >>>>>>> I could not find any information that Nvidia GPU actually supports >>>>>>> ETC2 and EAC. >>>>>>> >>>>>>> On Windows, using ANGLE, there is no OpenGL involved. >>>>>>> >>>>>>> So why then we seeing 71% support of WEBGL_compressed_texture_etc on >>>>>>> Desktop Windows? And where there is OpenGL, such as Linux and OSX, we see >>>>>>> only 5% on Linux, and 0% on OSX? :) >>>>>>> >>>>>>> I believe, they haven't actually decided anything, and wen't their >>>>>>> way regarding CPU path, unless there is somebody here to prove that I'm >>>>>>> wrong. >>>>>>> >>>>>>> On 21 January 2017 at 15:55, Florian B?sch wrote: >>>>>>>> >>>>>>>> On Sat, Jan 21, 2017 at 1:32 PM, Maksims Mihejevs >>>>>>>> wrote: >>>>>>>>> >>>>>>>>> We've seeing already many enormous issues. For example MSAA with >>>>>>>>> ANGLE has CPU path, and is unusable at all, as performance drops insanely. >>>>>>>>> ETC2 was another case where CPU path was pushed a lot. >>>>>>>> >>>>>>>> >>>>>>>> Are you sure ETC2 is software decoded and put on the GPU plain? 
>>>>>>>> Because if so, we've had a lengthy debate in 2016 about ETC1 that ended with >>>>>>>> Jeff Gilbert stating: >>>>>>>> >>>>>>>>> Update: The WG is planning on only exposing the extension where >>>>>>>>> there >>>>>>>>> is 'native' support. (Not D3D, maybe not Desktop NV?) We feel this >>>>>>>>> best matches what devs expect when they see support for a >>>>>>>>> compressed >>>>>>>>> texture extension. Compressed image formats are a better delivery >>>>>>>>> mechanism than compressed texture formats, if they're going to be >>>>>>>>> decompressed anyway. >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> https://www.khronos.org/webgl/public-mailing-list/archives/1609/msg00074.php >>>>>>>> >>>>>>>> I don't believe we'll need to have this debate all over again... do >>>>>>>> we? >>>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>> >>>> >> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gri...@ Fri Jan 27 00:42:52 2017 From: gri...@ (Andrew) Date: Fri, 27 Jan 2017 09:42:52 +0100 Subject: [Public WebGL] WEBGL_lose_context In-Reply-To: References: <581cc522.41d71c0a.3b445.dc9d@mx.google.com> <5821f2fd.07941c0a.72454.8d6e@mx.google.com> <58220957.e626c20a.4cc77.0c1b@mx.google.com> Message-ID: I was a bit surprised to learn that the resources on the GPU are freed up by the browser when their JS objects are GC'ed, I thought you needed to call for example deleteTexture explicitly. So what happens if I have a very simple demo where I just create a texture, upload some image onto it, I lose the reference for the JS object (the WebGLTexture instance) but I still have a render loop where the shader I'm rendering with uses that texture? In this case I thought since I no longer need to do anything to that texture with JS (after I bind it once initially to a given unit), I'm free to lose it's WebGLTexture reference but since I haven't deleted the resource on the GPU, I can still use it for rendering. Instead of this, when the GC collects the WebGLTexture reference and frees up the resource on the GPU, there will be a WebGL warning/error because I'm trying to use a texture which has been deleted? On Fri, Nov 11, 2016 at 11:55 PM, Jeff Gilbert wrote: > > To be clear, while we do guarantee that unreferenced WebGL objects > will eventually be GC'd, like with all GC-able objects, do not expect > it to be prompt. For instance, Firefox (at least) does not trigger GC > to address a high-GPU-memory situation, so relying heavily on the GC > to release GPU-memory-intensive WebGL resources is an anti-pattern, > and could lead to OUT_OF_MEMORY issues. > > On Fri, Nov 11, 2016 at 2:23 PM, Maksims Mihejevs > wrote: > > Thank you Russell and Jeff. > > > > In PlayCanvas we do encourage users to release resources associated with > > abstract assets in order to manage their data on GPU. > > Few months back when we made changes on engine to enable clear deleting > of > > resources. Before that, despite assumption that some stuff would be > GC'ed, > > it was actually complex to debug, and seemed like wasn't a case. > > So now users can easily release GPU resources by explicit calls. > > > > Thanks for clarification. > > Max > > > > On 11 November 2016 at 02:30, Kenneth Russell wrote: > >> > >> Max: not to speak for Jeff, but while that will work, it is strongly > >> discouraged. 
There is no guarantee that the WebGLTexture object will be > >> reclaimed promptly be the garbage collector. Please use the explicit > delete* > >> APIs on the WebGLRenderingContext to release GPU resources that your > >> application is no longer using. > >> > >> -Ken > >> > >> > >> On Thu, Nov 10, 2016 at 6:22 PM, Maksims Mihejevs > >> wrote: > >>> > >>> So Jeff, this should work? > >>> > >>> var texture = gl.createTexture() > >>> // put buffer, and upload it to GPU > >>> texture = null; > >>> // after some time that texture will be collected by GC? > >>> > >>> Cheers, > >>> Max > >>> > >>> On 10 November 2016 at 22:02, Jeff Gilbert > wrote: > >>>> > >>>> There's no way to know from JS that a GL object is unused, but we do > >>>> track the WebGL JS objects and GC them when they are unused by both JS > >>>> and GL. (GL resources are freed when we GC them) > >>>> > >>>> It's not a clean solution, but WEBGL_lose_context does give you a way > >>>> to very strongly hint to the browser that you want it to delete all a > >>>> WebGL context's resources. (Certainly in Firefox, loseContext() > >>>> destroys all child objects of the context, then tears down the actual > >>>> driver's GLContext as well) > >>>> > >>>> In fact, when there are too many outstanding WebGL contexts active in > >>>> the browser (maybe many background tabs?), our method for controlling > >>>> resource usage is to force the oldest WebGL context to become Lost. > >>>> The code path used here is identical in Firefox to what's invoked by > >>>> `loseContext()`. > >>>> > >>>> I'm not sure what the codepaths look like in other browsers, but > >>>> `loseContext()` is a byword for "release this context and its > >>>> resources" in Firefox. > >>>> > >>>> On Tue, Nov 8, 2016 at 3:39 PM, Maksims Mihejevs > >>>> wrote: > >>>> > GPU resources have to be explicitly destroyed, as there is no way to > >>>> > know > >>>> > from browser if it is no more needed. Because references are simply > - > >>>> > Number. > >>>> > > >>>> > If developer uploads texture to GPU, the only way to remove it, is > by > >>>> > calling deleteTexture. > >>>> > There is no persistent object in JavaScript associated with GL > >>>> > references. > >>>> > > >>>> > It is not poor design of WebGL, but just inherited design from > OpenGL > >>>> > ES. > >>>> > Which not sure if anyone anticipated, that would be used not only in > >>>> > static > >>>> > environments, but in such dynamic as browser. > >>>> > > >>>> > Perhaps good way of destroying context with releasing all associated > >>>> > resources could be exposed to WebGL, especially with the nature of > >>>> > single-page websites. > >>>> > > >>>> > I assume it does it when you remove canvas element from DOM and GC > all > >>>> > related to context data. Making sure it is not referenced anywhere > in > >>>> > JS. 
> >>>> > > >>>> > Cheers, > >>>> > Max > >>>> > > >>>> > > >>>> > On 8 Nov 2016 5:22 p.m., wrote: > >>>> > > >>>> > Well it?s a dirty trick for sure, and I wouldn?t recommend it to > >>>> > anyone :) > >>>> > > >>>> > > >>>> > > >>>> > - Omar > >>>> > > >>>> > > >>>> > > >>>> > From: Justin Novosad > >>>> > Sent: Tuesday, November 8, 2016 6:58 PM > >>>> > To: omarhuseynov97...@ > >>>> > Cc: Jukka Jyl?nki; public_webgl...@ > >>>> > > >>>> > > >>>> > Subject: Re: [Public WebGL] WEBGL_lose_context > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > On Tue, Nov 8, 2016 at 10:45 AM, wrote: > >>>> > > >>>> > I?ve added WebGL context to the iframe element and then found out > that > >>>> > refreshing that iframe page resulted in significant memory decrease > in > >>>> > Chrome as I watched it using dev tools (garbage collection kicking > >>>> > in?). As > >>>> > far as I know, this is the only way of ?deleting? WebGL context on > >>>> > Chrome > >>>> > (but I am still not sure as I?m not aware of what Chrome is actually > >>>> > doing) > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > Hmmm... you need to determine whether that trick relies on specified > >>>> > behaviors. Otherwise, this may very well be a trick that works > today > >>>> > but > >>>> > not tomorrow. Anyone here an iframe expert? > >>>> > > >>>> > - Omar > >>>> > > >>>> > > >>>> > > >>>> > From: Jukka Jyl?nki > >>>> > Sent: Tuesday, November 8, 2016 6:19 PM > >>>> > To: Ryan Patterson > >>>> > Cc: Kenneth Russell; public webgl > >>>> > Subject: Re: [Public WebGL] WEBGL_lose_context > >>>> > > >>>> > > >>>> > > >>>> > I agree and have myself sometimes pondered about how to kill a > context > >>>> > explicitly, and there's no way other than to release refs to all GL > >>>> > objects > >>>> > and leave it to the GC. It would be nice to have an explicit API, > >>>> > although > >>>> > the lose_context extension is not it (and shouldn't be), because it > >>>> > matches > >>>> > the resource loss semantics from the system, to solve another > problem, > >>>> > and > >>>> > not the context itself. It would be nice to have a deleteContext() > >>>> > feature, > >>>> > since otherwise getContext()ing something effectively "taints" the > >>>> > > >>>> > and ties it to that context type for its remaining lifetime, which > is > >>>> > a bit > >>>> > messy. Not critical, but agree this is a bit dirty part if the API. > >>>> > > >>>> > > >>>> > > >>>> > On Nov 7, 2016 4:24 PM, "Ryan Patterson" > wrote: > >>>> > > >>>> > Unfortunately I have been unable to find the deleteContext function > in > >>>> > the > >>>> > canvas API. Some implementations keep webGL contexts around even > >>>> > after all > >>>> > references are set to null. The WEBGL_lose_context extension seems > to > >>>> > be > >>>> > the only way to declare you are finished with a context and free the > >>>> > context > >>>> > resource itself in a deterministic manor. > >>>> > > >>>> > _____________ > >>>> > Ryan Patterson > >>>> > > >>>> > > >>>> > > >>>> > On Fri, Nov 4, 2016 at 1:45 PM, Kenneth Russell > >>>> > wrote: > >>>> > > >>>> > This deliberately isn't specified. If you're looking for a way to > free > >>>> > your > >>>> > application's resources, you should use the various delete* APIs. > >>>> > > >>>> > > >>>> > > >>>> > -Ken > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > On Fri, Nov 4, 2016 at 10:28 AM, wrote: > >>>> > > >>>> > It says the extension ?WEBGL_lose_context? simulates webgl context > >>>> > loss. 
> >>>> > Does it mean the browser actually frees the resources on the GPU > >>>> > allocated > >>>> > by the webgl context or just pretends that it did? > >>>> > > >>>> > > >>>> > > >>>> > - Omar > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > > >>>> > > >>> > >>> > >> > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zmo...@ Fri Jan 27 09:02:35 2017 From: zmo...@ (Zhenyao Mo) Date: Fri, 27 Jan 2017 09:02:35 -0800 Subject: [Public WebGL] WEBGL_lose_context In-Reply-To: References: <581cc522.41d71c0a.3b445.dc9d@mx.google.com> <5821f2fd.07941c0a.72454.8d6e@mx.google.com> <58220957.e626c20a.4cc77.0c1b@mx.google.com> Message-ID: On Fri, Jan 27, 2017 at 12:42 AM, Andrew wrote: > I was a bit surprised to learn that the resources on the GPU are freed up > by the browser when their JS objects are GC'ed, I thought you needed to > call for example deleteTexture explicitly. > > So what happens if I have a very simple demo where I just create a > texture, upload some image onto it, I lose the reference for the JS object > (the WebGLTexture instance) but I still have a render loop where the shader > I'm rendering with uses that texture? > In that case, that texture must be bound to a binding point, which is a reference to keep the object alive. > In this case I thought since I no longer need to do anything to that > texture with JS (after I bind it once initially to a given unit), I'm free > to lose it's WebGLTexture reference but since I haven't deleted the > resource on the GPU, I can still use it for rendering. > Instead of this, when the GC collects the WebGLTexture reference and frees > up the resource on the GPU, there will be a WebGL warning/error because I'm > trying to use a texture which has been deleted? > > > > On Fri, Nov 11, 2016 at 11:55 PM, Jeff Gilbert > wrote: > >> >> To be clear, while we do guarantee that unreferenced WebGL objects >> will eventually be GC'd, like with all GC-able objects, do not expect >> it to be prompt. For instance, Firefox (at least) does not trigger GC >> to address a high-GPU-memory situation, so relying heavily on the GC >> to release GPU-memory-intensive WebGL resources is an anti-pattern, >> and could lead to OUT_OF_MEMORY issues. >> >> On Fri, Nov 11, 2016 at 2:23 PM, Maksims Mihejevs >> wrote: >> > Thank you Russell and Jeff. >> > >> > In PlayCanvas we do encourage users to release resources associated with >> > abstract assets in order to manage their data on GPU. >> > Few months back when we made changes on engine to enable clear deleting >> of >> > resources. Before that, despite assumption that some stuff would be >> GC'ed, >> > it was actually complex to debug, and seemed like wasn't a case. >> > So now users can easily release GPU resources by explicit calls. >> > >> > Thanks for clarification. >> > Max >> > >> > On 11 November 2016 at 02:30, Kenneth Russell wrote: >> >> >> >> Max: not to speak for Jeff, but while that will work, it is strongly >> >> discouraged. There is no guarantee that the WebGLTexture object will be >> >> reclaimed promptly be the garbage collector. 
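Condensing the guidance in this thread into code: explicit delete* calls are the dependable way to free GPU memory, dropping the JS reference only makes the wrapper eligible for an eventual GC (and a texture still attached to a binding point stays alive regardless), and WEBGL_lose_context is the strongest available hint for tearing down a whole context. A sketch, assuming an existing context `gl` and a texture variable `texture`:

    // Dependable: release the GPU resource explicitly when done with it.
    gl.bindTexture(gl.TEXTURE_2D, null);   // no binding point keeps it alive
    gl.deleteTexture(texture);
    texture = null;

    // Not dependable on its own: `texture = null;` without deleteTexture
    // leaves the GPU memory allocated until nothing (including a binding
    // point) references the object and the garbage collector eventually
    // runs, which may be long after the memory was needed back.

    // Tearing down an entire context and all of its resources:
    var lose = gl.getExtension('WEBGL_lose_context');
    if (lose) {
      lose.loseContext();   // strong hint to release the context's resources
    }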
Please use the explicit >> delete* >> >> APIs on the WebGLRenderingContext to release GPU resources that your >> >> application is no longer using. >> >> >> >> -Ken >> >> >> >> >> >> On Thu, Nov 10, 2016 at 6:22 PM, Maksims Mihejevs >> >> wrote: >> >>> >> >>> So Jeff, this should work? >> >>> >> >>> var texture = gl.createTexture() >> >>> // put buffer, and upload it to GPU >> >>> texture = null; >> >>> // after some time that texture will be collected by GC? >> >>> >> >>> Cheers, >> >>> Max >> >>> >> >>> On 10 November 2016 at 22:02, Jeff Gilbert >> wrote: >> >>>> >> >>>> There's no way to know from JS that a GL object is unused, but we do >> >>>> track the WebGL JS objects and GC them when they are unused by both >> JS >> >>>> and GL. (GL resources are freed when we GC them) >> >>>> >> >>>> It's not a clean solution, but WEBGL_lose_context does give you a way >> >>>> to very strongly hint to the browser that you want it to delete all a >> >>>> WebGL context's resources. (Certainly in Firefox, loseContext() >> >>>> destroys all child objects of the context, then tears down the actual >> >>>> driver's GLContext as well) >> >>>> >> >>>> In fact, when there are too many outstanding WebGL contexts active in >> >>>> the browser (maybe many background tabs?), our method for controlling >> >>>> resource usage is to force the oldest WebGL context to become Lost. >> >>>> The code path used here is identical in Firefox to what's invoked by >> >>>> `loseContext()`. >> >>>> >> >>>> I'm not sure what the codepaths look like in other browsers, but >> >>>> `loseContext()` is a byword for "release this context and its >> >>>> resources" in Firefox. >> >>>> >> >>>> On Tue, Nov 8, 2016 at 3:39 PM, Maksims Mihejevs > > >> >>>> wrote: >> >>>> > GPU resources have to be explicitly destroyed, as there is no way >> to >> >>>> > know >> >>>> > from browser if it is no more needed. Because references are >> simply - >> >>>> > Number. >> >>>> > >> >>>> > If developer uploads texture to GPU, the only way to remove it, is >> by >> >>>> > calling deleteTexture. >> >>>> > There is no persistent object in JavaScript associated with GL >> >>>> > references. >> >>>> > >> >>>> > It is not poor design of WebGL, but just inherited design from >> OpenGL >> >>>> > ES. >> >>>> > Which not sure if anyone anticipated, that would be used not only >> in >> >>>> > static >> >>>> > environments, but in such dynamic as browser. >> >>>> > >> >>>> > Perhaps good way of destroying context with releasing all >> associated >> >>>> > resources could be exposed to WebGL, especially with the nature of >> >>>> > single-page websites. >> >>>> > >> >>>> > I assume it does it when you remove canvas element from DOM and GC >> all >> >>>> > related to context data. Making sure it is not referenced anywhere >> in >> >>>> > JS. 
>> >>>> > >> >>>> > Cheers, >> >>>> > Max >> >>>> > >> >>>> > >> >>>> > On 8 Nov 2016 5:22 p.m., wrote: >> >>>> > >> >>>> > Well it?s a dirty trick for sure, and I wouldn?t recommend it to >> >>>> > anyone :) >> >>>> > >> >>>> > >> >>>> > >> >>>> > - Omar >> >>>> > >> >>>> > >> >>>> > >> >>>> > From: Justin Novosad >> >>>> > Sent: Tuesday, November 8, 2016 6:58 PM >> >>>> > To: omarhuseynov97...@ >> >>>> > Cc: Jukka Jyl?nki; public_webgl...@ >> >>>> > >> >>>> > >> >>>> > Subject: Re: [Public WebGL] WEBGL_lose_context >> >>>> > >> >>>> > >> >>>> > >> >>>> > >> >>>> > >> >>>> > >> >>>> > >> >>>> > On Tue, Nov 8, 2016 at 10:45 AM, wrote: >> >>>> > >> >>>> > I?ve added WebGL context to the iframe element and then found out >> that >> >>>> > refreshing that iframe page resulted in significant memory >> decrease in >> >>>> > Chrome as I watched it using dev tools (garbage collection kicking >> >>>> > in?). As >> >>>> > far as I know, this is the only way of ?deleting? WebGL context on >> >>>> > Chrome >> >>>> > (but I am still not sure as I?m not aware of what Chrome is >> actually >> >>>> > doing) >> >>>> > >> >>>> > >> >>>> > >> >>>> > >> >>>> > >> >>>> > Hmmm... you need to determine whether that trick relies on >> specified >> >>>> > behaviors. Otherwise, this may very well be a trick that works >> today >> >>>> > but >> >>>> > not tomorrow. Anyone here an iframe expert? >> >>>> > >> >>>> > - Omar >> >>>> > >> >>>> > >> >>>> > >> >>>> > From: Jukka Jyl?nki >> >>>> > Sent: Tuesday, November 8, 2016 6:19 PM >> >>>> > To: Ryan Patterson >> >>>> > Cc: Kenneth Russell; public webgl >> >>>> > Subject: Re: [Public WebGL] WEBGL_lose_context >> >>>> > >> >>>> > >> >>>> > >> >>>> > I agree and have myself sometimes pondered about how to kill a >> context >> >>>> > explicitly, and there's no way other than to release refs to all GL >> >>>> > objects >> >>>> > and leave it to the GC. It would be nice to have an explicit API, >> >>>> > although >> >>>> > the lose_context extension is not it (and shouldn't be), because it >> >>>> > matches >> >>>> > the resource loss semantics from the system, to solve another >> problem, >> >>>> > and >> >>>> > not the context itself. It would be nice to have a deleteContext() >> >>>> > feature, >> >>>> > since otherwise getContext()ing something effectively "taints" the >> >>>> > >> >>>> > and ties it to that context type for its remaining lifetime, which >> is >> >>>> > a bit >> >>>> > messy. Not critical, but agree this is a bit dirty part if the API. >> >>>> > >> >>>> > >> >>>> > >> >>>> > On Nov 7, 2016 4:24 PM, "Ryan Patterson" >> wrote: >> >>>> > >> >>>> > Unfortunately I have been unable to find the deleteContext >> function in >> >>>> > the >> >>>> > canvas API. Some implementations keep webGL contexts around even >> >>>> > after all >> >>>> > references are set to null. The WEBGL_lose_context extension >> seems to >> >>>> > be >> >>>> > the only way to declare you are finished with a context and free >> the >> >>>> > context >> >>>> > resource itself in a deterministic manor. >> >>>> > >> >>>> > _____________ >> >>>> > Ryan Patterson >> >>>> > >> >>>> > >> >>>> > >> >>>> > On Fri, Nov 4, 2016 at 1:45 PM, Kenneth Russell >> >>>> > wrote: >> >>>> > >> >>>> > This deliberately isn't specified. If you're looking for a way to >> free >> >>>> > your >> >>>> > application's resources, you should use the various delete* APIs. 
>> >>>> > >> >>>> > >> >>>> > >> >>>> > -Ken >> >>>> > >> >>>> > >> >>>> > >> >>>> > >> >>>> > >> >>>> > On Fri, Nov 4, 2016 at 10:28 AM, wrote: >> >>>> > >> >>>> > It says the extension ?WEBGL_lose_context? simulates webgl context >> >>>> > loss. >> >>>> > Does it mean the browser actually frees the resources on the GPU >> >>>> > allocated >> >>>> > by the webgl context or just pretends that it did? >> >>>> > >> >>>> > >> >>>> > >> >>>> > - Omar >> >>>> > >> >>>> > >> >>>> > >> >>>> > >> >>>> > >> >>>> > >> >>>> > >> >>>> > >> >>>> > >> >>>> > >> >>>> > >> >>>> > >> >>> >> >>> >> >> >> > >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Fri Jan 27 09:12:47 2017 From: kbr...@ (Kenneth Russell) Date: Fri, 27 Jan 2017 09:12:47 -0800 Subject: [Public WebGL] First WebGL 2.0 implementations released Message-ID: WebGL community, The first WebGL 2.0 implementations have been released! Firefox 51 and Chrome 56 are shipping now, and both support the new version of the API, which tracks the OpenGL ES 3.0 feature set. Mozilla wrote a nice blog post describing some of the key new features, and linking to some new demos: https://hacks.mozilla.org/2017/01/webgl-2-lands-in-firefox/ In both browsers, WebGL 2.0 is enabled by default on desktop platforms. Android will follow soon, once the conformance tests have been passed. It's already possible to enable WebGL 2.0 in both browsers on Android for development and testing. Looking forward to seeing what the community creates with this significant upgrade to the feature set! -Ken -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Fri Jan 27 09:54:32 2017 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Fri, 27 Jan 2017 18:54:32 +0100 Subject: [Public WebGL] First WebGL 2.0 implementations released In-Reply-To: References: Message-ID: Outstanding! :) In yesterdays stats, WebGL2 went to 2%, the highest since the start of record keeping in October (still low across 30 days of course): http://webglstats.com/webgl2 On Fri, Jan 27, 2017 at 6:12 PM, Kenneth Russell wrote: > WebGL community, > > The first WebGL 2.0 implementations have been released! Firefox 51 and > Chrome 56 are shipping now, and both support the new version of the API, > which tracks the OpenGL ES 3.0 feature set. Mozilla wrote a nice blog post > describing some of the key new features, and linking to some new demos: > > https://hacks.mozilla.org/2017/01/webgl-2-lands-in-firefox/ > > In both browsers, WebGL 2.0 is enabled by default on desktop platforms. > Android will follow soon, once the conformance tests have been passed. It's > already possible to enable WebGL 2.0 in both browsers on Android for > development and testing. > > Looking forward to seeing what the community creates with this significant > upgrade to the feature set! > > -Ken > > -------------- next part -------------- An HTML attachment was scrubbed... 
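With WebGL 2.0 shipping in only two browsers so far, content aimed at the open web will need to feature-detect it and keep a WebGL 1 path for a while. A minimal sketch of the usual fallback at context creation (variable names are illustrative):

    var canvas = document.createElement('canvas');
    var isWebGL2 = true;
    var gl = canvas.getContext('webgl2');
    if (!gl) {
      isWebGL2 = false;
      // Fall back to WebGL 1, including the older alias some browsers used.
      gl = canvas.getContext('webgl') ||
           canvas.getContext('experimental-webgl');
    }
    if (!gl) {
      throw new Error('WebGL is not available');
    }
    // Gate WebGL2-only code paths on isWebGL2; on the WebGL 1 path,
    // query the equivalent functionality via getExtension() as usual.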
URL: From gri...@ Sat Jan 28 04:49:06 2017 From: gri...@ (Andrew) Date: Sat, 28 Jan 2017 13:49:06 +0100 Subject: [Public WebGL] WEBGL_lose_context In-Reply-To: References: <581cc522.41d71c0a.3b445.dc9d@mx.google.com> <5821f2fd.07941c0a.72454.8d6e@mx.google.com> <58220957.e626c20a.4cc77.0c1b@mx.google.com> Message-ID: That makes sense, thank you! On Fri, Jan 27, 2017 at 6:02 PM, Zhenyao Mo wrote: > > > On Fri, Jan 27, 2017 at 12:42 AM, Andrew wrote: > >> I was a bit surprised to learn that the resources on the GPU are freed up >> by the browser when their JS objects are GC'ed, I thought you needed to >> call for example deleteTexture explicitly. >> >> So what happens if I have a very simple demo where I just create a >> texture, upload some image onto it, I lose the reference for the JS object >> (the WebGLTexture instance) but I still have a render loop where the shader >> I'm rendering with uses that texture? >> > > In that case, that texture must be bound to a binding point, which is a > reference to keep the object alive. > > >> In this case I thought since I no longer need to do anything to that >> texture with JS (after I bind it once initially to a given unit), I'm free >> to lose it's WebGLTexture reference but since I haven't deleted the >> resource on the GPU, I can still use it for rendering. >> Instead of this, when the GC collects the WebGLTexture reference and >> frees up the resource on the GPU, there will be a WebGL warning/error >> because I'm trying to use a texture which has been deleted? >> >> >> >> On Fri, Nov 11, 2016 at 11:55 PM, Jeff Gilbert >> wrote: >> >>> >>> To be clear, while we do guarantee that unreferenced WebGL objects >>> will eventually be GC'd, like with all GC-able objects, do not expect >>> it to be prompt. For instance, Firefox (at least) does not trigger GC >>> to address a high-GPU-memory situation, so relying heavily on the GC >>> to release GPU-memory-intensive WebGL resources is an anti-pattern, >>> and could lead to OUT_OF_MEMORY issues. >>> >>> On Fri, Nov 11, 2016 at 2:23 PM, Maksims Mihejevs >>> wrote: >>> > Thank you Russell and Jeff. >>> > >>> > In PlayCanvas we do encourage users to release resources associated >>> with >>> > abstract assets in order to manage their data on GPU. >>> > Few months back when we made changes on engine to enable clear >>> deleting of >>> > resources. Before that, despite assumption that some stuff would be >>> GC'ed, >>> > it was actually complex to debug, and seemed like wasn't a case. >>> > So now users can easily release GPU resources by explicit calls. >>> > >>> > Thanks for clarification. >>> > Max >>> > >>> > On 11 November 2016 at 02:30, Kenneth Russell wrote: >>> >> >>> >> Max: not to speak for Jeff, but while that will work, it is strongly >>> >> discouraged. There is no guarantee that the WebGLTexture object will >>> be >>> >> reclaimed promptly be the garbage collector. Please use the explicit >>> delete* >>> >> APIs on the WebGLRenderingContext to release GPU resources that your >>> >> application is no longer using. >>> >> >>> >> -Ken >>> >> >>> >> >>> >> On Thu, Nov 10, 2016 at 6:22 PM, Maksims Mihejevs >> > >>> >> wrote: >>> >>> >>> >>> So Jeff, this should work? >>> >>> >>> >>> var texture = gl.createTexture() >>> >>> // put buffer, and upload it to GPU >>> >>> texture = null; >>> >>> // after some time that texture will be collected by GC? 
>>> >>> >>> >>> Cheers, >>> >>> Max >>> >>> >>> >>> On 10 November 2016 at 22:02, Jeff Gilbert >>> wrote: >>> >>>> >>> >>>> There's no way to know from JS that a GL object is unused, but we do >>> >>>> track the WebGL JS objects and GC them when they are unused by both >>> JS >>> >>>> and GL. (GL resources are freed when we GC them) >>> >>>> >>> >>>> It's not a clean solution, but WEBGL_lose_context does give you a >>> way >>> >>>> to very strongly hint to the browser that you want it to delete all >>> a >>> >>>> WebGL context's resources. (Certainly in Firefox, loseContext() >>> >>>> destroys all child objects of the context, then tears down the >>> actual >>> >>>> driver's GLContext as well) >>> >>>> >>> >>>> In fact, when there are too many outstanding WebGL contexts active >>> in >>> >>>> the browser (maybe many background tabs?), our method for >>> controlling >>> >>>> resource usage is to force the oldest WebGL context to become Lost. >>> >>>> The code path used here is identical in Firefox to what's invoked by >>> >>>> `loseContext()`. >>> >>>> >>> >>>> I'm not sure what the codepaths look like in other browsers, but >>> >>>> `loseContext()` is a byword for "release this context and its >>> >>>> resources" in Firefox. >>> >>>> >>> >>>> On Tue, Nov 8, 2016 at 3:39 PM, Maksims Mihejevs < >>> max...@> >>> >>>> wrote: >>> >>>> > GPU resources have to be explicitly destroyed, as there is no way >>> to >>> >>>> > know >>> >>>> > from browser if it is no more needed. Because references are >>> simply - >>> >>>> > Number. >>> >>>> > >>> >>>> > If developer uploads texture to GPU, the only way to remove it, >>> is by >>> >>>> > calling deleteTexture. >>> >>>> > There is no persistent object in JavaScript associated with GL >>> >>>> > references. >>> >>>> > >>> >>>> > It is not poor design of WebGL, but just inherited design from >>> OpenGL >>> >>>> > ES. >>> >>>> > Which not sure if anyone anticipated, that would be used not only >>> in >>> >>>> > static >>> >>>> > environments, but in such dynamic as browser. >>> >>>> > >>> >>>> > Perhaps good way of destroying context with releasing all >>> associated >>> >>>> > resources could be exposed to WebGL, especially with the nature of >>> >>>> > single-page websites. >>> >>>> > >>> >>>> > I assume it does it when you remove canvas element from DOM and >>> GC all >>> >>>> > related to context data. Making sure it is not referenced >>> anywhere in >>> >>>> > JS. >>> >>>> > >>> >>>> > Cheers, >>> >>>> > Max >>> >>>> > >>> >>>> > >>> >>>> > On 8 Nov 2016 5:22 p.m., wrote: >>> >>>> > >>> >>>> > Well it?s a dirty trick for sure, and I wouldn?t recommend it to >>> >>>> > anyone :) >>> >>>> > >>> >>>> > >>> >>>> > >>> >>>> > - Omar >>> >>>> > >>> >>>> > >>> >>>> > >>> >>>> > From: Justin Novosad >>> >>>> > Sent: Tuesday, November 8, 2016 6:58 PM >>> >>>> > To: omarhuseynov97...@ >>> >>>> > Cc: Jukka Jyl?nki; public_webgl...@ >>> >>>> > >>> >>>> > >>> >>>> > Subject: Re: [Public WebGL] WEBGL_lose_context >>> >>>> > >>> >>>> > >>> >>>> > >>> >>>> > >>> >>>> > >>> >>>> > >>> >>>> > >>> >>>> > On Tue, Nov 8, 2016 at 10:45 AM, >>> wrote: >>> >>>> > >>> >>>> > I?ve added WebGL context to the iframe element and then found out >>> that >>> >>>> > refreshing that iframe page resulted in significant memory >>> decrease in >>> >>>> > Chrome as I watched it using dev tools (garbage collection kicking >>> >>>> > in?). As >>> >>>> > far as I know, this is the only way of ?deleting? 
>>> >>>> > WebGL context in Chrome (but I am still not sure, as I'm not aware of
>>> >>>> > what Chrome is actually doing).
>>> >>>> >
>>> >>>> > Hmmm... you need to determine whether that trick relies on specified
>>> >>>> > behaviors. Otherwise, this may very well be a trick that works today
>>> >>>> > but not tomorrow. Anyone here an iframe expert?
>>> >>>> >
>>> >>>> > - Omar
>>> >>>> >
>>> >>>> > From: Jukka Jylänki
>>> >>>> > Sent: Tuesday, November 8, 2016 6:19 PM
>>> >>>> > To: Ryan Patterson
>>> >>>> > Cc: Kenneth Russell; public webgl
>>> >>>> > Subject: Re: [Public WebGL] WEBGL_lose_context
>>> >>>> >
>>> >>>> > I agree, and have myself sometimes pondered how to kill a context
>>> >>>> > explicitly; there's no way other than to release refs to all GL objects
>>> >>>> > and leave it to the GC. It would be nice to have an explicit API,
>>> >>>> > although the lose_context extension is not it (and shouldn't be),
>>> >>>> > because it matches the resource loss semantics from the system, to
>>> >>>> > solve another problem, and not the context itself. It would be nice to
>>> >>>> > have a deleteContext() feature, since otherwise getContext()ing
>>> >>>> > something effectively "taints" the <canvas> and ties it to that context
>>> >>>> > type for its remaining lifetime, which is a bit messy. Not critical,
>>> >>>> > but I agree this is a bit of a dirty part of the API.
>>> >>>> >
>>> >>>> > On Nov 7, 2016 4:24 PM, "Ryan Patterson" wrote:
>>> >>>> >
>>> >>>> > Unfortunately I have been unable to find a deleteContext function in
>>> >>>> > the canvas API. Some implementations keep WebGL contexts around even
>>> >>>> > after all references are set to null. The WEBGL_lose_context extension
>>> >>>> > seems to be the only way to declare that you are finished with a
>>> >>>> > context and free the context resource itself in a deterministic manner.
>>> >>>> >
>>> >>>> > _____________
>>> >>>> > Ryan Patterson
>>> >>>> >
>>> >>>> > On Fri, Nov 4, 2016 at 1:45 PM, Kenneth Russell wrote:
>>> >>>> >
>>> >>>> > This deliberately isn't specified. If you're looking for a way to free
>>> >>>> > your application's resources, you should use the various delete* APIs.
>>> >>>> >
>>> >>>> > -Ken
>>> >>>> >
>>> >>>> > On Fri, Nov 4, 2016 at 10:28 AM, wrote:
>>> >>>> >
>>> >>>> > It says the extension "WEBGL_lose_context" simulates WebGL context
>>> >>>> > loss. Does it mean the browser actually frees the resources on the GPU
>>> >>>> > allocated by the WebGL context or just pretends that it did?
>>> >>>> >
>>> >>>> > - Omar
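For illustration, a minimal sketch of the loseContext() hint discussed in this thread, assuming gl and canvas are an existing WebGL context and its canvas element (the extension can be unavailable, hence the null check):

    var ext = gl.getExtension('WEBGL_lose_context');

    canvas.addEventListener('webglcontextlost', function (e) {
        e.preventDefault();   // signal that restoration may be handled later
        // stop the render loop and drop references to GL objects here
    });

    if (ext) {
        ext.loseContext();       // strong hint: tear down this context's resources now
        // ext.restoreContext(); // only if the context will be used again later
    }

As specified, this simulates context loss; whether the browser actually frees the underlying GPU resources at that point is the implementation detail being discussed above.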
>>>
>>> -----------------------------------------------------------
>>> You are currently subscribed to public_webgl...@
>>> To unsubscribe, send an email to majordomo...@ with
>>> the following command in the body of your email:
>>> unsubscribe public_webgl
>>> -----------------------------------------------------------

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kbr...@ Mon Jan 30 17:45:18 2017
From: kbr...@ (Kenneth Russell)
Date: Mon, 30 Jan 2017 17:45:18 -0800
Subject: [Public WebGL] dual GPU setups default to integrated GPU
In-Reply-To: 
References: 
Message-ID: 

preferLowPowerToHighPerformance would apply equally to WebGL 2.0 as to WebGL 1.0. It wasn't dropped from the spec.

https://www.khronos.org/registry/webgl/specs/latest/2.0/#2.2 mandates that a few of the context creation attributes must be honored, but that's because multisampled renderbuffers are a mandatory part of the OpenGL ES 3.0 spec.

I'm not a D3D expert, so I don't know how feasible it is to render to a texture on one D3D device and display on another. To the best of my knowledge Edge doesn't dynamically activate the discrete GPU when WebGL's active and switch back to the integrated GPU when it isn't. It uses the "0th" GPU, whatever that is according to the control panel settings.

-Ken

On Tue, Jan 24, 2017 at 6:30 AM, Maksims Mihejevs wrote:

> Even my home laptop by default uses the integrated GPU for Chrome, regardless
> of battery/plug. The NVIDIA Control Panel has a preset for programs, and I've
> seen it set by default to Integrated.
>
> On 24 January 2017 at 14:02, Ben Adams wrote:
>
>> The dGPU should always be used when on power; and this is only a decision
>> that affects the choice when on battery?
>>
>> On 24 January 2017 at 13:00, Maksims Mihejevs wrote:
>>
>>> We've recognised the same problem: a lot of people with Windows laptops
>>> and dual GPUs get the integrated GPU as the default.
>>>
>>> Although this is not the case for Edge, for example, which gives it an
>>> advantage over other browsers on Windows laptops.
>>>
>>> On 24 Jan 2017 9:26 a.m., "Florian Bösch" wrote:
>>>
>>>> On dual GPU laptops, such as some brands of Windows and macOS laptops,
>>>> the OS has a GPU switching function that switches to the discrete GPU for
>>>> graphics intensive applications (such as games, CAD software, etc.).
>>>>
>>>> However, when a browser is running, it is often the case that the
>>>> integrated GPU is used, regardless of whether a tab is doing something
>>>> graphics intensive with WebGL or not
>>>> (https://twitter.com/grorgwork/status/823719997616701440).
>>>>
>>>> On Windows a user can influence this through a series of complicated
>>>> steps to designate the preferred GPU, but it is a machine-wide setting,
>>>> and sometimes it was ignored (I don't know if that's still the case).
>>>>
>>>> Obviously this presents a problem for WebGL developers. Neither would
>>>> we want to drain a user's battery unnecessarily, nor would we like to
>>>> force a user with a discrete GPU to receive worse performance should they
>>>> wish to use a graphics intensive WebGL application.
>>>>
>>>> In WebGL1 there was a context creation flag called
>>>> "preferLowPowerToHighPerformance", but I'm not aware of how widely this
>>>> was implemented, and apparently it's also ignored on macOS (because it
>>>> defaults to false, yet the discrete GPU is still not used).
>>>>
>>>> WebGL2 has no equivalent context creation flag.
>>>>
>>>> Questions:
>>>>
>>>> 1. It would seem we have a sufficient mechanism to express a GPU
>>>> preference; is this a correct assessment?
>>>> 2. Why was preferLowPowerToHighPerformance dropped from WebGL2?
>>>> 3. Why is preferLowPowerToHighPerformance ignored for WebGL1 on
>>>> some configurations where it would be most useful?
>>>> 4. Should an additional mechanism be introduced so a user can
>>>> switch between GPUs for individual tabs?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Raf...@ Mon Jan 30 17:58:26 2017
From: Raf...@ (Rafael Cintron)
Date: Tue, 31 Jan 2017 01:58:26 +0000
Subject: [Public WebGL] dual GPU setups default to integrated GPU
In-Reply-To: 
References: 
Message-ID: 

Currently, when Edge enumerates adapters with D3D, it always picks adapter #0 for all content, including WebGL content. By default, adapter #0 is the iGPU on hybrid machines. Users can customize this behavior either by using the custom IHV control panel or by checking the "Use software rendering instead of GPU rendering" checkbox. The latter, of course, will use WARP for rendering.

D3D11 allows you to share textures between D3D11 devices running on the same adapter. However, resource sharing does not work between adapters. Developers that need to migrate resources between adapters can do so by manually copying the data to the CPU from the source adapter and uploading it to a resource of the same type on the destination adapter. Not all resources can be easily read back and restored in this manner.

--Rafael

From: owners-public_webgl...@ [mailto:owners-public_webgl@khronos.org] On Behalf Of Kenneth Russell
Sent: Monday, January 30, 2017 5:45 PM
To: Maksims Mihejevs
Cc: Ben Adams; Florian Bösch; public
Subject: Re: [Public WebGL] dual GPU setups default to integrated GPU

preferLowPowerToHighPerformance would apply equally to WebGL 2.0 as to WebGL 1.0. It wasn't dropped from the spec.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
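For reference, a minimal sketch of how the context creation attribute discussed above is passed in WebGL 1.0; whether a given browser or OS actually honours it is exactly the open question in this thread:

    // Ask for the low-power (typically integrated) GPU where the hint is honoured.
    var gl = canvas.getContext('webgl', {
        preferLowPowerToHighPerformance: true
    });

    // The default is false, but as noted above that does not force the
    // discrete GPU; macOS, for instance, may stay on the integrated GPU anyway.

Whether WebGL 2.0 (canvas.getContext('webgl2', ...)) honours the same attribute is part of the disagreement above: Florian reads the WebGL 2.0 spec as defining no equivalent flag, while Ken states it applies equally to both versions.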
From kbr...@ Mon Jan 30 18:06:05 2017
From: kbr...@ (Kenneth Russell)
Date: Mon, 30 Jan 2017 18:06:05 -0800
Subject: [Public WebGL] dual GPU setups default to integrated GPU
In-Reply-To: 
References: 
Message-ID: 

Thanks Rafael for the feedback. Is there any way (with a public Windows API) to compose D3D textures rendered on the integrated GPU with textures rendered on the discrete GPU? Does DirectComposition support that?

Thanks,

-Ken

On Mon, Jan 30, 2017 at 5:58 PM, Rafael Cintron <Rafael.Cintron...@> wrote:

> D3D11 allows you to share textures between D3D11 devices running on the
> same adapter. However, resource sharing does not work between adapters.
> Developers that need to migrate resources between adapters can do so by
> manually copying the data to the CPU from the source adapter and uploading
> it to a resource of the same type on the destination adapter. Not all
> resources can be easily read back and restored in this manner.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Raf...@ Mon Jan 30 19:07:42 2017
From: Raf...@ (Rafael Cintron)
Date: Tue, 31 Jan 2017 03:07:42 +0000
Subject: [Public WebGL] dual GPU setups default to integrated GPU
In-Reply-To: 
References: 
Message-ID: 

Yes, you can compose D3D textures rendered on the integrated GPU with textures rendered on the discrete GPU using DirectComposition. The Desktop Window Manager (DWM) takes care of doing the copy on your behalf. It also takes care of cases where multiple monitors are connected to different GPUs on the system.

Most hybrid laptops have the output port wired to the iGPU. For many GPU loads, absorbing a copy from the dGPU to the iGPU for output yields better performance than doing rendering+output on the iGPU. Obviously, "better performance" comes at the cost of battery life.

--Rafael

From: Kenneth Russell [mailto:kbr...@]
Sent: Monday, January 30, 2017 6:06 PM
To: Rafael Cintron
Cc: Maksims Mihejevs; Ben Adams; Florian Bösch; public
Subject: Re: [Public WebGL] dual GPU setups default to integrated GPU

Thanks Rafael for the feedback. Is there any way (with a public Windows API) to compose D3D textures rendered on the integrated GPU with textures rendered on the discrete GPU? Does DirectComposition support that?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
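Related to the GPU-selection question in this thread, a minimal sketch of how a page can at least observe which adapter it ended up on, assuming the browser exposes the WEBGL_debug_renderer_info extension (it is not guaranteed to):

    var gl = canvas.getContext('webgl');
    var dbg = gl.getExtension('WEBGL_debug_renderer_info');
    if (dbg) {
        // e.g. an integrated "Intel(R) HD Graphics" string vs. a discrete GPU name
        var vendor = gl.getParameter(dbg.UNMASKED_VENDOR_WEBGL);
        var renderer = gl.getParameter(dbg.UNMASKED_RENDERER_WEBGL);
        console.log(vendor, renderer);
    }

This only reports which adapter was used; it offers no way to request a different one, which is the gap the questions above are about.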