From kbr...@ Sun Jul 1 16:58:53 2018 From: kbr...@ (Ken Russell) Date: Sun, 1 Jul 2018 16:58:53 -0700 Subject: [Public WebGL] Call for Participation: WebGL BOF at SIGGRAPH Message-ID: [cross-posted to webgl-dev-list] WebGL community, Khronos will be hosting the WebGL Birds of a Feather session at SIGGRAPH this year on Wednesday, August 15 at 11:00 AM. We'll give a WebGL ecosystem update, show the latest techniques utilizing WebGL, and take your questions. https://www.khronos.org/news/events/2018-siggraph If you have a brief demo you'd like to show, please email me directly. Looking forward to seeing you at the event! -Ken -------------- next part -------------- An HTML attachment was scrubbed... URL: From rac...@ Mon Jul 2 08:59:23 2018 From: rac...@ (Rachid El Guerrab) Date: Mon, 2 Jul 2018 08:59:23 -0700 Subject: [Public WebGL] Use powerPreference to request high or low power GPUs In-Reply-To: References: <16021C25-8A73-4304-8619-5BCDCEF935B4@apple.com> <1F4D5F96-9973-4C27-8ACB-9ABDF0075D74@apple.com> Message-ID: I second Gregg Tavares's question about what's reported back. How can we tell if we're running with the high-performance option or not? On Thu, Jun 28, 2018 at 7:36 PM Jeff Gilbert wrote: > > I don't know about precisely specifying "default" yet, but I do intend > to implement the background-tab => iGPU behavior. (that sounds great!) > > I think there's other ideas worth investigating for "default": > - shelving a context's dGPU request if it goes idle/stops drawing. > - Only bringing up the dGPU once drawing begins initially. (probes > wouldn't spin up the dGPU) > > I also like the cross-origin idea. > > > On Thu, Jun 28, 2018 at 7:14 PM, Ken Russell wrote: > > On Thu, Jun 28, 2018 at 6:44 PM Dean Jackson wrote: > >> > >> > >> > >> On 29 Jun 2018, at 11:40, Ken Russell wrote: > >> > >> It would be helpful to leave the user agent's defaults loosely > specified, > >> at least for the moment. I'd like to experiment with heuristics to try > to > >> decide a good default for various kinds of content up front. > >> > >> > >> How would this work? You'd decide per site? Size of the canvas? iframe v > >> main page? > >> > >> We considered many of these things but decided it was better to be > >> consistent. > > > > > > Not sure yet; that's why I'd like to experiment. Haven't had the time to > do > > so yet. cross-origin iframe vs. main page seems like one that may have a > > good benefit. > > > > -Ken > > > > > >> Dean > >> > >> > >> > >> > >> On Thu, Jun 28, 2018 at 6:20 PM Dean Jackson wrote: > >>> > >>> > >>> > >>> > >>> > On 29 Jun 2018, at 07:49, Jeff Gilbert wrote: > >>> > > >>> > Initial opt-in "low-power" support for MacOS has landed in Firefox 63 > >>> > (Nightly). > >>> > >>> I assume opt-in means that your default value is "high-performance"? > >>> > >>> I wonder if we should make an effort to be consistent on this, > although I > >>> don't think Apple will want to move away from a "low-power" default. > >>> However, I wouldn't mind pushing for it to be "high-performance" if the > >>> device is connected to a power supply. 
(Downside: assuming you're > already > >>> being tracked, and they know you're on a dual-GPU machine, a website > can now > >>> detect if you're connected to power) > >>> > >>> Dean > >>> > >>> > >>> > > >>> > On Fri, Mar 17, 2017 at 3:04 PM, Dean Jackson > wrote: > >>> >> > >>> >> Hello WebGL community, > >>> >> > >>> >> We recently added powerPreference to the WebGL 1.0 specification, > >>> >> which allows content developers to give a hint as to what type of > GPU they > >>> >> require. > >>> >> > >>> >> https://www.khronos.org/registry/webgl/specs/latest/1.0/#5.2.1 > >>> >> > >>> >> This replaces the old preferLowPowerToHighPerformance which, even > >>> >> though WebKit implemented it, never shipped in a form that actually > changed > >>> >> behaviour. > >>> >> > >>> >> Here's an example. If you're on a macOS system with two GPUs (e.g. a > >>> >> Macbook Pro), you'd request the more powerful and power hungry GPU > using: > >>> >> > >>> >> let gl = canvas.getContext("webgl", { powerPreference: > >>> >> "high-performance" }); > >>> >> > >>> >> Note that, as the specification suggests, it doesn't guarantee > you'll > >>> >> get the GPU, and you'll be at the front of the line if the system > needs to > >>> >> reset some WebGL contexts in order to reclaim system resources. You > MUST > >>> >> have a registered event handler for the webglcontextlost and > >>> >> webglcontextrestored events if you want the user agent to respect > your > >>> >> request for high-performance. > >>> >> > >>> >> WebKit and Safari Technology Preview have implemented this > attribute, > >>> >> so you can try them out now. Some details on the current WebKit > >>> >> implementation: > >>> >> > >>> >> - the default value for powerPreference is equivalent to "low-power" > >>> >> (i.e. we still prioritise power use). > >>> >> - even if you get the discrete GPU, you WILL swap to the integrated > >>> >> GPU if your tab is moved to the background, or the page is hidden. > This > >>> >> shouldn't cause any issues, but please let me know if you think it > is. > >>> >> - similarly, if you request "low-power" you might be swapped to the > >>> >> discrete GPU if another page or system app turns it on. > >>> >> > >>> >> Other browser engines are indicating they'll be implementing this > soon > >>> >> too. The behaviour on other operating systems and hardware might be > slightly > >>> >> different. 
> >>> >> > >>> >> Dean > >>> >> > >>> >> > >>> >> ----------------------------------------------------------- > >>> >> You are currently subscribed to public_webgl...@ > >>> >> To unsubscribe, send an email to majordomo...@ with > >>> >> the following command in the body of your email: > >>> >> unsubscribe public_webgl > >>> >> ----------------------------------------------------------- > >>> >> > >>> > >>> > >>> ----------------------------------------------------------- > >>> You are currently subscribed to public_webgl...@ > >>> To unsubscribe, send an email to majordomo...@ with > >>> the following command in the body of your email: > >>> unsubscribe public_webgl > >>> ----------------------------------------------------------- > >>> > >> > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > -- - rachid -------------- next part -------------- An HTML attachment was scrubbed... URL: From kai...@ Mon Jul 2 12:27:16 2018 From: kai...@ (Kai Ninomiya) Date: Mon, 2 Jul 2018 12:27:16 -0700 Subject: [Public WebGL] Use powerPreference to request high or low power GPUs In-Reply-To: References: <16021C25-8A73-4304-8619-5BCDCEF935B4@apple.com> <1F4D5F96-9973-4C27-8ACB-9ABDF0075D74@apple.com> Message-ID: I think at one point we talked about having getContextAttributes() return only 'high-performance' or 'low-power' (never 'default'), but I don't think this got discussed much or specced. On Mon, Jul 2, 2018 at 9:00 AM Rachid El Guerrab < rachid.el.guerrab...@> wrote: > I second Gregg Tavares's question about what's reported back. > > How can we tell if we're running with the high-performance option or not? > > On Thu, Jun 28, 2018 at 7:36 PM Jeff Gilbert wrote: > >> >> I don't know about precisely specifying "default" yet, but I do intend >> to implement the background-tab => iGPU behavior. (that sounds great!) >> >> I think there's other ideas worth investigating for "default": >> - shelving a context's dGPU request if it goes idle/stops drawing. >> - Only bringing up the dGPU once drawing begins initially. (probes >> wouldn't spin up the dGPU) >> >> I also like the cross-origin idea. >> >> >> On Thu, Jun 28, 2018 at 7:14 PM, Ken Russell wrote: >> > On Thu, Jun 28, 2018 at 6:44 PM Dean Jackson wrote: >> >> >> >> >> >> >> >> On 29 Jun 2018, at 11:40, Ken Russell wrote: >> >> >> >> It would be helpful to leave the user agent's defaults loosely >> specified, >> >> at least for the moment. I'd like to experiment with heuristics to try >> to >> >> decide a good default for various kinds of content up front. >> >> >> >> >> >> How would this work? You'd decide per site? Size of the canvas? iframe >> v >> >> main page? >> >> >> >> We considered many of these things but decided it was better to be >> >> consistent. >> > >> > >> > Not sure yet; that's why I'd like to experiment. Haven't had the time >> to do >> > so yet. cross-origin iframe vs. main page seems like one that may have a >> > good benefit. >> > >> > -Ken >> > >> > >> >> Dean >> >> >> >> >> >> >> >> >> >> On Thu, Jun 28, 2018 at 6:20 PM Dean Jackson wrote: >> >>> >> >>> >> >>> >> >>> >> >>> > On 29 Jun 2018, at 07:49, Jeff Gilbert >> wrote: >> >>> > >> >>> > Initial opt-in "low-power" support for MacOS has landed in Firefox >> 63 >> >>> > (Nightly). 
>> >>> >> >>> I assume opt-in means that your default value is "high-performance"? >> >>> >> >>> I wonder if we should make an effort to be consistent on this, >> although I >> >>> don't think Apple will want to move away from a "low-power" default. >> >>> However, I wouldn't mind pushing for it to be "high-performance" if >> the >> >>> device is connected to a power supply. (Downside: assuming you're >> already >> >>> being tracked, and they know you're on a dual-GPU machine, a website >> can now >> >>> detect if you're connected to power) >> >>> >> >>> Dean >> >>> >> >>> >> >>> > >> >>> > On Fri, Mar 17, 2017 at 3:04 PM, Dean Jackson >> wrote: >> >>> >> >> >>> >> Hello WebGL community, >> >>> >> >> >>> >> We recently added powerPreference to the WebGL 1.0 specification, >> >>> >> which allows content developers to give a hint as to what type of >> GPU they >> >>> >> require. >> >>> >> >> >>> >> https://www.khronos.org/registry/webgl/specs/latest/1.0/#5.2.1 >> >>> >> >> >>> >> This replaces the old preferLowPowerToHighPerformance which, even >> >>> >> though WebKit implemented it, never shipped in a form that >> actually changed >> >>> >> behaviour. >> >>> >> >> >>> >> Here's an example. If you're on a macOS system with two GPUs (e.g. >> a >> >>> >> Macbook Pro), you'd request the more powerful and power hungry GPU >> using: >> >>> >> >> >>> >> let gl = canvas.getContext("webgl", { powerPreference: >> >>> >> "high-performance" }); >> >>> >> >> >>> >> Note that, as the specification suggests, it doesn't guarantee >> you'll >> >>> >> get the GPU, and you'll be at the front of the line if the system >> needs to >> >>> >> reset some WebGL contexts in order to reclaim system resources. >> You MUST >> >>> >> have a registered event handler for the webglcontextlost and >> >>> >> webglcontextrestored events if you want the user agent to respect >> your >> >>> >> request for high-performance. >> >>> >> >> >>> >> WebKit and Safari Technology Preview have implemented this >> attribute, >> >>> >> so you can try them out now. Some details on the current WebKit >> >>> >> implementation: >> >>> >> >> >>> >> - the default value for powerPreference is equivalent to >> "low-power" >> >>> >> (i.e. we still prioritise power use). >> >>> >> - even if you get the discrete GPU, you WILL swap to the integrated >> >>> >> GPU if your tab is moved to the background, or the page is hidden. >> This >> >>> >> shouldn't cause any issues, but please let me know if you think it >> is. >> >>> >> - similarly, if you request "low-power" you might be swapped to the >> >>> >> discrete GPU if another page or system app turns it on. >> >>> >> >> >>> >> Other browser engines are indicating they'll be implementing this >> soon >> >>> >> too. The behaviour on other operating systems and hardware might >> be slightly >> >>> >> different. 
>> >>> >> >> >>> >> Dean >> >>> >> >> >>> >> >> >>> >> ----------------------------------------------------------- >> >>> >> You are currently subscribed to public_webgl...@ >> >>> >> To unsubscribe, send an email to majordomo...@ with >> >>> >> the following command in the body of your email: >> >>> >> unsubscribe public_webgl >> >>> >> ----------------------------------------------------------- >> >>> >> >> >>> >> >>> >> >>> ----------------------------------------------------------- >> >>> You are currently subscribed to public_webgl...@ >> >>> To unsubscribe, send an email to majordomo...@ with >> >>> the following command in the body of your email: >> >>> unsubscribe public_webgl >> >>> ----------------------------------------------------------- >> >>> >> >> >> > >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> >> > > -- > - rachid > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4845 bytes Desc: S/MIME Cryptographic Signature URL: From din...@ Mon Jul 2 17:13:15 2018 From: din...@ (Dean Jackson) Date: Tue, 03 Jul 2018 10:13:15 +1000 Subject: [Public WebGL] Use powerPreference to request high or low power GPUs In-Reply-To: References: <16021C25-8A73-4304-8619-5BCDCEF935B4@apple.com> <1F4D5F96-9973-4C27-8ACB-9ABDF0075D74@apple.com> Message-ID: <1481E1DD-EEBB-4F37-A979-6D8787B798DF@apple.com> > On 3 Jul 2018, at 01:59, Rachid El Guerrab wrote: > > I second Gregg Tavares's question about what's reported back. > > How can we tell if we're running with the high-performance option or not? Why should it matter? A relatively small set of people have dual GPU systems - and most people don't have powerful GPUs. And that's before you consider mobile devices. Also, in Safari on macOS, you don't necessarily get what you ask for anyway. You might ask for low-power but get high-performance because another app (or page) on the system has fired up that GPU. In other words, you have to write your content to work on the average GPU. But if you really have a good reason to know, you can query the GPU vendor string. It would be up to you to decide whether you think that's a high-performance GPU. Dean ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From rac...@ Mon Jul 2 17:37:09 2018 From: rac...@ (Rachid El Guerrab) Date: Mon, 2 Jul 2018 17:37:09 -0700 Subject: [Public WebGL] Use powerPreference to request high or low power GPUs In-Reply-To: <1481E1DD-EEBB-4F37-A979-6D8787B798DF@apple.com> References: <16021C25-8A73-4304-8619-5BCDCEF935B4@apple.com> <1F4D5F96-9973-4C27-8ACB-9ABDF0075D74@apple.com> <1481E1DD-EEBB-4F37-A979-6D8787B798DF@apple.com> Message-ID: Hi Dean, 1) Do you have statistics on how many people run WebGL on laptops with dual cards? Just curious why you think it's a small set.. 2) I get that I can query the vendor string. 
But the webgl committee creates this neat API, and vendors spend time implementing it, to give us some useful abstraction to GPU power, in realtime, which is awesome. And now you're telling me I should ignore all that work and query the string myself? What's the point then?? My content can adapt in many ways if I know I've switched to a lower profile, at the beginning or dynamically. But if i don't know, then what's the point? A message to the user that wouldn't know what to do about it? - Rachid > On Jul 2, 2018, at 5:13 PM, Dean Jackson wrote: > > > >> On 3 Jul 2018, at 01:59, Rachid El Guerrab wrote: >> >> I second Gregg Tavares's question about what's reported back. >> >> How can we tell if we're running with the high-performance option or not? > > Why should it matter? A relatively small set of people have dual GPU systems - and most people don't have powerful GPUs. And that's before you consider mobile devices. > > Also, in Safari on macOS, you don't necessarily get what you ask for anyway. You might ask for low-power but get high-performance because another app (or page) on the system has fired up that GPU. In other words, you have to write your content to work on the average GPU. > > But if you really have a good reason to know, you can query the GPU vendor string. It would be up to you to decide whether you think that's a high-performance GPU. > > Dean > > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From din...@ Mon Jul 2 19:32:34 2018 From: din...@ (Dean Jackson) Date: Tue, 03 Jul 2018 12:32:34 +1000 Subject: [Public WebGL] Use powerPreference to request high or low power GPUs In-Reply-To: References: <16021C25-8A73-4304-8619-5BCDCEF935B4@apple.com> <1F4D5F96-9973-4C27-8ACB-9ABDF0075D74@apple.com> <1481E1DD-EEBB-4F37-A979-6D8787B798DF@apple.com> Message-ID: <19C60680-386E-48C9-AC42-0881000D8E67@apple.com> Hi Rachid, > On 3 Jul 2018, at 10:37, Rachid El Guerrab wrote: > > 1) Do you have statistics on how many people run WebGL on laptops with dual cards? Just curious why you think it's a small set.. As far as I'm aware, the MacBook Pro 15" is the only laptop that has dual GPUs and can dynamically swap between the two (and not all configurations of MacBook Pro 15" have dual GPUs). I'm not familiar with Windows, but when we discussed this in the group I remember hearing that dual-GPU Windows devices can only use one GPU at a time (i.e. it doesn't matter what the content requests because the browser doesn't have a choice). This might change in future versions of Windows. I don't know if Linux handles this configuration at all. For the MacBook Pro case, Apple doesn't release sales data by model, so I'm not sure how popular it is in comparison to MacBooks and MacBook Airs. But I think it is ok to guess that it is a fairly small set, firstly in comparison to the total number of laptop users, then the total number of desktop OS users, then to the total number of users on mobile and desktop. > > 2) I get that I can query the vendor string. > But the webgl committee creates this neat API, and vendors spend time implementing it, to give us some useful abstraction to GPU power, in realtime, which is awesome. > And now you're telling me I should ignore all that work and query the string myself? What's the point then?? 
Would being able to check the actual value we used when creating be enough? In Safari, you do actually end up getting what you want most of the time. However, it can change as the user hides the tab or application. You can detect this by listening for a "webglcontextchanged" event (although I just noticed this never made it into the specification, so it's non-standard :( ) > > My content can adapt in many ways if I know I've switched to a lower profile, at the beginning or dynamically. > > But if i don't know, then what's the point? A message to the user that wouldn't know what to do about it? Let's consider the case of two MacBook Pros - one with a second GPU, one without. The "low-power" GPU on the first is both the "low-power" and "high-performance" GPU on the second. If you decide that your app *really* needs to run on the best GPU, you'd ask for "high-performance". But on that second device, you're not getting a more powerful GPU. So your content has to either: - be designed to run on a wide range of hardware - query the GPU vendor string and hopefully know what that means for your app And this still applies even if there was no way to even request a high or low power GPU, or to older dual-GPU hardware where the high-performance GPU is slower than today's low-power GPU, or actually to any other hardware. I'm not arguing with you btw - just pointing out that it doesn't really matter whether you get one GPU or another. You have to assume the worst unless you're willing to check the vendor string and know what it means to your app. The powerPreference parameter gives the author the ability to indicate that their content is (hopefully) "simple" enough to not need the fastest GPU (e.g. it isn't a full-page game or a cryptocurrency miner). Dean > > - Rachid > >> On Jul 2, 2018, at 5:13 PM, Dean Jackson wrote: >> >> >> >>> On 3 Jul 2018, at 01:59, Rachid El Guerrab wrote: >>> >>> I second Gregg Tavares's question about what's reported back. >>> >>> How can we tell if we're running with the high-performance option or not? >> >> Why should it matter? A relatively small set of people have dual GPU systems - and most people don't have powerful GPUs. And that's before you consider mobile devices. >> >> Also, in Safari on macOS, you don't necessarily get what you ask for anyway. You might ask for low-power but get high-performance because another app (or page) on the system has fired up that GPU. In other words, you have to write your content to work on the average GPU. >> >> But if you really have a good reason to know, you can query the GPU vendor string. It would be up to you to decide whether you think that's a high-performance GPU. 
>> >> Dean >> >> >> >> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From jgi...@ Mon Jul 2 20:12:02 2018 From: jgi...@ (Jeff Gilbert) Date: Mon, 2 Jul 2018 20:12:02 -0700 Subject: [Public WebGL] Use powerPreference to request high or low power GPUs In-Reply-To: <19C60680-386E-48C9-AC42-0881000D8E67@apple.com> References: <16021C25-8A73-4304-8619-5BCDCEF935B4@apple.com> <1F4D5F96-9973-4C27-8ACB-9ABDF0075D74@apple.com> <1481E1DD-EEBB-4F37-A979-6D8787B798DF@apple.com> <19C60680-386E-48C9-AC42-0881000D8E67@apple.com> Message-ID: Cross-adapter sharing is possible on Windows, but only via DirectComposite, which no one leverages yet, to my knowledge. On Mon, Jul 2, 2018 at 7:32 PM, Dean Jackson wrote: > Hi Rachid, > >> On 3 Jul 2018, at 10:37, Rachid El Guerrab wrote: >> >> 1) Do you have statistics on how many people run WebGL on laptops with dual cards? Just curious why you think it's a small set.. > > As far as I'm aware, the MacBook Pro 15" is the only laptop that has dual GPUs and can dynamically swap between the two (and not all configurations of MacBook Pro 15" have dual GPUs). I'm not familiar with Windows, but when we discussed this in the group I remember hearing that dual-GPU Windows devices can only use one GPU at a time (i.e. it doesn't matter what the content requests because the browser doesn't have a choice). This might change in future versions of Windows. I don't know if Linux handles this configuration at all. > > For the MacBook Pro case, Apple doesn't release sales data by model, so I'm not sure how popular it is in comparison to MacBooks and MacBook Airs. > > But I think it is ok to guess that it is a fairly small set, firstly in comparison to the total number of laptop users, then the total number of desktop OS users, then to the total number of users on mobile and desktop. > >> >> 2) I get that I can query the vendor string. >> But the webgl committee creates this neat API, and vendors spend time implementing it, to give us some useful abstraction to GPU power, in realtime, which is awesome. >> And now you're telling me I should ignore all that work and query the string myself? What's the point then?? > > Would being able to check the actual value we used when creating be enough? In Safari, you do actually end up getting what you want most of the time. However, it can change as the user hides the tab or application. You can detect this by listening for a "webglcontextchanged" event (although I just noticed this never made it into the specification, so it's non-standard :( ) > >> >> My content can adapt in many ways if I know I've switched to a lower profile, at the beginning or dynamically. >> >> But if i don't know, then what's the point? A message to the user that wouldn't know what to do about it? > > Let's consider the case of two MacBook Pros - one with a second GPU, one without. The "low-power" GPU on the first is both the "low-power" and "high-performance" GPU on the second. If you decide that your app *really* needs to run on the best GPU, you'd ask for "high-performance". But on that second device, you're not getting a more powerful GPU. 
So your content has to either: > - be designed to run on a wide range of hardware > - query the GPU vendor string and hopefully know what that means for your app > > And this still applies even if there was no way to even request a high or low power GPU, or to older dual-GPU hardware where the high-performance GPU is slower than today's low-power GPU, or actually to any other hardware. > > I'm not arguing with you btw - just pointing out that it doesn't really matter whether you get one GPU or another. You have to assume the worst unless you're willing to check the vendor string and know what it means to your app. The powerPreference parameter gives the author the ability to indicate that their content is (hopefully) "simple" enough to not need the fastest GPU (e.g. it isn't a full-page game or a cryptocurrency miner). > > Dean > > >> >> - Rachid >> >>> On Jul 2, 2018, at 5:13 PM, Dean Jackson wrote: >>> >>> >>> >>>> On 3 Jul 2018, at 01:59, Rachid El Guerrab wrote: >>>> >>>> I second Gregg Tavares's question about what's reported back. >>>> >>>> How can we tell if we're running with the high-performance option or not? >>> >>> Why should it matter? A relatively small set of people have dual GPU systems - and most people don't have powerful GPUs. And that's before you consider mobile devices. >>> >>> Also, in Safari on macOS, you don't necessarily get what you ask for anyway. You might ask for low-power but get high-performance because another app (or page) on the system has fired up that GPU. In other words, you have to write your content to work on the average GPU. >>> >>> But if you really have a good reason to know, you can query the GPU vendor string. It would be up to you to decide whether you think that's a high-performance GPU. >>> >>> Dean >>> >>> >>> >>> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From jri...@ Mon Jul 2 20:44:16 2018 From: jri...@ (James Ritts) Date: Mon, 2 Jul 2018 20:44:16 -0700 Subject: [Public WebGL] Use powerPreference to request high or low power GPUs In-Reply-To: References: <16021C25-8A73-4304-8619-5BCDCEF935B4@apple.com> <1F4D5F96-9973-4C27-8ACB-9ABDF0075D74@apple.com> <1481E1DD-EEBB-4F37-A979-6D8787B798DF@apple.com> <19C60680-386E-48C9-AC42-0881000D8E67@apple.com> Message-ID: > So your content has to be designed to run on a wide range of hardware For exactly these cases... > Would being able to check the actual value we used when creating be enough? ...could knowing (a) what profile was actually used at init time and (b) what profile is active after a switch potentially be useful, at least as a fall-through, if a site can't match the vendor string to some known pattern? On Mon, Jul 2, 2018 at 8:12 PM, Jeff Gilbert wrote: > Cross-adapter sharing is possible on Windows, but only via > DirectComposite, which no one leverages yet, to my knowledge. > > On Mon, Jul 2, 2018 at 7:32 PM, Dean Jackson wrote: > > Hi Rachid, > > > >> On 3 Jul 2018, at 10:37, Rachid El Guerrab > wrote: > >> > >> 1) Do you have statistics on how many people run WebGL on laptops with > dual cards? Just curious why you think it's a small set.. > > > > As far as I'm aware, the MacBook Pro 15" is the only laptop that has > dual GPUs and can dynamically swap between the two (and not all > configurations of MacBook Pro 15" have dual GPUs). 
I'm not familiar with > Windows, but when we discussed this in the group I remember hearing that > dual-GPU Windows devices can only use one GPU at a time (i.e. it doesn't > matter what the content requests because the browser doesn't have a > choice). This might change in future versions of Windows. I don't know if > Linux handles this configuration at all. > > > > For the MacBook Pro case, Apple doesn't release sales data by model, so > I'm not sure how popular it is in comparison to MacBooks and MacBook Airs. > > > > But I think it is ok to guess that it is a fairly small set, firstly in > comparison to the total number of laptop users, then the total number of > desktop OS users, then to the total number of users on mobile and desktop. > > > >> > >> 2) I get that I can query the vendor string. > >> But the webgl committee creates this neat API, and vendors spend time > implementing it, to give us some useful abstraction to GPU power, in > realtime, which is awesome. > >> And now you're telling me I should ignore all that work and query the > string myself? What's the point then?? > > > > Would being able to check the actual value we used when creating be > enough? In Safari, you do actually end up getting what you want most of the > time. However, it can change as the user hides the tab or application. You > can detect this by listening for a "webglcontextchanged" event (although I > just noticed this never made it into the specification, so it's > non-standard :( ) > > > >> > >> My content can adapt in many ways if I know I've switched to a lower > profile, at the beginning or dynamically. > >> > >> But if i don't know, then what's the point? A message to the user that > wouldn't know what to do about it? > > > > Let's consider the case of two MacBook Pros - one with a second GPU, one > without. The "low-power" GPU on the first is both the "low-power" and > "high-performance" GPU on the second. If you decide that your app *really* > needs to run on the best GPU, you'd ask for "high-performance". But on that > second device, you're not getting a more powerful GPU. So your content has > to either: > > - be designed to run on a wide range of hardware > > - query the GPU vendor string and hopefully know what that means for > your app > > > > And this still applies even if there was no way to even request a high > or low power GPU, or to older dual-GPU hardware where the high-performance > GPU is slower than today's low-power GPU, or actually to any other hardware. > > > > I'm not arguing with you btw - just pointing out that it doesn't really > matter whether you get one GPU or another. You have to assume the worst > unless you're willing to check the vendor string and know what it means to > your app. The powerPreference parameter gives the author the ability to > indicate that their content is (hopefully) "simple" enough to not need the > fastest GPU (e.g. it isn't a full-page game or a cryptocurrency miner). > > > > Dean > > > > > >> > >> - Rachid > >> > >>> On Jul 2, 2018, at 5:13 PM, Dean Jackson wrote: > >>> > >>> > >>> > >>>> On 3 Jul 2018, at 01:59, Rachid El Guerrab < > rachid.el.guerrab...@> wrote: > >>>> > >>>> I second Gregg Tavares's question about what's reported back. > >>>> > >>>> How can we tell if we're running with the high-performance option or > not? > >>> > >>> Why should it matter? A relatively small set of people have dual GPU > systems - and most people don't have powerful GPUs. And that's before you > consider mobile devices. 
> >>> > >>> Also, in Safari on macOS, you don't necessarily get what you ask for > anyway. You might ask for low-power but get high-performance because > another app (or page) on the system has fired up that GPU. In other words, > you have to write your content to work on the average GPU. > >>> > >>> But if you really have a good reason to know, you can query the GPU > vendor string. It would be up to you to decide whether you think that's a > high-performance GPU. > >>> > >>> Dean > >>> > >>> > >>> > >>> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rac...@ Tue Jul 3 06:26:15 2018 From: rac...@ (Rachid El Guerrab) Date: Tue, 3 Jul 2018 06:26:15 -0700 Subject: [Public WebGL] Use powerPreference to request high or low power GPUs In-Reply-To: References: <16021C25-8A73-4304-8619-5BCDCEF935B4@apple.com> <1F4D5F96-9973-4C27-8ACB-9ABDF0075D74@apple.com> <1481E1DD-EEBB-4F37-A979-6D8787B798DF@apple.com> <19C60680-386E-48C9-AC42-0881000D8E67@apple.com> Message-ID: Hi Dean, Thanks for the explanation. I'm still a bit confused as to the intent here. So please bear with me :-) When you conceived of this update, was the idea that the trend will be more dual GPUs? If a system only has the integrated card, does it mean it'll only create contexts that ask for "low-power", no matter what the performance of the GPU is? or are there more considerations? Are you just looking to know if a context doesn't need full rendering performance and therefore would be fine if pushed to the integrated GPU? Is this more helpful to the system as a whole and not useful for the specific content? And what system decides to switch the context to a lower profile? the browser? the OS? Outside of the tab hidden, and maybe "low battery" on the host computer, do you know of other cases where the context might be switched from high performance to low? Thank you, -Rachid On Mon, Jul 2, 2018 at 8:44 PM James Ritts wrote: > > So your content has to be designed to run on a wide range of hardware > > For exactly these cases... > > > Would being able to check the actual value we used when creating be > enough? > > ...could knowing (a) what profile was actually used at init time and (b) > what profile is active after a switch potentially be useful, at least as a > fall-through, if a site can't match the vendor string to some known pattern? > > > On Mon, Jul 2, 2018 at 8:12 PM, Jeff Gilbert wrote: > >> Cross-adapter sharing is possible on Windows, but only via >> DirectComposite, which no one leverages yet, to my knowledge. >> >> On Mon, Jul 2, 2018 at 7:32 PM, Dean Jackson wrote: >> > Hi Rachid, >> > >> >> On 3 Jul 2018, at 10:37, Rachid El Guerrab < >> rachid.el.guerrab...@> wrote: >> >> >> >> 1) Do you have statistics on how many people run WebGL on laptops with >> dual cards? Just curious why you think it's a small set.. >> > >> > As far as I'm aware, the MacBook Pro 15" is the only laptop that has >> dual GPUs and can dynamically swap between the two (and not all >> configurations of MacBook Pro 15" have dual GPUs). I'm not familiar with >> Windows, but when we discussed this in the group I remember hearing that >> dual-GPU Windows devices can only use one GPU at a time (i.e. it doesn't >> matter what the content requests because the browser doesn't have a >> choice). This might change in future versions of Windows. I don't know if >> Linux handles this configuration at all. 
>> > >> > For the MacBook Pro case, Apple doesn't release sales data by model, so >> I'm not sure how popular it is in comparison to MacBooks and MacBook Airs. >> > >> > But I think it is ok to guess that it is a fairly small set, firstly in >> comparison to the total number of laptop users, then the total number of >> desktop OS users, then to the total number of users on mobile and desktop. >> > >> >> >> >> 2) I get that I can query the vendor string. >> >> But the webgl committee creates this neat API, and vendors spend time >> implementing it, to give us some useful abstraction to GPU power, in >> realtime, which is awesome. >> >> And now you're telling me I should ignore all that work and query the >> string myself? What's the point then?? >> > >> > Would being able to check the actual value we used when creating be >> enough? In Safari, you do actually end up getting what you want most of the >> time. However, it can change as the user hides the tab or application. You >> can detect this by listening for a "webglcontextchanged" event (although I >> just noticed this never made it into the specification, so it's >> non-standard :( ) >> > >> >> >> >> My content can adapt in many ways if I know I've switched to a lower >> profile, at the beginning or dynamically. >> >> >> >> But if i don't know, then what's the point? A message to the user that >> wouldn't know what to do about it? >> > >> > Let's consider the case of two MacBook Pros - one with a second GPU, >> one without. The "low-power" GPU on the first is both the "low-power" and >> "high-performance" GPU on the second. If you decide that your app *really* >> needs to run on the best GPU, you'd ask for "high-performance". But on that >> second device, you're not getting a more powerful GPU. So your content has >> to either: >> > - be designed to run on a wide range of hardware >> > - query the GPU vendor string and hopefully know what that means for >> your app >> > >> > And this still applies even if there was no way to even request a high >> or low power GPU, or to older dual-GPU hardware where the high-performance >> GPU is slower than today's low-power GPU, or actually to any other hardware. >> > >> > I'm not arguing with you btw - just pointing out that it doesn't really >> matter whether you get one GPU or another. You have to assume the worst >> unless you're willing to check the vendor string and know what it means to >> your app. The powerPreference parameter gives the author the ability to >> indicate that their content is (hopefully) "simple" enough to not need the >> fastest GPU (e.g. it isn't a full-page game or a cryptocurrency miner). >> > >> > Dean >> > >> > >> >> >> >> - Rachid >> >> >> >>> On Jul 2, 2018, at 5:13 PM, Dean Jackson wrote: >> >>> >> >>> >> >>> >> >>>> On 3 Jul 2018, at 01:59, Rachid El Guerrab < >> rachid.el.guerrab...@> wrote: >> >>>> >> >>>> I second Gregg Tavares's question about what's reported back. >> >>>> >> >>>> How can we tell if we're running with the high-performance option or >> not? >> >>> >> >>> Why should it matter? A relatively small set of people have dual GPU >> systems - and most people don't have powerful GPUs. And that's before you >> consider mobile devices. >> >>> >> >>> Also, in Safari on macOS, you don't necessarily get what you ask for >> anyway. You might ask for low-power but get high-performance because >> another app (or page) on the system has fired up that GPU. In other words, >> you have to write your content to work on the average GPU. 
>> >>> >> >>> But if you really have a good reason to know, you can query the GPU >> vendor string. It would be up to you to decide whether you think that's a >> high-performance GPU. >> >>> >> >>> Dean >> >>> >> >>> >> >>> >> >>> >> > >> > > -- - rachid -------------- next part -------------- An HTML attachment was scrubbed... URL: From kai...@ Tue Jul 3 11:42:22 2018 From: kai...@ (Kai Ninomiya) Date: Tue, 3 Jul 2018 11:42:22 -0700 Subject: [Public WebGL] Use powerPreference to request high or low power GPUs In-Reply-To: References: <16021C25-8A73-4304-8619-5BCDCEF935B4@apple.com> <1F4D5F96-9973-4C27-8ACB-9ABDF0075D74@apple.com> <1481E1DD-EEBB-4F37-A979-6D8787B798DF@apple.com> <19C60680-386E-48C9-AC42-0881000D8E67@apple.com> Message-ID: If a system has only an integrated card, it will always get the integrated card regardless of power preference. Power preference won't prevent the context from being created, AFAIK. On Tue, Jul 3, 2018 at 6:27 AM Rachid El Guerrab < rachid.el.guerrab...@> wrote: > Hi Dean, > > Thanks for the explanation. > > I'm still a bit confused as to the intent here. So please bear with me :-) > > When you conceived of this update, was the idea that the trend will be > more dual GPUs? > > If a system only has the integrated card, does it mean it'll only create > contexts that ask for "low-power", no matter what the performance of the > GPU is? or are there more considerations? > > Are you just looking to know if a context doesn't need full rendering > performance and therefore would be fine if pushed to the integrated GPU? Is > this more helpful to the system as a whole and not useful for the specific > content? > > And what system decides to switch the context to a lower profile? the > browser? the OS? > > Outside of the tab hidden, and maybe "low battery" on the host computer, > do you know of other cases where the context might be switched from high > performance to low? > > Thank you, > > -Rachid > > > On Mon, Jul 2, 2018 at 8:44 PM James Ritts wrote: > >> > So your content has to be designed to run on a wide range of hardware >> >> For exactly these cases... >> >> > Would being able to check the actual value we used when creating be >> enough? >> >> ...could knowing (a) what profile was actually used at init time and (b) >> what profile is active after a switch potentially be useful, at least as a >> fall-through, if a site can't match the vendor string to some known pattern? >> >> >> On Mon, Jul 2, 2018 at 8:12 PM, Jeff Gilbert >> wrote: >> >>> Cross-adapter sharing is possible on Windows, but only via >>> DirectComposite, which no one leverages yet, to my knowledge. >>> >>> On Mon, Jul 2, 2018 at 7:32 PM, Dean Jackson wrote: >>> > Hi Rachid, >>> > >>> >> On 3 Jul 2018, at 10:37, Rachid El Guerrab < >>> rachid.el.guerrab...@> wrote: >>> >> >>> >> 1) Do you have statistics on how many people run WebGL on laptops >>> with dual cards? Just curious why you think it's a small set.. >>> > >>> > As far as I'm aware, the MacBook Pro 15" is the only laptop that has >>> dual GPUs and can dynamically swap between the two (and not all >>> configurations of MacBook Pro 15" have dual GPUs). I'm not familiar with >>> Windows, but when we discussed this in the group I remember hearing that >>> dual-GPU Windows devices can only use one GPU at a time (i.e. it doesn't >>> matter what the content requests because the browser doesn't have a >>> choice). This might change in future versions of Windows. I don't know if >>> Linux handles this configuration at all. 
>>> > >>> > For the MacBook Pro case, Apple doesn't release sales data by model, >>> so I'm not sure how popular it is in comparison to MacBooks and MacBook >>> Airs. >>> > >>> > But I think it is ok to guess that it is a fairly small set, firstly >>> in comparison to the total number of laptop users, then the total number of >>> desktop OS users, then to the total number of users on mobile and desktop. >>> > >>> >> >>> >> 2) I get that I can query the vendor string. >>> >> But the webgl committee creates this neat API, and vendors spend time >>> implementing it, to give us some useful abstraction to GPU power, in >>> realtime, which is awesome. >>> >> And now you're telling me I should ignore all that work and query the >>> string myself? What's the point then?? >>> > >>> > Would being able to check the actual value we used when creating be >>> enough? In Safari, you do actually end up getting what you want most of the >>> time. However, it can change as the user hides the tab or application. You >>> can detect this by listening for a "webglcontextchanged" event (although I >>> just noticed this never made it into the specification, so it's >>> non-standard :( ) >>> > >>> >> >>> >> My content can adapt in many ways if I know I've switched to a lower >>> profile, at the beginning or dynamically. >>> >> >>> >> But if i don't know, then what's the point? A message to the user >>> that wouldn't know what to do about it? >>> > >>> > Let's consider the case of two MacBook Pros - one with a second GPU, >>> one without. The "low-power" GPU on the first is both the "low-power" and >>> "high-performance" GPU on the second. If you decide that your app *really* >>> needs to run on the best GPU, you'd ask for "high-performance". But on that >>> second device, you're not getting a more powerful GPU. So your content has >>> to either: >>> > - be designed to run on a wide range of hardware >>> > - query the GPU vendor string and hopefully know what that means for >>> your app >>> > >>> > And this still applies even if there was no way to even request a high >>> or low power GPU, or to older dual-GPU hardware where the high-performance >>> GPU is slower than today's low-power GPU, or actually to any other hardware. >>> > >>> > I'm not arguing with you btw - just pointing out that it doesn't >>> really matter whether you get one GPU or another. You have to assume the >>> worst unless you're willing to check the vendor string and know what it >>> means to your app. The powerPreference parameter gives the author the >>> ability to indicate that their content is (hopefully) "simple" enough to >>> not need the fastest GPU (e.g. it isn't a full-page game or a >>> cryptocurrency miner). >>> > >>> > Dean >>> > >>> > >>> >> >>> >> - Rachid >>> >> >>> >>> On Jul 2, 2018, at 5:13 PM, Dean Jackson wrote: >>> >>> >>> >>> >>> >>> >>> >>>> On 3 Jul 2018, at 01:59, Rachid El Guerrab < >>> rachid.el.guerrab...@> wrote: >>> >>>> >>> >>>> I second Gregg Tavares's question about what's reported back. >>> >>>> >>> >>>> How can we tell if we're running with the high-performance option >>> or not? >>> >>> >>> >>> Why should it matter? A relatively small set of people have dual GPU >>> systems - and most people don't have powerful GPUs. And that's before you >>> consider mobile devices. >>> >>> >>> >>> Also, in Safari on macOS, you don't necessarily get what you ask for >>> anyway. You might ask for low-power but get high-performance because >>> another app (or page) on the system has fired up that GPU. 
In other words, >>> you have to write your content to work on the average GPU. >>> >>> >>> >>> But if you really have a good reason to know, you can query the GPU >>> vendor string. It would be up to you to decide whether you think that's a >>> high-performance GPU. >>> >>> >>> >>> Dean >>> >>> >>> >>> >>> >>> >>> >>> >>> > >>> >> >> > > -- > - rachid > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4845 bytes Desc: S/MIME Cryptographic Signature URL: From khr...@ Wed Jul 4 23:33:12 2018 From: khr...@ (Gregg Tavares) Date: Thu, 5 Jul 2018 15:33:12 +0900 Subject: [Public WebGL] the commit API on the OffscreenCanvas Message-ID: I'm not sure where to bring this up but I've been trying for a couple of weeks in other places and getting zero feedback sooo I am hoping you guys in charge of things will take a few minutes and read this and take some time to thoughtfully respond. It's possible I don't understand how OffscreenCanvas is supposed to work. I've read the spec and written several tests and a couple of short examples and this is my understanding. There are basically 2 ways to use it. They are documented at MDN. https://developer.mozilla.org/en-US/docs/Web/API/OffscreenCanvas One is listed as "*Synchronous display of frames produced by an OffscreenCanvas*". It involves using "offscreenCanvas.transferToImageBitmap" inside the worker, transferring that bitmap back to the main thread, and calling bitmapContext.transferImageBitmap. This API makes sense to me. If you want to synchronize DOM updates with WebGL updates then you need to make sure both get updated at the same time. Like say you have an HTML label over a moving 3D object. The other is listed as "*Asynchronous display of frames produced by an OffscreenCanvas*". In that case you just call `*gl.commit*` inside the worker and the canvas back on the page will be updated. This is arguably the more common use case. The majority of WebGL and three.js apps etc would use this method. The example on MDN shows sending a message to the worker each time you want it to render. Testing that on Chrome seems to work but it currently has a significant performance penalty. Recently 2 more things were added. One is that *requestAnimationFrame* was added to workers. The other is the *commit as been changed to be a synchronous* function. The worker freezes until the frame has been displayed. It's these last 2 things I don't understand. *First:* given that rAF is now available in workers I would think this is valid code // in worker function loop() { render(); requestAnimationFrame(loop); gl.commit(); } loop(); onmessage = function() { // get messages related to say camera position or // window size or mouse position etc to affect rendering }; Unfortunately testing it out in Chrome this doesn't work. The `onmessage` callback is never called regardless of how many messages are sent. I filed a bug. Was told "WONTFIX: working as intended" Really? Is that really the intent of the spec? Apple? Mozilla? Microsoft? Do you agree that the code above is not a supported use case and is working as intended? 
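(For reference, the main-thread side I'm testing against is just the plain transfer pattern below; the worker file name and message shapes are placeholders of mine, not anything from the spec. The point is that the worker's onmessage handler above is the only channel any of this input can arrive on.)

// in main page (sketch)
const canvas = document.querySelector('canvas');
const offscreen = canvas.transferControlToOffscreen();
const worker = new Worker('render-worker.js');
// hand ownership of the canvas to the worker...
worker.postMessage({type: 'init', canvas: offscreen}, [offscreen]);
// ...then keep feeding it input as messages
window.addEventListener('mousemove', function(e) {
    worker.postMessage({type: 'mouse', x: e.clientX, y: e.clientY});
});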
*Second:* other events and callbacks don't work

// in worker
fetch('someimage.png', {mode: 'cors'}).then(function(response) {
    return response.blob();
}).then(function(blob) {
    return createImageBitmap(blob);
}).then(function(bitmap) {
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA,
                  gl.UNSIGNED_BYTE, bitmap);
});

function loop() {
    render();
    requestAnimationFrame(loop);
    gl.commit();
}
loop();

This also does not work. The *fetch* response never comes. My guess is that this is because in Chrome *commit* blocks and the rAF event gets pushed to the top of the event queue, so no other events ever get processed. The spec has nothing to say about this. Is this supposed to work? It seems like a valid use case. Note that switching the end of the loop to

gl.commit();
requestAnimationFrame(loop);

also does not work.

Is it correct that this should not work? I guess I don't really understand the point of having rAF in a worker if these use cases are not supposed to work. Are they? If they are not supposed to work, can someone please explain rAF's use case in a worker?

*Third:* according to various comments around the specs, one use case is a spin loop on gl.commit for WebAssembly ports. Effectively, this is supposed to work:

while (true) {
    render();
    gl.commit();
}

But I don't understand how this is useful given that no events come in if you do that. You can't communicate with the worker. The worker can't load files, call fetch, get a websocket message, receive input passed in from the main thread, or do anything except render.

Maybe people are thinking SharedArrayBuffers are a way to pass data into such a loop, but really? How would you pass in an image? As it is, you'd have to write your own decoder, since you can't get the raw data losslessly out of an image from any web API and you can't transfer images into the worker (since it's not listening for messages). Then you'd need to somehow parse the image yourself and copy it into a SharedArrayBuffer. That would be a very slow, jank-inducing process on the main thread. So now it seems like the spec is saying that to use a gl.commit spin loop you need 2 workers (one for rendering, one for loading images and other things), plus 1 or more SharedArrayBuffers, and you have to implement a bunch of synchronization code, just so you can use WebGL in a worker with this pattern mentioned in the spec?

Is that really the intent? Is there something I'm missing? This seems like a platform-breaking API. Use it and the entire rest of the platform becomes unusable without major amounts of code. If I'm wrong I'm happy to be corrected.

*Four:* Non-front tabs: rAF is currently not delivered if the page is not the front tab, which is great, but rAF is an event, so even when rAF stops firing because the page is not the front tab, other events still arrive (fetch, onmessage, XHR, websockets, etc.). This means that even though your page doesn't get a rAF callback, it can still process incoming data (like your chat app's messages).

How is that supposed to work with `gl.commit` loops? It's not the front tab, so you want to block the commit so the worker doesn't spin and waste time. If the worker locks, then that seems to have implications for all other associated workers and the main thread. If you're using Atomics to sync things up, suddenly they'll fail indefinitely, further complicating all the code you have to write to use this feature.

Chrome has already committed to shipping the API.
The code as been committed so if nothing changes it will ship automatically in a few weeks with all the issues mentioned above not behind a flag but live so it seems important to understand how to use this and if all these issues were considered and what their solutions are. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Thu Jul 5 13:16:21 2018 From: kbr...@ (Ken Russell) Date: Thu, 5 Jul 2018 13:16:21 -0700 Subject: [Public WebGL] Re: the commit API on the OffscreenCanvas In-Reply-To: References: Message-ID: On Wed, Jul 4, 2018 at 11:33 PM Gregg Tavares wrote: > I'm not sure where to bring this up but I've been trying for a couple of > weeks in other places and getting zero feedback sooo I am hoping you guys > in charge of things will take a few minutes and read this and take some > time to thoughtfully respond. > > It's possible I don't understand how OffscreenCanvas is supposed to work. > I've read the spec and written several tests and a couple of short examples > and this is my understanding. > > There are basically 2 ways to use it. They are documented at MDN. > > https://developer.mozilla.org/en-US/docs/Web/API/OffscreenCanvas > > One is listed as "*Synchronous display of frames produced by an > OffscreenCanvas*". It involves using > "offscreenCanvas.transferToImageBitmap" inside the worker, transferring > that bitmap back to the main thread, and calling > bitmapContext.transferImageBitmap. This API makes sense to me. If you want > to synchronize DOM updates with WebGL updates then you need to make sure > both get updated at the same time. Like say you have an HTML label over a > moving 3D object. > > The other is listed as "*Asynchronous display of frames produced by an > OffscreenCanvas*". In that case you just call `*gl.commit*` inside the > worker and the canvas back on the page will be updated. This is arguably > the more common use case. The majority of WebGL and three.js apps etc would > use this method. The example on MDN shows sending a message to the worker > each time you want it to render. Testing that on Chrome seems to work but > it currently has a significant performance penalty. > > Recently 2 more things were added. One is that *requestAnimationFrame* > was added to workers. The other is the *commit as been changed to be a > synchronous* function. The worker freezes until the frame has been > displayed. > Hi Gregg, requestAnimationFrame on workers was added as a result of feedback from the W3C TAG. It provides a way to animate and implicitly commit frames in the same way as with HTMLCanvasElement on the main thread. It replaces the use of setTimeout() on web workers for animating OffscreenCanvases, and provides a unified mechanism to allow VR headsets to animate at higher framerates than typical monitors. It's these last 2 things I don't understand. > > *First:* given that rAF is now available in workers I would think this is > valid code > > // in worker > function loop() { > render(); > requestAnimationFrame(loop); > gl.commit(); > } > loop(); > > onmessage = function() { > // get messages related to say camera position or > // window size or mouse position etc to affect rendering > }; > > Unfortunately testing it out in Chrome this doesn't work. The `onmessage` > callback is never called regardless of how many messages are sent. I filed > a bug. Was told "WONTFIX: working as intended" > In the current semantics it's an error to call commit() from inside a requestAnimationFrame callback on a worker. 
The spec and implementations should be changed to throw an exception from commit() in this case. I updated your samples in your Chromium bug report http://crbug.com/859275 to remove the call to commit() from within the rAF callback and they work very well. No flickering, and work exactly as you intended. Also replied to your same questions on https://github.com/w3ctag/design-reviews/issues/141 . > Really? Is that really the intent of the spec? Apple? Mozilla? Microsoft? > Do you agree that the code above is not a supported use case and is working > as intended? > > *Second:* other events and callbacks don't work > > // in worker > fetch('someimage.png', {mode:'cors'}).then(function(response) { > return response.blob(); > }).then(function(blob) { > return createImageBitmap(response.blob()); > }).then(function(bitmap) { > gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, > gl.UNSIGNED_BYTE, bitmap); > }); > > function loop() { > render(); > requestAnimationFrame(loop); > gl.commit(); > } > loop(); > > This also does not work. The *fetch* response never comes. My guess is > this is because in Chrome *commit* blocks and rAF event gets pushed to > the top of the event queue so no other events ever get processed. The spec > has nothing to say about this. Is this supposed to work? It seems like a > valid use case. Note that switching the end of loop to > > gl.commit(); > requestAnimationFrame(loop); > > also does not work. > > Is that correct that it should not work? I guess I don't really understand > the point of having rAF in worker if these use cases are not supposed to > work. Are they? If they are not supposed to work can someone please explain > rAF's use case in a worker? > rAF in a worker replaces the use of commit(). Another alternative to animating in a worker would be setTimeout(), but now that rAF is present in workers, it's the best alternative. *Third*, according to various comments around the specs one use case is a > spin loop on gl.commit for webassembly ports. Effectively this is supposed > to work > > while(true) { > render(); > gl.commit(); > } > > But I don't understand how this is useful given that no events come in if > you do that. You can't communicate with the worker. The worker can't load > files or call fetch or get a websocket message or receive input passed in > from the main thread or do anything except render. > > Maybe people are thinking SharedArrayBuffers are a way to pass in data to > such a loop but really? How would you pass in an image? As it is you'd have > write your own decoder since you can't get the raw data losslessly out of > an image from any web APIs and you can't transfer images into the worker > (since it's not listening for messages) then you'd need to some how parse > the image yourself and copy it into a sharedarraybuffer. That would a very > slow jank inducing process in the main thread so now it seems like the spec > is saying to use a gl.commit spin loop you need 2 workers, one for > rendering, one for loading images and other things and then you need 1 or > more SharedArrayBuffers and you have to implement a bunch of > synchronization stuff just so you can use WebGL in a worker using this > pattern mentioned in the spec? > > Is that really the intent? Is there something I'm missing? This seems like > a platform breaking API. Use it and the entire rest of the platform becomes > unusable without major amounts of code. > > If I'm wrong I'm happy to be corrected. 
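Before addressing the spin loop: to make the rAF-only pattern concrete, here is roughly what your first example reduces to once commit() is pulled out of the rAF callback. Treat this as a sketch of the updated samples, not as spec text:

// in worker: animate with requestAnimationFrame only. The frame is
// committed implicitly when the callback returns, so no explicit
// gl.commit() is needed here (and calling it inside the callback
// should eventually throw).
function loop() {
    render();
    requestAnimationFrame(loop);
}
loop();

onmessage = function(e) {
    // camera / resize / input messages now get a chance to run between frames
};

As for the while(true) + gl.commit() spin loop itself: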
commit() is mainly intended to support compiling multithreaded programs to WebAssembly. The C language's threading model is that threads start up from a start function and only return from it when the thread exits. We are trying to get real use cases working which transfer all data in to these rendering threads via the C heap from other threads. commit() and its blocking behavior are required in order to reach parity with how native platforms work in this scenario.

> *Four:* Non front tabs: rAF is currently not delivered if the page is not the front tab, which is great, but rAF is an event, so even when rAF stops firing because the page is not on the front tab other events still arrive (fetch, onmessage, XHR, websockets, etc...). This means even though your page doesn't get a rAF callback it can still process incoming data (like your chat app's messages).
>
> How is that supposed to work with `gl.commit` loops? It's not the front tab so you want to block the commit so the worker doesn't spin and waste time. If the worker locks then that seems to have implications for all other associated workers and the main thread. If you're using Atomics to sync things up, suddenly they'll fail indefinitely, further complicating all the code you have to write to use this feature.

I think that ideally commit() would block until the tab comes back to the foreground, to minimize CPU usage. However, if that turns out to be suboptimal for some use cases, we could consider throttling commit(), to essentially block for some time period and then return control to the worker.

-Ken

> Chrome has already committed to shipping the API. The code has been committed, so if nothing changes it will ship automatically in a few weeks with all the issues mentioned above not behind a flag but live. It seems important to understand how to use this, whether all these issues were considered, and what their solutions are.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Raf...@ Thu Jul 5 16:09:09 2018
From: Raf...@ (Rafael Cintron)
Date: Thu, 5 Jul 2018 23:09:09 +0000
Subject: [Public WebGL] Use powerPreference to request high or low power GPUs
In-Reply-To:
References: <16021C25-8A73-4304-8619-5BCDCEF935B4@apple.com> <1F4D5F96-9973-4C27-8ACB-9ABDF0075D74@apple.com> <1481E1DD-EEBB-4F37-A979-6D8787B798DF@apple.com> <19C60680-386E-48C9-AC42-0881000D8E67@apple.com>
Message-ID:

There have been a few misconceptions about how Windows works on this thread. I'd like to clarify these.

On Windows, applications can enumerate adapters (GPUs) on the system and render using any of the ones in the list. Unlike macOS, Windows does not automatically move your content between adapters behind your back. If you want to switch your drawing to occur on a different adapter, you will need to take care of reading back, or re-creating, the resources yourself on the new adapter.

On Windows, there is no such thing as "cross adapter sharing". You cannot allocate a texture on one adapter and render to it using another adapter. So you cannot allocate a texture in an Nvidia GPU's VRAM and render to it using the Intel GPU, or vice versa. You can only share resources between D3D devices created on the same adapter.

The Desktop Window Manager (DWM) is responsible for composing content rendered with multiple adapters on the system. It must also abide by the "no cross adapter sharing" rule. If you draw your content to a swap chain created on adapter A and the user moves your application's window to a monitor connected to adapter B, DWM will copy the output of your application from adapter A to adapter B behind your back and texture from adapter B's copy.

You can use DirectComposition to create a tree of "visuals", each with its own texture, and have the DWM compose the tree of visuals for you instead of having a swap chain for the application window. The textures in the visual tree can come from different adapters. But this doesn't prevent the copies the DWM has to perform if there is a mismatch between the rendering adapter and the output adapter. Edge has been using DirectComposition and its successor, Windows.UI.Composition, for multiple releases. I believe Chrome uses DirectComposition to render some content.

On most hybrid laptops, the output ports are directly connected to the integrated GPU (iGPU). The discrete GPU (dGPU) sits off to the side. If you draw using the dGPU, the output texture/swap chain must be copied, through system memory, to the iGPU before you see it on your laptop screen. Usually, the dGPU is much faster than the iGPU, so the copy is worth it, or can happen while you draw another frame. I am told there are some gaming laptops where it's reversed and the output ports are directly connected to the dGPU and the iGPU is the one that sits off to the side, but this case is pretty rare.

I worry when people ask that browsers render "just WebGL" using the high performance adapter and keep "everything else" on the low performance adapter. This is fine if the inputs and outputs of WebGL are all contained in their own island. But, as we know, Web developers can upload images, SVG content, canvas elements, ImageBitmaps and videos to WebGL textures. In a dual GPU rendering case, that content has to be transferred between adapters, through system memory, before it can be used by WebGL. I am not too concerned about images or other static content. I do, however, worry about the 4K floating-point-pixel HDR 360 video being transferred every frame. I suppose we can keep content on both GPUs and heuristically determine which GPU is being used more often for the video case, or decode in both places perhaps. But, in the meantime, web developers that ask for "high performance" may be in for a surprise on some hardware.

--Rafael
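To make the video concern concrete, the per-frame upload being described looks roughly like this on the WebGL side (a sketch; the element ids are made up, and whether each upload implies a cross-adapter copy is decided entirely by the OS and browser, not by the page):

// Re-uploading the current video frame into a WebGL texture every frame.
// If the context lives on the dGPU and the video is decoded on the iGPU,
// each upload may involve a copy through system memory.
const gl = document.querySelector('#c').getContext('webgl');   // illustrative ids
const video = document.querySelector('#hdr360video');
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

function frame() {
  gl.bindTexture(gl.TEXTURE_2D, tex);
  // video elements are accepted directly as a texImage2D source
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
  // ... draw with the texture ...
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);

Where any copy happens is invisible to the page, which is the point being made here.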
From: owners-public_webgl...@ On Behalf Of Kai Ninomiya
Sent: Tuesday, July 3, 2018 11:42 AM
To: Rachid El Guerrab
Cc: jritts...@; jgilbert...@; Dean Jackson; Kenneth Russell; public_webgl...@
Subject: Re: [Public WebGL] Use powerPreference to request high or low power GPUs

If a system has only an integrated card, it will always get the integrated card regardless of power preference. Power preference won't prevent the context from being created, AFAIK.

On Tue, Jul 3, 2018 at 6:27 AM Rachid El Guerrab wrote:

Hi Dean,

Thanks for the explanation. I'm still a bit confused as to the intent here, so please bear with me :-)

When you conceived of this update, was the idea that the trend will be more dual GPUs? If a system only has the integrated card, does it mean it'll only create contexts that ask for "low-power", no matter what the performance of the GPU is? Or are there more considerations? Are you just looking to know if a context doesn't need full rendering performance and therefore would be fine if pushed to the integrated GPU? Is this more helpful to the system as a whole and not useful for the specific content?
And what system decides to switch the context to a lower profile? the browser? the OS? Outside of the tab hidden, and maybe "low battery" on the host computer, do you know of other cases where the context might be switched from high performance to low? Thank you, -Rachid On Mon, Jul 2, 2018 at 8:44 PM James Ritts > wrote: > So your content has to be designed to run on a wide range of hardware For exactly these cases... > Would being able to check the actual value we used when creating be enough? ...could knowing (a) what profile was actually used at init time and (b) what profile is active after a switch potentially be useful, at least as a fall-through, if a site can't match the vendor string to some known pattern? On Mon, Jul 2, 2018 at 8:12 PM, Jeff Gilbert > wrote: Cross-adapter sharing is possible on Windows, but only via DirectComposite, which no one leverages yet, to my knowledge. On Mon, Jul 2, 2018 at 7:32 PM, Dean Jackson > wrote: > Hi Rachid, > >> On 3 Jul 2018, at 10:37, Rachid El Guerrab > wrote: >> >> 1) Do you have statistics on how many people run WebGL on laptops with dual cards? Just curious why you think it's a small set.. > > As far as I'm aware, the MacBook Pro 15" is the only laptop that has dual GPUs and can dynamically swap between the two (and not all configurations of MacBook Pro 15" have dual GPUs). I'm not familiar with Windows, but when we discussed this in the group I remember hearing that dual-GPU Windows devices can only use one GPU at a time (i.e. it doesn't matter what the content requests because the browser doesn't have a choice). This might change in future versions of Windows. I don't know if Linux handles this configuration at all. > > For the MacBook Pro case, Apple doesn't release sales data by model, so I'm not sure how popular it is in comparison to MacBooks and MacBook Airs. > > But I think it is ok to guess that it is a fairly small set, firstly in comparison to the total number of laptop users, then the total number of desktop OS users, then to the total number of users on mobile and desktop. > >> >> 2) I get that I can query the vendor string. >> But the webgl committee creates this neat API, and vendors spend time implementing it, to give us some useful abstraction to GPU power, in realtime, which is awesome. >> And now you're telling me I should ignore all that work and query the string myself? What's the point then?? > > Would being able to check the actual value we used when creating be enough? In Safari, you do actually end up getting what you want most of the time. However, it can change as the user hides the tab or application. You can detect this by listening for a "webglcontextchanged" event (although I just noticed this never made it into the specification, so it's non-standard :( ) > >> >> My content can adapt in many ways if I know I've switched to a lower profile, at the beginning or dynamically. >> >> But if i don't know, then what's the point? A message to the user that wouldn't know what to do about it? > > Let's consider the case of two MacBook Pros - one with a second GPU, one without. The "low-power" GPU on the first is both the "low-power" and "high-performance" GPU on the second. If you decide that your app *really* needs to run on the best GPU, you'd ask for "high-performance". But on that second device, you're not getting a more powerful GPU. 
So your content has to either: > - be designed to run on a wide range of hardware > - query the GPU vendor string and hopefully know what that means for your app > > And this still applies even if there was no way to even request a high or low power GPU, or to older dual-GPU hardware where the high-performance GPU is slower than today's low-power GPU, or actually to any other hardware. > > I'm not arguing with you btw - just pointing out that it doesn't really matter whether you get one GPU or another. You have to assume the worst unless you're willing to check the vendor string and know what it means to your app. The powerPreference parameter gives the author the ability to indicate that their content is (hopefully) "simple" enough to not need the fastest GPU (e.g. it isn't a full-page game or a cryptocurrency miner). > > Dean > > >> >> - Rachid >> >>> On Jul 2, 2018, at 5:13 PM, Dean Jackson > wrote: >>> >>> >>> >>>> On 3 Jul 2018, at 01:59, Rachid El Guerrab > wrote: >>>> >>>> I second Gregg Tavares's question about what's reported back. >>>> >>>> How can we tell if we're running with the high-performance option or not? >>> >>> Why should it matter? A relatively small set of people have dual GPU systems - and most people don't have powerful GPUs. And that's before you consider mobile devices. >>> >>> Also, in Safari on macOS, you don't necessarily get what you ask for anyway. You might ask for low-power but get high-performance because another app (or page) on the system has fired up that GPU. In other words, you have to write your content to work on the average GPU. >>> >>> But if you really have a good reason to know, you can query the GPU vendor string. It would be up to you to decide whether you think that's a high-performance GPU. >>> >>> Dean >>> >>> >>> >>> > -- - rachid -------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Thu Jul 5 18:29:16 2018 From: khr...@ (Gregg Tavares) Date: Fri, 6 Jul 2018 10:29:16 +0900 Subject: [Public WebGL] Re: the commit API on the OffscreenCanvas In-Reply-To: References: Message-ID: 2018?7?6?(?) 5:16 Ken Russell : > On Wed, Jul 4, 2018 at 11:33 PM Gregg Tavares > wrote: > >> I'm not sure where to bring this up but I've been trying for a couple of >> weeks in other places and getting zero feedback sooo I am hoping you guys >> in charge of things will take a few minutes and read this and take some >> time to thoughtfully respond. >> >> It's possible I don't understand how OffscreenCanvas is supposed to work. >> I've read the spec and written several tests and a couple of short examples >> and this is my understanding. >> >> There are basically 2 ways to use it. They are documented at MDN. >> >> https://developer.mozilla.org/en-US/docs/Web/API/OffscreenCanvas >> >> One is listed as "*Synchronous display of frames produced by an >> OffscreenCanvas*". It involves using >> "offscreenCanvas.transferToImageBitmap" inside the worker, transferring >> that bitmap back to the main thread, and calling >> bitmapContext.transferImageBitmap. This API makes sense to me. If you want >> to synchronize DOM updates with WebGL updates then you need to make sure >> both get updated at the same time. Like say you have an HTML label over a >> moving 3D object. >> >> The other is listed as "*Asynchronous display of frames produced by an >> OffscreenCanvas*". In that case you just call `*gl.commit*` inside the >> worker and the canvas back on the page will be updated. This is arguably >> the more common use case. 
The majority of WebGL and three.js apps etc would >> use this method. The example on MDN shows sending a message to the worker >> each time you want it to render. Testing that on Chrome seems to work but >> it currently has a significant performance penalty. >> >> Recently 2 more things were added. One is that *requestAnimationFrame* >> was added to workers. The other is the *commit as been changed to be a >> synchronous* function. The worker freezes until the frame has been >> displayed. >> > > Hi Gregg, > > requestAnimationFrame on workers was added as a result of feedback from > the W3C TAG. It provides a way to animate and implicitly commit frames in > the same way as with HTMLCanvasElement on the main thread. It replaces the > use of setTimeout() on web workers for animating OffscreenCanvases, and > provides a unified mechanism to allow VR headsets to animate at higher > framerates than typical monitors. > > > It's these last 2 things I don't understand. >> >> *First:* given that rAF is now available in workers I would think this >> is valid code >> >> // in worker >> function loop() { >> render(); >> requestAnimationFrame(loop); >> gl.commit(); >> } >> loop(); >> > >> onmessage = function() { >> // get messages related to say camera position or >> // window size or mouse position etc to affect rendering >> }; >> >> Unfortunately testing it out in Chrome this doesn't work. The `onmessage` >> callback is never called regardless of how many messages are sent. I filed >> a bug. Was told "WONTFIX: working as intended" >> > > In the current semantics it's an error to call commit() from inside a > requestAnimationFrame callback on a worker. The spec and implementations > should be changed to throw an exception from commit() in this case. > > I updated your samples in your Chromium bug report http://crbug.com/859275 > to remove the call to commit() from within the rAF callback and they work > very well. No flickering, and work exactly as you intended. Also replied to > your same questions on https://github.com/w3ctag/design-reviews/issues/141 > . > > > >> Really? Is that really the intent of the spec? Apple? Mozilla? Microsoft? >> Do you agree that the code above is not a supported use case and is working >> as intended? >> >> *Second:* other events and callbacks don't work >> >> // in worker >> fetch('someimage.png', {mode:'cors'}).then(function(response) { >> return response.blob(); >> }).then(function(blob) { >> return createImageBitmap(response.blob()); >> }).then(function(bitmap) { >> gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, >> gl.UNSIGNED_BYTE, bitmap); >> }); >> >> function loop() { >> render(); >> requestAnimationFrame(loop); >> gl.commit(); >> } >> loop(); >> >> This also does not work. The *fetch* response never comes. My guess is >> this is because in Chrome *commit* blocks and rAF event gets pushed to >> the top of the event queue so no other events ever get processed. The spec >> has nothing to say about this. Is this supposed to work? It seems like a >> valid use case. Note that switching the end of loop to >> >> gl.commit(); >> requestAnimationFrame(loop); >> >> also does not work. >> >> Is that correct that it should not work? I guess I don't really >> understand the point of having rAF in worker if these use cases are not >> supposed to work. Are they? If they are not supposed to work can someone >> please explain rAF's use case in a worker? >> > > rAF in a worker replaces the use of commit(). 
Another alternative to animating in a worker would be setTimeout(), but now that rAF is present in workers, it's the best alternative.

>> *Third:* according to various comments around the specs one use case is a spin loop on gl.commit for WebAssembly ports. Effectively this is supposed to work
>>
>> while (true) {
>>   render();
>>   gl.commit();
>> }
>>
>> But I don't understand how this is useful given that no events come in if you do that. You can't communicate with the worker. The worker can't load files or call fetch or get a websocket message or receive input passed in from the main thread or do anything except render.
>>
>> Maybe people are thinking SharedArrayBuffers are a way to pass in data to such a loop but really? How would you pass in an image? As it is you'd have to write your own decoder, since you can't get the raw data losslessly out of an image from any web APIs and you can't transfer images into the worker (since it's not listening for messages), so you'd need to somehow parse the image yourself and copy it into a SharedArrayBuffer. That would be a very slow, jank-inducing process in the main thread. So now it seems like the spec is saying that to use a gl.commit spin loop you need 2 workers, one for rendering and one for loading images and other things, plus 1 or more SharedArrayBuffers, and you have to implement a bunch of synchronization stuff just so you can use WebGL in a worker using this pattern mentioned in the spec?
>>
>> Is that really the intent? Is there something I'm missing? This seems like a platform-breaking API. Use it and the entire rest of the platform becomes unusable without major amounts of code.
>>
>> If I'm wrong I'm happy to be corrected.

> commit() is mainly intended to support compiling multithreaded programs to WebAssembly. The C language's threading model is that threads start up from a start function and only return from it when the thread exits. We are trying to get real use cases working which transfer all data in to these rendering threads via the C heap from other threads. commit() and its blocking behavior are required in order to reach parity with how native platforms work in this scenario.

Supporting porting native C apps seems like a huge rabbit hole. Are there going to be sync image loading APIs next? Blocking `select` sockets? Reading the clipboard without an event? This is making *commit* a huge gate. The moment you use it you throw away the entire rest of the platform. Really? It's hard to believe that's being signed off on.

If commit's only use case is native apps it seems like it should not ship and should stay behind a flag until these other issues are worked out. I pointed out several above, like the example that there is no way to use the browser's native image loading with commit, even through SharedArrayBuffers. It seems very premature to ship such an API (commit) without actually knowing how those issues will be resolved. Tests can be made behind a flag.

Are there tests and ports running now with this feature behind a flag that show all these issues can be solved in reasonable ways?
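For what it's worth, the SharedArrayBuffer plumbing being discussed would look roughly like this for simple numeric state such as a mouse position. This is a sketch only; it assumes SharedArrayBuffer is available to the page and that the implementation exposes gl.commit(), and all file names and ids are made up.

// main.js -- hand the worker a canvas plus a SharedArrayBuffer it can poll
const offscreen = document.querySelector('#c').transferControlToOffscreen();
const shared = new SharedArrayBuffer(2 * Int32Array.BYTES_PER_ELEMENT);
const input = new Int32Array(shared);                 // [mouseX, mouseY]
const worker = new Worker('spin-render-worker.js');   // illustrative file name
worker.postMessage({ canvas: offscreen, shared }, [offscreen]);
window.addEventListener('mousemove', (e) => {
  Atomics.store(input, 0, e.clientX);
  Atomics.store(input, 1, e.clientY);
});

// spin-render-worker.js -- once the loop starts, no further messages are delivered
onmessage = (e) => {
  const gl = e.data.canvas.getContext('webgl');
  const input = new Int32Array(e.data.shared);
  while (true) {
    // render() is assumed to exist and draw using the latest shared values
    render(gl, Atomics.load(input, 0), Atomics.load(input, 1));
    gl.commit();   // blocks until the frame has been consumed
  }
};

Anything richer than a few numbers, for example decoded image pixels, still needs a second worker to fetch and decode plus a hand-rolled handoff through the shared buffer, which is exactly the complexity being objected to above.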
This means even though your >> page doesn't get a rAF callback it can still process incoming data (like >> your chat app's messages). >> >> How is that supposed to work with `gl.commit` loops? It's not the front >> tab so you want to block the commit so the worker doesn't spin and waste >> time. If the worker locks then that seems to have implications for all >> other associated workers and the main thread. If you're using Atomics to >> sync up things suddenly they'll fail indefinitely even more complicating >> all the code you have to write to use this feature. >> > > I think that ideally commit() would block until the tab comes back to the > foreground, to minimize CPU usage. However, if that turns out to be > suboptimal for some use cases, we could consider throttling commit(), to > essentially block for some time period and then return control to the > worker. > Shouldn't this be figured out before shipping? > > -Ken > > > Chrome has already committed to shipping the API. The code as been >> committed so if nothing changes it will ship automatically in a few weeks >> with all the issues mentioned above not behind a flag but live so it seems >> important to understand how to use this and if all these issues were >> considered and what their solutions are. >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Fri Jul 6 14:32:48 2018 From: kbr...@ (Ken Russell) Date: Fri, 6 Jul 2018 14:32:48 -0700 Subject: [Public WebGL] Re: the commit API on the OffscreenCanvas In-Reply-To: References: Message-ID: What is known is that some way of committing frames from a spin-loop worker is required in the spec, in order to support multithreaded rendering from WebAssembly applications. commit() has been tested in small standalone test cases. Several groups are collaborating to make multithreaded rendering work in a real-world WebAssembly application. It's a fair point that this should be made to fully work before shipping it so we will plan to put commit() back behind a flag in Chrome for the time being. -Ken On Thu, Jul 5, 2018 at 6:29 PM Gregg Tavares wrote: > > > 2018?7?6?(?) 5:16 Ken Russell : > >> On Wed, Jul 4, 2018 at 11:33 PM Gregg Tavares >> wrote: >> >>> I'm not sure where to bring this up but I've been trying for a couple of >>> weeks in other places and getting zero feedback sooo I am hoping you guys >>> in charge of things will take a few minutes and read this and take some >>> time to thoughtfully respond. >>> >>> It's possible I don't understand how OffscreenCanvas is supposed to >>> work. I've read the spec and written several tests and a couple of short >>> examples and this is my understanding. >>> >>> There are basically 2 ways to use it. They are documented at MDN. >>> >>> https://developer.mozilla.org/en-US/docs/Web/API/OffscreenCanvas >>> >>> One is listed as "*Synchronous display of frames produced by an >>> OffscreenCanvas*". It involves using >>> "offscreenCanvas.transferToImageBitmap" inside the worker, transferring >>> that bitmap back to the main thread, and calling >>> bitmapContext.transferImageBitmap. This API makes sense to me. If you want >>> to synchronize DOM updates with WebGL updates then you need to make sure >>> both get updated at the same time. Like say you have an HTML label over a >>> moving 3D object. >>> >>> The other is listed as "*Asynchronous display of frames produced by an >>> OffscreenCanvas*". In that case you just call `*gl.commit*` inside the >>> worker and the canvas back on the page will be updated. 
This is arguably >>> the more common use case. The majority of WebGL and three.js apps etc would >>> use this method. The example on MDN shows sending a message to the worker >>> each time you want it to render. Testing that on Chrome seems to work but >>> it currently has a significant performance penalty. >>> >>> Recently 2 more things were added. One is that *requestAnimationFrame* >>> was added to workers. The other is the *commit as been changed to be a >>> synchronous* function. The worker freezes until the frame has been >>> displayed. >>> >> >> Hi Gregg, >> >> requestAnimationFrame on workers was added as a result of feedback from >> the W3C TAG. It provides a way to animate and implicitly commit frames in >> the same way as with HTMLCanvasElement on the main thread. It replaces the >> use of setTimeout() on web workers for animating OffscreenCanvases, and >> provides a unified mechanism to allow VR headsets to animate at higher >> framerates than typical monitors. >> >> >> It's these last 2 things I don't understand. >>> >>> *First:* given that rAF is now available in workers I would think this >>> is valid code >>> >>> // in worker >>> function loop() { >>> render(); >>> requestAnimationFrame(loop); >>> gl.commit(); >>> } >>> loop(); >>> >> >>> onmessage = function() { >>> // get messages related to say camera position or >>> // window size or mouse position etc to affect rendering >>> }; >>> >>> Unfortunately testing it out in Chrome this doesn't work. The >>> `onmessage` callback is never called regardless of how many messages are >>> sent. I filed a bug. Was told "WONTFIX: working as intended" >>> >> >> In the current semantics it's an error to call commit() from inside a >> requestAnimationFrame callback on a worker. The spec and implementations >> should be changed to throw an exception from commit() in this case. >> >> I updated your samples in your Chromium bug report >> http://crbug.com/859275 to remove the call to commit() from within the >> rAF callback and they work very well. No flickering, and work exactly as >> you intended. Also replied to your same questions on >> https://github.com/w3ctag/design-reviews/issues/141 . >> >> >> >>> Really? Is that really the intent of the spec? Apple? Mozilla? >>> Microsoft? Do you agree that the code above is not a supported use case and >>> is working as intended? >>> >>> *Second:* other events and callbacks don't work >>> >>> // in worker >>> fetch('someimage.png', {mode:'cors'}).then(function(response) { >>> return response.blob(); >>> }).then(function(blob) { >>> return createImageBitmap(response.blob()); >>> }).then(function(bitmap) { >>> gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, >>> gl.UNSIGNED_BYTE, bitmap); >>> }); >>> >>> function loop() { >>> render(); >>> requestAnimationFrame(loop); >>> gl.commit(); >>> } >>> loop(); >>> >>> This also does not work. The *fetch* response never comes. My guess is >>> this is because in Chrome *commit* blocks and rAF event gets pushed to >>> the top of the event queue so no other events ever get processed. The spec >>> has nothing to say about this. Is this supposed to work? It seems like a >>> valid use case. Note that switching the end of loop to >>> >>> gl.commit(); >>> requestAnimationFrame(loop); >>> >>> also does not work. >>> >>> Is that correct that it should not work? I guess I don't really >>> understand the point of having rAF in worker if these use cases are not >>> supposed to work. Are they? 
If they are not supposed to work can someone >>> please explain rAF's use case in a worker? >>> >> >> rAF in a worker replaces the use of commit(). Another alternative to >> animating in a worker would be setTimeout(), but now that rAF is present in >> workers, it's the best alternative. >> >> *Third*, according to various comments around the specs one use case is >>> a spin loop on gl.commit for webassembly ports. Effectively this is >>> supposed to work >>> >>> while(true) { >>> render(); >>> gl.commit(); >>> } >>> >>> But I don't understand how this is useful given that no events come in >>> if you do that. You can't communicate with the worker. The worker can't >>> load files or call fetch or get a websocket message or receive input passed >>> in from the main thread or do anything except render. >>> >>> Maybe people are thinking SharedArrayBuffers are a way to pass in data >>> to such a loop but really? How would you pass in an image? As it is you'd >>> have write your own decoder since you can't get the raw data losslessly out >>> of an image from any web APIs and you can't transfer images into the worker >>> (since it's not listening for messages) then you'd need to some how parse >>> the image yourself and copy it into a sharedarraybuffer. That would a very >>> slow jank inducing process in the main thread so now it seems like the spec >>> is saying to use a gl.commit spin loop you need 2 workers, one for >>> rendering, one for loading images and other things and then you need 1 or >>> more SharedArrayBuffers and you have to implement a bunch of >>> synchronization stuff just so you can use WebGL in a worker using this >>> pattern mentioned in the spec? >>> >>> Is that really the intent? Is there something I'm missing? This seems >>> like a platform breaking API. Use it and the entire rest of the platform >>> becomes unusable without major amounts of code. >>> >>> If I'm wrong I'm happy to be corrected. >>> >> >> commit() is mainly intended to support compiling multithreaded programs >> to WebAssembly. The C language's threading model is that threads start up >> from a start function and only return from it when the thread exits. We are >> trying to get real use cases working which transfer all data in to these >> rendering threads via the C heap from other threads. commit() and its >> blocking behavior are required in order to reach parity with how native >> platforms work in this scenario. >> >> > Supporting porting native C app seems like a huge rabbit whole. Are there > going to be sync image loading APIs next? Blocking `select` sockets? > Reading the clipboard without an event? This is making *commit* be a huge > gate. The moment you use it you throw away the entire rest of the platform. > Really? It's hard to believe that's being signed off on. > > If commit's only use case is native apps it seems like it should not ship > and should stay behind a flag until these other issues are worked out. I > pointed out several above, like the example that there is no way to use the > browser's native image loading with commit even through sharedarraybuffers. > It seems very premature to ship such an api (commit) without actually > knowing how those issues will be resolved. Tests can be made behind a flag. > > Are there tests and ports running now with this feature behind a flag that > show all these issues can be solved in reasonable ways? 
> > >> >> *Four:* Non front tabs: rAF is currently not delivered if the page is >>> not the front tab which is great but rAF is an event so even when rAF stops >>> firing because the page is not on the front tab other events still arrive >>> (fetch, onmessage, XHR, websockets, etc...) This means even though your >>> page doesn't get a rAF callback it can still process incoming data (like >>> your chat app's messages). >>> >>> How is that supposed to work with `gl.commit` loops? It's not the front >>> tab so you want to block the commit so the worker doesn't spin and waste >>> time. If the worker locks then that seems to have implications for all >>> other associated workers and the main thread. If you're using Atomics to >>> sync up things suddenly they'll fail indefinitely even more complicating >>> all the code you have to write to use this feature. >>> >> >> I think that ideally commit() would block until the tab comes back to the >> foreground, to minimize CPU usage. However, if that turns out to be >> suboptimal for some use cases, we could consider throttling commit(), to >> essentially block for some time period and then return control to the >> worker. >> > > Shouldn't this be figured out before shipping? > > >> >> -Ken >> >> >> Chrome has already committed to shipping the API. The code as been >>> committed so if nothing changes it will ship automatically in a few weeks >>> with all the issues mentioned above not behind a flag but live so it seems >>> important to understand how to use this and if all these issues were >>> considered and what their solutions are. >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From juj...@ Sun Jul 8 10:18:33 2018 From: juj...@ (=?UTF-8?Q?Jukka_Jyl=C3=A4nki?=) Date: Sun, 8 Jul 2018 20:18:33 +0300 Subject: [Public WebGL] Re: the commit API on the OffscreenCanvas In-Reply-To: References: Message-ID: > In the current semantics it's an error to call commit() from inside a requestAnimationFrame callback on a worker. What is the rationale behind this restriction? Is the intent that having an infinite loop running in a web worker from within a rAF() is troublesome? I have been asking before that there was a context creation option to WebGL that would remove the current implicit "when I return from event handler, if there were any glDraw()s to front buffer in that event handler, that's a swap" presentation behavior. It would be nice to be able to create a WebGL contexts with an explicit swapping behavior mode, where returning from any event handler would not swap, but instead swapping would be done only when explicitly told to via a .commit() option? Current Emscripten multithreading web worker GL support has been implemented from the perspective that also applications that do not drive their own infinite main loops can still call .commit() to manually tell when they are finished producing a frame. This feature would be super nice because it decouples the overall program flow structure from GL buffers presentation logic, and this decoupling provides a great deal of flexibility. Another feature about .commit() is that I would like to see vsync control either on commit(), or via some other companion function, that would allow code to control whether .commit() would be vsyncless/nonblocking, or with vsync swap interval of 1,2,3,etc. And an API that would allow querying what vsync rate a vsync swap interval==1 means (in the context of current canvas display or the context). The rationale for this feature is: a. 
If one knows that one has presentation locked to a particular vsync rate, one can drive game/animation logic, physics and the rest at fixed timesteps. Using performance.now() or histogram based guessing based on rAF() firing rates is troublesome, especially if under existing GPU load, and will just result in microstuttering. Being able to read context.vsyncRate or canvas.vsyncRate or something similar will allow animation to reach microstuttering free timing for animation. b. Displays with >60Hz refresh rates are becoming more common, and sometimes content wants to constrain to run at known 30fps, or 60fps (e.g. known 30fps source video animation on a WebGL texture might make it illogical for the app to render at 60Hz or 120Hz of the source display). It would be nice to deal with this via a context.commit(1); or context.commit(2); kind of API to specify the desired swap interval. (or a separate function would be fine as well). Having a context.vsyncRate info field would allow computing what the appropriate swapInterval would be desirable, 1/2th, 1/3th, 1/4th or so on. c. Rendering without vsync enabled at context.commit(0); would be great for sites that do performance benchmarking or competitive gaming. This would be fine to be constrained for fullscreen apps for example. Even if presenting did need to wait for vsync, a "context.commit(0);" API could be a way to tell "please present when possible, but don't block on my present API call now". In any case, blocking .commit() should never be a full on "glFinish() right here and now" kind of API, but something that queues new present work to the swap chain, and immediately returns if there is a free render target still available on the swap chain (triple buffering). Otherwise there will be CPU-GPU pipeline bubbles. In other words, .commit() would only block when producer is running too far ahead and has run out of free swap chain buffers to start producing the next frame to, that it should wait for one to finish presenting to be freed up for reuse. 2018-07-07 0:32 GMT+03:00 Ken Russell : > What is known is that some way of committing frames from a spin-loop worker > is required in the spec, in order to support multithreaded rendering from > WebAssembly applications. commit() has been tested in small standalone test > cases. Several groups are collaborating to make multithreaded rendering work > in a real-world WebAssembly application. > > It's a fair point that this should be made to fully work before shipping it > so we will plan to put commit() back behind a flag in Chrome for the time > being. > > -Ken > > > On Thu, Jul 5, 2018 at 6:29 PM Gregg Tavares wrote: >> >> >> >> 2018?7?6?(?) 5:16 Ken Russell : >>> >>> On Wed, Jul 4, 2018 at 11:33 PM Gregg Tavares >>> wrote: >>>> >>>> I'm not sure where to bring this up but I've been trying for a couple of >>>> weeks in other places and getting zero feedback sooo I am hoping you guys in >>>> charge of things will take a few minutes and read this and take some time to >>>> thoughtfully respond. >>>> >>>> It's possible I don't understand how OffscreenCanvas is supposed to >>>> work. I've read the spec and written several tests and a couple of short >>>> examples and this is my understanding. >>>> >>>> There are basically 2 ways to use it. They are documented at MDN. >>>> >>>> https://developer.mozilla.org/en-US/docs/Web/API/OffscreenCanvas >>>> >>>> One is listed as "Synchronous display of frames produced by an >>>> OffscreenCanvas". 
It involves using "offscreenCanvas.transferToImageBitmap" >>>> inside the worker, transferring that bitmap back to the main thread, and >>>> calling bitmapContext.transferImageBitmap. This API makes sense to me. If >>>> you want to synchronize DOM updates with WebGL updates then you need to make >>>> sure both get updated at the same time. Like say you have an HTML label over >>>> a moving 3D object. >>>> >>>> The other is listed as "Asynchronous display of frames produced by an >>>> OffscreenCanvas". In that case you just call `gl.commit` inside the worker >>>> and the canvas back on the page will be updated. This is arguably the more >>>> common use case. The majority of WebGL and three.js apps etc would use this >>>> method. The example on MDN shows sending a message to the worker each time >>>> you want it to render. Testing that on Chrome seems to work but it currently >>>> has a significant performance penalty. >>>> >>>> Recently 2 more things were added. One is that requestAnimationFrame was >>>> added to workers. The other is the commit as been changed to be a >>>> synchronous function. The worker freezes until the frame has been displayed. >>> >>> >>> Hi Gregg, >>> >>> requestAnimationFrame on workers was added as a result of feedback from >>> the W3C TAG. It provides a way to animate and implicitly commit frames in >>> the same way as with HTMLCanvasElement on the main thread. It replaces the >>> use of setTimeout() on web workers for animating OffscreenCanvases, and >>> provides a unified mechanism to allow VR headsets to animate at higher >>> framerates than typical monitors. >>> >>> >>>> It's these last 2 things I don't understand. >>>> >>>> First: given that rAF is now available in workers I would think this is >>>> valid code >>>> >>>> // in worker >>>> function loop() { >>>> render(); >>>> requestAnimationFrame(loop); >>>> gl.commit(); >>>> } >>>> loop(); >>>> >>>> >>>> onmessage = function() { >>>> // get messages related to say camera position or >>>> // window size or mouse position etc to affect rendering >>>> }; >>>> >>>> Unfortunately testing it out in Chrome this doesn't work. The >>>> `onmessage` callback is never called regardless of how many messages are >>>> sent. I filed a bug. Was told "WONTFIX: working as intended" >>> >>> >>> In the current semantics it's an error to call commit() from inside a >>> requestAnimationFrame callback on a worker. The spec and implementations >>> should be changed to throw an exception from commit() in this case. >>> >>> I updated your samples in your Chromium bug report >>> http://crbug.com/859275 to remove the call to commit() from within the rAF >>> callback and they work very well. No flickering, and work exactly as you >>> intended. Also replied to your same questions on >>> https://github.com/w3ctag/design-reviews/issues/141 . >>> >>> >>>> >>>> Really? Is that really the intent of the spec? Apple? Mozilla? >>>> Microsoft? Do you agree that the code above is not a supported use case and >>>> is working as intended? 
>>>> >>>> Second: other events and callbacks don't work >>>> >>>> // in worker >>>> fetch('someimage.png', {mode:'cors'}).then(function(response) { >>>> return response.blob(); >>>> }).then(function(blob) { >>>> return createImageBitmap(response.blob()); >>>> }).then(function(bitmap) { >>>> gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, >>>> gl.UNSIGNED_BYTE, bitmap); >>>> }); >>>> >>>> function loop() { >>>> render(); >>>> requestAnimationFrame(loop); >>>> gl.commit(); >>>> } >>>> loop(); >>>> >>>> This also does not work. The fetch response never comes. My guess is >>>> this is because in Chrome commit blocks and rAF event gets pushed to the top >>>> of the event queue so no other events ever get processed. The spec has >>>> nothing to say about this. Is this supposed to work? It seems like a valid >>>> use case. Note that switching the end of loop to >>>> >>>> gl.commit(); >>>> requestAnimationFrame(loop); >>>> >>>> also does not work. >>>> >>>> Is that correct that it should not work? I guess I don't really >>>> understand the point of having rAF in worker if these use cases are not >>>> supposed to work. Are they? If they are not supposed to work can someone >>>> please explain rAF's use case in a worker? >>> >>> >>> rAF in a worker replaces the use of commit(). Another alternative to >>> animating in a worker would be setTimeout(), but now that rAF is present in >>> workers, it's the best alternative. >>> >>>> Third, according to various comments around the specs one use case is a >>>> spin loop on gl.commit for webassembly ports. Effectively this is supposed >>>> to work >>>> >>>> while(true) { >>>> render(); >>>> gl.commit(); >>>> } >>>> >>>> But I don't understand how this is useful given that no events come in >>>> if you do that. You can't communicate with the worker. The worker can't load >>>> files or call fetch or get a websocket message or receive input passed in >>>> from the main thread or do anything except render. >>>> >>>> Maybe people are thinking SharedArrayBuffers are a way to pass in data >>>> to such a loop but really? How would you pass in an image? As it is you'd >>>> have write your own decoder since you can't get the raw data losslessly out >>>> of an image from any web APIs and you can't transfer images into the worker >>>> (since it's not listening for messages) then you'd need to some how parse >>>> the image yourself and copy it into a sharedarraybuffer. That would a very >>>> slow jank inducing process in the main thread so now it seems like the spec >>>> is saying to use a gl.commit spin loop you need 2 workers, one for >>>> rendering, one for loading images and other things and then you need 1 or >>>> more SharedArrayBuffers and you have to implement a bunch of synchronization >>>> stuff just so you can use WebGL in a worker using this pattern mentioned in >>>> the spec? >>>> >>>> Is that really the intent? Is there something I'm missing? This seems >>>> like a platform breaking API. Use it and the entire rest of the platform >>>> becomes unusable without major amounts of code. >>>> >>>> If I'm wrong I'm happy to be corrected. >>> >>> >>> commit() is mainly intended to support compiling multithreaded programs >>> to WebAssembly. The C language's threading model is that threads start up >>> from a start function and only return from it when the thread exits. We are >>> trying to get real use cases working which transfer all data in to these >>> rendering threads via the C heap from other threads. 
commit() and its >>> blocking behavior are required in order to reach parity with how native >>> platforms work in this scenario. >>> >> >> Supporting porting native C app seems like a huge rabbit whole. Are there >> going to be sync image loading APIs next? Blocking `select` sockets? Reading >> the clipboard without an event? This is making commit be a huge gate. The >> moment you use it you throw away the entire rest of the platform. Really? >> It's hard to believe that's being signed off on. >> >> If commit's only use case is native apps it seems like it should not ship >> and should stay behind a flag until these other issues are worked out. I >> pointed out several above, like the example that there is no way to use the >> browser's native image loading with commit even through sharedarraybuffers. >> It seems very premature to ship such an api (commit) without actually >> knowing how those issues will be resolved. Tests can be made behind a flag. >> >> Are there tests and ports running now with this feature behind a flag that >> show all these issues can be solved in reasonable ways? >> >>> >>> >>>> Four: Non front tabs: rAF is currently not delivered if the page is not >>>> the front tab which is great but rAF is an event so even when rAF stops >>>> firing because the page is not on the front tab other events still arrive >>>> (fetch, onmessage, XHR, websockets, etc...) This means even though your page >>>> doesn't get a rAF callback it can still process incoming data (like your >>>> chat app's messages). >>>> >>>> How is that supposed to work with `gl.commit` loops? It's not the front >>>> tab so you want to block the commit so the worker doesn't spin and waste >>>> time. If the worker locks then that seems to have implications for all other >>>> associated workers and the main thread. If you're using Atomics to sync up >>>> things suddenly they'll fail indefinitely even more complicating all the >>>> code you have to write to use this feature. >>> >>> >>> I think that ideally commit() would block until the tab comes back to the >>> foreground, to minimize CPU usage. However, if that turns out to be >>> suboptimal for some use cases, we could consider throttling commit(), to >>> essentially block for some time period and then return control to the >>> worker. >> >> >> Shouldn't this be figured out before shipping? >> >>> >>> >>> -Ken >>> >>> >>>> Chrome has already committed to shipping the API. The code as been >>>> committed so if nothing changes it will ship automatically in a few weeks >>>> with all the issues mentioned above not behind a flag but live so it seems >>>> important to understand how to use this and if all these issues were >>>> considered and what their solutions are. >>>> >>>> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From jun...@ Mon Jul 9 07:52:23 2018 From: jun...@ (Justin Novosad) Date: Mon, 9 Jul 2018 10:52:23 -0400 Subject: [Public WebGL] Re: the commit API on the OffscreenCanvas In-Reply-To: References: Message-ID: On Sun, Jul 8, 2018 at 1:19 PM Jukka Jyl?nki wrote: > > > In the current semantics it's an error to call commit() from inside a > requestAnimationFrame callback on a worker. > > What is the rationale behind this restriction? 
Is the intent that > having an infinite loop running in a web worker from within a rAF() is > troublesome? > If you put an infinite rendering loop inside of a rAF callback, then you don't need to be using rAF at all because you are basically ignoring the frame scheduling behavior provided by rAF. Since commit() and rAF() each have their own scheduling behaviors you should not use them together. With rAF, there is an implicit commit at the end of all script tasks that draw something. This is compatible with the behavior you get with regular canvases on the main thread, which means that animation code that was written for running on the main thread can be made to work on a worker with minimal code changes. > I have been asking before that there was a context creation option to > WebGL that would remove the current implicit "when I return from event > handler, if there were any glDraw()s to front buffer in that event > handler, that's a swap" presentation behavior. It would be nice to be > able to create a WebGL contexts with an explicit swapping behavior > mode, where returning from any event handler would not swap, but > instead swapping would be done only when explicitly told to via a > .commit() option? > Is there a use case that really needs this? In the infinite loop use case, this does not really matter, right? Since the task never ends, there is no implicit swap. The only case I can think of is if you have multiple event handlers that each draw part of the frame, and you want to prevent partially rendered frames from being displayed. But I can't think of a reason why an app might need to be written that way. > > Current Emscripten multithreading web worker GL support has been > implemented from the perspective that also applications that do not > drive their own infinite main loops can still call .commit() to > manually tell when they are finished producing a frame. This feature > would be super nice because it decouples the overall program flow > structure from GL buffers presentation logic, and this decoupling > provides a great deal of flexibility. > > Another feature about .commit() is that I would like to see vsync > control either on commit(), or via some other companion function, that > would allow code to control whether .commit() would be > vsyncless/nonblocking, or with vsync swap interval of 1,2,3,etc. And > an API that would allow querying what vsync rate a vsync swap > interval==1 means (in the context of current canvas display or the > context). The rationale for this feature is: > a. If one knows that one has presentation locked to a particular > vsync rate, one can drive game/animation logic, physics and the rest > at fixed timesteps. Using performance.now() or histogram based > guessing based on rAF() firing rates is troublesome, especially if > under existing GPU load, and will just result in microstuttering. > Being able to read context.vsyncRate or canvas.vsyncRate or something > similar will allow animation to reach microstuttering free timing for > animation. > b. Displays with >60Hz refresh rates are becoming more common, and > sometimes content wants to constrain to run at known 30fps, or 60fps > (e.g. known 30fps source video animation on a WebGL texture might make > it illogical for the app to render at 60Hz or 120Hz of the source > display). It would be nice to deal with this via a context.commit(1); > or context.commit(2); kind of API to specify the desired swap > interval. (or a separate function would be fine as well). 
Having a > context.vsyncRate info field would allow computing what the > appropriate swapInterval would be desirable, 1/2th, 1/3th, 1/4th or so > on. > c. Rendering without vsync enabled at context.commit(0); would be > great for sites that do performance benchmarking or competitive > gaming. This would be fine to be constrained for fullscreen apps for > example. Even if presenting did need to wait for vsync, a > "context.commit(0);" API could be a way to tell "please present when > possible, but don't block on my present API call now". > Similar ideas have been proposed as extensions for rAF as well. I think this should be in the next feature iteration. Basically we should add an optional dictionary argument to both rAF and commit() that exposes advanced animation timing options. > In any case, blocking .commit() should never be a full on "glFinish() > right here and now" kind of API, but something that queues new present > work to the swap chain, and immediately returns if there is a free > render target still available on the swap chain (triple buffering). > Agreed. The current spec is not so explicit on how the throttling should happen because different browsers/OSes use different graphics pipeline models, but basically commit() should only block when the pipe is full. This is similar to how current rAF implementations deliberately skip frames when the GPU can't keep up. > Otherwise there will be CPU-GPU pipeline bubbles. In other words, > .commit() would only block when producer is running too far ahead and > has run out of free swap chain buffers to start producing the next > frame to, that it should wait for one to finish presenting to be freed > up for reuse. > > > 2018-07-07 0:32 GMT+03:00 Ken Russell : > > What is known is that some way of committing frames from a spin-loop > worker > > is required in the spec, in order to support multithreaded rendering from > > WebAssembly applications. commit() has been tested in small standalone > test > > cases. Several groups are collaborating to make multithreaded rendering > work > > in a real-world WebAssembly application. > > > > It's a fair point that this should be made to fully work before shipping > it > > so we will plan to put commit() back behind a flag in Chrome for the time > > being. > > > > -Ken > > > > > > On Thu, Jul 5, 2018 at 6:29 PM Gregg Tavares > wrote: > >> > >> > >> > >> 2018?7?6?(?) 5:16 Ken Russell : > >>> > >>> On Wed, Jul 4, 2018 at 11:33 PM Gregg Tavares > >>> wrote: > >>>> > >>>> I'm not sure where to bring this up but I've been trying for a couple > of > >>>> weeks in other places and getting zero feedback sooo I am hoping you > guys in > >>>> charge of things will take a few minutes and read this and take some > time to > >>>> thoughtfully respond. > >>>> > >>>> It's possible I don't understand how OffscreenCanvas is supposed to > >>>> work. I've read the spec and written several tests and a couple of > short > >>>> examples and this is my understanding. > >>>> > >>>> There are basically 2 ways to use it. They are documented at MDN. > >>>> > >>>> https://developer.mozilla.org/en-US/docs/Web/API/OffscreenCanvas > >>>> > >>>> One is listed as "Synchronous display of frames produced by an > >>>> OffscreenCanvas". It involves using > "offscreenCanvas.transferToImageBitmap" > >>>> inside the worker, transferring that bitmap back to the main thread, > and > >>>> calling bitmapContext.transferImageBitmap. This API makes sense to > me. 
If > >>>> you want to synchronize DOM updates with WebGL updates then you need > to make > >>>> sure both get updated at the same time. Like say you have an HTML > label over > >>>> a moving 3D object. > >>>> > >>>> The other is listed as "Asynchronous display of frames produced by an > >>>> OffscreenCanvas". In that case you just call `gl.commit` inside the > worker > >>>> and the canvas back on the page will be updated. This is arguably the > more > >>>> common use case. The majority of WebGL and three.js apps etc would > use this > >>>> method. The example on MDN shows sending a message to the worker each > time > >>>> you want it to render. Testing that on Chrome seems to work but it > currently > >>>> has a significant performance penalty. > >>>> > >>>> Recently 2 more things were added. One is that requestAnimationFrame > was > >>>> added to workers. The other is the commit as been changed to be a > >>>> synchronous function. The worker freezes until the frame has been > displayed. > >>> > >>> > >>> Hi Gregg, > >>> > >>> requestAnimationFrame on workers was added as a result of feedback from > >>> the W3C TAG. It provides a way to animate and implicitly commit frames > in > >>> the same way as with HTMLCanvasElement on the main thread. It replaces > the > >>> use of setTimeout() on web workers for animating OffscreenCanvases, and > >>> provides a unified mechanism to allow VR headsets to animate at higher > >>> framerates than typical monitors. > >>> > >>> > >>>> It's these last 2 things I don't understand. > >>>> > >>>> First: given that rAF is now available in workers I would think this > is > >>>> valid code > >>>> > >>>> // in worker > >>>> function loop() { > >>>> render(); > >>>> requestAnimationFrame(loop); > >>>> gl.commit(); > >>>> } > >>>> loop(); > >>>> > >>>> > >>>> onmessage = function() { > >>>> // get messages related to say camera position or > >>>> // window size or mouse position etc to affect rendering > >>>> }; > >>>> > >>>> Unfortunately testing it out in Chrome this doesn't work. The > >>>> `onmessage` callback is never called regardless of how many messages > are > >>>> sent. I filed a bug. Was told "WONTFIX: working as intended" > >>> > >>> > >>> In the current semantics it's an error to call commit() from inside a > >>> requestAnimationFrame callback on a worker. The spec and > implementations > >>> should be changed to throw an exception from commit() in this case. > >>> > >>> I updated your samples in your Chromium bug report > >>> http://crbug.com/859275 to remove the call to commit() from within > the rAF > >>> callback and they work very well. No flickering, and work exactly as > you > >>> intended. Also replied to your same questions on > >>> https://github.com/w3ctag/design-reviews/issues/141 . > >>> > >>> > >>>> > >>>> Really? Is that really the intent of the spec? Apple? Mozilla? > >>>> Microsoft? Do you agree that the code above is not a supported use > case and > >>>> is working as intended? 
> >>>> > >>>> Second: other events and callbacks don't work > >>>> > >>>> // in worker > >>>> fetch('someimage.png', {mode:'cors'}).then(function(response) { > >>>> return response.blob(); > >>>> }).then(function(blob) { > >>>> return createImageBitmap(response.blob()); > >>>> }).then(function(bitmap) { > >>>> gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, > >>>> gl.UNSIGNED_BYTE, bitmap); > >>>> }); > >>>> > >>>> function loop() { > >>>> render(); > >>>> requestAnimationFrame(loop); > >>>> gl.commit(); > >>>> } > >>>> loop(); > >>>> > >>>> This also does not work. The fetch response never comes. My guess is > >>>> this is because in Chrome commit blocks and rAF event gets pushed to > the top > >>>> of the event queue so no other events ever get processed. The spec has > >>>> nothing to say about this. Is this supposed to work? It seems like a > valid > >>>> use case. Note that switching the end of loop to > >>>> > >>>> gl.commit(); > >>>> requestAnimationFrame(loop); > >>>> > >>>> also does not work. > >>>> > >>>> Is that correct that it should not work? I guess I don't really > >>>> understand the point of having rAF in worker if these use cases are > not > >>>> supposed to work. Are they? If they are not supposed to work can > someone > >>>> please explain rAF's use case in a worker? > >>> > >>> > >>> rAF in a worker replaces the use of commit(). Another alternative to > >>> animating in a worker would be setTimeout(), but now that rAF is > present in > >>> workers, it's the best alternative. > >>> > >>>> Third, according to various comments around the specs one use case is > a > >>>> spin loop on gl.commit for webassembly ports. Effectively this is > supposed > >>>> to work > >>>> > >>>> while(true) { > >>>> render(); > >>>> gl.commit(); > >>>> } > >>>> > >>>> But I don't understand how this is useful given that no events come in > >>>> if you do that. You can't communicate with the worker. The worker > can't load > >>>> files or call fetch or get a websocket message or receive input > passed in > >>>> from the main thread or do anything except render. > >>>> > >>>> Maybe people are thinking SharedArrayBuffers are a way to pass in data > >>>> to such a loop but really? How would you pass in an image? As it is > you'd > >>>> have write your own decoder since you can't get the raw data > losslessly out > >>>> of an image from any web APIs and you can't transfer images into the > worker > >>>> (since it's not listening for messages) then you'd need to some how > parse > >>>> the image yourself and copy it into a sharedarraybuffer. That would a > very > >>>> slow jank inducing process in the main thread so now it seems like > the spec > >>>> is saying to use a gl.commit spin loop you need 2 workers, one for > >>>> rendering, one for loading images and other things and then you need > 1 or > >>>> more SharedArrayBuffers and you have to implement a bunch of > synchronization > >>>> stuff just so you can use WebGL in a worker using this pattern > mentioned in > >>>> the spec? > >>>> > >>>> Is that really the intent? Is there something I'm missing? This seems > >>>> like a platform breaking API. Use it and the entire rest of the > platform > >>>> becomes unusable without major amounts of code. > >>>> > >>>> If I'm wrong I'm happy to be corrected. > >>> > >>> > >>> commit() is mainly intended to support compiling multithreaded programs > >>> to WebAssembly. The C language's threading model is that threads start > up > >>> from a start function and only return from it when the thread exits. 
> We are > >>> trying to get real use cases working which transfer all data in to > these > >>> rendering threads via the C heap from other threads. commit() and its > >>> blocking behavior are required in order to reach parity with how native > >>> platforms work in this scenario. > >>> > >> > >> Supporting porting native C app seems like a huge rabbit whole. Are > there > >> going to be sync image loading APIs next? Blocking `select` sockets? > Reading > >> the clipboard without an event? This is making commit be a huge gate. > The > >> moment you use it you throw away the entire rest of the platform. > Really? > >> It's hard to believe that's being signed off on. > >> > >> If commit's only use case is native apps it seems like it should not > ship > >> and should stay behind a flag until these other issues are worked out. I > >> pointed out several above, like the example that there is no way to use > the > >> browser's native image loading with commit even through > sharedarraybuffers. > >> It seems very premature to ship such an api (commit) without actually > >> knowing how those issues will be resolved. Tests can be made behind a > flag. > >> > >> Are there tests and ports running now with this feature behind a flag > that > >> show all these issues can be solved in reasonable ways? > >> > >>> > >>> > >>>> Four: Non front tabs: rAF is currently not delivered if the page is > not > >>>> the front tab which is great but rAF is an event so even when rAF > stops > >>>> firing because the page is not on the front tab other events still > arrive > >>>> (fetch, onmessage, XHR, websockets, etc...) This means even though > your page > >>>> doesn't get a rAF callback it can still process incoming data (like > your > >>>> chat app's messages). > >>>> > >>>> How is that supposed to work with `gl.commit` loops? It's not the > front > >>>> tab so you want to block the commit so the worker doesn't spin and > waste > >>>> time. If the worker locks then that seems to have implications for > all other > >>>> associated workers and the main thread. If you're using Atomics to > sync up > >>>> things suddenly they'll fail indefinitely even more complicating all > the > >>>> code you have to write to use this feature. > >>> > >>> > >>> I think that ideally commit() would block until the tab comes back to > the > >>> foreground, to minimize CPU usage. However, if that turns out to be > >>> suboptimal for some use cases, we could consider throttling commit(), > to > >>> essentially block for some time period and then return control to the > >>> worker. > >> > >> > >> Shouldn't this be figured out before shipping? > >> > >>> > >>> > >>> -Ken > >>> > >>> > >>>> Chrome has already committed to shipping the API. The code as been > >>>> committed so if nothing changes it will ship automatically in a few > weeks > >>>> with all the issues mentioned above not behind a flag but live so it > seems > >>>> important to understand how to use this and if all these issues were > >>>> considered and what their solutions are. > >>>> > >>>> > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... 
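For comparison, a minimal sketch of the "synchronous display" route discussed in this thread, using transferToImageBitmap in the worker and an ImageBitmapRenderingContext on the page. The names view, worker and render() are placeholders, and note that the bitmap-context method is spelled transferFromImageBitmap in the current spec.

    // worker.js -- render into a worker-owned OffscreenCanvas and hand
    // each finished frame to the main thread as an ImageBitmap
    const offscreen = new OffscreenCanvas(640, 480);
    const gl = offscreen.getContext('webgl');

    function frame() {
      render();                                  // placeholder draw code
      const bitmap = offscreen.transferToImageBitmap();
      postMessage({ bitmap }, [bitmap]);         // transferred, not copied
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);

    // main.js -- present each frame together with any DOM changes that
    // must stay in sync with it (e.g. a label over a moving 3D object)
    const ctx = document.getElementById('view').getContext('bitmaprenderer');
    worker.onmessage = (e) => {
      ctx.transferFromImageBitmap(e.data.bitmap);
      // update the overlaid DOM here so both change in the same frame
    };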
URL: From khr...@ Wed Jul 11 05:11:37 2018 From: khr...@ (Gregg Tavares) Date: Wed, 11 Jul 2018 21:11:37 +0900 Subject: [Public WebGL] Re: the commit API on the OffscreenCanvas In-Reply-To: References: Message-ID: Whether or not a page can opt into or out of vsync putting that on rAF options or commit options won't really work because there can be multiple places calling rAF and or commit. rAF is not a WebGL or canvas API it's a general API like setTimeout. It's not really the canvas that is synchronized it's the entire page so that seems like something that would have to happen at a page level. 2018?7?9?(?) 23:52 Justin Novosad : > > > On Sun, Jul 8, 2018 at 1:19 PM Jukka Jyl?nki wrote: > >> >> > In the current semantics it's an error to call commit() from inside a >> requestAnimationFrame callback on a worker. >> >> What is the rationale behind this restriction? Is the intent that >> having an infinite loop running in a web worker from within a rAF() is >> troublesome? >> > > If you put an infinite rendering loop inside of a rAF callback, then you > don't need to be using rAF at all because you are basically ignoring the > frame scheduling behavior provided by rAF. > Since commit() and rAF() each have their own scheduling behaviors you > should not use them together. With rAF, there is an implicit commit at the > end of all script tasks that draw something. This is compatible with the > behavior you get with regular canvases on the main thread, which means that > animation code that was written for running on the main thread can be made > to work on a worker with minimal code changes. > > >> I have been asking before that there was a context creation option to >> WebGL that would remove the current implicit "when I return from event >> handler, if there were any glDraw()s to front buffer in that event >> handler, that's a swap" presentation behavior. It would be nice to be >> able to create a WebGL contexts with an explicit swapping behavior >> mode, where returning from any event handler would not swap, but >> instead swapping would be done only when explicitly told to via a >> .commit() option? >> > > Is there a use case that really needs this? In the infinite loop use case, > this does not really matter, right? Since the task never ends, there is no > implicit swap. > > The only case I can think of is if you have multiple event handlers that > each draw part of the frame, and you want to prevent partially rendered > frames from being displayed. But I can't think of a reason why an app > might need to be written that way. > > >> >> Current Emscripten multithreading web worker GL support has been >> implemented from the perspective that also applications that do not >> drive their own infinite main loops can still call .commit() to >> manually tell when they are finished producing a frame. This feature >> would be super nice because it decouples the overall program flow >> structure from GL buffers presentation logic, and this decoupling >> provides a great deal of flexibility. >> >> Another feature about .commit() is that I would like to see vsync >> control either on commit(), or via some other companion function, that >> would allow code to control whether .commit() would be >> vsyncless/nonblocking, or with vsync swap interval of 1,2,3,etc. And >> an API that would allow querying what vsync rate a vsync swap >> interval==1 means (in the context of current canvas display or the >> context). The rationale for this feature is: >> a. 
If one knows that one has presentation locked to a particular >> vsync rate, one can drive game/animation logic, physics and the rest >> at fixed timesteps. Using performance.now() or histogram based >> guessing based on rAF() firing rates is troublesome, especially if >> under existing GPU load, and will just result in microstuttering. >> Being able to read context.vsyncRate or canvas.vsyncRate or something >> similar will allow animation to reach microstuttering free timing for >> animation. >> b. Displays with >60Hz refresh rates are becoming more common, and >> sometimes content wants to constrain to run at known 30fps, or 60fps >> (e.g. known 30fps source video animation on a WebGL texture might make >> it illogical for the app to render at 60Hz or 120Hz of the source >> display). It would be nice to deal with this via a context.commit(1); >> or context.commit(2); kind of API to specify the desired swap >> interval. (or a separate function would be fine as well). Having a >> context.vsyncRate info field would allow computing what the >> appropriate swapInterval would be desirable, 1/2th, 1/3th, 1/4th or so >> on. >> c. Rendering without vsync enabled at context.commit(0); would be >> great for sites that do performance benchmarking or competitive >> gaming. This would be fine to be constrained for fullscreen apps for >> example. Even if presenting did need to wait for vsync, a >> "context.commit(0);" API could be a way to tell "please present when >> possible, but don't block on my present API call now". >> > > Similar ideas have been proposed as extensions for rAF as well. I think > this should be in the next feature iteration. Basically we should add an > optional dictionary argument to both rAF and commit() that exposes advanced > animation timing options. > > >> In any case, blocking .commit() should never be a full on "glFinish() >> right here and now" kind of API, but something that queues new present >> work to the swap chain, and immediately returns if there is a free >> render target still available on the swap chain (triple buffering). >> > > Agreed. The current spec is not so explicit on how the throttling should > happen because different browsers/OSes use different graphics pipeline > models, but basically commit() should only block when the pipe is full. > This is similar to how current rAF implementations deliberately skip frames > when the GPU can't keep up. > > >> Otherwise there will be CPU-GPU pipeline bubbles. In other words, >> .commit() would only block when producer is running too far ahead and >> has run out of free swap chain buffers to start producing the next >> frame to, that it should wait for one to finish presenting to be freed >> up for reuse. >> >> >> 2018-07-07 0:32 GMT+03:00 Ken Russell : >> > What is known is that some way of committing frames from a spin-loop >> worker >> > is required in the spec, in order to support multithreaded rendering >> from >> > WebAssembly applications. commit() has been tested in small standalone >> test >> > cases. Several groups are collaborating to make multithreaded rendering >> work >> > in a real-world WebAssembly application. >> > >> > It's a fair point that this should be made to fully work before >> shipping it >> > so we will plan to put commit() back behind a flag in Chrome for the >> time >> > being. >> > >> > -Ken >> > >> > >> > On Thu, Jul 5, 2018 at 6:29 PM Gregg Tavares >> wrote: >> >> >> >> >> >> >> >> 2018?7?6?(?) 
5:16 Ken Russell : >> >>> >> >>> On Wed, Jul 4, 2018 at 11:33 PM Gregg Tavares >> >>> wrote: >> >>>> >> >>>> I'm not sure where to bring this up but I've been trying for a >> couple of >> >>>> weeks in other places and getting zero feedback sooo I am hoping you >> guys in >> >>>> charge of things will take a few minutes and read this and take some >> time to >> >>>> thoughtfully respond. >> >>>> >> >>>> It's possible I don't understand how OffscreenCanvas is supposed to >> >>>> work. I've read the spec and written several tests and a couple of >> short >> >>>> examples and this is my understanding. >> >>>> >> >>>> There are basically 2 ways to use it. They are documented at MDN. >> >>>> >> >>>> https://developer.mozilla.org/en-US/docs/Web/API/OffscreenCanvas >> >>>> >> >>>> One is listed as "Synchronous display of frames produced by an >> >>>> OffscreenCanvas". It involves using >> "offscreenCanvas.transferToImageBitmap" >> >>>> inside the worker, transferring that bitmap back to the main thread, >> and >> >>>> calling bitmapContext.transferImageBitmap. This API makes sense to >> me. If >> >>>> you want to synchronize DOM updates with WebGL updates then you need >> to make >> >>>> sure both get updated at the same time. Like say you have an HTML >> label over >> >>>> a moving 3D object. >> >>>> >> >>>> The other is listed as "Asynchronous display of frames produced by an >> >>>> OffscreenCanvas". In that case you just call `gl.commit` inside the >> worker >> >>>> and the canvas back on the page will be updated. This is arguably >> the more >> >>>> common use case. The majority of WebGL and three.js apps etc would >> use this >> >>>> method. The example on MDN shows sending a message to the worker >> each time >> >>>> you want it to render. Testing that on Chrome seems to work but it >> currently >> >>>> has a significant performance penalty. >> >>>> >> >>>> Recently 2 more things were added. One is that requestAnimationFrame >> was >> >>>> added to workers. The other is the commit as been changed to be a >> >>>> synchronous function. The worker freezes until the frame has been >> displayed. >> >>> >> >>> >> >>> Hi Gregg, >> >>> >> >>> requestAnimationFrame on workers was added as a result of feedback >> from >> >>> the W3C TAG. It provides a way to animate and implicitly commit >> frames in >> >>> the same way as with HTMLCanvasElement on the main thread. It >> replaces the >> >>> use of setTimeout() on web workers for animating OffscreenCanvases, >> and >> >>> provides a unified mechanism to allow VR headsets to animate at higher >> >>> framerates than typical monitors. >> >>> >> >>> >> >>>> It's these last 2 things I don't understand. >> >>>> >> >>>> First: given that rAF is now available in workers I would think this >> is >> >>>> valid code >> >>>> >> >>>> // in worker >> >>>> function loop() { >> >>>> render(); >> >>>> requestAnimationFrame(loop); >> >>>> gl.commit(); >> >>>> } >> >>>> loop(); >> >>>> >> >>>> >> >>>> onmessage = function() { >> >>>> // get messages related to say camera position or >> >>>> // window size or mouse position etc to affect rendering >> >>>> }; >> >>>> >> >>>> Unfortunately testing it out in Chrome this doesn't work. The >> >>>> `onmessage` callback is never called regardless of how many messages >> are >> >>>> sent. I filed a bug. Was told "WONTFIX: working as intended" >> >>> >> >>> >> >>> In the current semantics it's an error to call commit() from inside a >> >>> requestAnimationFrame callback on a worker. 
The spec and >> implementations >> >>> should be changed to throw an exception from commit() in this case. >> >>> >> >>> I updated your samples in your Chromium bug report >> >>> http://crbug.com/859275 to remove the call to commit() from within >> the rAF >> >>> callback and they work very well. No flickering, and work exactly as >> you >> >>> intended. Also replied to your same questions on >> >>> https://github.com/w3ctag/design-reviews/issues/141 . >> >>> >> >>> >> >>>> >> >>>> Really? Is that really the intent of the spec? Apple? Mozilla? >> >>>> Microsoft? Do you agree that the code above is not a supported use >> case and >> >>>> is working as intended? >> >>>> >> >>>> Second: other events and callbacks don't work >> >>>> >> >>>> // in worker >> >>>> fetch('someimage.png', {mode:'cors'}).then(function(response) { >> >>>> return response.blob(); >> >>>> }).then(function(blob) { >> >>>> return createImageBitmap(response.blob()); >> >>>> }).then(function(bitmap) { >> >>>> gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, >> >>>> gl.UNSIGNED_BYTE, bitmap); >> >>>> }); >> >>>> >> >>>> function loop() { >> >>>> render(); >> >>>> requestAnimationFrame(loop); >> >>>> gl.commit(); >> >>>> } >> >>>> loop(); >> >>>> >> >>>> This also does not work. The fetch response never comes. My guess is >> >>>> this is because in Chrome commit blocks and rAF event gets pushed to >> the top >> >>>> of the event queue so no other events ever get processed. The spec >> has >> >>>> nothing to say about this. Is this supposed to work? It seems like a >> valid >> >>>> use case. Note that switching the end of loop to >> >>>> >> >>>> gl.commit(); >> >>>> requestAnimationFrame(loop); >> >>>> >> >>>> also does not work. >> >>>> >> >>>> Is that correct that it should not work? I guess I don't really >> >>>> understand the point of having rAF in worker if these use cases are >> not >> >>>> supposed to work. Are they? If they are not supposed to work can >> someone >> >>>> please explain rAF's use case in a worker? >> >>> >> >>> >> >>> rAF in a worker replaces the use of commit(). Another alternative to >> >>> animating in a worker would be setTimeout(), but now that rAF is >> present in >> >>> workers, it's the best alternative. >> >>> >> >>>> Third, according to various comments around the specs one use case >> is a >> >>>> spin loop on gl.commit for webassembly ports. Effectively this is >> supposed >> >>>> to work >> >>>> >> >>>> while(true) { >> >>>> render(); >> >>>> gl.commit(); >> >>>> } >> >>>> >> >>>> But I don't understand how this is useful given that no events come >> in >> >>>> if you do that. You can't communicate with the worker. The worker >> can't load >> >>>> files or call fetch or get a websocket message or receive input >> passed in >> >>>> from the main thread or do anything except render. >> >>>> >> >>>> Maybe people are thinking SharedArrayBuffers are a way to pass in >> data >> >>>> to such a loop but really? How would you pass in an image? As it is >> you'd >> >>>> have write your own decoder since you can't get the raw data >> losslessly out >> >>>> of an image from any web APIs and you can't transfer images into the >> worker >> >>>> (since it's not listening for messages) then you'd need to some how >> parse >> >>>> the image yourself and copy it into a sharedarraybuffer. 
That would >> a very >> >>>> slow jank inducing process in the main thread so now it seems like >> the spec >> >>>> is saying to use a gl.commit spin loop you need 2 workers, one for >> >>>> rendering, one for loading images and other things and then you need >> 1 or >> >>>> more SharedArrayBuffers and you have to implement a bunch of >> synchronization >> >>>> stuff just so you can use WebGL in a worker using this pattern >> mentioned in >> >>>> the spec? >> >>>> >> >>>> Is that really the intent? Is there something I'm missing? This seems >> >>>> like a platform breaking API. Use it and the entire rest of the >> platform >> >>>> becomes unusable without major amounts of code. >> >>>> >> >>>> If I'm wrong I'm happy to be corrected. >> >>> >> >>> >> >>> commit() is mainly intended to support compiling multithreaded >> programs >> >>> to WebAssembly. The C language's threading model is that threads >> start up >> >>> from a start function and only return from it when the thread exits. >> We are >> >>> trying to get real use cases working which transfer all data in to >> these >> >>> rendering threads via the C heap from other threads. commit() and its >> >>> blocking behavior are required in order to reach parity with how >> native >> >>> platforms work in this scenario. >> >>> >> >> >> >> Supporting porting native C app seems like a huge rabbit whole. Are >> there >> >> going to be sync image loading APIs next? Blocking `select` sockets? >> Reading >> >> the clipboard without an event? This is making commit be a huge gate. >> The >> >> moment you use it you throw away the entire rest of the platform. >> Really? >> >> It's hard to believe that's being signed off on. >> >> >> >> If commit's only use case is native apps it seems like it should not >> ship >> >> and should stay behind a flag until these other issues are worked out. >> I >> >> pointed out several above, like the example that there is no way to >> use the >> >> browser's native image loading with commit even through >> sharedarraybuffers. >> >> It seems very premature to ship such an api (commit) without actually >> >> knowing how those issues will be resolved. Tests can be made behind a >> flag. >> >> >> >> Are there tests and ports running now with this feature behind a flag >> that >> >> show all these issues can be solved in reasonable ways? >> >> >> >>> >> >>> >> >>>> Four: Non front tabs: rAF is currently not delivered if the page is >> not >> >>>> the front tab which is great but rAF is an event so even when rAF >> stops >> >>>> firing because the page is not on the front tab other events still >> arrive >> >>>> (fetch, onmessage, XHR, websockets, etc...) This means even though >> your page >> >>>> doesn't get a rAF callback it can still process incoming data (like >> your >> >>>> chat app's messages). >> >>>> >> >>>> How is that supposed to work with `gl.commit` loops? It's not the >> front >> >>>> tab so you want to block the commit so the worker doesn't spin and >> waste >> >>>> time. If the worker locks then that seems to have implications for >> all other >> >>>> associated workers and the main thread. If you're using Atomics to >> sync up >> >>>> things suddenly they'll fail indefinitely even more complicating all >> the >> >>>> code you have to write to use this feature. >> >>> >> >>> >> >>> I think that ideally commit() would block until the tab comes back to >> the >> >>> foreground, to minimize CPU usage. 
However, if that turns out to be >> >>> suboptimal for some use cases, we could consider throttling commit(), >> to >> >>> essentially block for some time period and then return control to the >> >>> worker. >> >> >> >> >> >> Shouldn't this be figured out before shipping? >> >> >> >>> >> >>> >> >>> -Ken >> >>> >> >>> >> >>>> Chrome has already committed to shipping the API. The code as been >> >>>> committed so if nothing changes it will ship automatically in a few >> weeks >> >>>> with all the issues mentioned above not behind a flag but live so it >> seems >> >>>> important to understand how to use this and if all these issues were >> >>>> considered and what their solutions are. >> >>>> >> >>>> >> > >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jav...@ Thu Jul 12 02:47:03 2018 From: jav...@ (Javi Agenjo) Date: Thu, 12 Jul 2018 11:47:03 +0200 Subject: [Public WebGL] Rendering to HDR displays (10bits per color component) Message-ID: Hi: Now that Chrome supports HDR video rendering (in Windows 10) with 10bits per color (using the VP9 Profile 2 10-bit) I was wondering if there would be any changes that we can instantiate a WebGL Context that has more than 8bits per color component, now that HDR displays are starting to roll out commercially. Sorry if this topic has been brought before or if this feature is already supported, but I did my research and couldnt find anything. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Thu Jul 12 03:06:27 2018 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Thu, 12 Jul 2018 12:06:27 +0200 Subject: [Public WebGL] Rendering to HDR displays (10bits per color component) In-Reply-To: References: Message-ID: My knowledge on the topic might be a bit outdated, but here's my understanding. Neither OpenGL nor Direct3D support HDR front/back buffers and even though the GPU might be able to output a HDR signal to the monitor (something that previously was only possible with ugly hacks in medical imaging setups with a special driver that interlaced 8-bit and then tacked on 2 more bits into a seperate render target), when you render hardware accelerated all your output values get clamped to 8-bits per component when you put them into gl_FragColor for the rasterizer. Please correct me if this is wrong (by now). On Thu, Jul 12, 2018 at 11:47 AM, Javi Agenjo wrote: > Hi: > > Now that Chrome supports HDR video rendering (in Windows 10) with 10bits > per color (using the VP9 Profile 2 10-bit) I was wondering if there would > be any changes that we can instantiate a WebGL Context that has more than > 8bits per color component, now that HDR displays are starting to roll out > commercially. > > Sorry if this topic has been brought before or if this feature is already > supported, but I did my research and couldnt find anything. > > Thanks > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pya...@ Thu Jul 12 03:10:21 2018 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Thu, 12 Jul 2018 12:10:21 +0200 Subject: [Public WebGL] Rendering to HDR displays (10bits per color component) In-Reply-To: References: Message-ID: Oh and in any case afaik the only 10-bit consumer grad display panels out there are cheap, don't really do 10-bit colors and have poor gamut/contrast effectively negating any benefit. On Thu, Jul 12, 2018 at 12:06 PM, Florian B?sch wrote: > My knowledge on the topic might be a bit outdated, but here's my > understanding. > > Neither OpenGL nor Direct3D support HDR front/back buffers and even though > the GPU might be able to output a HDR signal to the monitor (something that > previously was only possible with ugly hacks in medical imaging setups with > a special driver that interlaced 8-bit and then tacked on 2 more bits into > a seperate render target), when you render hardware accelerated all your > output values get clamped to 8-bits per component when you put them into > gl_FragColor for the rasterizer. > > Please correct me if this is wrong (by now). > > On Thu, Jul 12, 2018 at 11:47 AM, Javi Agenjo > wrote: > >> Hi: >> >> Now that Chrome supports HDR video rendering (in Windows 10) with 10bits >> per color (using the VP9 Profile 2 10-bit) I was wondering if there would >> be any changes that we can instantiate a WebGL Context that has more than >> 8bits per color component, now that HDR displays are starting to roll out >> commercially. >> >> Sorry if this topic has been brought before or if this feature is already >> supported, but I did my research and couldnt find anything. >> >> Thanks >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jav...@ Thu Jul 12 03:33:58 2018 From: jav...@ (Javi Agenjo) Date: Thu, 12 Jul 2018 12:33:58 +0200 Subject: [Public WebGL] Rendering to HDR displays (10bits per color component) In-Reply-To: References: Message-ID: I see, thanks for the info Florian. I didnt know front/back was limited to 8 bits on desktop computers, but if Chrome supports HDR video rendering (as far as they say), there has to be some sort of pipeline going on outputing to 10bits, unless it is all happenning beyond the pipeline through some sort of decoding chip inside the GPU. Im asking because Im working in an European project related to HDR (HDR4EU ) and there are companies pushing HDR displays for consumers so there are reasons to expect changes in the near future, with better quality and gamuts. So it would be interesting to see some suggestions about how browsers can adapt to that change in the next years. Cheers On Thu, Jul 12, 2018 at 12:10 PM, Florian B?sch wrote: > Oh and in any case afaik the only 10-bit consumer grad display panels out > there are cheap, don't really do 10-bit colors and have poor gamut/contrast > effectively negating any benefit. > > On Thu, Jul 12, 2018 at 12:06 PM, Florian B?sch wrote: > >> My knowledge on the topic might be a bit outdated, but here's my >> understanding. 
>> >> Neither OpenGL nor Direct3D support HDR front/back buffers and even >> though the GPU might be able to output a HDR signal to the monitor >> (something that previously was only possible with ugly hacks in medical >> imaging setups with a special driver that interlaced 8-bit and then tacked >> on 2 more bits into a seperate render target), when you render hardware >> accelerated all your output values get clamped to 8-bits per component when >> you put them into gl_FragColor for the rasterizer. >> >> Please correct me if this is wrong (by now). >> >> On Thu, Jul 12, 2018 at 11:47 AM, Javi Agenjo >> wrote: >> >>> Hi: >>> >>> Now that Chrome supports HDR video rendering (in Windows 10) with 10bits >>> per color (using the VP9 Profile 2 10-bit) I was wondering if there would >>> be any changes that we can instantiate a WebGL Context that has more than >>> 8bits per color component, now that HDR displays are starting to roll out >>> commercially. >>> >>> Sorry if this topic has been brought before or if this feature is >>> already supported, but I did my research and couldnt find anything. >>> >>> Thanks >>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Thu Jul 12 04:16:00 2018 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Thu, 12 Jul 2018 13:16:00 +0200 Subject: [Public WebGL] Rendering to HDR displays (10bits per color component) In-Reply-To: References: Message-ID: On Thu, Jul 12, 2018 at 12:33 PM, Javi Agenjo wrote: > but if Chrome supports HDR video rendering (as far as they say), there has > to be some sort of pipeline going on outputing to 10bits, unless it is > all happenning beyond the pipeline through some sort of decoding chip > inside the GPU. > My guess is it's a feature of the hardware accelerated video decoder. > Im asking because Im working in an European project related to HDR (HDR4EU > ) and there are companies pushing HDR > displays for consumers so there are reasons to expect changes in the near > future, with better quality and gamuts. So it would be interesting to see > some suggestions about how browsers can adapt to that change in the next > years. > I would absolutely love HDR capability trough the pipeline. The 8-bit per channel convention is ridiculous nowadays because the actual display hardware (especially in OLED displays) is capable of many more graduations (even if the decoder chips is in the monitor aren't). Linear color space floating point rendering is becoming the norm, only for the result to be squashed together into a gamma/8-bit channel. It's nuts. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jav...@ Thu Jul 12 06:08:27 2018 From: jav...@ (Javi Agenjo) Date: Thu, 12 Jul 2018 15:08:27 +0200 Subject: [Public WebGL] Rendering to HDR displays (10bits per color component) In-Reply-To: References: Message-ID: Something to add is that HDMI 2.0a and forward supports HDR formats, so it should be possible by GPU manufacturers to add support for 10bits front/back buffers just by updating the drivers. Current HDR professional solutions rely on rendering to a FBO and downloading every frame back to RAM to send it through some special video card to the HDR display. Anyway, I guess this request will have to scalate to OpenGL group. Thanks for your time Florian. 
On Thu, Jul 12, 2018 at 1:16 PM, Florian Bösch wrote: > On Thu, Jul 12, 2018 at 12:33 PM, Javi Agenjo > wrote: > >> but if Chrome supports HDR video rendering (as far as they say), there >> has to be some sort of pipeline going on outputting to 10bits, unless it is >> all happening beyond the pipeline through some sort of decoding chip >> inside the GPU. >> > > My guess is it's a feature of the hardware accelerated video decoder. > > >> I'm asking because I'm working in a European project related to HDR ( >> HDR4EU ) and there are companies pushing >> HDR displays for consumers so there are reasons to expect changes in the >> near future, with better quality and gamuts. So it would be interesting to >> see some suggestions about how browsers can adapt to that change in the >> next years. >> > > I would absolutely love HDR capability through the pipeline. The 8-bit per > channel convention is ridiculous nowadays because the actual display > hardware (especially in OLED displays) is capable of many more gradations > (even if the decoder chips in the monitors aren't). Linear color space > floating point rendering is becoming the norm, only for the result to be > squashed together into a gamma/8-bit channel. It's nuts. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Thu Jul 12 06:14:57 2018 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Thu, 12 Jul 2018 15:14:57 +0200 Subject: [Public WebGL] Rendering to HDR displays (10bits per color component) In-Reply-To: References: Message-ID: On Thu, Jul 12, 2018 at 3:08 PM, Javi Agenjo wrote: > so it should be possible by GPU manufacturers to add support for 10bits > front/back buffers just by updating the drivers. > I'm not sure about that. You're assuming that the HW-acc rasterizer is able to switch to different numerical formats depending on storage buffer setting. However afaik the HW-acc rasterizer is essentially a fixed chunk of silicon with very few logical path bits in it, and it might simply not have the data lanes available for each pixel to be of higher precision and only offer 24 lanes for the transport of pixel values. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Thu Jul 12 06:16:41 2018 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Thu, 12 Jul 2018 15:16:41 +0200 Subject: [Public WebGL] Rendering to HDR displays (10bits per color component) In-Reply-To: References: Message-ID: I know there were experiments with having a software defined rasterizer, but it was so slow it's just not feasible. On Thu, Jul 12, 2018 at 3:14 PM, Florian Bösch wrote: > On Thu, Jul 12, 2018 at 3:08 PM, Javi Agenjo > wrote: > >> so it should be possible by GPU manufacturers to add support for 10bits >> front/back buffers just by updating the drivers. >> > > I'm not sure about that. You're assuming that the HW-acc rasterizer is > able to switch to different numerical formats depending on storage buffer > setting. However afaik the HW-acc rasterizer is essentially a fixed chunk > of silicon with very few logical path bits in it, and it might simply not > have the data lanes available for each pixel to be of higher precision and > only offer 24 lanes for the transport of pixel values. > -------------- next part -------------- An HTML attachment was scrubbed...
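What WebGL can do today is the intermediate-buffer version of this: render linear values that may exceed 1.0 into a floating-point framebuffer, then tone map down to the 8-bit back buffer for display. A minimal WebGL 2 sketch, assuming the EXT_color_buffer_float extension is available and that canvas is an existing canvas element; renderHDRScene() and drawTonemapPass() are placeholders.

    const gl = canvas.getContext('webgl2');
    if (!gl.getExtension('EXT_color_buffer_float')) {
      throw new Error('float color attachments not supported');
    }

    // RGBA16F texture used as an HDR color attachment
    const hdrTex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, hdrTex);
    gl.texStorage2D(gl.TEXTURE_2D, 1, gl.RGBA16F,
                    gl.drawingBufferWidth, gl.drawingBufferHeight);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

    const fbo = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                            gl.TEXTURE_2D, hdrTex, 0);

    // 1) render the scene in linear space; values above 1.0 survive here
    renderHDRScene(gl);

    // 2) tone map into the default (8-bit) back buffer for presentation
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    drawTonemapPass(gl, hdrTex);

The final squash to 8 bits still happens at step 2, which is exactly the limitation being discussed; a higher-precision default framebuffer would remove that last step.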
URL: From tsh...@ Fri Jul 13 03:55:39 2018 From: tsh...@ (Tarek Sherif) Date: Fri, 13 Jul 2018 06:55:39 -0400 Subject: [Public WebGL] EXT_disjoint_timer_query disabled In-Reply-To: References: <919DFFC6-6587-45D6-B5B7-E0C954B440DA@callow.im> Message-ID: Hi all, Was just reading that site isolation is live in Chrome: https://arstechnica.com/information-technology/2018/07/chrome-enables-site-isolation-to-blunt-the-threat-of-spectre-attacks/ But the GPU timer is still unavailable any idea of an ETA? And any news when it will be back in Firefox? Tarek Sherif http://tareksherif.net/ https://www.biodigital.com/ On Mon, Jun 4, 2018 at 3:33 AM, Markus M?nig wrote: > As somebody who is trying to make a living by coding WebGL based > applications for the Web and Desktop (PaintSupreme3D.com, > Material-Z.com etc) I fully agree with Florian. > > WebGL is a great concept but was coded by a group of people who never > tried to create a real world application with it. > > The fact that you can not compile shaders in the background is a major > design flaw and has not been addressed for years because nobody really > seems to care. > > Now we cannot time our shaders anymore, which makes it nearly > impossible for some of us to ship our applications. If some of you > would care, there would be ways to enable this again to users, like > showing a dialog and let the user decide to enable this extension or > not per application. > > If Microsoft has a security flaw in their graphics subsystem, would > they disable the whole graphics subsystem ? No, they would find a > solution because their business depends on it. > > Google just does not care. We will remember. > > > On Sat, May 19, 2018 at 11:36 PM, Ken Russell wrote: > > On Sat, May 19, 2018 at 3:36 AM Mark Callow wrote: > >> > >> > >> > >> On May 19, 2018, at 10:08, Ken Russell wrote: > >> > >> act as a high-precision timer to carry out Spectre-like attacks > >> > >> > >> I thought the OS?s already had mitigations for Spectre. Why do the > >> browsers need additional ones? > > > > > > Please see my reply to Florian. > > > > -Ken > > > > > >> > >> > >> Regards > >> > >> -Mark > >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Fri Jul 13 04:16:37 2018 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Fri, 13 Jul 2018 13:16:37 +0200 Subject: [Public WebGL] EXT_disjoint_timer_query disabled In-Reply-To: References: Message-ID: On Sat, May 19, 2018 at 6:35 PM, Ken Russell wrote: > Chrome has always been returning microsecond resolution for these queries > rather than nanosecond resolution. In discussion with the GLitch > researchers, it seems likely that this reduction in precision is sufficient > ? and since no WebGL developer ever complained about low resolution of > Chrome's timer queries, there's no need to make any changes to the > precision. > Actually Ken... I was running some tests on instruction speeds a while back to figure out optimal codepaths with disjoint timer query and didn't get any result but noise, so I assumed, it's all the same to the GPU and they're that smart to even optimize badly written GLSL to "good". Is it perhaps possible I was just measuring how much you fuzzed my results? Could you please not fuzz my results? -------------- next part -------------- An HTML attachment was scrubbed... 
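For readers who have not used the extension under discussion, this is roughly what a single GPU timing measurement looks like with the WebGL 2 flavour, EXT_disjoint_timer_query_webgl2 (the WebGL 1 variant exposes the same queries through ext.createQueryEXT and friends). drawSomething() is a placeholder, and as noted above the resolution of the reported value is up to the browser.

    const gl = canvas.getContext('webgl2');
    const ext = gl.getExtension('EXT_disjoint_timer_query_webgl2');
    if (!ext) throw new Error('timer queries unavailable (or disabled)');

    const query = gl.createQuery();
    gl.beginQuery(ext.TIME_ELAPSED_EXT, query);
    drawSomething(gl);                 // the GPU work being measured
    gl.endQuery(ext.TIME_ELAPSED_EXT);

    function poll() {
      const disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);
      const available = gl.getQueryParameter(query, gl.QUERY_RESULT_AVAILABLE);
      if (!available) {
        requestAnimationFrame(poll);   // the result lands a frame or two later
      } else if (!disjoint) {
        const ns = gl.getQueryParameter(query, gl.QUERY_RESULT); // nanoseconds
        console.log('GPU time: ' + (ns / 1e6) + ' ms');
      }                                // if disjoint, discard the measurement
    }
    requestAnimationFrame(poll);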
URL: From kbr...@ Fri Jul 13 10:36:28 2018 From: kbr...@ (Ken Russell) Date: Fri, 13 Jul 2018 10:36:28 -0700 Subject: [Public WebGL] Rendering to HDR displays (10bits per color component) In-Reply-To: References: Message-ID: Yes! There's work underway to support an HDR back buffer for the WebGL rendering context. The current proposed API is here: https://github.com/WICG/canvas-color-space/blob/master/CanvasColorSpaceProposal.md Color spaces and profiles are a complex topic (and I'm no expert) but my understanding is that the initial switch is to be able to allocate a float16 back buffer for WebGL. The browser will then assume responsibility for presenting that to the screen. The colorspace will, I think, be extended sRGB. I've heard from a co-worker who's actively working in this area that they have HDR output coming out of WebGL on an HDR monitor. Not sure of the standardization / shipment status of this. To understand the current status in Chrome, please sign up for this group and post to it: https://groups.google.com/a/chromium.org/forum/#!forum/graphics-dev -Ken On Thu, Jul 12, 2018 at 2:48 AM Javi Agenjo wrote: > Hi: > > Now that Chrome supports HDR video rendering (in Windows 10) with 10bits > per color (using the VP9 Profile 2 10-bit) I was wondering if there would > be any changes that we can instantiate a WebGL Context that has more than > 8bits per color component, now that HDR displays are starting to roll out > commercially. > > Sorry if this topic has been brought before or if this feature is already > supported, but I did my research and couldnt find anything. > > Thanks > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jav...@ Fri Jul 13 11:07:42 2018 From: jav...@ (Javi Agenjo) Date: Fri, 13 Jul 2018 20:07:42 +0200 Subject: [Public WebGL] Rendering to HDR displays (10bits per color component) In-Reply-To: References: Message-ID: Thanks Ken!, great news then, I will keep an eye on the status. Cheers On Fri, Jul 13, 2018 at 7:36 PM, Ken Russell wrote: > Yes! There's work underway to support an HDR back buffer for the WebGL > rendering context. The current proposed API is here: > https://github.com/WICG/canvas-color-space/blob/master/ > CanvasColorSpaceProposal.md > > Color spaces and profiles are a complex topic (and I'm no expert) but my > understanding is that the initial switch is to be able to allocate a > float16 back buffer for WebGL. The browser will then assume responsibility > for presenting that to the screen. The colorspace will, I think, be > extended sRGB. > > I've heard from a co-worker who's actively working in this area that they > have HDR output coming out of WebGL on an HDR monitor. > > Not sure of the standardization / shipment status of this. To understand > the current status in Chrome, please sign up for this group and post to it: > https://groups.google.com/a/chromium.org/forum/#!forum/graphics-dev > > -Ken > > > > On Thu, Jul 12, 2018 at 2:48 AM Javi Agenjo wrote: > >> Hi: >> >> Now that Chrome supports HDR video rendering (in Windows 10) with 10bits >> per color (using the VP9 Profile 2 10-bit) I was wondering if there would >> be any changes that we can instantiate a WebGL Context that has more than >> 8bits per color component, now that HDR displays are starting to roll out >> commercially. >> >> Sorry if this topic has been brought before or if this feature is already >> supported, but I did my research and couldnt find anything. 
>> >> Thanks >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Fri Jul 13 11:13:50 2018 From: kbr...@ (Ken Russell) Date: Fri, 13 Jul 2018 11:13:50 -0700 Subject: [Public WebGL] Rendering to HDR displays (10bits per color component) In-Reply-To: References: Message-ID: Please do post to graphics-dev...@ . The folks working on HDR support are on that list, but not on this one. -Ken On Fri, Jul 13, 2018 at 11:09 AM Javi Agenjo wrote: > Thanks Ken!, great news then, I will keep an eye on the status. > > Cheers > > On Fri, Jul 13, 2018 at 7:36 PM, Ken Russell wrote: > >> Yes! There's work underway to support an HDR back buffer for the WebGL >> rendering context. The current proposed API is here: >> >> https://github.com/WICG/canvas-color-space/blob/master/CanvasColorSpaceProposal.md >> >> Color spaces and profiles are a complex topic (and I'm no expert) but my >> understanding is that the initial switch is to be able to allocate a >> float16 back buffer for WebGL. The browser will then assume responsibility >> for presenting that to the screen. The colorspace will, I think, be >> extended sRGB. >> >> I've heard from a co-worker who's actively working in this area that they >> have HDR output coming out of WebGL on an HDR monitor. >> >> Not sure of the standardization / shipment status of this. To understand >> the current status in Chrome, please sign up for this group and post to it: >> https://groups.google.com/a/chromium.org/forum/#!forum/graphics-dev >> >> -Ken >> >> >> >> On Thu, Jul 12, 2018 at 2:48 AM Javi Agenjo >> wrote: >> >>> Hi: >>> >>> Now that Chrome supports HDR video rendering (in Windows 10) with 10bits >>> per color (using the VP9 Profile 2 10-bit) I was wondering if there would >>> be any changes that we can instantiate a WebGL Context that has more than >>> 8bits per color component, now that HDR displays are starting to roll out >>> commercially. >>> >>> Sorry if this topic has been brought before or if this feature is >>> already supported, but I did my research and couldnt find anything. >>> >>> Thanks >>> >>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Fri Jul 13 11:30:44 2018 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Fri, 13 Jul 2018 20:30:44 +0200 Subject: [Public WebGL] Rendering to HDR displays (10bits per color component) In-Reply-To: References: Message-ID: Afaik you cannot create a float16 frontbuffer with OpenGL because WGL's setPixelFormat function does not support a type argument (only the bitplanes) but from that it cannot infer what kind of buffer you where meant to have (other than an integer one). It looks like you could in theory create a float16 frontbuffer with Direct3Ds DXGI_SWAP_CHAIN_DESC which supports the format argument of the type DXGI_FORMAT_R16G16B16A16_FLOAT. I have no idea what the hardware support for that is, or if it even works at all as intended. On Fri, Jul 13, 2018 at 8:13 PM, Ken Russell wrote: > Please do post to graphics-dev...@ . The folks working on HDR > support are on that list, but not on this one. > > -Ken > > > On Fri, Jul 13, 2018 at 11:09 AM Javi Agenjo > wrote: > >> Thanks Ken!, great news then, I will keep an eye on the status. >> >> Cheers >> >> On Fri, Jul 13, 2018 at 7:36 PM, Ken Russell wrote: >> >>> Yes! There's work underway to support an HDR back buffer for the WebGL >>> rendering context. 
The current proposed API is here: >>> https://github.com/WICG/canvas-color-space/blob/master/ >>> CanvasColorSpaceProposal.md >>> >>> Color spaces and profiles are a complex topic (and I'm no expert) but my >>> understanding is that the initial switch is to be able to allocate a >>> float16 back buffer for WebGL. The browser will then assume responsibility >>> for presenting that to the screen. The colorspace will, I think, be >>> extended sRGB. >>> >>> I've heard from a co-worker who's actively working in this area that >>> they have HDR output coming out of WebGL on an HDR monitor. >>> >>> Not sure of the standardization / shipment status of this. To understand >>> the current status in Chrome, please sign up for this group and post to it: >>> https://groups.google.com/a/chromium.org/forum/#!forum/graphics-dev >>> >>> -Ken >>> >>> >>> >>> On Thu, Jul 12, 2018 at 2:48 AM Javi Agenjo >>> wrote: >>> >>>> Hi: >>>> >>>> Now that Chrome supports HDR video rendering (in Windows 10) with >>>> 10bits per color (using the VP9 Profile 2 10-bit) I was wondering if there >>>> would be any changes that we can instantiate a WebGL Context that has more >>>> than 8bits per color component, now that HDR displays are starting to roll >>>> out commercially. >>>> >>>> Sorry if this topic has been brought before or if this feature is >>>> already supported, but I did my research and couldnt find anything. >>>> >>>> Thanks >>>> >>>> >>>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Fri Jul 13 11:34:47 2018 From: kbr...@ (Ken Russell) Date: Fri, 13 Jul 2018 11:34:47 -0700 Subject: [Public WebGL] Rendering to HDR displays (10bits per color component) In-Reply-To: References: Message-ID: I'm not 100% sure how this is implemented nor on what platforms. The folks on graphics-dev...@ will know. My understanding is that it's on Windows (using D3D under the hood) and Android, and possibly macOS. On each of these platforms I believe that a platform-specific GPU memory buffer is allocated with a higher bit depth, it's bound to an OpenGL texture using platform-specific APIs, and ultimately presented to the window system's compositor again using platform-specific APIs. -Ken On Fri, Jul 13, 2018 at 11:30 AM Florian B?sch wrote: > Afaik you cannot create a float16 frontbuffer with OpenGL because WGL's > setPixelFormat function does not support a type argument (only the > bitplanes) but from that it cannot infer what kind of buffer you where > meant to have (other than an integer one). > > It looks like you could in theory create a float16 frontbuffer with > Direct3Ds DXGI_SWAP_CHAIN_DESC which supports the format argument of the > type DXGI_FORMAT_R16G16B16A16_FLOAT. I have no idea what the hardware > support for that is, or if it even works at all as intended. > > On Fri, Jul 13, 2018 at 8:13 PM, Ken Russell wrote: > >> Please do post to graphics-dev...@ . The folks working on HDR >> support are on that list, but not on this one. >> >> -Ken >> >> >> On Fri, Jul 13, 2018 at 11:09 AM Javi Agenjo >> wrote: >> >>> Thanks Ken!, great news then, I will keep an eye on the status. >>> >>> Cheers >>> >>> On Fri, Jul 13, 2018 at 7:36 PM, Ken Russell wrote: >>> >>>> Yes! There's work underway to support an HDR back buffer for the WebGL >>>> rendering context. 
The current proposed API is here: >>>> >>>> https://github.com/WICG/canvas-color-space/blob/master/CanvasColorSpaceProposal.md >>>> >>>> Color spaces and profiles are a complex topic (and I'm no expert) but >>>> my understanding is that the initial switch is to be able to allocate a >>>> float16 back buffer for WebGL. The browser will then assume responsibility >>>> for presenting that to the screen. The colorspace will, I think, be >>>> extended sRGB. >>>> >>>> I've heard from a co-worker who's actively working in this area that >>>> they have HDR output coming out of WebGL on an HDR monitor. >>>> >>>> Not sure of the standardization / shipment status of this. To >>>> understand the current status in Chrome, please sign up for this group and >>>> post to it: >>>> https://groups.google.com/a/chromium.org/forum/#!forum/graphics-dev >>>> >>>> -Ken >>>> >>>> >>>> >>>> On Thu, Jul 12, 2018 at 2:48 AM Javi Agenjo >>>> wrote: >>>> >>>>> Hi: >>>>> >>>>> Now that Chrome supports HDR video rendering (in Windows 10) with >>>>> 10bits per color (using the VP9 Profile 2 10-bit) I was wondering if there >>>>> would be any changes that we can instantiate a WebGL Context that has more >>>>> than 8bits per color component, now that HDR displays are starting to roll >>>>> out commercially. >>>>> >>>>> Sorry if this topic has been brought before or if this feature is >>>>> already supported, but I did my research and couldnt find anything. >>>>> >>>>> Thanks >>>>> >>>>> >>>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From geo...@ Fri Jul 13 11:44:03 2018 From: geo...@ (Geoff Lang) Date: Fri, 13 Jul 2018 14:44:03 -0400 Subject: [Public WebGL] Rendering to HDR displays (10bits per color component) In-Reply-To: References: Message-ID: ANGLE is capable of creating swap chains with D3D11 using DXGI_FORMAT_R16G16B16A16_FLOAT and DXGI_FORMAT_R10G10B10A2_UNORM for HDR displays, we had to support this for HDR video output in Chrome. There is also ARB_color_buffer_float for desktop GL which adds floating point backbuffer formats to WGL and GLX but I'm not sure about the guarantees on how these are composited. On Fri, Jul 13, 2018 at 2:35 PM Ken Russell wrote: > I'm not 100% sure how this is implemented nor on what platforms. The folks > on graphics-dev...@ will know. My understanding is that it's on > Windows (using D3D under the hood) and Android, and possibly macOS. On each > of these platforms I believe that a platform-specific GPU memory buffer is > allocated with a higher bit depth, it's bound to an OpenGL texture using > platform-specific APIs, and ultimately presented to the window system's > compositor again using platform-specific APIs. > > -Ken > > > On Fri, Jul 13, 2018 at 11:30 AM Florian B?sch wrote: > >> Afaik you cannot create a float16 frontbuffer with OpenGL because WGL's >> setPixelFormat function does not support a type argument (only the >> bitplanes) but from that it cannot infer what kind of buffer you where >> meant to have (other than an integer one). >> >> It looks like you could in theory create a float16 frontbuffer with >> Direct3Ds DXGI_SWAP_CHAIN_DESC which supports the format argument of the >> type DXGI_FORMAT_R16G16B16A16_FLOAT. I have no idea what the hardware >> support for that is, or if it even works at all as intended. >> >> On Fri, Jul 13, 2018 at 8:13 PM, Ken Russell wrote: >> >>> Please do post to graphics-dev...@ . The folks working on HDR >>> support are on that list, but not on this one. 
>>> >>> -Ken >>> >>> >>> On Fri, Jul 13, 2018 at 11:09 AM Javi Agenjo >>> wrote: >>> >>>> Thanks Ken!, great news then, I will keep an eye on the status. >>>> >>>> Cheers >>>> >>>> On Fri, Jul 13, 2018 at 7:36 PM, Ken Russell wrote: >>>> >>>>> Yes! There's work underway to support an HDR back buffer for the WebGL >>>>> rendering context. The current proposed API is here: >>>>> >>>>> https://github.com/WICG/canvas-color-space/blob/master/CanvasColorSpaceProposal.md >>>>> >>>>> Color spaces and profiles are a complex topic (and I'm no expert) but >>>>> my understanding is that the initial switch is to be able to allocate a >>>>> float16 back buffer for WebGL. The browser will then assume responsibility >>>>> for presenting that to the screen. The colorspace will, I think, be >>>>> extended sRGB. >>>>> >>>>> I've heard from a co-worker who's actively working in this area that >>>>> they have HDR output coming out of WebGL on an HDR monitor. >>>>> >>>>> Not sure of the standardization / shipment status of this. To >>>>> understand the current status in Chrome, please sign up for this group and >>>>> post to it: >>>>> https://groups.google.com/a/chromium.org/forum/#!forum/graphics-dev >>>>> >>>>> -Ken >>>>> >>>>> >>>>> >>>>> On Thu, Jul 12, 2018 at 2:48 AM Javi Agenjo >>>>> wrote: >>>>> >>>>>> Hi: >>>>>> >>>>>> Now that Chrome supports HDR video rendering (in Windows 10) with >>>>>> 10bits per color (using the VP9 Profile 2 10-bit) I was wondering if there >>>>>> would be any changes that we can instantiate a WebGL Context that has more >>>>>> than 8bits per color component, now that HDR displays are starting to roll >>>>>> out commercially. >>>>>> >>>>>> Sorry if this topic has been brought before or if this feature is >>>>>> already supported, but I did my research and couldnt find anything. >>>>>> >>>>>> Thanks >>>>>> >>>>>> >>>>>> >>>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Fri Jul 13 12:11:01 2018 From: kbr...@ (Ken Russell) Date: Fri, 13 Jul 2018 12:11:01 -0700 Subject: [Public WebGL] EXT_disjoint_timer_query disabled In-Reply-To: References: <919DFFC6-6587-45D6-B5B7-E0C954B440DA@callow.im> Message-ID: I just learned yesterday that Site Isolation has successfully shipped on desktop platforms. We'll proceed with re-enabling EXT_disjoint_timer_query there. -Ken On Fri, Jul 13, 2018 at 3:55 AM Tarek Sherif wrote: > Hi all, > > Was just reading that site isolation is live in Chrome: > https://arstechnica.com/information-technology/2018/07/chrome-enables-site-isolation-to-blunt-the-threat-of-spectre-attacks/ > But the GPU timer is still unavailable any idea of an ETA? > > And any news when it will be back in Firefox? > > Tarek Sherif > http://tareksherif.net/ > https://www.biodigital.com/ > > > On Mon, Jun 4, 2018 at 3:33 AM, Markus M?nig > wrote: > >> As somebody who is trying to make a living by coding WebGL based >> applications for the Web and Desktop (PaintSupreme3D.com, >> Material-Z.com etc) I fully agree with Florian. >> >> WebGL is a great concept but was coded by a group of people who never >> tried to create a real world application with it. >> >> The fact that you can not compile shaders in the background is a major >> design flaw and has not been addressed for years because nobody really >> seems to care. >> >> Now we cannot time our shaders anymore, which makes it nearly >> impossible for some of us to ship our applications. 
From mar...@ Mon Jul 16 07:52:32 2018
From: mar...@ (Markus Schütz)
Date: Mon, 16 Jul 2018 16:52:32 +0200
Subject: [Public WebGL] WebGL compute shader support
Message-ID:

Hi all,

Are there any updates on compute shader and SSBO support in WebGL?
Compute shaders would be extremely exciting for a variety of algorithms
that aren't possible to implement in WebGL 2 right now.

Best,
Markus

From kai...@ Mon Jul 16 14:02:09 2018
From: kai...@ (Kai Ninomiya)
Date: Mon, 16 Jul 2018 14:02:09 -0700
Subject: [Public WebGL] WebGL compute shader support
In-Reply-To:
References:
Message-ID:

Hey Markus,

Chromium (Intel in particular) has begun working on experimental support.
However, due to both the WebGPU project and the infeasibility of
implementing WebGL compute on Apple platforms, it is unlikely to become a
fully cross-browser, cross-platform, enabled-by-default technology. Check
out this post/thread:
https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/bPD47wqY-r8

On Mon, Jul 16, 2018 at 7:53 AM Markus Schütz wrote:

> Are there any updates on compute shader and SSBO support in WebGL?
> Compute shaders would be extremely exciting for a variety of algorithms
> that aren't possible to implement in WebGL 2 right now.
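A rough sketch of what the experimental Chromium prototype discussed in that blink-dev thread looked like: a 'webgl2-compute' context type behind a flag, GLSL ES 3.10 compute shaders, and SSBO bindings. The names follow the WebGL 2.0 Compute draft of the time and may have changed; none of this ever shipped enabled by default:

    // 'webgl2-compute' was only available in Chromium behind
    // --enable-webgl2-compute-context; treat this as illustrative, not portable.
    const gl = document.createElement('canvas').getContext('webgl2-compute');

    const src = `#version 310 es
    layout(local_size_x = 64) in;
    layout(std430, binding = 0) buffer Data { float values[]; };
    void main() { values[gl_GlobalInvocationID.x] *= 2.0; }`;

    const cs = gl.createShader(gl.COMPUTE_SHADER);
    gl.shaderSource(cs, src);
    gl.compileShader(cs);
    const program = gl.createProgram();
    gl.attachShader(program, cs);
    gl.linkProgram(program);

    // A shader storage buffer holding 1024 floats, bound to binding point 0.
    const ssbo = gl.createBuffer();
    gl.bindBuffer(gl.SHADER_STORAGE_BUFFER, ssbo);
    gl.bufferData(gl.SHADER_STORAGE_BUFFER, new Float32Array(1024), gl.DYNAMIC_COPY);
    gl.bindBufferBase(gl.SHADER_STORAGE_BUFFER, 0, ssbo);

    gl.useProgram(program);
    gl.dispatchCompute(1024 / 64, 1, 1);   // 16 workgroups of 64 invocations
    gl.memoryBarrier(gl.SHADER_STORAGE_BARRIER_BIT);
    // Results could then be read back with gl.getBufferSubData(...).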
From mar...@ Wed Jul 25 20:44:03 2018
From: mar...@ (Markus Mönig)
Date: Thu, 26 Jul 2018 10:44:03 +0700
Subject: [Public WebGL] Shader Compilation
Message-ID:

Hi,

I know this has probably been discussed many times, but will it ever be
possible to know when a shader has finished compiling in WebGL v2?

Once the disjoint timer extension is back and working again, this is for
me the biggest limitation in WebGL. Not even async compiling, just knowing
when the shader is finished.

In my apps, when a shader gets complex I just don't know when it's safe to
use it, and every access before compilation has finished ends up blocking
the app, which really _sucks_.

Thanks

Markus

From pau...@ Wed Jul 25 23:43:24 2018
From: pau...@ (Paul Cheyrou-Lagrèze)
Date: Thu, 26 Jul 2018 08:43:24 +0200
Subject: [Public WebGL] Shader Compilation
In-Reply-To:
References:
Message-ID:

Hi,

This issue is about async compiling, but also about knowing when
compilation has finished:
https://github.com/KhronosGroup/WebGL/issues/2638

And in ANGLE:
https://github.com/google/angle/blob/master/extensions/KHR_parallel_shader_compile.txt

Not sure it would work in sync mode, though; I agree that that would be
very useful too.

-Paul

On Thu, Jul 26, 2018 at 5:45 AM Markus Mönig wrote:

> I know this has probably been discussed many times, but will it ever be
> possible to know when a shader has finished compiling in WebGL v2?

From mar...@ Thu Jul 26 01:23:33 2018
From: mar...@ (Markus Mönig)
Date: Thu, 26 Jul 2018 15:23:33 +0700
Subject: [Public WebGL] Shader Compilation
In-Reply-To:
References:
Message-ID:

Hi Paul,

These look great, but do we know if they will ever make it into WebGL?

Thanks

On Thu, Jul 26, 2018 at 1:44 PM Paul Cheyrou-Lagrèze wrote:

> This issue is about async compiling, but also about knowing when
> compilation has finished:
> https://github.com/KhronosGroup/WebGL/issues/2638
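Assuming the extension lands roughly as drafted in the ANGLE spec linked above, the intended pattern is to poll COMPLETION_STATUS_KHR each frame instead of blocking on LINK_STATUS immediately; vs, fs, and program below are assumed to have been created and attached already:

    const ext = gl.getExtension('KHR_parallel_shader_compile');

    gl.compileShader(vs);                  // vs, fs, program created earlier
    gl.compileShader(fs);
    gl.linkProgram(program);

    const whenReady = () => {
      // Without the extension, fall through and accept the (possibly blocking) query.
      if (!ext || gl.getProgramParameter(program, ext.COMPLETION_STATUS_KHR)) {
        if (gl.getProgramParameter(program, gl.LINK_STATUS)) {
          gl.useProgram(program);          // safe to use without a long stall
        } else {
          console.error(gl.getProgramInfoLog(program));
        }
      } else {
        requestAnimationFrame(whenReady);  // still compiling; check again next frame
      }
    };
    requestAnimationFrame(whenReady);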
From kai...@ Thu Jul 26 11:41:57 2018
From: kai...@ (Kai Ninomiya)
Date: Thu, 26 Jul 2018 11:41:57 -0700
Subject: [Public WebGL] Shader Compilation
In-Reply-To:
References:
Message-ID:

There has been some recent progress in Chrome on this front - please star
this issue if you want to follow along: https://crbug.com/849576

The main changes are in ANGLE, which should make it easier for other WebGL
implementations (Microsoft Edge, and Firefox on Windows) to implement the
extension.

On Thu, Jul 26, 2018 at 1:25 AM Markus Mönig wrote:

> These look great, but do we know if they will ever make it into WebGL?
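Even without the extension, drivers (and ANGLE) can overlap compilation if the application issues all compileShader/linkProgram calls first and defers every COMPILE_STATUS / LINK_STATUS query to a later pass, since the status queries are what force the wait. A sketch of that ordering, with 'materials' standing in for an application's own shader list:

    function compileDeferred(gl, type, source) {
      const shader = gl.createShader(type);
      gl.shaderSource(shader, source);
      gl.compileShader(shader);
      return shader;                       // no COMPILE_STATUS query here
    }

    // Pass 1: queue every compile and link without reading anything back.
    for (const mat of materials) {         // 'materials' is a hypothetical app list
      mat.program = gl.createProgram();
      gl.attachShader(mat.program, compileDeferred(gl, gl.VERTEX_SHADER, mat.vsSource));
      gl.attachShader(mat.program, compileDeferred(gl, gl.FRAGMENT_SHADER, mat.fsSource));
      gl.linkProgram(mat.program);
    }

    // Pass 2, ideally a frame or two later: only now read back the results.
    for (const mat of materials) {
      if (!gl.getProgramParameter(mat.program, gl.LINK_STATUS)) {
        console.error(gl.getProgramInfoLog(mat.program));
      }
    }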
From mar...@ Thu Jul 26 21:37:03 2018
From: mar...@ (Markus Mönig)
Date: Fri, 27 Jul 2018 11:37:03 +0700
Subject: [Public WebGL] Shader Compilation
In-Reply-To:
References:
Message-ID:

Thanks Kai, I starred the issue; I hope somebody will address this soon.

On Fri, Jul 27, 2018 at 1:42 AM Kai Ninomiya wrote:

> There has been some recent progress in Chrome on this front - please star
> this issue if you want to follow along: https://crbug.com/849576

-----------------------------------------------------------
You are currently subscribed to public_webgl...@
To unsubscribe, send an email to majordomo...@ with
the following command in the body of your email:
unsubscribe public_webgl
-----------------------------------------------------------