From kbr...@ Fri Feb 1 11:28:45 2013 From: kbr...@ (Kenneth Russell) Date: Fri, 1 Feb 2013 11:28:45 -0800 Subject: [Public WebGL] Conformance OSX 10.7.x In-Reply-To: References: <590382499.43169502.1358985067537.JavaMail.root@mozilla.com> <51013E58.90801@artspark.co.jp> <510152BA.3000108@artspark.co.jp> Message-ID:

Balancing the needs for privacy and precise bug reports, Jeff Gilbert suggested during a recent face-to-face meeting that a WebGL extension be proposed which prompts the user for permission to access information about the 3D graphics card. If the user grants it, then plausibly the http://www.khronos.org/registry/webgl/extensions/WEBGL_debug_renderer_info/ extension would be allowed to be fetched. Additional information could be added to that extension as necessary. This would let the application construct its own bug report.

If someone could propose such an extension, that would be great. It would need to pass some sort of completion callback, indicating success or failure, to the function which prompts the user for permission.

-Ken
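[As a rough illustration of the proposal above, a permission-gated request might look something like the following from the page's side. The requestPrivilegedExtension entry point and its callback are assumptions made up for this sketch, not an agreed or existing API; only the WEBGL_debug_renderer_info parameter names are real.]

// Hypothetical API: ask the browser to prompt the user for permission to
// expose WEBGL_debug_renderer_info. Everything below is illustrative only.
var canvas = document.getElementById("c");
var gl = canvas.getContext("experimental-webgl");

// Assumed entry point: takes the extension name and a completion callback
// that reports whether the user granted access.
gl.requestPrivilegedExtension("WEBGL_debug_renderer_info", function (granted) {
  if (!granted) {
    // User declined; fall back to a bug report without hardware details.
    return;
  }
  var ext = gl.getExtension("WEBGL_debug_renderer_info");
  var report = {
    vendor: gl.getParameter(ext.UNMASKED_VENDOR_WEBGL),
    renderer: gl.getParameter(ext.UNMASKED_RENDERER_WEBGL)
  };
  // The application can now attach `report` to its own bug report.
  console.log(report);
});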
On Thu, Jan 24, 2013 at 9:19 AM, Florian Bösch wrote:
> It's not a bad idea per se (except that perhaps instead of an email I'd like to get it as a JSON post to my troubleshooting interface or some such).
>
> What I'd like to see goes further than that, however:
>
> - We don't need to know that yet another user with an identical configuration has run into the issue; we already know it's a problem.
> - We'd really like to know how many of our visitors (whether they click the report or not) are gonna have that configuration, so we can prioritize bug fixes and workarounds.
> - If we already know that this user's configuration is gonna be a problem for our application, we don't need to send him down the rabbit hole only to find out it doesn't work. We could tell him up front: "dude, I'm sorry but it won't work, we're working on fixing it, meanwhile, try ..."
>
> So the paradox is this: in order to provide a *good* user experience we need to profile our users. However, we also want to keep the baddies from profiling our users. So meh, kinda sucks.
>
> On Thu, Jan 24, 2013 at 6:07 PM, Brandon Jones wrote:
>> Here's a (somewhat unorthodox) idea:
>>
>> I've always understood the resistance to exposing GPU/driver specs via Javascript to be a countermeasure against using that information to fingerprint a user's system for tracking, combined with a reluctance to encourage developers to start writing code that targets specific hardware. As Florian points out, though, when something goes wrong that's immediately the first thing both the app developer and the browser vendor ask for: the system configuration. Unfortunately Florian is also correct that many users don't know how, or won't go to the trouble, to explicitly submit this information with their complaints.
>>
>> What if an automated system were put in place that allows the user to send an email to the app developers, pre-populated with the relevant specs, at the push of a button? For example, in Chrome we now have an infobar that triggers on certain WebGL issues. It would be nice if the site could provide some contact info that would change the infobar from "Something went wrong, click to reload" to "Something went wrong. www.AwesomeWebGLGame.com has requested more information about the problem you are experiencing; click if you would like to contact them about this issue" (that's a bit wordy for an infobar, but you get the idea). The result could be as simple as launching the default email client pre-populated with some system specs and the support email for the site (maybe even a screenshot?). That way there's complete transparency about what's being sent, and the chances of the developer getting quality information go up dramatically. Since it's dialog/infobar driven, the site can't silently scrape the info for tracking purposes without the user explicitly knowing about it. Sounds like a decent compromise.
>>
>> The biggest issue I see is that the only cases where this could be launched automatically are crashes, context loss, or something similar. Those cases can be interesting to the application developer, but more often than not they are of more interest to the browser vendors. Things like corrupted rendering, missing textures, or other non-crashing anomalies can typically only be identified by the user, in which case a "report a problem" button is more appropriate. It shouldn't be a big deal to provide an API to request the infobar/dialog though, which would give app developers more control while still keeping the process in the user's hands.
>>
>> I realize this is a pretty significant feature I'm proposing, but I do strongly feel that if we want the development community to really embrace features like WebGL we need to give them all the tools we can to address the new development challenges they represent. This is my idea for that, and I'd be happy to hear other ideas on how to provide this critical information to developers in a security-conscious manner. I think relying solely on dev/user communication to collect this critical information is probably a mistake in the long run, though.
>>
>> --Brandon
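[A sketch of what the "request the infobar/dialog" hook described above could look like. navigator.requestProblemReport, its options, and its callback are invented for illustration; no such browser API exists, and the fallback only uses a plain mailto: link.]

// Hypothetical API for a site-initiated "report a problem" flow.
function reportRenderingProblem(description) {
  if (!navigator.requestProblemReport) {
    // Hypothetical feature not present; fall back to a plain email link.
    location.href = "mailto:support@example.com?subject=" +
                    encodeURIComponent("WebGL problem: " + description);
    return;
  }
  navigator.requestProblemReport({
    contact: "support@example.com",   // where the pre-populated email goes
    message: description,             // user-visible summary of the issue
    includeScreenshot: true           // opt-in, shown to the user first
  }, function (sent) {
    console.log(sent ? "Report sent" : "User declined to send a report");
  });
}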
>> On Thu, Jan 24, 2013 at 8:04 AM, Florian Bösch wrote:
>>> On Thu, Jan 24, 2013 at 4:26 PM, Mark Callow wrote:
>>>> I can't think of any other way, short of blacklisting, to prevent the developer spending that time re-investigating the bug.
>>>
>>> So this is what happens:
>>> 1) User runs app
>>> 2) Problem
>>> 3) User tells developer of problem
>>> 4) Developer scratches head
>>> 5) Developer spends some time futilely trying to reproduce the bug on a machine where it is not present
>>> 6) Developer asks user for his GPU/OS/driver combination (because WebGL does not expose it)
>>> 7) User's reaction: "o_O wut?"
>>> 8) 90% of such users are never heard from again
>>> 9) Some users report their configuration, which the developer doesn't have.
>>> 10) An indeterminate time later, the developer gets a chance to run his app on a problematic configuration
>>> 11) Spends days/hours actually tracking down the bug
>>> 12) Having found the bug, confirms it against the conformance suite
>>> 13) Files a bug ticket
>>> 14) Gets told it won't be fixed
>>>
>>> Now repeat that times developers, times users, times configurations, and it's a lot of headscratching, tweeting, bad blog posts, people dissing WebGL on HN and /. and so on. Wash, rinse and repeat. And I think we'd like to avoid that somehow.
>>>
>>> So there are two aspects to that problem: where to catch it and how to catch it.
>>>
>>> The answer to the "where" question should be fairly simple: it should happen as early as possible. The answer to the "how" question seems to be that we have these means to deal with it:
>>> - Blacklist
>>> - Differentiate by the experimental prefix
>>> - Introduce an extension to differentiate
>>>
>>> I don't think the blacklist is a really good measure, because conceivably not all applications will have issues with a given configuration. Differentiation by the experimental prefix sounds OK to me, but at the moment it's all experimental, so it doesn't actually solve anything right now. The functionality of an extension seems pretty similar, except that it might offer a solution faster than we can get rid of the experimental prefix.
>>>
>>> Regardless, I see a problem in baking this into the experimental prefix or an extension. Every time a new device or driver pops up, you'll have to update the browser so that the list gets updated, and you'll have to hope that this propagates through the userbase fairly quickly so you won't have to answer hundreds of emails about issues you already know about.
>>>
>>> So off the top of my head, if this were a native app, here's what I'd do:
>>> 1) User runs app
>>> 2) Problem
>>> 3) Open a dialog prompting the user to send me a report
>>> 4) Query an interface on my server that compares the driver, OS and GPU to a database of issues
>>> 5) Automatically issue the user an apology and advice in case he has run into a known issue
>>> 6) If it's not a known issue, create a new support case for me to investigate.
>>>
>>> We can't do that in WebGL because the driver and GPU are not exposed. I've mentioned before that this would be really handy, and I understand why you wouldn't want to do that. Nevertheless, I think it would be one of the most sensible solutions.
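[A sketch of the reporting flow described in the quoted native-app steps, assuming the page could obtain GPU/driver strings (for example via the permission-gated extension sketched earlier). The endpoint URL and the response fields knownIssue/advice are made up for illustration.]

// Post a troubleshooting report as JSON and react to a hypothetical
// known-issue database on the server.
function sendTroubleshootingReport(gl, ext) {
  var report = {
    userAgent: navigator.userAgent,
    vendor: gl.getParameter(ext.UNMASKED_VENDOR_WEBGL),
    renderer: gl.getParameter(ext.UNMASKED_RENDERER_WEBGL)
  };
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "https://example.com/api/webgl-issues", true); // made-up endpoint
  xhr.setRequestHeader("Content-Type", "application/json");
  xhr.onload = function () {
    var result = JSON.parse(xhr.responseText);
    if (result.knownIssue) {
      // Known configuration: apologize and show advice up front.
      console.log("Known issue:", result.advice);
    } else {
      // Unknown configuration: the server opened a new support case.
      console.log("Thanks! A support case has been opened for investigation.");
    }
  };
  xhr.send(JSON.stringify(report));
}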
From oli...@ Fri Feb 1 11:33:56 2013 From: oli...@ (Oliver Hunt) Date: Fri, 01 Feb 2013 11:33:56 -0800 Subject: [Public WebGL] Conformance OSX 10.7.x In-Reply-To: References: <590382499.43169502.1358985067537.JavaMail.root@mozilla.com> <51013E58.90801@artspark.co.jp> <510152BA.3000108@artspark.co.jp> Message-ID: <2AFE5BFC-3F95-4CEB-9B60-E97394CC0DEB@apple.com>

How does the end user know which button to press in this dialog?

--Oliver

On Feb 1, 2013, at 11:28 AM, Kenneth Russell wrote:
> Balancing the needs for privacy and precise bug reports, Jeff Gilbert suggested during a recent face-to-face meeting that a WebGL extension be proposed which prompts the user for permission to access information about the 3D graphics card. [...]
From bja...@ Fri Feb 1 11:44:36 2013 From: bja...@ (Benoit Jacob) Date: Fri, 01 Feb 2013 14:44:36 -0500 Subject: [Public WebGL] Conformance OSX 10.7.x In-Reply-To: <2AFE5BFC-3F95-4CEB-9B60-E97394CC0DEB@apple.com> References: <590382499.43169502.1358985067537.JavaMail.root@mozilla.com> <51013E58.90801@artspark.co.jp> <510152BA.3000108@artspark.co.jp> <2AFE5BFC-3F95-4CEB-9B60-E97394CC0DEB@apple.com> Message-ID: <510C1B24.2020706@mozilla.com>

My thoughts exactly :-)

Benoit

On 13-02-01 02:33 PM, Oliver Hunt wrote:
> How does the end user know which button to press in this dialog?
>
> --Oliver
From gma...@ Fri Feb 1 14:16:29 2013 From: gma...@ (Gregg Tavares) Date: Fri, 1 Feb 2013 14:16:29 -0800 Subject: [Public WebGL] Conformance OSX 10.7.x In-Reply-To: <2AFE5BFC-3F95-4CEB-9B60-E97394CC0DEB@apple.com> References: <590382499.43169502.1358985067537.JavaMail.root@mozilla.com> <51013E58.90801@artspark.co.jp> <510152BA.3000108@artspark.co.jp> <2AFE5BFC-3F95-4CEB-9B60-E97394CC0DEB@apple.com> Message-ID:

On Fri, Feb 1, 2013 at 11:33 AM, Oliver Hunt wrote:
> How does the end user know which button to press in this dialog?

What do you mean? I'm assuming it would be just like geolocation permission or webcam permission.

"http://webcamtoy.com/ wants to use your camera. [ Deny ] [ Allow ]"

"http://webglsite.com/ wants to gather information about your hardware. [ Deny ] [ Allow ]"

Note: I'm not sure if I'm for or against this proposal. Just suggesting how I would expect it to work.
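[For reference, the permission flow Gregg is comparing to (the browser shows its own Deny/Allow UI and the page only learns the outcome through callbacks) looks like this with the existing Geolocation API; a permission-gated WEBGL_debug_renderer_info request could plausibly follow the same shape.]

// Existing pattern: the page asks, the browser prompts, and the page gets
// either a success or an error callback once the user has decided.
navigator.geolocation.getCurrentPosition(
  function (position) {
    // User clicked Allow; position data is now available.
    console.log("lat", position.coords.latitude,
                "lon", position.coords.longitude);
  },
  function (error) {
    // User clicked Deny (or the lookup failed); no data is exposed.
    console.log("geolocation unavailable:", error.message);
  }
);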
From oli...@ Fri Feb 1 14:33:49 2013 From: oli...@ (Oliver Hunt) Date: Fri, 01 Feb 2013 14:33:49 -0800 Subject: [Public WebGL] Conformance OSX 10.7.x In-Reply-To: References: <590382499.43169502.1358985067537.JavaMail.root@mozilla.com> <51013E58.90801@artspark.co.jp> <510152BA.3000108@artspark.co.jp> <2AFE5BFC-3F95-4CEB-9B60-E97394CC0DEB@apple.com> Message-ID: <84BDA1DD-122B-46F2-9493-49CC2CEC88C0@apple.com>

Users aren't necessarily technical experts. Most people can understand concepts like "Site X wants to know where you are" and "Site Y wants to take pictures of you". Asking for advice on technical issues is more problematic because a typical user does not have the technical knowledge to give a meaningful answer. This is an issue that has been discussed in detail in numerous committees and contexts.

Punting to the user on a technically driven security issue is a security anti-pattern.

If this information is _really_ necessary, then the goal should be to make the absolute minimum amount of information available. E.g. rather than saying "can we have an API to provide a string containing all the GPU info", we could ask "exactly what information is required? How can we provide just that information?" and provide that info through a single explicit API.

--Oliver

On Feb 1, 2013, at 2:16 PM, Gregg Tavares wrote:
> What do you mean? I'm assuming it would be just like geolocation permission or webcam permission.
>
> "http://webglsite.com/ wants to gather information about your hardware. [ Deny ] [ Allow ]"
From pya...@ Fri Feb 1 14:47:11 2013 From: pya...@ (Florian Bösch) Date: Fri, 1 Feb 2013 23:47:11 +0100 Subject: [Public WebGL] Conformance OSX 10.7.x In-Reply-To: <84BDA1DD-122B-46F2-9493-49CC2CEC88C0@apple.com> References: <590382499.43169502.1358985067537.JavaMail.root@mozilla.com> <51013E58.90801@artspark.co.jp> <510152BA.3000108@artspark.co.jp> <2AFE5BFC-3F95-4CEB-9B60-E97394CC0DEB@apple.com> <84BDA1DD-122B-46F2-9493-49CC2CEC88C0@apple.com> Message-ID:

On Fri, Feb 1, 2013 at 11:33 PM, Oliver Hunt wrote:
> Users aren't necessarily technical experts. Most people can understand concepts like "Site X wants to know where you are" and "Site Y wants to take pictures of you". Asking for advice on technical issues is more problematic because a typical user does not have the technical knowledge to give a meaningful answer. This is an issue that has been discussed in detail in numerous committees and contexts.

And it isn't going away.

> Punting to the user on a technically driven security issue is a security anti-pattern.

Making the UX worse, or making it hard to support your customers, is a security-discussion anti-pattern.

> If this information is _really_ necessary

As WebGL grows, people will deploy applications to millions and hundreds of millions of users. A sizable percentage of those will have one GPU or driver issue or another. If you cannot scale supporting your customers, then you cannot scale making commercial, supported, well-behaved WebGL applications. And unless you want every company trying to do that to turn its support into a straight pipe through to whatever vendor, we need to solve this.

> , then the goal should be to make the absolute minimum amount of information available. E.g. rather than saying "can we have an API to provide a string containing all the GPU info", we could ask "exactly what information is required? How can we provide just that information?" and provide that info through a single explicit API.

The minimum information required is the driver version and the GPU.
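[A sketch of the kind of "single explicit API" Oliver suggests and Florian's minimum set would imply: after an explicit user grant it would expose only the two fields needed for triage. The requestHardwareInfo name and the dictionary shape are invented for illustration, not an existing or proposed interface.]

// Hypothetical minimal API: nothing beyond GPU name and driver version.
gl.requestHardwareInfo(function (info) {
  if (info === null) {
    console.log("User declined; no hardware info available.");
    return;
  }
  // Only the minimum needed for a useful bug report:
  console.log("GPU:", info.gpu);
  console.log("Driver version:", info.driverVersion);
});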
From pya...@ Sat Feb 2 02:04:30 2013 From: pya...@ (Florian Bösch) Date: Sat, 2 Feb 2013 11:04:30 +0100 Subject: [Public WebGL] ESSL -> HLSL -> cso, do we really need to do this? Message-ID:

As you may know, when you go through ANGLE targeting D3D the following things happen:

1. ESSL is parsed by ANGLE
2. Various bug fixes and optimizations/unrollings are applied, and HLSL is output
3. The HLSL is parsed by the Direct3D compiler, and various optimizations are applied again
4. cso (Direct3D intermediate bytecode) is produced
5. The cso is translated to native GPU code

Stages 3 -> 4 have traditionally been a weak spot. It takes most OpenGL drivers a couple of milliseconds to parse and translate GLSL, but it usually takes Direct3D hundreds of milliseconds (sometimes dozens of seconds in pathological cases) to translate HLSL to cso. Stages 4 -> 5, however, have been a traditional strength of Direct3D (throughout the history of game development on Windows, developers have strived to precompile all their shaders to cso for shipping).

One odd thing I have observed in the last 1-2 years is that the same shader in GLSL and HLSL, on the same GPU and the same operating system, would run fine on OpenGL (and compile as fast as anything else), but when going through HLSL odd things would happen (strange register errors, extremely long compile times, completely broken rendering, etc.). From discussing this with various people, I think it mainly stems from the HLSL -> cso compiler being... lackluster. It's slow, and it often tries to optimize something erroneously and ends up being too clever by half.

Since I don't have a lot of experience with Direct3D, I'm wondering about the following: can't we just skip HLSL and compile ESSL -> cso directly? It is, after all, a machine/driver-independent, fast, intermediate bytecode format.
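[A rough way to observe the compile stall described above from JavaScript: time the whole compile and link, forcing completion with a status query, since drivers are free to defer the actual compile until link or the first status check. The numbers will vary widely between GL-backed and D3D/ANGLE-backed browsers; this only measures, it does not fix anything.]

function timeShaderCompile(gl, vsSource, fsSource) {
  var start = Date.now();

  var vs = gl.createShader(gl.VERTEX_SHADER);
  gl.shaderSource(vs, vsSource);
  gl.compileShader(vs);

  var fs = gl.createShader(gl.FRAGMENT_SHADER);
  gl.shaderSource(fs, fsSource);
  gl.compileShader(fs);

  var prog = gl.createProgram();
  gl.attachShader(prog, vs);
  gl.attachShader(prog, fs);
  gl.linkProgram(prog);

  // Querying link status blocks until compilation/linking has actually finished.
  var ok = gl.getProgramParameter(prog, gl.LINK_STATUS);
  console.log("compile+link took", Date.now() - start, "ms, success:", ok);
  return prog;
}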
From rko...@ Sat Feb 2 02:37:42 2013 From: rko...@ (Kornmann, Ralf) Date: Sat, 2 Feb 2013 10:37:42 +0000 Subject: [Public WebGL] ESSL -> HLSL -> cso, do we really need to do this? In-Reply-To: References: Message-ID: <9B3BEF16CBC82A45900B3F159DF91596017649A3A71F@EU-MAIL-1-1.rws.ad.ea.com>

It is possible to generate D3D bytecode directly. There is even an assembler for this. Unfortunately this will no longer work as soon as ANGLE switches over to D3D 10+. To ensure that you don't tamper with the bytecode, the compiled shaders are signed by the HLSL compiler and the runtime checks the signature.

I have written a number of HLSL shaders and have hardly run into compiler issues. In most cases problems were caused by me doing things wrong in the HLSL code, so I am not sure how many of the problems you noticed are caused by the ESSL to HLSL step.

Anyway, to ease the problem with the long compile times at least a bit, it might be a good idea to add some kind of shader cache. That way it would at least be faster the second time a user visits a page. Anything beyond this would most likely require a custom shader container that contains, besides the pure GLSL code, multiple binary shaders for different targets.

________________________________
From: owner-public_webgl...@ [owner-public_webgl...@] on behalf of Florian Bösch [pyalot...@]
Sent: Saturday, 2 February 2013 11:04
To: public webgl
Subject: [Public WebGL] ESSL -> HLSL -> cso, do we really need to do this?

As you may know, when you go through ANGLE targeting D3D the following things happen: [...]

From pya...@ Sat Feb 2 02:44:25 2013 From: pya...@ (Florian Bösch) Date: Sat, 2 Feb 2013 11:44:25 +0100 Subject: [Public WebGL] ESSL -> HLSL -> cso, do we really need to do this? In-Reply-To: <9B3BEF16CBC82A45900B3F159DF91596017649A3A71F@EU-MAIL-1-1.rws.ad.ea.com> References: <9B3BEF16CBC82A45900B3F159DF91596017649A3A71F@EU-MAIL-1-1.rws.ad.ea.com> Message-ID:

On Sat, Feb 2, 2013 at 11:37 AM, Kornmann, Ralf wrote:
Unfortunately this will not longer work as soon Angle > would switch over to D3D 10+. To ensure that you don't mess with the > bytecode anymore the compiled shaders are signed by the HLSL compiler and > the runtime checks this. > Arghl > I have written a number of HLSL shaders and hardly run in compiler issues. > In most cases problems were caused by me doing things wrong in the HLSL > code. So I am not sure how many of the problems you noticed are caused by > the ESSL to HLSL step. > At least 3 of my WebGL demos have run into such issues, where a compile would take anything from 10 seconds to several minutes. The reaction of browsers to this problem is different. Chrome usually kills your context after about 11 seconds, and Firefox usually lets things run, but after about 15 seconds asks you if you want to kill the page since the JS is unresponsive. > Anyway to ease the problem with the long compile times at least a bit it > might be a good idea to add some kind of shader cache. This way it would a > least faster the second time a user visit a page. Anything beyond this > would most likely requires a custom shader container that can beside the > pure GLSL code contains multiple binary shaders for different targets. > There is a shader cache. But that doesn't really help that much because if you need to compile a serious amount of shaders (A typical high quality production has anything between 300 to 1000 different shaders) or if you run into a bunch of pathological cases (exceedingly likely with a large number of shaders) then the result is that a user never gets to the page. It'll lose context, or it'll ask him to kill the page, or he just leaves out of boredom to wait for stuff to happen. The typical 5ms or so compile time of GLSL via OpenGL is still way long. But the typical 100ms to 500ms compile time for GLSL via D3D pushes this beyond the point of breaking. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Sat Feb 2 07:35:15 2013 From: bja...@ (Benoit Jacob) Date: Sat, 02 Feb 2013 10:35:15 -0500 Subject: [Public WebGL] Behavior of texImage2D on DOM element not ready to give an image surface Message-ID: <510D3233.5050008@mozilla.com> Hi, Currently, in Firefox, texImage2D on a DOM element will throw an exception if the DOM element fails to return an image surface. For example that would typically happen on a video or image element that's not yet loaded/decoded. I seemed to remember that there had been some discussion about that and some agreement to have more graceful behavior in that case, but I can't find the discussion back and I don't remember what was decided. The spec only mentions the possibility of throwing a security exception, so I assume that throwing other kinds of exceptions, as we currently do, is illegal? Testcase: http://people.mozilla.org/~bjacob/video-cors.html Cheers, Benoit ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From baj...@ Sat Feb 2 08:33:42 2013 From: baj...@ (Brandon Jones) Date: Sat, 2 Feb 2013 08:33:42 -0800 Subject: [Public WebGL] ESSL -> HLSL -> cso, do we really need to do this? 
In-Reply-To: References: <9B3BEF16CBC82A45900B3F159DF91596017649A3A71F@EU-MAIL-1-1.rws.ad.ea.com> Message-ID: I've never looked into the D3D10+ compile times much, but have they improved over D3D9 at all? Ralf mentioned that a straight-to-bytecode approach would stop working once we upgrade, but if the compiler is better it may be a non-issue. Do we have any stats on that? On Saturday, February 2, 2013, Florian B?sch wrote: > On Sat, Feb 2, 2013 at 11:37 AM, Kornmann, Ralf > > wrote: > >> It is possible to generate D3D bytecode directly. There is even an >> assembler for this. Unfortunately this will not longer work as soon Angle >> would switch over to D3D 10+. To ensure that you don't mess with the >> bytecode anymore the compiled shaders are signed by the HLSL compiler and >> the runtime checks this. >> > Arghl > > >> I have written a number of HLSL shaders and hardly run in compiler >> issues. In most cases problems were caused by me doing things wrong in the >> HLSL code. So I am not sure how many of the problems you noticed are caused >> by the ESSL to HLSL step. >> > At least 3 of my WebGL demos have run into such issues, where a compile > would take anything from 10 seconds to several minutes. The reaction of > browsers to this problem is different. Chrome usually kills your context > after about 11 seconds, and Firefox usually lets things run, but after > about 15 seconds asks you if you want to kill the page since the JS is > unresponsive. > > >> Anyway to ease the problem with the long compile times at least a bit it >> might be a good idea to add some kind of shader cache. This way it would a >> least faster the second time a user visit a page. Anything beyond this >> would most likely requires a custom shader container that can beside the >> pure GLSL code contains multiple binary shaders for different targets. >> > There is a shader cache. But that doesn't really help that much because if > you need to compile a serious amount of shaders (A typical high quality > production has anything between 300 to 1000 different shaders) or if you > run into a bunch of pathological cases (exceedingly likely with a large > number of shaders) then the result is that a user never gets to the page. > It'll lose context, or it'll ask him to kill the page, or he just leaves > out of boredom to wait for stuff to happen. The typical 5ms or so compile > time of GLSL via OpenGL is still way long. But the typical 100ms to 500ms > compile time for GLSL via D3D pushes this beyond the point of breaking. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Sat Feb 2 08:38:57 2013 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Sat, 2 Feb 2013 17:38:57 +0100 Subject: [Public WebGL] ESSL -> HLSL -> cso, do we really need to do this? In-Reply-To: References: <9B3BEF16CBC82A45900B3F159DF91596017649A3A71F@EU-MAIL-1-1.rws.ad.ea.com> Message-ID: On Sat, Feb 2, 2013 at 5:33 PM, Brandon Jones wrote: > Do we have any stats on that? > Kenneth would have a good test shader from me to see if DX10+ does better. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rko...@ Sat Feb 2 08:59:28 2013 From: rko...@ (Kornmann, Ralf) Date: Sat, 2 Feb 2013 16:59:28 +0000 Subject: [Public WebGL] ESSL -> HLSL -> cso, do we really need to do this? 
In-Reply-To: References: <9B3BEF16CBC82A45900B3F159DF91596017649A3A71F@EU-MAIL-1-1.rws.ad.ea.com> , Message-ID: <9B3BEF16CBC82A45900B3F159DF91596017649A3A723@EU-MAIL-1-1.rws.ad.ea.com> The significantly relaxed limits in SM 4+ might help, as the compilation is now more straightforward. I am not sure if ANGLE currently uses SM 2 or 3. SM 2 was always a problem, as the limits were so low that the compiler needed to use every possible trick to get the HLSL code to fit within them. While I would like to see an update (it would open the way for more extensions that would allow AAA graphics with WebGL), I am not sure how many systems would lose their WebGL support this way. Given the concerns about the limited time that people can spend on WebGL development tasks, I am not sure if going forward with two versions of ANGLE can be done. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pya...@ Sat Feb 2 09:09:07 2013 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Sat, 2 Feb 2013 18:09:07 +0100 Subject: [Public WebGL] ESSL -> HLSL -> cso, do we really need to do this? In-Reply-To: <9B3BEF16CBC82A45900B3F159DF91596017649A3A723@EU-MAIL-1-1.rws.ad.ea.com> References: <9B3BEF16CBC82A45900B3F159DF91596017649A3A71F@EU-MAIL-1-1.rws.ad.ea.com> <9B3BEF16CBC82A45900B3F159DF91596017649A3A723@EU-MAIL-1-1.rws.ad.ea.com> Message-ID: On Sat, Feb 2, 2013 at 5:59 PM, Kornmann, Ralf wrote: > I am not sure if going forward with two versions of Angle can be done. > It has to be done anyway. OpenGL ES 3.0 contains many features which are, in combination, only found on about 80% of desktops and on about 0% of mobiles at this time. This will change, but slowly. So unless you want to see WebGL support (any version of webgl) plummet into the cellar, WebGL 1.0 backwards support is an absolute must. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thu...@ Sat Feb 2 09:22:54 2013 From: thu...@ (Ben Adams) Date: Sat, 2 Feb 2013 17:22:54 +0000 Subject: [Public WebGL] ESSL -> HLSL -> cso, do we really need to do this? In-Reply-To: References: <9B3BEF16CBC82A45900B3F159DF91596017649A3A71F@EU-MAIL-1-1.rws.ad.ea.com> Message-ID: I'm not sure a striaght-to-bytecode would stop working. As I understand it, and I may be wrong, only Windows Metro/Windows Store apps require signed shaders? http://msdn.microsoft.com/en-us/library/windows/apps/hh694083.aspx But if you are worrying about that then term 3.9 would cause issues with browser apps compiling and running javascript; and 3.10 says they must support DX 9.1 (shown below) 3.9 All app logic must originate from, and reside in, your app package Your app must not attempt to change or extend the packaged content through any form of dynamic inclusion of code or data that changes how the application interacts with the Windows Runtime, or behaves with regard to Store policy. It is not permissible, for example, to download a remote script and subsequently execute that script in the local context of your app package. 3.10 Direct3D apps must support a minimum feature level This requirement applies if you depend on specific 3D graphics hardware features. If your app includes an ARM or a Neutral package it *must support Direct3D feature level 9_1*. If your app does not support ARM it must support the minimum feature level chosen on the Store portal. Because customers can change the graphics hardware in their computers after the app is installed, if you choose a minimum feature level higher than 9_1, your app must detect at launch whether or not the current hardware meets the minimum requirements. If not, the app must display a message to the customer detailing the Direct3D requirements. In addition to supporting the chosen minimum Direct3D feature level, your app may use higher feature levels when available. On Sat, Feb 2, 2013 at 4:33 PM, Brandon Jones wrote: > I've never looked into the D3D10+ compile times much, but have they > improved over D3D9 at all? Ralf mentioned that a straight-to-bytecode > approach would stop working once we upgrade, but if the compiler is better > it may be a non-issue. > > Do we have any stats on that? > > On Saturday, February 2, 2013, Florian B?sch wrote: > >> On Sat, Feb 2, 2013 at 11:37 AM, Kornmann, Ralf wrote: >> >>> It is possible to generate D3D bytecode directly. There is even an >>> assembler for this. 
Unfortunately this will not longer work as soon Angle >>> would switch over to D3D 10+. To ensure that you don't mess with the >>> bytecode anymore the compiled shaders are signed by the HLSL compiler and >>> the runtime checks this. >>> >> Arghl >> >> >>> I have written a number of HLSL shaders and hardly run in compiler >>> issues. In most cases problems were caused by me doing things wrong in the >>> HLSL code. So I am not sure how many of the problems you noticed are caused >>> by the ESSL to HLSL step. >>> >> At least 3 of my WebGL demos have run into such issues, where a compile >> would take anything from 10 seconds to several minutes. The reaction of >> browsers to this problem is different. Chrome usually kills your context >> after about 11 seconds, and Firefox usually lets things run, but after >> about 15 seconds asks you if you want to kill the page since the JS is >> unresponsive. >> >> >>> Anyway to ease the problem with the long compile times at least a bit it >>> might be a good idea to add some kind of shader cache. This way it would a >>> least faster the second time a user visit a page. Anything beyond this >>> would most likely requires a custom shader container that can beside the >>> pure GLSL code contains multiple binary shaders for different targets. >>> >> There is a shader cache. But that doesn't really help that much because >> if you need to compile a serious amount of shaders (A typical high quality >> production has anything between 300 to 1000 different shaders) or if you >> run into a bunch of pathological cases (exceedingly likely with a large >> number of shaders) then the result is that a user never gets to the page. >> It'll lose context, or it'll ask him to kill the page, or he just leaves >> out of boredom to wait for stuff to happen. The typical 5ms or so compile >> time of GLSL via OpenGL is still way long. But the typical 100ms to 500ms >> compile time for GLSL via D3D pushes this beyond the point of breaking. >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rko...@ Sat Feb 2 09:25:22 2013 From: rko...@ (Kornmann, Ralf) Date: Sat, 2 Feb 2013 17:25:22 +0000 Subject: AW: [Public WebGL] ESSL -> HLSL -> cso, do we really need to do this? In-Reply-To: References: <9B3BEF16CBC82A45900B3F159DF91596017649A3A71F@EU-MAIL-1-1.rws.ad.ea.com> <9B3BEF16CBC82A45900B3F159DF91596017649A3A723@EU-MAIL-1-1.rws.ad.ea.com>, Message-ID: <9B3BEF16CBC82A45900B3F159DF91596017649A3A724@EU-MAIL-1-1.rws.ad.ea.com> Maybe there is a missunderstanding. I was not talking about an ES 2.0 and ES 3.0 version of Angle. I pretty sure it would be possible to build one version that contains all entry points for both versions. I was talking about all the systems out there that don't Support DX 10+ at all or only in compatibility mode. An upgrade to "pure" DX 10+ would leave all people with Windows XP and DX9 level hardware behind. DX11 allows to use DX9 level hardware. But this compatibility mode doesn't support all features that can be reached with the DX9 runtime and has a higher overhead. There is a software engine (WARP) for DX11 but it is too slow for more than simple use cases. So my concern was about maintain a DX9 and DX10+ version of Angle. ________________________________ Von: Florian B?sch [pyalot...@] Gesendet: Samstag, 2. Februar 2013 18:09 An: Kornmann, Ralf Cc: Brandon Jones; public webgl Betreff: Re: [Public WebGL] ESSL -> HLSL -> cso, do we really need to do this? 
On Sat, Feb 2, 2013 at 5:59 PM, Kornmann, Ralf > wrote: I am not sure if going forward with two versions of Angle can be done. It has to be done anyway. OpenGL ES 3.0 contains many features which are, in combination, only found on about 80% of desktops and on about 0% of mobiles at this time. This will change, but slowly. So unless you want to see WebGL support (any version of webgl) plummet into the cellar, WebGL 1.0 backwards support is an absolute must. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rko...@ Sat Feb 2 09:35:56 2013 From: rko...@ (Kornmann, Ralf) Date: Sat, 2 Feb 2013 17:35:56 +0000 Subject: [Public WebGL] ESSL -> HLSL -> cso, do we really need to do this? In-Reply-To: References: <9B3BEF16CBC82A45900B3F159DF91596017649A3A71F@EU-MAIL-1-1.rws.ad.ea.com> , Message-ID: <9B3BEF16CBC82A45900B3F159DF91596017649A3A725@EU-MAIL-1-1.rws.ad.ea.com> There are special rules for browsers on Metro. Anyway this shader signing I am talking about has nothing to do with the Windows Store. This is just between the HLSL compiler and the runtime. The compiler sign the shader and the runtime checks if the sign is valid. If you know how the sign is calculated you can still do binary shaders but this is not public knowledge. I may be wrong but I am somewhat sure there would be a big problems putting a sign function in Angle that is written based on reveres engineering of the Microsoft HLSL compiler. ________________________________ Von: Ben Adams [thundercat...@] Gesendet: Samstag, 2. Februar 2013 18:22 An: Brandon Jones Cc: Florian B?sch; Kornmann, Ralf; public webgl Betreff: Re: [Public WebGL] ESSL -> HLSL -> cso, do we really need to do this? I'm not sure a striaght-to-bytecode would stop working. As I understand it, and I may be wrong, only Windows Metro/Windows Store apps require signed shaders? http://msdn.microsoft.com/en-us/library/windows/apps/hh694083.aspx But if you are worrying about that then term 3.9 would cause issues with browser apps compiling and running javascript; and 3.10 says they must support DX 9.1 (shown below) 3.9 All app logic must originate from, and reside in, your app package Your app must not attempt to change or extend the packaged content through any form of dynamic inclusion of code or data that changes how the application interacts with the Windows Runtime, or behaves with regard to Store policy. It is not permissible, for example, to download a remote script and subsequently execute that script in the local context of your app package. 3.10 Direct3D apps must support a minimum feature level This requirement applies if you depend on specific 3D graphics hardware features. If your app includes an ARM or a Neutral package it must support Direct3D feature level 9_1. If your app does not support ARM it must support the minimum feature level chosen on the Store portal. Because customers can change the graphics hardware in their computers after the app is installed, if you choose a minimum feature level higher than 9_1, your app must detect at launch whether or not the current hardware meets the minimum requirements. If not, the app must display a message to the customer detailing the Direct3D requirements. In addition to supporting the chosen minimum Direct3D feature level, your app may use higher feature levels when available. On Sat, Feb 2, 2013 at 4:33 PM, Brandon Jones > wrote: I've never looked into the D3D10+ compile times much, but have they improved over D3D9 at all? 
Ralf mentioned that a straight-to-bytecode approach would stop working once we upgrade, but if the compiler is better it may be a non-issue. Do we have any stats on that? On Saturday, February 2, 2013, Florian B?sch wrote: On Sat, Feb 2, 2013 at 11:37 AM, Kornmann, Ralf wrote: It is possible to generate D3D bytecode directly. There is even an assembler for this. Unfortunately this will not longer work as soon Angle would switch over to D3D 10+. To ensure that you don't mess with the bytecode anymore the compiled shaders are signed by the HLSL compiler and the runtime checks this. Arghl I have written a number of HLSL shaders and hardly run in compiler issues. In most cases problems were caused by me doing things wrong in the HLSL code. So I am not sure how many of the problems you noticed are caused by the ESSL to HLSL step. At least 3 of my WebGL demos have run into such issues, where a compile would take anything from 10 seconds to several minutes. The reaction of browsers to this problem is different. Chrome usually kills your context after about 11 seconds, and Firefox usually lets things run, but after about 15 seconds asks you if you want to kill the page since the JS is unresponsive. Anyway to ease the problem with the long compile times at least a bit it might be a good idea to add some kind of shader cache. This way it would a least faster the second time a user visit a page. Anything beyond this would most likely requires a custom shader container that can beside the pure GLSL code contains multiple binary shaders for different targets. There is a shader cache. But that doesn't really help that much because if you need to compile a serious amount of shaders (A typical high quality production has anything between 300 to 1000 different shaders) or if you run into a bunch of pathological cases (exceedingly likely with a large number of shaders) then the result is that a user never gets to the page. It'll lose context, or it'll ask him to kill the page, or he just leaves out of boredom to wait for stuff to happen. The typical 5ms or so compile time of GLSL via OpenGL is still way long. But the typical 100ms to 500ms compile time for GLSL via D3D pushes this beyond the point of breaking. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Sat Feb 2 10:46:30 2013 From: gma...@ (Gregg Tavares) Date: Sat, 2 Feb 2013 10:46:30 -0800 Subject: [Public WebGL] Behavior of texImage2D on DOM element not ready to give an image surface In-Reply-To: <510D3233.5050008@mozilla.com> References: <510D3233.5050008@mozilla.com> Message-ID: I think this is the thread https://www.khronos.org/webgl/public-mailing-list/archives/1210/msg00039.html On Sat, Feb 2, 2013 at 7:35 AM, Benoit Jacob wrote: > > Hi, > > Currently, in Firefox, texImage2D on a DOM element will throw an > exception if the DOM element fails to return an image surface. For > example that would typically happen on a video or image element that's > not yet loaded/decoded. > > I seemed to remember that there had been some discussion about that and > some agreement to have more graceful behavior in that case, but I can't > find the discussion back and I don't remember what was decided. > > The spec only mentions the possibility of throwing a security exception, > so I assume that throwing other kinds of exceptions, as we currently do, > is illegal? 
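[For illustration only; nothing here is prescribed by the spec. An application can sidestep the not-ready case entirely by checking element readiness itself and uploading a small placeholder otherwise. The helper name and the 1x1 black placeholder are arbitrary choices; the readiness checks use standard DOM properties.]

    // Upload a DOM element to the bound 2D texture only if it can provide pixels yet.
    // Otherwise upload a 1x1 opaque black placeholder so sampling is well defined.
    function safeTexImage2D(gl, element) {
      var ready =
        (element instanceof HTMLImageElement && element.complete && element.naturalWidth > 0) ||
        (element instanceof HTMLVideoElement && element.readyState >= element.HAVE_CURRENT_DATA) ||
        (element instanceof HTMLCanvasElement);
      if (ready) {
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, element);
      } else {
        // What dimensions a failed upload should leave behind is exactly the open
        // question in this thread; 1x1 is used here purely as a placeholder.
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE,
                      new Uint8Array([0, 0, 0, 255]));
      }
      return ready;
    }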
> > Testcase: > http://people.mozilla.org/~bjacob/video-cors.html > > Cheers, > Benoit > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Sat Feb 2 15:06:02 2013 From: kbr...@ (Kenneth Russell) Date: Sat, 2 Feb 2013 15:06:02 -0800 Subject: [Public WebGL] Please review WebGL multiple render target extension proposal In-Reply-To: References: <1962493783.26851480.1352252165428.JavaMail.root@mozilla.com> Message-ID: The OpenGL ES working group has agreed to move forward with the extension as EXT_draw_buffers. The WebGL mirror of the extension has been renamed and finally moved to draft status. Looking forward to implementations appearing soon. -Ken On Wed, Jan 30, 2013 at 1:38 AM, Florian B?sch wrote: > Any update on the change to the extension? > > > On Fri, Jan 25, 2013 at 7:53 PM, Kenneth Russell wrote: >> >> Brief update. There has been some good progress, and cross-vendor >> agreement, on the underlying ES extension spec that >> WEBGL_multiple_render_targets is based on. Expect another update to >> the WebGL extension to occur in the next few days, at which it would >> be a good time to move it to draft status. >> >> -Ken >> >> >> >> On Tue, Jan 22, 2013 at 1:22 PM, Florian B?sch wrote: >> > Pull request for removal of WEBGL_fbo_color_attachments created >> > https://github.com/KhronosGroup/WebGL/pull/146 >> > >> > >> > On Tue, Jan 22, 2013 at 9:37 PM, Kenneth Russell wrote: >> >> >> >> WEBGL_multiple_render_targets is the current proposal. >> >> WEBGL_fbo_color_attachments didn't implement ES 3.0's semantics, and >> >> wasn't forward compatible. Please feel free to submit a pull request >> >> deleting WEBGL_fbo_color_attachments. >> >> >> >> Moving WEBGL_multiple_render_targets to draft status is blocked on me >> >> addressing feedback from TransGaming on the underlying >> >> ANGLE_multiple_render_targets spec. Sorry for the delay. I will try to >> >> take care of this in the next day or two. >> >> >> >> -Ken >> >> >> >> >> >> >> >> On Mon, Jan 21, 2013 at 11:05 AM, Florian B?sch >> >> wrote: >> >> > There are two MRT extension for WebGL in proposal stage: >> >> > >> >> > >> >> > >> >> > http://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_fbo_color_attachments/ >> >> > and >> >> > >> >> > >> >> > http://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_multiple_render_targets/ >> >> > >> >> > Do we need both? >> >> > Which one(s) can we move to draft? >> >> > >> >> > >> >> > On Wed, Nov 7, 2012 at 6:33 PM, Brandon Jones >> >> > wrote: >> >> >> >> >> >> I know this extension does have WEBGL suffix and so it's relevant, >> >> >> but >> >> >> the >> >> >> whole prefix/suffix thing is a larger conversation affecting >> >> >> multiple >> >> >> extensions that's probably worth keeping on it's own thread. I'd >> >> >> hate >> >> >> to see >> >> >> potential issues with this extension overlooked because of all the >> >> >> *fix >> >> >> noise. Can we keep this thread focused on the extension proposal? 
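[As a rough sketch of what the multiple-render-target extension looks like from the JavaScript side. This is written against the form that appears in the WebGL registry as WEBGL_draw_buffers; the names and suffixes at the time of this thread (EXT_draw_buffers, WEBGL_multiple_render_targets) may differ, and the texture sizes are arbitrary.]

    var ext = gl.getExtension("WEBGL_draw_buffers");
    if (ext) {
      var fb = gl.createFramebuffer();
      gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
      // Attach one texture per color output the fragment shader writes to.
      // (Filtering/wrap setup is omitted for brevity.)
      var attachments = [];
      for (var i = 0; i < 2; i++) {
        var tex = gl.createTexture();
        gl.bindTexture(gl.TEXTURE_2D, tex);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 256, 256, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
        gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT0_WEBGL + i,
                                gl.TEXTURE_2D, tex, 0);
        attachments.push(ext.COLOR_ATTACHMENT0_WEBGL + i);
      }
      // Route gl_FragData[0..1] in the fragment shader to the two attachments.
      ext.drawBuffersWEBGL(attachments);
    }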
>> >> >> >> >> >> >> >> >> On Wed, Nov 7, 2012 at 7:23 AM, Colin Mackenzie >> >> >> wrote: >> >> >>> >> >> >>> > The next reason to stick to the suffixes is because people >> >> >>> > already >> >> >>> > used to C, are used to them >> >> >>> >> >> >>> Just for the record, I don't really think this reason holds up IMHO >> >> >>> because WebGL itself drops the "gl" prefix from function names that >> >> >>> was >> >> >>> present in C (and numerous other languages). This decision really >> >> >>> surprised >> >> >>> me when I started learning to use WebGL and I think it demonstrated >> >> >>> a >> >> >>> willingness to diverge from established conventions where the >> >> >>> conventions >> >> >>> themselves have lost their purpose (namespacing). Just my opinion. >> >> >>> >> >> >>> Personally, I agree with Florian that the suffixes are mostly >> >> >>> superfluous, most especially the _WEBGL suffix because from a human >> >> >>> perspective, we already know it's in reference to WebGL. >> >> >>> >> >> >>> >> >> >>> On Wed, Nov 7, 2012 at 5:50 AM, Florian B?sch >> >> >>> wrote: >> >> >>>> >> >> >>>> On Wed, Nov 7, 2012 at 2:36 AM, Jeff Gilbert >> >> >>>> >> >> >>>> wrote: >> >> >>>>> >> >> >>>>> Original: >> >> >>>>> glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 16, 16, 0, GL_RGBA, >> >> >>>>> GL_HALF_FLOAT_OES, nullptr); >> >> >>>>> >> >> >>>>> WebGL with vendor decorations: >> >> >>>>> var ext = gl.getExtension("OES_texture_half_float"); >> >> >>>>> gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 16, 16, 0, gl.RGBA, >> >> >>>>> ext.HALF_FLOAT_OES, null); >> >> >>>>> >> >> >>>>> WebGL without vendor decorations: >> >> >>>>> var ext = gl.getExtension("texture_half_float"); >> >> >>>>> gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 16, 16, 0, gl.RGBA, >> >> >>>>> ext.HALF_FLOAT, null); >> >> >>>>> >> >> >>>>> Given the documentation we already do for WebGL extensions, I >> >> >>>>> don't >> >> >>>>> think the second case has any benefit over the third. >> >> >>>> >> >> >>>> It was about the suffixes, not the extension prefixes. >> >> >>>> >> >> >>>> The original reason for suffixes was that symbols/constants needed >> >> >>>> to >> >> >>>> co-exist within the same flat namespace in C. This reason does no >> >> >>>> longer >> >> >>>> exist in WebGL. The next reason to stick to the suffixes is >> >> >>>> because >> >> >>>> people >> >> >>>> already used to C, are used to them and it represents a 1:1 mirror >> >> >>>> to >> >> >>>> the >> >> >>>> existing extensions. This reason really only applies to any suffix >> >> >>>> but >> >> >>>> _WEBGL. The last reason to stick to the _WEBGL suffix would be >> >> >>>> such >> >> >>>> as not >> >> >>>> to confuse people who've got some auto-wrapper code that derives >> >> >>>> the >> >> >>>> suffix >> >> >>>> from the prefix, and we wouldn't want them to have to write >> >> >>>> something >> >> >>>> like >> >> >>>> if(extname.match(/.*?_WEBGL$/)){ suffix = ''; } else { suffix = >> >> >>>> extname.match(/([^_])+$/)[1]; }. >> >> >>>> >> >> >>>> I'm not in favor over abolishing prefixes on extension names, >> >> >>>> mainly, >> >> >>>> because this turned out to be impossible in a process of painful >> >> >>>> discovery >> >> >>>> of heated opinion. >> >> >>>> I am 100% in favor of not supplying the WEBGL *suffix*, but some >> >> >>>> would >> >> >>>> probably not agree. >> >> >>>> I would be absolutely ok skipping any *suffix* because, they're >> >> >>>> really >> >> >>>> superfluous. but more would probably not agree. 
>> >> >>>> >> >> >>> >> >> >> >> >> > >> > >> > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Mon Feb 4 10:31:52 2013 From: kbr...@ (Kenneth Russell) Date: Mon, 4 Feb 2013 10:31:52 -0800 Subject: [Public WebGL] ESSL -> HLSL -> cso, do we really need to do this? In-Reply-To: References: <9B3BEF16CBC82A45900B3F159DF91596017649A3A71F@EU-MAIL-1-1.rws.ad.ea.com> Message-ID: The test shader Florian mentioned is the spherical harmonics fragment shader from his deferred irradiance volumes demo, translated to HLSL via ANGLE. It causes Microsoft's D3D Shader Compiler 9.29.952.3111 to take a really long time with /O1, even with the PS 5.0 profile. (On the same machine, ps_5_0 takes ~10 seconds; ps_3_0 takes ~18 seconds. Both of these are long enough to trigger timeouts in Chrome's WebGL implementation resulting in lost context.) Florian, would it be OK with you if I post the shader here? There have been multiple reports of slow shader compilation on Windows with ANGLE. Many of these occurred because ANGLE transformed the shader in a way that required the D3D shader compiler to unroll loops. ANGLE now detects many of these situations and does transformations such as avoiding gradient instructions in loops. I don't know why Florian's SH shader takes so long to compile; it doesn't seem to contain any of the pathological constructs. Florian's shader has been added to the top of tree WebGL conformance suite as https://www.khronos.org/registry/webgl/sdk/tests/conformance/glsl/misc/large-loop-compile.html . Its eventual inclusion will have to be debated among members of the working group, but it seems to me that even if compilation of the shader has to fail, that failure should occur in a reasonable period of time, and the WebGL context shouldn't be lost. -Ken On Sat, Feb 2, 2013 at 8:38 AM, Florian B?sch wrote: > On Sat, Feb 2, 2013 at 5:33 PM, Brandon Jones wrote: >> >> Do we have any stats on that? > > Kenneth would have a good test shader from me to see if DX10+ does better. > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Mon Feb 4 10:38:42 2013 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Mon, 4 Feb 2013 19:38:42 +0100 Subject: [Public WebGL] ESSL -> HLSL -> cso, do we really need to do this? In-Reply-To: References: <9B3BEF16CBC82A45900B3F159DF91596017649A3A71F@EU-MAIL-1-1.rws.ad.ea.com> Message-ID: On Mon, Feb 4, 2013 at 7:31 PM, Kenneth Russell wrote: > Florian, would it be OK with you if I post the shader here? Jep that's OK. Go ahead. > Florian's shader has been added to the top of tree WebGL conformance > suite as > https://www.khronos.org/registry/webgl/sdk/tests/conformance/glsl/misc/large-loop-compile.html > . Its eventual inclusion will have to be debated among members of the > working group, but it seems to me that even if compilation of the > shader has to fail, that failure should occur in a reasonable period > of time, and the WebGL context shouldn't be lost. > I'm in support of Kens opinion that failure should happen in a reasonable period of time. 
Where I probably differ is that I'm of the opinion that a shader that compiles and runs fine trough OpenGL 2.0 on the same machine, shouldn't fail to compile or run on Direct3D 9.0 and it shouldn't take 10x as long to compile. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Mon Feb 4 10:43:43 2013 From: bja...@ (Benoit Jacob) Date: Mon, 04 Feb 2013 13:43:43 -0500 Subject: [Public WebGL] Behavior of texImage2D on DOM element not ready to give an image surface In-Reply-To: References: <510D3233.5050008@mozilla.com> Message-ID: <5110015F.9010601@mozilla.com> Thanks Gregg. I believe that the spec needs to be clarified one way or another --- currently it's not easy to determine what it prescribes in the case where the DOM element passed to texImage2D is not ready to give an image. So first of all, I agree that Firefox's current behavior (throwing) is bad. The only reasonable options so far are 1) generate a WebGL INVALID_OPERATION 2) silently flag the texture as incomplete (hence sampling it as rgba 0,0,0,0) (your proposal) I think I agree with you for solution 2). There is a valid concern against it (that it makes bugs harder to debug) but we already have this concept of incomplete textures anyways; the only news here would be that whether a texture is incomplete would start depending on undefined behavior (whether the DOM element loaded before the texImage2D call). While that undefined behavior is bad, is it there anyways, and we're not making it better by failing in a less graceful way. So, I am in favor of 2). Opinions, anyone else? Benoit On 13-02-02 01:46 PM, Gregg Tavares wrote: > I think this is the thread > https://www.khronos.org/webgl/public-mailing-list/archives/1210/msg00039.html > > > On Sat, Feb 2, 2013 at 7:35 AM, Benoit Jacob > wrote: > > > Hi, > > Currently, in Firefox, texImage2D on a DOM element will throw an > exception if the DOM element fails to return an image surface. For > example that would typically happen on a video or image element that's > not yet loaded/decoded. > > I seemed to remember that there had been some discussion about > that and > some agreement to have more graceful behavior in that case, but I > can't > find the discussion back and I don't remember what was decided. > > The spec only mentions the possibility of throwing a security > exception, > so I assume that throwing other kinds of exceptions, as we > currently do, > is illegal? > > Testcase: > http://people.mozilla.org/~bjacob/video-cors.html > > > Cheers, > Benoit > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > . > To unsubscribe, send an email to majordomo...@ > with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Mon Feb 4 10:58:39 2013 From: gma...@ (Gregg Tavares) Date: Mon, 4 Feb 2013 10:58:39 -0800 Subject: [Public WebGL] ESSL -> HLSL -> cso, do we really need to do this? In-Reply-To: References: <9B3BEF16CBC82A45900B3F159DF91596017649A3A71F@EU-MAIL-1-1.rws.ad.ea.com> Message-ID: On Mon, Feb 4, 2013 at 10:38 AM, Florian B?sch wrote: > On Mon, Feb 4, 2013 at 7:31 PM, Kenneth Russell wrote: > >> Florian, would it be OK with you if I post the shader here? > > Jep that's OK. Go ahead. 
> > >> Florian's shader has been added to the top of tree WebGL conformance >> suite as >> https://www.khronos.org/registry/webgl/sdk/tests/conformance/glsl/misc/large-loop-compile.html >> . Its eventual inclusion will have to be debated among members of the >> working group, but it seems to me that even if compilation of the >> shader has to fail, that failure should occur in a reasonable period >> of time, and the WebGL context shouldn't be lost. >> > I'm in support of Kens opinion that failure should happen in a reasonable > period of time. > > Where I probably differ is that I'm of the opinion that a shader that > compiles and runs fine trough OpenGL 2.0 on the same machine, shouldn't > fail to compile or run on Direct3D 9.0 and it shouldn't take 10x as long to > compile. > I'm not convinced the best solution is to lose the context for shaders that take a long time to compile. The browser doesn't kill JavaScript that takes a long time to run. Instead it just asks the used (this javascript is taking a long time to complete. [continue] [stop]) Maybe there are other solutions. 1) Once we get WebGL in workers you can compile in a worker and not block the main thread 2) The Browser can possibly compile shaders on a separate thread and not block the rest of the GPU pipeline. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rko...@ Mon Feb 4 10:59:38 2013 From: rko...@ (Kornmann, Ralf) Date: Mon, 4 Feb 2013 18:59:38 +0000 Subject: [Public WebGL] ESSL -> HLSL -> cso, do we really need to do this? In-Reply-To: References: <9B3BEF16CBC82A45900B3F159DF91596017649A3A71F@EU-MAIL-1-1.rws.ad.ea.com> Message-ID: <9B3BEF16CBC82A45900B3F159DF9159601764A4BA552@EU-MAIL-1-1.rws.ad.ea.com> Hi Kenneth, have you checked the result if it is still unrolled (in the case it was unrolled at all)? Do you mind sharing the compiler flags you have used. I am just curious how much skip optimization, skip validation and optimization level 0 can improve the compile time. Maybe it would be an option to do a this first in the compile call and offload an full optimized compile to another thread. I have the feeling that many OpenGL implementations today do something similar. If anything fail changing the shader compile and link functions to be async may be the last way out of this dilemma. Unfortunately the D3D compiler was always build to be part of a production pipeline. Ralf -----Original Message----- From: Kenneth Russell [mailto:kbr...@] Sent: Montag, 4. Februar 2013 19:32 To: Florian B?sch Cc: Brandon Jones; Kornmann, Ralf; public webgl Subject: Re: [Public WebGL] ESSL -> HLSL -> cso, do we really need to do this? The test shader Florian mentioned is the spherical harmonics fragment shader from his deferred irradiance volumes demo, translated to HLSL via ANGLE. It causes Microsoft's D3D Shader Compiler 9.29.952.3111 to take a really long time with /O1, even with the PS 5.0 profile. (On the same machine, ps_5_0 takes ~10 seconds; ps_3_0 takes ~18 seconds. Both of these are long enough to trigger timeouts in Chrome's WebGL implementation resulting in lost context.) Florian, would it be OK with you if I post the shader here? There have been multiple reports of slow shader compilation on Windows with ANGLE. Many of these occurred because ANGLE transformed the shader in a way that required the D3D shader compiler to unroll loops. ANGLE now detects many of these situations and does transformations such as avoiding gradient instructions in loops. 
I don't know why Florian's SH shader takes so long to compile; it doesn't seem to contain any of the pathological constructs. Florian's shader has been added to the top of tree WebGL conformance suite as https://www.khronos.org/registry/webgl/sdk/tests/conformance/glsl/misc/large-loop-compile.html . Its eventual inclusion will have to be debated among members of the working group, but it seems to me that even if compilation of the shader has to fail, that failure should occur in a reasonable period of time, and the WebGL context shouldn't be lost. -Ken On Sat, Feb 2, 2013 at 8:38 AM, Florian B?sch wrote: > On Sat, Feb 2, 2013 at 5:33 PM, Brandon Jones wrote: >> >> Do we have any stats on that? > > Kenneth would have a good test shader from me to see if DX10+ does better. > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Mon Feb 4 11:05:00 2013 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Mon, 4 Feb 2013 20:05:00 +0100 Subject: [Public WebGL] ESSL -> HLSL -> cso, do we really need to do this? In-Reply-To: <9B3BEF16CBC82A45900B3F159DF9159601764A4BA552@EU-MAIL-1-1.rws.ad.ea.com> References: <9B3BEF16CBC82A45900B3F159DF91596017649A3A71F@EU-MAIL-1-1.rws.ad.ea.com> <9B3BEF16CBC82A45900B3F159DF9159601764A4BA552@EU-MAIL-1-1.rws.ad.ea.com> Message-ID: The dilemma isn't really that the browser behaves in less than useful ways to a shader that in some configurations takes long to compile. The problem is that if you have shaders that compile for half a minute each it's essentially useless. On Mon, Feb 4, 2013 at 7:59 PM, Kornmann, Ralf wrote: > Hi Kenneth, > > have you checked the result if it is still unrolled (in the case it was > unrolled at all)? > > Do you mind sharing the compiler flags you have used. I am just curious > how much skip optimization, skip validation and optimization level 0 can > improve the compile time. Maybe it would be an option to do a this first in > the compile call and offload an full optimized compile to another thread. I > have the feeling that many OpenGL implementations today do something > similar. > > If anything fail changing the shader compile and link functions to be > async may be the last way out of this dilemma. > > Unfortunately the D3D compiler was always build to be part of a production > pipeline. > > Ralf > > -----Original Message----- > From: Kenneth Russell [mailto:kbr...@] > Sent: Montag, 4. Februar 2013 19:32 > To: Florian B?sch > Cc: Brandon Jones; Kornmann, Ralf; public webgl > Subject: Re: [Public WebGL] ESSL -> HLSL -> cso, do we really need to do > this? > > The test shader Florian mentioned is the spherical harmonics fragment > shader from his deferred irradiance volumes demo, translated to HLSL via > ANGLE. It causes Microsoft's D3D Shader Compiler 9.29.952.3111 to take a > really long time with /O1, even with the PS 5.0 profile. (On the same > machine, ps_5_0 takes ~10 seconds; ps_3_0 takes ~18 seconds. > Both of these are long enough to trigger timeouts in Chrome's WebGL > implementation resulting in lost context.) Florian, would it be OK with you > if I post the shader here? > > There have been multiple reports of slow shader compilation on Windows > with ANGLE. 
Many of these occurred because ANGLE transformed the shader in > a way that required the D3D shader compiler to unroll loops. > ANGLE now detects many of these situations and does transformations such > as avoiding gradient instructions in loops. I don't know why Florian's SH > shader takes so long to compile; it doesn't seem to contain any of the > pathological constructs. > > Florian's shader has been added to the top of tree WebGL conformance suite > as > https://www.khronos.org/registry/webgl/sdk/tests/conformance/glsl/misc/large-loop-compile.html > . Its eventual inclusion will have to be debated among members of the > working group, but it seems to me that even if compilation of the shader > has to fail, that failure should occur in a reasonable period of time, and > the WebGL context shouldn't be lost. > > -Ken > > > On Sat, Feb 2, 2013 at 8:38 AM, Florian B?sch wrote: > > On Sat, Feb 2, 2013 at 5:33 PM, Brandon Jones > wrote: > >> > >> Do we have any stats on that? > > > > Kenneth would have a good test shader from me to see if DX10+ does > better. > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cal...@ Mon Feb 4 11:05:40 2013 From: cal...@ (Mark Callow) Date: Mon, 04 Feb 2013 11:05:40 -0800 Subject: [Public WebGL] Behavior of texImage2D on DOM element not ready to give an image surface In-Reply-To: <5110015F.9010601@mozilla.com> References: <510D3233.5050008@mozilla.com> <5110015F.9010601@mozilla.com> Message-ID: <51100684.9020903@artspark.co.jp> On 13/02/04 10:43, Benoit Jacob wrote: > > So, I am in favor of 2). Opinions, anyone else? > I'm pretty sure that's what we decided last time this was discussed. Regards -Mark -- ??:????????????????????????????????????????????????????????????? ???????????????????????????????????????????????????????????????? ??. NOTE: This electronic mail message may contain confidential and privileged information from HI Corporation. If you are not the intended recipient, any disclosure, photocopying, distribution or use of the contents of the received information is prohibited. If you have received this e-mail in error, please notify the sender immediately and permanently delete this message and all related copies. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Mon Feb 4 11:52:21 2013 From: bja...@ (Benoit Jacob) Date: Mon, 04 Feb 2013 14:52:21 -0500 Subject: [Public WebGL] Behavior of texImage2D on DOM element not ready to give an image surface In-Reply-To: <51100684.9020903@artspark.co.jp> References: <510D3233.5050008@mozilla.com> <5110015F.9010601@mozilla.com> <51100684.9020903@artspark.co.jp> Message-ID: <51101175.5040108@mozilla.com> On 13-02-04 02:05 PM, Mark Callow wrote: > > On 13/02/04 10:43, Benoit Jacob wrote: >> >> So, I am in favor of 2). Opinions, anyone else? >> > I'm pretty sure that's what we decided last time this was discussed. Ah, right. Hadn't read enough of the thread the first time. It seems that the main unresolved question was what dimensions the texture should have (as a failed upload from DOM element could leave us with no dimensions). 1x1 or something else? I don't have any opinion there. Benoit > Regards > > -Mark > > -- > ??:????????????????????????????????????????????????????????????? > ?????????????????????????????????????????????????????????????? ?? ??. > > NOTE: This electronic mail message may contain confidential and > privileged information from HI Corporation. 
If you are not the > intended recipient, any disclosure, photocopying, distribution or use > of the contents of the received information is prohibited. If you have > received this e-mail in error, please notify the sender immediately > and permanently delete this message and all related copies. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Mon Feb 4 16:37:46 2013 From: kbr...@ (Kenneth Russell) Date: Mon, 4 Feb 2013 16:37:46 -0800 Subject: [Public WebGL] ESSL -> HLSL -> cso, do we really need to do this? In-Reply-To: <9B3BEF16CBC82A45900B3F159DF9159601764A4BA552@EU-MAIL-1-1.rws.ad.ea.com> References: <9B3BEF16CBC82A45900B3F159DF91596017649A3A71F@EU-MAIL-1-1.rws.ad.ea.com> <9B3BEF16CBC82A45900B3F159DF9159601764A4BA552@EU-MAIL-1-1.rws.ad.ea.com> Message-ID: On Mon, Feb 4, 2013 at 10:59 AM, Kornmann, Ralf wrote: > Hi Kenneth, > > have you checked the result if it is still unrolled (in the case it was unrolled at all)? Hi Ralf, Yes, I used the [loop] directive and confirmed that if the HLSL compiler attempted to unroll the loop, that compilation would fail. > Do you mind sharing the compiler flags you have used. I am just curious how much skip optimization, skip validation and optimization level 0 can improve the compile time. Maybe it would be an option to do a this first in the compile call and offload an full optimized compile to another thread. I have the feeling that many OpenGL implementations today do something similar. Here's a gist containing the shader and fxc compiler options (thanks Florian for granting permission to publish it): https://gist.github.com/4711083 Using /O0 is not viable at the moment. We have found that /O1 is necessary to make the majority of shaders compile with the transformations that ANGLE performs. > If anything fail changing the shader compile and link functions to be async may be the last way out of this dilemma. Yes, I think that it will probably be necessary to run the D3D HLSL compiler on another thread in the WebGL implementation, and make the compile fail after a couple of seconds. Hopefully though we can either figure out what's going wrong with this test case or submit a bug report to Microsoft. -Ken > Unfortunately the D3D compiler was always build to be part of a production pipeline. > > Ralf > > -----Original Message----- > From: Kenneth Russell [mailto:kbr...@] > Sent: Montag, 4. Februar 2013 19:32 > To: Florian B?sch > Cc: Brandon Jones; Kornmann, Ralf; public webgl > Subject: Re: [Public WebGL] ESSL -> HLSL -> cso, do we really need to do this? > > The test shader Florian mentioned is the spherical harmonics fragment shader from his deferred irradiance volumes demo, translated to HLSL via ANGLE. It causes Microsoft's D3D Shader Compiler 9.29.952.3111 to take a really long time with /O1, even with the PS 5.0 profile. (On the same machine, ps_5_0 takes ~10 seconds; ps_3_0 takes ~18 seconds. > Both of these are long enough to trigger timeouts in Chrome's WebGL implementation resulting in lost context.) Florian, would it be OK with you if I post the shader here? > > There have been multiple reports of slow shader compilation on Windows with ANGLE. Many of these occurred because ANGLE transformed the shader in a way that required the D3D shader compiler to unroll loops. > ANGLE now detects many of these situations and does transformations such as avoiding gradient instructions in loops. 
I don't know why Florian's SH shader takes so long to compile; it doesn't seem to contain any of the pathological constructs. > > Florian's shader has been added to the top of tree WebGL conformance suite as https://www.khronos.org/registry/webgl/sdk/tests/conformance/glsl/misc/large-loop-compile.html > . Its eventual inclusion will have to be debated among members of the working group, but it seems to me that even if compilation of the shader has to fail, that failure should occur in a reasonable period of time, and the WebGL context shouldn't be lost. > > -Ken > > > On Sat, Feb 2, 2013 at 8:38 AM, Florian B?sch wrote: >> On Sat, Feb 2, 2013 at 5:33 PM, Brandon Jones wrote: >>> >>> Do we have any stats on that? >> >> Kenneth would have a good test shader from me to see if DX10+ does better. >> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From vla...@ Tue Feb 5 02:14:17 2013 From: vla...@ (Vladimir Vukicevic) Date: Tue, 05 Feb 2013 18:14:17 +0800 Subject: [Public WebGL] Conformance OSX 10.7.x In-Reply-To: <84BDA1DD-122B-46F2-9493-49CC2CEC88C0@apple.com> References: <590382499.43169502.1358985067537.JavaMail.root@mozilla.com> <51013E58.90801@artspark.co.jp> <510152BA.3000108@artspark.co.jp> <2AFE5BFC-3F95-4CEB-9B60-E97394CC0DEB@apple.com> <84BDA1DD-122B-46F2-9493-49CC2CEC88C0@apple.com> Message-ID: <5110DB79.5080301@mozilla.com> Isn't this just a matter of phrasing? "Site X wants to know what kind of computer you're using" isn't any more complicated to understand than "Site X wants to know where you are". You could argue that "video card information" is more/less detailed than "what kind of computer you're using", but I think that's just semantics. We're not asking the user "Site X wants to know if it should whizbang your shizwizzle with your pixelfrobber" or anything. - Vlad On 2/2/2013 6:33 AM, Oliver Hunt wrote: > Users aren't necessarily technical experts. Most people can > understand concepts like "Site X wants to know where you are" and > "Site Y wants to take pictures of you". Asking for advice on > technical issues is more problematic because a typical user does not > have the technical knowledge to be able to make a meaningful answer. > This is an issue that has been discussed in detail in numerous > committees over numerous contexts. > > Punting to the user on a technically driven security issue is a > security anti-pattern. > > If this information is _really_ necessary, then the goal should be to > make the absolute minimum amount of information available. eg. Rather > than saying "can we have an api to provide a string containing all the > gpu info" we could ask "exactly what information is required? How can > we provide just that information?" and provide that info through a > single explicit API. > > --Oliver > > On Feb 1, 2013, at 2:16 PM, Gregg Tavares > wrote: > >> >> >> >> On Fri, Feb 1, 2013 at 11:33 AM, Oliver Hunt > > wrote: >> >> >> How does the end user know which button to press in this dialog? >> >> >> What do you mean? >> >> I'm assuming it would be just like geo location permission or webcam >> permission. >> >> "http://webcamtoy.com/ wants to use your camera. [ Deny ] [ Allow ]" >> >> "http://webglsite.com/" wants to gather hardware about your system. 
[ >> Deny ] [ Allow ]" >> Note: I'm not sure if I'm for or against this proposal. Just >> suggesting how I would expect it to work. >> >> >> >> --Oliver >> >> On Feb 1, 2013, at 11:28 AM, Kenneth Russell > > wrote: >> >> > >> > Balancing the needs for privacy and precise bug reports, it's been >> > suggested by Jeff Gilbert during a recent face-to-face meeting >> that a >> > WebGL extension be proposed which prompts the user for >> permission to >> > access information about the 3D graphics card. If the user >> grants it, >> > then plausibly the >> > >> http://www.khronos.org/registry/webgl/extensions/WEBGL_debug_renderer_info/ >> > extension would be allowed to be fetched. Additional >> information could >> > be added to that extension as necessary. This would let the >> > application construct its own bug report. >> > >> > If someone could propose such an extension that would be great. It >> > would need to pass in some sort of completion callback indicating >> > success or failure to the function which prompts the user for >> > permission. >> > >> > -Ken >> > >> > >> > >> > On Thu, Jan 24, 2013 at 9:19 AM, Florian B?sch >> > wrote: >> >> It's not a bad idea per-se (except that perhaps instead of an >> email I'd like >> >> to get it as a JSON-post to my troubleshooting interface or such). >> >> >> >> What I'd like to see goes further than that however: >> >> >> >> - We don't need to know for an identical configuration that a >> user has run >> >> into the issue, again, we already know it's a problem. >> >> - We'd really like to know how many of our visitors (if they >> click the >> >> report or not) are gonna have that configuration, so we can >> prioritize >> >> bugfixes and workarounds. >> >> - If we already know that for out application this users >> configuration is >> >> gonna be a problem, we don't need to send him down the rabbit >> hole only to >> >> find out it sucks. We could tell him up front "dude, I'm sorry >> but it won't >> >> work, we're working on fixing it, meanwhile, try ..." >> >> >> >> So the paradox is this: In order to provide a *good* user >> experience we need >> >> to profile our users. However we also want avoid the baddies >> to profile our >> >> users. So meh, kinda sucks. >> >> >> >> >> >> On Thu, Jan 24, 2013 at 6:07 PM, Brandon Jones >> > wrote: >> >>> >> >>> Here's a (somewhat unorthodox) idea: >> >>> >> >>> I've always understood the resistance to exposing GPU/driver >> specs via >> >>> Javascript to be a countermeasure against using that >> information to >> >>> fingerprint a user's system for tracking, combined with a >> reluctance to >> >>> encourage developers to start writing code that targets >> specific hardware. >> >>> As Florian points out, though, when something goes wrong >> that's immediately >> >>> the first thing that both the app developer and the browser >> vendor asks for: >> >>> system configuration. Unfortunately Florian is also correct >> that many users >> >>> don't know how or won't go to the trouble of explicitly >> submitting this >> >>> information with their complaints. >> >>> >> >>> What an automated system was put in place that allows user to >> send an >> >>> email to the app developers pre-populated with relevant specs >> at the push of >> >>> a button? For example, in Chrome we now have an infobar that >> triggers on >> >>> certain WebGL issues. 
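[To make the "report a problem" flow being discussed here concrete, a minimal sketch of what a page could gather and send today follows. The endpoint URL is hypothetical, and the unmasked vendor/renderer strings are only present where the browser exposes WEBGL_debug_renderer_info, possibly behind the kind of permission prompt debated in this thread.]

    // Collect what the page can see about the GL implementation and POST it
    // as JSON to a (hypothetical) troubleshooting endpoint.
    function reportProblem(gl, details) {
      var info = {
        vendor: gl.getParameter(gl.VENDOR),
        renderer: gl.getParameter(gl.RENDERER),
        version: gl.getParameter(gl.VERSION),
        glsl: gl.getParameter(gl.SHADING_LANGUAGE_VERSION),
        details: details
      };
      var dbg = gl.getExtension("WEBGL_debug_renderer_info");
      if (dbg) {
        info.unmaskedVendor = gl.getParameter(dbg.UNMASKED_VENDOR_WEBGL);
        info.unmaskedRenderer = gl.getParameter(dbg.UNMASKED_RENDERER_WEBGL);
      }
      var xhr = new XMLHttpRequest();
      xhr.open("POST", "/webgl-trouble-report");   // hypothetical endpoint
      xhr.setRequestHeader("Content-Type", "application/json");
      xhr.send(JSON.stringify(info));
    }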
It would be nice if the site could >> provide some >> >>> contact info that would change the info bar from "Something >> went wrong, >> >>> click to reload" to "Something went wrong. >> www.AwesomeWebGLGame.com has >> >>> requested more information about the problem you are >> experiencing, click if >> >>> you would like to contact them about this issue" (That's a >> bit wordy for an >> >>> infobar, but you get the idea). The result of this could be >> as simple as >> >>> launching their default email client and pre-populating some >> system specs >> >>> and the support email for the site (Maybe even a >> screenshot?). That way >> >>> there's complete transparency about what's being sent, and >> the chances of >> >>> the developer getting quality information goes up >> dramatically. Since it's >> >>> dialog/infobar driven the site can't silently scrape the info >> for tracking >> >>> purposes without the user explicitly knowing it. Sounds like >> a decent >> >>> compromise. >> >>> >> >>> The biggest issue I see is that the only cases where this >> could be >> >>> launched automatically are crashes, context loss, or >> something similar. >> >>> Those cases can be interesting to the application developer, >> but more often >> >>> than not those scenarios are of more interest to the browser >> vendors. Things >> >>> like corrupted rendering, missing textures, or other >> non-crashing anomalies >> >>> are typically only able to be identified by the user, in >> which case a >> >>> "report a problem" button is more appropriate. It shouldn't >> be a big deal to >> >>> provide an API to request the infobar/dialog though, which >> would give app >> >>> developers more control while still keeping the process in >> the users hands. >> >>> >> >>> I realize this is a pretty significant feature I'm proposing >> but I do >> >>> strongly feel that if we want the development community to >> really embrace >> >>> features like WebGL we need to give them all the tools we can >> to address the >> >>> new development challenges they represent. This is my idea >> for that, and I'd >> >>> be happy to hear other ideas on how to provide this critical >> information to >> >>> developers in a security-conscious manner. I think to rely >> solely on >> >>> dev/user communication to collect this critical information >> is probably a >> >>> mistake in the long run, though. >> >>> >> >>> --Brandon >> >>> >> >>> >> >>> On Thu, Jan 24, 2013 at 8:04 AM, Florian B?sch >> > wrote: >> >>>> >> >>>> On Thu, Jan 24, 2013 at 4:26 PM, Mark Callow >> > >> >>>> wrote: >> >>>>> >> >>>>> I can't think of any other way, short of blacklisting, to >> prevent the >> >>>>> developer spending that time re-investigating the bug. >> >>>> >> >>>> So this is what happens: >> >>>> 1) User runs app >> >>>> 2) Problem >> >>>> 3) User tells developer of problem >> >>>> 4) Developer scratches head >> >>>> 5) Developer spends some time futily trying to reproduce bug >> on a machine >> >>>> where it is not present >> >>>> 6) Developer asks user for his GPU/OS/driver combination >> (because WebGL >> >>>> does not expose it) >> >>>> 7) Users reaction "o_O wut?" >> >>>> 8) 90% of such users are never heard of again >> >>>> 10) some users report their configuration, which the >> developer doesn't >> >>>> have. 
>> >>>> 11) Intdeterminate time later, Developer gets chance to run >> his app on a >> >>>> problematic configuration >> >>>> 12) spends days/hours actually tracing down the bug >> >>>> 13) having found the bug confirms against the conformance >> >>>> 14) files a bug ticket >> >>>> 15) gets told it won't be fixed >> >>>> >> >>>> Now repeat that times developers times users times >> configurations and >> >>>> it's a lot headscratching, tweeting, bad blog postsing, >> people dissing WebGL >> >>>> on HN and /. and son. Wash, rhinse and repeat. And I think >> we'd like that >> >>>> avoid somehow. >> >>>> >> >>>> So there's two aspects to that problem: Where to catch it >> and how to >> >>>> catch it. >> >>>> >> >>>> The answer to the "where" question should be fairly simple, >> it should >> >>>> happen as early as possible. The answer to the how question >> seems to be that >> >>>> we have these means to deal with it: >> >>>> - Blacklist >> >>>> - Differentiate by the experimental prefix >> >>>> - Introduce an extension to differentiate >> >>>> >> >>>> I don't think the blacklist is a really good measure, >> because conceivably >> >>>> not all applications will have issues with it. >> >>>> The differentiation by experimental prefix sounds ok to me, >> but, at the >> >>>> moment it's all experimental. So it doesn't actually solve >> anything right >> >>>> now. So the functionality of an extension seems to be pretty >> similar, except >> >>>> that it would offer a solution to this maybe faster than we >> can get rid of >> >>>> the experimental prefix. >> >>>> >> >>>> Regardless, I see a problem in baking this into the >> experimental prefix >> >>>> or an extension. Everytime a new device or driver pops up, >> you'll have to >> >>>> update the browser so that the list gets updated and you'll >> have to hope >> >>>> that this propagate trough the userbase fairly quickly so >> you won't have to >> >>>> answer hundreds of emails about issues you already know about. >> >>>> >> >>>> So on top of my head, if this was a native app, here's what >> I'd do: >> >>>> 1) User runs app >> >>>> 2) Problem >> >>>> 3) Open a dialog prompting the user to send me a report >> >>>> 4) Queries an interface on my server that compares the >> driver, os and gpu >> >>>> to a database of issues >> >>>> 5) Automatically issues the user an apology and advice in >> case that he >> >>>> has run into a known issue >> >>>> 6) If not a know issue, creates a new support case for me to >> investigate. >> >>>> >> >>>> So we can't do that in WebGL because the driver and GPU is >> not exposed. >> >>>> I've mentioned before that this would really handy. And >> understand why you >> >>>> wouldn't want to do that. Nevertheless, I think that would >> be one of the >> >>>> most sensible solutions. >> >>> >> >>> >> >> >> > >> > ----------------------------------------------------------- >> > You are currently subscribed to public_webgl...@ >> . >> > To unsubscribe, send an email to majordomo...@ >> with >> > the following command in the body of your email: >> > unsubscribe public_webgl >> > ----------------------------------------------------------- >> > >> >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> . 
>> To unsubscribe, send an email to majordomo...@ >> with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Thu Feb 7 14:55:57 2013 From: gma...@ (Gregg Tavares) Date: Thu, 7 Feb 2013 14:55:57 -0800 Subject: [Public WebGL] using the same context with multiple canvases In-Reply-To: References: <1323262007.46206883.1359598185356.JavaMail.root@mozilla.com> Message-ID: So I posted a new proposal for canvas in workers and using the same context with multiple canvases to the WhatWG wiki http://wiki.whatwg.org/wiki/CanvasInWorkers This comes from discussions at the WebGL Working Group Face 2 Face meeting and from input from the Google Maps team. I mention that not to give it any authority, only rather not to take credit for a group effort. I hope we're getting close though :-) On Thu, Jan 31, 2013 at 8:05 AM, Florian B?sch wrote: > On Thu, Jan 31, 2013 at 3:09 AM, Jeff Gilbert wrote: > >> Effectively, we would create a DrawingBuffer object either from a Canvas >> or via direct use of the constructor. If not created from a Canvas, you >> will only be able to use it for offscreen rendering. This object is >> Transferable, and contains the back buffer that WebGL will render into. It >> also contains a front buffer that will be Presented to the compositor. >> (Well, at least for the DrawingBuffers created from Canvases) DrawingBuffer >> will have a commit() function that swaps the back buffer to the front >> buffer. >> >> >> Context creation attributes like `alpha` and `preserveDrawingBuffer` >> would be ideally be attached to the DrawingBuffers, not WebGL contexts. >> > I assume this means there is going to be a function like > canvas.createDrawingBuffer(attribs)? > > One major point not yet resolved is how to better synchronize Commits such >> that two or more Canvases can assure that each of their new frames is >> composited at the same time, so composite renderings are smooth and don't >> fall out of sync. This is workable currently by using a compositing >> RenderingContext, and doing the composition manually, but we may be able to >> come up with a way to assure rendered frames from different Canvases are >> composited together consistently. >> > I'm not sure it's relevant but I have one odd use-case I stumbled into. In > an application I'm writing I offer a "pop out" functionality that creates a > popup window, the canvas is removed from the main page and attached to that > popups body. This works fine, except that requestAnimationFrame has to be > switched over to be called from that popups document (otherwise there is > compositional flickering) as the two windows are not drawing in sync of > course. At present this is the only way to do it (detach, attach to new > documents body) since the canvas is unchangably bound to that context. With > the ability to present a drawing buffer to a canvas, the method could be > slightly improved by not sticking foreign elements into a popups DOM (I > somehow feel that would be cleaner). However it'll probably still present > somewhat of a syncing challenge. > -------------- next part -------------- An HTML attachment was scrubbed... 
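For illustration, the DrawingBuffer idea discussed above might be used from script roughly as follows. Every name and signature below is an assumption drawn from Gregg's "new DrawingBuffer(context, ...)" form and Jeff's commit()/Transferable description; none of it is final API.

    // Hypothetical usage of the proposed DrawingBuffer (names assumed, not final).
    var canvas = document.querySelector('canvas');
    var gl = canvas.getContext('webgl');

    // Per the proposal, creation attributes like alpha/preserveDrawingBuffer
    // would attach to the DrawingBuffer rather than to the WebGL context.
    var db = new DrawingBuffer(gl, { alpha: false, preserveDrawingBuffer: false });

    function frame() {
      // ... issue WebGL calls that render into db's back buffer ...
      db.commit();  // swap the back buffer to the front buffer for compositing
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);

    // Being Transferable, a DrawingBuffer could also be posted to a worker:
    //   worker.postMessage({ target: db }, [db]);
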
URL: From kir...@ Fri Feb 8 00:36:01 2013 From: kir...@ (Kirill Prazdnikov) Date: Fri, 08 Feb 2013 12:36:01 +0400 Subject: [Public WebGL] using the same context with multiple canvases In-Reply-To: References: <1323262007.46206883.1359598185356.JavaMail.root@mozilla.com> Message-ID: <5114B8F1.9040706@jetbrains.com> Hi Gregg, > > A DrawingBuffer is created by constructor as in > > var db = new DrawingBuffer(context, .... This calling style has one negative side, what if there is no more memory or parameters are incorrect and DrawingBuffer can not be created ? Why not simply add a createDrawingBuffer to the WebGLRenderingContext IDl ? > interface WebGLRenderingContext { > ... > DrawingBuffer createDrawingBuffer( ... ) > } The same style as createShader, createProgram, e.t.c were done ? Thanks -Kriill On 2/8/2013 2:55 AM, Gregg Tavares wrote: > So I posted a new proposal for canvas in workers and using the same > context with multiple canvases to the WhatWG wiki > > http://wiki.whatwg.org/wiki/CanvasInWorkers > > This comes from discussions at the WebGL Working Group Face 2 Face > meeting and from input from the Google Maps team. I mention that not > to give it any authority, only rather not to take credit for a group > effort. > > I hope we're getting close though :-) > > > > > On Thu, Jan 31, 2013 at 8:05 AM, Florian B?sch > wrote: > > On Thu, Jan 31, 2013 at 3:09 AM, Jeff Gilbert > > wrote: > > Effectively, we would create a DrawingBuffer object either > from a Canvas or via direct use of the constructor. If not > created from a Canvas, you will only be able to use it for > offscreen rendering. This object is Transferable, and contains > the back buffer that WebGL will render into. It also contains > a front buffer that will be Presented to the compositor. > (Well, at least for the DrawingBuffers created from Canvases) > DrawingBuffer will have a commit() function that swaps the > back buffer to the front buffer. > > > Context creation attributes like `alpha` and > `preserveDrawingBuffer` would be ideally be attached to the > DrawingBuffers, not WebGL contexts. > > I assume this means there is going to be a function like > canvas.createDrawingBuffer(attribs)? > > One major point not yet resolved is how to better synchronize > Commits such that two or more Canvases can assure that each of > their new frames is composited at the same time, so composite > renderings are smooth and don't fall out of sync. This is > workable currently by using a compositing RenderingContext, > and doing the composition manually, but we may be able to come > up with a way to assure rendered frames from different > Canvases are composited together consistently. > > I'm not sure it's relevant but I have one odd use-case I stumbled > into. In an application I'm writing I offer a "pop out" > functionality that creates a popup window, the canvas is removed > from the main page and attached to that popups body. This works > fine, except that requestAnimationFrame has to be switched over to > be called from that popups document (otherwise there is > compositional flickering) as the two windows are not drawing in > sync of course. At present this is the only way to do it (detach, > attach to new documents body) since the canvas is unchangably > bound to that context. With the ability to present a drawing > buffer to a canvas, the method could be slightly improved by not > sticking foreign elements into a popups DOM (I somehow feel that > would be cleaner). 
However it'll probably still present somewhat > of a syncing challenge. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Fri Feb 8 03:08:51 2013 From: gma...@ (Gregg Tavares) Date: Fri, 8 Feb 2013 03:08:51 -0800 Subject: [Public WebGL] using the same context with multiple canvases In-Reply-To: <5114B8F1.9040706@jetbrains.com> References: <1323262007.46206883.1359598185356.JavaMail.root@mozilla.com> <5114B8F1.9040706@jetbrains.com> Message-ID: 1) DrawingBuffers are not WebGL only so you'd need both a CanvasRendering2DContext.createDrawingBuffer and a WebGLRenderngContext.createDrawingBuffer and if we come out with more types of contexts you need one on those too 2) There's plenty of constructors in JavaScript. new Image(); new ArrayBuffer(); new XMLHttpRequest(); It seems pretty straight forward to define what happens if your out of memory or the parameters are wrong On Fri, Feb 8, 2013 at 12:36 AM, Kirill Prazdnikov < kirill.prazdnikov...@> wrote: > ** > Hi Gregg, > > A DrawingBuffer is created by constructor as in > > var db = new DrawingBuffer(context, .... > > > This calling style has one negative side, what if there is no more memory > or parameters are incorrect and DrawingBuffer can not be created ? > Why not simply add a createDrawingBuffer to the WebGLRenderingContext IDl > ? > > interface WebGLRenderingContext { > ... > DrawingBuffer createDrawingBuffer( ... ) > } > > > The same style as createShader, createProgram, e.t.c were done ? > > Thanks > > -Kriill > > > > > On 2/8/2013 2:55 AM, Gregg Tavares wrote: > > So I posted a new proposal for canvas in workers and using the same > context with multiple canvases to the WhatWG wiki > > http://wiki.whatwg.org/wiki/CanvasInWorkers > > This comes from discussions at the WebGL Working Group Face 2 Face > meeting and from input from the Google Maps team. I mention that not to > give it any authority, only rather not to take credit for a group effort. > > I hope we're getting close though :-) > > > > > On Thu, Jan 31, 2013 at 8:05 AM, Florian B?sch wrote: > >> On Thu, Jan 31, 2013 at 3:09 AM, Jeff Gilbert wrote: >> >>> Effectively, we would create a DrawingBuffer object either from a Canvas >>> or via direct use of the constructor. If not created from a Canvas, you >>> will only be able to use it for offscreen rendering. This object is >>> Transferable, and contains the back buffer that WebGL will render into. It >>> also contains a front buffer that will be Presented to the compositor. >>> (Well, at least for the DrawingBuffers created from Canvases) DrawingBuffer >>> will have a commit() function that swaps the back buffer to the front >>> buffer. >>> >>> >>> Context creation attributes like `alpha` and `preserveDrawingBuffer` >>> would be ideally be attached to the DrawingBuffers, not WebGL contexts. >>> >> I assume this means there is going to be a function like >> canvas.createDrawingBuffer(attribs)? >> >> One major point not yet resolved is how to better synchronize Commits >>> such that two or more Canvases can assure that each of their new frames is >>> composited at the same time, so composite renderings are smooth and don't >>> fall out of sync. This is workable currently by using a compositing >>> RenderingContext, and doing the composition manually, but we may be able to >>> come up with a way to assure rendered frames from different Canvases are >>> composited together consistently. >>> >> I'm not sure it's relevant but I have one odd use-case I stumbled into. 
>> In an application I'm writing I offer a "pop out" functionality that >> creates a popup window, the canvas is removed from the main page and >> attached to that popups body. This works fine, except that >> requestAnimationFrame has to be switched over to be called from that popups >> document (otherwise there is compositional flickering) as the two windows >> are not drawing in sync of course. At present this is the only way to do it >> (detach, attach to new documents body) since the canvas is unchangably >> bound to that context. With the ability to present a drawing buffer to a >> canvas, the method could be slightly improved by not sticking foreign >> elements into a popups DOM (I somehow feel that would be cleaner). However >> it'll probably still present somewhat of a syncing challenge. >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Mon Feb 11 14:43:11 2013 From: gma...@ (Gregg Tavares) Date: Mon, 11 Feb 2013 14:43:11 -0800 Subject: [Public WebGL] Sharing Resources across contexts Message-ID: Sharing resources across contexts is still a very important feature so here's a proposal http://www.khronos.org/webgl/wiki/SharedResouces Looking forward to your feedback Note: This is an orthogonal issue to the 1 context multiple canvases issue. It is also orthogonal to the drawing from a worker into a canvas issue. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kir...@ Tue Feb 12 03:34:07 2013 From: kir...@ (Kirill Prazdnikov) Date: Tue, 12 Feb 2013 15:34:07 +0400 Subject: [Public WebGL] Sharing Resources across contexts In-Reply-To: References: Message-ID: <511A28AF.3060703@jetbrains.com> Hi Gregg, > void cancelAcquireSharedResource(long id); The purpose of cancelAcquireSharedResource is not clear from the document. Why simply not to use COM like ref counting ? acquireSharedResource = addRef releaseSharedResources = release ? Thanks On 2/12/2013 2:43 AM, Gregg Tavares wrote: > Sharing resources across contexts is still a very important feature so > here's a proposal > > http://www.khronos.org/webgl/wiki/SharedResouces > > Looking forward to your feedback > > Note: This is an orthogonal issue to the 1 context multiple canvases > issue. It is also orthogonal to the drawing from a worker into a > canvas issue. > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From baj...@ Tue Feb 12 08:38:40 2013 From: baj...@ (Brandon Jones) Date: Tue, 12 Feb 2013 08:38:40 -0800 Subject: [Public WebGL] Sharing Resources across contexts In-Reply-To: <511A28AF.3060703@jetbrains.com> References: <511A28AF.3060703@jetbrains.com> Message-ID: On Tuesday, February 12, 2013, Kirill Prazdnikov wrote: > > Hi Gregg, > > void cancelAcquireSharedResource(**long id); >> > > The purpose of cancelAcquireSharedResource is not clear from the document. > acquireSharedResource is an asynchronous operation. cancleAcquireSharedResource would indicate that a previous acquire call, presumably one that has not yet completed, is no longer desired. It should function like clearTimeout for a setTimeout call. One thing that's not explicitly mentioned in the wiki is the behavior of cancelAcquireSharedResource if the acquisition has already succeeded. I would imagine it's a no-op at that point? 
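To make the setTimeout/clearTimeout analogy concrete, the acquire/cancel pair might be used roughly as below. This is a sketch only: the callback shape, the argument list, and the use of an exclusive-mode flag are assumptions based on this discussion, not the wiki text verbatim.

    // Hypothetical usage of the proposed shared-resource acquisition API.
    // acquireSharedResource is asynchronous and returns an id, analogous to
    // setTimeout; cancelAcquireSharedResource cancels a pending request,
    // analogous to clearTimeout.
    var id = gl.acquireSharedResource(sharedTexture, gl.EXCLUSIVE, function () {
      // Acquisition completed: this context can now use the texture.
      gl.bindTexture(gl.TEXTURE_2D, sharedTexture);
      // ... draw ...
      gl.releaseSharedResources([sharedTexture]);
    });

    // If the result is no longer needed before the acquire completes:
    gl.cancelAcquireSharedResource(id);
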
> > Why simply not to use COM like ref counting ? > acquireSharedResource = addRef > releaseSharedResources = release ? > > Thanks The effect of these functions is different than addRef/release. Acquiring a WebGL resource would allow the acquiring context access to the resource, and may prevent other contexts from accessing it if it was acquired with gl.EXCLUSIVE. If acquired exclusively, other contexts would not be able to acquire or access the resources until it had been released. This is to maintain safety across threads and to explicitly manage the requirements of OpenGL regarding use by multiple contexts. As such ref counting is not a good parallel. --Brandon > > On 2/12/2013 2:43 AM, Gregg Tavares wrote: > >> Sharing resources across contexts is still a very important feature so >> here's a proposal >> >> http://www.khronos.org/webgl/**wiki/SharedResouces >> >> Looking forward to your feedback >> >> Note: This is an orthogonal issue to the 1 context multiple canvases >> issue. It is also orthogonal to the drawing from a worker into a canvas >> issue. >> >> >> > > ------------------------------**----------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ------------------------------**----------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Thu Feb 14 20:12:54 2013 From: kbr...@ (Kenneth Russell) Date: Thu, 14 Feb 2013 20:12:54 -0800 Subject: [Public WebGL] New conformance test snapshot; spec updates Message-ID: The WebGL conformance suite has been snapshotted again as version 1.0.2. See https://www.khronos.org/registry/webgl/conformance-suites/1.0.2/ and https://www.khronos.org/registry/webgl/conformance-suites/1.0.2/webgl-conformance-tests.html . This reflects a snapshot of the spec which has been sent to the Khronos Promoters for ratification. We anticipate that there will soon be a driver bug fix on one particular OS which will finally allow WebGL implementations to pass the 1.0.1 conformance suite on all desktop platforms. At that point, the 1.0.1 spec will be released, and the expectation is that implementations will come out from under the "experimental-" prefix. WebGL 1.0.2 should follow soon afterward. Additionally, the community approved extensions have been submitted to the Khronos Promoters, so those should hopefully also soon be listed as ratified in the WebGL extension registry. The plan is to move quickly to incorporate the OpenGL ES 3.0 functionality into the WebGL spec at this point. Implementation will follow. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gma...@ Fri Feb 15 21:08:02 2013 From: gma...@ (Gregg Tavares) Date: Fri, 15 Feb 2013 21:08:02 -0800 Subject: [Public WebGL] Please review WebGL multiple render target extension proposal In-Reply-To: References: <1962493783.26851480.1352252165428.JavaMail.root@mozilla.com> Message-ID: I've been writing the conformance tests for *EXT_draw_buffers* and a few things have come up and I'd like to suggest we make it *WEBGL_draw_buffers*and add a few changes *1) Let's require minimum 4 drawing buffers.* EXT_draw_buffers only requires 1 buffer. 
That seems rather pointless. Why make devs have to check twice, once for the extension and again that it's useful? Can we decide the minimum is 4 draw buffers? GPUs that support more than 1 all support at least 4 AFAICT. ES 3.0 requires 4. *2) Let's require that MAX_COLOR_ATTACHMENTS be >= MAX_DRAW_BUFFERS* The spec doesn't require this. MAX_COLOR_ATTACHMENTS could be 2 and MAX_DRAW_BUFFERS could be 4 which would be useless. I can't imagine a GPU that would do that so why allow it in the spec and therefore force devs to deal with that situation should some crazy GPU maker ship something like that *3) Let's require that attaching GL_RGBA/GL_UNSIGNED_BYTE to all attachment points be required to work* I might have missed this in the spec but I believe OpenGL ES 3.0 still allows drivers to fail any combination of attachments. Let's pick at least one format that devs can count on. *3b) Let's (maybe) require that attaching less than the max draw buffers be required to work* In other words, if MAX_DRAW_BUFFERS = 4 then COLOR_ATTACHMENT0 = GL_RGBA/GL_UNSIGNED_BYTE and COLOR_ATTACHMENT0 = GL_RGBA/GL_UNSIGNED_BYTE COLOR_ATTACHMENT1 = GL_RGBA/GL_UNSIGNED_BYTE and COLOR_ATTACHMENT0 = GL_RGBA/GL_UNSIGNED_BYTE COLOR_ATTACHMENT1 = GL_RGBA/GL_UNSIGNED_BYTE COLOR_ATTACHMENT2 = GL_RGBA/GL_UNSIGNED_BYTE and COLOR_ATTACHMENT0 = GL_RGBA/GL_UNSIGNED_BYTE COLOR_ATTACHMENT1 = GL_RGBA/GL_UNSIGNED_BYTE COLOR_ATTACHMENT2 = GL_RGBA/GL_UNSIGNED_BYTE COLOR_ATTACHMENT3 = GL_RGBA/GL_UNSIGNED_BYTE All work. No requirement that sparse attachments work (0 & 3) or (2) etc work. Just starting from 0 up to MAX. *3c) Let's require that #3 and #3b work with a DEPTH or DEPTH_STENCIL attachment* I'm assuming all GPUs that support MRTs do this * * *4) Should we disallow having 2 or more attachment points point to the same attachment?* In other words, make a single texture and attach it to both COLOR_ATTACHMENT0 and COLOR_ATTACHMENT1. The spec does not cover what happens there. The worry is someone will attach the same texture by mistake to 2 attachment points and depending on the GPU/Driver they'll get different results. Thoughts? -------------- next part -------------- An HTML attachment was scrubbed... URL: From baj...@ Sat Feb 16 10:26:20 2013 From: baj...@ (Brandon Jones) Date: Sat, 16 Feb 2013 10:26:20 -0800 Subject: [Public WebGL] Please review WebGL multiple render target extension proposal In-Reply-To: References: <1962493783.26851480.1352252165428.JavaMail.root@mozilla.com> Message-ID: +1, These all sound like very sensible suggestions to me. --Brandon On Friday, February 15, 2013, Gregg Tavares wrote: > I've been writing the conformance tests for *EXT_draw_buffers* and a few > things have come up and I'd like to suggest we make it *WEBGL_draw_buffers > * and add a few changes > > *1) Let's require minimum 4 drawing buffers.* > > EXT_draw_buffers only requires 1 buffer. That seems rather pointless. Why > make devs have to check twice, once for the extension and again that it's > useful? > > Can we decide the minimum is 4 draw buffers? GPUs that support more than 1 > all support at least 4 AFAICT. ES 3.0 requires 4. > > > *2) Let's require that MAX_COLOR_ATTACHMENTS be >= MAX_DRAW_BUFFERS* > > The spec doesn't require this. MAX_COLOR_ATTACHMENTS could be 2 and > MAX_DRAW_BUFFERS could be 4 which would be useless. 
I can't imagine a GPU > that would do that so why allow it in the spec and therefore force devs to > deal with that situation should some crazy GPU maker ship something like > that > > > *3) Let's require that attaching GL_RGBA/GL_UNSIGNED_BYTE to all > attachment points be required to work* > > I might have missed this in the spec but I believe OpenGL ES 3.0 still > allows drivers to fail any combination of attachments. Let's pick at least > one format that devs can count on. > > > *3b) Let's (maybe) require that attaching less than the max draw buffers > be required to work* > > In other words, if MAX_DRAW_BUFFERS = 4 then > > COLOR_ATTACHMENT0 = GL_RGBA/GL_UNSIGNED_BYTE > > and > > COLOR_ATTACHMENT0 = GL_RGBA/GL_UNSIGNED_BYTE > COLOR_ATTACHMENT1 = GL_RGBA/GL_UNSIGNED_BYTE > > and > > COLOR_ATTACHMENT0 = GL_RGBA/GL_UNSIGNED_BYTE > COLOR_ATTACHMENT1 = GL_RGBA/GL_UNSIGNED_BYTE > COLOR_ATTACHMENT2 = GL_RGBA/GL_UNSIGNED_BYTE > > and > > COLOR_ATTACHMENT0 = GL_RGBA/GL_UNSIGNED_BYTE > COLOR_ATTACHMENT1 = GL_RGBA/GL_UNSIGNED_BYTE > COLOR_ATTACHMENT2 = GL_RGBA/GL_UNSIGNED_BYTE > COLOR_ATTACHMENT3 = GL_RGBA/GL_UNSIGNED_BYTE > > All work. No requirement that sparse attachments work (0 & 3) or (2) etc > work. Just starting from 0 up to MAX. > > > *3c) Let's require that #3 and #3b work with a DEPTH or DEPTH_STENCIL > attachment* > > I'm assuming all GPUs that support MRTs do this > > * > * > *4) Should we disallow having 2 or more attachment points point to the > same attachment?* > > In other words, make a single texture and attach it to both > COLOR_ATTACHMENT0 and COLOR_ATTACHMENT1. The spec does not cover what > happens there. The worry is someone will attach the same texture by mistake > to 2 attachment points and depending on the GPU/Driver they'll get > different results. > > Thoughts? > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Tue Feb 19 12:55:03 2013 From: kbr...@ (Kenneth Russell) Date: Tue, 19 Feb 2013 12:55:03 -0800 Subject: [Public WebGL] Please review WebGL multiple render target extension proposal In-Reply-To: References: <1962493783.26851480.1352252165428.JavaMail.root@mozilla.com> Message-ID: On Fri, Feb 15, 2013 at 9:08 PM, Gregg Tavares wrote: > I've been writing the conformance tests for EXT_draw_buffers and a few > things have come up and I'd like to suggest we make it WEBGL_draw_buffers > and add a few changes > > 1) Let's require minimum 4 drawing buffers. > > EXT_draw_buffers only requires 1 buffer. That seems rather pointless. Why > make devs have to check twice, once for the extension and again that it's > useful? > > Can we decide the minimum is 4 draw buffers? GPUs that support more than 1 > all support at least 4 AFAICT. ES 3.0 requires 4. This sounds good. > 2) Let's require that MAX_COLOR_ATTACHMENTS be >= MAX_DRAW_BUFFERS > > The spec doesn't require this. MAX_COLOR_ATTACHMENTS could be 2 and > MAX_DRAW_BUFFERS could be 4 which would be useless. I can't imagine a GPU > that would do that so why allow it in the spec and therefore force devs to > deal with that situation should some crazy GPU maker ship something like > that This sounds fine too. If it happened the GPU reported a MAX_DRAW_BUFFERS > MAX_COLOR_ATTACHMENTS the WebGL implementation could just clamp it, yes? > 3) Let's require that attaching GL_RGBA/GL_UNSIGNED_BYTE to all attachment > points be required to work > > I might have missed this in the spec but I believe OpenGL ES 3.0 still > allows drivers to fail any combination of attachments. 
Let's pick at least > one format that devs can count on. It looks like the ES 3.0 spec requires that attaching valid textures and renderbuffers results in a complete framebuffer, modulo implementation dependent restrictions. See the subsection "Required Framebuffer Formats" on page 210 of the ES 3.0.1 spec: "Implementations must support framebuffer objects with up to MAX_COLOR_ATTACHMENTS color attachments, a depth attachment, and a stencil attachment. Each color attachment may be in any of the required color formats for textures and renderbuffers described in sections 3.8.3 and 4.4.2." EXT_draw_buffers (deliberately?) doesn't include any of the language from the ES 3.0 spec about which formats and color attachments are required to be supported. Seems OK to me to require this and test it in the conformance suite, but how would a WebGL implementation reasonably test this at runtime before returning the EXT_draw_buffers extension object? > 3b) Let's (maybe) require that attaching less than the max draw buffers be > required to work > > In other words, if MAX_DRAW_BUFFERS = 4 then > > COLOR_ATTACHMENT0 = GL_RGBA/GL_UNSIGNED_BYTE > > and > > COLOR_ATTACHMENT0 = GL_RGBA/GL_UNSIGNED_BYTE > COLOR_ATTACHMENT1 = GL_RGBA/GL_UNSIGNED_BYTE > > and > > COLOR_ATTACHMENT0 = GL_RGBA/GL_UNSIGNED_BYTE > COLOR_ATTACHMENT1 = GL_RGBA/GL_UNSIGNED_BYTE > COLOR_ATTACHMENT2 = GL_RGBA/GL_UNSIGNED_BYTE > > and > > COLOR_ATTACHMENT0 = GL_RGBA/GL_UNSIGNED_BYTE > COLOR_ATTACHMENT1 = GL_RGBA/GL_UNSIGNED_BYTE > COLOR_ATTACHMENT2 = GL_RGBA/GL_UNSIGNED_BYTE > COLOR_ATTACHMENT3 = GL_RGBA/GL_UNSIGNED_BYTE > > All work. No requirement that sparse attachments work (0 & 3) or (2) etc > work. Just starting from 0 up to MAX. Seems reasonable. Same question about how this could be verified at run-time before returning an instance of the extension object. > 3c) Let's require that #3 and #3b work with a DEPTH or DEPTH_STENCIL > attachment > > I'm assuming all GPUs that support MRTs do this Seems reasonable; same question. > 4) Should we disallow having 2 or more attachment points point to the same > attachment? > > In other words, make a single texture and attach it to both > COLOR_ATTACHMENT0 and COLOR_ATTACHMENT1. The spec does not cover what > happens there. The worry is someone will attach the same texture by mistake > to 2 attachment points and depending on the GPU/Driver they'll get different > results. I think we should postpone this; it sounds like there are multiple ways this might be handled and we can make progress without solving it now. -Ken > Thoughts? > > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Tue Feb 19 18:58:54 2013 From: kbr...@ (Kenneth Russell) Date: Tue, 19 Feb 2013 18:58:54 -0800 Subject: [Public WebGL] Proposal: merge new typed-arrays-in-workers test to 1.0.2 Message-ID: Right after the 1.0.2 conformance suite snapshot was taken, the MapsGL team discovered (actually, rediscovered) a bug in Transferable support for typed arrays in one major browser. Unfortunately, the WebGL conformance suite didn't have a test of Transferable support, which is why this bug went unnoticed to this point. 
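(For context, the Transferable path under test moves a typed array's underlying ArrayBuffer to a worker instead of copying it -- roughly as in this minimal sketch, which is not the conformance test itself.)

    // Main page: transfer the buffer to the worker rather than copying it.
    var worker = new Worker('worker.js');
    var data = new Float32Array(1024);
    worker.postMessage({ vertices: data.buffer }, [data.buffer]);
    // The buffer has been transferred away, so it is neutered on this side:
    console.log(data.byteLength);  // 0

    // worker.js: reconstruct a view over the transferred buffer.
    self.onmessage = function (e) {
      var vertices = new Float32Array(e.data.vertices);
      // ... e.g. upload with gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW) ...
    };
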
A thorough test has been added to the top of tree conformance suite: https://www.khronos.org/registry/webgl/sdk/tests/conformance/typedarrays/typed-arrays-in-workers.html I would like to propose that this test be merged back to the 1.0.2 suite. It exposes bugs in the majority of browsers supporting WebGL, and it is likely that the 1.0.2 suite will be a target for both browser and GPU vendors for quite some time. Could all browser vendors supporting WebGL please reply to the list indicating whether or not you would support this? Thanks. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From bja...@ Wed Feb 20 15:18:10 2013 From: bja...@ (Benoit Jacob) Date: Wed, 20 Feb 2013 18:18:10 -0500 Subject: [Public WebGL] Proposal: merge new typed-arrays-in-workers test to 1.0.2 In-Reply-To: References: Message-ID: <512559B2.3000209@mozilla.com> OK. I think that we should get this fixed. It seems acceptable to leave this in 1.0.2. Benoit On 13-02-19 09:58 PM, Kenneth Russell wrote: > Right after the 1.0.2 conformance suite snapshot was taken, the MapsGL > team discovered (actually, rediscovered) a bug in Transferable support > for typed arrays in one major browser. Unfortunately, the WebGL > conformance suite didn't have a test of Transferable support, which is > why this bug went unnoticed to this point. > > A thorough test has been added to the top of tree conformance suite: > https://www.khronos.org/registry/webgl/sdk/tests/conformance/typedarrays/typed-arrays-in-workers.html > > I would like to propose that this test be merged back to the 1.0.2 > suite. It exposes bugs in the majority of browsers supporting WebGL, > and it is likely that the 1.0.2 suite will be a target for both > browser and GPU vendors for quite some time. > > Could all browser vendors supporting WebGL please reply to the list > indicating whether or not you would support this? Thanks. > > -Ken > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From sha...@ Wed Feb 20 16:03:40 2013 From: sha...@ (Shannon Woods) Date: Wed, 20 Feb 2013 19:03:40 -0500 Subject: [Public WebGL] WEBGL_depth_texture Message-ID: <24E0FB64-744D-42E3-8C70-8204C8A7C952@transgaming.com> WEBGL_depth_texture, currently in the process of ratification, has language which poses some difficulty for ANGLE. Both WEBGL_depth_texture and ANGLE_depth_texture, which it references, specify that the depth value is stored in the r, g, and b channels, with alpha being undefined. This language was included to allow for inconsistencies in the alpha value returned when performing such samples via D3D9. 
However, conforming to this creates a bit of a challenge when implemented over D3D11, as the depth value is then only returned by D3D in the r channel, with the other channels receiving 0, 0, 1 default values instead. Our issues would be resolved by changing ANGLE_depth_texture, as well as WEBGL_depth_texture, to guarantee the depth value only in the r channel, and extending the warning about implementation dependency to cover the g and b channels in addition to alpha. Would there be any objections to making this change? Thank you, _____________________________________________________________________ Shannon Woods Technical Manager, Graphics Technology TransGaming T: +1 416-979-9900 x 408 | E: shannon.woods...@ TransGaming.com | GameTreeMac.com | GameTreeTV.com _____________________________________________________________________ This email and any files transmitted herein are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Wed Feb 20 23:32:15 2013 From: kbr...@ (Kenneth Russell) Date: Wed, 20 Feb 2013 23:32:15 -0800 Subject: [Public WebGL] OES_texture_float_linear and OES_texture_half_float_linear extension proposals Message-ID: Proposals for WebGL extensions mirroring OES_texture_float_linear and OES_texture_half_float_linear have been added to the registry: http://www.khronos.org/registry/webgl/extensions/proposals/OES_texture_float_linear/ http://www.khronos.org/registry/webgl/extensions/proposals/OES_texture_half_float_linear/ When the floating-point texture extensions were added to WebGL, I was lax in testing that only the NEAREST filtering mode was allowed. Recently implementations have been made more strict, breaking some content. Exposing these extensions will enable applications which require linear filtering mode for floating-point textures. Any comments or objections to moving these to draft status? Thanks, -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From cal...@ Wed Feb 20 23:56:28 2013 From: cal...@ (Mark Callow) Date: Thu, 21 Feb 2013 16:56:28 +0900 Subject: [Public WebGL] OES_texture_float_linear and OES_texture_half_float_linear extension proposals In-Reply-To: References: Message-ID: <5125D32C.3020100@artspark.co.jp> On 2013/02/21 16:32, Kenneth Russell wrote: > Any comments or objections to moving these to draft status? 32F textures are not filterable even in OpenGL ES 3.0 so I question how supportable OES_texture_float_linear will be outside of desktop implementations. I do not think we should make a WebGL extension. Given the discussion that preceded the decision that 32F textures should not be filterable in ES 3.0 - it greatly increases the size of certain paths in the hardware - I am surprised to see we have an OES extension and I suspect it is not well supported. Regards -Mark -- ???????????????????????????????????? ???????????????????????????? ??????? ???????????????????????????????????? ????????????????????? ??. 
NOTE: This electronic mail message may contain confidential and privileged information from HI Corporation. If you are not the intended recipient, any disclosure, photocopying, distribution or use of the contents of the received information is prohibited. If you have received this e-mail in error, please notify the sender immediately and permanently delete this message and all related copies. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Thu Feb 21 01:41:11 2013 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Thu, 21 Feb 2013 10:41:11 +0100 Subject: [Public WebGL] OES_texture_float_linear and OES_texture_half_float_linear extension proposals In-Reply-To: <5125D32C.3020100@artspark.co.jp> References: <5125D32C.3020100@artspark.co.jp> Message-ID: I'd like to note that linear filtering of float textures is a mandatory feature for me. For instance VSM on which I base a number of my demos and tutorials such as these: - http://codeflow.org/entries/2013/feb/15/soft-shadow-mapping/ - http://codeflow.org/entries/2012/aug/25/webgl-deferred-irradiance-volumes/ - http://codeflow.org/entries/2011/nov/10/webgl-gpu-landscaping-and-erosion/ As such it is important to me that I be able to detect when I cannot get filtered linear filtered floating point textures and in which format supports it. On Thu, Feb 21, 2013 at 8:56 AM, Mark Callow wrote: > On 2013/02/21 16:32, Kenneth Russell wrote: > > Any comments or objections to moving these to draft status? > > 32F textures are not filterable even in OpenGL ES 3.0 so I question how > supportable OES_texture_float_linear will be outside of desktop > implementations. I do not think we should make a WebGL extension. > > Given the discussion that preceded the decision that 32F textures should > not be filterable in ES 3.0 - it greatly increases the size of certain > paths in the hardware - I am surprised to see we have an OES extension and > I suspect it is not well supported. > > Regards > > -Mark > -- > ???????????????????????????????????????????????????????????????? > ???????????????????????????????????????????????????????????????? ??. > > NOTE: This electronic mail message may contain confidential and privileged > information from HI Corporation. If you are not the intended recipient, any > disclosure, photocopying, distribution or use of the contents of the > received information is prohibited. If you have received this e-mail in > error, please notify the sender immediately and permanently delete this > message and all related copies. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Thu Feb 21 02:50:09 2013 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Thu, 21 Feb 2013 11:50:09 +0100 Subject: [Public WebGL] OES_texture_float_linear and OES_texture_half_float_linear extension proposals In-Reply-To: References: <5125D32C.3020100@artspark.co.jp> Message-ID: On Thu, Feb 21, 2013 at 8:56 AM, Mark Callow wrote: > so I question how supportable OES_texture_float_linear will be outside of > desktop implementations As of Jan 21. 
2013 According to http://www.glbenchmark.com/ - OES_texture_half_float: supported by 782/1180 devices - OES_texture_float: supported by 733/1180 devices - OES_texture_half_float_linear: supported by 417/1180 devices - OES_texture_float_linear: supported by 0/1180 devices On Thu, Feb 21, 2013 at 8:56 AM, Mark Callow wrote: > 32F textures are not filterable even in OpenGL ES 3.0 so I question how > supportable OES_texture_float_linear will be outside of desktop > implementations. I do not think we should make a WebGL extension. > > Given the discussion that preceded the decision that 32F textures should > not be filterable in ES 3.0 - it greatly increases the size of certain > paths in the hardware - I am surprised to see we have an OES extension and > I suspect it is not well supported. This argument is faulty for the following reason. 1. OES_texture_float is not going away. 2. OES_texture_half_float is not going away. 3. OES_texture_half_float_linear is needed to figure out if a half float texture supports linear filtering. You cannot assume linear filtering on half-float will work, roughly half of devices don't support it. Hitherto you have no way of detecting that. 4. OES_texture_float_linear is needed to figure it out, just like OES_texture_half_float_linear, it is consistent to be able to detect linear filtering support,* for the supported floating point formats*. The support for mobiles does not matter to the argument at all, since once you have determined what capabilities you have you will be able to provide a seamless fallback to another format or an alternating renderpath. Witholding the extensions *already ratified by khronos for OpenGL ES* acomplishes nothing but promote rendering that will not work on mobiles because people are just gonna assume linear filtering works because they have no way to figure out it. On Thu, Feb 21, 2013 at 8:32 AM, Kenneth Russell wrote: > Any comments or objections to moving these to draft status? I am strongly in favor of moving this extension to draft ASAP. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cal...@ Thu Feb 21 03:07:30 2013 From: cal...@ (Mark Callow) Date: Thu, 21 Feb 2013 20:07:30 +0900 Subject: [Public WebGL] OES_texture_float_linear and OES_texture_half_float_linear extension proposals In-Reply-To: References: <5125D32C.3020100@artspark.co.jp> Message-ID: <5125FFF2.4020103@artspark.co.jp> On 2013/02/21 19:50, Florian B?sch wrote: > On Thu, Feb 21, 2013 at 8:56 AM, Mark > Callow > wrote: > > so I question how supportable OES_texture_float_linear will be > outside of desktop implementations > > > As of Jan 21. 2013 According to http://www.glbenchmark.com/ > > * OES_texture_half_float: supported by 782/1180 devices > * OES_texture_float: supported by 733/1180 devices > * OES_texture_half_float_linear: supported by 417/1180 devices > * OES_texture_float_linear: supported by 0/1180 devices > Thanks for the information. > because people are just gonna assume linear filtering works because > they have no way to figure out it. > The above statement makes no sense. Neither OES_texture_float nore OES_texture_half_float support linear filtering. Therefore, in the absence of the newly proposed extensions you do not have linear filtering. There is nothing to figure out. Regards -Mark -- ???????????????????????????????????? ???????????????????????????? ??????? ???????????????????????????????????? ????????????????????? ??. 
NOTE: This electronic mail message may contain confidential and privileged information from HI Corporation. If you are not the intended recipient, any disclosure, photocopying, distribution or use of the contents of the received information is prohibited. If you have received this e-mail in error, please notify the sender immediately and permanently delete this message and all related copies. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Thu Feb 21 03:13:59 2013 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Thu, 21 Feb 2013 12:13:59 +0100 Subject: [Public WebGL] OES_texture_float_linear and OES_texture_half_float_linear extension proposals In-Reply-To: <5125FFF2.4020103@artspark.co.jp> References: <5125D32C.3020100@artspark.co.jp> <5125FFF2.4020103@artspark.co.jp> Message-ID: On Thu, Feb 21, 2013 at 12:07 PM, Mark Callow wrote: > The above statement makes no sense. Neither OES_texture_float nore > OES_texture_half_float support linear filtering. Therefore, in the absence > of the newly proposed extensions you do not have linear filtering. There is > nothing to figure out. > We are already using linear filtering in limbo because we do not have these extensions. Adding OES_texture_half_float_linear but not OES_texture_float_linear makes no sense. Removing linear filtering will break a lot of peoples code. Not adding it will have the following effect: - It is inconsistent of its treatment of the already supported texture formats - It will force low-quality floating point textures on platforms perfectly able to use higher quality. - It will break peoples code -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Thu Feb 21 10:38:31 2013 From: kbr...@ (Kenneth Russell) Date: Thu, 21 Feb 2013 10:38:31 -0800 Subject: [Public WebGL] OES_texture_float_linear and OES_texture_half_float_linear extension proposals In-Reply-To: References: <5125D32C.3020100@artspark.co.jp> <5125FFF2.4020103@artspark.co.jp> Message-ID: Mark, I understand your concern about promoting desktop-only extensions. When writing the WebGL conformance test for OES_texture_float I neglected to test and forbid the use of linear filtering, so existing applications are already using linear filtering on FP textures. As Florian points out, certain advanced rendering techniques such as Variance Shadow Maps require it. Also, exposing only OES_texture_half_float_linear and not OES_texture_float_linear would be quite asymmetric. I would like to continue to enable in WebGL the rendering techniques that FP textures + linear filtering have enabled. The same applications can target half-float textures and OES_texture_half_float_linear with little code change if they want to target mobile hardware and can tolerate a loss of precision. Alternatively, they could implement filtering manually in their shader if they knew OES_texture_float_linear was not available. Can you be convinced to drop your objection? Mozilla, Apple, Opera, could you please provide your input? Others? -Ken On Thu, Feb 21, 2013 at 3:13 AM, Florian B?sch wrote: > On Thu, Feb 21, 2013 at 12:07 PM, Mark Callow > wrote: >> >> The above statement makes no sense. Neither OES_texture_float nore >> OES_texture_half_float support linear filtering. Therefore, in the absence >> of the newly proposed extensions you do not have linear filtering. There is >> nothing to figure out. > > > We are already using linear filtering in limbo because we do not have these > extensions. 
Adding OES_texture_half_float_linear but not > OES_texture_float_linear makes no sense. Removing linear filtering will > break a lot of peoples code. Not adding it will have the following effect: > > It is inconsistent of its treatment of the already supported texture formats > It will force low-quality floating point textures on platforms perfectly > able to use higher quality. > It will break peoples code ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Thu Feb 21 10:50:06 2013 From: kbr...@ (Kenneth Russell) Date: Thu, 21 Feb 2013 10:50:06 -0800 Subject: [Public WebGL] WEBGL_depth_texture In-Reply-To: <24E0FB64-744D-42E3-8C70-8204C8A7C952@transgaming.com> References: <24E0FB64-744D-42E3-8C70-8204C8A7C952@transgaming.com> Message-ID: This change sounds fine to me. It will enable the extension on more platforms and guide developers toward writing portable shaders. -Ken On Wed, Feb 20, 2013 at 4:03 PM, Shannon Woods wrote: > > WEBGL_depth_texture, currently in the process of ratification, has language > which poses some difficulty for ANGLE. Both WEBGL_depth_texture and > ANGLE_depth_texture, which it references, specify that the depth value is > stored in the r, g, and b channels, with alpha being undefined. This > language was included to allow for inconsistencies in the alpha value > returned when performing such samples via D3D9. However, conforming to this > creates a bit of a challenge when implemented over D3D11, as the depth value > is then only returned by D3D in the r channel, with the other channels > receiving 0, 0, 1 default values instead. > > Our issues would be resolved by changing ANGLE_depth_texture, as well as > WEBGL_depth_texture, to guarantee the depth value only in the r channel, and > extending the warning about implementation dependency to cover the g and b > channels in addition to alpha. Would there be any objections to making this > change? > > Thank you, > _____________________________________________________________________ > Shannon Woods > Technical Manager, Graphics Technology > > TransGaming > T: +1 416-979-9900 x 408 | E: > shannon.woods...@ > > TransGaming.com | GameTreeMac.com | GameTreeTV.com > _____________________________________________________________________ > This email and any files transmitted herein are confidential and intended > solely for the use of the individual or entity to whom they are addressed. > If you are not the intended recipient you are notified that disclosing, > copying, distributing or taking any action in reliance on the contents of > this information is strictly prohibited. > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Thu Feb 21 10:51:17 2013 From: kbr...@ (Kenneth Russell) Date: Thu, 21 Feb 2013 10:51:17 -0800 Subject: [Public WebGL] Proposal: merge new typed-arrays-in-workers test to 1.0.2 In-Reply-To: <512559B2.3000209@mozilla.com> References: <512559B2.3000209@mozilla.com> Message-ID: Thanks, that's great. Other browser vendors? Apple, Opera? 
Note that the test is not yet in the 1.0.2 suite -- only in trunk. The proposal here is to merge it back to 1.0.2. -Ken On Wed, Feb 20, 2013 at 3:18 PM, Benoit Jacob wrote: > > OK. I think that we should get this fixed. It seems acceptable to leave > this in 1.0.2. > > Benoit > > On 13-02-19 09:58 PM, Kenneth Russell wrote: >> Right after the 1.0.2 conformance suite snapshot was taken, the MapsGL >> team discovered (actually, rediscovered) a bug in Transferable support >> for typed arrays in one major browser. Unfortunately, the WebGL >> conformance suite didn't have a test of Transferable support, which is >> why this bug went unnoticed to this point. >> >> A thorough test has been added to the top of tree conformance suite: >> https://www.khronos.org/registry/webgl/sdk/tests/conformance/typedarrays/typed-arrays-in-workers.html >> >> I would like to propose that this test be merged back to the 1.0.2 >> suite. It exposes bugs in the majority of browsers supporting WebGL, >> and it is likely that the 1.0.2 suite will be a target for both >> browser and GPU vendors for quite some time. >> >> Could all browser vendors supporting WebGL please reply to the list >> indicating whether or not you would support this? Thanks. >> >> -Ken >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Fri Feb 22 05:00:53 2013 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Fri, 22 Feb 2013 14:00:53 +0100 Subject: [Public WebGL] Context loss events and multi canvas rendering Message-ID: I just noticed that the context loss events are registered on the canvas rather than the context. The newly proposed functionality of canvas independent contexts would make it awkward to handle context losses (you will have to register the event on every conceivable canvas and check that you don't double handle it). Would it be appropriate to extend the WebGLContext object with an event handling mechanism so it can be handled per context? -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Fri Feb 22 13:36:24 2013 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Fri, 22 Feb 2013 22:36:24 +0100 Subject: [Public WebGL] gl.getShaderPrecisionFormat Message-ID: I've noticed that gl.getShaderPrecisionFormat is not documented in the standard, yet it is implemented by both chrome and firefox. The enumerants it relies on (such as gl.HIGH_FLOAT) are in the standard, but aren't used by any function. I suppose it's missing because of an editing oversight? -------------- next part -------------- An HTML attachment was scrubbed... 
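(For readers unfamiliar with the call: getShaderPrecisionFormat reports the range and precision a shader stage provides for a given precision qualifier. The usual idiom is to use it to decide whether fragment shaders can be compiled with highp floats, for example:)

    // Returns an object with rangeMin, rangeMax and precision.
    var fmt = gl.getShaderPrecisionFormat(gl.FRAGMENT_SHADER, gl.HIGH_FLOAT);
    // A precision of 0 means highp float is unsupported in fragment shaders,
    // so fall back to mediump in the shader source.
    var floatPrecision = (fmt && fmt.precision > 0) ? 'highp' : 'mediump';
    var header = 'precision ' + floatPrecision + ' float;\n';
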
URL: From ben...@ Fri Feb 22 13:50:02 2013 From: ben...@ (Ben Vanik) Date: Fri, 22 Feb 2013 13:50:02 -0800 Subject: [Public WebGL] gl.getShaderPrecisionFormat In-Reply-To: References: Message-ID: Are you looking at an old version of the spec? It's in here: http://www.khronos.org/registry/webgl/specs/latest/ On Fri, Feb 22, 2013 at 1:36 PM, Florian B?sch wrote: > I've noticed that gl.getShaderPrecisionFormat is not documented in the > standard, yet it is implemented by both chrome and firefox. The enumerants > it relies on (such as gl.HIGH_FLOAT) are in the standard, but aren't used > by any function. > > I suppose it's missing because of an editing oversight? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Fri Feb 22 14:09:29 2013 From: kbr...@ (Kenneth Russell) Date: Fri, 22 Feb 2013 14:09:29 -0800 Subject: [Public WebGL] gl.getShaderPrecisionFormat In-Reply-To: References: Message-ID: Yes, this was an unfortunate and accidental omission from the 1.0 spec. It will be fixed in the 1.0.1 and subsequent versions of the spec, hopefully to be unblocked and released very soon. -Ken On Fri, Feb 22, 2013 at 1:50 PM, Ben Vanik wrote: > Are you looking at an old version of the spec? > > It's in here: > http://www.khronos.org/registry/webgl/specs/latest/ > > > On Fri, Feb 22, 2013 at 1:36 PM, Florian B?sch wrote: >> >> I've noticed that gl.getShaderPrecisionFormat is not documented in the >> standard, yet it is implemented by both chrome and firefox. The enumerants >> it relies on (such as gl.HIGH_FLOAT) are in the standard, but aren't used by >> any function. >> >> I suppose it's missing because of an editing oversight? > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kos...@ Fri Feb 22 14:27:01 2013 From: kos...@ (David Sheets) Date: Fri, 22 Feb 2013 14:27:01 -0800 Subject: [Public WebGL] gl.getShaderPrecisionFormat In-Reply-To: References: Message-ID: On Fri, Feb 22, 2013 at 2:09 PM, Kenneth Russell wrote: > > Yes, this was an unfortunate and accidental omission from the 1.0 > spec. It will be fixed in the 1.0.1 and subsequent versions of the > spec, hopefully to be unblocked and released very soon. Is it the policy of the WG to track spec versions to their most up-to-date revision? That is, for the specs being published with [major].[minor].[revision], do revision increments really indicate solely revisions and minor interface extensions? I ask because it seems that if this is the case, the spec known as "1.0" is actually "1.0.0" and any references to "1.0" should point to the latest spec in the "1.0" lineage. This is distinct from the "latest" branch because if the latest branch moves to 1.1.x (or 2.0.x) then 1.0 will continue to track 1.0.y where y is the largest value with a corresponding revision. This may help cut down confusion regarding the revisions. If a dev wants to refer to a specific revision, they can always still use the dotted triple. What do you think? David > -Ken > > > On Fri, Feb 22, 2013 at 1:50 PM, Ben Vanik wrote: >> Are you looking at an old version of the spec? 
>> >> It's in here: >> http://www.khronos.org/registry/webgl/specs/latest/ >> >> >> On Fri, Feb 22, 2013 at 1:36 PM, Florian B?sch wrote: >>> >>> I've noticed that gl.getShaderPrecisionFormat is not documented in the >>> standard, yet it is implemented by both chrome and firefox. The enumerants >>> it relies on (such as gl.HIGH_FLOAT) are in the standard, but aren't used by >>> any function. >>> >>> I suppose it's missing because of an editing oversight? >> >> > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Fri Feb 22 14:30:45 2013 From: kbr...@ (Kenneth Russell) Date: Fri, 22 Feb 2013 14:30:45 -0800 Subject: [Public WebGL] gl.getShaderPrecisionFormat In-Reply-To: References: Message-ID: We'll all have to discuss this once 1.0.1 and 1.0.2 actually ship. The situation will be made more complex with the forthcoming "WebGL level 2" draft spec incorporating ES 3.0 functionality. At that point we may want to consider a different scheme for separating the two major versions of the spec. -Ken On Fri, Feb 22, 2013 at 2:27 PM, David Sheets wrote: > On Fri, Feb 22, 2013 at 2:09 PM, Kenneth Russell wrote: >> >> Yes, this was an unfortunate and accidental omission from the 1.0 >> spec. It will be fixed in the 1.0.1 and subsequent versions of the >> spec, hopefully to be unblocked and released very soon. > > Is it the policy of the WG to track spec versions to their most > up-to-date revision? > > That is, for the specs being published with > [major].[minor].[revision], do revision increments really indicate > solely revisions and minor interface extensions? > > I ask because it seems that if this is the case, the spec known as > "1.0" is actually "1.0.0" and any references to "1.0" should point to > the latest spec in the "1.0" lineage. > > This is distinct from the "latest" branch because if the latest branch > moves to 1.1.x (or 2.0.x) then 1.0 will continue to track 1.0.y where > y is the largest value with a corresponding revision. > > This may help cut down confusion regarding the revisions. If a dev > wants to refer to a specific revision, they can always still use the > dotted triple. What do you think? > > David > >> -Ken >> >> >> On Fri, Feb 22, 2013 at 1:50 PM, Ben Vanik wrote: >>> Are you looking at an old version of the spec? >>> >>> It's in here: >>> http://www.khronos.org/registry/webgl/specs/latest/ >>> >>> >>> On Fri, Feb 22, 2013 at 1:36 PM, Florian B?sch wrote: >>>> >>>> I've noticed that gl.getShaderPrecisionFormat is not documented in the >>>> standard, yet it is implemented by both chrome and firefox. The enumerants >>>> it relies on (such as gl.HIGH_FLOAT) are in the standard, but aren't used by >>>> any function. >>>> >>>> I suppose it's missing because of an editing oversight? 
>>> >>> >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gma...@ Fri Feb 22 14:31:35 2013 From: gma...@ (Gregg Tavares) Date: Fri, 22 Feb 2013 14:31:35 -0800 Subject: [Public WebGL] Context loss events and multi canvas rendering In-Reply-To: References: Message-ID: On Fri, Feb 22, 2013 at 5:00 AM, Florian B?sch wrote: > I just noticed that the context loss events are registered on the canvas > rather than the context. > > The newly proposed functionality of canvas independent contexts would make > it awkward to handle context losses (you will have to register the event on > every conceivable canvas and check that you don't double handle it). > > Would it be appropriate to extend the WebGLContext object with an event > handling mechanism so it can be handled per context? > Yes, good point. They might need to be on DrawingBuffer too if we go that route. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kos...@ Fri Feb 22 14:38:00 2013 From: kos...@ (David Sheets) Date: Fri, 22 Feb 2013 14:38:00 -0800 Subject: [Public WebGL] gl.getShaderPrecisionFormat In-Reply-To: References: Message-ID: On Fri, Feb 22, 2013 at 2:30 PM, Kenneth Russell wrote: > We'll all have to discuss this once 1.0.1 and 1.0.2 actually ship. The > situation will be made more complex with the forthcoming "WebGL level > 2" draft spec incorporating ES 3.0 functionality. At that point we may > want to consider a different scheme for separating the two major > versions of the spec. Once 1.0.1 ships, 1.0 will mean the spec preceding 1.0.1. Is it possible to rename 1.0 to 1.0.0? Or is there some sort of "1.0 compliant" issue which allows vendors to comply to any test suite/spec in the 1.0.x line? It seems that "1.0 compliant" is impossible. Only compliance to specific revision snapshots appears possible due to unknown unknowns. David > -Ken > > > On Fri, Feb 22, 2013 at 2:27 PM, David Sheets wrote: >> On Fri, Feb 22, 2013 at 2:09 PM, Kenneth Russell wrote: >>> >>> Yes, this was an unfortunate and accidental omission from the 1.0 >>> spec. It will be fixed in the 1.0.1 and subsequent versions of the >>> spec, hopefully to be unblocked and released very soon. >> >> Is it the policy of the WG to track spec versions to their most >> up-to-date revision? >> >> That is, for the specs being published with >> [major].[minor].[revision], do revision increments really indicate >> solely revisions and minor interface extensions? >> >> I ask because it seems that if this is the case, the spec known as >> "1.0" is actually "1.0.0" and any references to "1.0" should point to >> the latest spec in the "1.0" lineage. >> >> This is distinct from the "latest" branch because if the latest branch >> moves to 1.1.x (or 2.0.x) then 1.0 will continue to track 1.0.y where >> y is the largest value with a corresponding revision. >> >> This may help cut down confusion regarding the revisions. 
If a dev >> wants to refer to a specific revision, they can always still use the >> dotted triple. What do you think? >> >> David >> >>> -Ken >>> >>> >>> On Fri, Feb 22, 2013 at 1:50 PM, Ben Vanik wrote: >>>> Are you looking at an old version of the spec? >>>> >>>> It's in here: >>>> http://www.khronos.org/registry/webgl/specs/latest/ >>>> >>>> >>>> On Fri, Feb 22, 2013 at 1:36 PM, Florian B?sch wrote: >>>>> >>>>> I've noticed that gl.getShaderPrecisionFormat is not documented in the >>>>> standard, yet it is implemented by both chrome and firefox. The enumerants >>>>> it relies on (such as gl.HIGH_FLOAT) are in the standard, but aren't used by >>>>> any function. >>>>> >>>>> I suppose it's missing because of an editing oversight? >>>> >>>> >>> >>> ----------------------------------------------------------- >>> You are currently subscribed to public_webgl...@ >>> To unsubscribe, send an email to majordomo...@ with >>> the following command in the body of your email: >>> unsubscribe public_webgl >>> ----------------------------------------------------------- >>> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From ret...@ Sun Feb 24 09:39:32 2013 From: ret...@ (Si Robertson) Date: Sun, 24 Feb 2013 17:39:32 +0000 Subject: [Public WebGL] [Typed Array] ArrayBuffer Method Request - Dispose Message-ID: Hi, I have been studying, and experimenting with, the WebGL API and the Web Audio API recently and have noticed a potential system memory related problem with typed arrays: there is no way to dispose of array buffers when they are no longer needed. When working with WebGL and/or Web Audio, array buffers can consume large amounts of system memory. Some of these buffers can be treated as permanent/static buffers and reused during the lifetime of an application, but others are temporary and might only be used during a single function call, or might need to be replaced when additional resources are loaded into the application (e.g. game level assets). It would be extremely useful for programmers if we could explicitly dispose of an array buffer when it is no longer required. Disposing of an array buffer would immediately release any system resources used by the array buffer (e.g. system memory) and reduce the array buffer's length to zero. This would also potentially reduce the amount of work the GC has to do when it eventually decides to clean things up. I am requesting a single method to be added to the ArrayBuffer type: interface ArrayBuffer { ArrayBuffer dispose(); } Most of the other programming languages I use allow these types of arrays to be disposed, for good reasons :) Regards, Si Robertson -------------- next part -------------- An HTML attachment was scrubbed... URL: From jus...@ Sun Feb 24 09:53:55 2013 From: jus...@ (Jussi Kalliokoski) Date: Sun, 24 Feb 2013 12:53:55 -0500 Subject: [Public WebGL] [Typed Array] ArrayBuffer Method Request - Dispose In-Reply-To: References: Message-ID: Hi, On Sun, Feb 24, 2013 at 12:39 PM, Si Robertson wrote: > Hi, > > I have been studying, and experimenting with, the WebGL API and the Web > Audio API recently and have noticed a potential system memory related > problem with typed arrays: there is no way to dispose of array buffers when > they are no longer needed. 
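To make the request concrete, here is a sketch of the temporary-buffer case described in the proposal, with the requested method added. Everything except the dispose() call is ordinary WebGL against an existing context gl, and fillLevelGeometry is a hypothetical helper; dispose() itself is the proposed addition and does not exist in any implementation:

    // Large scratch geometry that is only needed long enough to upload it.
    var vertices = new Float32Array(1024 * 1024);
    fillLevelGeometry(vertices);                  // hypothetical helper

    var vbo = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
    gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);

    // Proposed: release the ~4 MB backing store immediately.
    // vertices.buffer.dispose();                 // hypothetical API, not implemented
    // Today: all we can do is drop the reference and wait for the GC.
    vertices = null;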
> > When working with WebGL and/or Web Audio, array buffers can consume large > amounts of system memory. Some of these buffers can be treated as > permanent/static buffers and reused during the lifetime of an application, > but others are temporary and might only be used during a single function > call, or might need to be replaced when additional resources are loaded > into the application (e.g. game level assets). > > It would be extremely useful for programmers if we could explicitly > dispose of an array buffer when it is no longer required. Disposing of an > array buffer would immediately release any system resources used by the > array buffer (e.g. system memory) and reduce the array buffer's length to > zero. This would also potentially reduce the amount of work the GC has to > do when it eventually decides to clean things up. > > I am requesting a single method to be added to the ArrayBuffer type: > > interface ArrayBuffer { > ArrayBuffer dispose(); > } > > Most of the other programming languages I use allow these types of arrays > to be disposed, for good reasons :) > Hmm, yeah, this is actually interesting because in any other case I would be completely against manual memory management in JS, but typed arrays are quite special because from the application's point of view you can already do this (neutering, e.g. just transfer the array to a short-lived worker). So I think that if we add this, to the application the side effects should be the same as if the typed array had been neutered (i.e. the buffer length becomes zero and so on). So, long story short, I can sympathize with your use cases (I've done a lot of audio synthesis in JS) and think this would be quite a useful addition. Cheers, Jussi > Regards, > Si Robertson > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kir...@ Mon Feb 25 02:44:28 2013 From: kir...@ (Kirill Prazdnikov) Date: Mon, 25 Feb 2013 14:44:28 +0400 Subject: [Public WebGL] [Typed Array] ArrayBuffer Method Request - Dispose In-Reply-To: References: Message-ID: <512B408C.5030304@jetbrains.com> Hi, > interface ArrayBuffer { > ArrayBuffer dispose(); > } What is the purpose of return value ? Thanks On 2/24/2013 9:39 PM, Si Robertson wrote: > Hi, > > I have been studying, and experimenting with, the WebGL API and the > Web Audio API recently and have noticed a potential system memory > related problem with typed arrays: there is no way to dispose of array > buffers when they are no longer needed. > > When working with WebGL and/or Web Audio, array buffers can consume > large amounts of system memory. Some of these buffers can be treated > as permanent/static buffers and reused during the lifetime of an > application, but others are temporary and might only be used during a > single function call, or might need to be replaced when additional > resources are loaded into the application (e.g. game level assets). > > It would be extremely useful for programmers if we could explicitly > dispose of an array buffer when it is no longer required. Disposing of > an array buffer would immediately release any system resources used by > the array buffer (e.g. system memory) and reduce the array buffer's > length to zero. This would also potentially reduce the amount of work > the GC has to do when it eventually decides to clean things up. 
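The neutering workaround Jussi mentions can be sketched like this; the worker script name is illustrative and only needs to point at a script that ignores its messages:

    var buffer = new ArrayBuffer(16 * 1024 * 1024);
    // ... fill and use the buffer ...

    // Hand ownership to a throwaway worker. The second argument is the
    // transfer list, so the buffer is transferred rather than copied and
    // becomes neutered in this context.
    var sink = new Worker("discard-worker.js");   // hypothetical no-op worker
    sink.postMessage(buffer, [buffer]);

    console.log(buffer.byteLength);               // 0 -- neutered here

Whether the memory is actually returned to the system any sooner than the GC would have managed is implementation-defined, which is part of what this thread is arguing about.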
> > I am requesting a single method to be added to the ArrayBuffer type: > > interface ArrayBuffer { > ArrayBuffer dispose(); > } > > Most of the other programming languages I use allow these types of > arrays to be disposed, for good reasons :) > > Regards, > Si Robertson -------------- next part -------------- An HTML attachment was scrubbed... URL: From ret...@ Mon Feb 25 04:00:52 2013 From: ret...@ (Si Robertson) Date: Mon, 25 Feb 2013 12:00:52 +0000 Subject: [Public WebGL] [Typed Array] ArrayBuffer Method Request - Dispose In-Reply-To: <512B408C.5030304@jetbrains.com> References: <512B408C.5030304@jetbrains.com> Message-ID: Sorry, my mistake. The method obviously shouldn't have a return call. interface ArrayBuffer { void dispose(); } Si ++ On 25 February 2013 10:44, Kirill Prazdnikov < kirill.prazdnikov...@> wrote: > ** > Hi, > > interface ArrayBuffer { > ArrayBuffer dispose(); > } > > > What is the purpose of return value ? > > Thanks > > On 2/24/2013 9:39 PM, Si Robertson wrote: > > Hi, > > I have been studying, and experimenting with, the WebGL API and the Web > Audio API recently and have noticed a potential system memory related > problem with typed arrays: there is no way to dispose of array buffers when > they are no longer needed. > > When working with WebGL and/or Web Audio, array buffers can consume > large amounts of system memory. Some of these buffers can be treated as > permanent/static buffers and reused during the lifetime of an application, > but others are temporary and might only be used during a single function > call, or might need to be replaced when additional resources are loaded > into the application (e.g. game level assets). > > It would be extremely useful for programmers if we could explicitly > dispose of an array buffer when it is no longer required. Disposing of an > array buffer would immediately release any system resources used by the > array buffer (e.g. system memory) and reduce the array buffer's length to > zero. This would also potentially reduce the amount of work the GC has to > do when it eventually decides to clean things up. > > I am requesting a single method to be added to the ArrayBuffer type: > > interface ArrayBuffer { > ArrayBuffer dispose(); > } > > Most of the other programming languages I use allow these types of > arrays to be disposed, for good reasons :) > > Regards, > Si Robertson > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ret...@ Mon Feb 25 05:12:21 2013 From: ret...@ (Si Robertson) Date: Mon, 25 Feb 2013 13:12:21 +0000 Subject: [Public WebGL] [Typed Array] ArrayBuffer Method Request - Dispose In-Reply-To: <512B408C.5030304@jetbrains.com> References: <512B408C.5030304@jetbrains.com> Message-ID: Sorry, my mistake. The method obviously shouldn't have a return value. interface ArrayBuffer { void dispose(); } Si ++ On 25 February 2013 10:44, Kirill Prazdnikov < kirill.prazdnikov...@> wrote: > ** > Hi, > > interface ArrayBuffer { > ArrayBuffer dispose(); > } > > > What is the purpose of return value ? > > Thanks > > On 2/24/2013 9:39 PM, Si Robertson wrote: > > Hi, > > I have been studying, and experimenting with, the WebGL API and the Web > Audio API recently and have noticed a potential system memory related > problem with typed arrays: there is no way to dispose of array buffers when > they are no longer needed. > > When working with WebGL and/or Web Audio, array buffers can consume > large amounts of system memory. 
Some of these buffers can be treated as > permanent/static buffers and reused during the lifetime of an application, > but others are temporary and might only be used during a single function > call, or might need to be replaced when additional resources are loaded > into the application (e.g. game level assets). > > It would be extremely useful for programmers if we could explicitly > dispose of an array buffer when it is no longer required. Disposing of an > array buffer would immediately release any system resources used by the > array buffer (e.g. system memory) and reduce the array buffer's length to > zero. This would also potentially reduce the amount of work the GC has to > do when it eventually decides to clean things up. > > I am requesting a single method to be added to the ArrayBuffer type: > > interface ArrayBuffer { > ArrayBuffer dispose(); > } > > Most of the other programming languages I use allow these types of > arrays to be disposed, for good reasons :) > > Regards, > Si Robertson > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Mon Feb 25 10:32:16 2013 From: gma...@ (Gregg Tavares) Date: Mon, 25 Feb 2013 10:32:16 -0800 Subject: [Public WebGL] [Typed Array] ArrayBuffer Method Request - Dispose In-Reply-To: References: <512B408C.5030304@jetbrains.com> Message-ID: typed arrays are garbage collected just like all JS objects. You can just as easily make large strings or large JavaScript arrays. JavaScript will release any unreferenced typed array on it's own just fine. If you have a reproducible case where a typed array is not getting released file a bug for that browser. On Mon, Feb 25, 2013 at 5:12 AM, Si Robertson wrote: > Sorry, my mistake. The method obviously shouldn't have a return value. > > interface ArrayBuffer { > void dispose(); > } > > Si ++ > > > On 25 February 2013 10:44, Kirill Prazdnikov < > kirill.prazdnikov...@> wrote: > >> ** >> Hi, >> >> interface ArrayBuffer { >> ArrayBuffer dispose(); >> } >> >> >> What is the purpose of return value ? >> >> Thanks >> >> On 2/24/2013 9:39 PM, Si Robertson wrote: >> >> Hi, >> >> I have been studying, and experimenting with, the WebGL API and the Web >> Audio API recently and have noticed a potential system memory related >> problem with typed arrays: there is no way to dispose of array buffers when >> they are no longer needed. >> >> When working with WebGL and/or Web Audio, array buffers can consume >> large amounts of system memory. Some of these buffers can be treated as >> permanent/static buffers and reused during the lifetime of an application, >> but others are temporary and might only be used during a single function >> call, or might need to be replaced when additional resources are loaded >> into the application (e.g. game level assets). >> >> It would be extremely useful for programmers if we could explicitly >> dispose of an array buffer when it is no longer required. Disposing of an >> array buffer would immediately release any system resources used by the >> array buffer (e.g. system memory) and reduce the array buffer's lengthto zero. This would also potentially reduce the amount of work the GC has >> to do when it eventually decides to clean things up. 
>> >> I am requesting a single method to be added to the ArrayBuffer type: >> >> interface ArrayBuffer { >> ArrayBuffer dispose(); >> } >> >> Most of the other programming languages I use allow these types of >> arrays to be disposed, for good reasons :) >> >> Regards, >> Si Robertson >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jus...@ Mon Feb 25 11:00:01 2013 From: jus...@ (Jussi Kalliokoski) Date: Mon, 25 Feb 2013 14:00:01 -0500 Subject: [Public WebGL] [Typed Array] ArrayBuffer Method Request - Dispose In-Reply-To: References: <512B408C.5030304@jetbrains.com> Message-ID: On Mon, Feb 25, 2013 at 1:32 PM, Gregg Tavares wrote: > typed arrays are garbage collected just like all JS objects. You can just > as easily make large strings or large JavaScript arrays. JavaScript will > release any unreferenced typed array on it's own just fine. If you have > a reproducible case where a typed array is not getting released file a bug > for that browser. > The problem isn't that the memory isn't released, quite the contrary, the problem is that the memory is often released at the wrong time, e.g. in the middle of filling a buffer with audio (if you fail at filling the buffer quickly enough, your sounds won't make it to the speakers) or during a drawing operation, causing jitter in the frame rate. Having a manual dispose function to free the buffer would for example let the developer free the memory for example after the buffer is filled with audio, thus reducing the risk of artifacts in the sound. Cheers, Jussi > > On Mon, Feb 25, 2013 at 5:12 AM, Si Robertson wrote: > >> Sorry, my mistake. The method obviously shouldn't have a return value. >> >> interface ArrayBuffer { >> void dispose(); >> } >> >> Si ++ >> >> >> On 25 February 2013 10:44, Kirill Prazdnikov < >> kirill.prazdnikov...@> wrote: >> >>> ** >>> Hi, >>> >>> interface ArrayBuffer { >>> ArrayBuffer dispose(); >>> } >>> >>> >>> What is the purpose of return value ? >>> >>> Thanks >>> >>> On 2/24/2013 9:39 PM, Si Robertson wrote: >>> >>> Hi, >>> >>> I have been studying, and experimenting with, the WebGL API and the >>> Web Audio API recently and have noticed a potential system memory related >>> problem with typed arrays: there is no way to dispose of array buffers when >>> they are no longer needed. >>> >>> When working with WebGL and/or Web Audio, array buffers can consume >>> large amounts of system memory. Some of these buffers can be treated as >>> permanent/static buffers and reused during the lifetime of an application, >>> but others are temporary and might only be used during a single function >>> call, or might need to be replaced when additional resources are loaded >>> into the application (e.g. game level assets). >>> >>> It would be extremely useful for programmers if we could explicitly >>> dispose of an array buffer when it is no longer required. Disposing of an >>> array buffer would immediately release any system resources used by the >>> array buffer (e.g. system memory) and reduce the array buffer's lengthto zero. This would also potentially reduce the amount of work the GC has >>> to do when it eventually decides to clean things up. 
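The audio case is worth spelling out, because it shows why the timing of the collection matters more than its cost. A render callback along these lines has a hard deadline (about 23 ms for a 1024-frame block at 44.1 kHz); audioCtx is assumed to be an existing AudioContext, synthesizeSample is a hypothetical synth function, and older builds expose the same node as createJavaScriptNode:

    var node = audioCtx.createScriptProcessor(1024, 0, 1);
    node.onaudioprocess = function (e) {
        var out = e.outputBuffer.getChannelData(0);
        for (var i = 0; i < out.length; i++) {
            out[i] = synthesizeSample();          // hypothetical synth function
        }
        // If a GC pause lands inside this callback, the block misses its
        // deadline and the listener hears a glitch.
    };
    node.connect(audioCtx.destination);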
>>> >>> I am requesting a single method to be added to the ArrayBuffer type: >>> >>> interface ArrayBuffer { >>> ArrayBuffer dispose(); >>> } >>> >>> Most of the other programming languages I use allow these types of >>> arrays to be disposed, for good reasons :) >>> >>> Regards, >>> Si Robertson >>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Mon Feb 25 11:16:56 2013 From: gma...@ (Gregg Tavares) Date: Mon, 25 Feb 2013 11:16:56 -0800 Subject: [Public WebGL] [Typed Array] ArrayBuffer Method Request - Dispose In-Reply-To: References: <512B408C.5030304@jetbrains.com> Message-ID: On Mon, Feb 25, 2013 at 11:00 AM, Jussi Kalliokoski < jussi.kalliokoski...@> wrote: > On Mon, Feb 25, 2013 at 1:32 PM, Gregg Tavares wrote: > >> typed arrays are garbage collected just like all JS objects. You can just >> as easily make large strings or large JavaScript arrays. JavaScript will >> release any unreferenced typed array on it's own just fine. If you have >> a reproducible case where a typed array is not getting released file a bug >> for that browser. >> > > The problem isn't that the memory isn't released, quite the contrary, the > problem is that the memory is often released at the wrong time, e.g. in the > middle of filling a buffer with audio (if you fail at filling the buffer > quickly enough, your sounds won't make it to the speakers) or during a > drawing operation, causing jitter in the frame rate. Having a manual > dispose function to free the buffer would for example let the developer > free the memory for example after the buffer is filled with audio, thus > reducing the risk of artifacts in the sound. > You're making a huge assumption that calling dispose would some how magically be (a) fast and (b) effect allocation speed later. > > Cheers, > Jussi > > >> >> On Mon, Feb 25, 2013 at 5:12 AM, Si Robertson wrote: >> >>> Sorry, my mistake. The method obviously shouldn't have a return value. >>> >>> interface ArrayBuffer { >>> void dispose(); >>> } >>> >>> Si ++ >>> >>> >>> On 25 February 2013 10:44, Kirill Prazdnikov < >>> kirill.prazdnikov...@> wrote: >>> >>>> ** >>>> Hi, >>>> >>>> interface ArrayBuffer { >>>> ArrayBuffer dispose(); >>>> } >>>> >>>> >>>> What is the purpose of return value ? >>>> >>>> Thanks >>>> >>>> On 2/24/2013 9:39 PM, Si Robertson wrote: >>>> >>>> Hi, >>>> >>>> I have been studying, and experimenting with, the WebGL API and the >>>> Web Audio API recently and have noticed a potential system memory related >>>> problem with typed arrays: there is no way to dispose of array buffers when >>>> they are no longer needed. >>>> >>>> When working with WebGL and/or Web Audio, array buffers can consume >>>> large amounts of system memory. Some of these buffers can be treated as >>>> permanent/static buffers and reused during the lifetime of an application, >>>> but others are temporary and might only be used during a single function >>>> call, or might need to be replaced when additional resources are loaded >>>> into the application (e.g. game level assets). >>>> >>>> It would be extremely useful for programmers if we could explicitly >>>> dispose of an array buffer when it is no longer required. Disposing of an >>>> array buffer would immediately release any system resources used by the >>>> array buffer (e.g. system memory) and reduce the array buffer's lengthto zero. This would also potentially reduce the amount of work the GC has >>>> to do when it eventually decides to clean things up. 
>>>> >>>> I am requesting a single method to be added to the ArrayBuffer type: >>>> >>>> interface ArrayBuffer { >>>> ArrayBuffer dispose(); >>>> } >>>> >>>> Most of the other programming languages I use allow these types of >>>> arrays to be disposed, for good reasons :) >>>> >>>> Regards, >>>> Si Robertson >>>> >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben...@ Mon Feb 25 11:19:36 2013 From: ben...@ (Ben Vanik) Date: Mon, 25 Feb 2013 11:19:36 -0800 Subject: [Public WebGL] [Typed Array] ArrayBuffer Method Request - Dispose In-Reply-To: References: <512B408C.5030304@jetbrains.com> Message-ID: I've found the same issues - in Chrome (at least) there's a severe GC penalty for ArrayBuffers a severe penalty for creating new ArrayBuffers - so I pool or statically cache everything I can. The one place this still bites me is during some loading operations, where the file system API/XHR return array buffers that I will end up dropping almost immediately (as I don't want to store the raw buffer) and cause long GC times. It's fine if the GC occurs during the load operation but it rarely does -- it always happens a second or so into the app, where it's most noticeable. In continuously loading systems that stream in data over XHR/etc a GC of mostly typed arrays can often eat up 100ms, causing many dropped frames. Yuck. Unfortunately I doubt a dispose() will ever be added due to most web platform decision makers thinking that GC is perfect for all situations ;) On Mon, Feb 25, 2013 at 11:00 AM, Jussi Kalliokoski < jussi.kalliokoski...@> wrote: > On Mon, Feb 25, 2013 at 1:32 PM, Gregg Tavares wrote: > >> typed arrays are garbage collected just like all JS objects. You can just >> as easily make large strings or large JavaScript arrays. JavaScript will >> release any unreferenced typed array on it's own just fine. If you have >> a reproducible case where a typed array is not getting released file a bug >> for that browser. >> > > The problem isn't that the memory isn't released, quite the contrary, the > problem is that the memory is often released at the wrong time, e.g. in the > middle of filling a buffer with audio (if you fail at filling the buffer > quickly enough, your sounds won't make it to the speakers) or during a > drawing operation, causing jitter in the frame rate. Having a manual > dispose function to free the buffer would for example let the developer > free the memory for example after the buffer is filled with audio, thus > reducing the risk of artifacts in the sound. > > Cheers, > Jussi > > >> >> On Mon, Feb 25, 2013 at 5:12 AM, Si Robertson wrote: >> >>> Sorry, my mistake. The method obviously shouldn't have a return value. >>> >>> interface ArrayBuffer { >>> void dispose(); >>> } >>> >>> Si ++ >>> >>> >>> On 25 February 2013 10:44, Kirill Prazdnikov < >>> kirill.prazdnikov...@> wrote: >>> >>>> ** >>>> Hi, >>>> >>>> interface ArrayBuffer { >>>> ArrayBuffer dispose(); >>>> } >>>> >>>> >>>> What is the purpose of return value ? >>>> >>>> Thanks >>>> >>>> On 2/24/2013 9:39 PM, Si Robertson wrote: >>>> >>>> Hi, >>>> >>>> I have been studying, and experimenting with, the WebGL API and the >>>> Web Audio API recently and have noticed a potential system memory related >>>> problem with typed arrays: there is no way to dispose of array buffers when >>>> they are no longer needed. >>>> >>>> When working with WebGL and/or Web Audio, array buffers can consume >>>> large amounts of system memory. 
Some of these buffers can be treated as >>>> permanent/static buffers and reused during the lifetime of an application, >>>> but others are temporary and might only be used during a single function >>>> call, or might need to be replaced when additional resources are loaded >>>> into the application (e.g. game level assets). >>>> >>>> It would be extremely useful for programmers if we could explicitly >>>> dispose of an array buffer when it is no longer required. Disposing of an >>>> array buffer would immediately release any system resources used by the >>>> array buffer (e.g. system memory) and reduce the array buffer's lengthto zero. This would also potentially reduce the amount of work the GC has >>>> to do when it eventually decides to clean things up. >>>> >>>> I am requesting a single method to be added to the ArrayBuffer type: >>>> >>>> interface ArrayBuffer { >>>> ArrayBuffer dispose(); >>>> } >>>> >>>> Most of the other programming languages I use allow these types of >>>> arrays to be disposed, for good reasons :) >>>> >>>> Regards, >>>> Si Robertson >>>> >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jus...@ Mon Feb 25 11:20:19 2013 From: jus...@ (Jussi Kalliokoski) Date: Mon, 25 Feb 2013 14:20:19 -0500 Subject: [Public WebGL] [Typed Array] ArrayBuffer Method Request - Dispose In-Reply-To: References: <512B408C.5030304@jetbrains.com> Message-ID: On Mon, Feb 25, 2013 at 2:16 PM, Gregg Tavares wrote: > > > > On Mon, Feb 25, 2013 at 11:00 AM, Jussi Kalliokoski < > jussi.kalliokoski...@> wrote: > >> On Mon, Feb 25, 2013 at 1:32 PM, Gregg Tavares wrote: >> >>> typed arrays are garbage collected just like all JS objects. You can >>> just as easily make large strings or large JavaScript arrays. JavaScript >>> will release any unreferenced typed array on it's own just fine. If you >>> have a reproducible case where a typed array is not getting released file a >>> bug for that browser. >>> >> >> The problem isn't that the memory isn't released, quite the contrary, the >> problem is that the memory is often released at the wrong time, e.g. in the >> middle of filling a buffer with audio (if you fail at filling the buffer >> quickly enough, your sounds won't make it to the speakers) or during a >> drawing operation, causing jitter in the frame rate. Having a manual >> dispose function to free the buffer would for example let the developer >> free the memory for example after the buffer is filled with audio, thus >> reducing the risk of artifacts in the sound. >> > > You're making a huge assumption that calling dispose would some how > magically be (a) fast and (b) effect allocation speed later. > What makes you think so? a) No, I don't expect it to be any faster than garbage collection. b) What? Where did you get this? The point is that the deallocation, however slow or fast, happens at a suitable time. Cheers, Jussi > >> Cheers, >> Jussi >> >> >>> >>> On Mon, Feb 25, 2013 at 5:12 AM, Si Robertson wrote: >>> >>>> Sorry, my mistake. The method obviously shouldn't have a return value. >>>> >>>> interface ArrayBuffer { >>>> void dispose(); >>>> } >>>> >>>> Si ++ >>>> >>>> >>>> On 25 February 2013 10:44, Kirill Prazdnikov < >>>> kirill.prazdnikov...@> wrote: >>>> >>>>> ** >>>>> Hi, >>>>> >>>>> interface ArrayBuffer { >>>>> ArrayBuffer dispose(); >>>>> } >>>>> >>>>> >>>>> What is the purpose of return value ? 
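The pooling Ben describes is usually nothing more elaborate than allocating one large typed array up front and handing out subarray views instead of constructing new arrays per frame or per load; a minimal sketch, with sizes and names purely illustrative:

    // One large backing store, allocated once at startup.
    var scratch = new Float32Array(4 * 1024 * 1024);
    var scratchOffset = 0;

    // Hand out a view into the backing store; no new allocation happens.
    function allocScratch(length) {
        var view = scratch.subarray(scratchOffset, scratchOffset + length);
        scratchOffset += length;                  // a real pool would check overflow
        return view;
    }

    // Reset once per frame or per load batch; nothing is left for the GC.
    function resetScratch() {
        scratchOffset = 0;
    }

This also matches Ashley's suggestion later in the thread of creating one big buffer and throwing away cheap views onto it.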
>>>>> >>>>> Thanks >>>>> >>>>> On 2/24/2013 9:39 PM, Si Robertson wrote: >>>>> >>>>> Hi, >>>>> >>>>> I have been studying, and experimenting with, the WebGL API and the >>>>> Web Audio API recently and have noticed a potential system memory related >>>>> problem with typed arrays: there is no way to dispose of array buffers when >>>>> they are no longer needed. >>>>> >>>>> When working with WebGL and/or Web Audio, array buffers can consume >>>>> large amounts of system memory. Some of these buffers can be treated as >>>>> permanent/static buffers and reused during the lifetime of an application, >>>>> but others are temporary and might only be used during a single function >>>>> call, or might need to be replaced when additional resources are loaded >>>>> into the application (e.g. game level assets). >>>>> >>>>> It would be extremely useful for programmers if we could explicitly >>>>> dispose of an array buffer when it is no longer required. Disposing of an >>>>> array buffer would immediately release any system resources used by the >>>>> array buffer (e.g. system memory) and reduce the array buffer's lengthto zero. This would also potentially reduce the amount of work the GC has >>>>> to do when it eventually decides to clean things up. >>>>> >>>>> I am requesting a single method to be added to the ArrayBuffer type: >>>>> >>>>> interface ArrayBuffer { >>>>> ArrayBuffer dispose(); >>>>> } >>>>> >>>>> Most of the other programming languages I use allow these types of >>>>> arrays to be disposed, for good reasons :) >>>>> >>>>> Regards, >>>>> Si Robertson >>>>> >>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben...@ Mon Feb 25 11:26:30 2013 From: ben...@ (Ben Vanik) Date: Mon, 25 Feb 2013 11:26:30 -0800 Subject: [Public WebGL] [Typed Array] ArrayBuffer Method Request - Dispose In-Reply-To: References: <512B408C.5030304@jetbrains.com> Message-ID: Speed doesn't matter so much as predictability - being able to schedule the deallocation of the arrays to run in spare time (between frames, throttled to 1ms/frame, etc) is what matters in these scenarios. On Mon, Feb 25, 2013 at 11:16 AM, Gregg Tavares wrote: > > > > On Mon, Feb 25, 2013 at 11:00 AM, Jussi Kalliokoski < > jussi.kalliokoski...@> wrote: > >> On Mon, Feb 25, 2013 at 1:32 PM, Gregg Tavares wrote: >> >>> typed arrays are garbage collected just like all JS objects. You can >>> just as easily make large strings or large JavaScript arrays. JavaScript >>> will release any unreferenced typed array on it's own just fine. If you >>> have a reproducible case where a typed array is not getting released file a >>> bug for that browser. >>> >> >> The problem isn't that the memory isn't released, quite the contrary, the >> problem is that the memory is often released at the wrong time, e.g. in the >> middle of filling a buffer with audio (if you fail at filling the buffer >> quickly enough, your sounds won't make it to the speakers) or during a >> drawing operation, causing jitter in the frame rate. Having a manual >> dispose function to free the buffer would for example let the developer >> free the memory for example after the buffer is filled with audio, thus >> reducing the risk of artifacts in the sound. >> > > You're making a huge assumption that calling dispose would some how > magically be (a) fast and (b) effect allocation speed later. > > >> >> Cheers, >> Jussi >> >> >>> >>> On Mon, Feb 25, 2013 at 5:12 AM, Si Robertson wrote: >>> >>>> Sorry, my mistake. 
The method obviously shouldn't have a return value. >>>> >>>> interface ArrayBuffer { >>>> void dispose(); >>>> } >>>> >>>> Si ++ >>>> >>>> >>>> On 25 February 2013 10:44, Kirill Prazdnikov < >>>> kirill.prazdnikov...@> wrote: >>>> >>>>> ** >>>>> Hi, >>>>> >>>>> interface ArrayBuffer { >>>>> ArrayBuffer dispose(); >>>>> } >>>>> >>>>> >>>>> What is the purpose of return value ? >>>>> >>>>> Thanks >>>>> >>>>> On 2/24/2013 9:39 PM, Si Robertson wrote: >>>>> >>>>> Hi, >>>>> >>>>> I have been studying, and experimenting with, the WebGL API and the >>>>> Web Audio API recently and have noticed a potential system memory related >>>>> problem with typed arrays: there is no way to dispose of array buffers when >>>>> they are no longer needed. >>>>> >>>>> When working with WebGL and/or Web Audio, array buffers can consume >>>>> large amounts of system memory. Some of these buffers can be treated as >>>>> permanent/static buffers and reused during the lifetime of an application, >>>>> but others are temporary and might only be used during a single function >>>>> call, or might need to be replaced when additional resources are loaded >>>>> into the application (e.g. game level assets). >>>>> >>>>> It would be extremely useful for programmers if we could explicitly >>>>> dispose of an array buffer when it is no longer required. Disposing of an >>>>> array buffer would immediately release any system resources used by the >>>>> array buffer (e.g. system memory) and reduce the array buffer's lengthto zero. This would also potentially reduce the amount of work the GC has >>>>> to do when it eventually decides to clean things up. >>>>> >>>>> I am requesting a single method to be added to the ArrayBuffer type: >>>>> >>>>> interface ArrayBuffer { >>>>> ArrayBuffer dispose(); >>>>> } >>>>> >>>>> Most of the other programming languages I use allow these types of >>>>> arrays to be disposed, for good reasons :) >>>>> >>>>> Regards, >>>>> Si Robertson >>>>> >>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ret...@ Mon Feb 25 11:30:41 2013 From: ret...@ (Si Robertson) Date: Mon, 25 Feb 2013 19:30:41 +0000 Subject: [Public WebGL] [Typed Array] ArrayBuffer Method Request - Dispose In-Reply-To: References: <512B408C.5030304@jetbrains.com> Message-ID: Yep, the problem isn't memory not being released, the problem is memory being released "at some point in the future" by the GC. As mentioned previously, array buffers can consume large amounts of system memory especially when working with WebGL and Audio Context at an advanced level, and an application cannot currently dispose of any array buffers even when it knows those array buffers will no longer be needed. What we are looking at right now is large amounts of system memory being used when there is absolutely no reason for it to be in use. If a game, for example, no longer needs the assets for "level one" when "level two" is loaded into memory, the game should be able to dispose of the level one assets before the level two assets are loaded into memory. We most definitely need some kind of memory management control with these new APIs. Adding a dispose() method to array buffers will be an ideal solution for most use cases. Si ++ On 25 February 2013 19:00, Jussi Kalliokoski wrote: > On Mon, Feb 25, 2013 at 1:32 PM, Gregg Tavares wrote: > >> typed arrays are garbage collected just like all JS objects. You can just >> as easily make large strings or large JavaScript arrays. 
JavaScript will >> release any unreferenced typed array on it's own just fine. If you have >> a reproducible case where a typed array is not getting released file a bug >> for that browser. >> > > The problem isn't that the memory isn't released, quite the contrary, the > problem is that the memory is often released at the wrong time, e.g. in the > middle of filling a buffer with audio (if you fail at filling the buffer > quickly enough, your sounds won't make it to the speakers) or during a > drawing operation, causing jitter in the frame rate. Having a manual > dispose function to free the buffer would for example let the developer > free the memory for example after the buffer is filled with audio, thus > reducing the risk of artifacts in the sound. > > Cheers, > Jussi > > >> >> On Mon, Feb 25, 2013 at 5:12 AM, Si Robertson wrote: >> >>> Sorry, my mistake. The method obviously shouldn't have a return value. >>> >>> interface ArrayBuffer { >>> void dispose(); >>> } >>> >>> Si ++ >>> >>> >>> On 25 February 2013 10:44, Kirill Prazdnikov < >>> kirill.prazdnikov...@> wrote: >>> >>>> ** >>>> Hi, >>>> >>>> interface ArrayBuffer { >>>> ArrayBuffer dispose(); >>>> } >>>> >>>> >>>> What is the purpose of return value ? >>>> >>>> Thanks >>>> >>>> On 2/24/2013 9:39 PM, Si Robertson wrote: >>>> >>>> Hi, >>>> >>>> I have been studying, and experimenting with, the WebGL API and the >>>> Web Audio API recently and have noticed a potential system memory related >>>> problem with typed arrays: there is no way to dispose of array buffers when >>>> they are no longer needed. >>>> >>>> When working with WebGL and/or Web Audio, array buffers can consume >>>> large amounts of system memory. Some of these buffers can be treated as >>>> permanent/static buffers and reused during the lifetime of an application, >>>> but others are temporary and might only be used during a single function >>>> call, or might need to be replaced when additional resources are loaded >>>> into the application (e.g. game level assets). >>>> >>>> It would be extremely useful for programmers if we could explicitly >>>> dispose of an array buffer when it is no longer required. Disposing of an >>>> array buffer would immediately release any system resources used by the >>>> array buffer (e.g. system memory) and reduce the array buffer's lengthto zero. This would also potentially reduce the amount of work the GC has >>>> to do when it eventually decides to clean things up. >>>> >>>> I am requesting a single method to be added to the ArrayBuffer type: >>>> >>>> interface ArrayBuffer { >>>> ArrayBuffer dispose(); >>>> } >>>> >>>> Most of the other programming languages I use allow these types of >>>> arrays to be disposed, for good reasons :) >>>> >>>> Regards, >>>> Si Robertson >>>> >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ash...@ Mon Feb 25 11:36:37 2013 From: ash...@ (Ashley Gullen) Date: Mon, 25 Feb 2013 19:36:37 +0000 Subject: [Public WebGL] [Typed Array] ArrayBuffer Method Request - Dispose In-Reply-To: References: <512B408C.5030304@jetbrains.com> Message-ID: Why not just recycle existing array buffers? Create one big one, then create and throw away views on to that buffer, which are small and cheap objects. On 25 February 2013 19:30, Si Robertson wrote: > Yep, the problem isn't memory not being released, the problem is memory > being released "at some point in the future" by the GC. 
As mentioned > previously, array buffers can consume large amounts of system memory > especially when working with WebGL and Audio Context at an advanced level, > and an application cannot currently dispose of any array buffers even when > it knows those array buffers will no longer be needed. > > What we are looking at right now is large amounts of system memory being > used when there is absolutely no reason for it to be in use. If a game, for > example, no longer needs the assets for "level one" when "level two" is > loaded into memory, the game should be able to dispose of the level one > assets before the level two assets are loaded into memory. > > We most definitely need some kind of memory management control with these > new APIs. Adding a dispose() method to array buffers will be an ideal > solution for most use cases. > > Si ++ > > > On 25 February 2013 19:00, Jussi Kalliokoski wrote: > >> On Mon, Feb 25, 2013 at 1:32 PM, Gregg Tavares wrote: >> >>> typed arrays are garbage collected just like all JS objects. You can >>> just as easily make large strings or large JavaScript arrays. JavaScript >>> will release any unreferenced typed array on it's own just fine. If you >>> have a reproducible case where a typed array is not getting released file a >>> bug for that browser. >>> >> >> The problem isn't that the memory isn't released, quite the contrary, the >> problem is that the memory is often released at the wrong time, e.g. in the >> middle of filling a buffer with audio (if you fail at filling the buffer >> quickly enough, your sounds won't make it to the speakers) or during a >> drawing operation, causing jitter in the frame rate. Having a manual >> dispose function to free the buffer would for example let the developer >> free the memory for example after the buffer is filled with audio, thus >> reducing the risk of artifacts in the sound. >> >> Cheers, >> Jussi >> >> >>> >>> On Mon, Feb 25, 2013 at 5:12 AM, Si Robertson wrote: >>> >>>> Sorry, my mistake. The method obviously shouldn't have a return value. >>>> >>>> interface ArrayBuffer { >>>> void dispose(); >>>> } >>>> >>>> Si ++ >>>> >>>> >>>> On 25 February 2013 10:44, Kirill Prazdnikov < >>>> kirill.prazdnikov...@> wrote: >>>> >>>>> ** >>>>> Hi, >>>>> >>>>> interface ArrayBuffer { >>>>> ArrayBuffer dispose(); >>>>> } >>>>> >>>>> >>>>> What is the purpose of return value ? >>>>> >>>>> Thanks >>>>> >>>>> On 2/24/2013 9:39 PM, Si Robertson wrote: >>>>> >>>>> Hi, >>>>> >>>>> I have been studying, and experimenting with, the WebGL API and the >>>>> Web Audio API recently and have noticed a potential system memory related >>>>> problem with typed arrays: there is no way to dispose of array buffers when >>>>> they are no longer needed. >>>>> >>>>> When working with WebGL and/or Web Audio, array buffers can consume >>>>> large amounts of system memory. Some of these buffers can be treated as >>>>> permanent/static buffers and reused during the lifetime of an application, >>>>> but others are temporary and might only be used during a single function >>>>> call, or might need to be replaced when additional resources are loaded >>>>> into the application (e.g. game level assets). >>>>> >>>>> It would be extremely useful for programmers if we could explicitly >>>>> dispose of an array buffer when it is no longer required. Disposing of an >>>>> array buffer would immediately release any system resources used by the >>>>> array buffer (e.g. system memory) and reduce the array buffer's lengthto zero. 
This would also potentially reduce the amount of work the GC has >>>>> to do when it eventually decides to clean things up. >>>>> >>>>> I am requesting a single method to be added to the ArrayBuffer type: >>>>> >>>>> interface ArrayBuffer { >>>>> ArrayBuffer dispose(); >>>>> } >>>>> >>>>> Most of the other programming languages I use allow these types of >>>>> arrays to be disposed, for good reasons :) >>>>> >>>>> Regards, >>>>> Si Robertson >>>>> >>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Mon Feb 25 11:41:11 2013 From: gma...@ (Gregg Tavares) Date: Mon, 25 Feb 2013 11:41:11 -0800 Subject: [Public WebGL] Sharing Resources across contexts In-Reply-To: References: <511A28AF.3060703@jetbrains.com> Message-ID: So I ran into my first issue prototyping this. Ideally checkFramebufferStatus will return FRAMEBUFFER_INCOMPLETE_ATTACHMENT if an attachment is not acquired but... if you're only going to read from a framebuffer (calling readPixels) you should only need READ_ONLY permission where as if you are going to draw to the framebuffer (clear/drawArrays/drawElements) you need EXCLUSIVE permission. Since checkFramebufferStatus doesn't know your intent it can't give you the correct answer. Several solutions came up so far. *1) Require EXCLUSIVE permission for framebuffer attachments* That's not really a solution since one of the required use cases is multiple workers reading from the same texture. * * *2) checkFramebufferStatus assumes READ_ONLY permission only* That means you might still get a INVALID_FRAMEBUFFER_OPERATION if you acquired attachments for READ_ONLY access and then try to draw. *3) expose the DRAW_FRAMEBUFFER and READ_FRAMEBUFFER bind targets* On systems that support multi-sampling there are 2 framebuffer bind targets. DRAW_FRAMEBUFFER and READ_FRAMEBUFFER. DRAW is where drawing happens. READ is where reading happens. Unfortunately since multi-sampling isn't supported everywhere allowing you to bind separate framebuffers to DRAW_FRAMEBUFFER and READ_FRAMEBUFFER won't work. *4) allow DRAW_FRAMEBUFFER and READ_FRAMEBUFFER as arguments to checkFramebufferStatus* In this case you'd still only be able to bind to gl.FRAMEBUFFER but you can call checkFramebufferStatus with either DRAW_FRAMEBUFFER or READ_FRAMEBUFFER or FRAMEBUFFER. The OpenGL spec defines FRAMEBUFFER as meaning both READ_FRAMEBUFFER and DRAW_FRAMEBUFFER so this proposal would just work and be future compatible. In other words. gl.checkFramebufferStatus(gl.FRAMEBUFFER) // checks for EXCLUSIVE access gl.checkFramebufferStatus(gl.DRAW_FRAMEBUFFER) // checks for EXCLUSIVE access gl.checkFramebufferStatus(gl.READ_FRAMEBUFFER) // checks for READ_ONLY access #4 seems like the best/correct choice but I thought see if there were other ideas. On Tue, Feb 12, 2013 at 8:38 AM, Brandon Jones wrote: > On Tuesday, February 12, 2013, Kirill Prazdnikov wrote: > >> >> Hi Gregg, >> >> void cancelAcquireSharedResource(**long id); >>> >> >> The purpose of cancelAcquireSharedResource is not clear from the document. >> > > acquireSharedResource is an asynchronous operation. > cancleAcquireSharedResource would indicate that a previous acquire call, > presumably one that has not yet completed, is no longer desired. It should > function like clearTimeout for a setTimeout call. > > One thing that's not explicitly mentioned in the wiki is the behavior of > cancelAcquireSharedResource if the acquisition has already succeeded. I > would imagine it's a no-op at that point? 
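Under option 4, existing code that binds and draws never has to change; code that only reads can ask the weaker question. A sketch of the intended usage, assuming a framebuffer is already bound to gl.FRAMEBUFFER and that width, height, pixels and vertexCount exist; note that accepting READ_FRAMEBUFFER and DRAW_FRAMEBUFFER as arguments here is the proposal itself, not current WebGL:

    // A worker that acquired the attachments READ_ONLY checks readability.
    if (gl.checkFramebufferStatus(gl.READ_FRAMEBUFFER) === gl.FRAMEBUFFER_COMPLETE) {
        gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
    }

    // A worker that renders needs EXCLUSIVE access, so it uses the stronger check.
    if (gl.checkFramebufferStatus(gl.DRAW_FRAMEBUFFER) === gl.FRAMEBUFFER_COMPLETE) {
        gl.clear(gl.COLOR_BUFFER_BIT);
        gl.drawArrays(gl.TRIANGLES, 0, vertexCount);
    }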
> > >> >> Why simply not to use COM like ref counting ? >> acquireSharedResource = addRef >> releaseSharedResources = release ? >> >> Thanks > > > The effect of these functions is different than addRef/release. Acquiring > a WebGL resource would allow the acquiring context access to the resource, > and may prevent other contexts from accessing it if it was acquired with > gl.EXCLUSIVE. If acquired exclusively, other contexts would not be able to > acquire or access the resources until it had been released. This is to > maintain safety across threads and to explicitly manage the requirements of > OpenGL regarding use by multiple contexts. As such ref counting is not a > good parallel. > > --Brandon > > >> >> On 2/12/2013 2:43 AM, Gregg Tavares wrote: >> >>> Sharing resources across contexts is still a very important feature so >>> here's a proposal >>> >>> http://www.khronos.org/webgl/**wiki/SharedResouces >>> >>> Looking forward to your feedback >>> >>> Note: This is an orthogonal issue to the 1 context multiple canvases >>> issue. It is also orthogonal to the drawing from a worker into a canvas >>> issue. >>> >>> >>> >> >> ------------------------------**----------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ------------------------------**----------------------------- >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben...@ Mon Feb 25 11:46:07 2013 From: ben...@ (Ben Vanik) Date: Mon, 25 Feb 2013 11:46:07 -0800 Subject: [Public WebGL] [Typed Array] ArrayBuffer Method Request - Dispose In-Reply-To: References: <512B408C.5030304@jetbrains.com> Message-ID: I often do, but some APIs make that difficult (such as XHR/filesystem), as they return arraybuffers. On Mon, Feb 25, 2013 at 11:36 AM, Ashley Gullen wrote: > Why not just recycle existing array buffers? Create one big one, then > create and throw away views on to that buffer, which are small and cheap > objects. > > > > On 25 February 2013 19:30, Si Robertson wrote: > >> Yep, the problem isn't memory not being released, the problem is memory >> being released "at some point in the future" by the GC. As mentioned >> previously, array buffers can consume large amounts of system memory >> especially when working with WebGL and Audio Context at an advanced level, >> and an application cannot currently dispose of any array buffers even when >> it knows those array buffers will no longer be needed. >> >> What we are looking at right now is large amounts of system memory being >> used when there is absolutely no reason for it to be in use. If a game, for >> example, no longer needs the assets for "level one" when "level two" is >> loaded into memory, the game should be able to dispose of the level one >> assets before the level two assets are loaded into memory. >> >> We most definitely need some kind of memory management control with these >> new APIs. Adding a dispose() method to array buffers will be an ideal >> solution for most use cases. >> >> Si ++ >> >> >> On 25 February 2013 19:00, Jussi Kalliokoski > > wrote: >> >>> On Mon, Feb 25, 2013 at 1:32 PM, Gregg Tavares wrote: >>> >>>> typed arrays are garbage collected just like all JS objects. You can >>>> just as easily make large strings or large JavaScript arrays. JavaScript >>>> will release any unreferenced typed array on it's own just fine. 
If you >>>> have a reproducible case where a typed array is not getting released file a >>>> bug for that browser. >>>> >>> >>> The problem isn't that the memory isn't released, quite the contrary, >>> the problem is that the memory is often released at the wrong time, e.g. in >>> the middle of filling a buffer with audio (if you fail at filling the >>> buffer quickly enough, your sounds won't make it to the speakers) or during >>> a drawing operation, causing jitter in the frame rate. Having a manual >>> dispose function to free the buffer would for example let the developer >>> free the memory for example after the buffer is filled with audio, thus >>> reducing the risk of artifacts in the sound. >>> >>> Cheers, >>> Jussi >>> >>> >>>> >>>> On Mon, Feb 25, 2013 at 5:12 AM, Si Robertson wrote: >>>> >>>>> Sorry, my mistake. The method obviously shouldn't have a return value. >>>>> >>>>> interface ArrayBuffer { >>>>> void dispose(); >>>>> } >>>>> >>>>> Si ++ >>>>> >>>>> >>>>> On 25 February 2013 10:44, Kirill Prazdnikov < >>>>> kirill.prazdnikov...@> wrote: >>>>> >>>>>> ** >>>>>> Hi, >>>>>> >>>>>> interface ArrayBuffer { >>>>>> ArrayBuffer dispose(); >>>>>> } >>>>>> >>>>>> >>>>>> What is the purpose of return value ? >>>>>> >>>>>> Thanks >>>>>> >>>>>> On 2/24/2013 9:39 PM, Si Robertson wrote: >>>>>> >>>>>> Hi, >>>>>> >>>>>> I have been studying, and experimenting with, the WebGL API and the >>>>>> Web Audio API recently and have noticed a potential system memory related >>>>>> problem with typed arrays: there is no way to dispose of array buffers when >>>>>> they are no longer needed. >>>>>> >>>>>> When working with WebGL and/or Web Audio, array buffers can consume >>>>>> large amounts of system memory. Some of these buffers can be treated as >>>>>> permanent/static buffers and reused during the lifetime of an application, >>>>>> but others are temporary and might only be used during a single function >>>>>> call, or might need to be replaced when additional resources are loaded >>>>>> into the application (e.g. game level assets). >>>>>> >>>>>> It would be extremely useful for programmers if we could explicitly >>>>>> dispose of an array buffer when it is no longer required. Disposing of an >>>>>> array buffer would immediately release any system resources used by the >>>>>> array buffer (e.g. system memory) and reduce the array buffer's >>>>>> length to zero. This would also potentially reduce the amount of >>>>>> work the GC has to do when it eventually decides to clean things up. >>>>>> >>>>>> I am requesting a single method to be added to the ArrayBuffer type: >>>>>> >>>>>> interface ArrayBuffer { >>>>>> ArrayBuffer dispose(); >>>>>> } >>>>>> >>>>>> Most of the other programming languages I use allow these types of >>>>>> arrays to be disposed, for good reasons :) >>>>>> >>>>>> Regards, >>>>>> Si Robertson >>>>>> >>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ret...@ Mon Feb 25 11:48:48 2013 From: ret...@ (Si Robertson) Date: Mon, 25 Feb 2013 19:48:48 +0000 Subject: [Public WebGL] [Typed Array] ArrayBuffer Method Request - Dispose In-Reply-To: References: <512B408C.5030304@jetbrains.com> Message-ID: That's not always possible do to. If you are loading data (consider audio samples) into an application via XHR, the data arrives in a new array buffer which is then immediately copied over to an audio buffer. 
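The load-and-copy pattern being described looks roughly like this (a sketch: the URL and playSample are placeholders, and decodeAudioData is the Web Audio call that copies the decoded data into an AudioBuffer):

    var ctx = new (window.AudioContext || window.webkitAudioContext)();
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'sounds/level-one.ogg', true);   // placeholder URL
    xhr.responseType = 'arraybuffer';
    xhr.onload = function () {
        // xhr.response is a freshly allocated ArrayBuffer owned by the XHR.
        ctx.decodeAudioData(xhr.response, function (audioBuffer) {
            // The decoded copy now lives in audioBuffer; the ArrayBuffer in
            // xhr.response is no longer needed by the application.
            playSample(audioBuffer);                 // placeholder application code
        });
    };
    xhr.send();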
At that point the array buffer becomes useless and is normally dereferenced at the end of the XHR load event, and all of that data ends up floating around in a void until the GC eventually decides to clear it from memory, which is more often than not at a very inconvenient time :) On 25 February 2013 19:36, Ashley Gullen wrote: > Why not just recycle existing array buffers? Create one big one, then > create and throw away views on to that buffer, which are small and cheap > objects. > > > > On 25 February 2013 19:30, Si Robertson wrote: > >> Yep, the problem isn't memory not being released, the problem is memory >> being released "at some point in the future" by the GC. As mentioned >> previously, array buffers can consume large amounts of system memory >> especially when working with WebGL and Audio Context at an advanced level, >> and an application cannot currently dispose of any array buffers even when >> it knows those array buffers will no longer be needed. >> >> What we are looking at right now is large amounts of system memory being >> used when there is absolutely no reason for it to be in use. If a game, for >> example, no longer needs the assets for "level one" when "level two" is >> loaded into memory, the game should be able to dispose of the level one >> assets before the level two assets are loaded into memory. >> >> We most definitely need some kind of memory management control with these >> new APIs. Adding a dispose() method to array buffers will be an ideal >> solution for most use cases. >> >> Si ++ >> >> >> On 25 February 2013 19:00, Jussi Kalliokoski > > wrote: >> >>> On Mon, Feb 25, 2013 at 1:32 PM, Gregg Tavares wrote: >>> >>>> typed arrays are garbage collected just like all JS objects. You can >>>> just as easily make large strings or large JavaScript arrays. JavaScript >>>> will release any unreferenced typed array on it's own just fine. If you >>>> have a reproducible case where a typed array is not getting released file a >>>> bug for that browser. >>>> >>> >>> The problem isn't that the memory isn't released, quite the contrary, >>> the problem is that the memory is often released at the wrong time, e.g. in >>> the middle of filling a buffer with audio (if you fail at filling the >>> buffer quickly enough, your sounds won't make it to the speakers) or during >>> a drawing operation, causing jitter in the frame rate. Having a manual >>> dispose function to free the buffer would for example let the developer >>> free the memory for example after the buffer is filled with audio, thus >>> reducing the risk of artifacts in the sound. >>> >>> Cheers, >>> Jussi >>> >>> >>>> >>>> On Mon, Feb 25, 2013 at 5:12 AM, Si Robertson wrote: >>>> >>>>> Sorry, my mistake. The method obviously shouldn't have a return value. >>>>> >>>>> interface ArrayBuffer { >>>>> void dispose(); >>>>> } >>>>> >>>>> Si ++ >>>>> >>>>> >>>>> On 25 February 2013 10:44, Kirill Prazdnikov < >>>>> kirill.prazdnikov...@> wrote: >>>>> >>>>>> ** >>>>>> Hi, >>>>>> >>>>>> interface ArrayBuffer { >>>>>> ArrayBuffer dispose(); >>>>>> } >>>>>> >>>>>> >>>>>> What is the purpose of return value ? >>>>>> >>>>>> Thanks >>>>>> >>>>>> On 2/24/2013 9:39 PM, Si Robertson wrote: >>>>>> >>>>>> Hi, >>>>>> >>>>>> I have been studying, and experimenting with, the WebGL API and the >>>>>> Web Audio API recently and have noticed a potential system memory related >>>>>> problem with typed arrays: there is no way to dispose of array buffers when >>>>>> they are no longer needed. 
>>>>>> >>>>>> When working with WebGL and/or Web Audio, array buffers can consume >>>>>> large amounts of system memory. Some of these buffers can be treated as >>>>>> permanent/static buffers and reused during the lifetime of an application, >>>>>> but others are temporary and might only be used during a single function >>>>>> call, or might need to be replaced when additional resources are loaded >>>>>> into the application (e.g. game level assets). >>>>>> >>>>>> It would be extremely useful for programmers if we could explicitly >>>>>> dispose of an array buffer when it is no longer required. Disposing of an >>>>>> array buffer would immediately release any system resources used by the >>>>>> array buffer (e.g. system memory) and reduce the array buffer's >>>>>> length to zero. This would also potentially reduce the amount of >>>>>> work the GC has to do when it eventually decides to clean things up. >>>>>> >>>>>> I am requesting a single method to be added to the ArrayBuffer type: >>>>>> >>>>>> interface ArrayBuffer { >>>>>> ArrayBuffer dispose(); >>>>>> } >>>>>> >>>>>> Most of the other programming languages I use allow these types of >>>>>> arrays to be disposed, for good reasons :) >>>>>> >>>>>> Regards, >>>>>> Si Robertson >>>>>> >>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Mon Feb 25 11:53:09 2013 From: gma...@ (Gregg Tavares) Date: Mon, 25 Feb 2013 11:53:09 -0800 Subject: [Public WebGL] [Typed Array] ArrayBuffer Method Request - Dispose In-Reply-To: References: <512B408C.5030304@jetbrains.com> Message-ID: On Mon, Feb 25, 2013 at 11:20 AM, Jussi Kalliokoski < jussi.kalliokoski...@> wrote: > On Mon, Feb 25, 2013 at 2:16 PM, Gregg Tavares wrote: > >> >> >> >> On Mon, Feb 25, 2013 at 11:00 AM, Jussi Kalliokoski < >> jussi.kalliokoski...@> wrote: >> >>> On Mon, Feb 25, 2013 at 1:32 PM, Gregg Tavares wrote: >>> >>>> typed arrays are garbage collected just like all JS objects. You can >>>> just as easily make large strings or large JavaScript arrays. JavaScript >>>> will release any unreferenced typed array on it's own just fine. If you >>>> have a reproducible case where a typed array is not getting released file a >>>> bug for that browser. >>>> >>> >>> The problem isn't that the memory isn't released, quite the contrary, >>> the problem is that the memory is often released at the wrong time, e.g. in >>> the middle of filling a buffer with audio (if you fail at filling the >>> buffer quickly enough, your sounds won't make it to the speakers) or during >>> a drawing operation, causing jitter in the frame rate. Having a manual >>> dispose function to free the buffer would for example let the developer >>> free the memory for example after the buffer is filled with audio, thus >>> reducing the risk of artifacts in the sound. >>> >> >> You're making a huge assumption that calling dispose would some how >> magically be (a) fast and (b) effect allocation speed later. >> > > What makes you think so? > > a) No, I don't expect it to be any faster than garbage collection. > b) What? Where did you get this? > > The point is that the deallocation, however slow or fast, happens at a > suitable time. > My point is your proposal of dispose assumes that dispose is implemented as you imagine it. 
It would be just as easy to implement dispose as ArrayBuffer::dispose() { addMemoryThatArrayBufferIsUsingToSomeGarbageCollectionListThatWillBeUsedSometimeInTheFuture(m_data, m_size); m_data = NULL; m_size = 0; }; Your problem has not been solved by dispose. The GC issues are being worked on. For the time being there are sucky points. Make small sample repos and file bugs. Note: I hate GC as much as the next guy. I'm a C++ guy. But adding dispose isn't going to help IMO. > > Cheers, > Jussi > > >> >>> Cheers, >>> Jussi >>> >>> >>>> >>>> On Mon, Feb 25, 2013 at 5:12 AM, Si Robertson wrote: >>>> >>>>> Sorry, my mistake. The method obviously shouldn't have a return value. >>>>> >>>>> interface ArrayBuffer { >>>>> void dispose(); >>>>> } >>>>> >>>>> Si ++ >>>>> >>>>> >>>>> On 25 February 2013 10:44, Kirill Prazdnikov < >>>>> kirill.prazdnikov...@> wrote: >>>>> >>>>>> ** >>>>>> Hi, >>>>>> >>>>>> interface ArrayBuffer { >>>>>> ArrayBuffer dispose(); >>>>>> } >>>>>> >>>>>> >>>>>> What is the purpose of return value ? >>>>>> >>>>>> Thanks >>>>>> >>>>>> On 2/24/2013 9:39 PM, Si Robertson wrote: >>>>>> >>>>>> Hi, >>>>>> >>>>>> I have been studying, and experimenting with, the WebGL API and the >>>>>> Web Audio API recently and have noticed a potential system memory related >>>>>> problem with typed arrays: there is no way to dispose of array buffers when >>>>>> they are no longer needed. >>>>>> >>>>>> When working with WebGL and/or Web Audio, array buffers can consume >>>>>> large amounts of system memory. Some of these buffers can be treated as >>>>>> permanent/static buffers and reused during the lifetime of an application, >>>>>> but others are temporary and might only be used during a single function >>>>>> call, or might need to be replaced when additional resources are loaded >>>>>> into the application (e.g. game level assets). >>>>>> >>>>>> It would be extremely useful for programmers if we could explicitly >>>>>> dispose of an array buffer when it is no longer required. Disposing of an >>>>>> array buffer would immediately release any system resources used by the >>>>>> array buffer (e.g. system memory) and reduce the array buffer's >>>>>> length to zero. This would also potentially reduce the amount of >>>>>> work the GC has to do when it eventually decides to clean things up. >>>>>> >>>>>> I am requesting a single method to be added to the ArrayBuffer type: >>>>>> >>>>>> interface ArrayBuffer { >>>>>> ArrayBuffer dispose(); >>>>>> } >>>>>> >>>>>> Most of the other programming languages I use allow these types of >>>>>> arrays to be disposed, for good reasons :) >>>>>> >>>>>> Regards, >>>>>> Si Robertson >>>>>> >>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ret...@ Mon Feb 25 12:01:09 2013 From: ret...@ (Si Robertson) Date: Mon, 25 Feb 2013 20:01:09 +0000 Subject: [Public WebGL] [Typed Array] ArrayBuffer Method Request - Dispose In-Reply-To: References: <512B408C.5030304@jetbrains.com> Message-ID: The thing is, disposing of similar objects in ActionScript 3.0 (for example) releases memory instantly, the AS3 Bitmap and ByteArray objects are a good example, so it is definitely possible to do. Ideally we should be able to force the GC to run whenever we see fit, which is something else that is possible in AS3. That has solved numerous frame rate "glitch" related problems in SWF based games. 
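For comparison, the recycling approach Ashley Gullen suggested earlier in the thread (one large ArrayBuffer with cheap typed-array views carved out of it) can be sketched as follows; the pool size, offsets, and levelTwoBytes are illustrative only:

    // One large backing allocation, created once and reused for the lifetime
    // of the application; sizes here are purely illustrative.
    var pool = new ArrayBuffer(16 * 1024 * 1024);

    // Views are small, cheap objects, so carving them out of the pool avoids
    // allocating (and later garbage collecting) a new ArrayBuffer per asset.
    function viewFor(byteOffset, byteLength) {
        return new Uint8Array(pool, byteOffset, byteLength);
    }

    // When "level one" assets are no longer needed, the same region is simply
    // overwritten with "level two" data instead of being freed.
    var levelRegion = viewFor(0, 8 * 1024 * 1024);
    levelRegion.set(levelTwoBytes);   // levelTwoBytes: placeholder asset data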
On 25 February 2013 19:53, Gregg Tavares wrote: > > > > On Mon, Feb 25, 2013 at 11:20 AM, Jussi Kalliokoski < > jussi.kalliokoski...@> wrote: > >> On Mon, Feb 25, 2013 at 2:16 PM, Gregg Tavares wrote: >> >>> >>> >>> >>> On Mon, Feb 25, 2013 at 11:00 AM, Jussi Kalliokoski < >>> jussi.kalliokoski...@> wrote: >>> >>>> On Mon, Feb 25, 2013 at 1:32 PM, Gregg Tavares wrote: >>>> >>>>> typed arrays are garbage collected just like all JS objects. You can >>>>> just as easily make large strings or large JavaScript arrays. JavaScript >>>>> will release any unreferenced typed array on it's own just fine. If you >>>>> have a reproducible case where a typed array is not getting released file a >>>>> bug for that browser. >>>>> >>>> >>>> The problem isn't that the memory isn't released, quite the contrary, >>>> the problem is that the memory is often released at the wrong time, e.g. in >>>> the middle of filling a buffer with audio (if you fail at filling the >>>> buffer quickly enough, your sounds won't make it to the speakers) or during >>>> a drawing operation, causing jitter in the frame rate. Having a manual >>>> dispose function to free the buffer would for example let the developer >>>> free the memory for example after the buffer is filled with audio, thus >>>> reducing the risk of artifacts in the sound. >>>> >>> >>> You're making a huge assumption that calling dispose would some how >>> magically be (a) fast and (b) effect allocation speed later. >>> >> >> What makes you think so? >> >> a) No, I don't expect it to be any faster than garbage collection. >> b) What? Where did you get this? >> >> The point is that the deallocation, however slow or fast, happens at a >> suitable time. >> > > My point is your proposal of dispose assumes that dispose is implemented > as you imagine it. It would be just as easy to implement dispose as > > ArrayBuffer::dispose() { > > addMemoryThatArrayBufferIsUsingToSomeGarbageCollectionListThatWillBeUsedSometimeInTheFuture(m_data, > m_size); > m_data = NULL; > m_size = 0; > }; > > Your problem has not been solved by dispose. > > The GC issues are being worked on. For the time being there are sucky > points. Make small sample repos and file bugs. > > Note: I hate GC as much as the next guy. I'm a C++ guy. But adding dispose > isn't going to help IMO. > > >> >> Cheers, >> Jussi >> >> >>> >>>> Cheers, >>>> Jussi >>>> >>>> >>>>> >>>>> On Mon, Feb 25, 2013 at 5:12 AM, Si Robertson wrote: >>>>> >>>>>> Sorry, my mistake. The method obviously shouldn't have a return value. >>>>>> >>>>>> interface ArrayBuffer { >>>>>> void dispose(); >>>>>> } >>>>>> >>>>>> Si ++ >>>>>> >>>>>> >>>>>> On 25 February 2013 10:44, Kirill Prazdnikov < >>>>>> kirill.prazdnikov...@> wrote: >>>>>> >>>>>>> ** >>>>>>> Hi, >>>>>>> >>>>>>> interface ArrayBuffer { >>>>>>> ArrayBuffer dispose(); >>>>>>> } >>>>>>> >>>>>>> >>>>>>> What is the purpose of return value ? >>>>>>> >>>>>>> Thanks >>>>>>> >>>>>>> On 2/24/2013 9:39 PM, Si Robertson wrote: >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> I have been studying, and experimenting with, the WebGL API and >>>>>>> the Web Audio API recently and have noticed a potential system memory >>>>>>> related problem with typed arrays: there is no way to dispose of array >>>>>>> buffers when they are no longer needed. >>>>>>> >>>>>>> When working with WebGL and/or Web Audio, array buffers can >>>>>>> consume large amounts of system memory. 
Some of these buffers can be >>>>>>> treated as permanent/static buffers and reused during the lifetime of an >>>>>>> application, but others are temporary and might only be used during a >>>>>>> single function call, or might need to be replaced when additional >>>>>>> resources are loaded into the application (e.g. game level assets). >>>>>>> >>>>>>> It would be extremely useful for programmers if we could >>>>>>> explicitly dispose of an array buffer when it is no longer required. >>>>>>> Disposing of an array buffer would immediately release any system resources >>>>>>> used by the array buffer (e.g. system memory) and reduce the array buffer's >>>>>>> length to zero. This would also potentially reduce the amount of >>>>>>> work the GC has to do when it eventually decides to clean things up. >>>>>>> >>>>>>> I am requesting a single method to be added to the ArrayBuffer >>>>>>> type: >>>>>>> >>>>>>> interface ArrayBuffer { >>>>>>> ArrayBuffer dispose(); >>>>>>> } >>>>>>> >>>>>>> Most of the other programming languages I use allow these types of >>>>>>> arrays to be disposed, for good reasons :) >>>>>>> >>>>>>> Regards, >>>>>>> Si Robertson >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bag...@ Mon Feb 25 12:02:32 2013 From: bag...@ (Patrick Baggett) Date: Mon, 25 Feb 2013 14:02:32 -0600 Subject: [Public WebGL] [Typed Array] ArrayBuffer Method Request - Dispose In-Reply-To: References: <512B408C.5030304@jetbrains.com> Message-ID: On Mon, Feb 25, 2013 at 1:53 PM, Gregg Tavares wrote: > > > > On Mon, Feb 25, 2013 at 11:20 AM, Jussi Kalliokoski < > jussi.kalliokoski...@> wrote: > >> On Mon, Feb 25, 2013 at 2:16 PM, Gregg Tavares wrote: >> >>> >>> >>> >>> On Mon, Feb 25, 2013 at 11:00 AM, Jussi Kalliokoski < >>> jussi.kalliokoski...@> wrote: >>> >>>> On Mon, Feb 25, 2013 at 1:32 PM, Gregg Tavares wrote: >>>> >>>>> typed arrays are garbage collected just like all JS objects. You can >>>>> just as easily make large strings or large JavaScript arrays. JavaScript >>>>> will release any unreferenced typed array on it's own just fine. If you >>>>> have a reproducible case where a typed array is not getting released file a >>>>> bug for that browser. >>>>> >>>> >>>> The problem isn't that the memory isn't released, quite the contrary, >>>> the problem is that the memory is often released at the wrong time, e.g. in >>>> the middle of filling a buffer with audio (if you fail at filling the >>>> buffer quickly enough, your sounds won't make it to the speakers) or during >>>> a drawing operation, causing jitter in the frame rate. Having a manual >>>> dispose function to free the buffer would for example let the developer >>>> free the memory for example after the buffer is filled with audio, thus >>>> reducing the risk of artifacts in the sound. >>>> >>> >>> You're making a huge assumption that calling dispose would some how >>> magically be (a) fast and (b) effect allocation speed later. >>> >> >> What makes you think so? >> >> a) No, I don't expect it to be any faster than garbage collection. >> b) What? Where did you get this? >> >> The point is that the deallocation, however slow or fast, happens at a >> suitable time. >> > > My point is your proposal of dispose assumes that dispose is implemented > as you imagine it. 
It would be just as easy to implement dispose as > > ArrayBuffer::dispose() { > > addMemoryThatArrayBufferIsUsingToSomeGarbageCollectionListThatWillBeUsedSometimeInTheFuture(m_data, > m_size); > m_data = NULL; > m_size = 0; > }; > > This neatly explains your fears. Ack. It seems like the only way to solve this is to force the semantic of "collect it now; this is not a hint, this is a command." That is somewhat untestable in the PASS/FAIL sense of course. > Your problem has not been solved by dispose. > I'm guessing that forcing the semantic of "collect it now" is not a viable/good option? Patrick -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Mon Feb 25 12:27:05 2013 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Mon, 25 Feb 2013 21:27:05 +0100 Subject: [Public WebGL] [Typed Array] ArrayBuffer Method Request - Dispose In-Reply-To: References: <512B408C.5030304@jetbrains.com> Message-ID: The issues with the GC are never going to go away. They might improve, but they're not gonna go away. Dispose isn't going to make them go away either. Effectively what people want is to manage their own memory, and we can do that, almost. Array buffers can in fact be used much like a memory pool (in conjunction with views for instance). As has been noted, the biggest obstacle in that is that there are APIs that produce array buffers instead of taking a "pointer" to fill. However, even if you made XHR fill a buffer rather than produce one, there'd need to be a way to deal with what happens when the XHR would fetch more data than the buffer can hold (in C you solve that simply by copying the data somewhere else progressively as you read them in from a file or a socket). On Mon, Feb 25, 2013 at 9:02 PM, Patrick Baggett wrote: > > > On Mon, Feb 25, 2013 at 1:53 PM, Gregg Tavares wrote: > >> >> >> >> On Mon, Feb 25, 2013 at 11:20 AM, Jussi Kalliokoski < >> jussi.kalliokoski...@> wrote: >> >>> On Mon, Feb 25, 2013 at 2:16 PM, Gregg Tavares wrote: >>> >>>> >>>> >>>> >>>> On Mon, Feb 25, 2013 at 11:00 AM, Jussi Kalliokoski < >>>> jussi.kalliokoski...@> wrote: >>>> >>>>> On Mon, Feb 25, 2013 at 1:32 PM, Gregg Tavares wrote: >>>>> >>>>>> typed arrays are garbage collected just like all JS objects. You can >>>>>> just as easily make large strings or large JavaScript arrays. JavaScript >>>>>> will release any unreferenced typed array on it's own just fine. If you >>>>>> have a reproducible case where a typed array is not getting released file a >>>>>> bug for that browser. >>>>>> >>>>> >>>>> The problem isn't that the memory isn't released, quite the contrary, >>>>> the problem is that the memory is often released at the wrong time, e.g. in >>>>> the middle of filling a buffer with audio (if you fail at filling the >>>>> buffer quickly enough, your sounds won't make it to the speakers) or during >>>>> a drawing operation, causing jitter in the frame rate. Having a manual >>>>> dispose function to free the buffer would for example let the developer >>>>> free the memory for example after the buffer is filled with audio, thus >>>>> reducing the risk of artifacts in the sound. >>>>> >>>> >>>> You're making a huge assumption that calling dispose would some how >>>> magically be (a) fast and (b) effect allocation speed later. >>>> >>> >>> What makes you think so? >>> >>> a) No, I don't expect it to be any faster than garbage collection. >>> b) What? Where did you get this? >>> >>> The point is that the deallocation, however slow or fast, happens at a >>> suitable time. 
>>> >> >> My point is your proposal of dispose assumes that dispose is implemented >> as you imagine it. It would be just as easy to implement dispose as >> >> ArrayBuffer::dispose() { >> >> addMemoryThatArrayBufferIsUsingToSomeGarbageCollectionListThatWillBeUsedSometimeInTheFuture(m_data, >> m_size); >> m_data = NULL; >> m_size = 0; >> }; >> >> > This neatly explains your fears. Ack. It seems like the only way to solve > this is to force the semantic of "collect it now; this is not a hint, this > is a command." That is somewhat untestable in the PASS/FAIL sense of course. > > >> Your problem has not been solved by dispose. >> > > I'm guessing that forcing the semantic of "collect it now" is not a > viable/good option? > > Patrick > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jus...@ Mon Feb 25 13:00:09 2013 From: jus...@ (Jussi Kalliokoski) Date: Mon, 25 Feb 2013 16:00:09 -0500 Subject: [Public WebGL] [Typed Array] ArrayBuffer Method Request - Dispose In-Reply-To: References: <512B408C.5030304@jetbrains.com> Message-ID: On Mon, Feb 25, 2013 at 2:53 PM, Gregg Tavares wrote: > > > > On Mon, Feb 25, 2013 at 11:20 AM, Jussi Kalliokoski < > jussi.kalliokoski...@> wrote: > >> On Mon, Feb 25, 2013 at 2:16 PM, Gregg Tavares wrote: >> >>> >>> >>> >>> On Mon, Feb 25, 2013 at 11:00 AM, Jussi Kalliokoski < >>> jussi.kalliokoski...@> wrote: >>> >>>> On Mon, Feb 25, 2013 at 1:32 PM, Gregg Tavares wrote: >>>> >>>>> typed arrays are garbage collected just like all JS objects. You can >>>>> just as easily make large strings or large JavaScript arrays. JavaScript >>>>> will release any unreferenced typed array on it's own just fine. If you >>>>> have a reproducible case where a typed array is not getting released file a >>>>> bug for that browser. >>>>> >>>> >>>> The problem isn't that the memory isn't released, quite the contrary, >>>> the problem is that the memory is often released at the wrong time, e.g. in >>>> the middle of filling a buffer with audio (if you fail at filling the >>>> buffer quickly enough, your sounds won't make it to the speakers) or during >>>> a drawing operation, causing jitter in the frame rate. Having a manual >>>> dispose function to free the buffer would for example let the developer >>>> free the memory for example after the buffer is filled with audio, thus >>>> reducing the risk of artifacts in the sound. >>>> >>> >>> You're making a huge assumption that calling dispose would some how >>> magically be (a) fast and (b) effect allocation speed later. >>> >> >> What makes you think so? >> >> a) No, I don't expect it to be any faster than garbage collection. >> b) What? Where did you get this? >> >> The point is that the deallocation, however slow or fast, happens at a >> suitable time. >> > > My point is your proposal of dispose assumes that dispose is implemented > as you imagine it. It would be just as easy to implement dispose as > > ArrayBuffer::dispose() { > > addMemoryThatArrayBufferIsUsingToSomeGarbageCollectionListThatWillBeUsedSometimeInTheFuture(m_data, > m_size); > m_data = NULL; > m_size = 0; > }; > > Your problem has not been solved by dispose. > Why would an implementation do that? If we mandate that the memory be freed, then implementations can always decide to violate the specification, but what's the point of standardizing anything at all then? :D Of course problems don't become solved if implementations don't do what they're supposed to. > The GC issues are being worked on. For the time being there are sucky > points. 
Make small sample repos and file bugs. > What's the point of expecting so much intelligence from the GC? It's not reasonable to expect the GC to handle all kinds of corner cases (performance-critical code often happens to be quite corner-case) well. The developer can be equipped with the tools to help the GC. It's not even like this would be adding something that wasn't already possible to achieve, or changing the whole semantics of the web platform (like for example adding a `free` operator to JS would), it would give you a lot better way to do it. I made a gist [1] to demonstrate the claim above. The gist basically contains a polyfill for the proposed feature and an example for it. From the application's point of view, the array is disposed of. Of course, it's a crappy hack that has to create a new worker context just to dump unneeded data to, and it doesn't actually free the memory (although if we're lucky, the GC hit will be suffered only in the worker thread). But that's why we need a better way to do this. Cheers, Jussi [1] https://gist.github.com/jussi-kalliokoski/5033123 Note: I hate GC as much as the next guy. I'm a C++ guy. But adding dispose > isn't going to help IMO. > > >> >> Cheers, >> Jussi >> >> >>> >>>> Cheers, >>>> Jussi >>>> >>>> >>>>> >>>>> On Mon, Feb 25, 2013 at 5:12 AM, Si Robertson wrote: >>>>> >>>>>> Sorry, my mistake. The method obviously shouldn't have a return value. >>>>>> >>>>>> interface ArrayBuffer { >>>>>> void dispose(); >>>>>> } >>>>>> >>>>>> Si ++ >>>>>> >>>>>> >>>>>> On 25 February 2013 10:44, Kirill Prazdnikov < >>>>>> kirill.prazdnikov...@> wrote: >>>>>> >>>>>>> ** >>>>>>> Hi, >>>>>>> >>>>>>> interface ArrayBuffer { >>>>>>> ArrayBuffer dispose(); >>>>>>> } >>>>>>> >>>>>>> >>>>>>> What is the purpose of return value ? >>>>>>> >>>>>>> Thanks >>>>>>> >>>>>>> On 2/24/2013 9:39 PM, Si Robertson wrote: >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> I have been studying, and experimenting with, the WebGL API and >>>>>>> the Web Audio API recently and have noticed a potential system memory >>>>>>> related problem with typed arrays: there is no way to dispose of array >>>>>>> buffers when they are no longer needed. >>>>>>> >>>>>>> When working with WebGL and/or Web Audio, array buffers can >>>>>>> consume large amounts of system memory. Some of these buffers can be >>>>>>> treated as permanent/static buffers and reused during the lifetime of an >>>>>>> application, but others are temporary and might only be used during a >>>>>>> single function call, or might need to be replaced when additional >>>>>>> resources are loaded into the application (e.g. game level assets). >>>>>>> >>>>>>> It would be extremely useful for programmers if we could >>>>>>> explicitly dispose of an array buffer when it is no longer required. >>>>>>> Disposing of an array buffer would immediately release any system resources >>>>>>> used by the array buffer (e.g. system memory) and reduce the array buffer's >>>>>>> length to zero. This would also potentially reduce the amount of >>>>>>> work the GC has to do when it eventually decides to clean things up. 
>>>>>>> >>>>>>> I am requesting a single method to be added to the ArrayBuffer >>>>>>> type: >>>>>>> >>>>>>> interface ArrayBuffer { >>>>>>> ArrayBuffer dispose(); >>>>>>> } >>>>>>> >>>>>>> Most of the other programming languages I use allow these types of >>>>>>> arrays to be disposed, for good reasons :) >>>>>>> >>>>>>> Regards, >>>>>>> Si Robertson >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Mon Feb 25 13:03:32 2013 From: kbr...@ (Kenneth Russell) Date: Mon, 25 Feb 2013 13:03:32 -0800 Subject: [Public WebGL] [Typed Array] ArrayBuffer Method Request - Dispose In-Reply-To: References: <512B408C.5030304@jetbrains.com> Message-ID: There is much historical experience with explicitly forcing garbage collection in the context of Java and its System.gc() call. Essentially, all evidence indicates that is not a good idea to expose this primitive to applications. In basically every situation where a Java application called this method, performance was increased by *not* calling it. Forcing a full garbage collection at any point in time defeats advanced techniques like generational, incremental, and concurrent garbage collection. Work is ongoing in multiple areas to improve performance of typed arrays, GC, and JavaScript in general. In the V8 JavaScript engine, typed arrays will be allocated out of the JavaScript heap, instead of allocating their storage externally. This will both improve the speed of allocating typed arrays and views (which are slow operations right now), and improve their behavior under garbage collection. Work is also underway in V8 to implement a concurrent garbage collector, the goal of which is to eliminate GC pauses. Mozilla is also pushing forward asm.js ( http://asmjs.org/ ), a subset of JavaScript which, if used as a compilation target, would not need to do any object allocations and thereby would not need to invoke the garbage collector. To summarize, I don't support adding a dispose() method to typed arrays at this time. Instead I think the focus should be on further optimizing current implementations. There are known performance gaps and those should be addressed before adding more APIs. If you have real-world examples or benchmarks showing that GC of typed array instances is the culprit in jankiness of an application, please make them available somewhere (e.g. on Github). Thanks, -Ken On Mon, Feb 25, 2013 at 12:27 PM, Florian B?sch wrote: > The issues with the GC are never going to go away. They might improve, but > they're not gonna go away. Dispose isn't going to make them go away either. > Effectively what people want is to manage their own memory, and we can do > that, almost. Array buffers can in fact be used much like a memory pool (in > conjunction with views for instance). > > As has been noted, the biggest obstacle in that is that there are APIs that > produce array buffers instead of taking a "pointer" to fill. However, even > if you made XHR fill a buffer rather than produce one, there'd need to be a > way to deal with what happens when the XHR would fetch more data than the > buffer can hold (in C you solve that simply by copying the data somewhere > else progressively as you read them in from a file or a socket). 
> > > On Mon, Feb 25, 2013 at 9:02 PM, Patrick Baggett > wrote: >> >> >> >> On Mon, Feb 25, 2013 at 1:53 PM, Gregg Tavares wrote: >>> >>> >>> >>> >>> On Mon, Feb 25, 2013 at 11:20 AM, Jussi Kalliokoski >>> wrote: >>>> >>>> On Mon, Feb 25, 2013 at 2:16 PM, Gregg Tavares wrote: >>>>> >>>>> >>>>> >>>>> >>>>> On Mon, Feb 25, 2013 at 11:00 AM, Jussi Kalliokoski >>>>> wrote: >>>>>> >>>>>> On Mon, Feb 25, 2013 at 1:32 PM, Gregg Tavares >>>>>> wrote: >>>>>>> >>>>>>> typed arrays are garbage collected just like all JS objects. You can >>>>>>> just as easily make large strings or large JavaScript arrays. JavaScript >>>>>>> will release any unreferenced typed array on it's own just fine. If you have >>>>>>> a reproducible case where a typed array is not getting released file a bug >>>>>>> for that browser. >>>>>> >>>>>> >>>>>> The problem isn't that the memory isn't released, quite the contrary, >>>>>> the problem is that the memory is often released at the wrong time, e.g. in >>>>>> the middle of filling a buffer with audio (if you fail at filling the buffer >>>>>> quickly enough, your sounds won't make it to the speakers) or during a >>>>>> drawing operation, causing jitter in the frame rate. Having a manual dispose >>>>>> function to free the buffer would for example let the developer free the >>>>>> memory for example after the buffer is filled with audio, thus reducing the >>>>>> risk of artifacts in the sound. >>>>> >>>>> >>>>> You're making a huge assumption that calling dispose would some how >>>>> magically be (a) fast and (b) effect allocation speed later. >>>> >>>> >>>> What makes you think so? >>>> >>>> a) No, I don't expect it to be any faster than garbage collection. >>>> b) What? Where did you get this? >>>> >>>> The point is that the deallocation, however slow or fast, happens at a >>>> suitable time. >>> >>> >>> My point is your proposal of dispose assumes that dispose is implemented >>> as you imagine it. It would be just as easy to implement dispose as >>> >>> ArrayBuffer::dispose() { >>> >>> addMemoryThatArrayBufferIsUsingToSomeGarbageCollectionListThatWillBeUsedSometimeInTheFuture(m_data, >>> m_size); >>> m_data = NULL; >>> m_size = 0; >>> }; >>> >> >> This neatly explains your fears. Ack. It seems like the only way to solve >> this is to force the semantic of "collect it now; this is not a hint, this >> is a command." That is somewhat untestable in the PASS/FAIL sense of course. >> >>> >>> Your problem has not been solved by dispose. >> >> >> I'm guessing that forcing the semantic of "collect it now" is not a >> viable/good option? 
>> >> Patrick > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gma...@ Mon Feb 25 13:04:38 2013 From: gma...@ (Gregg Tavares) Date: Mon, 25 Feb 2013 13:04:38 -0800 Subject: [Public WebGL] [Typed Array] ArrayBuffer Method Request - Dispose In-Reply-To: References: <512B408C.5030304@jetbrains.com> Message-ID: On Mon, Feb 25, 2013 at 1:00 PM, Jussi Kalliokoski < jussi.kalliokoski...@> wrote: > On Mon, Feb 25, 2013 at 2:53 PM, Gregg Tavares wrote: > >> >> >> >> On Mon, Feb 25, 2013 at 11:20 AM, Jussi Kalliokoski < >> jussi.kalliokoski...@> wrote: >> >>> On Mon, Feb 25, 2013 at 2:16 PM, Gregg Tavares wrote: >>> >>>> >>>> >>>> >>>> On Mon, Feb 25, 2013 at 11:00 AM, Jussi Kalliokoski < >>>> jussi.kalliokoski...@> wrote: >>>> >>>>> On Mon, Feb 25, 2013 at 1:32 PM, Gregg Tavares wrote: >>>>> >>>>>> typed arrays are garbage collected just like all JS objects. You can >>>>>> just as easily make large strings or large JavaScript arrays. JavaScript >>>>>> will release any unreferenced typed array on it's own just fine. If you >>>>>> have a reproducible case where a typed array is not getting released file a >>>>>> bug for that browser. >>>>>> >>>>> >>>>> The problem isn't that the memory isn't released, quite the contrary, >>>>> the problem is that the memory is often released at the wrong time, e.g. in >>>>> the middle of filling a buffer with audio (if you fail at filling the >>>>> buffer quickly enough, your sounds won't make it to the speakers) or during >>>>> a drawing operation, causing jitter in the frame rate. Having a manual >>>>> dispose function to free the buffer would for example let the developer >>>>> free the memory for example after the buffer is filled with audio, thus >>>>> reducing the risk of artifacts in the sound. >>>>> >>>> >>>> You're making a huge assumption that calling dispose would some how >>>> magically be (a) fast and (b) effect allocation speed later. >>>> >>> >>> What makes you think so? >>> >>> a) No, I don't expect it to be any faster than garbage collection. >>> b) What? Where did you get this? >>> >>> The point is that the deallocation, however slow or fast, happens at a >>> suitable time. >>> >> >> My point is your proposal of dispose assumes that dispose is implemented >> as you imagine it. It would be just as easy to implement dispose as >> >> ArrayBuffer::dispose() { >> >> addMemoryThatArrayBufferIsUsingToSomeGarbageCollectionListThatWillBeUsedSometimeInTheFuture(m_data, >> m_size); >> m_data = NULL; >> m_size = 0; >> }; >> >> Your problem has not been solved by dispose. >> > > Why would an implementation do that? If we mandate that the memory be > freed, then implementations can always decide to violate the specification, > but what's the point of standardizing anything at all then? :D Of course > problems don't become solved if implementations don't do what they're > supposed to. > There are all kinds of allocation systems. Your expectation for example might be that if you free a 1000 byte buffer you can then allocate 2 500 byte buffers. But if the underlying system is using a bucketed allocator like TCMalloc or if it's using in place memory tracking your assumption would be false and you'll still see GC pauses when memory has to be shuffled to make space. 
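The closest workaround available without new API is the one Jussi's gist describes earlier in the thread: transfer the buffer to a throwaway worker, which neuters it in the sending context. A minimal sketch of that idea follows (the actual gist may be structured differently, and as noted it gives no guarantee the memory is released any sooner):

    // A throwaway worker that receives buffers and simply drops them.
    var sinkSrc = 'self.onmessage = function () { /* drop the buffer */ };';
    var sink = new Worker(URL.createObjectURL(new Blob([sinkSrc])));

    function pseudoDispose(buffer) {
        // Listing the buffer in the transfer list neuters it here:
        // buffer.byteLength becomes 0 and its views become unusable, which at
        // least prevents accidental reuse. When the memory is actually
        // reclaimed is still entirely up to the engine.
        sink.postMessage(buffer, [buffer]);
    }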
There's no way the spec for 'dispose' can guarantee any kind of perf. > > >> The GC issues are being worked on. For the time being there are sucky >> points. Make small sample repos and file bugs. >> > > What's the point of expecting so much intelligence from the GC? It's not > reasonable to expect the GC to handle all kinds of corner cases > (performance-critical code often happens to be quite corner-case) well. The > developer can be equipped with the tools to help the GC. It's not even like > this would be adding something that wasn't already possible to achieve, or > changing the whole semantics of the web platform (like for example adding a > `free` operator to JS would), it would give you a lot better way to do it. > > I made a gist [1] to demonstrate the claim above. The gist basically > contains a polyfill for the proposed feature and an example for it. From > the application's point of view, the array is disposed of. Of course, it's > a crappy hack that has to create a new worker context just to dump unneeded > data to, and it doesn't actually free the memory (although if we're lucky, > the GC hit will be suffered only in the worker thread). But that's why we > need a better way to do this. > > Cheers, > Jussi > > [1] https://gist.github.com/jussi-kalliokoski/5033123 > > Note: I hate GC as much as the next guy. I'm a C++ guy. But adding dispose >> isn't going to help IMO. >> >> >>> >>> Cheers, >>> Jussi >>> >>> >>>> >>>>> Cheers, >>>>> Jussi >>>>> >>>>> >>>>>> >>>>>> On Mon, Feb 25, 2013 at 5:12 AM, Si Robertson >>>>> > wrote: >>>>>> >>>>>>> Sorry, my mistake. The method obviously shouldn't have a return >>>>>>> value. >>>>>>> >>>>>>> interface ArrayBuffer { >>>>>>> void dispose(); >>>>>>> } >>>>>>> >>>>>>> Si ++ >>>>>>> >>>>>>> >>>>>>> On 25 February 2013 10:44, Kirill Prazdnikov < >>>>>>> kirill.prazdnikov...@> wrote: >>>>>>> >>>>>>>> ** >>>>>>>> Hi, >>>>>>>> >>>>>>>> interface ArrayBuffer { >>>>>>>> ArrayBuffer dispose(); >>>>>>>> } >>>>>>>> >>>>>>>> >>>>>>>> What is the purpose of return value ? >>>>>>>> >>>>>>>> Thanks >>>>>>>> >>>>>>>> On 2/24/2013 9:39 PM, Si Robertson wrote: >>>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> I have been studying, and experimenting with, the WebGL API and >>>>>>>> the Web Audio API recently and have noticed a potential system memory >>>>>>>> related problem with typed arrays: there is no way to dispose of array >>>>>>>> buffers when they are no longer needed. >>>>>>>> >>>>>>>> When working with WebGL and/or Web Audio, array buffers can >>>>>>>> consume large amounts of system memory. Some of these buffers can be >>>>>>>> treated as permanent/static buffers and reused during the lifetime of an >>>>>>>> application, but others are temporary and might only be used during a >>>>>>>> single function call, or might need to be replaced when additional >>>>>>>> resources are loaded into the application (e.g. game level assets). >>>>>>>> >>>>>>>> It would be extremely useful for programmers if we could >>>>>>>> explicitly dispose of an array buffer when it is no longer required. >>>>>>>> Disposing of an array buffer would immediately release any system resources >>>>>>>> used by the array buffer (e.g. system memory) and reduce the array buffer's >>>>>>>> length to zero. This would also potentially reduce the amount of >>>>>>>> work the GC has to do when it eventually decides to clean things up. 
>>>>>>>> >>>>>>>> I am requesting a single method to be added to the ArrayBuffer >>>>>>>> type: >>>>>>>> >>>>>>>> interface ArrayBuffer { >>>>>>>> ArrayBuffer dispose(); >>>>>>>> } >>>>>>>> >>>>>>>> Most of the other programming languages I use allow these types >>>>>>>> of arrays to be disposed, for good reasons :) >>>>>>>> >>>>>>>> Regards, >>>>>>>> Si Robertson >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jus...@ Mon Feb 25 22:01:28 2013 From: jus...@ (Jussi Kalliokoski) Date: Tue, 26 Feb 2013 01:01:28 -0500 Subject: [Public WebGL] [Typed Array] ArrayBuffer Method Request - Dispose In-Reply-To: References: <512B408C.5030304@jetbrains.com> Message-ID: On Mon, Feb 25, 2013 at 4:03 PM, Kenneth Russell wrote: > There is much historical experience with explicitly forcing garbage > collection in the context of Java and its System.gc() call. > Essentially, all evidence indicates that is not a good idea to expose > this primitive to applications. In basically every situation where a > Java application called this method, performance was increased by > *not* calling it. Forcing a full garbage collection at any point in > time defeats advanced techniques like generational, incremental, and > concurrent garbage collection. > That's a very good counter-example, yeah. > Work is ongoing in multiple areas to improve performance of typed > arrays, GC, and JavaScript in general. In the V8 JavaScript engine, > typed arrays will be allocated out of the JavaScript heap, instead of > allocating their storage externally. This will both improve the speed > of allocating typed arrays and views (which are slow operations right > now), and improve their behavior under garbage collection. Work is > also underway in V8 to implement a concurrent garbage collector, the > goal of which is to eliminate GC pauses. > That's excellent news! > Mozilla is also pushing forward asm.js ( http://asmjs.org/ ), a subset > of JavaScript which, if used as a compilation target, would not need > to do any object allocations and thereby would not need to invoke the > garbage collector. > Yeah, I've read about it, a pretty interesting project. > To summarize, I don't support adding a dispose() method to typed > arrays at this time. Instead I think the focus should be on further > optimizing current implementations. There are known performance gaps > and those should be addressed before adding more APIs. > > If you have real-world examples or benchmarks showing that GC of typed > array instances is the culprit in jankiness of an application, please > make them available somewhere (e.g. on Github). > To be honest, the most problems I've had is with code so old that it probably has the GC issues coming just from generally bad approaches so I doubt it'll be worth people's time to make them analyze the code. But I will do so when I come up with something more recent. Thanks for your answer, Kenneth. You're right; it's probably best to wait and see how far we can push the GC before giving developers more mental overhead. Cheers, Jussi > Thanks, > > -Ken > > > > On Mon, Feb 25, 2013 at 12:27 PM, Florian B?sch wrote: > > The issues with the GC are never going to go away. They might improve, > but > > they're not gonna go away. Dispose isn't going to make them go away > either. > > Effectively what people want is to manage their own memory, and we can do > > that, almost. 
Array buffers can in fact be used much like a memory pool > (in > > conjunction with views for instance). > > > > As has been noted, the biggest obstacle in that is that there are APIs > that > > produce array buffers instead of taking a "pointer" to fill. However, > even > > if you made XHR fill a buffer rather than produce one, there'd need to > be a > > way to deal with what happens when the XHR would fetch more data than the > > buffer can hold (in C you solve that simply by copying the data somewhere > > else progressively as you read them in from a file or a socket). > > > > > > On Mon, Feb 25, 2013 at 9:02 PM, Patrick Baggett < > baggett.patrick...@> > > wrote: > >> > >> > >> > >> On Mon, Feb 25, 2013 at 1:53 PM, Gregg Tavares wrote: > >>> > >>> > >>> > >>> > >>> On Mon, Feb 25, 2013 at 11:20 AM, Jussi Kalliokoski > >>> wrote: > >>>> > >>>> On Mon, Feb 25, 2013 at 2:16 PM, Gregg Tavares > wrote: > >>>>> > >>>>> > >>>>> > >>>>> > >>>>> On Mon, Feb 25, 2013 at 11:00 AM, Jussi Kalliokoski > >>>>> wrote: > >>>>>> > >>>>>> On Mon, Feb 25, 2013 at 1:32 PM, Gregg Tavares > >>>>>> wrote: > >>>>>>> > >>>>>>> typed arrays are garbage collected just like all JS objects. You > can > >>>>>>> just as easily make large strings or large JavaScript arrays. > JavaScript > >>>>>>> will release any unreferenced typed array on it's own just fine. > If you have > >>>>>>> a reproducible case where a typed array is not getting released > file a bug > >>>>>>> for that browser. > >>>>>> > >>>>>> > >>>>>> The problem isn't that the memory isn't released, quite the > contrary, > >>>>>> the problem is that the memory is often released at the wrong time, > e.g. in > >>>>>> the middle of filling a buffer with audio (if you fail at filling > the buffer > >>>>>> quickly enough, your sounds won't make it to the speakers) or > during a > >>>>>> drawing operation, causing jitter in the frame rate. Having a > manual dispose > >>>>>> function to free the buffer would for example let the developer > free the > >>>>>> memory for example after the buffer is filled with audio, thus > reducing the > >>>>>> risk of artifacts in the sound. > >>>>> > >>>>> > >>>>> You're making a huge assumption that calling dispose would some how > >>>>> magically be (a) fast and (b) effect allocation speed later. > >>>> > >>>> > >>>> What makes you think so? > >>>> > >>>> a) No, I don't expect it to be any faster than garbage collection. > >>>> b) What? Where did you get this? > >>>> > >>>> The point is that the deallocation, however slow or fast, happens at a > >>>> suitable time. > >>> > >>> > >>> My point is your proposal of dispose assumes that dispose is > implemented > >>> as you imagine it. It would be just as easy to implement dispose as > >>> > >>> ArrayBuffer::dispose() { > >>> > >>> > addMemoryThatArrayBufferIsUsingToSomeGarbageCollectionListThatWillBeUsedSometimeInTheFuture(m_data, > >>> m_size); > >>> m_data = NULL; > >>> m_size = 0; > >>> }; > >>> > >> > >> This neatly explains your fears. Ack. It seems like the only way to > solve > >> this is to force the semantic of "collect it now; this is not a hint, > this > >> is a command." That is somewhat untestable in the PASS/FAIL sense of > course. > >> > >>> > >>> Your problem has not been solved by dispose. > >> > >> > >> I'm guessing that forcing the semantic of "collect it now" is not a > >> viable/good option? > >> > >> Patrick > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jgi...@ Tue Feb 26 13:11:46 2013 From: jgi...@ (Jeff Gilbert) Date: Tue, 26 Feb 2013 13:11:46 -0800 (PST) Subject: [Public WebGL] bindAttribLocation on _?webgl_ attribs In-Reply-To: <1415479227.3789790.1361911070108.JavaMail.root@mozilla.com> Message-ID: <1772215725.3798931.1361913106272.JavaMail.root@mozilla.com> The test suite has this: gl.bindAttribLocation(program, 0, 'webgl_a') expected: INVALID_OPERATION gl.bindAttribLocation(program, 0, '_webgl_a') expected: INVALID_OPERATION I don't see where this is in the spec. The GLES2 spec has this behavior for attribs starting with 'gl_', but I can't find the language in the WebGL spec which amends this. It seems like this shouldn't be strictly necessary anyways, since we already have this language in the spec: "In addition to the reserved identifiers in the aforementioned specification, identifiers starting with "webgl_" and "_webgl_" are reserved for use by WebGL. A shader which declares a function, variable, structure name, or structure field starting with these prefixes must not be allowed to load." With this restriction, it seems that the user couldn't declare an attrib with such a name anyways, so bindAttribLocation with these prefixes would always do nothing to a shader which meets this restriction, while refusing to 'load' (link? use? This should probably be clarified as well.) shaders which conflict with this passage. It seems like we don't need this restriction, so it should be removed. Is there something I'm missing? -Jeff ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gma...@ Tue Feb 26 14:01:40 2013 From: gma...@ (Gregg Tavares) Date: Tue, 26 Feb 2013 14:01:40 -0800 Subject: [Public WebGL] bindAttribLocation on _?webgl_ attribs In-Reply-To: <1772215725.3798931.1361913106272.JavaMail.root@mozilla.com> References: <1415479227.3789790.1361911070108.JavaMail.root@mozilla.com> <1772215725.3798931.1361913106272.JavaMail.root@mozilla.com> Message-ID: If the WebGL spec is not clear it should be updated. It's trying to match the OpenGL ES spec but add a prefix. Everywhere "gl_" is disallowed WebGL also wants to disallow "webgl_" On Tue, Feb 26, 2013 at 1:11 PM, Jeff Gilbert wrote: > > The test suite has this: > gl.bindAttribLocation(program, 0, 'webgl_a') expected: INVALID_OPERATION > gl.bindAttribLocation(program, 0, '_webgl_a') expected: INVALID_OPERATION > > I don't see where this is in the spec. The GLES2 spec has this behavior > for attribs starting with 'gl_', but I can't find the language in the WebGL > spec which amends this. It seems like this shouldn't be strictly necessary > anyways, since we already have this language in the spec: > > "In addition to the reserved identifiers in the aforementioned > specification, identifiers starting with "webgl_" and "_webgl_" are > reserved for use by WebGL. A shader which declares a function, variable, > structure name, or structure field starting with these prefixes must not be > allowed to load." > > With this restriction, it seems that the user couldn't declare an attrib > with such a name anyways, so bindAttribLocation with these prefixes would > always do nothing to a shader which meets this restriction, while refusing > to 'load' (link? use? This should probably be clarified as well.) 
shaders > which conflict with this passage. > > It seems like we don't need this restriction, so it should be removed. Is > there something I'm missing? > > -Jeff > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Tue Feb 26 14:39:35 2013 From: kbr...@ (Kenneth Russell) Date: Tue, 26 Feb 2013 14:39:35 -0800 Subject: [Public WebGL] bindAttribLocation on _?webgl_ attribs In-Reply-To: References: <1415479227.3789790.1361911070108.JavaMail.root@mozilla.com> <1772215725.3798931.1361913106272.JavaMail.root@mozilla.com> Message-ID: Yes, that's correct. It wouldn't be acceptable if the application developer could call bindAttribLocation passing in an identifier starting with one of the reserved prefixes and have that apply to an attribute allocated internally by the WebGL implementation. Should this sentence be clarified and perhaps separated out into a separate subsection which can be referred to independently of "Supported GLSL Constructs"? -Ken On Tue, Feb 26, 2013 at 2:01 PM, Gregg Tavares wrote: > If the WebGL spec is not clear it should be updated. It's trying to match > the OpenGL ES spec but add a prefix. Everywhere "gl_" is disallowed WebGL > also wants to disallow "webgl_" > > > > > > On Tue, Feb 26, 2013 at 1:11 PM, Jeff Gilbert wrote: >> >> >> The test suite has this: >> gl.bindAttribLocation(program, 0, 'webgl_a') expected: INVALID_OPERATION >> gl.bindAttribLocation(program, 0, '_webgl_a') expected: INVALID_OPERATION >> >> I don't see where this is in the spec. The GLES2 spec has this behavior >> for attribs starting with 'gl_', but I can't find the language in the WebGL >> spec which amends this. It seems like this shouldn't be strictly necessary >> anyways, since we already have this language in the spec: >> >> "In addition to the reserved identifiers in the aforementioned >> specification, identifiers starting with "webgl_" and "_webgl_" are reserved >> for use by WebGL. A shader which declares a function, variable, structure >> name, or structure field starting with these prefixes must not be allowed to >> load." >> >> With this restriction, it seems that the user couldn't declare an attrib >> with such a name anyways, so bindAttribLocation with these prefixes would >> always do nothing to a shader which meets this restriction, while refusing >> to 'load' (link? use? This should probably be clarified as well.) shaders >> which conflict with this passage. >> >> It seems like we don't need this restriction, so it should be removed. Is >> there something I'm missing? 
>> >> -Jeff >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From jgi...@ Tue Feb 26 15:17:03 2013 From: jgi...@ (Jeff Gilbert) Date: Tue, 26 Feb 2013 15:17:03 -0800 (PST) Subject: [Public WebGL] bindAttribLocation on _?webgl_ attribs In-Reply-To: Message-ID: <78242497.3817084.1361920623832.JavaMail.root@mozilla.com> I don't see why it's needed, though. The WebGL impl can easily just drop these bindings on the floor, since there's no possible time when they'd be valid. (All shaders containing them should fail to link, presumedly) I don't see a good reason for keeping this restriction in light of how easy it should be to just ignore such calls. Failing that, we should clarify the language, yes. -Jeff ----- Original Message ----- From: "Kenneth Russell" To: "Gregg Tavares" Cc: "Jeff Gilbert" , "public webgl" Sent: Tuesday, February 26, 2013 2:39:35 PM Subject: Re: [Public WebGL] bindAttribLocation on _?webgl_ attribs Yes, that's correct. It wouldn't be acceptable if the application developer could call bindAttribLocation passing in an identifier starting with one of the reserved prefixes and have that apply to an attribute allocated internally by the WebGL implementation. Should this sentence be clarified and perhaps separated out into a separate subsection which can be referred to independently of "Supported GLSL Constructs"? -Ken On Tue, Feb 26, 2013 at 2:01 PM, Gregg Tavares wrote: > If the WebGL spec is not clear it should be updated. It's trying to match > the OpenGL ES spec but add a prefix. Everywhere "gl_" is disallowed WebGL > also wants to disallow "webgl_" > > > > > > On Tue, Feb 26, 2013 at 1:11 PM, Jeff Gilbert wrote: >> >> >> The test suite has this: >> gl.bindAttribLocation(program, 0, 'webgl_a') expected: INVALID_OPERATION >> gl.bindAttribLocation(program, 0, '_webgl_a') expected: INVALID_OPERATION >> >> I don't see where this is in the spec. The GLES2 spec has this behavior >> for attribs starting with 'gl_', but I can't find the language in the WebGL >> spec which amends this. It seems like this shouldn't be strictly necessary >> anyways, since we already have this language in the spec: >> >> "In addition to the reserved identifiers in the aforementioned >> specification, identifiers starting with "webgl_" and "_webgl_" are reserved >> for use by WebGL. A shader which declares a function, variable, structure >> name, or structure field starting with these prefixes must not be allowed to >> load." >> >> With this restriction, it seems that the user couldn't declare an attrib >> with such a name anyways, so bindAttribLocation with these prefixes would >> always do nothing to a shader which meets this restriction, while refusing >> to 'load' (link? use? This should probably be clarified as well.) shaders >> which conflict with this passage. >> >> It seems like we don't need this restriction, so it should be removed. Is >> there something I'm missing? 
>> >> -Jeff >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Tue Feb 26 15:29:30 2013 From: kbr...@ (Kenneth Russell) Date: Tue, 26 Feb 2013 15:29:30 -0800 Subject: [Public WebGL] bindAttribLocation on _?webgl_ attribs In-Reply-To: <78242497.3817084.1361920623832.JavaMail.root@mozilla.com> References: <78242497.3817084.1361920623832.JavaMail.root@mozilla.com> Message-ID: The manual page for glBindAttribLocation states: GL_INVALID_OPERATION is generated if name starts with the reserved prefix "gl_". WebGL's corresponding entry point should behave the same for WebGL's reserved prefixes, not silently drop the call on the floor. -Ken On Tue, Feb 26, 2013 at 3:17 PM, Jeff Gilbert wrote: > I don't see why it's needed, though. The WebGL impl can easily just drop these bindings on the floor, since there's no possible time when they'd be valid. (All shaders containing them should fail to link, presumedly) > > I don't see a good reason for keeping this restriction in light of how easy it should be to just ignore such calls. Failing that, we should clarify the language, yes. > > -Jeff > > ----- Original Message ----- > From: "Kenneth Russell" > To: "Gregg Tavares" > Cc: "Jeff Gilbert" , "public webgl" > Sent: Tuesday, February 26, 2013 2:39:35 PM > Subject: Re: [Public WebGL] bindAttribLocation on _?webgl_ attribs > > Yes, that's correct. > > It wouldn't be acceptable if the application developer could call > bindAttribLocation passing in an identifier starting with one of the > reserved prefixes and have that apply to an attribute allocated > internally by the WebGL implementation. > > Should this sentence be clarified and perhaps separated out into a > separate subsection which can be referred to independently of > "Supported GLSL Constructs"? > > -Ken > > > > On Tue, Feb 26, 2013 at 2:01 PM, Gregg Tavares wrote: >> If the WebGL spec is not clear it should be updated. It's trying to match >> the OpenGL ES spec but add a prefix. Everywhere "gl_" is disallowed WebGL >> also wants to disallow "webgl_" >> >> >> >> >> >> On Tue, Feb 26, 2013 at 1:11 PM, Jeff Gilbert wrote: >>> >>> >>> The test suite has this: >>> gl.bindAttribLocation(program, 0, 'webgl_a') expected: INVALID_OPERATION >>> gl.bindAttribLocation(program, 0, '_webgl_a') expected: INVALID_OPERATION >>> >>> I don't see where this is in the spec. The GLES2 spec has this behavior >>> for attribs starting with 'gl_', but I can't find the language in the WebGL >>> spec which amends this. It seems like this shouldn't be strictly necessary >>> anyways, since we already have this language in the spec: >>> >>> "In addition to the reserved identifiers in the aforementioned >>> specification, identifiers starting with "webgl_" and "_webgl_" are reserved >>> for use by WebGL. A shader which declares a function, variable, structure >>> name, or structure field starting with these prefixes must not be allowed to >>> load." 
>>> >>> With this restriction, it seems that the user couldn't declare an attrib >>> with such a name anyways, so bindAttribLocation with these prefixes would >>> always do nothing to a shader which meets this restriction, while refusing >>> to 'load' (link? use? This should probably be clarified as well.) shaders >>> which conflict with this passage. >>> >>> It seems like we don't need this restriction, so it should be removed. Is >>> there something I'm missing? >>> >>> -Jeff >>> >>> ----------------------------------------------------------- >>> You are currently subscribed to public_webgl...@ >>> To unsubscribe, send an email to majordomo...@ with >>> the following command in the body of your email: >>> unsubscribe public_webgl >>> ----------------------------------------------------------- >>> >> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From jgi...@ Tue Feb 26 16:47:08 2013 From: jgi...@ (Jeff Gilbert) Date: Tue, 26 Feb 2013 16:47:08 -0800 (PST) Subject: [Public WebGL] bindAttribLocation on _?webgl_ attribs In-Reply-To: Message-ID: <606042301.3849566.1361926028261.JavaMail.root@mozilla.com> We do inherit GL's behavior, but there's no reason to expand WebGL's behavior here without reason, nor should we just s/gl/webgl/ everything. Since restricting it here goes further than what GL mandates, and it doesn't win us anything, it seems unnecessary to restrict this. Further, s/gl/webgl/ here would only add the reserved "webgl_" prefix. I'm not sure why we reserve "_webgl_". -Jeff ----- Original Message ----- From: "Kenneth Russell" To: "Jeff Gilbert" Cc: "public webgl" , "Gregg Tavares" Sent: Tuesday, February 26, 2013 3:29:30 PM Subject: Re: [Public WebGL] bindAttribLocation on _?webgl_ attribs The manual page for glBindAttribLocation states: GL_INVALID_OPERATION is generated if name starts with the reserved prefix "gl_". WebGL's corresponding entry point should behave the same for WebGL's reserved prefixes, not silently drop the call on the floor. -Ken On Tue, Feb 26, 2013 at 3:17 PM, Jeff Gilbert wrote: > I don't see why it's needed, though. The WebGL impl can easily just drop these bindings on the floor, since there's no possible time when they'd be valid. (All shaders containing them should fail to link, presumedly) > > I don't see a good reason for keeping this restriction in light of how easy it should be to just ignore such calls. Failing that, we should clarify the language, yes. > > -Jeff > > ----- Original Message ----- > From: "Kenneth Russell" > To: "Gregg Tavares" > Cc: "Jeff Gilbert" , "public webgl" > Sent: Tuesday, February 26, 2013 2:39:35 PM > Subject: Re: [Public WebGL] bindAttribLocation on _?webgl_ attribs > > Yes, that's correct. > > It wouldn't be acceptable if the application developer could call > bindAttribLocation passing in an identifier starting with one of the > reserved prefixes and have that apply to an attribute allocated > internally by the WebGL implementation. > > Should this sentence be clarified and perhaps separated out into a > separate subsection which can be referred to independently of > "Supported GLSL Constructs"? > > -Ken > > > > On Tue, Feb 26, 2013 at 2:01 PM, Gregg Tavares wrote: >> If the WebGL spec is not clear it should be updated. 
It's trying to match >> the OpenGL ES spec but add a prefix. Everywhere "gl_" is disallowed WebGL >> also wants to disallow "webgl_" >> >> >> >> >> >> On Tue, Feb 26, 2013 at 1:11 PM, Jeff Gilbert wrote: >>> >>> >>> The test suite has this: >>> gl.bindAttribLocation(program, 0, 'webgl_a') expected: INVALID_OPERATION >>> gl.bindAttribLocation(program, 0, '_webgl_a') expected: INVALID_OPERATION >>> >>> I don't see where this is in the spec. The GLES2 spec has this behavior >>> for attribs starting with 'gl_', but I can't find the language in the WebGL >>> spec which amends this. It seems like this shouldn't be strictly necessary >>> anyways, since we already have this language in the spec: >>> >>> "In addition to the reserved identifiers in the aforementioned >>> specification, identifiers starting with "webgl_" and "_webgl_" are reserved >>> for use by WebGL. A shader which declares a function, variable, structure >>> name, or structure field starting with these prefixes must not be allowed to >>> load." >>> >>> With this restriction, it seems that the user couldn't declare an attrib >>> with such a name anyways, so bindAttribLocation with these prefixes would >>> always do nothing to a shader which meets this restriction, while refusing >>> to 'load' (link? use? This should probably be clarified as well.) shaders >>> which conflict with this passage. >>> >>> It seems like we don't need this restriction, so it should be removed. Is >>> there something I'm missing? >>> >>> -Jeff >>> >>> ----------------------------------------------------------- >>> You are currently subscribed to public_webgl...@ >>> To unsubscribe, send an email to majordomo...@ with >>> the following command in the body of your email: >>> unsubscribe public_webgl >>> ----------------------------------------------------------- >>> >> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gma...@ Tue Feb 26 18:17:07 2013 From: gma...@ (Gregg Tavares) Date: Tue, 26 Feb 2013 18:17:07 -0800 Subject: [Public WebGL] bindAttribLocation on _?webgl_ attribs In-Reply-To: <606042301.3849566.1361926028261.JavaMail.root@mozilla.com> References: <606042301.3849566.1361926028261.JavaMail.root@mozilla.com> Message-ID: On Tue, Feb 26, 2013 at 4:47 PM, Jeff Gilbert wrote: > We do inherit GL's behavior, but there's no reason to expand WebGL's > behavior here without reason, nor should we just s/gl/webgl/ everything. > Since restricting it here goes further than what GL mandates, and it > doesn't win us anything, it seems unnecessary to restrict this. > Why do you think GL stops "gl_" on bindAttribLocation? By your logic (which I agree with) it could just drop those on the floor as well right? I've assumed there was a reason. If I had to guess I'd guess so they can later add some special "gl_specialAttribThingy" and let you call glBindAttribLocation(prg, index, "gl_specialAttribThingy") to bind it to a specific attribute. If they didn't preemptively reject it in bindAttribLocation then there's the possibility someone is passing that in already which would be silently ignored and then someday suddenly not silently ignored?? So, I've assumed we should do the same for our reserved prefixes. As for _webgl_ I'm pretty sure that comes from the discussion group. 
No idea why http://www.khronos.org/webgl/public-mailing-list/archives/1003/msg00095.html > > Further, s/gl/webgl/ here would only add the reserved "webgl_" prefix. I'm > not sure why we reserve "_webgl_". > > -Jeff > > > ----- Original Message ----- > From: "Kenneth Russell" > To: "Jeff Gilbert" > Cc: "public webgl" , "Gregg Tavares" < > gman...@> > Sent: Tuesday, February 26, 2013 3:29:30 PM > Subject: Re: [Public WebGL] bindAttribLocation on _?webgl_ attribs > > The manual page for glBindAttribLocation states: > > GL_INVALID_OPERATION is generated if name starts with the reserved > prefix "gl_". > > WebGL's corresponding entry point should behave the same for WebGL's > reserved prefixes, not silently drop the call on the floor. > > -Ken > > > > On Tue, Feb 26, 2013 at 3:17 PM, Jeff Gilbert > wrote: > > I don't see why it's needed, though. The WebGL impl can easily just drop > these bindings on the floor, since there's no possible time when they'd be > valid. (All shaders containing them should fail to link, presumedly) > > > > I don't see a good reason for keeping this restriction in light of how > easy it should be to just ignore such calls. Failing that, we should > clarify the language, yes. > > > > -Jeff > > > > ----- Original Message ----- > > From: "Kenneth Russell" > > To: "Gregg Tavares" > > Cc: "Jeff Gilbert" , "public webgl" < > public_webgl...@> > > Sent: Tuesday, February 26, 2013 2:39:35 PM > > Subject: Re: [Public WebGL] bindAttribLocation on _?webgl_ attribs > > > > Yes, that's correct. > > > > It wouldn't be acceptable if the application developer could call > > bindAttribLocation passing in an identifier starting with one of the > > reserved prefixes and have that apply to an attribute allocated > > internally by the WebGL implementation. > > > > Should this sentence be clarified and perhaps separated out into a > > separate subsection which can be referred to independently of > > "Supported GLSL Constructs"? > > > > -Ken > > > > > > > > On Tue, Feb 26, 2013 at 2:01 PM, Gregg Tavares wrote: > >> If the WebGL spec is not clear it should be updated. It's trying to > match > >> the OpenGL ES spec but add a prefix. Everywhere "gl_" is disallowed > WebGL > >> also wants to disallow "webgl_" > >> > >> > >> > >> > >> > >> On Tue, Feb 26, 2013 at 1:11 PM, Jeff Gilbert > wrote: > >>> > >>> > >>> The test suite has this: > >>> gl.bindAttribLocation(program, 0, 'webgl_a') expected: > INVALID_OPERATION > >>> gl.bindAttribLocation(program, 0, '_webgl_a') expected: > INVALID_OPERATION > >>> > >>> I don't see where this is in the spec. The GLES2 spec has this behavior > >>> for attribs starting with 'gl_', but I can't find the language in the > WebGL > >>> spec which amends this. It seems like this shouldn't be strictly > necessary > >>> anyways, since we already have this language in the spec: > >>> > >>> "In addition to the reserved identifiers in the aforementioned > >>> specification, identifiers starting with "webgl_" and "_webgl_" are > reserved > >>> for use by WebGL. A shader which declares a function, variable, > structure > >>> name, or structure field starting with these prefixes must not be > allowed to > >>> load." > >>> > >>> With this restriction, it seems that the user couldn't declare an > attrib > >>> with such a name anyways, so bindAttribLocation with these prefixes > would > >>> always do nothing to a shader which meets this restriction, while > refusing > >>> to 'load' (link? use? This should probably be clarified as well.) 
> shaders > >>> which conflict with this passage. > >>> > >>> It seems like we don't need this restriction, so it should be removed. > Is > >>> there something I'm missing? > >>> > >>> -Jeff > >>> > >>> ----------------------------------------------------------- > >>> You are currently subscribed to public_webgl...@ > >>> To unsubscribe, send an email to majordomo...@ with > >>> the following command in the body of your email: > >>> unsubscribe public_webgl > >>> ----------------------------------------------------------- > >>> > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Tue Feb 26 18:26:55 2013 From: kbr...@ (Kenneth Russell) Date: Tue, 26 Feb 2013 18:26:55 -0800 Subject: [Public WebGL] bindAttribLocation on _?webgl_ attribs In-Reply-To: References: <606042301.3849566.1361926028261.JavaMail.root@mozilla.com> Message-ID: On Tue, Feb 26, 2013 at 6:17 PM, Gregg Tavares wrote: > > > > On Tue, Feb 26, 2013 at 4:47 PM, Jeff Gilbert wrote: >> >> We do inherit GL's behavior, but there's no reason to expand WebGL's >> behavior here without reason, nor should we just s/gl/webgl/ everything. >> Since restricting it here goes further than what GL mandates, and it doesn't >> win us anything, it seems unnecessary to restrict this. > > > Why do you think GL stops "gl_" on bindAttribLocation? By your logic (which > I agree with) it could just drop those on the floor as well right? I've > assumed there was a reason. > > If I had to guess I'd guess so they can later add some special > "gl_specialAttribThingy" and let you call glBindAttribLocation(prg, index, > "gl_specialAttribThingy") to bind it to a specific attribute. If they didn't > preemptively reject it in bindAttribLocation then there's the possibility > someone is passing that in already which would be silently ignored and then > someday suddenly not silently ignored?? So, I've assumed we should do the > same for our reserved prefixes. I agree. This has been the tested behavior for WebGL implementations for some time and I see no reason to relax or change it. > As for _webgl_ I'm pretty sure that comes from the discussion group. No idea > why > > http://www.khronos.org/webgl/public-mailing-list/archives/1003/msg00095.html I think Cedric's idea at the time was that an extension might expose a variable "webgl_foo" to shaders that developers were supposed to be able to access. (At the time, we were talking about exposing a synthetic "webgl_InstanceID" variable.) Identifiers starting with "_webgl_" would be used only by implementations. Again, I do not think the reservation of these namespaces should be changed. -Ken > > > > > > >> >> >> Further, s/gl/webgl/ here would only add the reserved "webgl_" prefix. I'm >> not sure why we reserve "_webgl_". >> >> -Jeff >> >> >> ----- Original Message ----- >> From: "Kenneth Russell" >> To: "Jeff Gilbert" >> Cc: "public webgl" , "Gregg Tavares" >> >> Sent: Tuesday, February 26, 2013 3:29:30 PM >> Subject: Re: [Public WebGL] bindAttribLocation on _?webgl_ attribs >> >> The manual page for glBindAttribLocation states: >> >> GL_INVALID_OPERATION is generated if name starts with the reserved >> prefix "gl_". >> >> WebGL's corresponding entry point should behave the same for WebGL's >> reserved prefixes, not silently drop the call on the floor. >> >> -Ken >> >> >> >> On Tue, Feb 26, 2013 at 3:17 PM, Jeff Gilbert >> wrote: >> > I don't see why it's needed, though. 
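A minimal sketch of the call under discussion, assuming a hypothetical WebGLRenderingContext named gl and a valid WebGLProgram named program; the attribute name 'webgl_a' comes from the conformance-test expectations quoted earlier in the thread.

    // The 1.0.2 test expects INVALID_OPERATION here, mirroring GL's treatment
    // of the "gl_" prefix in glBindAttribLocation.
    gl.bindAttribLocation(program, 0, 'webgl_a');
    var err = gl.getError();
    if (err === gl.INVALID_OPERATION) {
      // The behavior Ken argues for: the call is rejected up front.
    } else if (err === gl.NO_ERROR) {
      // The alternative Jeff raises: the binding is silently ignored, since no
      // conformant shader can declare 'webgl_a' anyway.
    }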
From jgi...@ Tue Feb 26 18:52:32 2013
From: jgi...@ (Jeff Gilbert)
Date: Tue, 26 Feb 2013 18:52:32 -0800 (PST)
Subject: [Public WebGL] bindAttribLocation on _?webgl_ attribs
In-Reply-To:
Message-ID: <1030123615.3858362.1361933552275.JavaMail.root@mozilla.com>

"This is how it's been done" isn't a good reason by itself. The "_webgl_" prefix seems entirely unnecessary.

And still, calling bindAttribLocation *always* drops things on the floor if they aren't used in the linked shader, which /_?webgl_.*/ currently can't ever be. It's not as if extensions are silently added in WebGL either, since you have to opt in to them.

Certainly even if we want to keep "webgl_" reserved, we should clearly state that this will emit an error in bindAttribLocation. The spec cannot be vague about these things. Tests match the spec, not vice versa.

I see no good reason why we should not remove the restriction on "_webgl_", and as such, we should allow it.

I will also strongly note that this has *not* been "the tested behavior for WebGL implementations for some time", given that it was only added in 1.0.2, which we are only just now snapshotting.

-Jeff
From gma...@ Tue Feb 26 18:59:58 2013
From: gma...@ (Gregg Tavares)
Date: Tue, 26 Feb 2013 18:59:58 -0800
Subject: [Public WebGL] bindAttribLocation on _?webgl_ attribs
In-Reply-To: <1030123615.3858362.1361933552275.JavaMail.root@mozilla.com>
References: <1030123615.3858362.1361933552275.JavaMail.root@mozilla.com>
Message-ID:

I don't have a problem dropping "_webgl_". I think we should keep "webgl_" as is

> The spec cannot be vague about these things. Tests match the spec, not vice versa.

There are places in the OpenGL ES spec that effectively say "up to the tests" :-(
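A minimal sketch of the shader-level rule quoted from the spec, assuming the same hypothetical gl context; whether rejection happens at compile or link time is exactly the ambiguity in "allowed to load" noted earlier in the thread. The 1.0.1 tests linked in the next message exercise this path.

    // A vertex shader declaring an identifier with a reserved prefix.
    var src = 'attribute vec4 webgl_a;\n' +
              'void main() { gl_Position = webgl_a; }';
    var vs = gl.createShader(gl.VERTEX_SHADER);
    gl.shaderSource(vs, src);
    gl.compileShader(vs);
    // Per the quoted spec language, a conformant implementation must refuse to
    // let this shader load; an implementation that rejects at compile time
    // will report COMPILE_STATUS of false here.
    var ok = gl.getShaderParameter(vs, gl.COMPILE_STATUS);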
From gma...@ Tue Feb 26 19:10:12 2013
From: gma...@ (Gregg Tavares)
Date: Tue, 26 Feb 2013 19:10:12 -0800
Subject: [Public WebGL] bindAttribLocation on _?webgl_ attribs
In-Reply-To: <1030123615.3858362.1361933552275.JavaMail.root@mozilla.com>
References: <1030123615.3858362.1361933552275.JavaMail.root@mozilla.com>
Message-ID:

On Tue, Feb 26, 2013 at 6:52 PM, Jeff Gilbert wrote:
> I will also strongly note that this has *not* been "the tested behavior for WebGL implementations for some time", given that it was only added in 1.0.2, which we are only just now snapshotting.

It has been in the 1.0.1 tests though
https://www.khronos.org/registry/webgl/conformance-suites/1.0.1/conformance/glsl/misc/shader-with-webgl-identifier.vert.html
https://www.khronos.org/registry/webgl/conformance-suites/1.0.1/conformance/glsl/misc/shader-with-_webgl-identifier.vert.html
From jgi...@ Wed Feb 27 11:57:38 2013
From: jgi...@ (Jeff Gilbert)
Date: Wed, 27 Feb 2013 11:57:38 -0800 (PST)
Subject: [Public WebGL] bindAttribLocation on _?webgl_ attribs
In-Reply-To:
Message-ID: <994705982.4041767.1361995058350.JavaMail.root@mozilla.com>

Firefox passes all 1.0.1 tests, so if we're failing it now, it wasn't tested before. I believe the tests you mentioned are only checking that those prefixes are webglsl-reserved, not that bindAttribLocation emits an error.
From jgi...@ Wed Feb 27 12:03:22 2013
From: jgi...@ (Jeff Gilbert)
Date: Wed, 27 Feb 2013 12:03:22 -0800 (PST)
Subject: [Public WebGL] bindAttribLocation on _?webgl_ attribs
In-Reply-To:
Message-ID: <503883616.4042507.1361995402169.JavaMail.root@mozilla.com>

Leaving it up to the tests is terrible, doubly-so if the spec doesn't mention this. This is clearly a tradition we should ignore.

That we're talking about it now makes it clear it should be clearly specified.

-Jeff
Everywhere "gl_" is disallowed >> >> WebGL >> >> also wants to disallow "webgl_" >> >> >> >> >> >> >> >> >> >> >> >> On Tue, Feb 26, 2013 at 1:11 PM, Jeff Gilbert < jgilbert...@ > >> >> wrote: >> >>> >> >>> >> >>> The test suite has this: >> >>> gl.bindAttribLocation(program, 0, 'webgl_a') expected: >> >>> INVALID_OPERATION >> >>> gl.bindAttribLocation(program, 0, '_webgl_a') expected: >> >>> INVALID_OPERATION >> >>> >> >>> I don't see where this is in the spec. The GLES2 spec has this >> >>> behavior >> >>> for attribs starting with 'gl_', but I can't find the language in the >> >>> WebGL >> >>> spec which amends this. It seems like this shouldn't be strictly >> >>> necessary >> >>> anyways, since we already have this language in the spec: >> >>> >> >>> "In addition to the reserved identifiers in the aforementioned >> >>> specification, identifiers starting with "webgl_" and "_webgl_" are >> >>> reserved >> >>> for use by WebGL. A shader which declares a function, variable, >> >>> structure >> >>> name, or structure field starting with these prefixes must not be >> >>> allowed to >> >>> load." >> >>> >> >>> With this restriction, it seems that the user couldn't declare an >> >>> attrib >> >>> with such a name anyways, so bindAttribLocation with these prefixes >> >>> would >> >>> always do nothing to a shader which meets this restriction, while >> >>> refusing >> >>> to 'load' (link? use? This should probably be clarified as well.) >> >>> shaders >> >>> which conflict with this passage. >> >>> >> >>> It seems like we don't need this restriction, so it should be removed. >> >>> Is >> >>> there something I'm missing? >> >>> >> >>> -Jeff >> >>> >> >>> ----------------------------------------------------------- >> >>> You are currently subscribed to public_webgl...@ . >> >>> To unsubscribe, send an email to majordomo...@ with >> >>> the following command in the body of your email: >> >>> unsubscribe public_webgl >> >>> ----------------------------------------------------------- >> >>> >> >> > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Wed Feb 27 12:10:41 2013 From: kbr...@ (Kenneth Russell) Date: Wed, 27 Feb 2013 12:10:41 -0800 Subject: [Public WebGL] bindAttribLocation on _?webgl_ attribs In-Reply-To: <503883616.4042507.1361995402169.JavaMail.root@mozilla.com> References: <503883616.4042507.1361995402169.JavaMail.root@mozilla.com> Message-ID: The editor's draft spec does need to be clarified to indicate that the WebGL reserved prefixes apply, like the "gl_" reserved prefix, to entry points like bindAttribLocation and getAttribLocation. Unless there are any objections, I will make this change and push to have it incorporated into the 1.0.2 spec. Unreserving the "_webgl_" prefix would make it so that the 1.0.1 tests no longer run on a 1.0.2-compliant WebGL implementation, cause incompatibility with the existing 1.0 and forthcoming 1.0.1 specs, and require another change to the 1.0.2 spec in the process of ratification. While I agree that in hindsight it was not necessary to reserve the "_webgl_" prefix in addition to "webgl_", I don't think it is worth the effort or confusion to change this now. Jeff, if you feel strongly about this, then please organize a formal vote in the working group. 
-Ken On Wed, Feb 27, 2013 at 12:03 PM, Jeff Gilbert wrote: > Leaving it up to the tests is terrible, doubly-so if the spec doesn't mention this. This is clearly a tradition we should ignore. > > That we're talking about it now makes it clear it should be clearly specified. > > -Jeff > > ----- Original Message ----- > From: "Gregg Tavares" > To: "Jeff Gilbert" > Cc: "Kenneth Russell" , "public webgl" > Sent: Tuesday, February 26, 2013 6:59:58 PM > Subject: Re: [Public WebGL] bindAttribLocation on _?webgl_ attribs > > > I don't have a problem dropping "_webgl_". I think we should keep "webgl_" as is > > >> The spec cannot be vague about these things. Tests match the spec, not vice versa. > > > There are places in the OpenGL ES spec that effectively say "up to the tests" :-( > > > > > > On Tue, Feb 26, 2013 at 6:52 PM, Jeff Gilbert < jgilbert...@ > wrote: > > > "This is how it's been done" isn't a good reason by itself. The "_webgl_" prefix seems entirely unnecessary. > > And still, calling bindAttribLocation *always* drops things on the floor if they aren't used in the linked shader, which /_?webgl_.*/ currently can't ever be. It's not as if extensions are silently added in WebGL either, since you have to opt in to them. > > Certainly even if we want to keep "webgl_" reserved, we should clearly state that this will emit an error in bindAttribLocation. The spec cannot be vague about these things. Tests match the spec, not vice versa. > > I see no good reason why we should not remove the restriction on "_webgl_", and as such, we should allow it. > > I will also strongly note that this has *not* been "the tested behavior for WebGL implementations > for some time", given that it was only added in 1.0.2, which we are only just now snapshotting. > > > -Jeff > > ----- Original Message ----- > From: "Kenneth Russell" < kbr...@ > > To: "Gregg Tavares" < gman...@ > > Cc: "Jeff Gilbert" < jgilbert...@ >, "public webgl" < public_webgl...@ > > > > Sent: Tuesday, February 26, 2013 6:26:55 PM > Subject: Re: [Public WebGL] bindAttribLocation on _?webgl_ attribs > > On Tue, Feb 26, 2013 at 6:17 PM, Gregg Tavares < gman...@ > wrote: >> >> >> >> On Tue, Feb 26, 2013 at 4:47 PM, Jeff Gilbert < jgilbert...@ > wrote: >>> >>> We do inherit GL's behavior, but there's no reason to expand WebGL's >>> behavior here without reason, nor should we just s/gl/webgl/ everything. >>> Since restricting it here goes further than what GL mandates, and it doesn't >>> win us anything, it seems unnecessary to restrict this. >> >> >> Why do you think GL stops "gl_" on bindAttribLocation? By your logic (which >> I agree with) it could just drop those on the floor as well right? I've >> assumed there was a reason. >> >> If I had to guess I'd guess so they can later add some special >> "gl_specialAttribThingy" and let you call glBindAttribLocation(prg, index, >> "gl_specialAttribThingy") to bind it to a specific attribute. If they didn't >> preemptively reject it in bindAttribLocation then there's the possibility >> someone is passing that in already which would be silently ignored and then >> someday suddenly not silently ignored?? So, I've assumed we should do the >> same for our reserved prefixes. > > I agree. This has been the tested behavior for WebGL implementations > for some time and I see no reason to relax or change it. > > >> As for _webgl_ I'm pretty sure that comes from the discussion group. 
No idea >> why >> >> http://www.khronos.org/webgl/public-mailing-list/archives/1003/msg00095.html > > I think Cedric's idea at the time was that an extension might expose a > variable "webgl_foo" to shaders that developers were supposed to be > able to access. (At the time, we were talking about exposing a > synthetic "webgl_InstanceID" variable.) Identifiers starting with > "_webgl_" would be used only by implementations. Again, I do not think > the reservation of these namespaces should be changed. > > -Ken > > >> >> >> >> >> >> >>> >>> >>> Further, s/gl/webgl/ here would only add the reserved "webgl_" prefix. I'm >>> not sure why we reserve "_webgl_". >>> >>> -Jeff >>> >>> >>> ----- Original Message ----- >>> From: "Kenneth Russell" < kbr...@ > >>> To: "Jeff Gilbert" < jgilbert...@ > >>> Cc: "public webgl" < public_webgl...@ >, "Gregg Tavares" >>> < gman...@ > >>> Sent: Tuesday, February 26, 2013 3:29:30 PM >>> Subject: Re: [Public WebGL] bindAttribLocation on _?webgl_ attribs >>> >>> The manual page for glBindAttribLocation states: >>> >>> GL_INVALID_OPERATION is generated if name starts with the reserved >>> prefix "gl_". >>> >>> WebGL's corresponding entry point should behave the same for WebGL's >>> reserved prefixes, not silently drop the call on the floor. >>> >>> -Ken >>> >>> >>> >>> On Tue, Feb 26, 2013 at 3:17 PM, Jeff Gilbert < jgilbert...@ > >>> wrote: >>> > I don't see why it's needed, though. The WebGL impl can easily just drop >>> > these bindings on the floor, since there's no possible time when they'd be >>> > valid. (All shaders containing them should fail to link, presumedly) >>> > >>> > I don't see a good reason for keeping this restriction in light of how >>> > easy it should be to just ignore such calls. Failing that, we should clarify >>> > the language, yes. >>> > >>> > -Jeff >>> > >>> > ----- Original Message ----- >>> > From: "Kenneth Russell" < kbr...@ > >>> > To: "Gregg Tavares" < gman...@ > >>> > Cc: "Jeff Gilbert" < jgilbert...@ >, "public webgl" >>> > < public_webgl...@ > >>> > Sent: Tuesday, February 26, 2013 2:39:35 PM >>> > Subject: Re: [Public WebGL] bindAttribLocation on _?webgl_ attribs >>> > >>> > Yes, that's correct. >>> > >>> > It wouldn't be acceptable if the application developer could call >>> > bindAttribLocation passing in an identifier starting with one of the >>> > reserved prefixes and have that apply to an attribute allocated >>> > internally by the WebGL implementation. >>> > >>> > Should this sentence be clarified and perhaps separated out into a >>> > separate subsection which can be referred to independently of >>> > "Supported GLSL Constructs"? >>> > >>> > -Ken >>> > >>> > >>> > >>> > On Tue, Feb 26, 2013 at 2:01 PM, Gregg Tavares < gman...@ > wrote: >>> >> If the WebGL spec is not clear it should be updated. It's trying to >>> >> match >>> >> the OpenGL ES spec but add a prefix. Everywhere "gl_" is disallowed >>> >> WebGL >>> >> also wants to disallow "webgl_" >>> >> >>> >> >>> >> >>> >> >>> >> >>> >> On Tue, Feb 26, 2013 at 1:11 PM, Jeff Gilbert < jgilbert...@ > >>> >> wrote: >>> >>> >>> >>> >>> >>> The test suite has this: >>> >>> gl.bindAttribLocation(program, 0, 'webgl_a') expected: >>> >>> INVALID_OPERATION >>> >>> gl.bindAttribLocation(program, 0, '_webgl_a') expected: >>> >>> INVALID_OPERATION >>> >>> >>> >>> I don't see where this is in the spec. The GLES2 spec has this >>> >>> behavior >>> >>> for attribs starting with 'gl_', but I can't find the language in the >>> >>> WebGL >>> >>> spec which amends this. 
It seems like this shouldn't be strictly >>> >>> necessary >>> >>> anyways, since we already have this language in the spec: >>> >>> >>> >>> "In addition to the reserved identifiers in the aforementioned >>> >>> specification, identifiers starting with "webgl_" and "_webgl_" are >>> >>> reserved >>> >>> for use by WebGL. A shader which declares a function, variable, >>> >>> structure >>> >>> name, or structure field starting with these prefixes must not be >>> >>> allowed to >>> >>> load." >>> >>> >>> >>> With this restriction, it seems that the user couldn't declare an >>> >>> attrib >>> >>> with such a name anyways, so bindAttribLocation with these prefixes >>> >>> would >>> >>> always do nothing to a shader which meets this restriction, while >>> >>> refusing >>> >>> to 'load' (link? use? This should probably be clarified as well.) >>> >>> shaders >>> >>> which conflict with this passage. >>> >>> >>> >>> It seems like we don't need this restriction, so it should be removed. >>> >>> Is >>> >>> there something I'm missing? >>> >>> >>> >>> -Jeff >>> >>> >>> >>> ----------------------------------------------------------- >>> >>> You are currently subscribed to public_webgl...@ . >>> >>> To unsubscribe, send an email to majordomo...@ with >>> >>> the following command in the body of your email: >>> >>> unsubscribe public_webgl >>> >>> ----------------------------------------------------------- >>> >>> >>> >> >> >> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Wed Feb 27 12:43:36 2013 From: kbr...@ (Kenneth Russell) Date: Wed, 27 Feb 2013 12:43:36 -0800 Subject: [Public WebGL] Proposal: merge new typed-arrays-in-workers test to 1.0.2 In-Reply-To: References: <512559B2.3000209@mozilla.com> Message-ID: Since there weren't any objections, and since bugs in this area were discovered in a large, real-world WebGL application (MapsGL), I've merged this test into the 1.0.2 conformance suite so that we collectively test this functionality sooner rather than later: https://github.com/KhronosGroup/WebGL/pull/211 -Ken On Thu, Feb 21, 2013 at 10:51 AM, Kenneth Russell wrote: > Thanks, that's great. Other browser vendors? Apple, Opera? > > Note that the test is not yet in the 1.0.2 suite -- only in trunk. The > proposal here is to merge it back to 1.0.2. > > -Ken > > > > On Wed, Feb 20, 2013 at 3:18 PM, Benoit Jacob wrote: >> >> OK. I think that we should get this fixed. It seems acceptable to leave >> this in 1.0.2. >> >> Benoit >> >> On 13-02-19 09:58 PM, Kenneth Russell wrote: >>> Right after the 1.0.2 conformance suite snapshot was taken, the MapsGL >>> team discovered (actually, rediscovered) a bug in Transferable support >>> for typed arrays in one major browser. Unfortunately, the WebGL >>> conformance suite didn't have a test of Transferable support, which is >>> why this bug went unnoticed to this point. >>> >>> A thorough test has been added to the top of tree conformance suite: >>> https://www.khronos.org/registry/webgl/sdk/tests/conformance/typedarrays/typed-arrays-in-workers.html >>> >>> I would like to propose that this test be merged back to the 1.0.2 >>> suite. 
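For context, the Transferable mechanism that test exercises lets a page hand a typed array's ArrayBuffer to a Web Worker without copying it. A minimal sketch of the pattern, with made-up file and variable names, is:

    // main.js (sketch): transfer the buffer rather than copying it.
    var worker = new Worker("worker.js");
    var vertices = new Float32Array(1024);
    worker.postMessage({ buffer: vertices.buffer }, [vertices.buffer]); // second argument is the transfer list
    // After the transfer, vertices.buffer is neutered in this context (byteLength === 0).

    // worker.js (sketch)
    onmessage = function (e) {
      var data = new Float32Array(e.data.buffer);
      // ... fill or process the data, then transfer it back the same way:
      postMessage({ buffer: data.buffer }, [data.buffer]);
    };

This is only an outline; the actual conformance test covers many more cases than this round trip.
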
It exposes bugs in the majority of browsers supporting WebGL, >>> and it is likely that the 1.0.2 suite will be a target for both >>> browser and GPU vendors for quite some time. >>> >>> Could all browser vendors supporting WebGL please reply to the list >>> indicating whether or not you would support this? Thanks. >>> >>> -Ken >>> >>> ----------------------------------------------------------- >>> You are currently subscribed to public_webgl...@ >>> To unsubscribe, send an email to majordomo...@ with >>> the following command in the body of your email: >>> unsubscribe public_webgl >>> ----------------------------------------------------------- >>> >> >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Wed Feb 27 17:21:01 2013 From: kbr...@ (Kenneth Russell) Date: Wed, 27 Feb 2013 17:21:01 -0800 Subject: [Public WebGL] gl.getShaderPrecisionFormat In-Reply-To: References: Message-ID: On Fri, Feb 22, 2013 at 2:38 PM, David Sheets wrote: > On Fri, Feb 22, 2013 at 2:30 PM, Kenneth Russell wrote: >> We'll all have to discuss this once 1.0.1 and 1.0.2 actually ship. The >> situation will be made more complex with the forthcoming "WebGL level >> 2" draft spec incorporating ES 3.0 functionality. At that point we may >> want to consider a different scheme for separating the two major >> versions of the spec. > > Once 1.0.1 ships, 1.0 will mean the spec preceding 1.0.1. Is it > possible to rename 1.0 to 1.0.0? > > Or is there some sort of "1.0 compliant" issue which allows vendors to > comply to any test suite/spec in the 1.0.x line? > > It seems that "1.0 compliant" is impossible. Only compliance to > specific revision snapshots appears possible due to unknown unknowns. Correct, there is no notion that a WebGL implementation is conformant to all 1.0.x releases of the specification. Conformance is established by passing a specific version of the conformance tests. WebGL 1.0 won't be renamed to 1.0.0. The version number is embedded in the URL ( https://www.khronos.org/registry/webgl/specs/1.0/ ) and changing it would break links. -Ken > David > >> -Ken >> >> >> On Fri, Feb 22, 2013 at 2:27 PM, David Sheets wrote: >>> On Fri, Feb 22, 2013 at 2:09 PM, Kenneth Russell wrote: >>>> >>>> Yes, this was an unfortunate and accidental omission from the 1.0 >>>> spec. It will be fixed in the 1.0.1 and subsequent versions of the >>>> spec, hopefully to be unblocked and released very soon. >>> >>> Is it the policy of the WG to track spec versions to their most >>> up-to-date revision? >>> >>> That is, for the specs being published with >>> [major].[minor].[revision], do revision increments really indicate >>> solely revisions and minor interface extensions? >>> >>> I ask because it seems that if this is the case, the spec known as >>> "1.0" is actually "1.0.0" and any references to "1.0" should point to >>> the latest spec in the "1.0" lineage. 
>>> >>> This is distinct from the "latest" branch because if the latest branch >>> moves to 1.1.x (or 2.0.x) then 1.0 will continue to track 1.0.y where >>> y is the largest value with a corresponding revision. >>> >>> This may help cut down confusion regarding the revisions. If a dev >>> wants to refer to a specific revision, they can always still use the >>> dotted triple. What do you think? >>> >>> David >>> >>>> -Ken >>>> >>>> >>>> On Fri, Feb 22, 2013 at 1:50 PM, Ben Vanik wrote: >>>>> Are you looking at an old version of the spec? >>>>> >>>>> It's in here: >>>>> http://www.khronos.org/registry/webgl/specs/latest/ >>>>> >>>>> >>>>> On Fri, Feb 22, 2013 at 1:36 PM, Florian B?sch wrote: >>>>>> >>>>>> I've noticed that gl.getShaderPrecisionFormat is not documented in the >>>>>> standard, yet it is implemented by both chrome and firefox. The enumerants >>>>>> it relies on (such as gl.HIGH_FLOAT) are in the standard, but aren't used by >>>>>> any function. >>>>>> >>>>>> I suppose it's missing because of an editing oversight? >>>>> >>>>> >>>> >>>> ----------------------------------------------------------- >>>> You are currently subscribed to public_webgl...@ >>>> To unsubscribe, send an email to majordomo...@ with >>>> the following command in the body of your email: >>>> unsubscribe public_webgl >>>> ----------------------------------------------------------- >>>> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kos...@ Wed Feb 27 17:40:08 2013 From: kos...@ (David Sheets) Date: Wed, 27 Feb 2013 17:40:08 -0800 Subject: [Public WebGL] gl.getShaderPrecisionFormat In-Reply-To: References: Message-ID: On Wed, Feb 27, 2013 at 5:21 PM, Kenneth Russell wrote: > On Fri, Feb 22, 2013 at 2:38 PM, David Sheets wrote: >> On Fri, Feb 22, 2013 at 2:30 PM, Kenneth Russell wrote: >>> We'll all have to discuss this once 1.0.1 and 1.0.2 actually ship. The >>> situation will be made more complex with the forthcoming "WebGL level >>> 2" draft spec incorporating ES 3.0 functionality. At that point we may >>> want to consider a different scheme for separating the two major >>> versions of the spec. >> >> Once 1.0.1 ships, 1.0 will mean the spec preceding 1.0.1. Is it >> possible to rename 1.0 to 1.0.0? >> >> Or is there some sort of "1.0 compliant" issue which allows vendors to >> comply to any test suite/spec in the 1.0.x line? >> >> It seems that "1.0 compliant" is impossible. Only compliance to >> specific revision snapshots appears possible due to unknown unknowns. > > Correct, there is no notion that a WebGL implementation is conformant > to all 1.0.x releases of the specification. Conformance is established > by passing a specific version of the conformance tests. Ok. Just to check I understand, if an implementation passes the "1.0.0" test suite, it is conformant to the specification published as "1.0"? > WebGL 1.0 won't be renamed to 1.0.0. The version number is embedded in > the URL ( https://www.khronos.org/registry/webgl/specs/1.0/ ) and > changing it would break links. I think that URL should always be alive and point to the most up-to-date, ratified specification in the 1.0.x line. IMHO, a new URL should be minted where the zeroth revision of the spec lives. 
This makes the labeling of each spec parallel and aligns the spec names with the test suite names. Before 1.0.1 is ratified, and would dereference to the same document. Once 1.0.1 is ratified, would dereference to the same document as and would dereference to the original WebGL revision c. 2011. When WebGL 1.0.1 is ratified, will it be the target for implementors wishing to implement "WebGL 1.0"? When WebGL 1.0.2 is ratified, will it be the target for implementors wishing to offer "WebGL 1.0" capabilities? I'd really like the version numbers to make sense. Taking Linux kernel development as a model, people often say "I am running 2.6" and "I've updated to the newest 2.6 kernel". When reporting issues and discussing interop, however, the revision number (and patch level etc) is often cited. Can WebGL follow this kind of convention? David > -Ken > > >> David >> >>> -Ken >>> >>> >>> On Fri, Feb 22, 2013 at 2:27 PM, David Sheets wrote: >>>> On Fri, Feb 22, 2013 at 2:09 PM, Kenneth Russell wrote: >>>>> >>>>> Yes, this was an unfortunate and accidental omission from the 1.0 >>>>> spec. It will be fixed in the 1.0.1 and subsequent versions of the >>>>> spec, hopefully to be unblocked and released very soon. >>>> >>>> Is it the policy of the WG to track spec versions to their most >>>> up-to-date revision? >>>> >>>> That is, for the specs being published with >>>> [major].[minor].[revision], do revision increments really indicate >>>> solely revisions and minor interface extensions? >>>> >>>> I ask because it seems that if this is the case, the spec known as >>>> "1.0" is actually "1.0.0" and any references to "1.0" should point to >>>> the latest spec in the "1.0" lineage. >>>> >>>> This is distinct from the "latest" branch because if the latest branch >>>> moves to 1.1.x (or 2.0.x) then 1.0 will continue to track 1.0.y where >>>> y is the largest value with a corresponding revision. >>>> >>>> This may help cut down confusion regarding the revisions. If a dev >>>> wants to refer to a specific revision, they can always still use the >>>> dotted triple. What do you think? >>>> >>>> David >>>> >>>>> -Ken >>>>> >>>>> >>>>> On Fri, Feb 22, 2013 at 1:50 PM, Ben Vanik wrote: >>>>>> Are you looking at an old version of the spec? >>>>>> >>>>>> It's in here: >>>>>> http://www.khronos.org/registry/webgl/specs/latest/ >>>>>> >>>>>> >>>>>> On Fri, Feb 22, 2013 at 1:36 PM, Florian B?sch wrote: >>>>>>> >>>>>>> I've noticed that gl.getShaderPrecisionFormat is not documented in the >>>>>>> standard, yet it is implemented by both chrome and firefox. The enumerants >>>>>>> it relies on (such as gl.HIGH_FLOAT) are in the standard, but aren't used by >>>>>>> any function. >>>>>>> >>>>>>> I suppose it's missing because of an editing oversight? 
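For readers who haven't used the call being discussed, it is invoked roughly as follows (a sketch; the values returned are implementation-dependent), and it is the consumer of the otherwise-unused enumerants such as gl.HIGH_FLOAT:

    // Sketch: querying fragment-shader float precision. "gl" is assumed to be
    // a WebGLRenderingContext obtained elsewhere.
    var fmt = gl.getShaderPrecisionFormat(gl.FRAGMENT_SHADER, gl.HIGH_FLOAT);
    if (fmt !== null && fmt.precision > 0) {
      // highp float is available in fragment shaders; rangeMin and rangeMax
      // are log2 of the representable range, precision is log2 of the precision.
      console.log("highp float:", fmt.rangeMin, fmt.rangeMax, fmt.precision);
    } else {
      // Fall back to mediump precision in fragment shaders.
    }
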
>>>>>> >>>>>> >>>>> >>>>> ----------------------------------------------------------- >>>>> You are currently subscribed to public_webgl...@ >>>>> To unsubscribe, send an email to majordomo...@ with >>>>> the following command in the body of your email: >>>>> unsubscribe public_webgl >>>>> ----------------------------------------------------------- >>>>> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Thu Feb 28 06:56:54 2013 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Thu, 28 Feb 2013 15:56:54 +0100 Subject: [Public WebGL] EME and its interaction with WebGL Message-ID: I was wondering about the implications of having an EME element (video) on the page and its interaction with WebGL. The HTMLVideoElement can be used by calls to texture2D, and as I've shown dependent lookups and vertex shader lookups can be used to extract content (other than toDataURL and readPixels). Is it a given that the presence of an EME video on a page also containing a gl context will disable: - dependent lookups in fragment shaders? - vertex shader texture lookups? - toDataURL of canvases? - readPixels? -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Thu Feb 28 08:39:59 2013 From: bja...@ (Benoit Jacob) Date: Thu, 28 Feb 2013 11:39:59 -0500 Subject: [Public WebGL] EME and its interaction with WebGL In-Reply-To: References: Message-ID: <512F885F.4080209@mozilla.com> At this early stage I don't suppose that browser developers would have fully thought out ideas of what exactly will be possible with EME video, but there are basically two approaches that could be taken to achieve a vague semblance of self-consistency: - either one decides that EME's goal is only to prevent ripping at the stream level. Under that theory, EME video wouldn't be subject to more WebGL-related restrictions than regular video. - or one decides that EME really tries to prevent people from reading back decoded frames. In which case one could think that that is similar to not-same-origin video; except that if one wants to prevent not just regular Web content, but also privileged code like Firefox add-ons from getting the frames, that won't be enough, and in that case one would have to add a lot of drastic restrictions on what can be done with EME video that would include disabling usage in WebGL and much more (since at that point EME video would need a separate rendering pipeline bypassing the browser compositor completely). Until these decisions are made (and I don't know that they are as of yet) we can't know the answers to your specific questions. Benoit On 13-02-28 09:56 AM, Florian B?sch wrote: > I was wondering about the implications of having an EME element > (video) on the page and its interaction with WebGL. > > The HTMLVideoElement can be used by calls to texture2D, and as I've > shown dependent lookups and vertex shader lookups can be used to > extract content (other than toDataURL and readPixels). > > Is it a given that the presence of an EME video on a page also > containing a gl context will disable: > - dependent lookups in fragment shaders? > - vertex shader texture lookups? > - toDataURL of canvases? > - readPixels? 
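The upload path being discussed is the ordinary video-to-texture route, roughly as follows (a sketch with placeholder names; the video element is assumed to exist and could be backed by EME-protected content):

    // Sketch: sampling the current frame of a <video> element from WebGL.
    // "gl" is a WebGLRenderingContext created elsewhere.
    var video = document.querySelector("video");
    var tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
    // Once the frame is in a texture a shader can sample it, and the canvas
    // can in principle be read back via gl.readPixels() or canvas.toDataURL(),
    // which are the readback paths the questions above concern.
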
----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From won...@ Thu Feb 28 08:46:39 2013 From: won...@ (Won Chun) Date: Thu, 28 Feb 2013 11:46:39 -0500 Subject: [Public WebGL] EME and its interaction with WebGL In-Reply-To: References: Message-ID: A few clarifying questions: Does "EME" stand for "embedded media element"? Does the video element obey the same-origin policy, or use CORS headers? It sounds like you're describing the cross domain tainting business. Also curious about the larger context of this question. -Won On Thu, Feb 28, 2013 at 9:56 AM, Florian B?sch wrote: > I was wondering about the implications of having an EME element (video) on > the page and its interaction with WebGL. > > The HTMLVideoElement can be used by calls to texture2D, and as I've shown > dependent lookups and vertex shader lookups can be used to extract content > (other than toDataURL and readPixels). > > Is it a given that the presence of an EME video on a page also containing > a gl context will disable: > - dependent lookups in fragment shaders? > - vertex shader texture lookups? > - toDataURL of canvases? > - readPixels? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bzb...@ Thu Feb 28 08:52:10 2013 From: bzb...@ (Boris Zbarsky) Date: Thu, 28 Feb 2013 11:52:10 -0500 Subject: [Public WebGL] EME and its interaction with WebGL In-Reply-To: References: Message-ID: <512F8B3A.4050604@mit.edu> On 2/28/13 11:46 AM, Won Chun wrote: > Does "EME" stand for "embedded media element"? "EME" stands for "Encrypted Media Extensions". See https://dvcs.w3.org/hg/html-media/raw-file/tip/encrypted-media/encrypted-media.html and the long threads on the public-html list about it. > It sounds like you're describing the cross domain tainting business. We're talking about the interaction of DRMed video with WebGL. -Boris ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From Rem...@ Thu Feb 28 09:26:20 2013 From: Rem...@ (Arnaud, Remi) Date: Thu, 28 Feb 2013 17:26:20 +0000 Subject: [Public WebGL] EME and its interaction with WebGL In-Reply-To: <512F885F.4080209@mozilla.com> References: ,<512F885F.4080209@mozilla.com> Message-ID: <24C3C22C-3769-46B8-9F09-7752C2EE329D@amd.com> My understanding from the business perspective is that EME will restrict high quality/ high resolution content, but will still provide lower quality video as fallback when the system fails to setup a fully secured pipeline. In other words, just like Benoit says, webGL may never be allowed access to EME content - but will still be able to use lower quality videos of the same content. 
After all, I see no harm in letting the content provider decides which quality of video they want to allow on a spinning cube, rather than full screen as intended to be experienced :-) Regards - Remi On Feb 28, 2013, at 8:43 AM, "Benoit Jacob" wrote: > > At this early stage I don't suppose that browser developers would have > fully thought out ideas of what exactly will be possible with EME video, > but there are basically two approaches that could be taken to achieve a > vague semblance of self-consistency: > - either one decides that EME's goal is only to prevent ripping at the > stream level. Under that theory, EME video wouldn't be subject to more > WebGL-related restrictions than regular video. > - or one decides that EME really tries to prevent people from reading > back decoded frames. In which case one could think that that is similar > to not-same-origin video; except that if one wants to prevent not just > regular Web content, but also privileged code like Firefox add-ons from > getting the frames, that won't be enough, and in that case one would > have to add a lot of drastic restrictions on what can be done with EME > video that would include disabling usage in WebGL and much more (since > at that point EME video would need a separate rendering pipeline > bypassing the browser compositor completely). > > Until these decisions are made (and I don't know that they are as of > yet) we can't know the answers to your specific questions. > > Benoit > > > On 13-02-28 09:56 AM, Florian B?sch wrote: >> I was wondering about the implications of having an EME element >> (video) on the page and its interaction with WebGL. >> >> The HTMLVideoElement can be used by calls to texture2D, and as I've >> shown dependent lookups and vertex shader lookups can be used to >> extract content (other than toDataURL and readPixels). >> >> Is it a given that the presence of an EME video on a page also >> containing a gl context will disable: >> - dependent lookups in fragment shaders? >> - vertex shader texture lookups? >> - toDataURL of canvases? >> - readPixels? > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Thu Feb 28 20:22:36 2013 From: kbr...@ (Kenneth Russell) Date: Thu, 28 Feb 2013 20:22:36 -0800 Subject: [Public WebGL] WEBGL_depth_texture In-Reply-To: References: <24E0FB64-744D-42E3-8C70-8204C8A7C952@transgaming.com> Message-ID: https://www.khronos.org/registry/webgl/extensions/WEBGL_depth_texture/ has been updated to track the change to the underlying ANGLE_depth_texture extension. -Ken On Thu, Feb 21, 2013 at 10:50 AM, Kenneth Russell wrote: > This change sounds fine to me. It will enable the extension on more > platforms and guide developers toward writing portable shaders. > > -Ken > > > On Wed, Feb 20, 2013 at 4:03 PM, Shannon Woods > wrote: >> >> WEBGL_depth_texture, currently in the process of ratification, has language >> which poses some difficulty for ANGLE. 
Both WEBGL_depth_texture and >> ANGLE_depth_texture, which it references, specify that the depth value is >> stored in the r, g, and b channels, with alpha being undefined. This >> language was included to allow for inconsistencies in the alpha value >> returned when performing such samples via D3D9. However, conforming to this >> creates a bit of a challenge when implemented over D3D11, as the depth value >> is then only returned by D3D in the r channel, with the other channels >> receiving 0, 0, 1 default values instead. >> >> Our issues would be resolved by changing ANGLE_depth_texture, as well as >> WEBGL_depth_texture, to guarantee the depth value only in the r channel, and >> extending the warning about implementation dependency to cover the g and b >> channels in addition to alpha. Would there be any objections to making this >> change? >> >> Thank you, >> _____________________________________________________________________ >> Shannon Woods >> Technical Manager, Graphics Technology >> >> TransGaming >> T: +1 416-979-9900 x 408 | E: >> shannon.woods...@ >> >> TransGaming.com | GameTreeMac.com | GameTreeTV.com >> _____________________________________________________________________ >> This email and any files transmitted herein are confidential and intended >> solely for the use of the individual or entity to whom they are addressed. >> If you are not the intended recipient you are notified that disclosing, >> copying, distributing or taking any action in reliance on the contents of >> this information is strictly prohibited. >> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl -----------------------------------------------------------
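For illustration, the portable usage pattern implied by the proposed change, reading the depth value only from the r channel, looks roughly like this. This is a sketch only: the dimensions, variable names and framebuffer setup are placeholders, not part of either extension's text.

    // Sketch: creating a depth texture with WEBGL_depth_texture and attaching
    // it to a framebuffer. "gl" is a WebGLRenderingContext; width and height
    // are placeholder dimensions.
    var ext = gl.getExtension("WEBGL_depth_texture");
    if (ext) {
      var depthTex = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, depthTex);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT, width, height, 0,
                    gl.DEPTH_COMPONENT, gl.UNSIGNED_INT, null);
      var fb = gl.createFramebuffer();
      gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
      gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
                              gl.TEXTURE_2D, depthTex, 0);
    }
    // When sampling the depth texture in a shader, read only the r channel;
    // under the revised wording the g, b and a channels may be
    // implementation-dependent:
    //   float depth = texture2D(u_depthMap, v_texCoord).r;
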