From khr...@ Tue Nov 5 01:39:27 2013
From: khr...@ (Gregg Tavares)
Date: Tue, 5 Nov 2013 18:39:27 +0900
Subject: [Public WebGL] premultipliedAlpha: false needs to be consistent across browsers?
Message-ID:

Hey there,

I was answering a question about premultipliedAlpha: false and found there are some issues. I think the bugs are in Firefox, but I don't know for sure.

It seems like for the flag to have any meaning it has to produce the same results across browsers. Right now, though, Chrome, Firefox on OS X, and Firefox on Windows all produce different results. The Firefox on OS X behavior I believe is clearly a bug, and I filed one. Between Firefox on Windows and Chrome I'm not sure which is correct, though I'm pretty sure it's Chrome; I could be wrong.

See screenshots here:
http://greggman.com/downloads/examples/premultipliedalpha-issue.html

The sample they were made from is here:
http://jsfiddle.net/greggman/pWPPC/

Ideally it seems like the spec should be updated to make it clear what math is supposed to happen when blending with the background, so implementations can be consistent.

From oet...@ Tue Nov 5 04:30:35 2013
From: oet...@ (Olli Etuaho)
Date: Tue, 5 Nov 2013 13:30:35 +0100
Subject: [Public WebGL] premultipliedAlpha: false needs to be consistent across browsers?
In-Reply-To:
References:
Message-ID:

Hi,

a similar issue was last discussed in July, when I brought it up. The conclusion was that specifying the behavior of premultipliedAlpha: true when supplied with non-premultiplied data (top left in your test case) is quite problematic because of third-party compositors. There are also decisions to be made, such as whether to clamp or renormalize the color values. See for example:

http://www.khronos.org/webgl/public-mailing-list/archives/1307/msg00009.html

However, your test case does reveal bugs in the premultipliedAlpha: false case, which shouldn't have any of those problems. There's definitely something odd going on in Firefox. On Firefox on Linux the image changes when it is composited repeatedly, and settles on an image similar to the one in your Windows screenshot. The Chrome images look like what I would expect to see.
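For concreteness, the disagreement boils down to what the browser's compositor does with canvases like the following sketch (the element ids and helper name are illustrative, not taken from the fiddle above):

    // Clear each canvas to full red at 50% alpha and let the browser
    // composite it over the page background.
    function fillCanvas(id, premultiplied) {
      var gl = document.getElementById(id).getContext("webgl", {
        alpha: true,
        premultipliedAlpha: premultiplied
      });
      gl.clearColor(1.0, 0.0, 0.0, 0.5);  // non-premultiplied data: r > a
      gl.clear(gl.COLOR_BUFFER_BIT);
    }

    fillCanvas("canvas-premultiplied", true);
    fillCanvas("canvas-straight", false);

With premultipliedAlpha: false there is an unambiguous expected result: per channel, source-over compositing over a background color bg gives result = color * alpha + bg * (1 - alpha), so full red at 50% alpha over a white page should come out as (1.0, 0.5, 0.5).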
From kbr...@ Tue Nov 5 18:48:57 2013
From: kbr...@ (Kenneth Russell)
Date: Tue, 5 Nov 2013 18:48:57 -0800
Subject: [Public WebGL] premultipliedAlpha: false needs to be consistent across browsers?
In-Reply-To:
References:
Message-ID:

Hi Gregg, Olli,

Agreed that the spec clearly needs to be improved, given the obvious differences in behavior between browsers. Where do you suggest putting the clarification -- http://www.khronos.org/registry/webgl/specs/latest/1.0/#PREMULTIPLIED_ALPHA or http://www.khronos.org/registry/webgl/specs/latest/1.0/#5.2.1 ?

As mentioned in the earlier email thread, I think it's impractical to spec how the browser compositor handles out-of-range colors -- those where alpha is less than r, g, or b and the canvas context was allocated with premultipliedAlpha: true. Spec'ing out-of-range colors could significantly slow down compositing of WebGL content in the general case. However, Gregg, the other three cases in your test (all but "canvas: premultipliedAlpha: true / data: not premultiplied") have definite correct answers.

Gregg, does Canvas.toDataURL show the same incorrect results for your test case? If not, at least a test comparing the browser's rendering to a reference image should be added to the conformance/manual/ tests.

-Ken

From oet...@ Wed Nov 6 02:13:32 2013
From: oet...@ (Olli Etuaho)
Date: Wed, 6 Nov 2013 11:13:32 +0100
Subject: [Public WebGL] Testing MAX_CUBE_MAP_TEXTURE_SIZE
Message-ID:

Hi all,

The WebGL 1.0.3 conformance test extensions/webgl-compressed-texture-size-limit.html tries to create cube maps with width and height of MAX_CUBE_MAP_TEXTURE_SIZE. This is problematic on some current hardware configurations, since the limit may be larger than what textures can actually fit into memory.

We cannot adjust MAX_CUBE_MAP_TEXTURE_SIZE in our drivers to address this, since in OpenGL 4.1 and later specifications the minimum value of MAX_CUBE_MAP_TEXTURE_SIZE is 16384. With the most space-consuming texture format we currently support, it's possible to hit 32 gigabytes with a single mipmapped cube map of that size. So it's clearly not the intention of the spec that it should be possible to actually allocate this, but rather that the driver would simply generate OUT_OF_MEMORY when appropriate; it could still be possible to use, for example, sparse textures of the maximum size, which have lower memory requirements.

To summarize: in OpenGL, MAX_CUBE_MAP_TEXTURE_SIZE was not designed to communicate whether a cube map can fit into memory, but simply to tell what size of textures the hardware is capable of processing in case they do fit.

Now, I see three ways forward from here:

A) Browsers adjust MAX_CUBE_MAP_TEXTURE_SIZE downwards according to memory constraints. To really guarantee that any maximum-sized cube map can be allocated, the limit would need to be set according to the most space-consuming formats and take into account memory consumed by the OS. This would likely put it in the 1024-2048 range on common hardware.

B) Browsers adjust MAX_CUBE_MAP_TEXTURE_SIZE downwards so that it is possible to allocate a cube map of the maximum size in WebGL with at least one texture format, likely putting the constant at 4096 on common hardware.

C) The test extensions/webgl-compressed-texture-size-limit.html is changed not to test huge cube maps.

A would needlessly restrict the GPU's abilities, to the extent that it would likely complicate writing apps, so I don't think that's a very good solution. B would be good for testing and would not likely cause problems in apps, but the constant would have a relatively convoluted meaning. C would be my choice, since it would be in line with the other OpenGL specs. C was chosen earlier when the same issue was encountered in the ES 3.0 tests:

https://cvs.khronos.org/bugzilla/show_bug.cgi?id=10901

Opinions?

Regards, Olli
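As a sanity check on the 32-gigabyte figure above, assuming the most space-consuming format is 16 bytes per texel (e.g. 32-bit float RGBA; the exact format isn't stated, so this is a guess):

    var size = 16384;                              // GL 4.1+ minimum for MAX_CUBE_MAP_TEXTURE_SIZE
    var level0 = 6 * size * size * 16;             // six faces at the base level, 16 bytes/texel
    var mipmapped = level0 * 4 / 3;                // a full mip chain adds about one third
    console.log(mipmapped / Math.pow(1024, 3));    // prints 32 (GiB)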
From pya...@ Wed Nov 6 02:30:20 2013
From: pya...@ (Florian Bösch)
Date: Wed, 6 Nov 2013 11:30:20 +0100
Subject: [Public WebGL] Testing MAX_CUBE_MAP_TEXTURE_SIZE
In-Reply-To:
References:
Message-ID:

Option A is infeasible, because often neither the driver nor the browser will know the amount of available VRAM at the time you might want it (many mobile devices don't have dedicated VRAM, and so forth). Option B doesn't guarantee that the test would succeed, for the same reasons as option A. Option C is favorable in my opinion because:

- Testing at the maximum size, even on platforms that might have enough RAM, can lead to unacceptably bad test-case performance (on systems that swap, it can take quite a long time until the out-of-memory error is issued).
- It does away with a test that can't be guaranteed to succeed.

That being said, I'd welcome some way to figure out ahead of time whether something can be allocated. Receiving an out-of-memory error and a subsequent context loss might seem sufficient to some, but I'd rather find that out ahead of time than have to reboot the entire application from scratch.

From oet...@ Wed Nov 6 04:20:14 2013
From: oet...@ (Olli Etuaho)
Date: Wed, 6 Nov 2013 13:20:14 +0100
Subject: [Public WebGL] Testing MAX_CUBE_MAP_TEXTURE_SIZE
In-Reply-To:
References:
Message-ID:

Desktop OpenGL has one more tool for querying beforehand whether something can be allocated: proxy textures (see section 8.22 of the OpenGL 4.4 spec). Proxy textures are not an ideal solution either, since they don't give you very strict guarantees, but at least they make it possible to query maximum limits with different format and size combinations. And proxy textures are not a part of GLES.

Developing a complete solution to GPU memory management across different hardware is quite difficult, and it's unclear to me what such a solution would even look like, since memory can be shared between the CPU and GPU, swapping may or may not be available (or desirable, depending on what would be swapped), and implementations are generally allowed to use some of the GPU memory for implementation-specific purposes.
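Since WebGL has no proxy-texture equivalent, the closest an application can get to probing ahead of time is to attempt the allocation, inspect the error state before committing to it, and fall back to smaller sizes. A minimal sketch (note that a NO_ERROR result is still not a hard guarantee, since drivers may defer the actual allocation):

    function allocCubeMap(gl, size, attempts) {
      for (var i = 0; i < (attempts || 4); ++i, size >>= 1) {
        var tex = gl.createTexture();
        gl.bindTexture(gl.TEXTURE_CUBE_MAP, tex);
        for (var face = 0; face < 6; ++face) {
          gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, gl.RGBA,
                        size, size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
        }
        if (gl.getError() === gl.NO_ERROR) return tex;  // allocation accepted
        gl.deleteTexture(tex);                          // OUT_OF_MEMORY; retry at half size
      }
      return null;
    }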
From oet...@ Wed Nov 6 05:22:41 2013
From: oet...@ (Olli Etuaho)
Date: Wed, 6 Nov 2013 14:22:41 +0100
Subject: [Public WebGL] premultipliedAlpha: false needs to be consistent across browsers?
In-Reply-To:
References:
Message-ID:

Hi Ken, Gregg,

I'd put the clarification in http://www.khronos.org/registry/webgl/specs/latest/1.0/#PREMULTIPLIED_ALPHA , since it contains all the other information on how to process pixels originating from WebGL in other parts of the browser.

The manual test could also be implemented without relying on toDataURL, by rendering the expected visual result by some other means, such as clearing another WebGL canvas with the page's background color before drawing the same content as in the test canvas.

-Olli
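A rough sketch of that reference approach, assuming for simplicity that the page background is plain white (a real test would have to use the actual background color):

    var ref = document.getElementById("reference").getContext("webgl", {
      premultipliedAlpha: false
    });
    ref.clearColor(1.0, 1.0, 1.0, 1.0);  // start from the page background, fully opaque
    ref.clear(ref.COLOR_BUFFER_BIT);
    ref.enable(ref.BLEND);
    // Source-over blending for color; keep destination alpha at 1 so the
    // browser's own compositing step cannot change the result.
    ref.blendFuncSeparate(ref.SRC_ALPHA, ref.ONE_MINUS_SRC_ALPHA, ref.ZERO, ref.ONE);
    // ...then draw the same content as in the test canvas and compare pixels.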
From jgi...@ Wed Nov 6 12:01:50 2013
From: jgi...@ (Jeff Gilbert)
Date: Wed, 6 Nov 2013 12:01:50 -0800 (PST)
Subject: [Public WebGL] Testing MAX_CUBE_MAP_TEXTURE_SIZE
In-Reply-To:
References:
Message-ID: <2015959183.6449297.1383768110354.JavaMail.zimbra@mozilla.com>

I would prefer changing the tests not to assume that creating a max-size object will not OOM. We should still be checking that we don't get INVALID_VALUE at the max.

-Jeff

From oet...@ Thu Nov 7 06:26:15 2013
From: oet...@ (Olli Etuaho)
Date: Thu, 7 Nov 2013 15:26:15 +0100
Subject: [Public WebGL] Testing MAX_CUBE_MAP_TEXTURE_SIZE
In-Reply-To: <2015959183.6449297.1383768110354.JavaMail.zimbra@mozilla.com>
References: <2015959183.6449297.1383768110354.JavaMail.zimbra@mozilla.com>
Message-ID:

Tests that can cause OOM, and thus context loss, are problematic. In Chrome, the page will be forbidden from creating any more WebGL contexts after a context loss, which would cause all subsequent tests to fail. So we should not have such tests in the main automated test suite. We could put such a test in the extras, though. Do you think this would be good enough? We can still test that textures beyond the max size will fail, and that TEXTURE_2D textures with the maximum width or height will not.
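A sketch of the kind of check this implies (illustrative, not the actual conformance test code): above the advertised maximum the error must be INVALID_VALUE, while at the maximum only OUT_OF_MEMORY is an acceptable failure:

    var max = gl.getParameter(gl.MAX_CUBE_MAP_TEXTURE_SIZE);
    var tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_CUBE_MAP, tex);

    gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X, 0, gl.RGBA, max + 1, max + 1,
                  0, gl.RGBA, gl.UNSIGNED_BYTE, null);
    console.assert(gl.getError() === gl.INVALID_VALUE);   // over the limit: must reject

    gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X, 0, gl.RGBA, max, max,
                  0, gl.RGBA, gl.UNSIGNED_BYTE, null);
    var err = gl.getError();
    console.assert(err === gl.NO_ERROR || err === gl.OUT_OF_MEMORY);  // at the limit: OOM tolerated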
From cal...@ Thu Nov 7 18:01:28 2013
From: cal...@ (Mark Callow)
Date: Fri, 08 Nov 2013 11:01:28 +0900
Subject: [Public WebGL] Testing MAX_CUBE_MAP_TEXTURE_SIZE
In-Reply-To:
References: <2015959183.6449297.1383768110354.JavaMail.zimbra@mozilla.com>
Message-ID: <527C45F8.1020406@artspark.co.jp>

Refusing to create any more contexts for the page after it has made just one excessive attempt seems too extreme. Given the uncertainty about whether a given texture can be accommodated, one can easily imagine an application written to try at its preferred size and, if it gets OOM, retry at a smaller size.

Regards

-Mark

From jgi...@ Fri Nov 8 17:38:08 2013
From: jgi...@ (Jeff Gilbert)
Date: Fri, 8 Nov 2013 17:38:08 -0800 (PST)
Subject: [Public WebGL] Rendering feedback loops in WebGL
In-Reply-To: <5270350A.4010102@mozilla.com>
References: <934383703.5068552.1382941543003.JavaMail.zimbra@mozilla.com> <526F1647.5090702@artspark.co.jp> <5270350A.4010102@mozilla.com>
Message-ID: <976670393.6899649.1383961088835.JavaMail.zimbra@mozilla.com>

I submitted a pull request to add language to the spec formalizing the generation of INVALID_OPERATION in cases where an operation would cause feedback, as per the GLES2 spec:
https://github.com/KhronosGroup/WebGL/pull/415

-Jeff

From oet...@ Mon Nov 11 07:14:53 2013
From: oet...@ (Olli Etuaho)
Date: Mon, 11 Nov 2013 16:14:53 +0100
Subject: [Public WebGL] Testing MAX_CUBE_MAP_TEXTURE_SIZE
In-Reply-To: <527C45F8.1020406@artspark.co.jp>
References: <2015959183.6449297.1383768110354.JavaMail.zimbra@mozilla.com> <527C45F8.1020406@artspark.co.jp>
Message-ID:

I agree that Chrome could be a bit more lenient -- to be exact, I think context loss is allowed one or two times, but no more than that. However, I don't think it changes the situation much whether a browser allows context loss once per page or ten times. If it were ten times, with the counter being reset on reload, and we decided to allow these unstable tests into the conformance suite, we would still have to put a limit on the number of tests that can cause context loss and carefully maintain it. Then we'd have to ask which tests among those that can cause context loss are the most important to include in the test suite, and whether this will change over time.

Long-term I hope we can improve the stability in other ways so that the context loss mechanism comes up less often, but for now I think keeping the tests that can lose context in the extras, out of the main suite, is the best we can do. I actually thought that this policy was based on earlier consensus, because this has been done with multiple tests already.

-Olli
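For context, the recovery path an application has when an allocation does trigger context loss is the standard event pair (the initResources helper here is hypothetical):

    canvas.addEventListener("webglcontextlost", function (e) {
      e.preventDefault();  // tell the browser we intend to handle restoration
    }, false);
    canvas.addEventListener("webglcontextrestored", function () {
      initResources(gl);   // hypothetical: recreate every texture, buffer and program
    }, false);

Whether the browser will actually offer a restored context after repeated out-of-memory losses is exactly the limit being discussed here.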
From zmo...@ Mon Nov 11 12:55:17 2013
From: zmo...@ (Zhenyao Mo)
Date: Mon, 11 Nov 2013 12:55:17 -0800
Subject: [Public WebGL] Testing MAX_CUBE_MAP_TEXTURE_SIZE
In-Reply-To:
References: <2015959183.6449297.1383768110354.JavaMail.zimbra@mozilla.com> <527C45F8.1020406@artspark.co.jp>
Message-ID:

I think we should provide a command-line switch to lift the limit on how many context losses you can have in a browser, so in testing we don't have to worry about that.

Mo

From baj...@ Mon Nov 11 13:04:48 2013
From: baj...@ (Brandon Jones)
Date: Mon, 11 Nov 2013 13:04:48 -0800
Subject: [Public WebGL] Testing MAX_CUBE_MAP_TEXTURE_SIZE
In-Reply-To:
References: <2015959183.6449297.1383768110354.JavaMail.zimbra@mozilla.com> <527C45F8.1020406@artspark.co.jp>
Message-ID:

I agree, Mo, especially for the stability of internal testing. That doesn't help users who are running the test normally, however. We need to make sure that the average user can still navigate to the test and run it without issue to see how compliant their device is. This is especially true for mobile devices, where setting command-line flags is either awkward or impossible for most users.
From kbr...@ Mon Nov 11 13:52:17 2013
From: kbr...@ (Kenneth Russell)
Date: Mon, 11 Nov 2013 13:52:17 -0800
Subject: [Public WebGL] Testing MAX_CUBE_MAP_TEXTURE_SIZE
In-Reply-To:
References: <2015959183.6449297.1383768110354.JavaMail.zimbra@mozilla.com> <527C45F8.1020406@artspark.co.jp>
Message-ID:

Focusing on the cube map texture tests: reducing the maximum tested cube map size -- but still requiring that allocation succeed at that size -- sounds like the best of some unsatisfying options. I suggest modifying the test to allocate up to a 2K cube map, if that seems acceptable. If that maximum texture size is advertised, I think it's reasonable for developers to expect that the implementation will succeed in allocating one, assuming no other applications on the system are consuming all of the remaining VRAM.

Modifying the test to allow GL_OUT_OF_MEMORY errors at any size does not seem like a good option. It would allow poor implementations to advertise large maximum texture sizes and then simply fail to allocate them, while still remaining conformant. Artificially restricting a WebGL implementation's maximum texture size would force least-common-denominator behavior.

The discussion about OUT_OF_MEMORY and context loss in the conformance tests is an important one, but to avoid conflating topics on this thread I suggest we start a new one for that.

-Ken
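Expressed as test logic, the suggestion would look roughly like this (illustrative only): allocation must succeed up to 2K even when the advertised maximum is larger, with error checks at the maximum kept separate:

    var max = gl.getParameter(gl.MAX_CUBE_MAP_TEXTURE_SIZE);
    var size = Math.min(2048, max);
    var tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_CUBE_MAP, tex);
    for (var face = 0; face < 6; ++face) {
      gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, gl.RGBA, size,
                    size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
    }
    console.assert(gl.getError() === gl.NO_ERROR);  // OUT_OF_MEMORY here would be a failure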
From kbr...@ Mon Nov 11 15:22:01 2013
From: kbr...@ (Kenneth Russell)
Date: Mon, 11 Nov 2013 15:22:01 -0800
Subject: [Public WebGL] Rendering feedback loops in WebGL
In-Reply-To: <976670393.6899649.1383961088835.JavaMail.zimbra@mozilla.com>
References: <934383703.5068552.1382941543003.JavaMail.zimbra@mozilla.com> <526F1647.5090702@artspark.co.jp> <5270350A.4010102@mozilla.com> <976670393.6899649.1383961088835.JavaMail.zimbra@mozilla.com>
Message-ID:

Thanks. Merged this change.

From tpa...@ Mon Nov 11 15:34:37 2013
From: tpa...@ (Tony Parisi)
Date: Mon, 11 Nov 2013 15:34:37 -0800
Subject: [Public WebGL] Rendering feedback loops in WebGL
In-Reply-To:
References: <934383703.5068552.1382941543003.JavaMail.zimbra@mozilla.com> <526F1647.5090702@artspark.co.jp> <5270350A.4010102@mozilla.com> <976670393.6899649.1383961088835.JavaMail.zimbra@mozilla.com>
Message-ID:

My recommendation: I'm with Mark Callow. How about adding support to the WebGL Inspector (and other browser-based debugging tools), so developers can catch the error that way, without adding the overhead to the draw call?

-- Tony Parisi
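The case at issue is a draw call that samples a texture while that texture is attached to the current framebuffer, roughly as follows (assuming the current program actually samples tex on the active texture unit):

    var fb = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                            gl.TEXTURE_2D, tex, 0);
    gl.bindTexture(gl.TEXTURE_2D, tex);  // same texture bound for sampling
    gl.drawArrays(gl.TRIANGLES, 0, 3);   // would write to and read from tex
    console.assert(gl.getError() === gl.INVALID_OPERATION);

The question below is whether performing this attachment-versus-binding comparison on every draw is measurable next to the validation draw calls already perform.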
-- Tony Parisi tparisi...@ CTO at Large 415.902.8002 Skype auradeluxe Follow me on Twitter! http://twitter.com/auradeluxe Read my blog at http://www.tonyparisi.com/ Learn WebGL http://learningwebgl.com/ Read my book! *WebGL, Up and Running* http://shop.oreilly.com/product/0636920024729.do http://www.amazon.com/dp/144932357X

From jgi...@ Mon Nov 11 16:16:59 2013 From: jgi...@ (Jeff Gilbert) Date: Mon, 11 Nov 2013 16:16:59 -0800 (PST) Subject: [Public WebGL] Rendering feedback loops in WebGL In-Reply-To: References: <5270350A.4010102@mozilla.com> <976670393.6899649.1383961088835.JavaMail.zimbra@mozilla.com> Message-ID: <396222431.7086927.1384215419497.JavaMail.zimbra@mozilla.com>

The desire not to add this check to draw calls presumes that it adds significant overhead relative to the existing cost of a draw call. The extra overhead for this check shouldn't be large, and draw calls are already considered expensive.

-Jeff
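As a sketch of the case the merged spec language covers (the setup below is illustrative, not taken from the pull request, and assumes a shader program that samples the texture is already bound):

    var tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 256, 256, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, null);

    var fbo = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                            gl.TEXTURE_2D, tex, 0);

    // tex is both the render target and bound for sampling, so this draw
    // would read from and write to the same image -- a feedback loop.
    gl.drawArrays(gl.TRIANGLES, 0, 3);
    console.log(gl.getError() === gl.INVALID_OPERATION); // true under the new language

This per-draw-call validation is the check whose cost is debated in the messages that follow.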
From cal...@ Mon Nov 11 18:37:34 2013 From: cal...@ (Mark Callow) Date: Tue, 12 Nov 2013 11:37:34 +0900 Subject: [Public WebGL] Rendering feedback loops in WebGL In-Reply-To: <396222431.7086927.1384215419497.JavaMail.zimbra@mozilla.com> References: <5270350A.4010102@mozilla.com> <976670393.6899649.1383961088835.JavaMail.zimbra@mozilla.com> <396222431.7086927.1384215419497.JavaMail.zimbra@mozilla.com> Message-ID: <5281946E.9020809@artspark.co.jp>

On 2013/11/12 9:16, Jeff Gilbert wrote:
> The desire not to add this check to draw calls presumes that it adds significant overhead relative to the existing cost of a draw call.
> The extra overhead for this check shouldn't be large, and draw calls are already considered expensive.

My desire not to add this check comes from the fact that each check adds a little more overhead, that we'll quickly forget how many of these "overhead ... shouldn't be large" checks we have added, and that each one penalizes correctly written applications in their most basic function: drawing.

Regards

-Mark

From oet...@ Tue Nov 12 05:52:50 2013 From: oet...@ (Olli Etuaho) Date: Tue, 12 Nov 2013 14:52:50 +0100 Subject: [Public WebGL] Testing MAX_CUBE_MAP_TEXTURE_SIZE In-Reply-To: References: <2015959183.6449297.1383768110354.JavaMail.zimbra@mozilla.com> <527C45F8.1020406@artspark.co.jp> Message-ID:

It looks like there's relatively wide support for changing the test now, so here's a pull request: https://github.com/KhronosGroup/WebGL/pull/418

I agree we should continue discussing the OOM and context loss issues separately from this.

-Olli
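As an aside, the application-side pattern Mark described earlier in the thread -- try the preferred size and back off on OUT_OF_MEMORY -- might look like this hypothetical helper (the name and the halving strategy are assumptions, not anything from the test suite):

    function allocateTexture(gl, preferredSize) {
      for (var size = preferredSize; size >= 1; size = size >> 1) {
        var tex = gl.createTexture();
        gl.bindTexture(gl.TEXTURE_2D, tex);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, size, size, 0,
                      gl.RGBA, gl.UNSIGNED_BYTE, null);
        if (gl.getError() === gl.NO_ERROR) {
          return { texture: tex, size: size }; // allocation succeeded
        }
        gl.deleteTexture(tex); // OUT_OF_MEMORY or similar: halve and retry
      }
      return null; // nothing could be allocated
    }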
From wba...@ Fri Nov 15 14:21:57 2013 From: wba...@ (Bill Baxter) Date: Fri, 15 Nov 2013 14:21:57 -0800 Subject: [Public WebGL] Determining the number of physical pixels for a particular devicePixelRatio Message-ID:

There appears to be a bit of under-spec'ed behavior around how fractional devicePixelRatios are handled, and it has led to different behaviors in different browsers.

Firefox, Chrome, and IE11 all change the devicePixelRatio when using browser zoom. For example, 125% zoom results in a 1.25 devicePixelRatio on what would otherwise have been a devicePixelRatio=1.0 display.

The wiki (http://www.khronos.org/webgl/wiki/HandlingHighDPI) says this is how you resize your canvas in order to be 1:1 with physical device pixels:

// set the size of the drawingBuffer based on the size it's displayed.
canvas.width = canvas.clientWidth * devicePixelRatio;
canvas.height = canvas.clientHeight * devicePixelRatio;

However, in practice this only works in Firefox. For Chrome and IE you need to round:

canvas.width = Math.round(canvas.clientWidth * devicePixelRatio);
canvas.height = Math.round(canvas.clientHeight * devicePixelRatio);

Here's a little demo you can play with: www.billbaxter.com/fractional-pixel-ratio

The result of using the wrong method for your browser is that for some canvas sizes & zooms, you can't get pixel-perfect rendering. It would be nice to have this behavior actually spec'ed somewhere, and to have everyone follow it.

--bb
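The two variants can be folded into one helper for side-by-side comparison; this is a sketch assuming a gl context is available for the viewport call, and the mode parameter is purely illustrative:

    function resizeCanvasToDisplaySize(canvas, gl, mode) {
      var w = canvas.clientWidth * window.devicePixelRatio;
      var h = canvas.clientHeight * window.devicePixelRatio;
      if (mode === 'round') {
        w = Math.round(w);
        h = Math.round(h);
      }
      canvas.width = w;  // any remaining fraction is floored by the
      canvas.height = h; // unsigned long coercion discussed below
      gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
    }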
From jam...@ Fri Nov 15 14:29:47 2013 From: jam...@ (James Robinson) Date: Fri, 15 Nov 2013 14:29:47 -0800 Subject: [Public WebGL] Determining the number of physical pixels for a particular devicePixelRatio In-Reply-To: References: Message-ID:

On Fri, Nov 15, 2013 at 2:21 PM, Bill Baxter wrote:
> canvas.width = canvas.clientWidth * devicePixelRatio;
> canvas.height = canvas.clientHeight * devicePixelRatio;

The width and height attributes are unsigned longs in WebIDL, so the fractional values will be floored for the width and height. That's probably what you want - assuming you set up the gl viewport and draw calls appropriately.

> However, in practice this only works in Firefox. For Chrome and IE you need to round.

It seems more likely that there's a bug in either the GL viewport handling here, the draw calls, or the scaling done by the browser when presenting the WebGL framebuffer to the viewport that results in the different rendering. The handling of devicePixelRatio and HTMLCanvasElement's attributes is consistent across different browsers. Could you try to isolate which it is? For instance, you can do readbacks to figure out if the WebGL framebuffer is the same in different browsers, to tell whether it's the drawing code behaving differently or the framebuffer being presented differently.

> The result of using the wrong method for your browser is that for some canvas sizes & zooms, you can't get pixel-perfect rendering. It would be nice to have this behavior actually spec'ed somewhere.

The behavior here is all spec'ed, AFAIK. What part do you think is underspecified?

- James

From wba...@ Fri Nov 15 15:15:23 2013 From: wba...@ (Bill Baxter) Date: Fri, 15 Nov 2013 15:15:23 -0800 Subject: [Public WebGL] Determining the number of physical pixels for a particular devicePixelRatio In-Reply-To: References: Message-ID:

On Fri, Nov 15, 2013 at 2:29 PM, James Robinson wrote:
> The width and height attributes are unsigned longs in WebIDL, so the fractional values will be floored for the width and height. That's probably what you want.

Well, what I _want_ is to set up the canvas buffer to use the same number of device pixels that the browser is using when it blits the canvas to the screen. With a devicePixelRatio of 1.1 that works out to 570.9 device pixels, but of course the browser doesn't actually use a fractional number of pixels. Firefox rounds that down to 570; Chrome and IE round that up to 571.

> It seems more likely that there's a bug in either the GL viewport handling here, the draw calls, or the scaling done by the browser when presenting the WebGL framebuffer to the viewport.

Yep, those seem likely things, if by that you mean the browser's GL viewport handling, etc. My code is at http://billbaxter.com/fractional-pixel-ratio if you believe I am doing something wrong with my viewport. But if so, it would be hard to explain why the same code has different behaviors on different browsers. The code just draws a bunch of 1-pixel-wide rectangles.
> Could you try to isolate which it is? For instance, you can do readbacks to figure out if the WebGL framebuffer is the same in different browsers.

I'll check, but I'm pretty sure the readback will show nicely pixel-aligned lines in all cases. I think it's the browser's presentation of the buffer that's different.

> The behavior here is all spec'ed, AFAIK. What part do you think is underspecified?

How many device pixels should be spanned by a 519-pixel-wide canvas with a devicePixelRatio of 1.1? At least I couldn't find it in a spec anywhere. Would be happy to be shown pointers. The wiki and a non-normative comment in the WebGL spec were the only things I found that seemed relevant:
"""
A WebGL application can achieve a 1:1 ratio between drawing buffer pixels and on-screen pixels on high-definition displays by examining properties like window.devicePixelRatio, scaling the canvas's width and height by that factor, and setting its CSS width and height to the original width and height. An application can simulate the effect of running on a higher-resolution display simply by scaling up the canvas's width and height properties.
"""

--bb

From jam...@ Fri Nov 15 15:22:41 2013 From: jam...@ (James Robinson) Date: Fri, 15 Nov 2013 15:22:41 -0800 Subject: [Public WebGL] Determining the number of physical pixels for a particular devicePixelRatio In-Reply-To: References: Message-ID:

On Fri, Nov 15, 2013 at 3:15 PM, Bill Baxter wrote:
> Well, what I _want_ is to set up the canvas buffer to use the same number of device pixels that the browser is using when it blits the canvas to the screen. With a devicePixelRatio of 1.1 that works out to 570.9 device pixels, but of course the browser doesn't actually use a fractional number of pixels. Firefox rounds that down to 570; Chrome and IE round that up to 571.

No, it doesn't. The number of pixels used for the WebGL framebuffer is exactly equal to the width and height attributes on the HTMLCanvasElement. Those are floored.
The browser will then later composite this into the framebuffer, but whether or not it uses any intermediate steps when blitting is up to the browser. In Chrome, we'll blit from the WebGL framebuffer to the context used for the device without any intermediate buffers. At no point does the device pixel ratio come into play here.

> I'll check, but I'm pretty sure the readback will show nicely pixel-aligned lines in all cases. I think it's the browser's presentation of the buffer that's different.

Please do check. Maybe browsers are using different filtering when doing the blit - knowing whether the source pixels are identical or not will help isolate if that's the case.

> How many device pixels should be spanned by a 519-pixel-wide canvas with a devicePixelRatio of 1.1?

Whatever you say in HTMLCanvasElement's width and height attributes is what the browser will use.

- James
From wba...@ Fri Nov 15 16:04:16 2013 From: wba...@ (Bill Baxter) Date: Fri, 15 Nov 2013 16:04:16 -0800 Subject: [Public WebGL] Determining the number of physical pixels for a particular devicePixelRatio In-Reply-To: References: Message-ID:

On Fri, Nov 15, 2013 at 3:22 PM, James Robinson wrote:
> No, it doesn't. The number of pixels used for the WebGL framebuffer is exactly equal to the width and height attributes on the HTMLCanvasElement. Those are floored.

Yes, that's working fine, and those are floored as soon as you try to set them on the canvas element. console.log(canvas.width = 99.9) will print '99'.

> The browser will then later composite this into the framebuffer, but whether or not it uses any intermediate steps when blitting is up to the browser. In Chrome, we'll blit from the WebGL framebuffer to the context used for the device without any intermediate buffers.

Yes, something in this part appears to be different between browsers.

> At no point does the device pixel ratio come into play here.

Somebody is deciding how many actual physical pixels on my display a canvas with CSS width 519 occupies. I think that whatever is deciding that must be inside Chrome, and it must take the devicePixelRatio into account.

> Please do check. Maybe browsers are using different filtering when doing the blit - knowing whether the source pixels are identical or not will help isolate if that's the case.

Added a histogram of readback from the center of the canvas (the blurriest part) to the demo at billbaxter.com/fractional-pixel-ratio. It's showing [0:1024, 255:576] regardless of how blurry the WebGL canvas looks. So the pixels in the framebuffer are as expected. It's somewhere in how they get shown on the browser page that the variability in rounding comes in.

> Whatever you say in HTMLCanvasElement's width and height attributes is what the browser will use.

Yes, for the framebuffer, but the issue is what it does with the CSS width I tell it to use.

Regards,
--bb
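A readback histogram along the lines Bill describes might look like the following sketch (the helper name and the 40x40 patch size are assumptions; 40x40 = 1600 pixels, matching the 1024 + 576 counts above). Identical histograms across browsers point the finger at compositing rather than at the WebGL rendering itself:

    function histogramCenter(gl, patch) {
      var x = Math.floor((gl.drawingBufferWidth - patch) / 2);
      var y = Math.floor((gl.drawingBufferHeight - patch) / 2);
      var pixels = new Uint8Array(patch * patch * 4);
      gl.readPixels(x, y, patch, patch, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
      var counts = {};
      for (var i = 0; i < pixels.length; i += 4) { // red channel only
        counts[pixels[i]] = (counts[pixels[i]] || 0) + 1;
      }
      return counts; // e.g. {0: 1024, 255: 576} for crisp 1-pixel lines
    }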
From kir...@ Mon Nov 18 00:50:30 2013 From: kir...@ (Kirill Prazdnikov) Date: Mon, 18 Nov 2013 12:50:30 +0400 Subject: [Public WebGL] Determining the number of physical pixels for a particular devicePixelRatio In-Reply-To: References: Message-ID: <5289D4D6.1000106@jetbrains.com>

Hi Bill,

I have the same issue in my application. It has not been solved. Chrome does not alter devicePixelRatio during zoom at all (probably it will be doing so soon). I have not found any way to get the real pixel size of a canvas on the screen.

Firefox has a correct devicePixelRatio, but there are still some issues with layout computations. I use

y0 = Math.round(element.getBoundingClientRect().top * devicePixelRatio)
y1 = Math.round(element.getBoundingClientRect().bottom * devicePixelRatio)

to get the actual screen occupation. Is it correct?

From kri...@ Mon Nov 18 01:10:18 2013 From: kri...@ (Kristian Sons) Date: Mon, 18 Nov 2013 10:10:18 +0100 Subject: [Public WebGL] Textures and DOM elements Message-ID: <5289D97A.5090009@dfki.de>

Hi all,

I remember there was a discussion about texture2D accepting an arbitrary DOM element. What is the status of this interesting initiative?

Thanks a lot,
Kristian

-- Kristian Sons Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, DFKI Agenten und Simulierte Realität Campus, Geb. D 3 2, Raum 0.77 66123 Saarbrücken, Germany Phone: +49 681 85775-3833 Phone: +49 681 302-3833 Fax: +49 681 85775-2235 kristian.sons...@ http://www.xml3d.org Geschäftsführung: Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster (Vorsitzender) Dr. Walter Olthoff Vorsitzender des Aufsichtsrats: Prof. Dr. h.c. Hans A. Aukes Amtsgericht Kaiserslautern, HRB 2313

From jgi...@ Mon Nov 18 13:23:25 2013 From: jgi...@ (Jeff Gilbert) Date: Mon, 18 Nov 2013 13:23:25 -0800 (PST) Subject: [Public WebGL] Textures and DOM elements In-Reply-To: <5289D97A.5090009@dfki.de> References: <5289D97A.5090009@dfki.de> Message-ID: <1772130345.7884118.1384809805934.JavaMail.zimbra@mozilla.com>

If I remember, the main issues were privacy- and security-related. I do not think these issues will be solved quickly.

-Jeff
From kbr...@ Mon Nov 18 15:36:14 2013 From: kbr...@ (Kenneth Russell) Date: Mon, 18 Nov 2013 15:36:14 -0800 Subject: [Public WebGL] Determining the number of physical pixels for a particular devicePixelRatio In-Reply-To: References: Message-ID:

On Fri, Nov 15, 2013 at 4:04 PM, Bill Baxter wrote:
> Somebody is deciding how many actual physical pixels on my display a canvas with CSS width 519 occupies. I think that whatever is deciding that must be inside Chrome, and it must take the devicePixelRatio into account.

Yes, the presentation of individual elements to the screen is handled by the browser's compositor, and it wouldn't surprise me if there were differences in implementation between browsers. Since it's not possible to test the results of page compositing using just the browser's APIs (doing so requires reading back the rendered results from the screen), there are probably no cross-browser tests asserting identical behavior.

Bill, it looks to me like both Chrome and Firefox need the call to Math.round() when computing the canvas's back buffer size per your formula above. Testing on a MacBook Pro Retina Display, I see blurry rendering with mode=floor at certain zoom levels in both browsers (testing Chrome 33 Canary), while mode=round is always sharp. Should we update http://www.khronos.org/webgl/wiki/HandlingHighDPI ?

James or Dean, do you know where in the CSS or HTML specifications the cumulative transformation matrix for elements, all the way to device pixels, is defined? I assume that specs like CSS3 transforms [1] affect the definition, but I'm having difficulty even finding a reference for devicePixelRatio in the WHATWG specs.

Thanks,

-Ken

[1] http://www.w3.org/TR/css3-transforms/
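If the wiki is updated along the lines Ken suggests, the recipe would presumably become (a sketch of the proposed change, not actual wiki text):

    // Set the size of the drawingBuffer based on the size it's displayed,
    // rounding rather than flooring the fractional device-pixel count:
    canvas.width = Math.round(canvas.clientWidth * devicePixelRatio);
    canvas.height = Math.round(canvas.clientHeight * devicePixelRatio);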
>>> """ > > > Regards, > --bb ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From din...@ Mon Nov 18 16:11:12 2013 From: din...@ (Dean Jackson) Date: Tue, 19 Nov 2013 11:11:12 +1100 Subject: [Public WebGL] Determining the number of physical pixels for a particular devicePixelRatio In-Reply-To: References: Message-ID: <2D84ABF5-C4A9-4CD6-BDD8-17E5BF132BC5@apple.com> On 19 Nov 2013, at 10:36 am, Kenneth Russell wrote: > On Fri, Nov 15, 2013 at 4:04 PM, Bill Baxter wrote: >> >> >> >> On Fri, Nov 15, 2013 at 3:22 PM, James Robinson wrote: >>> >>> >>> >>> >>> On Fri, Nov 15, 2013 at 3:15 PM, Bill Baxter wrote: >>>> >>>> On Fri, Nov 15, 2013 at 2:29 PM, James Robinson >>>> wrote: >>>>> >>>>> >>>>> >>>>> >>>>> On Fri, Nov 15, 2013 at 2:21 PM, Bill Baxter wrote: >>>>>> >>>>>> There appears to be a bit of under-spec'ed behavior around how >>>>>> fractional devicePixelRatios are handled, and it has led to there being >>>>>> different behaviors in different browsers. >>>>>> >>>>>> Firefox, Chrome, and IE11 all change the devicePixelRatio when using >>>>>> browser zoom. For example 125% zoom results in a 1.25 devicePixelRatio on >>>>>> what would have otherwise been a devicePixelRatio=1.0 display. >>>>>> >>>>>> The wiki (http://www.khronos.org/webgl/wiki/HandlingHighDPI) says this >>>>>> is how you resize your canvas in order to be 1:1 with physical device >>>>>> pixels: >>>>>> >>>>>> // set the size of the drawingBuffer based on the size it's >>>>>> displayed. >>>>>> canvas.width = canvas.clientWidth * devicePixelRatio; >>>>>> canvas.height = canvas.clientHeight * devicePixelRatio; >>>>> >>>>> >>>>> The width and height attributes are unsigned longs in WebIDL, so this >>>>> will floor to zero for the width and height. That's probably what you want >>>>> - assuming you set up the gl viewport and draw calls appropriately. >>>> >>>> >>>> Well, what I _want_ is to set up the canvas buffer to use the same number >>>> of device pixels that the browser is using when it blits the canvas to the >>>> screen. With 1.1 pixel zoom that works out to 570.9 device pixels, but of >>>> course the browser doesn't actually use a fractional number of pixels. >>>> Firefox rounds that down to 570, Chrome and IE round that up to 571. >>> >>> >>> No, it doesn't. The number of pixels used for the WebGL framebuffer is >>> exactly equal to the width and height attribute on the HTMLCanvasElement. >>> Those are floored. >> >> >> Yes, that's working fine, and those are floored as soon as you try to set >> them on the canvas element. console.log(canvas.width = 99.9) will print >> '99'. >> >>> >>> The browser will then later composite this into the framebuffer, but >>> whether or not it uses any intermediate steps when blitting or not is up to >>> the browser. In Chrome, we'll blit from the WebGL framebuffer to the >>> context used for the device without any intermediate buffers. >> >> >> Yes something in this part appears to be different between browsers. >> >>> >>> At no point does the device pixel ratio come in to play here. >> >> >> Somebody is deciding how many actual physical pixels on my display a canvas >> with CSS width 519 occupies. I think that whatever is deciding that must be >> inside Chrome, and it must take the devicePixelRatio into account. 
> > Yes, the presentation of individual elements to the screen is handled > by the browser's compositor, and it wouldn't surprise me if there were > differences in implementation between browsers. Since it's not > possible to test the results of page compositing using just the > browser's APIs (doing so requires reading back the rendered results > from the screen), there are probably no cross-browser tests asserting > identical behavior. > > Bill, it looks to me like both Chrome and Firefox need the call to > Math.round() when computing the canvas's back buffer size per your > formula above. Testing on a MacBook Pro Retina Display, I see blurry > rendering with mode=floor at certain zoom levels in both browsers > (testing Chrome 33 Canary), while mode=round is always sharp. Should > we update http://www.khronos.org/webgl/wiki/HandlingHighDPI ? > > James or Dean, do you know where in the CSS or HTML specifications the > cumulative transformation matrix for elements, all the way to device > pixels, is defined? I assume that specs like CSS3 transforms [1] > affect the definition, but I'm having difficulty even finding a > reference for devicePixelRatio in the WHATWG specs. http://dev.w3.org/csswg/cssom-view/#dom-window-devicepixelratio Dean > > Thanks, > > -Ken > > [1] http://www.w3.org/TR/css3-transforms/ > > >> >>> >>> >>>> >>>> >>>>> >>>>> >>>>>> >>>>>> >>>>>> >>>>>> However it in practice this only works with Firefox. >>>>>> For Chrome and IE you need to round: >>>>>> >>>>>> canvas.width = Math.round(canvas.clientWidth * devicePixelRatio); >>>>>> canvas.height = Math.round(canvas.clientHeight * devicePixelRatio); >>>>>> >>>>>> >>>>>> Here's a little demo you can play with: >>>>>> www.billbaxter.com/fractional-pixel-ratio >>>>> >>>>> >>>>> It seems more likely that there's a bug in either the GL viewport >>>>> handling here, the draw calls, or the scaling done by the browser when >>>>> presenting the WebGL framebuffer to the viewport that results in the >>>>> different rendering. >>>> >>>> >>>> Yep, those seem likely things, if by that you mean the browser's GL >>>> viewport handling, etc. My code is at >>>> http://billbaxter.com/fractional-pixel-ratio if you believe I am doing >>>> something wrong with my viewport. But if so then it would be hard to >>>> explain why the same code has different behaviors on different browsers. >>>> The code just draws a bunch of 1-pixel-wide rectangles. >>>> >>>> >>>>> >>>>> The handling of devicePixelRatio and HTMLCanvasElement's attributes are >>>>> consistent across different browsers. Could you try to isolate which it is? >>>>> For instance, you can do readbacks to figure out if the WebGL framebuffer is >>>>> the same in different browsers to tell if it's the drawing code behaving >>>>> differently or if the framebuffer is being presented differently. >>>> >>>> >>>> I'll check, but I'm pretty sure the readback will show nicely-pixel >>>> aligned lines in all cases. I think it's the browser's presentation of the >>>> buffer that's different. >>> >>> >>> Please do check. Maybe browsers are using different filtering when doing >>> the blit - knowing whether the source pixels are identical or not will help >>> isolate if that's the case. >> >> >> Added a histogram of readback from the center of the canvas (the blurriest >> part) to the demo. billbaxter.com/fractional-pixel-ratio. >> It's showing [0:1024, 255:576, ] regardless of how blurry the webgl canvas >> looks. >> >> So the pixels in the framebuffer are as expected. 
It's somewhere in how >> they get shown on the browser page that the variability in rounding comes >> in. >> >>> >>> >>>> >>>> >>>>> >>>>> >>>>>> >>>>>> The result of using the wrong method for your browser is that for some >>>>>> canvas sizes & zooms, you can't get pixel-perfect rendering. >>>>>> >>>>>> It would be nice to have this behavior actually spec'ed somewhere, and >>>>>> having everyone following it. >>>>> >>>>> >>>>> The behavior here is all speced, AFAIK. What part do you think is >>>>> underspecified? >>>> >>>> >>>> How many device pixels should be spanned by a 519 pixel wide canvas with >>>> devicePixelRatio of 1.1. >>> >>> >>> Whatever you say in HTMLCanvasElement's width and height attributes is >>> what the browser will use. >> >> >> Yes, for the framebuffer, but the issue is what it does with the CSS width I >> tell it to use. >> >> >>> >>> - James >>> >>>> >>>> >>>> At least I couldn't find it in a spec anywhere. Would be happy to be >>>> shown pointers. The wiki and a non-normative comment in the WebGL spec were >>>> the only things I found that seemed relevant. >>>> """ >>>> A WebGL application can achieve a 1:1 ratio between drawing buffer pixels >>>> and on-screen pixels on high-definition displays by examining properties >>>> like window.devicePixelRatio, scaling the canvas's width and height by that >>>> factor, and setting its CSS width and height to the original width and >>>> height. An application can simulate the effect of running on a >>>> higher-resolution display simply by scaling up the canvas's width and height >>>> properties.) >>>> """ >> >> >> Regards, >> --bb ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From din...@ Mon Nov 18 16:35:05 2013 From: din...@ (Dean Jackson) Date: Tue, 19 Nov 2013 11:35:05 +1100 Subject: [Public WebGL] Determining the number of physical pixels for a particular devicePixelRatio In-Reply-To: <2D84ABF5-C4A9-4CD6-BDD8-17E5BF132BC5@apple.com> References: <2D84ABF5-C4A9-4CD6-BDD8-17E5BF132BC5@apple.com> Message-ID: <25D69E4A-DFC3-403C-978A-AF977A771FA7@apple.com> On 19 Nov 2013, at 11:11 am, Dean Jackson wrote: >> >> James or Dean, do you know where in the CSS or HTML specifications the >> cumulative transformation matrix for elements, all the way to device >> pixels, is defined? I assume that specs like CSS3 transforms [1] >> affect the definition, but I'm having difficulty even finding a >> reference for devicePixelRatio in the WHATWG specs. > > http://dev.w3.org/csswg/cssom-view/#dom-window-devicepixelratio Note that when WebKit first implemented the feature, it did not intend the current definition (and still does not implement it that way). In particular, it was not supposed to reflect the current page zoom and pinch zoom. 
Dean ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From din...@ Tue Nov 19 10:23:08 2013 From: din...@ (Dean Jackson) Date: Wed, 20 Nov 2013 05:23:08 +1100 Subject: [Public WebGL] Textures and DOM elements In-Reply-To: <1772130345.7884118.1384809805934.JavaMail.zimbra@mozilla.com> References: <5289D97A.5090009@dfki.de> <1772130345.7884118.1384809805934.JavaMail.zimbra@mozilla.com> Message-ID: On 19 Nov 2013, at 8:23 am, Jeff Gilbert wrote: > > If I remember, the main issues were privacy- and security-related. I do not think these issues will be solved quickly. Canvas2DRenderingContext?s drawElement has the same issues, so any progress there would be shared by WebGL. However, as Jeff said, it?s not looking great for a solution. Dean > > ----- Original Message ----- > From: "Kristian Sons" > To: "public webgl" > Sent: Monday, November 18, 2013 1:10:18 AM > Subject: [Public WebGL] Textures and DOM elements > > > Hi all, > > I remember there was a discussion about texture2D accepting an arbitrary > DOM element. What is the status of this interesting initiative? > > Thanks a lot, > Kristian > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Tue Nov 19 16:13:09 2013 From: kbr...@ (Kenneth Russell) Date: Tue, 19 Nov 2013 16:13:09 -0800 Subject: [Public WebGL] Textures and DOM elements In-Reply-To: References: <5289D97A.5090009@dfki.de> <1772130345.7884118.1384809805934.JavaMail.zimbra@mozilla.com> Message-ID: Actually, progress in this area has recently been made in the form of the WEBGL_security_sensitive_resources extension [1]. This extension for the first time attempts to define safe usage of security-sensitive resources within WebGL, by preventing side-channel timing attacks which allowed indirect deduction of the contents of those resources. This extension is the first step toward allowing WebGL to use DOM rendering results as textures. Another API or extension will need to be defined to describe how to render DOM into a WebGL texture (one which is defined as security-sensitive). -Ken [1] http://www.khronos.org/registry/webgl/extensions/WEBGL_security_sensitive_resources/ On Tue, Nov 19, 2013 at 10:23 AM, Dean Jackson wrote: > > > On 19 Nov 2013, at 8:23 am, Jeff Gilbert wrote: > >> >> If I remember, the main issues were privacy- and security-related. I do not think these issues will be solved quickly. > > Canvas2DRenderingContext?s drawElement has the same issues, so any progress there would be shared by WebGL. However, as Jeff said, it?s not looking great for a solution. > > Dean > >> >> ----- Original Message ----- >> From: "Kristian Sons" >> To: "public webgl" >> Sent: Monday, November 18, 2013 1:10:18 AM >> Subject: [Public WebGL] Textures and DOM elements >> >> >> Hi all, >> >> I remember there was a discussion about texture2D accepting an arbitrary >> DOM element. What is the status of this interesting initiative? 
>> >> Thanks a lot, >> Kristian >> > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Tue Nov 19 16:17:40 2013 From: kbr...@ (Kenneth Russell) Date: Tue, 19 Nov 2013 16:17:40 -0800 Subject: [Public WebGL] Testing MAX_CUBE_MAP_TEXTURE_SIZE In-Reply-To: References: <2015959183.6449297.1383768110354.JavaMail.zimbra@mozilla.com> <527C45F8.1020406@artspark.co.jp> Message-ID: Belated thanks for working on this. -Ken On Tue, Nov 12, 2013 at 5:52 AM, Olli Etuaho wrote: > It looks like there's relatively wide support for changing the test now, so here's a pull request: https://github.com/KhronosGroup/WebGL/pull/418 > > I agree we should continue discussing the OOM and context loss issues separately from this. > > -Olli > ________________________________________ > From: Kenneth Russell [kbr...@] > Sent: Monday, November 11, 2013 11:52 PM > To: Olli Etuaho > Cc: Mark Callow; Jeff Gilbert; Florian Bösch; public webgl > Subject: Re: [Public WebGL] Testing MAX_CUBE_MAP_TEXTURE_SIZE > > Focusing on the cube map texture tests: > > Reducing the maximum tested cube map size -- but still requiring that > allocation succeed at that size -- sounds like the best of some > unsatisfying options. > > I suggest modifying the test to allocate up to a 2K cube map, if that > seems acceptable. If that maximum texture size is advertised, I think > it's reasonable for developers to expect that the implementation will > succeed in allocating one, assuming no other applications on the > system are consuming all of the remaining VRAM. > > Modifying the test to allow GL_OUT_OF_MEMORY errors at any size does > not seem like a good option. It would allow poor implementations to > advertise large maximum texture sizes and then simply fail to allocate > them, while still remaining conformant. Artificially restricting a > WebGL implementation's maximum texture size would force > least-common-denominator behavior. > > The discussion about OUT_OF_MEMORY and context loss in the conformance > tests is an important one, but to avoid conflating topics on this > thread I suggest we start a new one for that. > > -Ken > > > > On Mon, Nov 11, 2013 at 7:14 AM, Olli Etuaho wrote: >> I agree that Chrome could be a bit more lenient - to be exact I think context loss is allowed one or two times, but no more than that. However, I don't think it changes the situation much whether a browser allows context loss 1 time per page or 10 times. If it was 10 times, with the counter being reset on reload, and we decided to allow these unstable tests into the conformance suite, we would still have to put a limit to the number of tests that can cause context loss and carefully maintain it. Then we'd have to ask what tests among those that can cause context loss are the most important to include in the test suite, and will this change over time?
>> >> Long-term I hope we can improve the stability in other ways so that the context loss mechanism comes up less often, but for now I think keeping the tests that can lose context in the extras and out of the main suite is the best we can do. I actually thought that this policy was based on earlier consensus, because this has been done with multiple tests already. >> >> -Olli >> ________________________________________ >> From: Mark Callow [callow.mark...@] >> Sent: Friday, November 08, 2013 4:01 AM >> To: Olli Etuaho; Jeff Gilbert >> Cc: Florian Bösch; public webgl >> Subject: Re: [Public WebGL] Testing MAX_CUBE_MAP_TEXTURE_SIZE >> >> On 2013/11/07 23:26, Olli Etuaho wrote: >> >> Tests that can cause OOM and thus context loss are problematic. In Chrome, the page will be forbidden from creating any more WebGL contexts after context loss. This would cause all subsequent tests to fail. So we should not have such tests in the main automated test suite. We could put such a test in the extras, though. Do you think this would be good enough? >> >> >> Refusing to create any more contexts for the page after it has made just 1 excessive attempt seems too extreme. Given the uncertainty about whether a given texture can be accommodated one can easily imagine an application written to try at its preferred size and, if it gets OOM, retrying at a smaller size. >> >> Regards >> >> -Mark >> >> -- >> NOTE: This electronic mail message may contain confidential and privileged information from HI Corporation. If you are not the intended recipient, any disclosure, photocopying, distribution or use of the contents of the received information is prohibited. If you have received this e-mail in error, please notify the sender immediately and permanently delete this message and all related copies. >> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From cal...@ Tue Nov 19 18:04:38 2013 From: cal...@ (Mark Callow) Date: Wed, 20 Nov 2013 11:04:38 +0900 Subject: [Public WebGL] Textures and DOM elements In-Reply-To: References: <5289D97A.5090009@dfki.de> <1772130345.7884118.1384809805934.JavaMail.zimbra@mozilla.com> Message-ID: <528C18B6.9010806@artspark.co.jp> On 2013/11/20 9:13, Kenneth Russell wrote: > [1] http://www.khronos.org/registry/webgl/extensions/WEBGL_security_sensitive_resources/ It took me a few minutes to get my head around the definition of regular sampler vs. security sensitive sampler. It seemed backwards at first. The issue is the name "security sensitive". If it said "Otherwise, s is a secure sampler" it would be immediately clear. Then it can say "You can only use security sensitive resources with secure samplers." Regards -Mark -- NOTE: This electronic mail message may contain confidential and privileged information from HI Corporation. If you are not the intended recipient, any disclosure, photocopying, distribution or use of the contents of the received information is prohibited.
If you have received this e-mail in error, please notify the sender immediately and permanently delete this message and all related copies. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Wed Nov 20 10:31:11 2013 From: kbr...@ (Kenneth Russell) Date: Wed, 20 Nov 2013 10:31:11 -0800 Subject: [Public WebGL] Textures and DOM elements In-Reply-To: <528C18B6.9010806@artspark.co.jp> References: <5289D97A.5090009@dfki.de> <1772130345.7884118.1384809805934.JavaMail.zimbra@mozilla.com> <528C18B6.9010806@artspark.co.jp> Message-ID: On Tue, Nov 19, 2013 at 6:04 PM, Mark Callow wrote: > On 2013/11/20 9:13, Kenneth Russell wrote: > > [1] > http://www.khronos.org/registry/webgl/extensions/WEBGL_security_sensitive_resources/ > > It took me a few minutes to get my head around the definition of regular > sampler vs. security sensitive sampler. It seemed backwards at first. The > issue is the name "security sensitive". If it said "Otherwise, s is a > secure sampler" it would be immediately clear. Then it can say "You can only > use security sensitive resources with secure samplers." Thanks, that sounds like a nice clarification. Max, would you be willing to submit a pull request making this change? -Ken > Regards > > -Mark > > -- > > NOTE: This electronic mail message may contain confidential and privileged > information from HI Corporation. If you are not the intended recipient, any > disclosure, photocopying, distribution or use of the contents of the > received information is prohibited. If you have received this e-mail in > error, please notify the sender immediately and permanently delete this > message and all related copies. ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Wed Nov 20 11:56:13 2013 From: kbr...@ (Kenneth Russell) Date: Wed, 20 Nov 2013 11:56:13 -0800 Subject: [Public WebGL] Determining the number of physical pixels for a particular devicePixelRatio In-Reply-To: <25D69E4A-DFC3-403C-978A-AF977A771FA7@apple.com> References: <2D84ABF5-C4A9-4CD6-BDD8-17E5BF132BC5@apple.com> <25D69E4A-DFC3-403C-978A-AF977A771FA7@apple.com> Message-ID: On Mon, Nov 18, 2013 at 4:35 PM, Dean Jackson wrote: > > On 19 Nov 2013, at 11:11 am, Dean Jackson wrote: > >>> >>> James or Dean, do you know where in the CSS or HTML specifications the >>> cumulative transformation matrix for elements, all the way to device >>> pixels, is defined? I assume that specs like CSS3 transforms [1] >>> affect the definition, but I'm having difficulty even finding a >>> reference for devicePixelRatio in the WHATWG specs. >> >> http://dev.w3.org/csswg/cssom-view/#dom-window-devicepixelratio > > Note that when WebKit first implemented the feature, it did not intend the current definition (and still does not implement it that way). In particular, it was not supposed to reflect the current page zoom and pinch zoom. Thanks for that pointer. Do you have any advice for Bill on how to accurately and reliably compute the size of his canvas element in device pixels? Even a solution specialized for the simple 2D case (no CSS transforms applied, especially not 3D transforms) would be helpful.
Intuitively, it seems to me that rounding the product of window.devicePixelRatio with the canvas's client width and height (in CSS pixels) is the correct value. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From mvu...@ Wed Nov 20 14:19:30 2013 From: mvu...@ (Max Vujovic) Date: Wed, 20 Nov 2013 14:19:30 -0800 Subject: [Public WebGL] Textures and DOM elements In-Reply-To: References: <5289D97A.5090009@dfki.de> <1772130345.7884118.1384809805934.JavaMail.zimbra@mozilla.com> <528C18B6.9010806@artspark.co.jp> Message-ID: On Nov 20, 2013, at 10:31 AM, Kenneth Russell wrote: > On Tue, Nov 19, 2013 at 6:04 PM, Mark Callow wrote: >> On 2013/11/20 9:13, Kenneth Russell wrote: >> >> [1] >> http://www.khronos.org/registry/webgl/extensions/WEBGL_security_sensitive_resources/ >> >> It took me a few minutes to get my head around the definition of regular >> sampler vs. security sensitive sampler. It seemed backwards at first. The >> issue is the name "security sensitive". If it said "Otherwise, s is a >> secure sampler" it would be immediately clear. Then it can say "You can only >> use security sensitive resources with secure samplers." > > Thanks, that sounds like a nice clarification. > > Max, would you be willing to submit a pull request making this change? Absolutely - that's a great suggestion. Thanks, Max > > -Ken > > >> Regards >> >> -Mark >> >> -- >> NOTE: This electronic mail message may contain confidential and privileged >> information from HI Corporation. If you are not the intended recipient, any >> disclosure, photocopying, distribution or use of the contents of the >> received information is prohibited. If you have received this e-mail in >> error, please notify the sender immediately and permanently delete this >> message and all related copies. ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From cab...@ Wed Nov 20 15:12:35 2013 From: cab...@ (Rik Cabanier) Date: Wed, 20 Nov 2013
15:12:35 -0800 Subject: [Public WebGL] Determining the number of physical pixels for a particular devicePixelRatio In-Reply-To: References: <2D84ABF5-C4A9-4CD6-BDD8-17E5BF132BC5@apple.com> <25D69E4A-DFC3-403C-978A-AF977A771FA7@apple.com> Message-ID: On Wed, Nov 20, 2013 at 11:56 AM, Kenneth Russell wrote: > > On Mon, Nov 18, 2013 at 4:35 PM, Dean Jackson wrote: > > > > On 19 Nov 2013, at 11:11 am, Dean Jackson wrote: > > > >>> > >>> James or Dean, do you know where in the CSS or HTML specifications the > >>> cumulative transformation matrix for elements, all the way to device > >>> pixels, is defined? I assume that specs like CSS3 transforms [1] > >>> affect the definition, but I'm having difficulty even finding a > >>> reference for devicePixelRatio in the WHATWG specs. > >> > >> http://dev.w3.org/csswg/cssom-view/#dom-window-devicepixelratio > > > > Note that when WebKit first implemented the feature, it did not intend > the current definition (and still does not implement it that way). In > particular, it was not supposed to reflect the current page zoom and pinch > zoom. > > Thanks for that pointer. > > Do you have any advice for Bill on how to accurately and reliably > compute the size of his canvas element in device pixels? Even a > solution specialized for the simple 2D case (no CSS transforms > applied, especially not 3D transforms) would be helpful. Intuitively, > it seems to me that rounding the product of window.devicePixelRatio > with the canvas's client width and height (in CSS pixels) is the > correct value. > There currently is no way to do this in a standard way. Your proposal should work for user zoom (= control/command + "+"/"-") except that Chrome hasn't enabled it yet and Safari doesn't want to make the change. There is also no standard way to detect pinch zoom but you can figure it out with gestures on Safari or with touch events on the other browsers. Frameworks such as hammer.js will simplify your code. -------------- next part -------------- An HTML attachment was scrubbed... URL: From din...@ Wed Nov 20 15:28:00 2013 From: din...@ (Dean Jackson) Date: Thu, 21 Nov 2013 10:28:00 +1100 Subject: [Public WebGL] Determining the number of physical pixels for a particular devicePixelRatio In-Reply-To: References: <2D84ABF5-C4A9-4CD6-BDD8-17E5BF132BC5@apple.com> <25D69E4A-DFC3-403C-978A-AF977A771FA7@apple.com> Message-ID: On 21 Nov 2013, at 10:12 am, Rik Cabanier wrote: > > > > On Wed, Nov 20, 2013 at 11:56 AM, Kenneth Russell wrote: > > On Mon, Nov 18, 2013 at 4:35 PM, Dean Jackson wrote: > > > > On 19 Nov 2013, at 11:11 am, Dean Jackson wrote: > > > >>> > >>> James or Dean, do you know where in the CSS or HTML specifications the > >>> cumulative transformation matrix for elements, all the way to device > >>> pixels, is defined? I assume that specs like CSS3 transforms [1] > >>> affect the definition, but I'm having difficulty even finding a > >>> reference for devicePixelRatio in the WHATWG specs. > >> > >> http://dev.w3.org/csswg/cssom-view/#dom-window-devicepixelratio > > > > Note that when WebKit first implemented the feature, it did not intend the current definition (and still does not implement it that way). In particular, it was not supposed to reflect the current page zoom and pinch zoom. > > Thanks for that pointer. > > Do you have any advice for Bill on how to accurately and reliably > compute the size of his canvas element in device pixels?
Even a > solution specialized for the simple 2D case (no CSS transforms > applied, especially not 3D transforms) would be helpful. Intuitively, > it seems to me that rounding the product of window.devicePixelRatio > with the canvas's client width and height (in CSS pixels) is the > correct value. > That works in Safari on iOS (where we don't have page zoom).
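(For concreteness, the computation suggested above amounts to something like the following sketch -- illustrative only, not spec'd behavior:

var dpr = window.devicePixelRatio || 1;
canvas.width = Math.round(canvas.clientWidth * dpr);   // drawing buffer in device pixels
canvas.height = Math.round(canvas.clientHeight * dpr);

with the canvas's CSS size left alone, so clientWidth/clientHeight stay in CSS pixels.)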
> > > There currently is no way to do this in a standard way. > > > Some more info: > > - CSS transforms by themselves don't change the size of the canvas backing > store, nor devicePixelRatio. A developer may choose to update the backing > store when setting a transform if they wish. > - If you want your canvas backing store to be the same size as the canvas, > it's CSS size * devicePixelRatio (in Safari on iOS and OS X) > Right, there is scaling in the following order: 1. A factor that determines screen pixels per css pixel (=1 on most browsers, 2 on retina displays, 1.5 or 3 on some android devices) 2. user zoom which changes the ratio of screen pixels per css pixel 3. pinch zoom which changes your viewport 4. CSS transforms > At the recent CSS WG meeting, I proposed a new way to get device pixels > per CSS px (at 1:1 scale), page zoom factor, pinch zoom factor, and > possibly a helper to determine the viewport size and location. However, at > least some people at Google seemed to think this is not necessary, and that > Safari should change devicePixelRatio (which we are reluctant to do). I'm > now not sure what to do. > Chrome, Firefox and IE want to collapse step 1 and 2 into devicePixelRatio. Safari believes that this will break sites so wants a separate variable. CSS OM View also makes it adapt to user zoom: http://dev.w3.org/csswg/cssom-view/#dom-window-devicepixelratio That specification also introduces functions and events to solve this problem. AFAIK, step 3 can only be determined with events. > What seems clear to me is that this whole area is a mess. > Yes! I think figuring out actual device pixels wasn't a priority in the past (and maybe even frowned upon). > Your proposal should work for user zoom (= control/command + "+"/"-") > except that Chrome hasn't enabled it yet and Safari doesn't want to make > the change. > > There is also no standard way to detect pinch zoom but you can figure it > out with gestures on Safari or with touch events on the other browsers. > Frameworks such as hammer.js will simplify your code. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wba...@ Thu Nov 21 19:07:06 2013 From: wba...@ (Bill Baxter) Date: Thu, 21 Nov 2013 19:07:06 -0800 Subject: [Public WebGL] Determining the number of physical pixels for a particular devicePixelRatio In-Reply-To: References: Message-ID: On Mon, Nov 18, 2013 at 3:36 PM, Kenneth Russell wrote: > > Yes, the presentation of individual elements to the screen is handled > by the browser's compositor, and it wouldn't surprise me if there were > differences in implementation between browsers. Since it's not > possible to test the results of page compositing using just the > browser's APIs (doing so requires reading back the rendered results > from the screen), there are probably no cross-browser tests asserting > identical behavior. > > Bill, it looks to me like both Chrome and Firefox need the call to > Math.round() when computing the canvas's back buffer size per your > formula above. Testing on a MacBook Pro Retina Display, I see blurry > rendering with mode=floor at certain zoom levels in both browsers > (testing Chrome 33 Canary), while mode=round is always sharp. Should > we update http://www.khronos.org/webgl/wiki/HandlingHighDPI ? > Hmm. I see that too when using the internal display. But when I plug the > Retina Mac into an external non-highDPI monitor I need to use floor() for > Firefox.
Actually what it seems is that Firefox needs floor() for devicePixelRatio less than 2, and round() for devicePixelRatio greater than 2. The same holds for FF on linux (non-high-dpi) -- floor() until I browser zoom over 200%, then round(). Ugh. I think http://www.khronos.org/webgl/wiki/HandlingHighDPI should at least be updated to mention that you may need to round instead of floor to get 1:1 rendering, and it's just implementation dependent. --bb -------------- next part -------------- An HTML attachment was scrubbed... URL: From baj...@ Fri Nov 22 12:09:37 2013 From: baj...@ (Brandon Jones) Date: Fri, 22 Nov 2013 12:09:37 -0800 Subject: [Public WebGL] Reducing the number of iterations on conformance stress tests Message-ID: In the 1.0.2 and 1.0.3 conformance tests there's a couple of stress tests that take, objectively speaking, a really freaking long time to run. Namely: https://www.khronos.org/registry/webgl/sdk/tests/conformance/rendering/multisample-corruption.html Which iterates through drawing to a very large multisampled canvas and then copying the results to a canvas *100* times and: https://www.khronos.org/registry/webgl/sdk/tests/conformance/context/context-creation-and-destruction.html Which creates and destroys a WebGL context *500* times to ensure that the browser doesn't leak memory or fail to create new contexts after a while. On my ultra-beefy workstation these take ~30 seconds and 1 minute, respectively. I've seen them take far longer on more average hardware. Since we're currently working on making the 1.0.2 suite part of our continuous build system, having 2 tests add over a minute and a half to the cycle time is pretty painful. The thing is that I'm not sure these tests require so many iterations to accurately test their respective conditions. For example: the multisampling test is aimed at catching driver bugs on OSX hardware, and any time I see it fail it's within the first 10-15 iterations. As such, I'd like to propose that we reduce the number of iterations on each. I suggest *25* iterations for multisample-corruption test and *50* for context-creation-and-destruction. If anyone has concerns about these proposed changes I'd love to hear them, especially if you are aware of implementation details that would make these limits insufficient. Otherwise I think it's probably in everyone's interest to cut down the time it takes to run the full conformance suite significantly. --Brandon -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Fri Nov 22 12:15:17 2013 From: kbr...@ (Kenneth Russell) Date: Fri, 22 Nov 2013 12:15:17 -0800 Subject: [Public WebGL] Determining the number of physical pixels for a particular devicePixelRatio In-Reply-To: References: Message-ID: On Thu, Nov 21, 2013 at 7:07 PM, Bill Baxter wrote: > On Mon, Nov 18, 2013 at 3:36 PM, Kenneth Russell wrote: >> >> >> Yes, the presentation of individual elements to the screen is handled >> by the browser's compositor, and it wouldn't surprise me if there were >> differences in implementation between browsers. Since it's not >> possible to test the results of page compositing using just the >> browser's APIs (doing so requires reading back the rendered results >> from the screen), there are probably no cross-browser tests asserting >> identical behavior. >> >> Bill, it looks to me like both Chrome and Firefox need the call to >> Math.round() when computing the canvas's back buffer size per your >> formula above.
Testing on a MacBook Pro Retina Display, I see blurry >> rendering with mode=floor at certain zoom levels in both browsers >> (testing Chrome 33 Canary), while mode=round is always sharp. Should >> we update http://www.khronos.org/webgl/wiki/HandlingHighDPI ? > > > Hmm. I see that too when using the internal display. But when I plug the > Retina Mac into an external non-highDPI monitor I need to use floor() for > Firefox. > Actually what it seems is that Firefox needs floor() for devicePixelRatio > less than 2, and round() for devicePixelRatio greater than 2. > The same holds for FF on linux (non-high-dpi) -- floor() until I browser > zoom over 200%, then round(). > > Ugh. > > I think http://www.khronos.org/webgl/wiki/HandlingHighDPI should at least be > updated to mention that you may need to round instead of floor to get 1:1 > rendering, and it's just implementation dependent. Done. Please send any feedback on the new text to the list. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Fri Nov 22 12:18:34 2013 From: kbr...@ (Kenneth Russell) Date: Fri, 22 Nov 2013 12:18:34 -0800 Subject: [Public WebGL] Reducing the number of iterations on conformance stress tests In-Reply-To: References: Message-ID: As much as I hate to potentially reduce test coverage, reducing the number of iterations for these tests will, realistically, catch all of the errors they've uncovered previously. I'd like to see these changes made in the top of tree and 1.0.2 conformance suites. The excessively long tests are preventing these tests from being run as part of Chromium's continuous integrations. -Ken On Fri, Nov 22, 2013 at 12:09 PM, Brandon Jones wrote: > In the 1.0.2 and 1.0.3 conformance tests there's a couple of stress tests > that take, objectively speaking, a really freaking long time to run. Namely: > > https://www.khronos.org/registry/webgl/sdk/tests/conformance/rendering/multisample-corruption.html > > Which iterates through drawing to a very large multisampled canvas and then > copying the results to a canvas 100 times and: > > https://www.khronos.org/registry/webgl/sdk/tests/conformance/context/context-creation-and-destruction.html > > Which creates and destroys a WebGL context 500 times to ensure that the > browser doesn't leak memory or fail to create new contexts after a while. On > my ultra-beefy workstation these take ~30 seconds and 1 minutes, > respectively. I've seen them take far longer on more average hardware. Since > we're currently working on making the 1.0.2 suite part of out continuous > build system, having 2 tests add over a minute and a half to the cycle time > is pretty painful. > > The thing is that I'm not sure these tests require so many iterations to > accurately test their respective conditions. For example: the multisampling > test is aimed at catching driver bugs on OSX hardware, and any time I see it > fail it's within the first 10-15 iterations. As such, I'd like to propose > that we reduce the number of iterations on each. I suggest 25 iterations for > multisample-corruption test and 50 for context-creation-and-destruction. > > If anyone has concerns about these proposed changes I'd love to hear them, > especially if you are aware of implementation details that would make these > limits insufficient. 
Otherwise I think it's probably in everyone's interest > to cut down the time it takes to run the full conformance suite > significantly. > > --Brandon ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From cos...@ Sun Nov 24 06:45:28 2013 From: cos...@ (Liam Wilson) Date: Sun, 24 Nov 2013 14:45:28 +0000 (GMT) Subject: [Public WebGL] Determining the number of physical pixels for a particular devicePixelRatio In-Reply-To: References: Message-ID: <1385304328.48409.YahooMailNeo@web171901.mail.ir2.yahoo.com> > On Friday, 22 November 2013, 20:17, Kenneth Russell wrote: >> On Thu, Nov 21, 2013 at 7:07 PM, Bill Baxter wrote: >> I think http://www.khronos.org/webgl/wiki/HandlingHighDPI should at least be >> updated to mention that you may need to round instead of floor to get 1:1 >> rendering, and it's just implementation dependent. > > Done. Please send any feedback on the new text to the list. > > -Ken canvas.style.width = desiredWidthInCSSPixels + "px"; canvas.style.height = desiredHeightInCSSPixels + "px"; // set the size of the drawingBuffer var devicePixelRatio = window.devicePixelRatio || 1; canvas.width = desiredWidthInCSSPixels * devicePixelRatio; canvas.height = desiredHeightInCSSPixels * devicePixelRatio; With all due respect, that approach cannot work in general. It seems to intuitively work, and I attempted this solution, but after struggling to make it work for about 8 hours I came to the conclusion that it could not work. The problem is rounding. You always end up with values that are slightly off. For example, assume you want a canvas to be 101 CSS pixels wide and your devicePixelRatio is 1.5. You will need to set canvas.width to exactly 151.5 (which cannot be done). You end up setting canvas.width to 151 or 152 (which are slightly off). When drawn to the screen the canvas gets slightly scaled, and you get blurry rendering. I was attempting to solve this problem on Android using a 2D canvas. My best effort at a solution was: scale = (1/window.devicePixelRatio); document.head.innerHTML = '<meta name="viewport" content="initial-scale=' + scale + ', minimum-scale=' + scale + ', maximum-scale=' + scale + ', user-scalable=no">'; * Note I'm locking the viewport so pinch zoom will not work. This is deliberate, I didn't want to complicate things by allowing zooming. The above is a horrible hack that only works in Chrome. Again this almost works, but Chrome for Android paints things incorrectly (http://crbug.com/317175). In Firefox this solution doesn't work (https://bugzilla.mozilla.org/show_bug.cgi?id=936906). Note there is a slight bug in the code attached to the Chrome bug. The code in the Firefox bug should be correct (https://bug936906.bugzilla.mozilla.org/attachment.cgi?id=829877). It may be that I am doing something subtly wrong, but the point is that this should not be difficult. This should be trivial. In an ideal world you would just be able to do something like set devicePixelRatio to 1 from JavaScript.
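To make the rounding problem concrete, here is a minimal sketch of the mismatch, using the numbers from the example above:

var desiredWidthInCSSPixels = 101;
var devicePixelRatio = 1.5;
var exact = desiredWidthInCSSPixels * devicePixelRatio; // 151.5 device pixels - not an integer
var floored = Math.floor(exact); // 151: drawing buffer slightly too small
var rounded = Math.round(exact); // 152: drawing buffer slightly too large

Whichever value you assign to canvas.width, the drawing buffer cannot match the number of device pixels the canvas covers, so the compositor rescales it and the output is blurry.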
Thanks Liam Wilson ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From oet...@ Mon Nov 25 04:39:13 2013 From: oet...@ (Olli Etuaho) Date: Mon, 25 Nov 2013 13:39:13 +0100 Subject: [Public WebGL] Reducing the number of iterations on conformance stress tests In-Reply-To: References: , Message-ID: I'm in favor of trying to optimize these tests, but I have to disagree with reducing the test iterations that much. I've seen the multisample corruption test catch another bug in our internal testing on one occasion, and it did not reproduce consistently even with 100 iterations. I don't have as much of a problem with reducing the iterations in context-creation-and-destruction.html, but even there we should be conservative, taking it down to 50 iterations seems a bit too far. Maybe increasing the amount of GPU memory reserved per context while reducing iterations would make the test faster and keep the pressure on the memory management closer to what it is now? This would also be closer to a real-world scenario. The context-creation-and-destruction test could also be made a bit faster without affecting coverage significantly, patch here: https://github.com/KhronosGroup/WebGL/pull/424 If the test iterations are reduced at all, I'd also like to see longer-running versions of these tests put to the extras, so that it would be possible to do heavier stress testing around this area manually. -Olli ________________________________________ From: owner-public_webgl...@ [owner-public_webgl...@] On Behalf Of Kenneth Russell [kbr...@] Sent: Friday, November 22, 2013 10:18 PM To: Brandon Jones Cc: public webgl Subject: Re: [Public WebGL] Reducing the number of iterations on conformance stress tests As much as I hate to potentially reduce test coverage, reducing the number of iterations for these tests will, realistically, catch all of the errors they've uncovered previously. I'd like to see these changes made in the top of tree and 1.0.2 conformance suites. The excessively long tests are preventing these tests from being run as part of Chromium's continuous integrations. -Ken On Fri, Nov 22, 2013 at 12:09 PM, Brandon Jones wrote: > In the 1.0.2 and 1.0.3 conformance tests there's a couple of stress tests > that take, objectively speaking, a really freaking long time to run. Namely: > > https://www.khronos.org/registry/webgl/sdk/tests/conformance/rendering/multisample-corruption.html > > Which iterates through drawing to a very large multisampled canvas and then > copying the results to a canvas 100 times and: > > https://www.khronos.org/registry/webgl/sdk/tests/conformance/context/context-creation-and-destruction.html > > Which creates and destroys a WebGL context 500 times to ensure that the > browser doesn't leak memory or fail to create new contexts after a while. On > my ultra-beefy workstation these take ~30 seconds and 1 minutes, > respectively. I've seen them take far longer on more average hardware. Since > we're currently working on making the 1.0.2 suite part of out continuous > build system, having 2 tests add over a minute and a half to the cycle time > is pretty painful. > > The thing is that I'm not sure these tests require so many iterations to > accurately test their respective conditions. 
For example: the multisampling > test is aimed at catching driver bugs on OSX hardware, and any time I see it > fail it's within the first 10-15 iterations. As such, I'd like to propose > that we reduce the number of iterations on each. I suggest 25 iterations for > multisample-corruption test and 50 for context-creation-and-destruction. > > If anyone has concerns about these proposed changes I'd love to hear them, > especially if you are aware of implementation details that would make these > limits insufficient. Otherwise I think it's probably in everyone's interest > to cut down the time it takes to run the full conformance suite > significantly. > > --Brandon ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Mon Nov 25 10:26:41 2013 From: kbr...@ (Kenneth Russell) Date: Mon, 25 Nov 2013 10:26:41 -0800 Subject: [Public WebGL] Reducing the number of iterations on conformance stress tests In-Reply-To: References: Message-ID: Thanks for the feedback. We certainly don't want to lose test coverage. On the other hand, the durations of these tests are preventing using them at least for automated testing in Chrome, and I personally haven't seen them catch failures beyond around 25 iterations. It is good to know that the multisample corruption test caught a bug in your setup after over 100 iterations. Your idea to put "full" versions of these tests into extra/ is a good one. Brandon, would you modify your pull request to add versions into extra/ (sharing the code) and running the current number of iterations? -Ken On Mon, Nov 25, 2013 at 4:39 AM, Olli Etuaho wrote: > I'm in favor of trying to optimize these tests, but I have to disagree with reducing the test iterations that much. I've seen the multisample corruption test catch another bug in our internal testing on one occasion, and it did not reproduce consistently even with 100 iterations. I don't have as much of a problem with reducing the iterations in context-creation-and-destruction.html, but even there we should be conservative, taking it down to 50 iterations seems a bit too far. Maybe increasing the amount of GPU memory reserved per context while reducing iterations would make the test faster and keep the pressure on the memory management closer to what it is now? This would also be closer to a real-world scenario. > > The context-creation-and-destruction test could also be made a bit faster without affecting coverage significantly, patch here: https://github.com/KhronosGroup/WebGL/pull/424 > > If the test iterations are reduced at all, I'd also like to see longer-running versions of these tests put to the extras, so that it would be possible to do heavier stress testing around this area manually. 
> > -Olli > ________________________________________ > From: owner-public_webgl...@ [owner-public_webgl...@] On Behalf Of Kenneth Russell [kbr...@] > Sent: Friday, November 22, 2013 10:18 PM > To: Brandon Jones > Cc: public webgl > Subject: Re: [Public WebGL] Reducing the number of iterations on conformance stress tests > > As much as I hate to potentially reduce test coverage, reducing the > number of iterations for these tests will, realistically, catch all of > the errors they've uncovered previously. > > I'd like to see these changes made in the top of tree and 1.0.2 > conformance suites. The excessively long tests are preventing these > tests from being run as part of Chromium's continuous integrations. > > -Ken > > > > On Fri, Nov 22, 2013 at 12:09 PM, Brandon Jones wrote: >> In the 1.0.2 and 1.0.3 conformance tests there's a couple of stress tests >> that take, objectively speaking, a really freaking long time to run. Namely: >> >> https://www.khronos.org/registry/webgl/sdk/tests/conformance/rendering/multisample-corruption.html >> >> Which iterates through drawing to a very large multisampled canvas and then >> copying the results to a canvas 100 times and: >> >> https://www.khronos.org/registry/webgl/sdk/tests/conformance/context/context-creation-and-destruction.html >> >> Which creates and destroys a WebGL context 500 times to ensure that the >> browser doesn't leak memory or fail to create new contexts after a while. On >> my ultra-beefy workstation these take ~30 seconds and 1 minutes, >> respectively. I've seen them take far longer on more average hardware. Since >> we're currently working on making the 1.0.2 suite part of out continuous >> build system, having 2 tests add over a minute and a half to the cycle time >> is pretty painful. >> >> The thing is that I'm not sure these tests require so many iterations to >> accurately test their respective conditions. For example: the multisampling >> test is aimed at catching driver bugs on OSX hardware, and any time I see it >> fail it's within the first 10-15 iterations. As such, I'd like to propose >> that we reduce the number of iterations on each. I suggest 25 iterations for >> multisample-corruption test and 50 for context-creation-and-destruction. >> >> If anyone has concerns about these proposed changes I'd love to hear them, >> especially if you are aware of implementation details that would make these >> limits insufficient. Otherwise I think it's probably in everyone's interest >> to cut down the time it takes to run the full conformance suite >> significantly. >> >> --Brandon > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From baj...@ Mon Nov 25 10:33:16 2013 From: baj...@ (Brandon Jones) Date: Mon, 25 Nov 2013 10:33:16 -0800 Subject: [Public WebGL] Reducing the number of iterations on conformance stress tests In-Reply-To: References: Message-ID: Sure! I think that's a great idea. On Mon, Nov 25, 2013 at 10:26 AM, Kenneth Russell wrote: > Thanks for the feedback. 
We certainly don't want to lose test > coverage. On the other hand, the durations of these tests are > preventing using them at least for automated testing in Chrome, and I > personally haven't seen them catch failures beyond around 25 > iterations. It is good to know that the multisample corruption test > caught a bug in your setup after over 100 iterations. > > Your idea to put "full" versions of these tests into extra/ is a good > one. Brandon, would you modify your pull request to add versions into > extra/ (sharing the code) and running the current number of > iterations? > > -Ken > > > On Mon, Nov 25, 2013 at 4:39 AM, Olli Etuaho wrote: > > I'm in favor of trying to optimize these tests, but I have to disagree > with reducing the test iterations that much. I've seen the multisample > corruption test catch another bug in our internal testing on one occasion, > and it did not reproduce consistently even with 100 iterations. I don't > have as much of a problem with reducing the iterations in > context-creation-and-destruction.html, but even there we should be > conservative, taking it down to 50 iterations seems a bit too far. Maybe > increasing the amount of GPU memory reserved per context while reducing > iterations would make the test faster and keep the pressure on the memory > management closer to what it is now? This would also be closer to a > real-world scenario. > > > > The context-creation-and-destruction test could also be made a bit > faster without affecting coverage significantly, patch here: > https://github.com/KhronosGroup/WebGL/pull/424 > > > > If the test iterations are reduced at all, I'd also like to see > longer-running versions of these tests put to the extras, so that it would > be possible to do heavier stress testing around this area manually. > > > > -Olli > > ________________________________________ > > From: owner-public_webgl...@ [owner-public_webgl...@] > On Behalf Of Kenneth Russell [kbr...@] > > Sent: Friday, November 22, 2013 10:18 PM > > To: Brandon Jones > > Cc: public webgl > > Subject: Re: [Public WebGL] Reducing the number of iterations on > conformance stress tests > > > > As much as I hate to potentially reduce test coverage, reducing the > > number of iterations for these tests will, realistically, catch all of > > the errors they've uncovered previously. > > > > I'd like to see these changes made in the top of tree and 1.0.2 > > conformance suites. The excessively long tests are preventing these > > tests from being run as part of Chromium's continuous integrations. > > > > -Ken > > > > > > > > On Fri, Nov 22, 2013 at 12:09 PM, Brandon Jones > wrote: > >> In the 1.0.2 and 1.0.3 conformance tests there's a couple of stress > tests > >> that take, objectively speaking, a really freaking long time to run. > Namely: > >> > >> > https://www.khronos.org/registry/webgl/sdk/tests/conformance/rendering/multisample-corruption.html > >> > >> Which iterates through drawing to a very large multisampled canvas and > then > >> copying the results to a canvas 100 times and: > >> > >> > https://www.khronos.org/registry/webgl/sdk/tests/conformance/context/context-creation-and-destruction.html > >> > >> Which creates and destroys a WebGL context 500 times to ensure that the > >> browser doesn't leak memory or fail to create new contexts after a > while. On > >> my ultra-beefy workstation these take ~30 seconds and 1 minutes, > >> respectively. I've seen them take far longer on more average hardware. 
> Since > >> we're currently working on making the 1.0.2 suite part of out continuous > >> build system, having 2 tests add over a minute and a half to the cycle > time > >> is pretty painful. > >> > >> The thing is that I'm not sure these tests require so many iterations to > >> accurately test their respective conditions. For example: the > multisampling > >> test is aimed at catching driver bugs on OSX hardware, and any time I > see it > >> fail it's within the first 10-15 iterations. As such, I'd like to > propose > >> that we reduce the number of iterations on each. I suggest 25 > iterations for > >> multisample-corruption test and 50 for context-creation-and-destruction. > >> > >> If anyone has concerns about these proposed changes I'd love to hear > them, > >> especially if you are aware of implementation details that would make > these > >> limits insufficient. Otherwise I think it's probably in everyone's > interest > >> to cut down the time it takes to run the full conformance suite > >> significantly. > >> > >> --Brandon > > > > ----------------------------------------------------------- > > You are currently subscribed to public_webgl...@ > > To unsubscribe, send an email to majordomo...@ with > > the following command in the body of your email: > > unsubscribe public_webgl > > ----------------------------------------------------------- > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jgi...@ Wed Nov 27 11:55:55 2013 From: jgi...@ (Jeff Gilbert) Date: Wed, 27 Nov 2013 11:55:55 -0800 (PST) Subject: [Public WebGL] Promoting WEBGL_compressed_texture_etc1 to Draft status In-Reply-To: <341636762.9436415.1385582126167.JavaMail.zimbra@mozilla.com> Message-ID: <235330777.9436555.1385582155907.JavaMail.zimbra@mozilla.com> Hey all, I propose we promote WEBGL_compressed_texture_etc1 to Draft status. Is there any reason we know of that should disqualify it from promotion? -Jeff ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Wed Nov 27 14:06:41 2013 From: kbr...@ (Kenneth Russell) Date: Wed, 27 Nov 2013 14:06:41 -0800 Subject: [Public WebGL] Promoting WEBGL_compressed_texture_etc1 to Draft status In-Reply-To: <235330777.9436555.1385582155907.JavaMail.zimbra@mozilla.com> References: <341636762.9436415.1385582126167.JavaMail.zimbra@mozilla.com> <235330777.9436555.1385582155907.JavaMail.zimbra@mozilla.com> Message-ID: +1 to promoting it to draft status. Have you verified it against the ETC1 specs to make sure all the math is correct? -Ken On Wed, Nov 27, 2013 at 11:55 AM, Jeff Gilbert wrote: > > Hey all, > > I propose we promote WEBGL_compressed_texture_etc1 to Draft status. Is there any reason we know of that should disqualify it from promotion? 
> > -Jeff > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From jgi...@ Wed Nov 27 14:14:50 2013 From: jgi...@ (jgi...@) Date: Wed, 27 Nov 2013 14:14:50 -0800 Subject: [Public WebGL] Promoting WEBGL_compressed_texture_etc1 to Draft status In-Reply-To: References: <341636762.9436415.1385582126167.JavaMail.zimbra@mozilla.com> <235330777.9436555.1385582155907.JavaMail.zimbra@mozilla.com> Message-ID: I'll check it. Kenneth Russell wrote: >+1 to promoting it to draft status. > >Have you verified it against the ETC1 specs to make sure all the math >is correct? > >-Ken > > >On Wed, Nov 27, 2013 at 11:55 AM, Jeff Gilbert >wrote: >> >> Hey all, >> >> I propose we promote WEBGL_compressed_texture_etc1 to Draft status. >Is there any reason we know of that should disqualify it from >promotion? >> >> -Jeff >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> -- Sent from my Android phone with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jgi...@ Wed Nov 27 15:07:31 2013 From: jgi...@ (Jeff Gilbert) Date: Wed, 27 Nov 2013 15:07:31 -0800 (PST) Subject: [Public WebGL] Promoting WEBGL_compressed_texture_etc1 to Draft status In-Reply-To: References: <341636762.9436415.1385582126167.JavaMail.zimbra@mozilla.com> <235330777.9436555.1385582155907.JavaMail.zimbra@mozilla.com> Message-ID: <685479383.9454959.1385593651906.JavaMail.zimbra@mozilla.com> The math in that spec is correct. Textures are constructed out of 4x4 texel blocks, each of which is represented by a 64-bit value, thus the size in bytes is |number-of-blocks * 8|. I'll submit a pull request, pending Dean's input. -Jeff ----- Original Message ----- From: jgilbert...@ To: "Kenneth Russell" Cc: "public webgl" Sent: Wednesday, November 27, 2013 2:14:50 PM Subject: Re: [Public WebGL] Promoting WEBGL_compressed_texture_etc1 to Draft status I'll check it. Kenneth Russell wrote: >+1 to promoting it to draft status. > >Have you verified it against the ETC1 specs to make sure all the math >is correct? > >-Ken > > >On Wed, Nov 27, 2013 at 11:55 AM, Jeff Gilbert >wrote: >> >> Hey all, >> >> I propose we promote WEBGL_compressed_texture_etc1 to Draft status. >Is there any reason we know of that should disqualify it from >promotion? >> >> -Jeff >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> -- Sent from my Android phone with K-9 Mail. Please excuse my brevity. 
----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Wed Nov 27 16:02:13 2013 From: kbr...@ (Kenneth Russell) Date: Wed, 27 Nov 2013 16:02:13 -0800 Subject: [Public WebGL] Promoting WEBGL_compressed_texture_etc1 to Draft status In-Reply-To: <685479383.9454959.1385593651906.JavaMail.zimbra@mozilla.com> References: <341636762.9436415.1385582126167.JavaMail.zimbra@mozilla.com> <235330777.9436555.1385582155907.JavaMail.zimbra@mozilla.com> <685479383.9454959.1385593651906.JavaMail.zimbra@mozilla.com> Message-ID: I'd suggest just submitting the pull request and letting Dean merge or not merge it. On Wed, Nov 27, 2013 at 3:07 PM, Jeff Gilbert wrote: > The math in that spec is correct. Textures are constructed out of 4x4 texel blocks, each of which is represented by a 64-bit value, thus the size in bytes is |number-of-blocks * 8|. > > I'll submit a pull request, pending Dean's input. > > -Jeff > > ----- Original Message ----- > From: jgilbert...@ > To: "Kenneth Russell" > Cc: "public webgl" > Sent: Wednesday, November 27, 2013 2:14:50 PM > Subject: Re: [Public WebGL] Promoting WEBGL_compressed_texture_etc1 to Draft status > > I'll check it. > > Kenneth Russell wrote: >>+1 to promoting it to draft status. >> >>Have you verified it against the ETC1 specs to make sure all the math >>is correct? >> >>-Ken >> >> >>On Wed, Nov 27, 2013 at 11:55 AM, Jeff Gilbert >>wrote: >>> >>> Hey all, >>> >>> I propose we promote WEBGL_compressed_texture_etc1 to Draft status. >>Is there any reason we know of that should disqualify it from >>promotion? >>> >>> -Jeff >>> >>> ----------------------------------------------------------- >>> You are currently subscribed to public_webgl...@ >>> To unsubscribe, send an email to majordomo...@ with >>> the following command in the body of your email: >>> unsubscribe public_webgl >>> ----------------------------------------------------------- >>> > > -- > Sent from my Android phone with K-9 Mail. Please excuse my brevity. ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From jgi...@ Wed Nov 27 16:15:21 2013 From: jgi...@ (Jeff Gilbert) Date: Wed, 27 Nov 2013 16:15:21 -0800 (PST) Subject: [Public WebGL] Promoting WEBGL_compressed_texture_etc1 to Draft status In-Reply-To: References: <341636762.9436415.1385582126167.JavaMail.zimbra@mozilla.com> <235330777.9436555.1385582155907.JavaMail.zimbra@mozilla.com> <685479383.9454959.1385593651906.JavaMail.zimbra@mozilla.com> Message-ID: <633542610.9459801.1385597721335.JavaMail.zimbra@mozilla.com> PR: https://github.com/KhronosGroup/WebGL/pull/429 ----- Original Message ----- From: "Kenneth Russell" To: "Jeff Gilbert" Cc: "public webgl" Sent: Wednesday, November 27, 2013 4:02:13 PM Subject: Re: [Public WebGL] Promoting WEBGL_compressed_texture_etc1 to Draft status I'd suggest just submitting the pull request and letting Dean merge or not merge it. On Wed, Nov 27, 2013 at 3:07 PM, Jeff Gilbert wrote: > The math in that spec is correct. 
From din...@ Wed Nov 27 22:06:04 2013
From: din...@ (Dean Jackson)
Date: Thu, 28 Nov 2013 17:06:04 +1100
Subject: [Public WebGL] Promoting WEBGL_compressed_texture_etc1 to Draft status
In-Reply-To: <633542610.9459801.1385597721335.JavaMail.zimbra@mozilla.com>
References: <341636762.9436415.1385582126167.JavaMail.zimbra@mozilla.com> <235330777.9436555.1385582155907.JavaMail.zimbra@mozilla.com> <685479383.9454959.1385593651906.JavaMail.zimbra@mozilla.com> <633542610.9459801.1385597721335.JavaMail.zimbra@mozilla.com>
Message-ID: <808AD72D-8E44-4CA8-9FB9-2A326BCC0F56@apple.com>

All good here. Merged.

Dean

On 28 Nov 2013, at 11:15 am, Jeff Gilbert wrote:
>
> PR: https://github.com/KhronosGroup/WebGL/pull/429