From cal...@ Mon Oct 1 00:04:09 2012
From: cal...@ (Mark Callow)
Date: Mon, 01 Oct 2012 16:04:09 +0900
Subject: [Public WebGL] option for suggesting "low power" mode at context creation
In-Reply-To: 
References: <844AB2E0-1382-40C8-9ABC-C91DE2B5FDF5@apple.com> <5063B7F0.90108@artspark.co.jp> <5064143A.70304@artspark.co.jp> <1D20BD88-DAF6-4F37-B5C2-78B149972782@apple.com> <50650DD6.7080708@artspark.co.jp>
Message-ID: <50694069.20802@artspark.co.jp>

On 2012/09/28 20:37, Florian Bösch wrote:
> On Fri, Sep 28, 2012 at 4:39 AM, Mark Callow wrote:
>> ...
>> That said, I agree that global control over power usage belongs in the OS. Given the way the OS X Energy Saver preferences are set up, "automatic switching" is the choice that indicates you want to prolong battery life. This thread is happening because automatic switching apparently isn't smart enough.
>> ...
> Well gfx scaling as a function of user preference is not yet considered by any OS, most prominently not by those OSes that introduced it in the first place.

The Energy Saver preference I mentioned above provides exactly that consideration.

> So that is part of why scale selection is being discussed here. The other part is that there is knowledge the author of a particular use of WebGL has that cannot be known or deduced. For instance, I know that my deferred irradiance demo consumes tons of GPU resources, and it has nothing to do with aliasing or running at 60fps.

Define "tons of GPU resources". Buffer objects? Textures? Renderbuffers? The integrated GPU is part of a UMA, so you have all of system memory to store these in. Vertex attributes? Uniforms? Texture units? On my MacBook Pro the only difference I see in the limits is that the max texture size and max viewport size on the integrated GPU are 8192, while on the discrete Radeon they are 16384.

> But if, say, Wikipedia's entry on an icosahedron would like to rasterize a simple icosahedron, they *know* that their use is very minimal, and that any GPU no matter how slow will be able to run that @ 60FPS. And then take for instance webglstats or Modernizr etc. We know we don't want people's machines to flap GPUs in the wind; we're not doing anything of performance interest.

My suggested heuristic works fine for these cases.

Regards

-Mark

From cal...@ Mon Oct 1 00:18:50 2012
From: cal...@ (Mark Callow)
Date: Mon, 01 Oct 2012 16:18:50 +0900
Subject: [Public WebGL] option for suggesting "low power" mode at context creation
In-Reply-To: 
References: <844AB2E0-1382-40C8-9ABC-C91DE2B5FDF5@apple.com> <5063B7F0.90108@artspark.co.jp> <5064143A.70304@artspark.co.jp> <1D20BD88-DAF6-4F37-B5C2-78B149972782@apple.com> <50650DD6.7080708@artspark.co.jp>
Message-ID: <506943DA.5040809@artspark.co.jp>

On 2012/09/29 4:48, Kenneth Russell wrote:
> However, I'd be hesitant to implement a heuristic like this for two reasons. First, Mac OS is the only OS I know of that does the "deep magic" to automatically migrate OpenGL resources between GPUs -- at least, this is my understanding of how automatic graphics switching works there. On other OSs like Windows I don't know how it works; I think the D3D device can be created against a particular graphics adapter, and I don't think resources can migrate between GPUs. For best portability, an up-front decision when creating the context is best.

Context lost and restored events can be used to transition an app from one GPU to another when there is no magic migration.

> Second, if switching between GPUs really does work and one of the GPUs doesn't pass the WebGL conformance suite, switching silently behind the scenes could cause certain OpenGL operations to start failing in the middle of the application's run. This would cause bugs that are impossible to diagnose.

An application could fail for the same reason with the proposed context creation preference, though it may be more likely to fail at the start. The way to debug it is the same: see if the app works without the "save power" creation property, i.e. try with the OS or browser preference set to "Always use discrete".

Regards

-Mark

From pya...@ Mon Oct 1 02:00:34 2012
From: pya...@ (Florian Bösch)
Date: Mon, 1 Oct 2012 11:00:34 +0200
Subject: [Public WebGL] option for suggesting "low power" mode at context creation
In-Reply-To: <50694069.20802@artspark.co.jp>
References: <844AB2E0-1382-40C8-9ABC-C91DE2B5FDF5@apple.com> <5063B7F0.90108@artspark.co.jp> <5064143A.70304@artspark.co.jp> <1D20BD88-DAF6-4F37-B5C2-78B149972782@apple.com> <50650DD6.7080708@artspark.co.jp> <50694069.20802@artspark.co.jp>
Message-ID: 

On Mon, Oct 1, 2012 at 9:04 AM, Mark Callow wrote:
> On 2012/09/28 20:37, Florian Bösch wrote:
>> On Fri, Sep 28, 2012 at 4:39 AM, Mark Callow wrote:
>>> ...
>>> That said, I agree that global control over power usage belongs in the OS. Given the way the OS X Energy Saver preferences are set up, "automatic switching" is the choice that indicates you want to prolong battery life. This thread is happening because automatic switching apparently isn't smart enough.
>>> ...
>> Well gfx scaling as a function of user preference is not yet considered by any OS, most prominently not by those OSes that introduced it in the first place.
>
> The Energy Saver preference I mentioned above provides exactly that consideration.
>
>> So that is part of why scale selection is being discussed here. The other part is that there is knowledge the author of a particular use of WebGL has that cannot be known or deduced. For instance, I know that my deferred irradiance demo consumes tons of GPU resources, and it has nothing to do with aliasing or running at 60fps.
>
> Define "tons of GPU resources".
Per Frame: Deferred irradiance: render scene albedo TT, render scene depth/normal TT, render scene depth from light TT, gaussian blur scene depth and depth squared TT, deferred shadow map precalculated lightprobe direct irradiance TT, deferred shadow map scene direct light TT, for i in range(3): sum up spherical harmonics for light probes from direct & indirect radiance, update indirect radiance from spherical harmonics, deferred render indirect scene irradiance TT via deferred light propagation values. Combine albedo, direct radiance, indirect radiance for output to display.

> My suggested heuristic works fine for these cases.

1) No, because

var ctx = canvas.getContext('webgl');
$('button').click(function(){
    // start doing something expensive with the context requiring a discrete GPU
})

2) No, because

var ctx = canvas.getContext('webgl');
preprocessSomethingExpensivePreprocessingFor10Seconds(ctx);
runMinimalResourceMainloopThatWorksFineOnIntegratedGPU(ctx);

3) No, because

var ctx = canvas.getContext('webgl');
runSomeCode(ctx); // randomly fails or doesn't

From gri...@ Mon Oct 1 03:44:15 2012
From: gri...@ (Andrew)
Date: Mon, 1 Oct 2012 12:44:15 +0200
Subject: [Public WebGL] uniform array of structs in latest chrome
Message-ID: 

Hi,

I'm not sure if this is a good place to write to, since the topics here look more advanced and this is basically just a beginner's question, so feel free to redirect me. I've had a small WebGL demo that's been running fine for months, but I noticed some time ago that it stopped working correctly in the Chrome canary build, and now that I've updated to Chrome 22 it fails there as well. It still works in Firefox.

Basically what the problem comes down to is that I have a uniform array of struct objects that I use in my fragment shader to simply draw balls, like this:

struct Ball {
    vec2 pos;
    float radius;
};
const int numOfBalls = 9;
uniform Ball u_balls[numOfBalls];

In main I simply check if the current pixel (gl_FragCoord) belongs to at least one of the balls and set the color accordingly.

What happens is that one ball gets drawn where it should, but the rest are stuck in the upper left corner of the browser window. I believe it's the first one that is displayed, except when I set numOfBalls to 1; then that one too is in the upper left corner. It's strange that when I capture a frame with WebGL Inspector, the uniform calls look correct, like uniform2fv("u_balls[0].pos", [x, y]), etc. (where x, y are floats). It seems like radius is not being set either, but I have one uniform variable for the color of all the balls, and that seems to get passed along fine, because the balls get that color.

Does anyone have any ideas what could have broken my demo? I can put up a simplified version of the problem if this is not enough information. Any help is greatly appreciated!
thanks, Andrew ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Mon Oct 1 04:09:07 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Mon, 1 Oct 2012 13:09:07 +0200 Subject: [Public WebGL] uniform array of structs in latest chrome In-Reply-To: References: Message-ID: According to the specification ( http://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf) it should work. "The uniform qualifier can be used with any of the basic data types, or when declaring a variable whose type is a structure, or an array of any of these." The relevant conformance test ( https://www.khronos.org/registry/webgl/sdk/tests/conformance/glsl/misc/shader-with-array-of-structs-uniform.html) passes on my machine. Does it pass for you? On Mon, Oct 1, 2012 at 12:44 PM, Andrew wrote: > > Hi, > > I'm not sure if this is a good place to write to, since the topics > here look more advanced, and this is basically just a beginner's > question, so feel free to redirect me. > I've had a small webgl demo that's been running fine for months, but I > noticed some time ago that in the chrome canary build, and now that I > updated to chrome 22 in that as well, it stopped working correctly. It > still works in Firefox. > > Basically what the problem comes down to is that I have a uniform > array of struct objects that I use in my fragment shader to simply > draw balls, like this: > > struct Ball { > vec2 pos; > float radius; > }; > const int numOfBalls = 9; > uniform Ball u_balls[numOfBalls]; > > In the main I simply check if the current pixel (gl_FragCoord) belongs > to at least one of the balls and I set the color accordingly. > > What happens is that one ball gets drawn to where it should, but the > rest are stuck in the upper left corner of the browser window. I > believe it's the first that is displayed, except when I set numOfBalls > to 1, then that's too in the upper left corner. It's strange that when > I capture a frame with WebGL Inspector, the uniform calls look > correct, like: uniform2fv("u_balls[0].pos", [x, y]), etc. (where x, y > is a float). Seems like radius is not being set either, but I have one > uniform variable for the color of all the balls, and that seems to get > passed along well, because the balls get that color. > > Does anyone have any ideas what could have broken my demo? I can put > up a simplified version of the problem if this is not enough > information. Any help is greatly appreciated! > > thanks, > Andrew > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gri...@ Mon Oct 1 05:34:29 2012 From: gri...@ (Andrew) Date: Mon, 1 Oct 2012 14:34:29 +0200 Subject: [Public WebGL] uniform array of structs in latest chrome In-Reply-To: References: Message-ID: That test gave me errors: FAIL uniforms[0] should be u_colors[0].color1. Was u_colors[0].color1[0]. FAIL uniforms[1] should be u_colors[0].color2. Was u_colors[0].color2[0]. FAIL uniforms[2] should be u_colors[1].color1. 
Was u_colors[1].color1[0]. FAIL uniforms[3] should be u_colors[1].color2. Was u_colors[1].color2[0]. So I filed this bug: firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=795863 for chrome there was already something very similar: http://code.google.com/p/chromium/issues/detail?id=146234 On Mon, Oct 1, 2012 at 1:09 PM, Florian B?sch wrote: > According to the specification > (http://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf) > it should work. > > "The uniform qualifier can be used with any of the basic data types, or when > declaring a variable whose > type is a structure, or an array of any of these." > > The relevant conformance test > (https://www.khronos.org/registry/webgl/sdk/tests/conformance/glsl/misc/shader-with-array-of-structs-uniform.html) > passes on my machine. Does it pass for you? > > On Mon, Oct 1, 2012 at 12:44 PM, Andrew wrote: >> >> >> Hi, >> >> I'm not sure if this is a good place to write to, since the topics >> here look more advanced, and this is basically just a beginner's >> question, so feel free to redirect me. >> I've had a small webgl demo that's been running fine for months, but I >> noticed some time ago that in the chrome canary build, and now that I >> updated to chrome 22 in that as well, it stopped working correctly. It >> still works in Firefox. >> >> Basically what the problem comes down to is that I have a uniform >> array of struct objects that I use in my fragment shader to simply >> draw balls, like this: >> >> struct Ball { >> vec2 pos; >> float radius; >> }; >> const int numOfBalls = 9; >> uniform Ball u_balls[numOfBalls]; >> >> In the main I simply check if the current pixel (gl_FragCoord) belongs >> to at least one of the balls and I set the color accordingly. >> >> What happens is that one ball gets drawn to where it should, but the >> rest are stuck in the upper left corner of the browser window. I >> believe it's the first that is displayed, except when I set numOfBalls >> to 1, then that's too in the upper left corner. It's strange that when >> I capture a frame with WebGL Inspector, the uniform calls look >> correct, like: uniform2fv("u_balls[0].pos", [x, y]), etc. (where x, y >> is a float). Seems like radius is not being set either, but I have one >> uniform variable for the color of all the balls, and that seems to get >> passed along well, because the balls get that color. >> >> Does anyone have any ideas what could have broken my demo? I can put >> up a simplified version of the problem if this is not enough >> information. Any help is greatly appreciated! 
>> >> thanks, >> Andrew >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gma...@ Mon Oct 1 14:48:34 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo56S+55SoKQ==?=) Date: Mon, 1 Oct 2012 14:48:34 -0700 Subject: [Public WebGL] WebGL perf regression tests moved to GitHub; design still in flux In-Reply-To: <915809960.868631.1348754727917.JavaMail.root@mozilla.com> References: <50635854.9020306@mozilla.com> <915809960.868631.1348754727917.JavaMail.root@mozilla.com> Message-ID: On Thu, Sep 27, 2012 at 7:05 AM, Vladimir Vukicevic wrote: > > > Benoit and I had some conversations about this yesterday; doing perf tests > here is tricky. > > > 2) It's not clear that setTimeout(0) has any use. If one wants to > > measure stuff unrelated to compositing, postMessage should be at least > > as good as setTimeout(0) and often better in current browsers (not > > throttled). So we should be able to do with only RAF and postMessage. > > > > 3) When using postMessage, one should measure only the time taken by the > > payload, instead of measuring the lapse of time between two > > consecutive callbacks. > > It depends on what you're timing. If you want a benchmark that just times > how long WebGL calls take, then something like: > > var t0 = window.performance.now(); > do_calls(); > gl.finish(); > var elapsed = window.performance.now() - t0; > > should be enough. Do enough of those, triggered via setTimeout(0) to > avoid blocking the browser, and you should have a good time recording. But > that will only measure the actual WebGL calls + GPU execution time; if you > want to measure the entire rendering pipeline, that'll be trickier. > > I originally suggested using postMessage instead of rAF because rAF will > aim to give you 60fps (or whatever) and won't give you anything faster, > whereas postMessage is unthrottled [for now]... but there's no guarantee > that a full composite run will happen in between postMessage message > events, so it doesn't actually help. Ideally you'd have: > > |--- Frame 1 --------| |--- Frame 2 --------| ... > [callback] [composite] [callback] [composite] > > and you could measure the start of frame 2 minus the start of frame 1 as > the time, but if you're using postMessage, you could easily get: > > [callback] [callback] [composite] [callback] [composite] [callback] > [callback] [callback] [composite] ... etc. > > since there's no guarantee that you'll get a full frame per callback. > > Benoit came up with an interesting approach to actually measure this, > though. We should use requestAnimationFrame, but then adjust the payload > until we -just- hit 60fps (or whatever the target cap is). Pick some GPU > workload that you're measuring (perhaps a really simple one if you want to > measure compositing overhead) and run it in a loop in the frame callback; > keep increasing the number of loop iterations until you start dropping > under 60fps. 
The result of the test is the number of iterations of the > workload you can run during a frame and still maintain 60fps. If the > browser's compositing overhead increases, then the number of iterations you > can do decreases; similarly, if the time it takes to execute an iteration > goes up, the final score decreases as well. > So just FYI, I have a harness that does this https://code.google.com/p/webglsamples/source/browse/js/perf-harness.js and a test that uses that harness http://webglsamples.googlecode.com/hg/lots-o-images/lots-o-images-draw-elements.html Unfortunately I've had lots of problems using it because of timing variations depending on the platform. Issue #1: Frame averaging My first attempt to use an average the frame rates across N frames. Say N is 16 frames. If the average was below the target (60fps) then I'd double the number of things to draw. If it was above the target I'd half the number of things to draw. Basically I'd get an average frame rate that is high and the harness would keep adding more things to draw even though the instantaneous frame rate was too slow. So I'd end up increasing the number of things to draw over several frames, finally the average would be low enough to start decreasing the number but now I'd have the problem in the opposite direction, it would be decreasing the number for several frames because the average was too low even though the instantaneous was high Issue #2: Inconsistent frame rates My second attempt (the current perf harness) uses instantaneous frame rates. It uses a "velocity" for how much more or less to draw each frame. It doubles velocity if fps is below the target. The moment it goes above the target it cuts reverses the velocity and divides by 4. That seemed like it would quickly ping pong to the max draw count. It does on some platforms but on others (for example in my Linux box) the frame rate I get ping pongs between 30 and 60 fps (at least according to the timers) so that even when drawing only 10-50 things it's never increasing the count. You can manually lower the target frame rate to like 25fps and the count will then jump to 2k-5k, then move the target back up to 60fps and it will stay there. Maybe you guys have some ideas on how to fix it > > Benoit suggested that we try to keep the result of that be time, but I > don't think we should; I'm not sure if time has any specific meaning there > (because unless you call glFinish you don't really know who's going to be > paying the full GPU cost); better to keep the two separate. > > So, that said, I think there are really only two types of tests that we > need: > > 1 - tests that measure WebGL call speed; these can be run from > setTimeout(0) and will just measure raw elapsed time for a set of calls. > > 2 - tests that measure full compositing performance; these should run from > requestAnimationFrame, and should use the approach bjacob came up with > above. > > This -should- give us pretty solid perf coverage; it should be very useful > as a performance regression test for WebGL implementations. In theory, all > tests could be run as #2, though that will only really give us 16ms > precision for time.. and for some things (texture conversion speed on > upload, for example) smaller time differences could result. 
> > - Vlad > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Mon Oct 1 15:03:46 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Tue, 2 Oct 2012 00:03:46 +0200 Subject: [Public WebGL] WebGL perf regression tests moved to GitHub; design still in flux In-Reply-To: References: <50635854.9020306@mozilla.com> <915809960.868631.1348754727917.JavaMail.root@mozilla.com> Message-ID: If I understand this correctly you're trying to make an end-run around the fact that you want to measure composite, yet don't want to be bothered by composite... or something like that. So in the performance there are two measure: 1) how much time (including gl.finish) does it take to clear the rendering queue and the buffer you're intending to composite is now filled 2) How much time does it take to actually composite this with the page. Isn't it that pretty much no matter what you put onto your buffer, composite will take the the same time? I mean, it can be black, or transparent, or white, or full of noise, doesn't matter. Every pixel gets blended. So what you do during frame-rendering actually has 0 impact to regressions in composition, so these are two completely different topics. For frame render time, gl.finish and the high perf timer should be sufficient to detect FPS/composite/rAf independent times. And to measure compositing, well, maybe add some API that lets you measure that specifically. Could be interesting anways for app developers as well, to know how much time they have left for actually doing stuff per frame. On Mon, Oct 1, 2012 at 11:48 PM, Gregg Tavares (??) wrote: > > > > On Thu, Sep 27, 2012 at 7:05 AM, Vladimir Vukicevic wrote: > >> >> >> Benoit and I had some conversations about this yesterday; doing perf >> tests here is tricky. >> >> > 2) It's not clear that setTimeout(0) has any use. If one wants to >> > measure stuff unrelated to compositing, postMessage should be at least >> > as good as setTimeout(0) and often better in current browsers (not >> > throttled). So we should be able to do with only RAF and postMessage. >> > >> > 3) When using postMessage, one should measure only the time taken by the >> > payload, instead of measuring the lapse of time between two >> > consecutive callbacks. >> >> It depends on what you're timing. If you want a benchmark that just >> times how long WebGL calls take, then something like: >> >> var t0 = window.performance.now(); >> do_calls(); >> gl.finish(); >> var elapsed = window.performance.now() - t0; >> >> should be enough. Do enough of those, triggered via setTimeout(0) to >> avoid blocking the browser, and you should have a good time recording. But >> that will only measure the actual WebGL calls + GPU execution time; if you >> want to measure the entire rendering pipeline, that'll be trickier. >> >> I originally suggested using postMessage instead of rAF because rAF will >> aim to give you 60fps (or whatever) and won't give you anything faster, >> whereas postMessage is unthrottled [for now]... but there's no guarantee >> that a full composite run will happen in between postMessage message >> events, so it doesn't actually help. 
Ideally you'd have: >> >> |--- Frame 1 --------| |--- Frame 2 --------| ... >> [callback] [composite] [callback] [composite] >> >> and you could measure the start of frame 2 minus the start of frame 1 as >> the time, but if you're using postMessage, you could easily get: >> >> [callback] [callback] [composite] [callback] [composite] [callback] >> [callback] [callback] [composite] ... etc. >> >> since there's no guarantee that you'll get a full frame per callback. >> >> Benoit came up with an interesting approach to actually measure this, >> though. We should use requestAnimationFrame, but then adjust the payload >> until we -just- hit 60fps (or whatever the target cap is). Pick some GPU >> workload that you're measuring (perhaps a really simple one if you want to >> measure compositing overhead) and run it in a loop in the frame callback; >> keep increasing the number of loop iterations until you start dropping >> under 60fps. The result of the test is the number of iterations of the >> workload you can run during a frame and still maintain 60fps. If the >> browser's compositing overhead increases, then the number of iterations you >> can do decreases; similarly, if the time it takes to execute an iteration >> goes up, the final score decreases as well. >> > > So just FYI, I have a harness that does this > > https://code.google.com/p/webglsamples/source/browse/js/perf-harness.js > > and a test that uses that harness > > > http://webglsamples.googlecode.com/hg/lots-o-images/lots-o-images-draw-elements.html > > Unfortunately I've had lots of problems using it because of timing > variations depending on the platform. > > Issue #1: Frame averaging > > My first attempt to use an average the frame rates across N frames. Say N > is 16 frames. If the average was below the target (60fps) then I'd double > the number of things to draw. If it was above the target I'd half the > number of things to draw. > > Basically I'd get an average frame rate that is high and the harness would > keep adding more things to draw even though the instantaneous frame rate > was too slow. So I'd end up increasing the number of things to draw over > several frames, finally the average would be low enough to start decreasing > the number but now I'd have the problem in the opposite direction, it would > be decreasing the number for several frames because the average was too low > even though the instantaneous was high > > Issue #2: Inconsistent frame rates > > My second attempt (the current perf harness) uses instantaneous frame > rates. It uses a "velocity" for how much more or less to draw each frame. > It doubles velocity if fps is below the target. The moment it goes above > the target it cuts reverses the velocity and divides by 4. That seemed like > it would quickly ping pong to the max draw count. It does on some platforms > but on others (for example in my Linux box) the frame rate I get ping pongs > between 30 and 60 fps (at least according to the timers) so that even when > drawing only 10-50 things it's never increasing the count. You can manually > lower the target frame rate to like 25fps and the count will then jump to > 2k-5k, then move the target back up to 60fps and it will stay there. 
> > Maybe you guys have some ideas on how to fix it > > > > > > > >> >> Benoit suggested that we try to keep the result of that be time, but I >> don't think we should; I'm not sure if time has any specific meaning there >> (because unless you call glFinish you don't really know who's going to be >> paying the full GPU cost); better to keep the two separate. >> >> So, that said, I think there are really only two types of tests that we >> need: >> >> 1 - tests that measure WebGL call speed; these can be run from >> setTimeout(0) and will just measure raw elapsed time for a set of calls. >> >> 2 - tests that measure full compositing performance; these should run >> from requestAnimationFrame, and should use the approach bjacob came up with >> above. >> >> This -should- give us pretty solid perf coverage; it should be very >> useful as a performance regression test for WebGL implementations. In >> theory, all tests could be run as #2, though that will only really give us >> 16ms precision for time.. and for some things (texture conversion speed on >> upload, for example) smaller time differences could result. >> >> - Vlad >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Mon Oct 1 15:22:42 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo56S+55SoKQ==?=) Date: Mon, 1 Oct 2012 15:22:42 -0700 Subject: [Public WebGL] WebGL perf regression tests moved to GitHub; design still in flux In-Reply-To: References: <50635854.9020306@mozilla.com> <915809960.868631.1348754727917.JavaMail.root@mozilla.com> Message-ID: Actually I don't know what each person's goals are. My goal was to provide a harness to be able to find out how much stuff you can draw at 60fps using different techniques. Tests that stall the pipeline with calls to gl.finish will not do that. This is especially true in Chrome with its multi process architecture were WebGL is just generating commands that end up getting executed in parallel in another process. Calling gl.finish stalls both processes and removes all the parallelism. On Mon, Oct 1, 2012 at 3:03 PM, Florian B?sch wrote: > If I understand this correctly you're trying to make an end-run around the > fact that you want to measure composite, yet don't want to be bothered by > composite... or something like that. > > So in the performance there are two measure: > 1) how much time (including gl.finish) does it take to clear the rendering > queue and the buffer you're intending to composite is now filled > 2) How much time does it take to actually composite this with the page. > > Isn't it that pretty much no matter what you put onto your buffer, > composite will take the the same time? I mean, it can be black, or > transparent, or white, or full of noise, doesn't matter. Every pixel gets > blended. So what you do during frame-rendering actually has 0 impact to > regressions in composition, so these are two completely different topics. > For frame render time, gl.finish and the high perf timer should be > sufficient to detect FPS/composite/rAf independent times. And to measure > compositing, well, maybe add some API that lets you measure that > specifically. 
Could be interesting anways for app developers as well, to > know how much time they have left for actually doing stuff per frame. > > > On Mon, Oct 1, 2012 at 11:48 PM, Gregg Tavares (??) wrote: > >> >> >> >> On Thu, Sep 27, 2012 at 7:05 AM, Vladimir Vukicevic > > wrote: >> >>> >>> >>> Benoit and I had some conversations about this yesterday; doing perf >>> tests here is tricky. >>> >>> > 2) It's not clear that setTimeout(0) has any use. If one wants to >>> > measure stuff unrelated to compositing, postMessage should be at least >>> > as good as setTimeout(0) and often better in current browsers (not >>> > throttled). So we should be able to do with only RAF and postMessage. >>> > >>> > 3) When using postMessage, one should measure only the time taken by >>> the >>> > payload, instead of measuring the lapse of time between two >>> > consecutive callbacks. >>> >>> It depends on what you're timing. If you want a benchmark that just >>> times how long WebGL calls take, then something like: >>> >>> var t0 = window.performance.now(); >>> do_calls(); >>> gl.finish(); >>> var elapsed = window.performance.now() - t0; >>> >>> should be enough. Do enough of those, triggered via setTimeout(0) to >>> avoid blocking the browser, and you should have a good time recording. But >>> that will only measure the actual WebGL calls + GPU execution time; if you >>> want to measure the entire rendering pipeline, that'll be trickier. >>> >>> I originally suggested using postMessage instead of rAF because rAF will >>> aim to give you 60fps (or whatever) and won't give you anything faster, >>> whereas postMessage is unthrottled [for now]... but there's no guarantee >>> that a full composite run will happen in between postMessage message >>> events, so it doesn't actually help. Ideally you'd have: >>> >>> |--- Frame 1 --------| |--- Frame 2 --------| ... >>> [callback] [composite] [callback] [composite] >>> >>> and you could measure the start of frame 2 minus the start of frame 1 as >>> the time, but if you're using postMessage, you could easily get: >>> >>> [callback] [callback] [composite] [callback] [composite] [callback] >>> [callback] [callback] [composite] ... etc. >>> >>> since there's no guarantee that you'll get a full frame per callback. >>> >>> Benoit came up with an interesting approach to actually measure this, >>> though. We should use requestAnimationFrame, but then adjust the payload >>> until we -just- hit 60fps (or whatever the target cap is). Pick some GPU >>> workload that you're measuring (perhaps a really simple one if you want to >>> measure compositing overhead) and run it in a loop in the frame callback; >>> keep increasing the number of loop iterations until you start dropping >>> under 60fps. The result of the test is the number of iterations of the >>> workload you can run during a frame and still maintain 60fps. If the >>> browser's compositing overhead increases, then the number of iterations you >>> can do decreases; similarly, if the time it takes to execute an iteration >>> goes up, the final score decreases as well. >>> >> >> So just FYI, I have a harness that does this >> >> https://code.google.com/p/webglsamples/source/browse/js/perf-harness.js >> >> and a test that uses that harness >> >> >> http://webglsamples.googlecode.com/hg/lots-o-images/lots-o-images-draw-elements.html >> >> Unfortunately I've had lots of problems using it because of timing >> variations depending on the platform. 
>> >> Issue #1: Frame averaging >> >> My first attempt to use an average the frame rates across N frames. Say N >> is 16 frames. If the average was below the target (60fps) then I'd double >> the number of things to draw. If it was above the target I'd half the >> number of things to draw. >> >> Basically I'd get an average frame rate that is high and the harness >> would keep adding more things to draw even though the instantaneous frame >> rate was too slow. So I'd end up increasing the number of things to draw >> over several frames, finally the average would be low enough to start >> decreasing the number but now I'd have the problem in the opposite >> direction, it would be decreasing the number for several frames because the >> average was too low even though the instantaneous was high >> >> Issue #2: Inconsistent frame rates >> >> My second attempt (the current perf harness) uses instantaneous frame >> rates. It uses a "velocity" for how much more or less to draw each frame. >> It doubles velocity if fps is below the target. The moment it goes above >> the target it cuts reverses the velocity and divides by 4. That seemed like >> it would quickly ping pong to the max draw count. It does on some platforms >> but on others (for example in my Linux box) the frame rate I get ping pongs >> between 30 and 60 fps (at least according to the timers) so that even when >> drawing only 10-50 things it's never increasing the count. You can manually >> lower the target frame rate to like 25fps and the count will then jump to >> 2k-5k, then move the target back up to 60fps and it will stay there. >> >> Maybe you guys have some ideas on how to fix it >> >> >> >> >> >> >> >>> >>> Benoit suggested that we try to keep the result of that be time, but I >>> don't think we should; I'm not sure if time has any specific meaning there >>> (because unless you call glFinish you don't really know who's going to be >>> paying the full GPU cost); better to keep the two separate. >>> >>> So, that said, I think there are really only two types of tests that we >>> need: >>> >>> 1 - tests that measure WebGL call speed; these can be run from >>> setTimeout(0) and will just measure raw elapsed time for a set of calls. >>> >>> 2 - tests that measure full compositing performance; these should run >>> from requestAnimationFrame, and should use the approach bjacob came up with >>> above. >>> >>> This -should- give us pretty solid perf coverage; it should be very >>> useful as a performance regression test for WebGL implementations. In >>> theory, all tests could be run as #2, though that will only really give us >>> 16ms precision for time.. and for some things (texture conversion speed on >>> upload, for example) smaller time differences could result. >>> >>> - Vlad >>> >>> ----------------------------------------------------------- >>> You are currently subscribed to public_webgl...@ >>> To unsubscribe, send an email to majordomo...@ with >>> the following command in the body of your email: >>> unsubscribe public_webgl >>> ----------------------------------------------------------- >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vla...@ Mon Oct 1 15:44:46 2012 From: vla...@ (Vladimir Vukicevic) Date: Mon, 1 Oct 2012 15:44:46 -0700 (PDT) Subject: [Public WebGL] WebGL perf regression tests moved to GitHub; design still in flux In-Reply-To: Message-ID: <96910951.1409240.1349131486262.JavaMail.root@mozilla.com> ----- Original Message ----- > If I understand this correctly you're trying to make an end-run > around the fact that you want to measure composite, yet don't want > to be bothered by composite... or something like that. > > > So in the performance there are two measure: > 1) how much time (including gl.finish) does it take to clear the > rendering queue and the buffer you're intending to composite is now > filled Right, that's the first measurement type that I outlined. Just straight time elapsed in JS. > 2) How much time does it take to actually composite this with the > page. > > > Isn't it that pretty much no matter what you put onto your buffer, > composite will take the the same time? I mean, it can be black, or > transparent, or white, or full of noise, doesn't matter. Every pixel > gets blended. This is corect. > So what you do during frame-rendering actually has 0 > impact to regressions in composition, so these are two completely > different topics. For frame render time, gl.finish and the high perf > timer should be sufficient to detect FPS/composite/rAf independent > times. And to measure compositing, well, maybe add some API that > lets you measure that specifically. Could be interesting anways for > app developers as well, to know how much time they have left for > actually doing stuff per frame. Yup -- it's not that we don't want to measure composite, we're making an end-run around the fact that there exists no API for measuring composite. That API could also be somewhat complicated to define, because each browser could have a very different idea of what it actually means to composite. So, instead, I'm more interested in treating it as "overhead" -- I want to measure how much overhead a browser adds to each frame. This could be compositing overhead, it could be function call/JS bridge overhead, etc. That's where the second attempt comes into play, and it sounds like what gregg's already tried to implement. - Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Mon Oct 1 17:05:15 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Tue, 2 Oct 2012 02:05:15 +0200 Subject: [Public WebGL] WebGL perf regression tests moved to GitHub; design still in flux In-Reply-To: References: <50635854.9020306@mozilla.com> <915809960.868631.1348754727917.JavaMail.root@mozilla.com> Message-ID: On Tue, Oct 2, 2012 at 12:22 AM, Gregg Tavares (??) wrote: > Actually I don't know what each person's goals are. > > My goal was to provide a harness to be able to find out how much stuff you > can draw at 60fps using different techniques. Tests that stall the pipeline > with calls to gl.finish will not do that. This is especially true in Chrome > with its multi process architecture were WebGL is just generating commands > that end up getting executed in parallel in another process. Calling > gl.finish stalls both processes and removes all the parallelism. 
> Well unless you call a stalling function (such as finish, texImage2D, bufferData, readPixels, compileShader, linkProgram, uniform**[v]) it doesn't much matter that it's off process. The driver itself won't stall on other calls (at least in theory). And data delivered as in those stalling calls will stall the other process anyway, since it's got to finish reading the bits before it can let the sending process continue lest that process deletes or modifies the stuff in flight (actually map buffers would be really nice, but then we're getting into fence territory and dragons live there). Anyways, if you emit no stalling calls whatsoever the rendering queue would just fill up and you'd be none the wiser at 60fps, so the browser has to finish it eventually, that'll be when the browsers GL context performs a buffer swap at the very latest (assuming an accelerated compositor). So what you can do without gl.finish() is pretty much the same that you can do with gl.finish(), but if you don't call gl.finish() you might be free (to some degree or other) to do other stuff while the GPU and GPU process churns over the rendering. So gl.finish() is actually a fairly good way to measure how long stuff takes. Of course it's not a terribly performant way to do things because while you wait to see how long stuff takes, you could do other stuff, but you get the drift. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Mon Oct 1 17:27:21 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo56S+55SoKQ==?=) Date: Mon, 1 Oct 2012 17:27:21 -0700 Subject: [Public WebGL] WebGL perf regression tests moved to GitHub; design still in flux In-Reply-To: References: <50635854.9020306@mozilla.com> <915809960.868631.1348754727917.JavaMail.root@mozilla.com> Message-ID: On Mon, Oct 1, 2012 at 5:05 PM, Florian B?sch wrote: > On Tue, Oct 2, 2012 at 12:22 AM, Gregg Tavares (??) wrote: > >> Actually I don't know what each person's goals are. >> >> My goal was to provide a harness to be able to find out how much stuff >> you can draw at 60fps using different techniques. Tests that stall the >> pipeline with calls to gl.finish will not do that. This is especially true >> in Chrome with its multi process architecture were WebGL is just generating >> commands that end up getting executed in parallel in another process. >> Calling gl.finish stalls both processes and removes all the parallelism. >> > Well unless you call a stalling function (such as finish, texImage2D, > bufferData, readPixels, compileShader, linkProgram, uniform**[v]) it > doesn't much matter that it's off process. The driver itself won't stall on > other calls (at least in theory). And data delivered as in those stalling > calls will stall the other process anyway, since it's got to finish reading > the bits before it can let the sending process continue lest that process > deletes or modifies the stuff in flight (actually map buffers would be > really nice, but then we're getting into fence territory and dragons live > there). Anyways, if you emit no stalling calls whatsoever the rendering > queue would just fill up and you'd be none the wiser at 60fps, so the > browser has to finish it eventually, that'll be when the browsers GL > context performs a buffer swap at the very latest (assuming an accelerated > compositor). 
So what you can do without gl.finish() is pretty much the same > that you can do with gl.finish(), but if you don't call gl.finish() you > might be free (to some degree or other) to do other stuff while the GPU and > GPU process churns over the rendering. So gl.finish() is actually a fairly > good way to measure how long stuff takes. Of course it's not a terribly > performant way to do things because while you wait to see how long stuff > takes, you could do other stuff, but you get the drift. > I'm not sure I follow what you're trying to explain. If I have 1 core I get this 1:[S][LONG][S][LONG][S][LONG][S][LONG][S][LONG][S][LONG] On 2 cores I get this 1:[S][S][S][S][S][S] 2:[LONG][LONG][LONG][LONG][LONG][LONG] In the example above I'm doing 6 short [S] and 6 long [LONG] operations where I made their size represent the time they take to execute. With 2 processes I can execute 4 more operations, 10 total in the same amount of time 1 core took to process 6 opretations. 1:[S][S][S][S][S][S][S][S][S][S] 2:[LONG][LONG][LONG][LONG][LONG][LONG][LONG][LONG][LONG] That assumes I'm not GPU bound but if I'm GPU bound the only thing that matters is the GPU. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Mon Oct 1 17:35:00 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Tue, 2 Oct 2012 02:35:00 +0200 Subject: [Public WebGL] WebGL perf regression tests moved to GitHub; design still in flux In-Reply-To: References: <50635854.9020306@mozilla.com> <915809960.868631.1348754727917.JavaMail.root@mozilla.com> Message-ID: On Tue, Oct 2, 2012 at 2:27 AM, Gregg Tavares (??) wrote: > I'm not sure I follow what you're trying to explain. If I have 1 core I > get this > > 1:[S][LONG][S][LONG][S][LONG][S][LONG][S][LONG][S][LONG] > > On 2 cores I get this > > 1:[S][S][S][S][S][S] > 2:[LONG][LONG][LONG][LONG][LONG][LONG] > > In the example above I'm doing 6 short [S] and 6 long [LONG] operations > where I made their size represent the time they take to execute. > > With 2 processes I can execute 4 more operations, 10 total in the same > amount of time 1 core took to process 6 opretations. > > 1:[S][S][S][S][S][S][S][S][S][S] > 2:[LONG][LONG][LONG][LONG][LONG][LONG][LONG][LONG][LONG] > > That assumes I'm not GPU bound but if I'm GPU bound the only thing that > matters is the GPU. > If you're not CPU bound on the rendering, and you avoid blocking calls, then it'll look a bit like this: JS: [s][s][s][s] (like 0.1ms) GPU process: [s][s][s][s] [...........................................................................................................................................................swap] Driver: [stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff] So it really doesn't matter for the GPU bound performance measurement if you call gl.finish() because that'll just make it look like this: JS: [s][s][s][s][...........................................................................................................................................................................finish] GPU process: [s][s][s][s] [................................................................................................................................................. 
finish][swap] Driver: [stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff] -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndu...@ Mon Oct 1 17:41:14 2012 From: ndu...@ (Nat Duca) Date: Mon, 1 Oct 2012 17:41:14 -0700 Subject: [Public WebGL] WebGL perf regression tests moved to GitHub; design still in flux In-Reply-To: References: <50635854.9020306@mozilla.com> <915809960.868631.1348754727917.JavaMail.root@mozilla.com> Message-ID: In this topic, techniques like arb_timer_query (and the equivalent "how much time was spent in the driver") become very important. Armed with these pieces of information, you can accomplish much of what is being discussed here. On Mon, Oct 1, 2012 at 5:27 PM, Gregg Tavares (??) wrote: > > > > On Mon, Oct 1, 2012 at 5:05 PM, Florian B?sch wrote: > >> On Tue, Oct 2, 2012 at 12:22 AM, Gregg Tavares (??) wrote: >> >>> Actually I don't know what each person's goals are. >>> >>> My goal was to provide a harness to be able to find out how much stuff >>> you can draw at 60fps using different techniques. Tests that stall the >>> pipeline with calls to gl.finish will not do that. This is especially true >>> in Chrome with its multi process architecture were WebGL is just generating >>> commands that end up getting executed in parallel in another process. >>> Calling gl.finish stalls both processes and removes all the parallelism. >>> >> Well unless you call a stalling function (such as finish, texImage2D, >> bufferData, readPixels, compileShader, linkProgram, uniform**[v]) it >> doesn't much matter that it's off process. The driver itself won't stall on >> other calls (at least in theory). And data delivered as in those stalling >> calls will stall the other process anyway, since it's got to finish reading >> the bits before it can let the sending process continue lest that process >> deletes or modifies the stuff in flight (actually map buffers would be >> really nice, but then we're getting into fence territory and dragons live >> there). Anyways, if you emit no stalling calls whatsoever the rendering >> queue would just fill up and you'd be none the wiser at 60fps, so the >> browser has to finish it eventually, that'll be when the browsers GL >> context performs a buffer swap at the very latest (assuming an accelerated >> compositor). So what you can do without gl.finish() is pretty much the same >> that you can do with gl.finish(), but if you don't call gl.finish() you >> might be free (to some degree or other) to do other stuff while the GPU and >> GPU process churns over the rendering. So gl.finish() is actually a fairly >> good way to measure how long stuff takes. Of course it's not a terribly >> performant way to do things because while you wait to see how long stuff >> takes, you could do other stuff, but you get the drift. >> > > I'm not sure I follow what you're trying to explain. If I have 1 core I > get this > > 1:[S][LONG][S][LONG][S][LONG][S][LONG][S][LONG][S][LONG] > > On 2 cores I get this > > 1:[S][S][S][S][S][S] > 2:[LONG][LONG][LONG][LONG][LONG][LONG] > > In the example above I'm doing 6 short [S] and 6 long [LONG] operations > where I made their size represent the time they take to execute. > > With 2 processes I can execute 4 more operations, 10 total in the same > amount of time 1 core took to process 6 opretations. 
> > 1:[S][S][S][S][S][S][S][S][S][S] > 2:[LONG][LONG][LONG][LONG][LONG][LONG][LONG][LONG][LONG] > > That assumes I'm not GPU bound but if I'm GPU bound the only thing that > matters is the GPU. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Mon Oct 1 17:46:47 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Tue, 2 Oct 2012 02:46:47 +0200 Subject: [Public WebGL] WebGL perf regression tests moved to GitHub; design still in flux In-Reply-To: References: <50635854.9020306@mozilla.com> <915809960.868631.1348754727917.JavaMail.root@mozilla.com> Message-ID: On Tue, Oct 2, 2012 at 2:41 AM, Nat Duca wrote: > In this topic, techniques like arb_timer_query (and the equivalent "how > much time was spent in the driver") become very important. > > Armed with these pieces of information, you can accomplish much of what is > being discussed here. > That's a cool idea. 1. Will it work in the way browsers use the context? (as in does the GPU processes context map cleanly to a WebGL context) 2. ES does not have the extension, would that block introduction of it? 3. Can Direct3D9 do those? -------------- next part -------------- An HTML attachment was scrubbed... URL: From cal...@ Mon Oct 1 19:03:35 2012 From: cal...@ (Mark Callow) Date: Tue, 02 Oct 2012 11:03:35 +0900 Subject: [Public WebGL] WebGL perf regression tests moved to GitHub; design still in flux In-Reply-To: <96910951.1409240.1349131486262.JavaMail.root@mozilla.com> References: <96910951.1409240.1349131486262.JavaMail.root@mozilla.com> Message-ID: <506A4B77.2000804@artspark.co.jp> On 2012/10/02 7:44, Vladimir Vukicevic wrote: > Yup -- it's not that we don't want to measure composite, we're making an end-run around the fact that there exists no API for measuring composite. That API could also be somewhat complicated to define, because each browser could have a very different idea of what it actually means to composite. So, instead, I'm more interested in treating it as "overhead" -- I want to measure how much overhead a browser adds to each frame. This could be compositing overhead, it could be function call/JS bridge overhead, etc. That's where the second attempt comes into play, and it sounds like what gregg's already tried to implement. I've been thinking about this in respect to WEBGL_dynamic_texture. The best idea I've come up with so far is an event the app can listen for which tells it that the canvas has reached the screen. The event payload would be the actual timestamp of the start of scanning the composited frame out to the display. It has been pointed out that firing this every frame would be too much overhead so it should probably be sent every nth frame where n can be specified by the app. Regards -Mark -- ???????????????????????????????????? ???????????????????????????? ??????? ???????????????????????????????????? ????????????????????? ??. NOTE: This electronic mail message may contain confidential and privileged information from HI Corporation. If you are not the intended recipient, any disclosure, photocopying, distribution or use of the contents of the received information is prohibited. If you have received this e-mail in error, please notify the sender immediately and permanently delete this message and all related copies. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From baj...@ Mon Oct 1 20:19:04 2012 From: baj...@ (Brandon Jones) Date: Mon, 1 Oct 2012 20:19:04 -0700 Subject: [Public WebGL] WebGL perf regression tests moved to GitHub; design still in flux In-Reply-To: References: <50635854.9020306@mozilla.com> <915809960.868631.1348754727917.JavaMail.root@mozilla.com> Message-ID: Ben Vanik has already done some research on the ARB_timer_query extension, so he'd be a good one to fill in the blanks on questions like this. It's something that I was looking at tackling in the near future, so I'd be curious to know just how much of an impact it would have on the accuracy of these benchmarks. --Brandon On Oct 1, 2012 5:50 PM, "Florian B?sch" wrote: > > > On Tue, Oct 2, 2012 at 2:41 AM, Nat Duca wrote: > >> In this topic, techniques like arb_timer_query (and the equivalent "how >> much time was spent in the driver") become very important. >> >> Armed with these pieces of information, you can accomplish much of what >> is being discussed here. >> > That's a cool idea. > > 1. Will it work in the way browsers use the context? (as in does the > GPU processes context map cleanly to a WebGL context) > 2. ES does not have the extension, would that block introduction of it? > 3. Can Direct3D9 do those? > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Tue Oct 2 12:55:48 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo56S+55SoKQ==?=) Date: Tue, 2 Oct 2012 12:55:48 -0700 Subject: [Public WebGL] WebGL perf regression tests moved to GitHub; design still in flux In-Reply-To: References: <50635854.9020306@mozilla.com> <915809960.868631.1348754727917.JavaMail.root@mozilla.com> Message-ID: On Mon, Oct 1, 2012 at 5:35 PM, Florian B?sch wrote: > On Tue, Oct 2, 2012 at 2:27 AM, Gregg Tavares (??) wrote: > >> I'm not sure I follow what you're trying to explain. If I have 1 core I >> get this >> >> 1:[S][LONG][S][LONG][S][LONG][S][LONG][S][LONG][S][LONG] >> >> On 2 cores I get this >> >> 1:[S][S][S][S][S][S] >> 2:[LONG][LONG][LONG][LONG][LONG][LONG] >> >> In the example above I'm doing 6 short [S] and 6 long [LONG] operations >> where I made their size represent the time they take to execute. >> >> With 2 processes I can execute 4 more operations, 10 total in the same >> amount of time 1 core took to process 6 opretations. >> >> 1:[S][S][S][S][S][S][S][S][S][S] >> 2:[LONG][LONG][LONG][LONG][LONG][LONG][LONG][LONG][LONG] >> >> That assumes I'm not GPU bound but if I'm GPU bound the only thing that >> matters is the GPU. 
>> > > If you're not CPU bound on the rendering, and you avoid blocking calls, > then it'll look a bit like this: > JS: [s][s][s][s] (like 0.1ms) > GPU process: [s][s][s][s] > [...........................................................................................................................................................swap] > Driver: > [stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff] > > So it really doesn't matter for the GPU bound performance measurement if > you call gl.finish() because that'll just make it look like this: > JS: > [s][s][s][s][...........................................................................................................................................................................finish] > GPU process: [s][s][s][s] > [................................................................................................................................................. > finish][swap] > Driver: > [stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff][stuff] > That's not what I observe at all. The driver may do some of its operations in a separate thread/process but not all of them. I still get vastly more calls WebGL calls on multi-process than single process. The whole point of the harness I linked to is to see how many draw calls you can make so in other words it's attempting to be CPU bound. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cma...@ Wed Oct 3 14:24:09 2012 From: cma...@ (Chris Marrin) Date: Wed, 03 Oct 2012 14:24:09 -0700 Subject: [Public WebGL] option for suggesting "low power" mode at context creation In-Reply-To: References: <844AB2E0-1382-40C8-9ABC-C91DE2B5FDF5@apple.com> <5063B7F0.90108@artspark.co.jp> <5064143A.70304@artspark.co.jp> <1D20BD88-DAF6-4F37-B5C2-78B149972782@apple.com> <50650DD6.7080708@artspark.co.jp> Message-ID: On Sep 28, 2012, at 12:48 PM, Kenneth Russell wrote: > > On Fri, Sep 28, 2012 at 4:37 AM, Florian B?sch wrote: >> On Fri, Sep 28, 2012 at 4:39 AM, Mark Callow >> wrote: >>> >>> On 28/09/2012 06:28, Florian B?sch wrote: >>> >>> On macbooks there's a gfx control app that overrides OSX GPU selection. I >>> don't think a global control should be part of the browser. A web developer >>> can easily offer the user a choice (like SD vs. HD) and set the hint >>> accordingly. If anything the global control over power usage belongs into >>> the operating systems settings, right next to disabling wifi, bluetooth, >>> airplane mode etc. >>> >>> There is no standard app for that. Only geeks will have the 3rd party app >>> installed. >>> >>> That said, I agree that global control over power usage belongs in the >>> OS. Given the way the OS X Energy Saver preferences are set up "automatic >>> switching" is the choice that indicates you want to prolong battery life. >>> This thread is happening because automatic switching apparently isn't smart >>> enough. >>> >>> Why isn't something like the following algorithm sufficient when the user >>> has selected prolong battery life? >>> >>> If the app requests anti-aliasing and anti-aliasing consumes more power >>> ignore the request. >>> Start running on the integrated GPU. >>> If the app is calling requestAnimationFrame repeatedly and fails to >>> achieve 60fps, switch to the discreet GPU. 
>> >> Well gfx scaling as a function of user preference is not yet considered by >> any OS, most prominently not by those OSes that introduced it in the first >> place. So that is part of why scale selection is being discussed here. The >> other part is that there is knowledge the author of a particular use of >> WebGL has, that cannot be known or deduced. For instance, I know that my >> deferred irradiance demo consumes tons of GPU resources, and it has nothing >> to do with aliasing or running at 60fps. But if say, wikipedias entry of an >> icosaeder would like to rasterize a simple icosaeder, they *know* that their >> use is very minimal, and that any GPU no matter how slow will be able to run >> that @ 60FPS. And then take for instance webglstats or modernizer etc. We >> know we don't want peoples machines to flap GPUs in the wind, we're not >> doing anything of performance interest. There's no way you can deduce this >> from the behavior of the application. This goes straight back at being an >> NP-complete problem. You're trying to infer the complexity of a turing >> complete program which has been shown to be NP-complete. > > I think the problem of determining heuristics for automatic GPU > switching is a little less difficult than you make it out to be. > Mark's heuristics seem like they would work assuming that all of the > tracking could be put into the web browser to understand that calls > against a given WebGL context were being made on behalf of a given > chain of requestAnimationFrame callbacks. To avoid dithering between > GPUs, the switch between the low-power and high-power GPU per context > could be made unidirectional. Dynamic compilers for languages like > JavaScript do similar run-time measurement to decide where to focus > optimizations. > > However, I'd be hesitant to implement a heuristic like this for two > reasons. First, Mac OS is the only OS I know of that does the "deep > magic" to automatically migrate OpenGL resources between GPUs -- at > least, this is my understanding of how automatic graphics switching > works there. On other OSs like Windows I don't know how it works; I > think the D3D device can be created against a particular graphics > adapter, and I don't think resources can migrate between GPUs. For > best portability, an up-front decision when creating the context is > best. > > Second, if switching between GPUs really does work and one of the GPUs > doesn't pass the WebGL conformance suite, switching silently behind > the scenes could cause certain OpenGL operations to start failing in > the middle of the application's run. This would cause bugs that are > impossible to diagnose. But to be clear, this is all implementation detail. An implementation that is able to do some very clever heuristics to switch back and forth without the user knowing (other than a possible change in quality) is free to do so, just like another implementation might ignore the flag altogether. ----- ~Chris Marrin cmarrin...@ -------------- next part -------------- An HTML attachment was scrubbed... 
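For concreteness, a sketch of what the up-front request being discussed could look like from the page's side. The attribute name preferLowPowerToHighPerformance is only a placeholder for whatever the group settles on; as noted above, an implementation would remain free to ignore it.

var canvas = document.querySelector("canvas");
var attribs = { preferLowPowerToHighPerformance: true }; // placeholder name
var gl = canvas.getContext("webgl", attribs) ||
         canvas.getContext("experimental-webgl", attribs);
if (!gl) {
  // Creation can still fail for unrelated reasons (no WebGL, blacklisted driver).
  window.alert("WebGL unavailable");
}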
URL: From pya...@ Wed Oct 3 14:29:44 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Wed, 3 Oct 2012 23:29:44 +0200 Subject: [Public WebGL] option for suggesting "low power" mode at context creation In-Reply-To: References: <844AB2E0-1382-40C8-9ABC-C91DE2B5FDF5@apple.com> <5063B7F0.90108@artspark.co.jp> <5064143A.70304@artspark.co.jp> <1D20BD88-DAF6-4F37-B5C2-78B149972782@apple.com> <50650DD6.7080708@artspark.co.jp> Message-ID: On Wed, Oct 3, 2012 at 11:24 PM, Chris Marrin wrote: > But to be clear, this is all implementation detail. An implementation that > is able to do some very clever heuristics to switch back and forth without > the user knowing (other than a possible change in quality) is free to do > so, just like another implementation might ignore the flag altogether. > I'm not objecting to dynamic scaling if your implementation can do that. I'm objecting to the idea that without the capability to be able to that, you could somehow arrive at a relevant decision of what setting to use before or shortly after the context is created that shall represent a valid choice for the entire lifetime of the context. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kos...@ Wed Oct 3 14:34:43 2012 From: kos...@ (David Sheets) Date: Wed, 3 Oct 2012 14:34:43 -0700 Subject: [Public WebGL] option for suggesting "low power" mode at context creation In-Reply-To: References: <844AB2E0-1382-40C8-9ABC-C91DE2B5FDF5@apple.com> <5063B7F0.90108@artspark.co.jp> <5064143A.70304@artspark.co.jp> <1D20BD88-DAF6-4F37-B5C2-78B149972782@apple.com> <50650DD6.7080708@artspark.co.jp> Message-ID: On Wed, Oct 3, 2012 at 2:24 PM, Chris Marrin wrote: > But to be clear, this is all implementation detail. An implementation that > is able to do some very clever heuristics to switch back and forth without > the user knowing (other than a possible change in quality) is free to do so, > just like another implementation might ignore the flag altogether. The shading language specification does not fully specify certain semantic properties of shaders that may differ between renderer/driver/GPU (signed zeros, NaNs, varying precisions, &c). Authors may profile the renderer they are running on and may load programs to take advantage of each different renderer's capabilities. How will users report bugs in content with automagical GPU switching and tuned shaders? I am in favor of hints and switching if it is deterministic and detectable. I believe this is possible with a default renderer bias (e.g. high performance unless otherwise requested) and a flag corresponding to each alternate renderer hint to indicate if that alternate renderer is being used. 
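A sketch of the kind of detection being suggested. gl.getContextAttributes() is existing API; whether an accepted or rejected power hint would be echoed back through it is part of the proposal, not something any spec promises today. WEBGL_debug_renderer_info is a real extension but is not guaranteed to be exposed to content in every browser.

var gl = document.querySelector("canvas").getContext("webgl");
if (gl) {
  // Existing API: reports the attribute values actually in effect for this context.
  console.log("attributes in effect:", gl.getContextAttributes());

  // Where exposed, the unmasked renderer string lets content profile the
  // renderer it actually ended up on (e.g. to pick shader variants).
  var dbg = gl.getExtension("WEBGL_debug_renderer_info");
  if (dbg) {
    console.log("renderer:", gl.getParameter(dbg.UNMASKED_RENDERER_WEBGL));
  }
}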
David ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From ste...@ Thu Oct 4 05:57:33 2012 From: ste...@ (Steve Baker) Date: Thu, 4 Oct 2012 07:57:33 -0500 Subject: [Public WebGL] option for suggesting "low power" mode at context creation In-Reply-To: References: <844AB2E0-1382-40C8-9ABC-C91DE2B5FDF5@apple.com> <5063B7F0.90108@artspark.co.jp> <5064143A.70304@artspark.co.jp> <1D20BD88-DAF6-4F37-B5C2-78B149972782@apple.com> <50650DD6.7080708@artspark.co.jp> Message-ID: <9d58ec01785f603baf23945ef35ece70.squirrel@webmail.sjbaker.org> Florian B?sch wrote: > On Wed, Oct 3, 2012 at 11:24 PM, Chris Marrin wrote: > >> But to be clear, this is all implementation detail. An implementation >> that >> is able to do some very clever heuristics to switch back and forth >> without >> the user knowing (other than a possible change in quality) is free to do >> so, just like another implementation might ignore the flag altogether. >> > I'm not objecting to dynamic scaling if your implementation can do that. > I'm objecting to the idea that without the capability to be able to that, > you could somehow arrive at a relevant decision of what setting to use > before or shortly after the context is created that shall represent a > valid > choice for the entire lifetime of the context. Dynamic scaling is a dangerous thing if done automatically and without the application's consent. Remember, not all "rendering" is producing a pretty picture on the screen. Lots of times you're using the GPU to generate something that is very specific to your needs. I've used the GPU to do things like calculating the position of 65,536 particles (packed into a 256x256 texture) by calculating (v+a*t*t/2) where the velocity of each particle is stored in a second texture. You don't want to even *consider* scaling that rendering operation! This isn't just a matter of getting a fuzzier or sharper final image! -- Steve ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From sin...@ Fri Oct 5 17:17:45 2012 From: sin...@ (Colin Mackenzie) Date: Fri, 5 Oct 2012 20:17:45 -0400 Subject: [Public WebGL] High number of failures in conformance suite Message-ID: I apologize if this is the wrong forum for my question, and would appreciate being directed toward the proper destination if so. I'm trying to track down an issue a user of my WebGL framework is experiencing, and in doing so, requested that he run the WebGL conformance suite v1.1. He's running Chrome 22 on Mac OS X 10.6.8 with Intel GMA X3100 graphics. Here's his summary: Tests PASSED: 3241 Tests FAILED: 657 Tests TIMED OUT: 31 See https://gist.github.com/3840329 for the complete test results. I have not yet asked him to run the conformance tests in Firefox, because he has indicated that the original issue with my framework does not appear in FF. Given the high number of failures, what should I advise him to do? Raise an issue with the browser vendor? Or, if this many failures are expected for his platform, should I ignore them and continue working to resolve the issue within my framework? 
Not sure if it's helpful, but here's the issue as originally reported to the framework: https://github.com/sinisterchipmunk/jax/issues/63 Thanks, Colin MacKenzie IV -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Sat Oct 6 02:41:26 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Sat, 6 Oct 2012 11:41:26 +0200 Subject: [Public WebGL] High number of failures in conformance suite In-Reply-To: References: Message-ID: In order to be sure not to run into networking issues, the conformance tests should be run locally. Once the test have run, there is a button to "display text summary" of the tests. Please collect the following from your user: - text summary of the tests - Browser/Version - Operating System/Version - GPU Model - Driver Version Make this information available here and we should be able to advise appropriate next steps. On Sat, Oct 6, 2012 at 2:17 AM, Colin Mackenzie wrote: > I apologize if this is the wrong forum for my question, and would > appreciate being directed toward the proper destination if so. > > I'm trying to track down an issue a user of my WebGL framework is > experiencing, and in doing so, requested that he run the WebGL conformance > suite v1.1. He's running Chrome 22 on Mac OS X 10.6.8 with Intel GMA X3100 > graphics. Here's his summary: > > Tests PASSED: 3241 > Tests FAILED: 657 > Tests TIMED OUT: 31 > > See https://gist.github.com/3840329 for the complete test results. > > I have not yet asked him to run the conformance tests in Firefox, because > he has indicated that the original issue with my framework does not appear > in FF. > > Given the high number of failures, what should I advise him to do? Raise > an issue with the browser vendor? Or, if this many failures are expected > for his platform, should I ignore them and continue working to resolve the > issue within my framework? > > Not sure if it's helpful, but here's the issue as originally reported to > the framework: https://github.com/sinisterchipmunk/jax/issues/63 > > Thanks, > Colin MacKenzie IV > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sin...@ Sun Oct 7 07:11:59 2012 From: sin...@ (Colin Mackenzie) Date: Sun, 7 Oct 2012 10:11:59 -0400 Subject: [Public WebGL] High number of failures in conformance suite In-Reply-To: References: Message-ID: I had him run the tests again locally and here's what he had to say: I ran the test locally with the SimpleHTTPServer and the results seem exactly the same : Tests PASSED: 3241 Tests FAILED: 657 Tests TIMED OUT: 31 Text summary of the tests : https://gist.github.com/3848167 Browser : Chrome 22.0.1229.79 OS : MacOS 10.6.8 GPU Model : GMA X3100 Driver Version : I really don't know about it. I just keep my system up to date. (Maybe these lines are useful ? "Fournisseur : Intel (0x8086) / Identifiant du p?riph?rique : 0x2a02 / Identifiant de r?vision : 0x0003") Thanks, Colin On Sat, Oct 6, 2012 at 5:41 AM, Florian B?sch wrote: > In order to be sure not to run into networking issues, the conformance > tests should be run locally. > > Once the test have run, there is a button to "display text summary" of the > tests. Please collect the following from your user: > - text summary of the tests > - Browser/Version > - Operating System/Version > - GPU Model > - Driver Version > > Make this information available here and we should be able to advise > appropriate next steps. 
> > > On Sat, Oct 6, 2012 at 2:17 AM, Colin Mackenzie < > sinisterchipmunk...@> wrote: > >> I apologize if this is the wrong forum for my question, and would >> appreciate being directed toward the proper destination if so. >> >> I'm trying to track down an issue a user of my WebGL framework is >> experiencing, and in doing so, requested that he run the WebGL conformance >> suite v1.1. He's running Chrome 22 on Mac OS X 10.6.8 with Intel GMA X3100 >> graphics. Here's his summary: >> >> Tests PASSED: 3241 >> Tests FAILED: 657 >> Tests TIMED OUT: 31 >> >> See https://gist.github.com/3840329 for the complete test results. >> >> I have not yet asked him to run the conformance tests in Firefox, because >> he has indicated that the original issue with my framework does not appear >> in FF. >> >> Given the high number of failures, what should I advise him to do? Raise >> an issue with the browser vendor? Or, if this many failures are expected >> for his platform, should I ignore them and continue working to resolve the >> issue within my framework? >> >> Not sure if it's helpful, but here's the issue as originally reported to >> the framework: https://github.com/sinisterchipmunk/jax/issues/63 >> >> Thanks, >> Colin MacKenzie IV >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Sun Oct 7 07:31:48 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Sun, 7 Oct 2012 16:31:48 +0200 Subject: [Public WebGL] High number of failures in conformance suite In-Reply-To: References: Message-ID: The card is not on the list of unsupported devices of Chrome and the OS is on the list of supported operating systems: http://support.google.com/chrome/bin/answer.py?hl=en&answer=1220892 There are lots of errors around texture functions and glsl. I don't see a clear pattern of where it fails other than some tendencies. I would post this information as is to the chrome issue list ( http://code.google.com/p/chromium/issues/list) with a title like "webgl 657 conformance tests failed on OSX 10.6/GMA X3100" There are 3 other issues where people have problems with the GMA X3100 - http://code.google.com/p/chromium/issues/detail?id=40825 - http://code.google.com/p/chromium/issues/detail?id=67558 - http://code.google.com/p/chromium/issues/detail?id=100781 I would mention in the ticket that the conformance works trough in Firefox. The best hope to get to the bottom of this issue would be for a chrome developer with OSX 10.6.8 and a GMA X3100 to try to replicate the issue. Note that one possible outcome of this may be that the X3100 (or its older 10.6 driver) will be blacklisted. On Sun, Oct 7, 2012 at 4:11 PM, Colin Mackenzie wrote: > I had him run the tests again locally and here's what he had to say: > > I ran the test locally with the SimpleHTTPServer and the results seem > exactly the same : > > Tests PASSED: 3241 > Tests FAILED: 657 > Tests TIMED OUT: 31 > > Text summary of the tests : https://gist.github.com/3848167 > Browser : Chrome 22.0.1229.79 > OS : MacOS 10.6.8 > GPU Model : GMA X3100 > Driver Version : I really don't know about it. I just keep my system up to > date. (Maybe these lines are useful ? "Fournisseur : Intel (0x8086) / > Identifiant du p?riph?rique : 0x2a02 / Identifiant de r?vision : 0x0003") > > > Thanks, > > Colin > > > > On Sat, Oct 6, 2012 at 5:41 AM, Florian B?sch wrote: > >> In order to be sure not to run into networking issues, the conformance >> tests should be run locally. 
>> >> Once the test have run, there is a button to "display text summary" of >> the tests. Please collect the following from your user: >> - text summary of the tests >> - Browser/Version >> - Operating System/Version >> - GPU Model >> - Driver Version >> >> Make this information available here and we should be able to advise >> appropriate next steps. >> >> >> On Sat, Oct 6, 2012 at 2:17 AM, Colin Mackenzie < >> sinisterchipmunk...@> wrote: >> >>> I apologize if this is the wrong forum for my question, and would >>> appreciate being directed toward the proper destination if so. >>> >>> I'm trying to track down an issue a user of my WebGL framework is >>> experiencing, and in doing so, requested that he run the WebGL conformance >>> suite v1.1. He's running Chrome 22 on Mac OS X 10.6.8 with Intel GMA X3100 >>> graphics. Here's his summary: >>> >>> Tests PASSED: 3241 >>> Tests FAILED: 657 >>> Tests TIMED OUT: 31 >>> >>> See https://gist.github.com/3840329 for the complete test results. >>> >>> I have not yet asked him to run the conformance tests in Firefox, >>> because he has indicated that the original issue with my framework does not >>> appear in FF. >>> >>> Given the high number of failures, what should I advise him to do? Raise >>> an issue with the browser vendor? Or, if this many failures are expected >>> for his platform, should I ignore them and continue working to resolve the >>> issue within my framework? >>> >>> Not sure if it's helpful, but here's the issue as originally reported to >>> the framework: https://github.com/sinisterchipmunk/jax/issues/63 >>> >>> Thanks, >>> Colin MacKenzie IV >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sin...@ Sun Oct 7 10:30:28 2012 From: sin...@ (Colin Mackenzie) Date: Sun, 7 Oct 2012 13:30:28 -0400 Subject: [Public WebGL] High number of failures in conformance suite In-Reply-To: References: Message-ID: Thanks very much for the advice. We don't yet know for sure that the suite passes on Firefox yet. He didn't encounter the original issue within my framework in FF, so hasn't tested the conformance suite in FF yet. I've just requested that he run the suite on Firefox, and send the text summary. Once I get that, I'll forward all the information on to the Chrome tracker (and the Firefox one if there are a lot of failures there as well). Thanks again, Colin On Sun, Oct 7, 2012 at 10:31 AM, Florian B?sch wrote: > The card is not on the list of unsupported devices of Chrome and the OS is > on the list of supported operating systems: > http://support.google.com/chrome/bin/answer.py?hl=en&answer=1220892 > > There are lots of errors around texture functions and glsl. I don't see a > clear pattern of where it fails other than some tendencies. I would post > this information as is to the chrome issue list ( > http://code.google.com/p/chromium/issues/list) with a title like "webgl > 657 conformance tests failed on OSX 10.6/GMA X3100" > > There are 3 other issues where people have problems with the GMA X3100 > - http://code.google.com/p/chromium/issues/detail?id=40825 > - http://code.google.com/p/chromium/issues/detail?id=67558 > - http://code.google.com/p/chromium/issues/detail?id=100781 > > I would mention in the ticket that the conformance works trough in > Firefox. The best hope to get to the bottom of this issue would be for a > chrome developer with OSX 10.6.8 and a GMA X3100 to try to replicate the > issue. 
Note that one possible outcome of this may be that the X3100 (or its > older 10.6 driver) will be blacklisted. > > > On Sun, Oct 7, 2012 at 4:11 PM, Colin Mackenzie < > sinisterchipmunk...@> wrote: > >> I had him run the tests again locally and here's what he had to say: >> >> I ran the test locally with the SimpleHTTPServer and the results seem >> exactly the same : >> >> Tests PASSED: 3241 >> Tests FAILED: 657 >> Tests TIMED OUT: 31 >> >> Text summary of the tests : https://gist.github.com/3848167 >> Browser : Chrome 22.0.1229.79 >> OS : MacOS 10.6.8 >> GPU Model : GMA X3100 >> Driver Version : I really don't know about it. I just keep my system up >> to date. (Maybe these lines are useful ? "Fournisseur : Intel (0x8086) / >> Identifiant du p?riph?rique : 0x2a02 / Identifiant de r?vision : 0x0003") >> >> >> Thanks, >> >> Colin >> >> >> >> On Sat, Oct 6, 2012 at 5:41 AM, Florian B?sch wrote: >> >>> In order to be sure not to run into networking issues, the conformance >>> tests should be run locally. >>> >>> Once the test have run, there is a button to "display text summary" of >>> the tests. Please collect the following from your user: >>> - text summary of the tests >>> - Browser/Version >>> - Operating System/Version >>> - GPU Model >>> - Driver Version >>> >>> Make this information available here and we should be able to advise >>> appropriate next steps. >>> >>> >>> On Sat, Oct 6, 2012 at 2:17 AM, Colin Mackenzie < >>> sinisterchipmunk...@> wrote: >>> >>>> I apologize if this is the wrong forum for my question, and would >>>> appreciate being directed toward the proper destination if so. >>>> >>>> I'm trying to track down an issue a user of my WebGL framework is >>>> experiencing, and in doing so, requested that he run the WebGL conformance >>>> suite v1.1. He's running Chrome 22 on Mac OS X 10.6.8 with Intel GMA X3100 >>>> graphics. Here's his summary: >>>> >>>> Tests PASSED: 3241 >>>> Tests FAILED: 657 >>>> Tests TIMED OUT: 31 >>>> >>>> See https://gist.github.com/3840329 for the complete test results. >>>> >>>> I have not yet asked him to run the conformance tests in Firefox, >>>> because he has indicated that the original issue with my framework does not >>>> appear in FF. >>>> >>>> Given the high number of failures, what should I advise him to do? >>>> Raise an issue with the browser vendor? Or, if this many failures are >>>> expected for his platform, should I ignore them and continue working to >>>> resolve the issue within my framework? >>>> >>>> Not sure if it's helpful, but here's the issue as originally reported >>>> to the framework: https://github.com/sinisterchipmunk/jax/issues/63 >>>> >>>> Thanks, >>>> Colin MacKenzie IV >>>> >>> >>> >> > -- Colin MacKenzie IV http://www.thoughtsincomputation.com http://twitter.com/sinisterchipmnk -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Mon Oct 8 17:56:11 2012 From: kbr...@ (Kenneth Russell) Date: Mon, 8 Oct 2012 17:56:11 -0700 Subject: [Public WebGL] High number of failures in conformance suite In-Reply-To: References: Message-ID: On Mac OS X, the most attention has been paid by browser vendors to NVIDIA and AMD GPUs. On Macs with AMD GPUs running 10.8.2, the 1.0.1 version of the conformance suite passes 100%. With NVIDIA GPUs running 10.8.2, there is exactly one test failure, and it's being tracked by a Radar. I personally have not triaged the outstanding conformance suite failures with Intel GPUs on Mac OS yet, and suspect that the other browser vendors haven't, either. 
Your user should upgrade from 10.6.8 to 10.8. It's unlikely that any graphics driver bug will be fixed in the future on either the 10.6.x or 10.7.x train. -Ken On Sun, Oct 7, 2012 at 10:30 AM, Colin Mackenzie wrote: > Thanks very much for the advice. We don't yet know for sure that the suite > passes on Firefox yet. He didn't encounter the original issue within my > framework in FF, so hasn't tested the conformance suite in FF yet. > > I've just requested that he run the suite on Firefox, and send the text > summary. Once I get that, I'll forward all the information on to the Chrome > tracker (and the Firefox one if there are a lot of failures there as well). > > Thanks again, > Colin > > > On Sun, Oct 7, 2012 at 10:31 AM, Florian B?sch wrote: >> >> The card is not on the list of unsupported devices of Chrome and the OS is >> on the list of supported operating systems: >> http://support.google.com/chrome/bin/answer.py?hl=en&answer=1220892 >> >> There are lots of errors around texture functions and glsl. I don't see a >> clear pattern of where it fails other than some tendencies. I would post >> this information as is to the chrome issue list >> (http://code.google.com/p/chromium/issues/list) with a title like "webgl 657 >> conformance tests failed on OSX 10.6/GMA X3100" >> >> There are 3 other issues where people have problems with the GMA X3100 >> - http://code.google.com/p/chromium/issues/detail?id=40825 >> - http://code.google.com/p/chromium/issues/detail?id=67558 >> - http://code.google.com/p/chromium/issues/detail?id=100781 >> >> I would mention in the ticket that the conformance works trough in >> Firefox. The best hope to get to the bottom of this issue would be for a >> chrome developer with OSX 10.6.8 and a GMA X3100 to try to replicate the >> issue. Note that one possible outcome of this may be that the X3100 (or its >> older 10.6 driver) will be blacklisted. >> >> >> On Sun, Oct 7, 2012 at 4:11 PM, Colin Mackenzie >> wrote: >>> >>> I had him run the tests again locally and here's what he had to say: >>> >>> I ran the test locally with the SimpleHTTPServer and the results seem >>> exactly the same : >>> >>> Tests PASSED: 3241 >>> Tests FAILED: 657 >>> Tests TIMED OUT: 31 >>> >>> Text summary of the tests : https://gist.github.com/3848167 >>> Browser : Chrome 22.0.1229.79 >>> OS : MacOS 10.6.8 >>> GPU Model : GMA X3100 >>> Driver Version : I really don't know about it. I just keep my system up >>> to date. (Maybe these lines are useful ? "Fournisseur : Intel (0x8086) / >>> Identifiant du p?riph?rique : 0x2a02 / Identifiant de r?vision : 0x0003") >>> >>> >>> Thanks, >>> >>> Colin >>> >>> >>> >>> On Sat, Oct 6, 2012 at 5:41 AM, Florian B?sch wrote: >>>> >>>> In order to be sure not to run into networking issues, the conformance >>>> tests should be run locally. >>>> >>>> Once the test have run, there is a button to "display text summary" of >>>> the tests. Please collect the following from your user: >>>> - text summary of the tests >>>> - Browser/Version >>>> - Operating System/Version >>>> - GPU Model >>>> - Driver Version >>>> >>>> Make this information available here and we should be able to advise >>>> appropriate next steps. >>>> >>>> >>>> On Sat, Oct 6, 2012 at 2:17 AM, Colin Mackenzie >>>> wrote: >>>>> >>>>> I apologize if this is the wrong forum for my question, and would >>>>> appreciate being directed toward the proper destination if so. 
>>>>> >>>>> I'm trying to track down an issue a user of my WebGL framework is >>>>> experiencing, and in doing so, requested that he run the WebGL conformance >>>>> suite v1.1. He's running Chrome 22 on Mac OS X 10.6.8 with Intel GMA X3100 >>>>> graphics. Here's his summary: >>>>> >>>>> Tests PASSED: 3241 >>>>> Tests FAILED: 657 >>>>> Tests TIMED OUT: 31 >>>>> >>>>> See https://gist.github.com/3840329 for the complete test results. >>>>> >>>>> I have not yet asked him to run the conformance tests in Firefox, >>>>> because he has indicated that the original issue with my framework does not >>>>> appear in FF. >>>>> >>>>> Given the high number of failures, what should I advise him to do? >>>>> Raise an issue with the browser vendor? Or, if this many failures are >>>>> expected for his platform, should I ignore them and continue working to >>>>> resolve the issue within my framework? >>>>> >>>>> Not sure if it's helpful, but here's the issue as originally reported >>>>> to the framework: https://github.com/sinisterchipmunk/jax/issues/63 >>>>> >>>>> Thanks, >>>>> Colin MacKenzie IV >>>> >>>> >>> >> > > > > -- > Colin MacKenzie IV > http://www.thoughtsincomputation.com > http://twitter.com/sinisterchipmnk > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Mon Oct 8 17:59:24 2012 From: kbr...@ (Kenneth Russell) Date: Mon, 8 Oct 2012 17:59:24 -0700 Subject: [Public WebGL] High number of failures in conformance suite In-Reply-To: References: Message-ID: I forgot to mention that there's a bug in Intel's OpenGL driver (Apple Radar 10670574) which prevents the WebGL 1.0.1 conformance suite from running to completion in Firefox, and causes many failures when run in Chrome. I haven't re-tested the conformance suite on Intel GPUs recently to see if there's been an improvement with Mac OS 10.8.2. -Ken On Mon, Oct 8, 2012 at 5:56 PM, Kenneth Russell wrote: > On Mac OS X, the most attention has been paid by browser vendors to > NVIDIA and AMD GPUs. On Macs with AMD GPUs running 10.8.2, the 1.0.1 > version of the conformance suite passes 100%. With NVIDIA GPUs running > 10.8.2, there is exactly one test failure, and it's being tracked by a > Radar. > > I personally have not triaged the outstanding conformance suite > failures with Intel GPUs on Mac OS yet, and suspect that the other > browser vendors haven't, either. > > Your user should upgrade from 10.6.8 to 10.8. It's unlikely that any > graphics driver bug will be fixed in the future on either the 10.6.x > or 10.7.x train. > > -Ken > > > On Sun, Oct 7, 2012 at 10:30 AM, Colin Mackenzie > wrote: >> Thanks very much for the advice. We don't yet know for sure that the suite >> passes on Firefox yet. He didn't encounter the original issue within my >> framework in FF, so hasn't tested the conformance suite in FF yet. >> >> I've just requested that he run the suite on Firefox, and send the text >> summary. Once I get that, I'll forward all the information on to the Chrome >> tracker (and the Firefox one if there are a lot of failures there as well). 
>> >> Thanks again, >> Colin >> >> >> On Sun, Oct 7, 2012 at 10:31 AM, Florian B?sch wrote: >>> >>> The card is not on the list of unsupported devices of Chrome and the OS is >>> on the list of supported operating systems: >>> http://support.google.com/chrome/bin/answer.py?hl=en&answer=1220892 >>> >>> There are lots of errors around texture functions and glsl. I don't see a >>> clear pattern of where it fails other than some tendencies. I would post >>> this information as is to the chrome issue list >>> (http://code.google.com/p/chromium/issues/list) with a title like "webgl 657 >>> conformance tests failed on OSX 10.6/GMA X3100" >>> >>> There are 3 other issues where people have problems with the GMA X3100 >>> - http://code.google.com/p/chromium/issues/detail?id=40825 >>> - http://code.google.com/p/chromium/issues/detail?id=67558 >>> - http://code.google.com/p/chromium/issues/detail?id=100781 >>> >>> I would mention in the ticket that the conformance works trough in >>> Firefox. The best hope to get to the bottom of this issue would be for a >>> chrome developer with OSX 10.6.8 and a GMA X3100 to try to replicate the >>> issue. Note that one possible outcome of this may be that the X3100 (or its >>> older 10.6 driver) will be blacklisted. >>> >>> >>> On Sun, Oct 7, 2012 at 4:11 PM, Colin Mackenzie >>> wrote: >>>> >>>> I had him run the tests again locally and here's what he had to say: >>>> >>>> I ran the test locally with the SimpleHTTPServer and the results seem >>>> exactly the same : >>>> >>>> Tests PASSED: 3241 >>>> Tests FAILED: 657 >>>> Tests TIMED OUT: 31 >>>> >>>> Text summary of the tests : https://gist.github.com/3848167 >>>> Browser : Chrome 22.0.1229.79 >>>> OS : MacOS 10.6.8 >>>> GPU Model : GMA X3100 >>>> Driver Version : I really don't know about it. I just keep my system up >>>> to date. (Maybe these lines are useful ? "Fournisseur : Intel (0x8086) / >>>> Identifiant du p?riph?rique : 0x2a02 / Identifiant de r?vision : 0x0003") >>>> >>>> >>>> Thanks, >>>> >>>> Colin >>>> >>>> >>>> >>>> On Sat, Oct 6, 2012 at 5:41 AM, Florian B?sch wrote: >>>>> >>>>> In order to be sure not to run into networking issues, the conformance >>>>> tests should be run locally. >>>>> >>>>> Once the test have run, there is a button to "display text summary" of >>>>> the tests. Please collect the following from your user: >>>>> - text summary of the tests >>>>> - Browser/Version >>>>> - Operating System/Version >>>>> - GPU Model >>>>> - Driver Version >>>>> >>>>> Make this information available here and we should be able to advise >>>>> appropriate next steps. >>>>> >>>>> >>>>> On Sat, Oct 6, 2012 at 2:17 AM, Colin Mackenzie >>>>> wrote: >>>>>> >>>>>> I apologize if this is the wrong forum for my question, and would >>>>>> appreciate being directed toward the proper destination if so. >>>>>> >>>>>> I'm trying to track down an issue a user of my WebGL framework is >>>>>> experiencing, and in doing so, requested that he run the WebGL conformance >>>>>> suite v1.1. He's running Chrome 22 on Mac OS X 10.6.8 with Intel GMA X3100 >>>>>> graphics. Here's his summary: >>>>>> >>>>>> Tests PASSED: 3241 >>>>>> Tests FAILED: 657 >>>>>> Tests TIMED OUT: 31 >>>>>> >>>>>> See https://gist.github.com/3840329 for the complete test results. >>>>>> >>>>>> I have not yet asked him to run the conformance tests in Firefox, >>>>>> because he has indicated that the original issue with my framework does not >>>>>> appear in FF. >>>>>> >>>>>> Given the high number of failures, what should I advise him to do? 
>>>>>> Raise an issue with the browser vendor? Or, if this many failures are >>>>>> expected for his platform, should I ignore them and continue working to >>>>>> resolve the issue within my framework? >>>>>> >>>>>> Not sure if it's helpful, but here's the issue as originally reported >>>>>> to the framework: https://github.com/sinisterchipmunk/jax/issues/63 >>>>>> >>>>>> Thanks, >>>>>> Colin MacKenzie IV >>>>> >>>>> >>>> >>> >> >> >> >> -- >> Colin MacKenzie IV >> http://www.thoughtsincomputation.com >> http://twitter.com/sinisterchipmnk >> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From dan...@ Tue Oct 9 06:23:30 2012 From: dan...@ (Daniel Koch) Date: Tue, 9 Oct 2012 09:23:30 -0400 Subject: [Public WebGL] High number of failures in conformance suite In-Reply-To: References: Message-ID: <694E5D66-812C-4E8E-B082-F080D5639C8A@transgaming.com> FWIW, The GMA X3100 is not supported on 10.8. (See https://developer.apple.com/graphicsimaging/opengl/capabilities/GLInfo_1080.html) I wouldn't expect them to fair much better on 10.7.x either. Our experience with the GL drivers on those cards hasn't ever been terribly great, and as far as I can tell, they've been effectively in maintenance-only mode for a few major OS revisions, so I wouldn't recommend trying to support WebGL on them. Daniel On 2012-10-08, at 8:56 PM, Kenneth Russell wrote: > > On Mac OS X, the most attention has been paid by browser vendors to > NVIDIA and AMD GPUs. On Macs with AMD GPUs running 10.8.2, the 1.0.1 > version of the conformance suite passes 100%. With NVIDIA GPUs running > 10.8.2, there is exactly one test failure, and it's being tracked by a > Radar. > > I personally have not triaged the outstanding conformance suite > failures with Intel GPUs on Mac OS yet, and suspect that the other > browser vendors haven't, either. > > Your user should upgrade from 10.6.8 to 10.8. It's unlikely that any > graphics driver bug will be fixed in the future on either the 10.6.x > or 10.7.x train. > > -Ken > > > On Sun, Oct 7, 2012 at 10:30 AM, Colin Mackenzie > wrote: >> Thanks very much for the advice. We don't yet know for sure that the suite >> passes on Firefox yet. He didn't encounter the original issue within my >> framework in FF, so hasn't tested the conformance suite in FF yet. >> >> I've just requested that he run the suite on Firefox, and send the text >> summary. Once I get that, I'll forward all the information on to the Chrome >> tracker (and the Firefox one if there are a lot of failures there as well). >> >> Thanks again, >> Colin >> >> >> On Sun, Oct 7, 2012 at 10:31 AM, Florian B?sch wrote: >>> >>> The card is not on the list of unsupported devices of Chrome and the OS is >>> on the list of supported operating systems: >>> http://support.google.com/chrome/bin/answer.py?hl=en&answer=1220892 >>> >>> There are lots of errors around texture functions and glsl. I don't see a >>> clear pattern of where it fails other than some tendencies. 
I would post >>> this information as is to the chrome issue list >>> (http://code.google.com/p/chromium/issues/list) with a title like "webgl 657 >>> conformance tests failed on OSX 10.6/GMA X3100" >>> >>> There are 3 other issues where people have problems with the GMA X3100 >>> - http://code.google.com/p/chromium/issues/detail?id=40825 >>> - http://code.google.com/p/chromium/issues/detail?id=67558 >>> - http://code.google.com/p/chromium/issues/detail?id=100781 >>> >>> I would mention in the ticket that the conformance works trough in >>> Firefox. The best hope to get to the bottom of this issue would be for a >>> chrome developer with OSX 10.6.8 and a GMA X3100 to try to replicate the >>> issue. Note that one possible outcome of this may be that the X3100 (or its >>> older 10.6 driver) will be blacklisted. >>> >>> >>> On Sun, Oct 7, 2012 at 4:11 PM, Colin Mackenzie >>> wrote: >>>> >>>> I had him run the tests again locally and here's what he had to say: >>>> >>>> I ran the test locally with the SimpleHTTPServer and the results seem >>>> exactly the same : >>>> >>>> Tests PASSED: 3241 >>>> Tests FAILED: 657 >>>> Tests TIMED OUT: 31 >>>> >>>> Text summary of the tests : https://gist.github.com/3848167 >>>> Browser : Chrome 22.0.1229.79 >>>> OS : MacOS 10.6.8 >>>> GPU Model : GMA X3100 >>>> Driver Version : I really don't know about it. I just keep my system up >>>> to date. (Maybe these lines are useful ? "Fournisseur : Intel (0x8086) / >>>> Identifiant du p?riph?rique : 0x2a02 / Identifiant de r?vision : 0x0003") >>>> >>>> >>>> Thanks, >>>> >>>> Colin >>>> >>>> >>>> >>>> On Sat, Oct 6, 2012 at 5:41 AM, Florian B?sch wrote: >>>>> >>>>> In order to be sure not to run into networking issues, the conformance >>>>> tests should be run locally. >>>>> >>>>> Once the test have run, there is a button to "display text summary" of >>>>> the tests. Please collect the following from your user: >>>>> - text summary of the tests >>>>> - Browser/Version >>>>> - Operating System/Version >>>>> - GPU Model >>>>> - Driver Version >>>>> >>>>> Make this information available here and we should be able to advise >>>>> appropriate next steps. >>>>> >>>>> >>>>> On Sat, Oct 6, 2012 at 2:17 AM, Colin Mackenzie >>>>> wrote: >>>>>> >>>>>> I apologize if this is the wrong forum for my question, and would >>>>>> appreciate being directed toward the proper destination if so. >>>>>> >>>>>> I'm trying to track down an issue a user of my WebGL framework is >>>>>> experiencing, and in doing so, requested that he run the WebGL conformance >>>>>> suite v1.1. He's running Chrome 22 on Mac OS X 10.6.8 with Intel GMA X3100 >>>>>> graphics. Here's his summary: >>>>>> >>>>>> Tests PASSED: 3241 >>>>>> Tests FAILED: 657 >>>>>> Tests TIMED OUT: 31 >>>>>> >>>>>> See https://gist.github.com/3840329 for the complete test results. >>>>>> >>>>>> I have not yet asked him to run the conformance tests in Firefox, >>>>>> because he has indicated that the original issue with my framework does not >>>>>> appear in FF. >>>>>> >>>>>> Given the high number of failures, what should I advise him to do? >>>>>> Raise an issue with the browser vendor? Or, if this many failures are >>>>>> expected for his platform, should I ignore them and continue working to >>>>>> resolve the issue within my framework? 
>>>>>> >>>>>> Not sure if it's helpful, but here's the issue as originally reported >>>>>> to the framework: https://github.com/sinisterchipmunk/jax/issues/63 >>>>>> >>>>>> Thanks, >>>>>> Colin MacKenzie IV >>>>> >>>>> >>>> >>> >> >> >> >> -- >> Colin MacKenzie IV >> http://www.thoughtsincomputation.com >> http://twitter.com/sinisterchipmnk >> > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > _____________________________________________________________________ Daniel Koch Manager, Graphics Technology TransGaming E: daniel...@ _____________________________________________________________________ This email and any files transmitted herein are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you are not the intended recipient you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Thu Oct 11 11:57:03 2012 From: bja...@ (Benoit Jacob) Date: Thu, 11 Oct 2012 14:57:03 -0400 Subject: [Public WebGL] Extension interfaces should be [NoInterfaceObject] Message-ID: <5077167F.6020309@mozilla.com> Hello, Currently, the IDL for extensions speficies interfaces names in a way that forces compliant implementations to expose them on the global object. For instance: interface WEBGL_compressed_texture_atc There is a concern about polluting the global object with arbitrarily many names that may not even be easy to trace back to WebGL for someone not versed in WebGL, for example OES_standard_derivatives. WebIDL has a provision for that: the [NoInterfaceObject] attribute. It is documented there: http://www.w3.org/TR/WebIDL/#NoInterfaceObject Do you agree that we should use it here? So we could add arbitrary extensions without having to worry about their names polluting the global namespace? If we agree on this, we should email public-script-coord...@ as asked for in the above link. That would make WebGL extensions "supplemental interfaces". In my limited understanding, the concern here is that [NoInterfaceObject] is a ECMAScript-specific feature. I may be missing something else though. The alternative, I guess, is to rename WebGL extension interfaces to something more cleanly namespaced, e.g.: OES_standard_derivatives -> WebGLExtensionStandardDerivatives EXT_texture_filter_anisotropic -> WebGLExtensionTextureFilterAnisotropic which is FWIW what we do in Mozilla's C++ implementation. Cheers, Benoit -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Thu Oct 11 12:08:37 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Thu, 11 Oct 2012 21:08:37 +0200 Subject: [Public WebGL] Extension interfaces should be [NoInterfaceObject] In-Reply-To: <5077167F.6020309@mozilla.com> References: <5077167F.6020309@mozilla.com> Message-ID: I've not yet have had a reason to use the interface object. I'll presume it would be for things like adding stuff to its prototype, which you only really do if you need to write a shim. I can't conceive of a need to write an extension shim. So I don't think not having the interface objects on the global object matters. 
I would also certainly not object to any kind of naming scheme on the interface object. I would suggest however to stuff them somewhere in their own namespace on top or as alternative to a clearly webgl associated naming scheme. Either way, it doesn't matter for me. So no objections here. On Thu, Oct 11, 2012 at 8:57 PM, Benoit Jacob wrote: > Hello, > > Currently, the IDL for extensions speficies interfaces names in a way that > forces compliant implementations to expose them on the global object. For > instance: > > interface WEBGL_compressed_texture_atc > > > There is a concern about polluting the global object with arbitrarily many > names that may not even be easy to trace back to WebGL for someone not > versed in WebGL, for example OES_standard_derivatives. > > WebIDL has a provision for that: the [NoInterfaceObject] attribute. It is > documented there: > http://www.w3.org/TR/WebIDL/#NoInterfaceObject > > Do you agree that we should use it here? So we could add arbitrary > extensions without having to worry about their names polluting the global > namespace? > > If we agree on this, we should email public-script-coord...@ as asked > for in the above link. That would make WebGL extensions "supplemental > interfaces". In my limited understanding, the concern here is that > [NoInterfaceObject] is a ECMAScript-specific feature. I may be missing > something else though. > > The alternative, I guess, is to rename WebGL extension interfaces to > something more cleanly namespaced, e.g.: > > OES_standard_derivatives -> WebGLExtensionStandardDerivatives > EXT_texture_filter_anisotropic -> WebGLExtensionTextureFilterAnisotropic > > which is FWIW what we do in Mozilla's C++ implementation. > > Cheers, > Benoit > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bzb...@ Thu Oct 11 12:17:02 2012 From: bzb...@ (Boris Zbarsky) Date: Thu, 11 Oct 2012 15:17:02 -0400 Subject: [Public WebGL] Extension interfaces should be [NoInterfaceObject] In-Reply-To: References: <5077167F.6020309@mozilla.com> Message-ID: <50771B2E.2060701@mit.edu> On 10/11/12 3:08 PM, Florian B?sch wrote: > I've not yet have had a reason to use the interface object. I'll presume > it would be for things like adding stuff to its prototype, which you > only really do if you need to write a shim. You can actually add stuff to the prototype even if [NoInterfaceObject]. You just have to create an instance object first, then get its prototype... -Boris ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Thu Oct 11 14:18:56 2012 From: kbr...@ (Kenneth Russell) Date: Thu, 11 Oct 2012 14:18:56 -0700 Subject: [Public WebGL] Extension interfaces should be [NoInterfaceObject] In-Reply-To: <5077167F.6020309@mozilla.com> References: <5077167F.6020309@mozilla.com> Message-ID: This sounds fine to me. The names of the extension objects were chosen to be easy to recognize in the IDL and also be legal JavaScript identifiers. Renaming them to something like WebGLExtensionStandardDerivatives would in my opinion break the ease of recognition. Hiding them via [NoInterfaceObject] seems reasonable. 
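A short sketch of both points: pages reach an extension through getExtension() rather than through any global interface object, and (per Boris's note) a shim that really wants the prototype can still obtain it from an instance.

var gl = document.querySelector("canvas").getContext("webgl");
var ext = gl && gl.getExtension("OES_standard_derivatives");
if (ext) {
  // Constants live on the extension object itself; no global name is needed.
  console.log(ext.FRAGMENT_SHADER_DERIVATIVE_HINT_OES); // 0x8B8B

  // Even with [NoInterfaceObject], the prototype is reachable via an instance.
  var proto = Object.getPrototypeOf(ext);
  // ...a shim could add or wrap properties on proto here...
}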
-Ken On Thu, Oct 11, 2012 at 11:57 AM, Benoit Jacob wrote: > Hello, > > Currently, the IDL for extensions speficies interfaces names in a way that > forces compliant implementations to expose them on the global object. For > instance: > > interface WEBGL_compressed_texture_atc > > There is a concern about polluting the global object with arbitrarily many > names that may not even be easy to trace back to WebGL for someone not > versed in WebGL, for example OES_standard_derivatives. > > WebIDL has a provision for that: the [NoInterfaceObject] attribute. It is > documented there: > http://www.w3.org/TR/WebIDL/#NoInterfaceObject > > Do you agree that we should use it here? So we could add arbitrary > extensions without having to worry about their names polluting the global > namespace? > > If we agree on this, we should email public-script-coord...@ as asked for > in the above link. That would make WebGL extensions "supplemental > interfaces". In my limited understanding, the concern here is that > [NoInterfaceObject] is a ECMAScript-specific feature. I may be missing > something else though. > > The alternative, I guess, is to rename WebGL extension interfaces to > something more cleanly namespaced, e.g.: > > OES_standard_derivatives -> WebGLExtensionStandardDerivatives > EXT_texture_filter_anisotropic -> WebGLExtensionTextureFilterAnisotropic > > which is FWIW what we do in Mozilla's C++ implementation. > > Cheers, > Benoit ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From baj...@ Tue Oct 16 12:44:11 2012 From: baj...@ (Brandon Jones) Date: Tue, 16 Oct 2012 12:44:11 -0700 Subject: [Public WebGL] Moving OES_vertex_array_object and OES_element_index_uint out of draft Message-ID: It seems like the OES_vertex_array_object and OES_element_index_uint extensions are both in a good position to be moved out of draft status and given community approval. They both have conformance tests available now and neither have unresolved issues. Are there any objections to giving either extension official status? --Brandon -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Tue Oct 16 12:50:41 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo56S+55SoKQ==?=) Date: Tue, 16 Oct 2012 12:50:41 -0700 Subject: [Public WebGL] Moving OES_vertex_array_object and OES_element_index_uint out of draft In-Reply-To: References: Message-ID: sounds good to me On Tue, Oct 16, 2012 at 12:44 PM, Brandon Jones wrote: > It seems like the OES_vertex_array_object and OES_element_index_uint > extensions are both in a good position to be moved out of draft status and > given community approval. They both have conformance tests available now > and neither have unresolved issues. Are there any objections to giving > either extension official status? > > --Brandon > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Tue Oct 16 12:55:32 2012 From: kbr...@ (Kenneth Russell) Date: Tue, 16 Oct 2012 12:55:32 -0700 Subject: [Public WebGL] Moving OES_vertex_array_object and OES_element_index_uint out of draft In-Reply-To: References: Message-ID: Sounds good to me too. On Tue, Oct 16, 2012 at 12:50 PM, Gregg Tavares (??) 
wrote: > sounds good to me > > > On Tue, Oct 16, 2012 at 12:44 PM, Brandon Jones wrote: >> >> It seems like the OES_vertex_array_object and OES_element_index_uint >> extensions are both in a good position to be moved out of draft status and >> given community approval. They both have conformance tests available now and >> neither have unresolved issues. Are there any objections to giving either >> extension official status? >> >> --Brandon > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From bja...@ Tue Oct 16 14:02:28 2012 From: bja...@ (Benoit Jacob) Date: Tue, 16 Oct 2012 17:02:28 -0400 Subject: [Public WebGL] Moving OES_vertex_array_object and OES_element_index_uint out of draft In-Reply-To: References: Message-ID: <507DCB64.3050501@mozilla.com> I support this! On 12-10-16 03:55 PM, Kenneth Russell wrote: > Sounds good to me too. > > On Tue, Oct 16, 2012 at 12:50 PM, Gregg Tavares (??) wrote: >> sounds good to me >> >> >> On Tue, Oct 16, 2012 at 12:44 PM, Brandon Jones wrote: >>> It seems like the OES_vertex_array_object and OES_element_index_uint >>> extensions are both in a good position to be moved out of draft status and >>> given community approval. They both have conformance tests available now and >>> neither have unresolved issues. Are there any objections to giving either >>> extension official status? >>> >>> --Brandon >> > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From din...@ Tue Oct 16 15:07:58 2012 From: din...@ (Dean Jackson) Date: Wed, 17 Oct 2012 09:07:58 +1100 Subject: [Public WebGL] Moving OES_vertex_array_object and OES_element_index_uint out of draft In-Reply-To: References: Message-ID: <4F2D7B91-9855-4076-941A-2AAABB4B58E0@apple.com> Fine with Apple. Dean On 17/10/2012, at 6:44 AM, Brandon Jones wrote: > It seems like the OES_vertex_array_object and OES_element_index_uint extensions are both in a good position to be moved out of draft status and given community approval. They both have conformance tests available now and neither have unresolved issues. Are there any objections to giving either extension official status? 
> > --Brandon ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gma...@ Tue Oct 16 15:54:54 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo56S+55SoKQ==?=) Date: Tue, 16 Oct 2012 15:54:54 -0700 Subject: [Public WebGL] texImage2D on unfinished images and video Message-ID: I don't think the spec makes this clear what happens when you try to call texImage2D or texSubImage2D on an image or video that is not yet loaded. Example video = document.createElement("video"); video.src = "http://mysite.com/myvideo"; video.play(); function render() { gl.bindTexture(...) gl.texImage2D(..., video); gl.drawArrays(...); window.requestAnimationFrame(render); } Chrome right now will synthesize a GL error if the system hasn't actually gotten the video to start (as in if it's still buffering). Off the top of my head, it seems like it would just be friendlier to make a black texture (0,0,0,1) or (0,0,0,0) 1x1 pixels for videos that aren't loaded yet. Otherwise, having to check that the video is ready before calling texImage2D seems kind of burdensome on the developer. If they want to check they should use video.addEventListener('playing') or similar. If they were making a video player they'd have to add a bunch of logic when queuing the next video. Same with images. img = document.createElement("img"); img.src = "http://mysite.com/myimage"; function render() { gl.bindTexture(...) gl.texImage2D(..., img); gl.drawArrays(...); window.requestAnimationFrame(render); } If you want to know if the image has loaded use img.onload but otherwise don't fail the call? What do you think? Good idea? Bad idea? -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Tue Oct 16 16:07:36 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Wed, 17 Oct 2012 01:07:36 +0200 Subject: [Public WebGL] texImage2D on unfinished images and video In-Reply-To: References: Message-ID: On Wed, Oct 17, 2012 at 12:54 AM, Gregg Tavares (??) wrote: > Off the top of my head, it seems like it would just be friendlier to make > a black texture (0,0,0,1) or (0,0,0,0) 1x1 pixels for videos that aren't > loaded yet. > Seeing a black texture is nearly universally accepted as "stuff's not uploaded" for 3D programmers. So imo that's a perfectly fine idea. > Otherwise, having to check that the video is ready before calling > texImage2D seems kind of burdensome on the developer. If they want to check > they should use video.addEventListener('playing') or similar. If they were > making a video player they'd have to add a bunch of logic when queuing the > next video. > Although "playing" is not synonymous with "buffer ready". A video can be paused and you'd like to show the still image from where it stands. > Same with images. > > img = document.createElement("img"); > img.src = "http://mysite.com/myimage"; > > function render() { > gl.bindTexture(...) > gl.texImage2D(..., img); > gl.drawArrays(...); > window.requestAnimationFrame(render); > } > > If you want to know if the image has loaded use img.onload but otherwise > don't fail the call? > I usually wait for onload in images before usage. Usually in the form of a bulk loader before drawing commences. But image onload signals "buffer ready" better than videos "playing". 
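A minimal sketch of the bulk-loader pattern described here, assuming an existing WebGL context gl; the URLs and callback are placeholders.

function loadTextures(gl, urls, onReady) {
  var textures = {};
  var remaining = urls.length;
  urls.forEach(function (url) {
    var img = new Image();
    img.onload = function () {
      var tex = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, tex);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
      textures[url] = tex;
      if (--remaining === 0) onReady(textures); // draw only once everything is uploaded
    };
    img.src = url;
  });
}

// e.g. loadTextures(gl, ["diffuse.png", "normal.png"], function (textures) { /* start rendering */ });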
-------------- next part -------------- An HTML attachment was scrubbed... URL: From apo...@ Tue Oct 16 16:26:24 2012 From: apo...@ (Acorn Pooley) Date: Tue, 16 Oct 2012 16:26:24 -0700 Subject: [Public WebGL] texImage2D on unfinished images and video In-Reply-To: References: Message-ID: <20121016232624.GC20035@nvidia.com> > Off the top of my head, it seems like it would just be friendlier to make a black texture (0,0,0,1) or (0,0,0,0) 1x1 pixels for videos that aren't loaded yet. FWIW the EGLStream extension makes the texture incomplete if the first frame is not ready when the consumer calls Acquire (a similar situation). Incomplete means texture fetches return (0,0,0,1). So what you suggest seems like a good choice. -Acorn ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gma...@ Tue Oct 16 16:28:59 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo56S+55SoKQ==?=) Date: Tue, 16 Oct 2012 16:28:59 -0700 Subject: [Public WebGL] texImage2D on unfinished images and video In-Reply-To: References: Message-ID: On Tue, Oct 16, 2012 at 4:07 PM, Florian B?sch wrote: > On Wed, Oct 17, 2012 at 12:54 AM, Gregg Tavares (??) wrote: > >> Off the top of my head, it seems like it would just be friendlier to make >> a black texture (0,0,0,1) or (0,0,0,0) 1x1 pixels for videos that aren't >> loaded yet. >> > Seeing a black texture is nearly universally accepted as "stuff's not > uploaded" for 3D programmers. So imo that's a perfectly fine idea. > > >> Otherwise, having to check that the video is ready before calling >> texImage2D seems kind of burdensome on the developer. If they want to check >> they should use video.addEventListener('playing') or similar. If they were >> making a video player they'd have to add a bunch of logic when queuing the >> next video. >> > Although "playing" is not synonymous with "buffer ready". A video can be > paused and you'd like to show the still image from where it stands. > Let me be more clear. My position is that how to figure out if a video is ready or a image is loaded is outside the scope of WebGL. WebGL should only do 1 of 3 things 1) if the URL is a different domain and there's no CORS permission throw a security exception 2) if the image/video is not loaded/ready/buffered then for texImage2D make 1 pixel black texture, for texSubImage2D do nothing. 3) otherwise upload the image/video into the texture. So whether it's playing, bufferReady, onload, etc. is an HTML spec issue, not a WebGL issue. WebGL's spec only needs to handle the 3 cases above > > >> Same with images. >> >> img = document.createElement("img"); >> img.src = "http://mysite.com/myimage"; >> >> function render() { >> gl.bindTexture(...) >> gl.texImage2D(..., img); >> gl.drawArrays(...); >> window.requestAnimationFrame(render); >> } >> >> If you want to know if the image has loaded use img.onload but otherwise >> don't fail the call? >> > I usually wait for onload in images before usage. Usually in the form of a > bulk loader before drawing commences. But image onload signals "buffer > ready" better than videos "playing". > -------------- next part -------------- An HTML attachment was scrubbed... 
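Spelled out, the three cases come down to something like this sketch, where isOriginClean and isReadyForUpload are placeholders for checks the HTML and WebGL specs would have to define, not existing functions:

// Sketch of the proposed behaviour for texImage2D with an image/video source.
function uploadSource(gl, target, source) {
  if (!isOriginClean(source)) {                   // placeholder CORS check
    throw new Error("security exception");        // case 1: cross-origin, no CORS
  }
  if (!isReadyForUpload(source)) {                // placeholder readiness check
    // case 2: not loaded/ready/buffered -> 1x1 black for texImage2D
    // (texSubImage2D would simply do nothing here)
    gl.texImage2D(target, 0, gl.RGBA, 1, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE,
                  new Uint8Array([0, 0, 0, 255]));
    return;
  }
  gl.texImage2D(target, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, source);  // case 3
}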
URL: From kbr...@ Tue Oct 16 16:28:28 2012 From: kbr...@ (Kenneth Russell) Date: Tue, 16 Oct 2012 16:28:28 -0700 Subject: [Public WebGL] texImage2D on unfinished images and video In-Reply-To: References: Message-ID: On Tue, Oct 16, 2012 at 3:54 PM, Gregg Tavares (??) wrote: > I don't think the spec makes this clear what happens when you try to call > texImage2D or texSubImage2D on an image or video that is not yet loaded. Right, the behavior in this case is not defined. I'm pretty sure this was discussed a long time ago in the working group, but it seemed difficult to write a test for any defined behavior, so by consensus implementations generate INVALID_OPERATION when attempting to upload incomplete images or video to WebGL textures. > Example > > video = document.createElement("video"); > video.src = "http://mysite.com/myvideo"; > video.play(); > > function render() { > gl.bindTexture(...) > gl.texImage2D(..., video); > gl.drawArrays(...); > window.requestAnimationFrame(render); > } > > Chrome right now will synthesize a GL error if the system hasn't actually > gotten the video to start (as in if it's still buffering). > > Off the top of my head, it seems like it would just be friendlier to make a > black texture (0,0,0,1) or (0,0,0,0) 1x1 pixels for videos that aren't > loaded yet. > > Otherwise, having to check that the video is ready before calling texImage2D > seems kind of burdensome on the developer. If they want to check they should > use video.addEventListener('playing') or similar. If they were making a > video player they'd have to add a bunch of logic when queuing the next > video. > > Same with images. > > img = document.createElement("img"); > img.src = "http://mysite.com/myimage"; > > function render() { > gl.bindTexture(...) > gl.texImage2D(..., img); > gl.drawArrays(...); > window.requestAnimationFrame(render); > } > > If you want to know if the image has loaded use img.onload but otherwise > don't fail the call? > > What do you think? Good idea? Bad idea? My initial impression is that this change is not a good idea. It would expose the specification, implementations and applications to a lot of corner case behaviors. For example, if a video's width and height hasn't been received yet, then texImage2D(..., video) would have to allocate a 1x1 texture; but if the width and height are known, then it would have to allocate the right amount of storage. A naive app might call texImage2D only the first time and texSubImage2D subsequently, so if the first texImage2D call was made before the metadata was downloaded, it would never render correctly. I think the current fail-fast behavior is best, and already has the result that it renders a black texture if the upload fails; the call will (implicitly) generate INVALID_OPERATION, and the texture will be incomplete. If we want to spec this more tightly then we'll need to do more work in the conformance suite to forcibly stall HTTP downloads of the video resources in the test suite at well known points. I'm vaguely aware that WebKit's HTTP tests do this with a custom server. Requiring a custom server in order to run the WebGL conformance suite at all would have pretty significant disadvantages. 
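Concretely, the problematic pattern is the usual allocate-once-then-update loop; assuming the proposed silent 1x1 placeholder, it would look like this sketch and never recover:

// First upload happens before the video metadata has arrived, so it would
// allocate only a 1x1 placeholder under the proposed behaviour...
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);

function render() {
  // ...and every later frame only updates in place, which can never grow the
  // storage back to videoWidth x videoHeight, so the texture stays wrong.
  gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, gl.RGBA, gl.UNSIGNED_BYTE, video);
  gl.drawArrays(gl.TRIANGLES, 0, 6);
  window.requestAnimationFrame(render);
}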
-Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Tue Oct 16 16:45:24 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Wed, 17 Oct 2012 01:45:24 +0200 Subject: [Public WebGL] texImage2D on unfinished images and video In-Reply-To: References: Message-ID: On Wed, Oct 17, 2012 at 1:28 AM, Kenneth Russell wrote: > Requiring a > custom server in order to run the WebGL conformance suite at all would > have pretty significant disadvantages. > Couldn't we use File or something passing it to a blob passing it to a url object passing it to the src and then drip-feeding image bytes into the file in JS? -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Tue Oct 16 16:46:13 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo56S+55SoKQ==?=) Date: Tue, 16 Oct 2012 16:46:13 -0700 Subject: [Public WebGL] texImage2D on unfinished images and video In-Reply-To: References: Message-ID: On Tue, Oct 16, 2012 at 4:28 PM, Kenneth Russell wrote: > On Tue, Oct 16, 2012 at 3:54 PM, Gregg Tavares (??) > wrote: > > I don't think the spec makes this clear what happens when you try to call > > texImage2D or texSubImage2D on an image or video that is not yet loaded. > > Right, the behavior in this case is not defined. I'm pretty sure this > was discussed a long time ago in the working group, but it seemed > difficult to write a test for any defined behavior, so by consensus > implementations generate INVALID_OPERATION when attempting to upload > incomplete images or video to WebGL textures. > I don't see a problem writing a test. Basically var playing = false video.src = url video.play(); video.addEventListener('playing', function() { playing = true;}); waitForVideo() { gl.bindTexture(...) gl.texImage2D(..., video); if (playing) { doTestsOnVideoContent(); if (gl.getError() != gl.NO_ERROR) { testFailed("there should be no errors"); } } else { requestAnimationFrame(waitForVideo); } This basically says an implementation is never allowed to generate an error. it might not be a perfect test but neither is the current one. In fact this should test it just fine video = document.createElement("video"); gl.texImage2D(..., video); glErrorShouldBe(gl.NO_ERROR); video.src = "someValidURL": gl.texImage2D(..., video); glErrorShouldBe(gl.NO_ERROR); video.src = "someOtherValidURL": gl.texImage2D(..., video); glErrorShouldBe(gl.NO_ERROR); Then do the other 'playing' event tests. > > > Example > > > > video = document.createElement("video"); > > video.src = "http://mysite.com/myvideo"; > > video.play(); > > > > function render() { > > gl.bindTexture(...) > > gl.texImage2D(..., video); > > gl.drawArrays(...); > > window.requestAnimationFrame(render); > > } > > > > Chrome right now will synthesize a GL error if the system hasn't actually > > gotten the video to start (as in if it's still buffering). > > > > Off the top of my head, it seems like it would just be friendlier to > make a > > black texture (0,0,0,1) or (0,0,0,0) 1x1 pixels for videos that aren't > > loaded yet. > > > > Otherwise, having to check that the video is ready before calling > texImage2D > > seems kind of burdensome on the developer. If they want to check they > should > > use video.addEventListener('playing') or similar. 
If they were making a > > video player they'd have to add a bunch of logic when queuing the next > > video. > > > > Same with images. > > > > img = document.createElement("img"); > > img.src = "http://mysite.com/myimage"; > > > > function render() { > > gl.bindTexture(...) > > gl.texImage2D(..., img); > > gl.drawArrays(...); > > window.requestAnimationFrame(render); > > } > > > > If you want to know if the image has loaded use img.onload but otherwise > > don't fail the call? > > > > What do you think? Good idea? Bad idea? > > My initial impression is that this change is not a good idea. It would > expose the specification, implementations and applications to a lot of > corner case behaviors. For example, if a video's width and height > hasn't been received yet, then texImage2D(..., video) would have to > allocate a 1x1 texture; but if the width and height are known, then it > would have to allocate the right amount of storage. A naive app might > call texImage2D only the first time and texSubImage2D subsequently, so > if the first texImage2D call was made before the metadata was > downloaded, it would never render correctly. I think the current > fail-fast behavior is best, and already has the result that it renders > a black texture if the upload fails; the call will (implicitly) > generate INVALID_OPERATION, and the texture will be incomplete. > I don't see how that's worse than the current situation which is you call texImage2D and pray it works. You have no idea if it's going to work or not. If you do this does it work? okToUseVideo = false; video = document.createElement("video"); video.src = "movie#1" video.addEventListener('playing', function() { okToUseVideo = true; } frameCount = 0; function render() { if (okToUseVideo) { gl.texImage2D(... , video); ++frameCount; if (frameCount > 1000) { video.src = "movie2"; } } } Basically after some amount of time I switch video.src to a new movie. Is the video off limits now? Will it use the old frame from the old movie until the new movie is buffered or will it give me INVALID_OPERATION? I have no idea and it's not specified. Same with img. img.src = "image#1" img.onload = function() { img.src = "image#2"; gl.texImage2D(...., img); // what's this? old image, no image, INVALID_OPERATION? } The texSubImage2D issue is not helped by the current spec. If video.src = useChoosenURL then you have no idea what the width and height are until the 'playing' event (or whatever event) which is no different than if we changed it. Changing it IMO means far less broken websites and I can't see any disadvantages. Sure you can get a 1x1 pixel texture to start and call texSubImage2D now but you can do that already. > > If we want to spec this more tightly then we'll need to do more work > in the conformance suite to forcibly stall HTTP downloads of the video > resources in the test suite at well known points. I'm vaguely aware > that WebKit's HTTP tests do this with a custom server. Requiring a > custom server in order to run the WebGL conformance suite at all would > have pretty significant disadvantages. > > -Ken > -------------- next part -------------- An HTML attachment was scrubbed... 
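One way an application can get correctly sized storage under either behaviour is to key the first texImage2D off the standard 'loadeddata' event (fired once the first frame is available and videoWidth/videoHeight are known, and fired again when src is changed, so the switch-movie case re-allocates too), and only stream frames after that. A sketch, with gl and tex assumed to be set up elsewhere:

var sized = false;
video.addEventListener('loadeddata', function() {
  // Dimensions and the first frame are available; do the real allocation once.
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
  sized = true;
});

function render() {
  if (sized) {
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, gl.RGBA, gl.UNSIGNED_BYTE, video);
  }
  gl.drawArrays(gl.TRIANGLES, 0, 6);
  window.requestAnimationFrame(render);
}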
URL: From kbr...@ Tue Oct 16 18:51:57 2012 From: kbr...@ (Kenneth Russell) Date: Tue, 16 Oct 2012 18:51:57 -0700 Subject: [Public WebGL] Please review WebGL multiple render target extension proposal Message-ID: Please review an extension proposal adding multiple render target functionality to WebGL: http://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_multiple_render_targets/ It mirrors a proposed extension for ANGLE which adds this functionality to OpenGL ES 2.0: https://angleproject.googlecode.com/svn/trunk/extensions/ANGLE_multiple_render_targets.txt Please provide your feedback on both. Apologies for how long it took to produce these extensions. They couldn't be drafted until OpenGL ES 3.0 was released, because in order to avoid compatibility issues in the future, it was necessary to derive their contents from the ES 3.0 specification. The draft ANGLE extension modifies the same areas of the OpenGL ES 2.0 specification modified by the GL_NV_fbo_color_attachments and GL_NV_draw_buffers extensions, but draws the majority of its text and semantics from the ES 3.0 spec. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Wed Oct 17 02:11:23 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Wed, 17 Oct 2012 11:11:23 +0200 Subject: [Public WebGL] Please review WebGL multiple render target extension proposal In-Reply-To: References: Message-ID: I think that's fine. I've got one question though. I know that when I attach a floating point texture, that the output values are not clamped to 0-1. I also know that the OpenGL ES 2.0 specification states the clamping as behavior (which is also mentioned in this extension). I'm not sure, but I think the desktop versions of OpenGL don't specify the clamping at the shader output level (there's some function to control clamping). I'm a bit worried that this might be a behavioral difference between Mobiles/Desktops that could shoot you in the foot if/when you find a mobile with floating point texture render target support. Anybody know if mobiles do the clamping regardless of target format? On Wed, Oct 17, 2012 at 3:51 AM, Kenneth Russell wrote: > > Please review an extension proposal adding multiple render target > functionality to WebGL: > > > http://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_multiple_render_targets/ > > It mirrors a proposed extension for ANGLE which adds this > functionality to OpenGL ES 2.0: > > > https://angleproject.googlecode.com/svn/trunk/extensions/ANGLE_multiple_render_targets.txt > > Please provide your feedback on both. > > Apologies for how long it took to produce these extensions. They > couldn't be drafted until OpenGL ES 3.0 was released, because in order > to avoid compatibility issues in the future, it was necessary to > derive their contents from the ES 3.0 specification. The draft ANGLE > extension modifies the same areas of the OpenGL ES 2.0 specification > modified by the GL_NV_fbo_color_attachments and GL_NV_draw_buffers > extensions, but draws the majority of its text and semantics from the > ES 3.0 spec. 
> > -Ken > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Wed Oct 17 09:31:39 2012 From: bja...@ (Benoit Jacob) Date: Wed, 17 Oct 2012 12:31:39 -0400 Subject: [Public WebGL] Please review WebGL multiple render target extension proposal In-Reply-To: References: Message-ID: <507EDD6B.8020007@mozilla.com> Instead of introduding new families of 16 symbolic constants, couldn't we switch to an index-based API? I realize that the current proposal was motivated by similarity to the existing OpenGL interface, but since the symbolic constants have adjacent values, it should be straightforward to map one API onto the other one, for the uses of e.g. automated porting of OpenGL applications to WebGL. Benoit On 12-10-16 09:51 PM, Kenneth Russell wrote: > Please review an extension proposal adding multiple render target > functionality to WebGL: > > http://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_multiple_render_targets/ > > It mirrors a proposed extension for ANGLE which adds this > functionality to OpenGL ES 2.0: > > https://angleproject.googlecode.com/svn/trunk/extensions/ANGLE_multiple_render_targets.txt > > Please provide your feedback on both. > > Apologies for how long it took to produce these extensions. They > couldn't be drafted until OpenGL ES 3.0 was released, because in order > to avoid compatibility issues in the future, it was necessary to > derive their contents from the ES 3.0 specification. The draft ANGLE > extension modifies the same areas of the OpenGL ES 2.0 specification > modified by the GL_NV_fbo_color_attachments and GL_NV_draw_buffers > extensions, but draws the majority of its text and semantics from the > ES 3.0 spec. > > -Ken > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From baj...@ Wed Oct 17 09:52:16 2012 From: baj...@ (Brandon Jones) Date: Wed, 17 Oct 2012 09:52:16 -0700 Subject: [Public WebGL] Please review WebGL multiple render target extension proposal In-Reply-To: <507EDD6B.8020007@mozilla.com> References: <507EDD6B.8020007@mozilla.com> Message-ID: Extension proposal sounds good here! On Wed, Oct 17, 2012 at 9:31 AM, Benoit Jacob wrote: > > Instead of introduding new families of 16 symbolic constants, couldn't > we switch to an index-based API? > The same could be said about gl.activeTexture, could it not? In fact, that would probably have avoided a minor class of error that I've seen pop up from time to time where users don't realize that they need to pass the symbolic constant instead of just "1". But given that the precedent of activeTexture is already in place I have a hard time saying we should break tradition just for this extension. 
It doesn't gain us much and introduces yet another minor inconsistency with desktop GL that will turn into footnotes and warnings on every blog and tutorial, and adds one more step that you have to internalize when porting desktop code by hand. I just don't see the benefit. --Brandon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ced...@ Wed Oct 17 10:09:03 2012 From: ced...@ (Cedric Vivier) Date: Thu, 18 Oct 2012 01:09:03 +0800 Subject: [Public WebGL] Please review WebGL multiple render target extension proposal In-Reply-To: <507EDD6B.8020007@mozilla.com> References: <507EDD6B.8020007@mozilla.com> Message-ID: On Thu, Oct 18, 2012 at 12:31 AM, Benoit Jacob wrote: > Instead of introduding new families of 16 symbolic constants, couldn't > we switch to an index-based API? I don't think this is expressible with WebIDL (?). Even if it were, though, I don't think this would be a clear improvement as it indeed diverges from OpenGL and therefore adds documentation/porting work. Also this would be inconsistent within WebGL itself, since the core API have TEXTURE_n and such that are not index-based. On the other hand, I think we could get rid of all those verbose _WEBGL suffixes, while _ANGLE is necessary for the 'native' version of the extension (based off C), the namespacing here is already done through the WebGL extension object. Thoughts? PS: unfortunately there is one approved extension with the _WEBGL suffix on constants (WEBGL_debug_renderer_info) - fortunately it is the only one and WEBGL_debug_shaders does not have WEBGL suffix on the getTranslatedShaderSource function (like DrawBuffersWEBGL has, in this proposal). PS2: we could probably remove those suffixes from some of the draft extensions that are currently using them. > > I realize that the current proposal was motivated by similarity to the > existing OpenGL interface, but since the symbolic constants have > adjacent values, it should be straightforward to map one API onto the > other one, for the uses of e.g. automated porting of OpenGL applications > to WebGL. > > Benoit > > > On 12-10-16 09:51 PM, Kenneth Russell wrote: >> Please review an extension proposal adding multiple render target >> functionality to WebGL: >> >> http://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_multiple_render_targets/ >> >> It mirrors a proposed extension for ANGLE which adds this >> functionality to OpenGL ES 2.0: >> >> https://angleproject.googlecode.com/svn/trunk/extensions/ANGLE_multiple_render_targets.txt >> >> Please provide your feedback on both. >> >> Apologies for how long it took to produce these extensions. They >> couldn't be drafted until OpenGL ES 3.0 was released, because in order >> to avoid compatibility issues in the future, it was necessary to >> derive their contents from the ES 3.0 specification. The draft ANGLE >> extension modifies the same areas of the OpenGL ES 2.0 specification >> modified by the GL_NV_fbo_color_attachments and GL_NV_draw_buffers >> extensions, but draws the majority of its text and semantics from the >> ES 3.0 spec. 
>> >> -Ken >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From bzb...@ Wed Oct 17 10:20:12 2012 From: bzb...@ (Boris Zbarsky) Date: Wed, 17 Oct 2012 13:20:12 -0400 Subject: [Public WebGL] Please review WebGL multiple render target extension proposal In-Reply-To: References: <507EDD6B.8020007@mozilla.com> Message-ID: <507EE8CC.40000@mit.edu> On 10/17/12 1:09 PM, Cedric Vivier wrote: > > On Thu, Oct 18, 2012 at 12:31 AM, Benoit Jacob wrote: >> Instead of introduding new families of 16 symbolic constants, couldn't >> we switch to an index-based API? > > I don't think this is expressible with WebIDL (?). What exactly are we trying to express? -Boris ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Wed Oct 17 11:03:23 2012 From: kbr...@ (Kenneth Russell) Date: Wed, 17 Oct 2012 11:03:23 -0700 Subject: [Public WebGL] Please review WebGL multiple render target extension proposal In-Reply-To: <507EDD6B.8020007@mozilla.com> References: <507EDD6B.8020007@mozilla.com> Message-ID: This would represent an unnecessary divergence from the underlying OpenGL API. I'm strongly in favor of using the existing API semantics. -Ken On Wed, Oct 17, 2012 at 9:31 AM, Benoit Jacob wrote: > > Instead of introduding new families of 16 symbolic constants, couldn't > we switch to an index-based API? > > I realize that the current proposal was motivated by similarity to the > existing OpenGL interface, but since the symbolic constants have > adjacent values, it should be straightforward to map one API onto the > other one, for the uses of e.g. automated porting of OpenGL applications > to WebGL. > > Benoit > > > On 12-10-16 09:51 PM, Kenneth Russell wrote: >> Please review an extension proposal adding multiple render target >> functionality to WebGL: >> >> http://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_multiple_render_targets/ >> >> It mirrors a proposed extension for ANGLE which adds this >> functionality to OpenGL ES 2.0: >> >> https://angleproject.googlecode.com/svn/trunk/extensions/ANGLE_multiple_render_targets.txt >> >> Please provide your feedback on both. >> >> Apologies for how long it took to produce these extensions. They >> couldn't be drafted until OpenGL ES 3.0 was released, because in order >> to avoid compatibility issues in the future, it was necessary to >> derive their contents from the ES 3.0 specification. 
The draft ANGLE >> extension modifies the same areas of the OpenGL ES 2.0 specification >> modified by the GL_NV_fbo_color_attachments and GL_NV_draw_buffers >> extensions, but draws the majority of its text and semantics from the >> ES 3.0 spec. >> >> -Ken >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Wed Oct 17 11:04:02 2012 From: kbr...@ (Kenneth Russell) Date: Wed, 17 Oct 2012 11:04:02 -0700 Subject: [Public WebGL] Please review WebGL multiple render target extension proposal In-Reply-To: References: Message-ID: This is pretty much an orthogonal question to the MRT proposal. Please start a new thread for it. -Ken On Wed, Oct 17, 2012 at 2:11 AM, Florian B?sch wrote: > I think that's fine. I've got one question though. I know that when I attach > a floating point texture, that the output values are not clamped to 0-1. I > also know that the OpenGL ES 2.0 specification states the clamping as > behavior (which is also mentioned in this extension). I'm not sure, but I > think the desktop versions of OpenGL don't specify the clamping at the > shader output level (there's some function to control clamping). > > I'm a bit worried that this might be a behavioral difference between > Mobiles/Desktops that could shoot you in the foot if/when you find a mobile > with floating point texture render target support. Anybody know if mobiles > do the clamping regardless of target format? > > On Wed, Oct 17, 2012 at 3:51 AM, Kenneth Russell wrote: >> >> >> Please review an extension proposal adding multiple render target >> functionality to WebGL: >> >> >> http://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_multiple_render_targets/ >> >> It mirrors a proposed extension for ANGLE which adds this >> functionality to OpenGL ES 2.0: >> >> >> https://angleproject.googlecode.com/svn/trunk/extensions/ANGLE_multiple_render_targets.txt >> >> Please provide your feedback on both. >> >> Apologies for how long it took to produce these extensions. They >> couldn't be drafted until OpenGL ES 3.0 was released, because in order >> to avoid compatibility issues in the future, it was necessary to >> derive their contents from the ES 3.0 specification. The draft ANGLE >> extension modifies the same areas of the OpenGL ES 2.0 specification >> modified by the GL_NV_fbo_color_attachments and GL_NV_draw_buffers >> extensions, but draws the majority of its text and semantics from the >> ES 3.0 spec. 
>> >> -Ken >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gma...@ Wed Oct 17 11:09:20 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo56S+55SoKQ==?=) Date: Wed, 17 Oct 2012 11:09:20 -0700 Subject: [Public WebGL] Please review WebGL multiple render target extension proposal In-Reply-To: <507EDD6B.8020007@mozilla.com> References: <507EDD6B.8020007@mozilla.com> Message-ID: I'm mixed on using indices instead of constants. One minor advantage to constants is given a value you can convert it back into a meaningful string. If I just have '4' is that the 4th texture? the 4th drawbuffer? the 4th attachment? With constants that's clear. 0x84C4 is TEXTURE4 0x882A is DRAWBUFFER4 0x8CE4 is COLOR_ATTACHMENT4 Which helps with auto generated logging code. You can write wrappers that look at the IDL and make GLenums get logged by calling something like var glEnumToString = function(gl, value) { for (var p in gl) { if (gl[p] == value) { return p; } } return "0x" + value.toString(16); }; Where as if they are just indices then you'd have to write different code for each function. On Wed, Oct 17, 2012 at 9:31 AM, Benoit Jacob wrote: > > Instead of introduding new families of 16 symbolic constants, couldn't > we switch to an index-based API? > > I realize that the current proposal was motivated by similarity to the > existing OpenGL interface, but since the symbolic constants have > adjacent values, it should be straightforward to map one API onto the > other one, for the uses of e.g. automated porting of OpenGL applications > to WebGL. > > Benoit > > > On 12-10-16 09:51 PM, Kenneth Russell wrote: > > Please review an extension proposal adding multiple render target > > functionality to WebGL: > > > > > http://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_multiple_render_targets/ > > > > It mirrors a proposed extension for ANGLE which adds this > > functionality to OpenGL ES 2.0: > > > > > https://angleproject.googlecode.com/svn/trunk/extensions/ANGLE_multiple_render_targets.txt > > > > Please provide your feedback on both. > > > > Apologies for how long it took to produce these extensions. They > > couldn't be drafted until OpenGL ES 3.0 was released, because in order > > to avoid compatibility issues in the future, it was necessary to > > derive their contents from the ES 3.0 specification. The draft ANGLE > > extension modifies the same areas of the OpenGL ES 2.0 specification > > modified by the GL_NV_fbo_color_attachments and GL_NV_draw_buffers > > extensions, but draws the majority of its text and semantics from the > > ES 3.0 spec. 
> > > > -Ken > > > > ----------------------------------------------------------- > > You are currently subscribed to public_webgl...@ > > To unsubscribe, send an email to majordomo...@ with > > the following command in the body of your email: > > unsubscribe public_webgl > > ----------------------------------------------------------- > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Wed Oct 17 11:14:20 2012 From: kbr...@ (Kenneth Russell) Date: Wed, 17 Oct 2012 11:14:20 -0700 Subject: [Public WebGL] Please review WebGL multiple render target extension proposal In-Reply-To: References: Message-ID: Ah, sorry, pulled the trigger too quickly on that email. I see the section you're pointing out. It's been pointed out most vehemently by Mark Callow that I in particular glossed over the subtleties of floating-point render targets when adding optional support for them in the WebGL exposure of OES_texture_float. There is an ongoing discussion about the correctness of the OES_texture_float conformance test as well (http://www.khronos.org/bugzilla/show_bug.cgi?id=729). Another OpenGL ES extension proposal should be coming soon which supports floating-point render targets more thoroughly than WebGL's exposure of OES_texture_float, and I think that rather than try to patch up the existing specs we should just wait for that one to come along and then point to it. From the standpoint of the MRT extension, OES_texture_float should be considered to affect individual color attachments in the same way that it affects the previous single color attachment to FBOs. I wouldn't want to do a half-baked job again trying to add FP render target support to this MRT extension so again let's wait for the new extension. The vast majority of ES 2.0 hardware will not support floating-point render targets, but my hope is that some or most ES 3.0 hardware will. -Ken On Wed, Oct 17, 2012 at 11:04 AM, Kenneth Russell wrote: > This is pretty much an orthogonal question to the MRT proposal. Please > start a new thread for it. > > -Ken > > > On Wed, Oct 17, 2012 at 2:11 AM, Florian B?sch wrote: >> I think that's fine. I've got one question though. I know that when I attach >> a floating point texture, that the output values are not clamped to 0-1. I >> also know that the OpenGL ES 2.0 specification states the clamping as >> behavior (which is also mentioned in this extension). I'm not sure, but I >> think the desktop versions of OpenGL don't specify the clamping at the >> shader output level (there's some function to control clamping). >> >> I'm a bit worried that this might be a behavioral difference between >> Mobiles/Desktops that could shoot you in the foot if/when you find a mobile >> with floating point texture render target support. Anybody know if mobiles >> do the clamping regardless of target format? 
>> >> On Wed, Oct 17, 2012 at 3:51 AM, Kenneth Russell wrote: >>> >>> >>> Please review an extension proposal adding multiple render target >>> functionality to WebGL: >>> >>> >>> http://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_multiple_render_targets/ >>> >>> It mirrors a proposed extension for ANGLE which adds this >>> functionality to OpenGL ES 2.0: >>> >>> >>> https://angleproject.googlecode.com/svn/trunk/extensions/ANGLE_multiple_render_targets.txt >>> >>> Please provide your feedback on both. >>> >>> Apologies for how long it took to produce these extensions. They >>> couldn't be drafted until OpenGL ES 3.0 was released, because in order >>> to avoid compatibility issues in the future, it was necessary to >>> derive their contents from the ES 3.0 specification. The draft ANGLE >>> extension modifies the same areas of the OpenGL ES 2.0 specification >>> modified by the GL_NV_fbo_color_attachments and GL_NV_draw_buffers >>> extensions, but draws the majority of its text and semantics from the >>> ES 3.0 spec. >>> >>> -Ken >>> >>> ----------------------------------------------------------- >>> You are currently subscribed to public_webgl...@ >>> To unsubscribe, send an email to majordomo...@ with >>> the following command in the body of your email: >>> unsubscribe public_webgl >>> ----------------------------------------------------------- >>> >> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Wed Oct 17 11:25:07 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Wed, 17 Oct 2012 20:25:07 +0200 Subject: [Public WebGL] Please review WebGL multiple render target extension proposal In-Reply-To: References: Message-ID: Right, so the idea's to introduce FP-textures properly a second time around which would modify the framebuffer/mrt extensions clamping behavior? On Wed, Oct 17, 2012 at 8:14 PM, Kenneth Russell wrote: > Ah, sorry, pulled the trigger too quickly on that email. I see the > section you're pointing out. > > It's been pointed out most vehemently by Mark Callow that I in > particular glossed over the subtleties of floating-point render > targets when adding optional support for them in the WebGL exposure of > OES_texture_float. There is an ongoing discussion about the > correctness of the OES_texture_float conformance test as well > (http://www.khronos.org/bugzilla/show_bug.cgi?id=729). > > Another OpenGL ES extension proposal should be coming soon which > supports floating-point render targets more thoroughly than WebGL's > exposure of OES_texture_float, and I think that rather than try to > patch up the existing specs we should just wait for that one to come > along and then point to it. From the standpoint of the MRT extension, > OES_texture_float should be considered to affect individual color > attachments in the same way that it affects the previous single color > attachment to FBOs. I wouldn't want to do a half-baked job again > trying to add FP render target support to this MRT extension so again > let's wait for the new extension. > > The vast majority of ES 2.0 hardware will not support floating-point > render targets, but my hope is that some or most ES 3.0 hardware will. 
> > -Ken > > > On Wed, Oct 17, 2012 at 11:04 AM, Kenneth Russell wrote: > > This is pretty much an orthogonal question to the MRT proposal. Please > > start a new thread for it. > > > > -Ken > > > > > > On Wed, Oct 17, 2012 at 2:11 AM, Florian B?sch wrote: > >> I think that's fine. I've got one question though. I know that when I > attach > >> a floating point texture, that the output values are not clamped to > 0-1. I > >> also know that the OpenGL ES 2.0 specification states the clamping as > >> behavior (which is also mentioned in this extension). I'm not sure, but > I > >> think the desktop versions of OpenGL don't specify the clamping at the > >> shader output level (there's some function to control clamping). > >> > >> I'm a bit worried that this might be a behavioral difference between > >> Mobiles/Desktops that could shoot you in the foot if/when you find a > mobile > >> with floating point texture render target support. Anybody know if > mobiles > >> do the clamping regardless of target format? > >> > >> On Wed, Oct 17, 2012 at 3:51 AM, Kenneth Russell > wrote: > >>> > >>> > >>> Please review an extension proposal adding multiple render target > >>> functionality to WebGL: > >>> > >>> > >>> > http://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_multiple_render_targets/ > >>> > >>> It mirrors a proposed extension for ANGLE which adds this > >>> functionality to OpenGL ES 2.0: > >>> > >>> > >>> > https://angleproject.googlecode.com/svn/trunk/extensions/ANGLE_multiple_render_targets.txt > >>> > >>> Please provide your feedback on both. > >>> > >>> Apologies for how long it took to produce these extensions. They > >>> couldn't be drafted until OpenGL ES 3.0 was released, because in order > >>> to avoid compatibility issues in the future, it was necessary to > >>> derive their contents from the ES 3.0 specification. The draft ANGLE > >>> extension modifies the same areas of the OpenGL ES 2.0 specification > >>> modified by the GL_NV_fbo_color_attachments and GL_NV_draw_buffers > >>> extensions, but draws the majority of its text and semantics from the > >>> ES 3.0 spec. > >>> > >>> -Ken > >>> > >>> ----------------------------------------------------------- > >>> You are currently subscribed to public_webgl...@ > >>> To unsubscribe, send an email to majordomo...@ with > >>> the following command in the body of your email: > >>> unsubscribe public_webgl > >>> ----------------------------------------------------------- > >>> > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Wed Oct 17 11:45:20 2012 From: kbr...@ (Kenneth Russell) Date: Wed, 17 Oct 2012 11:45:20 -0700 Subject: [Public WebGL] Please review WebGL multiple render target extension proposal In-Reply-To: References: Message-ID: Yes. On Wed, Oct 17, 2012 at 11:25 AM, Florian B?sch wrote: > Right, so the idea's to introduce FP-textures properly a second time around > which would modify the framebuffer/mrt extensions clamping behavior? > > > On Wed, Oct 17, 2012 at 8:14 PM, Kenneth Russell wrote: >> >> Ah, sorry, pulled the trigger too quickly on that email. I see the >> section you're pointing out. >> >> It's been pointed out most vehemently by Mark Callow that I in >> particular glossed over the subtleties of floating-point render >> targets when adding optional support for them in the WebGL exposure of >> OES_texture_float. 
There is an ongoing discussion about the >> correctness of the OES_texture_float conformance test as well >> (http://www.khronos.org/bugzilla/show_bug.cgi?id=729). >> >> Another OpenGL ES extension proposal should be coming soon which >> supports floating-point render targets more thoroughly than WebGL's >> exposure of OES_texture_float, and I think that rather than try to >> patch up the existing specs we should just wait for that one to come >> along and then point to it. From the standpoint of the MRT extension, >> OES_texture_float should be considered to affect individual color >> attachments in the same way that it affects the previous single color >> attachment to FBOs. I wouldn't want to do a half-baked job again >> trying to add FP render target support to this MRT extension so again >> let's wait for the new extension. >> >> The vast majority of ES 2.0 hardware will not support floating-point >> render targets, but my hope is that some or most ES 3.0 hardware will. >> >> -Ken >> >> >> On Wed, Oct 17, 2012 at 11:04 AM, Kenneth Russell wrote: >> > This is pretty much an orthogonal question to the MRT proposal. Please >> > start a new thread for it. >> > >> > -Ken >> > >> > >> > On Wed, Oct 17, 2012 at 2:11 AM, Florian B?sch wrote: >> >> I think that's fine. I've got one question though. I know that when I >> >> attach >> >> a floating point texture, that the output values are not clamped to >> >> 0-1. I >> >> also know that the OpenGL ES 2.0 specification states the clamping as >> >> behavior (which is also mentioned in this extension). I'm not sure, but >> >> I >> >> think the desktop versions of OpenGL don't specify the clamping at the >> >> shader output level (there's some function to control clamping). >> >> >> >> I'm a bit worried that this might be a behavioral difference between >> >> Mobiles/Desktops that could shoot you in the foot if/when you find a >> >> mobile >> >> with floating point texture render target support. Anybody know if >> >> mobiles >> >> do the clamping regardless of target format? >> >> >> >> On Wed, Oct 17, 2012 at 3:51 AM, Kenneth Russell >> >> wrote: >> >>> >> >>> >> >>> Please review an extension proposal adding multiple render target >> >>> functionality to WebGL: >> >>> >> >>> >> >>> >> >>> http://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_multiple_render_targets/ >> >>> >> >>> It mirrors a proposed extension for ANGLE which adds this >> >>> functionality to OpenGL ES 2.0: >> >>> >> >>> >> >>> >> >>> https://angleproject.googlecode.com/svn/trunk/extensions/ANGLE_multiple_render_targets.txt >> >>> >> >>> Please provide your feedback on both. >> >>> >> >>> Apologies for how long it took to produce these extensions. They >> >>> couldn't be drafted until OpenGL ES 3.0 was released, because in order >> >>> to avoid compatibility issues in the future, it was necessary to >> >>> derive their contents from the ES 3.0 specification. The draft ANGLE >> >>> extension modifies the same areas of the OpenGL ES 2.0 specification >> >>> modified by the GL_NV_fbo_color_attachments and GL_NV_draw_buffers >> >>> extensions, but draws the majority of its text and semantics from the >> >>> ES 3.0 spec. 
>> >>> >> >>> -Ken >> >>> >> >>> ----------------------------------------------------------- >> >>> You are currently subscribed to public_webgl...@ >> >>> To unsubscribe, send an email to majordomo...@ with >> >>> the following command in the body of your email: >> >>> unsubscribe public_webgl >> >>> ----------------------------------------------------------- >> >>> >> >> > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gma...@ Wed Oct 17 18:04:05 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo56S+55SoKQ==?=) Date: Wed, 17 Oct 2012 18:04:05 -0700 Subject: [Public WebGL] using the same context with multiple canvases Message-ID: I'd like to pursue this idea further that Florian brought up which is that, for drawing to multiple canvases, rather than share resources across contexts it would be much nicer to just be able to use the same context with multiple canvases. Something like ctx1 = canvas1.getContext("webgl"); ctx1.clearColor(0,1,0,1); ctx1.clear(gl.COLOR_BUFFER_BIT); canvas2.setContext(ctx1); ctx1.clear(gl.COLOR_BUFFER_BIT); Clear's both canvas1 and canvas2 to green Issues: Under the current WebGL the only way to setup a canvas is by calling getContext(..., contextCreationParameters). Whether the canvas is RGB or RGBA, has depth or stencil, is premultiplied or not, has it's backbuffer preserved or not, etc.. Which leads to some questions. Is the current API okay. You'd create 2 contexts still but just use one of them as in ctx1 = canvas1.getContext("webgl". { alpha: true } ); ctx2 = canvas1.getContext("webgl". { alpha: false } ); ctx1.clearColor(0,1,0,1); ctx1.clear(gl.COLOR_BUFFER_BIT); canvas2.setContext(ctx1); ctx1.clear(gl.COLOR_BUFFER_BIT); After the cal to cavnas2.setContext(ctx1) what does ctx2 do? Does it still draw to cavnas2 as well? Does it get GL_INVALID_FRAMEBUFFER_OPERATION if a draw/clear function is called and no FBO is bound? Does it become lost? What does ctx1.canvas reference and what does ctx2.canvas reference? What does ctx1.getContextAttributes() return? The attributes for canvas1 or canvas2? I think arguably being able to setContext on a canvas is the right way to solve drawing to multiple canvas with the same resources but there are some unfortunate existing API choices. Thoughts? -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Wed Oct 17 18:20:51 2012 From: bja...@ (Benoit Jacob) Date: Wed, 17 Oct 2012 21:20:51 -0400 Subject: [Public WebGL] using the same context with multiple canvases In-Reply-To: References: Message-ID: <507F5973.10808@mozilla.com> Agree that sharing a single context between canvases seems like a more workable approach than the opposite -- it will lead to less memory overhead (no need to have N OpenGL contexts for N canvases, and no need to rely on OpenGL share groups). Some random thoughts: - what happened to the previously discussed idea of an idea like this: gl.bindFramebuffer(othercanvas); - a big problem with the approach of having to getContext before telling that you want to use another existing context, is that it will force the browser to create a drawing buffer which you may turn out to never use. We need an API that avoids such waste by allowing to avoid creating drawing buffers. 
The earlier-discussed bindFramebuffer(canvas) proposal, applied to a new canvas that doesn't already have a context on it, was a possible way to solve that problem. - I have a feeling that allowing multiple sets of context attributes is going to be a headache. If we settle on an API that formally allows specifying different context attribs, I'd at least recommend requiring the all the attributes to agree across all canvases, at least as a first step -- we can always relax that later, but if we allow it from the start, we won't be able to change it anymore. - Likewise, allowing canvases from different principals is going to be a source of pain, so I'd make it a requirement that canvases sharing a context must be on the same principal. Benoit On 12-10-17 09:04 PM, Gregg Tavares (??) wrote: > I'd like to pursue this idea further that Florian brought up which is > that, for drawing to multiple canvases, rather than share resources > across contexts it would be much nicer to just be able to use the same > context with multiple canvases. Something like > > ctx1 = canvas1.getContext("webgl"); > ctx1.clearColor(0,1,0,1); > ctx1.clear(gl.COLOR_BUFFER_BIT); > canvas2.setContext(ctx1); > ctx1.clear(gl.COLOR_BUFFER_BIT); > > > Clear's both canvas1 and canvas2 to green > > Issues: Under the current WebGL the only way to setup a canvas is by > calling getContext(..., contextCreationParameters). Whether the canvas > is RGB or RGBA, has depth or stencil, is premultiplied or not, has > it's backbuffer preserved or not, etc.. > > Which leads to some questions. > > Is the current API okay. You'd create 2 contexts still but just use > one of them as in > > ctx1 = canvas1.getContext("webgl". { alpha: true } ); > ctx2 = canvas1.getContext("webgl". { alpha: false } ); > ctx1.clearColor(0,1,0,1); > ctx1.clear(gl.COLOR_BUFFER_BIT); > canvas2.setContext(ctx1); > ctx1.clear(gl.COLOR_BUFFER_BIT); > > > After the cal to cavnas2.setContext(ctx1) what does ctx2 do? Does it > still draw to cavnas2 as well? Does it get > GL_INVALID_FRAMEBUFFER_OPERATION if a draw/clear function is called > and no FBO is bound? Does it become lost? What does ctx1.canvas > reference and what does ctx2.canvas reference? What does > ctx1.getContextAttributes() return? The attributes for canvas1 or > canvas2? > > I think arguably being able to setContext on a canvas is the right way > to solve drawing to multiple canvas with the same resources but there > are some unfortunate existing API choices. > > Thoughts? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kpr...@ Wed Oct 17 18:23:15 2012 From: kpr...@ (Kevin Reid) Date: Wed, 17 Oct 2012 18:23:15 -0700 Subject: [Public WebGL] using the same context with multiple canvases In-Reply-To: References: Message-ID: <5D1AB32E-8ADE-4E3A-AFF4-64C8F21A294A@switchb.org> On Oct 17, 2012, at 18:04, Gregg Tavares (??) wrote: > I'd like to pursue this idea further that Florian brought up which is that, for drawing to multiple canvases, rather than share resources across contexts it would be much nicer to just be able to use the same context with multiple canvases. Something like > > ctx1 = canvas1.getContext("webgl"); > ctx1.clearColor(0,1,0,1); > ctx1.clear(gl.COLOR_BUFFER_BIT); > canvas2.setContext(ctx1); > ctx1.clear(gl.COLOR_BUFFER_BIT); > > Clear's both canvas1 and canvas2 to green > > Issues: Under the current WebGL the only way to setup a canvas is by calling getContext(..., contextCreationParameters). 
Whether the canvas is RGB or RGBA, has depth or stencil, is premultiplied or not, has it's backbuffer preserved or not, etc.. How about getContext("webgl", { attach: ctx1, other params... }); ? This way there never needs to exist a second context, and the canvas DOM interface doesn't need to be extended. -- Kevin Reid ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From baj...@ Wed Oct 17 19:42:55 2012 From: baj...@ (Brandon Jones) Date: Wed, 17 Oct 2012 19:42:55 -0700 Subject: [Public WebGL] using the same context with multiple canvases In-Reply-To: <5D1AB32E-8ADE-4E3A-AFF4-64C8F21A294A@switchb.org> References: <5D1AB32E-8ADE-4E3A-AFF4-64C8F21A294A@switchb.org> Message-ID: This is just my opinion, but it feels more natural to flip the bindings so that the canvas is being set as a target for the context, like so: gl = canvas1.getContext("webgl"); gl.clearColor(0, 1, 0, 1); gl.clear(gl.COLOR_BUFFER_BIT); *gl.bindCanvas(canvas2);* gl.clear(gl.COLOR_BUFFER_BIT); By representing the canvas as a property of the context, I feel it better implies the fact that the context can only draw to a single canvas at a time. But like I said, just my opinion. I'm a big fan of this idea overall! It will certainly make life easier for developers that are trying to do anything resembling the traditional 4-view modeling tools, but there's some fun opportunities for other more creative uses too! --Brandon On Wed, Oct 17, 2012 at 6:23 PM, Kevin Reid wrote: > > On Oct 17, 2012, at 18:04, Gregg Tavares (??) wrote: > > > I'd like to pursue this idea further that Florian brought up which is > that, for drawing to multiple canvases, rather than share resources across > contexts it would be much nicer to just be able to use the same context > with multiple canvases. Something like > > > > ctx1 = canvas1.getContext("webgl"); > > ctx1.clearColor(0,1,0,1); > > ctx1.clear(gl.COLOR_BUFFER_BIT); > > canvas2.setContext(ctx1); > > ctx1.clear(gl.COLOR_BUFFER_BIT); > > > > Clear's both canvas1 and canvas2 to green > > > > Issues: Under the current WebGL the only way to setup a canvas is by > calling getContext(..., contextCreationParameters). Whether the canvas is > RGB or RGBA, has depth or stencil, is premultiplied or not, has it's > backbuffer preserved or not, etc.. > > How about getContext("webgl", { attach: ctx1, other params... }); ? This > way there never needs to exist a second context, and the canvas DOM > interface doesn't need to be extended. > > -- > Kevin Reid > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... 
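For the four-view case, a hypothetical bindCanvas-style binding would reduce each frame to a single loop over the views. In this sketch bindCanvas is the proposed call (it does not exist anywhere yet) and the canvases, cameras and drawScene are placeholders for application code:

// One context, several canvases, one render loop.
var views = [
  { canvas: topCanvas,   camera: topCamera   },
  { canvas: frontCanvas, camera: frontCamera },
  { canvas: sideCanvas,  camera: sideCamera  },
  { canvas: perspCanvas, camera: perspCamera }
];

function render() {
  views.forEach(function(view) {
    gl.bindCanvas(view.canvas);              // proposed API, not yet real
    gl.viewport(0, 0, view.canvas.width, view.canvas.height);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    drawScene(gl, view.camera);              // application-defined
  });
  window.requestAnimationFrame(render);
}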
URL: From pya...@ Thu Oct 18 01:52:00 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Thu, 18 Oct 2012 10:52:00 +0200 Subject: [Public WebGL] using the same context with multiple canvases In-Reply-To: <5D1AB32E-8ADE-4E3A-AFF4-64C8F21A294A@switchb.org> References: <5D1AB32E-8ADE-4E3A-AFF4-64C8F21A294A@switchb.org> Message-ID: On Thu, Oct 18, 2012 at 3:23 AM, Kevin Reid wrote: > > On Oct 17, 2012, at 18:04, Gregg Tavares (??) wrote: > > > I'd like to pursue this idea further that Florian brought up which is > that, for drawing to multiple canvases, rather than share resources across > contexts it would be much nicer to just be able to use the same context > with multiple canvases. Something like > > > > ctx1 = canvas1.getContext("webgl"); > > ctx1.clearColor(0,1,0,1); > > ctx1.clear(gl.COLOR_BUFFER_BIT); > > canvas2.setContext(ctx1); > > ctx1.clear(gl.COLOR_BUFFER_BIT); > > > > Clear's both canvas1 and canvas2 to green > > > > Issues: Under the current WebGL the only way to setup a canvas is by > calling getContext(..., contextCreationParameters). Whether the canvas is > RGB or RGBA, has depth or stencil, is premultiplied or not, has it's > backbuffer preserved or not, etc.. > > How about getContext("webgl", { attach: ctx1, other params... }); ? This > way there never needs to exist a second context, and the canvas DOM > interface doesn't need to be extended. > It'd still need setContext or similar to determine which context you're currently drawing to though. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Thu Oct 18 01:56:29 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Thu, 18 Oct 2012 10:56:29 +0200 Subject: [Public WebGL] using the same context with multiple canvases In-Reply-To: References: <5D1AB32E-8ADE-4E3A-AFF4-64C8F21A294A@switchb.org> Message-ID: On Thu, Oct 18, 2012 at 4:42 AM, Brandon Jones wrote: > but there's some fun opportunities for other more creative uses too! > I'm thinking of various kinds of GPU driven content creation tools that would benefit as well as game GUIs that have a main view and several views (like unit selection, radar, map, etc.) embedded in a HTML overlay. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cma...@ Thu Oct 18 09:40:54 2012 From: cma...@ (Chris Marrin) Date: Thu, 18 Oct 2012 09:40:54 -0700 Subject: [Public WebGL] using the same context with multiple canvases In-Reply-To: References: Message-ID: <9EA1A768-B87C-43C1-A1C2-4F95F544F3AE@apple.com> On Oct 17, 2012, at 6:04 PM, Gregg Tavares (??) wrote: > I'd like to pursue this idea further that Florian brought up which is that, for drawing to multiple canvases, rather than share resources across contexts it would be much nicer to just be able to use the same context with multiple canvases. Something like > > ctx1 = canvas1.getContext("webgl"); > ctx1.clearColor(0,1,0,1); > ctx1.clear(gl.COLOR_BUFFER_BIT); > canvas2.setContext(ctx1); > ctx1.clear(gl.COLOR_BUFFER_BIT); > > Clear's both canvas1 and canvas2 to green > > Issues: Under the current WebGL the only way to setup a canvas is by calling getContext(..., contextCreationParameters). Whether the canvas is RGB or RGBA, has depth or stencil, is premultiplied or not, has it's backbuffer preserved or not, etc.. > > Which leads to some questions. > > Is the current API okay. You'd create 2 contexts still but just use one of them as in > > ctx1 = canvas1.getContext("webgl". { alpha: true } ); > ctx2 = canvas1.getContext("webgl". 
{ alpha: false } ); > ctx1.clearColor(0,1,0,1); > ctx1.clear(gl.COLOR_BUFFER_BIT); > canvas2.setContext(ctx1); > ctx1.clear(gl.COLOR_BUFFER_BIT); > > After the cal to cavnas2.setContext(ctx1) what does ctx2 do? Does it still draw to cavnas2 as well? Does it get GL_INVALID_FRAMEBUFFER_OPERATION if a draw/clear function is called and no FBO is bound? Does it become lost? What does ctx1.canvas reference and what does ctx2.canvas reference? What does ctx1.getContextAttributes() return? The attributes for canvas1 or canvas2? > > I think arguably being able to setContext on a canvas is the right way to solve drawing to multiple canvas with the same resources but there are some unfortunate existing API choices. I like the idea of sharing a context across canvases, although I think there are issues with all the proposals that have been made in this thread. What we're really talking about here is making it possible to have multiple framebuffers in the context be attached to canvases. In the WebKit implementation, the framebuffer associated with the canvas is just another FBO which we send out to the page as needed. These FBO's do have a specially prepared RenderBuffers, but that's all done at setup time. For us the getContext call does two completely separate things. It creates an actual GL context, and then creates an FBO associated with the canvas for which the context was created. So separating those two functions would be easy. So maybe we keep getContext() as a composite operation (and for compatibility). But then we create two new methods on the WebGLRenderingContext: WebGLRenderingContext createContext(); WebGLCanvasFrameBuffer createCanvas(HTMLCanvasElement, WebGLContextAttributes); void bindCanvasFrameBuffer(WebGLCanvasFrameBuffer); WebGLCanvasFrameBuffer currentCanvasFrameBuffer(); So this: ctx = WebGLRenderingContext.createContext(); canvasFB = ctx.createCanvas(canvas, attrs); ctx.bindCanvasFrameBuffer(canvasFB); Would be the same as: ctx = canvas.getContext("webgl", attrs); canvasFB = ctx.currentCanvasFrameBuffer(); This essentially inverts the current API, which I like because it avoids any changes to the HTMLCanvasElement API. It also avoids adding parameters to WebGLContextAttributes, which would not be a very strongly typed approach. This would also allow us to create a WebGL context not associated with any canvas, allowing pure offscreen rendering. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben...@ Thu Oct 18 09:47:58 2012 From: ben...@ (Ben Vanik) Date: Thu, 18 Oct 2012 09:47:58 -0700 Subject: [Public WebGL] using the same context with multiple canvases In-Reply-To: <9EA1A768-B87C-43C1-A1C2-4F95F544F3AE@apple.com> References: <9EA1A768-B87C-43C1-A1C2-4F95F544F3AE@apple.com> Message-ID: I'm liking that approach. If you change the static createContext() to just being the WebGLRenderingContext constructor you'd match what Web Audio does ('new WebGLRenderingContext()' like 'new AudioContext()'), which is cool. It also matches what I'd believe the semantics of WebGL-in-a-worker would look like, where there's no to create a context from. On Thu, Oct 18, 2012 at 9:40 AM, Chris Marrin wrote: > > On Oct 17, 2012, at 6:04 PM, Gregg Tavares (??) wrote: > > I'd like to pursue this idea further that Florian brought up which is > that, for drawing to multiple canvases, rather than share resources across > contexts it would be much nicer to just be able to use the same context > with multiple canvases. 
Something like > > ctx1 = canvas1.getContext("webgl"); > ctx1.clearColor(0,1,0,1); > ctx1.clear(gl.COLOR_BUFFER_BIT); > canvas2.setContext(ctx1); > ctx1.clear(gl.COLOR_BUFFER_BIT); > > > Clear's both canvas1 and canvas2 to green > > Issues: Under the current WebGL the only way to setup a canvas is by > calling getContext(..., contextCreationParameters). Whether the canvas is > RGB or RGBA, has depth or stencil, is premultiplied or not, has it's > backbuffer preserved or not, etc.. > > Which leads to some questions. > > Is the current API okay. You'd create 2 contexts still but just use one of > them as in > > ctx1 = canvas1.getContext("webgl". { alpha: true } ); > ctx2 = canvas1.getContext("webgl". { alpha: false } ); > ctx1.clearColor(0,1,0,1); > ctx1.clear(gl.COLOR_BUFFER_BIT); > canvas2.setContext(ctx1); > ctx1.clear(gl.COLOR_BUFFER_BIT); > > > After the cal to cavnas2.setContext(ctx1) what does ctx2 do? Does it still > draw to cavnas2 as well? Does it get GL_INVALID_FRAMEBUFFER_OPERATION if a > draw/clear function is called and no FBO is bound? Does it become lost? > What does ctx1.canvas reference and what does ctx2.canvas reference? What > does ctx1.getContextAttributes() return? The attributes for canvas1 or > canvas2? > > I think arguably being able to setContext on a canvas is the right way to > solve drawing to multiple canvas with the same resources but there are some > unfortunate existing API choices. > > > I like the idea of sharing a context across canvases, although I think > there are issues with all the proposals that have been made in this thread. > > What we're really talking about here is making it possible to have > multiple framebuffers in the context be attached to canvases. In the WebKit > implementation, the framebuffer associated with the canvas is just another > FBO which we send out to the page as needed. These FBO's do have a > specially prepared RenderBuffers, but that's all done at setup time. > > For us the getContext call does two completely separate things. It creates > an actual GL context, and then creates an FBO associated with the canvas > for which the context was created. So separating those two functions would > be easy. > > So maybe we keep getContext() as a composite operation (and for > compatibility). But then we create two new methods on the > WebGLRenderingContext: > > WebGLRenderingContext createContext(); > WebGLCanvasFrameBuffer > createCanvas(HTMLCanvasElement, WebGLContextAttributes); > void bindCanvasFrameBuffer(WebGLCanvasFrameBuffer); > WebGLCanvasFrameBuffer currentCanvasFrameBuffer(); > > So this: > > ctx = WebGLRenderingContext.createContext(); > canvasFB = ctx.createCanvas(canvas, attrs); > ctx.bindCanvasFrameBuffer(canvasFB); > > Would be the same as: > > ctx = canvas.getContext("webgl", attrs); > canvasFB = ctx.currentCanvasFrameBuffer(); > > This essentially inverts the current API, which I like because it avoids > any changes to the HTMLCanvasElement API. It also avoids adding parameters > to WebGLContextAttributes, which would not be a very strongly typed > approach. > > This would also allow us to create a WebGL context not associated with any > canvas, allowing pure offscreen rendering. > > -------------- next part -------------- An HTML attachment was scrubbed... 
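Putting the createCanvas/bindCanvasFrameBuffer proposal together with the constructor idea, usage might look roughly like the following. Every call here is a proposal from this thread rather than shipping API, so this is only a sketch of the shape being discussed:

// Hypothetical sketch: new WebGLRenderingContext(), createCanvas() and
// bindCanvasFrameBuffer() are the proposed names, not existing API.
var gl = new WebGLRenderingContext();        // context with no canvas yet
var attrs = { alpha: false, antialias: true };
var fbA = gl.createCanvas(canvasA, attrs);   // per-canvas drawing buffer
var fbB = gl.createCanvas(canvasB, attrs);

gl.bindCanvasFrameBuffer(fbA);               // draw into canvasA
gl.clearColor(0, 1, 0, 1);
gl.clear(gl.COLOR_BUFFER_BIT);

gl.bindCanvasFrameBuffer(fbB);               // same context, same resources
gl.clear(gl.COLOR_BUFFER_BIT);

// With no canvas framebuffer bound, rendering could target an ordinary FBO
// for pure offscreen work.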
URL: From pya...@ Thu Oct 18 10:31:11 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Thu, 18 Oct 2012 19:31:11 +0200 Subject: [Public WebGL] using the same context with multiple canvases In-Reply-To: <9EA1A768-B87C-43C1-A1C2-4F95F544F3AE@apple.com> References: <9EA1A768-B87C-43C1-A1C2-4F95F544F3AE@apple.com> Message-ID: On Thu, Oct 18, 2012 at 6:40 PM, Chris Marrin wrote: > ctx = WebGLRenderingContext.createContext(); > canvasFB = ctx.createCanvas(canvas, attrs); > ctx.bindCanvasFrameBuffer(canvasFB); > > Would be the same as: > > ctx = canvas.getContext("webgl", attrs); > canvasFB = ctx.currentCanvasFrameBuffer(); > I like this idea, but I find the naming/semantic a bit confusing. Here's some suggestions: Framebuffer is confusing since it's also used by one context to imply something different. I'd substitute by "FrontBuffer". Creating a context without a frontbuffer: var ctx = new WebGLRenderingContext(); Creating a frontbuffer: var frontbuffer = ctx.createFrontBuffer(canvas); Setting the frontbuffer to be rendered to: ctx.frontbuffer = frontbuffer Getting the frontbuffer currently used: ctx.frontbuffer Getting the canvas a frontbuffer is attached to: frontbuffer.canvas Getting the frontbuffer a canvas is attached to: canvas.frontbuffer -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Thu Oct 18 10:34:00 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo56S+55SoKQ==?=) Date: Thu, 18 Oct 2012 10:34:00 -0700 Subject: [Public WebGL] using the same context with multiple canvases In-Reply-To: References: <9EA1A768-B87C-43C1-A1C2-4F95F544F3AE@apple.com> Message-ID: If we're discussing naming maybe it should be called DrawingBuffer since we already have 'preserveDrawingBuffer: true/false as one of the creation parameers? On Thu, Oct 18, 2012 at 10:31 AM, Florian B?sch wrote: > On Thu, Oct 18, 2012 at 6:40 PM, Chris Marrin wrote: > >> ctx = WebGLRenderingContext.createContext(); >> canvasFB = ctx.createCanvas(canvas, attrs); >> ctx.bindCanvasFrameBuffer(canvasFB); >> >> Would be the same as: >> >> ctx = canvas.getContext("webgl", attrs); >> canvasFB = ctx.currentCanvasFrameBuffer(); >> > > I like this idea, but I find the naming/semantic a bit confusing. Here's > some suggestions: > > Framebuffer is confusing since it's also used by one context to imply > something different. I'd substitute by "FrontBuffer". > > Creating a context without a frontbuffer: > var ctx = new WebGLRenderingContext(); > > Creating a frontbuffer: > var frontbuffer = ctx.createFrontBuffer(canvas); > > Setting the frontbuffer to be rendered to: > ctx.frontbuffer = frontbuffer > > Getting the frontbuffer currently used: > ctx.frontbuffer > > Getting the canvas a frontbuffer is attached to: > frontbuffer.canvas > > Getting the frontbuffer a canvas is attached to: > canvas.frontbuffer > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Thu Oct 18 10:59:05 2012 From: kbr...@ (Kenneth Russell) Date: Thu, 18 Oct 2012 10:59:05 -0700 Subject: [Public WebGL] Moving OES_vertex_array_object and OES_element_index_uint out of draft In-Reply-To: <4F2D7B91-9855-4076-941A-2AAABB4B58E0@apple.com> References: <4F2D7B91-9855-4076-941A-2AAABB4B58E0@apple.com> Message-ID: Great. Since there haven't been any objections, this change has been made. See http://www.khronos.org/registry/webgl/extensions/ . -Ken On Tue, Oct 16, 2012 at 3:07 PM, Dean Jackson wrote: > > Fine with Apple. 
> > Dean > > On 17/10/2012, at 6:44 AM, Brandon Jones wrote: > >> It seems like the OES_vertex_array_object and OES_element_index_uint extensions are both in a good position to be moved out of draft status and given community approval. They both have conformance tests available now and neither have unresolved issues. Are there any objections to giving either extension official status? >> >> --Brandon > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Thu Oct 18 11:53:32 2012 From: kbr...@ (Kenneth Russell) Date: Thu, 18 Oct 2012 11:53:32 -0700 Subject: [Public WebGL] Bug in oes-texture-float conformance test Message-ID: Mark Callow has pointed out yet another problem in WebGL's exposure of the OES_texture_float extension: http://www.khronos.org/registry/webgl/extensions/OES_texture_float/ The WebGL version of this extension supports optional rendering to floating-point (FP) textures, even though this is strictly speaking not allowed by OpenGL ES 2.0. Further, to date, the WebGL conformance test for this extension has required that FP render targets be supported, as a quality-of-implementation issue. The expectation has basically been that this extension would only be available on desktop platforms until next-generation (OpenGL ES 3.0) hardware arrives. Now there are some mobile vendors wishing to expose the OES_texture_float extension in their WebGL implementations. They support the underlying GL_OES_texture_float extension but not FP render targets. See http://www.khronos.org/bugzilla/show_bug.cgi?id=729 , which points out quite rightly that the conformance test is testing something the spec doesn't say. Should we: 1) Change the conformance test to make FP render target support optional 2) Change the spec to require FP render target support (1) might break some WebGL demos because they might be incorrectly assuming that the OES_texture_float extension implies FP render target support -- because the conformance test has enforced this to date. (2) would prevent the majority of OpenGL ES 2.0 implementations from exposing OES_texture_float support. I've been told that the extension is useful even without FP render target support, though my opinion is that it's much more useful when FP render targets are supported. Thoughts please? Would like to resolve this issue soon. 
Thanks, -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From bja...@ Thu Oct 18 12:03:00 2012 From: bja...@ (Benoit Jacob) Date: Thu, 18 Oct 2012 15:03:00 -0400 Subject: [Public WebGL] Bug in oes-texture-float conformance test In-Reply-To: References: Message-ID: <50805264.4060805@mozilla.com> I'm slightly leaning in favor of 1) because it is much easier to relax a conformance test than it is to make a spec change that can effectively break real WebGL content on certain hardware. I don't have any data on how useful float textures are without the ability to render to them. Benoit On 12-10-18 02:53 PM, Kenneth Russell wrote: > Mark Callow has pointed out yet another problem in WebGL's exposure of > the OES_texture_float extension: > http://www.khronos.org/registry/webgl/extensions/OES_texture_float/ > > The WebGL version of this extension supports optional rendering to > floating-point (FP) textures, even though this is strictly speaking > not allowed by OpenGL ES 2.0. > > Further, to date, the WebGL conformance test for this extension has > required that FP render targets be supported, as a > quality-of-implementation issue. The expectation has basically been > that this extension would only be available on desktop platforms until > next-generation (OpenGL ES 3.0) hardware arrives. > > Now there are some mobile vendors wishing to expose the > OES_texture_float extension in their WebGL implementations. They > support the underlying GL_OES_texture_float extension but not FP > render targets. See > http://www.khronos.org/bugzilla/show_bug.cgi?id=729 , which points out > quite rightly that the conformance test is testing something the spec > doesn't say. > > Should we: > > 1) Change the conformance test to make FP render target support optional > 2) Change the spec to require FP render target support > > (1) might break some WebGL demos because they might be incorrectly > assuming that the OES_texture_float extension implies FP render target > support -- because the conformance test has enforced this to date. > (2) would prevent the majority of OpenGL ES 2.0 implementations from > exposing OES_texture_float support. I've been told that the extension > is useful even without FP render target support, though my opinion is > that it's much more useful when FP render targets are supported. > > Thoughts please? Would like to resolve this issue soon. 
> > Thanks, > > -Ken > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Thu Oct 18 12:19:01 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Thu, 18 Oct 2012 21:19:01 +0200 Subject: [Public WebGL] Bug in oes-texture-float conformance test In-Reply-To: <50805264.4060805@mozilla.com> References: <50805264.4060805@mozilla.com> Message-ID: I remember that the discussion about the validity of rendertargets has come up on this list and the concensus has been that unless the FBO you attached any combination of rendertargets to has passed the validity test you won't know if those are valid rendertargets on your current platform. So aparts from some very basic rendertargets, a lot is optional. On Thu, Oct 18, 2012 at 9:03 PM, Benoit Jacob wrote: > > I'm slightly leaning in favor of 1) because it is much easier to relax a > conformance test than it is to make a spec change that can effectively > break real WebGL content on certain hardware. > > I don't have any data on how useful float textures are without the > ability to render to them. > > Benoit > > On 12-10-18 02:53 PM, Kenneth Russell wrote: > > Mark Callow has pointed out yet another problem in WebGL's exposure of > > the OES_texture_float extension: > > http://www.khronos.org/registry/webgl/extensions/OES_texture_float/ > > > > The WebGL version of this extension supports optional rendering to > > floating-point (FP) textures, even though this is strictly speaking > > not allowed by OpenGL ES 2.0. > > > > Further, to date, the WebGL conformance test for this extension has > > required that FP render targets be supported, as a > > quality-of-implementation issue. The expectation has basically been > > that this extension would only be available on desktop platforms until > > next-generation (OpenGL ES 3.0) hardware arrives. > > > > Now there are some mobile vendors wishing to expose the > > OES_texture_float extension in their WebGL implementations. They > > support the underlying GL_OES_texture_float extension but not FP > > render targets. See > > http://www.khronos.org/bugzilla/show_bug.cgi?id=729 , which points out > > quite rightly that the conformance test is testing something the spec > > doesn't say. > > > > Should we: > > > > 1) Change the conformance test to make FP render target support optional > > 2) Change the spec to require FP render target support > > > > (1) might break some WebGL demos because they might be incorrectly > > assuming that the OES_texture_float extension implies FP render target > > support -- because the conformance test has enforced this to date. > > (2) would prevent the majority of OpenGL ES 2.0 implementations from > > exposing OES_texture_float support. I've been told that the extension > > is useful even without FP render target support, though my opinion is > > that it's much more useful when FP render targets are supported. > > > > Thoughts please? Would like to resolve this issue soon. 
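Florian's point about framebuffer completeness also shows how content can cope with either outcome: whatever the extension ends up promising, a page can probe for float render target support directly. A minimal sketch using only standard WebGL 1.0 calls:

// Probe whether a floating-point texture can be used as a render target.
// Returns false if OES_texture_float is not exposed at all.
function canRenderToFloatTexture(gl) {
  if (!gl.getExtension("OES_texture_float")) return false;
  var tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 4, 4, 0, gl.RGBA, gl.FLOAT, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  var fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, tex, 0);
  var ok = gl.checkFramebufferStatus(gl.FRAMEBUFFER) === gl.FRAMEBUFFER_COMPLETE;
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.deleteFramebuffer(fbo);
  gl.deleteTexture(tex);
  return ok;
}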
> > > > Thanks, > > > > -Ken > > > > ----------------------------------------------------------- > > You are currently subscribed to public_webgl...@ > > To unsubscribe, send an email to majordomo...@ with > > the following command in the body of your email: > > unsubscribe public_webgl > > ----------------------------------------------------------- > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cma...@ Thu Oct 18 12:54:12 2012 From: cma...@ (Chris Marrin) Date: Thu, 18 Oct 2012 12:54:12 -0700 Subject: [Public WebGL] using the same context with multiple canvases In-Reply-To: References: <9EA1A768-B87C-43C1-A1C2-4F95F544F3AE@apple.com> Message-ID: <0E85F349-A1DC-4040-9F6B-C687BE2A0ECE@apple.com> On Oct 18, 2012, at 10:34 AM, Gregg Tavares (??) wrote: > If we're discussing naming maybe it should be called DrawingBuffer since we already have 'preserveDrawingBuffer: true/false as one of the creation parameers? I like drawing buffer, and I like the idea of creating the context with a ctor. So maybe: [Constructor] interface WebGLRenderingContext { WebGLDrawingBuffer createDrawingBuffer(HTMLCanvasElement, WebGLContextAttributes); void bindDrawingBuffer(WebGLDrawingBuffer); WebGLDrawingBuffer currentDrawingBuffer(); } I don't particularly like the idea of using attributes to set and get the drawingBuffer because we haven't done that for any other parts of the API, but I could live with it. > > > On Thu, Oct 18, 2012 at 10:31 AM, Florian B?sch wrote: > On Thu, Oct 18, 2012 at 6:40 PM, Chris Marrin wrote: > ctx = WebGLRenderingContext.createContext(); > canvasFB = ctx.createCanvas(canvas, attrs); > ctx.bindCanvasFrameBuffer(canvasFB); > > Would be the same as: > > ctx = canvas.getContext("webgl", attrs); > canvasFB = ctx.currentCanvasFrameBuffer(); > > I like this idea, but I find the naming/semantic a bit confusing. Here's some suggestions: > > Framebuffer is confusing since it's also used by one context to imply something different. I'd substitute by "FrontBuffer". > > Creating a context without a frontbuffer: > var ctx = new WebGLRenderingContext(); > > Creating a frontbuffer: > var frontbuffer = ctx.createFrontBuffer(canvas); > > Setting the frontbuffer to be rendered to: > ctx.frontbuffer = frontbuffer > > Getting the frontbuffer currently used: > ctx.frontbuffer > > Getting the canvas a frontbuffer is attached to: > frontbuffer.canvas > > Getting the frontbuffer a canvas is attached to: > canvas.frontbuffer > > ----- ~Chris Marrin cmarrin...@ -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Thu Oct 18 13:18:45 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Thu, 18 Oct 2012 22:18:45 +0200 Subject: [Public WebGL] using the same context with multiple canvases In-Reply-To: <0E85F349-A1DC-4040-9F6B-C687BE2A0ECE@apple.com> References: <9EA1A768-B87C-43C1-A1C2-4F95F544F3AE@apple.com> <0E85F349-A1DC-4040-9F6B-C687BE2A0ECE@apple.com> Message-ID: On Thu, Oct 18, 2012 at 9:54 PM, Chris Marrin wrote: > I don't particularly like the idea of using attributes to set and get the > drawingBuffer because we haven't done that for any other parts of the API, > but I could live with it. 
> We'll need to reconcile ctx.canvas somehow. I think there's two questions: 1) What happens to ctx.canvas when setting a different drawing buffer? 2) Are there possibly some parameters involved in setting a drawing buffer for the context (that'd rule out getters/setters anyway so we wouldn't have to think about it)? In case of getters/setters being contentious, I'd suggest gl.getDrawingBuffer() and gl.setDrawingBuffer() -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Thu Oct 18 14:07:23 2012 From: kbr...@ (Kenneth Russell) Date: Thu, 18 Oct 2012 14:07:23 -0700 Subject: [Public WebGL] Running WebGL conformance tests from the command line In-Reply-To: <506612CE.5040106@mozilla.com> References: <262397197.5012205.1348865765907.JavaMail.root@mozilla.com> <506612CE.5040106@mozilla.com> Message-ID: Several changes have been made to the conformance suite runner and it is now ready for consumption by GPU vendors. Some of the key points: - It now validates the results of running the conformance tests; if any tests fail or time out, this is caught and reported. - It returns a zero exit code if all tests pass, and nonzero if any tests fail. - It generates an appropriately populated Firefox user.prefs file. - Configurations have been added to test both the default backend (e.g. ANGLE on Windows) and forced use of OpenGL on Windows, for both Firefox and Chrome. - It's been tested with both Firefox and Chrome on Windows, Mac and Linux. See https://github.com/KhronosGroup/WebGL/blob/master/other/test-runner/README.md for some documentation. We'll contact various GPU vendors individually to integrate it into their test suites. Initially we'll push for running the 1.0.1 conformance suite, but will aim to expand to the top of tree suite soon (which should now, thankfully, be a trivial upgrade). -Ken On Fri, Sep 28, 2012 at 2:12 PM, Benoit Jacob wrote: > > It shipped in 14. > https://bugzilla.mozilla.org/show_bug.cgi?id=686735 > Benoit > > On 12-09-28 04:56 PM, Jeff Gilbert wrote: >> Disabling workarounds is ("gfx.work-around-driver-bugs", false), though I think this is still Nightly-only, at the moment. >> >> -Jeff >> >> ----- Original Message ----- >> From: "Boris Zbarsky" >> To: "Mark Callow" >> Cc: "Kenneth Russell" , "public webgl" >> Sent: Thursday, September 27, 2012 9:12:47 PM >> Subject: Re: [Public WebGL] Running WebGL conformance tests from the command line >> >> >> On 9/27/12 11:47 PM, Mark Callow wrote: >>> echo 'user_pref("webgl.force-enabled", true);' >> $prefs # to turn off >>> the blacklist >>> echo 'user_pref("webgl.prefer-native-gl", true);' >> $prefs # to use >>> OpenGL instead of ANGLE >>> >>> I don't know how you express the value shown in about:config as "true" >>> in the above format. I'm guessing. >> Your guess is correct. So yeah, that's what you want. 
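Assembled from the prefs quoted in this thread, the generated prefs file would contain lines like these (standard user_pref() syntax; gfx.work-around-driver-bugs is the pref Jeff mentions, which Benoit notes shipped in Firefox 14):

// user.js fragment for command-line conformance runs
user_pref("webgl.force-enabled", true);          // turn off the blacklist
user_pref("webgl.prefer-native-gl", true);       // use OpenGL instead of ANGLE
user_pref("gfx.work-around-driver-bugs", false); // disable driver workarounds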
;) >> >> -Boris >> >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Fri Oct 19 15:09:45 2012 From: kbr...@ (Kenneth Russell) Date: Fri, 19 Oct 2012 15:09:45 -0700 Subject: [Public WebGL] texImage2D on unfinished images and video In-Reply-To: References: Message-ID: On Tue, Oct 16, 2012 at 4:46 PM, Gregg Tavares (??) wrote: > > > > On Tue, Oct 16, 2012 at 4:28 PM, Kenneth Russell wrote: >> >> On Tue, Oct 16, 2012 at 3:54 PM, Gregg Tavares (??) >> wrote: >> > I don't think the spec makes this clear what happens when you try to >> > call >> > texImage2D or texSubImage2D on an image or video that is not yet loaded. >> >> Right, the behavior in this case is not defined. I'm pretty sure this >> was discussed a long time ago in the working group, but it seemed >> difficult to write a test for any defined behavior, so by consensus >> implementations generate INVALID_OPERATION when attempting to upload >> incomplete images or video to WebGL textures. > > > I don't see a problem writing a test. Basically > > var playing = false > video.src = url > video.play(); > video.addEventListener('playing', function() { playing = true;}); > > waitForVideo() { > gl.bindTexture(...) > gl.texImage2D(..., video); > if (playing) { > doTestsOnVideoContent(); > if (gl.getError() != gl.NO_ERROR) { testFailed("there should be no > errors"); } > } else { > requestAnimationFrame(waitForVideo); > } > > This basically says an implementation is never allowed to generate an error. > it might not be a perfect test but neither is the current one. In fact this > should test it just fine > > video = document.createElement("video"); > gl.texImage2D(..., video); > glErrorShouldBe(gl.NO_ERROR); > video.src = "someValidURL": > gl.texImage2D(..., video); > glErrorShouldBe(gl.NO_ERROR); > video.src = "someOtherValidURL": > gl.texImage2D(..., video); > glErrorShouldBe(gl.NO_ERROR); > > Then do the other 'playing' event tests. > >> >> >> > Example >> > >> > video = document.createElement("video"); >> > video.src = "http://mysite.com/myvideo"; >> > video.play(); >> > >> > function render() { >> > gl.bindTexture(...) 
>> > gl.texImage2D(..., video); >> > gl.drawArrays(...); >> > window.requestAnimationFrame(render); >> > } >> > >> > Chrome right now will synthesize a GL error if the system hasn't >> > actually >> > gotten the video to start (as in if it's still buffering). >> > >> > Off the top of my head, it seems like it would just be friendlier to >> > make a >> > black texture (0,0,0,1) or (0,0,0,0) 1x1 pixels for videos that aren't >> > loaded yet. >> > >> > Otherwise, having to check that the video is ready before calling >> > texImage2D >> > seems kind of burdensome on the developer. If they want to check they >> > should >> > use video.addEventListener('playing') or similar. If they were making a >> > video player they'd have to add a bunch of logic when queuing the next >> > video. >> > >> > Same with images. >> > >> > img = document.createElement("img"); >> > img.src = "http://mysite.com/myimage"; >> > >> > function render() { >> > gl.bindTexture(...) >> > gl.texImage2D(..., img); >> > gl.drawArrays(...); >> > window.requestAnimationFrame(render); >> > } >> > >> > If you want to know if the image has loaded use img.onload but otherwise >> > don't fail the call? >> > >> > What do you think? Good idea? Bad idea? >> >> My initial impression is that this change is not a good idea. It would >> expose the specification, implementations and applications to a lot of >> corner case behaviors. For example, if a video's width and height >> hasn't been received yet, then texImage2D(..., video) would have to >> allocate a 1x1 texture; but if the width and height are known, then it >> would have to allocate the right amount of storage. A naive app might >> call texImage2D only the first time and texSubImage2D subsequently, so >> if the first texImage2D call was made before the metadata was >> downloaded, it would never render correctly. I think the current >> fail-fast behavior is best, and already has the result that it renders >> a black texture if the upload fails; the call will (implicitly) >> generate INVALID_OPERATION, and the texture will be incomplete. > > > I don't see how that's worse than the current situation which is you call > texImage2D and pray it works. You have no idea if it's going to work or not. > If you do this does it work? > > okToUseVideo = false; > video = document.createElement("video"); > video.src = "movie#1" > video.addEventListener('playing', function() { okToUseVideo = true; } > > frameCount = 0; > > function render() { > if (okToUseVideo) { > gl.texImage2D(... , video); > > ++frameCount; > if (frameCount > 1000) { > video.src = "movie2"; > } > } > } > > > Basically after some amount of time I switch video.src to a new movie. Is > the video off limits now? The assumption should be "yes". When the source of the media element is set, it is immediately invalid to use until the onload / playing handler is called. > Will it use the old frame from the old movie until > the new movie is buffered or will it give me INVALID_OPERATION? I have no > idea and it's not specified. Same with img. > > img.src = "image#1" > img.onload = function() { > img.src = "image#2"; > gl.texImage2D(...., img); // what's this? old image, no image, > INVALID_OPERATION? > } Yes, this should be assumed to produce INVALID_OPERATION. > The texSubImage2D issue is not helped by the current spec. If video.src = > useChoosenURL then you have no idea what the width and height are until the > 'playing' event (or whatever event) which is no different than if we changed > it. 
> > Changing it IMO means far less broken websites and I can't see any > disadvantages. Sure you can get a 1x1 pixel texture to start and call > texSubImage2D now but you can do that already. With the current fail-fast behavior, where INVALID_OPERATION will be generated, the texture will be in one of two states: (1) its previous state, because the texImage2D call failed; or (2) having the width and height of the incoming media element. With the proposed behavior to never generate an error, the texture will be either 1x1 or width x height. Another problem with this proposal is that there is no way with the ES 2.0 or WebGL API to query the size of a level of a texture, because GetTexLevelParameter was removed from the OpenGL ES API (and, unfortunately, not reintroduced in ES 3.0). Therefore the behavior is completely silent -- there is no way for the application developer to find out what happened (no error reported, and still renders like an incomplete texture would). I agree that the failing behavior should be specified and, more importantly, tests written verifying it. If the INVALID_OPERATION error were spec'ed, would that address your primary concern? I'm not convinced that silently making the texture 1x1 is a good path to take. -Ken >> >> >> If we want to spec this more tightly then we'll need to do more work >> in the conformance suite to forcibly stall HTTP downloads of the video >> resources in the test suite at well known points. I'm vaguely aware >> that WebKit's HTTP tests do this with a custom server. Requiring a >> custom server in order to run the WebGL conformance suite at all would >> have pretty significant disadvantages. >> >> -Ken > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gma...@ Sun Oct 21 23:52:26 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo56S+55SoKQ==?=) Date: Sun, 21 Oct 2012 23:52:26 -0700 Subject: [Public WebGL] texImage2D on unfinished images and video In-Reply-To: References: Message-ID: My concern is if you have a video player library that plays a playlist of videos and then you decide later to add some WebGL to it, you have to go through all kinds of contortions just to avoid an INVALID_OPERATION. The framework you're using for the video playlists seems like it would be unlikely to have the needed functionality built in to let you know when top stop calling texImage2D and when to start again. If the error was removed and 1x1 was substituted that problem would go away I don't see what error adds except to make it more complicated. On Fri, Oct 19, 2012 at 3:09 PM, Kenneth Russell wrote: > On Tue, Oct 16, 2012 at 4:46 PM, Gregg Tavares (??) > wrote: > > > > > > > > On Tue, Oct 16, 2012 at 4:28 PM, Kenneth Russell wrote: > >> > >> On Tue, Oct 16, 2012 at 3:54 PM, Gregg Tavares (??) > >> wrote: > >> > I don't think the spec makes this clear what happens when you try to > >> > call > >> > texImage2D or texSubImage2D on an image or video that is not yet > loaded. > >> > >> Right, the behavior in this case is not defined. I'm pretty sure this > >> was discussed a long time ago in the working group, but it seemed > >> difficult to write a test for any defined behavior, so by consensus > >> implementations generate INVALID_OPERATION when attempting to upload > >> incomplete images or video to WebGL textures. 
> > > > > > I don't see a problem writing a test. Basically > > > > var playing = false > > video.src = url > > video.play(); > > video.addEventListener('playing', function() { playing = true;}); > > > > waitForVideo() { > > gl.bindTexture(...) > > gl.texImage2D(..., video); > > if (playing) { > > doTestsOnVideoContent(); > > if (gl.getError() != gl.NO_ERROR) { testFailed("there should be no > > errors"); } > > } else { > > requestAnimationFrame(waitForVideo); > > } > > > > This basically says an implementation is never allowed to generate an > error. > > it might not be a perfect test but neither is the current one. In fact > this > > should test it just fine > > > > video = document.createElement("video"); > > gl.texImage2D(..., video); > > glErrorShouldBe(gl.NO_ERROR); > > video.src = "someValidURL": > > gl.texImage2D(..., video); > > glErrorShouldBe(gl.NO_ERROR); > > video.src = "someOtherValidURL": > > gl.texImage2D(..., video); > > glErrorShouldBe(gl.NO_ERROR); > > > > Then do the other 'playing' event tests. > > > >> > >> > >> > Example > >> > > >> > video = document.createElement("video"); > >> > video.src = "http://mysite.com/myvideo"; > >> > video.play(); > >> > > >> > function render() { > >> > gl.bindTexture(...) > >> > gl.texImage2D(..., video); > >> > gl.drawArrays(...); > >> > window.requestAnimationFrame(render); > >> > } > >> > > >> > Chrome right now will synthesize a GL error if the system hasn't > >> > actually > >> > gotten the video to start (as in if it's still buffering). > >> > > >> > Off the top of my head, it seems like it would just be friendlier to > >> > make a > >> > black texture (0,0,0,1) or (0,0,0,0) 1x1 pixels for videos that aren't > >> > loaded yet. > >> > > >> > Otherwise, having to check that the video is ready before calling > >> > texImage2D > >> > seems kind of burdensome on the developer. If they want to check they > >> > should > >> > use video.addEventListener('playing') or similar. If they were making > a > >> > video player they'd have to add a bunch of logic when queuing the next > >> > video. > >> > > >> > Same with images. > >> > > >> > img = document.createElement("img"); > >> > img.src = "http://mysite.com/myimage"; > >> > > >> > function render() { > >> > gl.bindTexture(...) > >> > gl.texImage2D(..., img); > >> > gl.drawArrays(...); > >> > window.requestAnimationFrame(render); > >> > } > >> > > >> > If you want to know if the image has loaded use img.onload but > otherwise > >> > don't fail the call? > >> > > >> > What do you think? Good idea? Bad idea? > >> > >> My initial impression is that this change is not a good idea. It would > >> expose the specification, implementations and applications to a lot of > >> corner case behaviors. For example, if a video's width and height > >> hasn't been received yet, then texImage2D(..., video) would have to > >> allocate a 1x1 texture; but if the width and height are known, then it > >> would have to allocate the right amount of storage. A naive app might > >> call texImage2D only the first time and texSubImage2D subsequently, so > >> if the first texImage2D call was made before the metadata was > >> downloaded, it would never render correctly. I think the current > >> fail-fast behavior is best, and already has the result that it renders > >> a black texture if the upload fails; the call will (implicitly) > >> generate INVALID_OPERATION, and the texture will be incomplete. > > > > > > I don't see how that's worse than the current situation which is you call > > texImage2D and pray it works. 
You have no idea if it's going to work or > not. > > If you do this does it work? > > > > okToUseVideo = false; > > video = document.createElement("video"); > > video.src = "movie#1" > > video.addEventListener('playing', function() { okToUseVideo = true; } > > > > frameCount = 0; > > > > function render() { > > if (okToUseVideo) { > > gl.texImage2D(... , video); > > > > ++frameCount; > > if (frameCount > 1000) { > > video.src = "movie2"; > > } > > } > > } > > > > > > Basically after some amount of time I switch video.src to a new movie. Is > > the video off limits now? > > The assumption should be "yes". When the source of the media element > is set, it is immediately invalid to use until the onload / playing > handler is called. > > > Will it use the old frame from the old movie until > > the new movie is buffered or will it give me INVALID_OPERATION? I have no > > idea and it's not specified. Same with img. > > > > img.src = "image#1" > > img.onload = function() { > > img.src = "image#2"; > > gl.texImage2D(...., img); // what's this? old image, no image, > > INVALID_OPERATION? > > } > > Yes, this should be assumed to produce INVALID_OPERATION. > > > > The texSubImage2D issue is not helped by the current spec. If video.src = > > useChoosenURL then you have no idea what the width and height are until > the > > 'playing' event (or whatever event) which is no different than if we > changed > > it. > > > > Changing it IMO means far less broken websites and I can't see any > > disadvantages. Sure you can get a 1x1 pixel texture to start and call > > texSubImage2D now but you can do that already. > > With the current fail-fast behavior, where INVALID_OPERATION will be > generated, the texture will be in one of two states: (1) its previous > state, because the texImage2D call failed; or (2) having the width and > height of the incoming media element. > > With the proposed behavior to never generate an error, the texture > will be either 1x1 or width x height. Another problem with this > proposal is that there is no way with the ES 2.0 or WebGL API to query > the size of a level of a texture, because GetTexLevelParameter was > removed from the OpenGL ES API (and, unfortunately, not reintroduced > in ES 3.0). Therefore the behavior is completely silent -- there is no > way for the application developer to find out what happened (no error > reported, and still renders like an incomplete texture would). > > I agree that the failing behavior should be specified and, more > importantly, tests written verifying it. If the INVALID_OPERATION > error were spec'ed, would that address your primary concern? I'm not > convinced that silently making the texture 1x1 is a good path to take. > > -Ken > > > >> > >> > >> If we want to spec this more tightly then we'll need to do more work > >> in the conformance suite to forcibly stall HTTP downloads of the video > >> resources in the test suite at well known points. I'm vaguely aware > >> that WebKit's HTTP tests do this with a custom server. Requiring a > >> custom server in order to run the WebGL conformance suite at all would > >> have pretty significant disadvantages. > >> > >> -Ken > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Mon Oct 22 10:32:18 2012 From: kbr...@ (Kenneth Russell) Date: Mon, 22 Oct 2012 10:32:18 -0700 Subject: [Public WebGL] texImage2D on unfinished images and video In-Reply-To: References: Message-ID: On Sun, Oct 21, 2012 at 11:52 PM, Gregg Tavares (??) 
wrote: > My concern is if you have a video player library that plays a playlist of > videos and then you decide later to add some WebGL to it, you have to go > through all kinds of contortions just to avoid an INVALID_OPERATION. The > framework you're using for the video playlists seems like it would be > unlikely to have the needed functionality built in to let you know when top > stop calling texImage2D and when to start again. > > If the error was removed and 1x1 was substituted that problem would go away > > I don't see what error adds except to make it more complicated. Let's think through this use case. The playlist library iterates through different videos. As it switches between videos it's clearly necessary to send some sort of event to the application, because the videos might be different resolutions. Unless the application is calling texImage2D all the time (instead of texSubImage2D most of the time), it will have to watch for these events to figure out when it might need to reallocate the texture at a different size. (If the app calls texImage2D all the time, then it doesn't matter whether WebGL generates INVALID_OPERATION or a 1x1 black texture. The app will never need to know whether the upload succeeded or failed, because failures will be transient, and the app will automatically recover from them when the next video starts playing. If texImage2D generates INVALID_OPERATION, the previous contents of the texture will be preserved.) If WebGL generates INVALID_OPERATION for incomplete texture uploads, then the app either needs to watch for those errors coming back from OpenGL, or watch for the onplay event to know when it's guaranteed to be OK to upload a frame. If the app doesn't do this, and mostly calls texSubImage2D to upload new frames, then the video can get "stuck" as the app tries to upload new frames via texSubImage2D while the underlying texture is the wrong size. If WebGL silently creates a 1x1 black texture for incomplete texture uploads, then the app still needs to watch for the onplay event to know when to call texImage2D instead of texSubImage2D. Otherwise, the video can still get "stuck" displaying a 1x1 black texture. As far as I see it, the app has to do the same work regardless of whether incomplete texture uploads generate an error or silently produce a 1x1 black texture. The main difference is that the error can be observed by the application, but the silent allocation of the 1x1 black texture can not, because glGetTexLevelParameter doesn't exist in the GLES API. For this reason I continue to think that generating INVALID_OPERATION is the more transparent behavior for WebGL. Regardless of the decision here, I agree with you that the behavior should be specified and we collectively should try to write some tests for it -- ideally, ones that explicitly force failure or pauses at certain download points, rather than just trying to load and play the video many times. -Ken > On Fri, Oct 19, 2012 at 3:09 PM, Kenneth Russell wrote: >> >> On Tue, Oct 16, 2012 at 4:46 PM, Gregg Tavares (??) >> wrote: >> > >> > >> > >> > On Tue, Oct 16, 2012 at 4:28 PM, Kenneth Russell wrote: >> >> >> >> On Tue, Oct 16, 2012 at 3:54 PM, Gregg Tavares (??) >> >> wrote: >> >> > I don't think the spec makes this clear what happens when you try to >> >> > call >> >> > texImage2D or texSubImage2D on an image or video that is not yet >> >> > loaded. >> >> >> >> Right, the behavior in this case is not defined. 
I'm pretty sure this >> >> was discussed a long time ago in the working group, but it seemed >> >> difficult to write a test for any defined behavior, so by consensus >> >> implementations generate INVALID_OPERATION when attempting to upload >> >> incomplete images or video to WebGL textures. >> > >> > >> > I don't see a problem writing a test. Basically >> > >> > var playing = false >> > video.src = url >> > video.play(); >> > video.addEventListener('playing', function() { playing = true;}); >> > >> > waitForVideo() { >> > gl.bindTexture(...) >> > gl.texImage2D(..., video); >> > if (playing) { >> > doTestsOnVideoContent(); >> > if (gl.getError() != gl.NO_ERROR) { testFailed("there should be no >> > errors"); } >> > } else { >> > requestAnimationFrame(waitForVideo); >> > } >> > >> > This basically says an implementation is never allowed to generate an >> > error. >> > it might not be a perfect test but neither is the current one. In fact >> > this >> > should test it just fine >> > >> > video = document.createElement("video"); >> > gl.texImage2D(..., video); >> > glErrorShouldBe(gl.NO_ERROR); >> > video.src = "someValidURL": >> > gl.texImage2D(..., video); >> > glErrorShouldBe(gl.NO_ERROR); >> > video.src = "someOtherValidURL": >> > gl.texImage2D(..., video); >> > glErrorShouldBe(gl.NO_ERROR); >> > >> > Then do the other 'playing' event tests. >> > >> >> >> >> >> >> > Example >> >> > >> >> > video = document.createElement("video"); >> >> > video.src = "http://mysite.com/myvideo"; >> >> > video.play(); >> >> > >> >> > function render() { >> >> > gl.bindTexture(...) >> >> > gl.texImage2D(..., video); >> >> > gl.drawArrays(...); >> >> > window.requestAnimationFrame(render); >> >> > } >> >> > >> >> > Chrome right now will synthesize a GL error if the system hasn't >> >> > actually >> >> > gotten the video to start (as in if it's still buffering). >> >> > >> >> > Off the top of my head, it seems like it would just be friendlier to >> >> > make a >> >> > black texture (0,0,0,1) or (0,0,0,0) 1x1 pixels for videos that >> >> > aren't >> >> > loaded yet. >> >> > >> >> > Otherwise, having to check that the video is ready before calling >> >> > texImage2D >> >> > seems kind of burdensome on the developer. If they want to check they >> >> > should >> >> > use video.addEventListener('playing') or similar. If they were making >> >> > a >> >> > video player they'd have to add a bunch of logic when queuing the >> >> > next >> >> > video. >> >> > >> >> > Same with images. >> >> > >> >> > img = document.createElement("img"); >> >> > img.src = "http://mysite.com/myimage"; >> >> > >> >> > function render() { >> >> > gl.bindTexture(...) >> >> > gl.texImage2D(..., img); >> >> > gl.drawArrays(...); >> >> > window.requestAnimationFrame(render); >> >> > } >> >> > >> >> > If you want to know if the image has loaded use img.onload but >> >> > otherwise >> >> > don't fail the call? >> >> > >> >> > What do you think? Good idea? Bad idea? >> >> >> >> My initial impression is that this change is not a good idea. It would >> >> expose the specification, implementations and applications to a lot of >> >> corner case behaviors. For example, if a video's width and height >> >> hasn't been received yet, then texImage2D(..., video) would have to >> >> allocate a 1x1 texture; but if the width and height are known, then it >> >> would have to allocate the right amount of storage. 
A naive app might >> >> call texImage2D only the first time and texSubImage2D subsequently, so >> >> if the first texImage2D call was made before the metadata was >> >> downloaded, it would never render correctly. I think the current >> >> fail-fast behavior is best, and already has the result that it renders >> >> a black texture if the upload fails; the call will (implicitly) >> >> generate INVALID_OPERATION, and the texture will be incomplete. >> > >> > >> > I don't see how that's worse than the current situation which is you >> > call >> > texImage2D and pray it works. You have no idea if it's going to work or >> > not. >> > If you do this does it work? >> > >> > okToUseVideo = false; >> > video = document.createElement("video"); >> > video.src = "movie#1" >> > video.addEventListener('playing', function() { okToUseVideo = true; } >> > >> > frameCount = 0; >> > >> > function render() { >> > if (okToUseVideo) { >> > gl.texImage2D(... , video); >> > >> > ++frameCount; >> > if (frameCount > 1000) { >> > video.src = "movie2"; >> > } >> > } >> > } >> > >> > >> > Basically after some amount of time I switch video.src to a new movie. >> > Is >> > the video off limits now? >> >> The assumption should be "yes". When the source of the media element >> is set, it is immediately invalid to use until the onload / playing >> handler is called. >> >> > Will it use the old frame from the old movie until >> > the new movie is buffered or will it give me INVALID_OPERATION? I have >> > no >> > idea and it's not specified. Same with img. >> > >> > img.src = "image#1" >> > img.onload = function() { >> > img.src = "image#2"; >> > gl.texImage2D(...., img); // what's this? old image, no image, >> > INVALID_OPERATION? >> > } >> >> Yes, this should be assumed to produce INVALID_OPERATION. >> >> >> > The texSubImage2D issue is not helped by the current spec. If video.src >> > = >> > useChoosenURL then you have no idea what the width and height are until >> > the >> > 'playing' event (or whatever event) which is no different than if we >> > changed >> > it. >> > >> > Changing it IMO means far less broken websites and I can't see any >> > disadvantages. Sure you can get a 1x1 pixel texture to start and call >> > texSubImage2D now but you can do that already. >> >> With the current fail-fast behavior, where INVALID_OPERATION will be >> generated, the texture will be in one of two states: (1) its previous >> state, because the texImage2D call failed; or (2) having the width and >> height of the incoming media element. >> >> With the proposed behavior to never generate an error, the texture >> will be either 1x1 or width x height. Another problem with this >> proposal is that there is no way with the ES 2.0 or WebGL API to query >> the size of a level of a texture, because GetTexLevelParameter was >> removed from the OpenGL ES API (and, unfortunately, not reintroduced >> in ES 3.0). Therefore the behavior is completely silent -- there is no >> way for the application developer to find out what happened (no error >> reported, and still renders like an incomplete texture would). >> >> I agree that the failing behavior should be specified and, more >> importantly, tests written verifying it. If the INVALID_OPERATION >> error were spec'ed, would that address your primary concern? I'm not >> convinced that silently making the texture 1x1 is a good path to take. 
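Whichever way the spec goes, the defensive pattern described here, only uploading once the element has signalled a playable frame and reallocating when the source (and therefore the size) changes, is short to write. A sketch, using standard HTMLMediaElement events, with gl and videoTexture assumed to be set up elsewhere by the application:

// Upload frames only after 'playing'; use texImage2D when the size changes
// and texSubImage2D otherwise.
var ready = false;
video.addEventListener("playing", function () { ready = true; });
video.addEventListener("emptied", function () { ready = false; }); // src changed

var texWidth = 0, texHeight = 0;

function uploadFrame(gl, tex) {
  if (!ready) return;                        // nothing safe to upload yet
  gl.bindTexture(gl.TEXTURE_2D, tex);
  if (video.videoWidth !== texWidth || video.videoHeight !== texHeight) {
    // First frame, or the resolution switched: (re)allocate the texture.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
    texWidth = video.videoWidth;
    texHeight = video.videoHeight;
  } else {
    gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, gl.RGBA, gl.UNSIGNED_BYTE, video);
  }
}

function render() {
  uploadFrame(gl, videoTexture);
  // ... draw ...
  window.requestAnimationFrame(render);
}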
>> >> -Ken >> >> >> >> >> >> >> >> If we want to spec this more tightly then we'll need to do more work >> >> in the conformance suite to forcibly stall HTTP downloads of the video >> >> resources in the test suite at well known points. I'm vaguely aware >> >> that WebKit's HTTP tests do this with a custom server. Requiring a >> >> custom server in order to run the WebGL conformance suite at all would >> >> have pretty significant disadvantages. >> >> >> >> -Ken >> > >> > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gma...@ Mon Oct 22 11:28:43 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo56S+55SoKQ==?=) Date: Mon, 22 Oct 2012 11:28:43 -0700 Subject: [Public WebGL] texImage2D on unfinished images and video In-Reply-To: References: Message-ID: So checked. Three.js always uses texImage2D for video So does this http://dev.opera.com/static/articles/2012/webgl-postprocessing/webgl-pp/multipass.html and this http://badassjs.com/post/16472398856/photobooth-style-live-video-effects-in-javascript-and and this https://developer.mozilla.org/en-US/docs/WebGL/Animating_textures_in_WebGL and this http://videos.mozilla.org/serv/mozhacks/flight-of-the-navigator/ and this http://radiapp.com/samples/Video_HPtrailer_WebGL_demo.html and this http://sp0t.org/videoriot/ This one use texSubImage2D http://webglsamples.googlecode.com/hg/color-adjust/color-adjust.html I only bring that up to point out that the fact that texSubImage2D usage would be problematic may not be that important On the other hand, both video and WebRTC are planning on dynamic resolution adjustments based on how fast your internet connection is. I bring that up because it means knowing the resolution so you can call texSubImage2D seems like something you can only count on if you have 100% control over the video source. I seems like to help WebGL fit more uses cases (cases where you don't control those sources like say a video site or a video chat site) that it would be more useful if things "just worked" which is relatively easy if texImage2D just works and seems like it would be rather painful if it could possibly generate an error in various states of switching resolutions or changing videos. I feel like GetTexLevelParameter is an orthogonal issue. There are plenty of things you can do without knowing the resolution of the image/video. Also, while an unloaded video/image could return 1x1, off the top of my head, what it returns while changing resolutions or queuing the next video could be implementation defined. Some browsers might return 1x1 while switching. Others might return the last valid frame until such time as a frame from the new source is ready. As long as there is no error that seems better to me. On Mon, Oct 22, 2012 at 10:32 AM, Kenneth Russell wrote: > On Sun, Oct 21, 2012 at 11:52 PM, Gregg Tavares (??) > wrote: > > My concern is if you have a video player library that plays a playlist of > > videos and then you decide later to add some WebGL to it, you have to go > > through all kinds of contortions just to avoid an INVALID_OPERATION. The > > framework you're using for the video playlists seems like it would be > > unlikely to have the needed functionality built in to let you know when > top > > stop calling texImage2D and when to start again. 
> > > > If the error was removed and 1x1 was substituted that problem would go > away > > > > I don't see what error adds except to make it more complicated. > > Let's think through this use case. The playlist library iterates > through different videos. As it switches between videos it's clearly > necessary to send some sort of event to the application, because the > videos might be different resolutions. Unless the application is > calling texImage2D all the time (instead of texSubImage2D most of the > time), it will have to watch for these events to figure out when it > might need to reallocate the texture at a different size. (If the app > calls texImage2D all the time, then it doesn't matter whether WebGL > generates INVALID_OPERATION or a 1x1 black texture. The app will never > need to know whether the upload succeeded or failed, because failures > will be transient, and the app will automatically recover from them > when the next video starts playing. If texImage2D generates > INVALID_OPERATION, the previous contents of the texture will be > preserved.) > > If WebGL generates INVALID_OPERATION for incomplete texture uploads, > then the app either needs to watch for those errors coming back from > OpenGL, or watch for the onplay event to know when it's guaranteed to > be OK to upload a frame. If the app doesn't do this, and mostly calls > texSubImage2D to upload new frames, then the video can get "stuck" as > the app tries to upload new frames via texSubImage2D while the > underlying texture is the wrong size. > > If WebGL silently creates a 1x1 black texture for incomplete texture > uploads, then the app still needs to watch for the onplay event to > know when to call texImage2D instead of texSubImage2D. Otherwise, the > video can still get "stuck" displaying a 1x1 black texture. > > As far as I see it, the app has to do the same work regardless of > whether incomplete texture uploads generate an error or silently > produce a 1x1 black texture. The main difference is that the error can > be observed by the application, but the silent allocation of the 1x1 > black texture can not, because glGetTexLevelParameter doesn't exist in > the GLES API. For this reason I continue to think that generating > INVALID_OPERATION is the more transparent behavior for WebGL. > > Regardless of the decision here, I agree with you that the behavior > should be specified and we collectively should try to write some tests > for it -- ideally, ones that explicitly force failure or pauses at > certain download points, rather than just trying to load and play the > video many times. > > -Ken > > > > > On Fri, Oct 19, 2012 at 3:09 PM, Kenneth Russell wrote: > >> > >> On Tue, Oct 16, 2012 at 4:46 PM, Gregg Tavares (??) > >> wrote: > >> > > >> > > >> > > >> > On Tue, Oct 16, 2012 at 4:28 PM, Kenneth Russell > wrote: > >> >> > >> >> On Tue, Oct 16, 2012 at 3:54 PM, Gregg Tavares (??) > > >> >> wrote: > >> >> > I don't think the spec makes this clear what happens when you try > to > >> >> > call > >> >> > texImage2D or texSubImage2D on an image or video that is not yet > >> >> > loaded. > >> >> > >> >> Right, the behavior in this case is not defined. I'm pretty sure this > >> >> was discussed a long time ago in the working group, but it seemed > >> >> difficult to write a test for any defined behavior, so by consensus > >> >> implementations generate INVALID_OPERATION when attempting to upload > >> >> incomplete images or video to WebGL textures. > >> > > >> > > >> > I don't see a problem writing a test. 
Basically > >> > > >> > var playing = false > >> > video.src = url > >> > video.play(); > >> > video.addEventListener('playing', function() { playing = true;}); > >> > > >> > waitForVideo() { > >> > gl.bindTexture(...) > >> > gl.texImage2D(..., video); > >> > if (playing) { > >> > doTestsOnVideoContent(); > >> > if (gl.getError() != gl.NO_ERROR) { testFailed("there should be > no > >> > errors"); } > >> > } else { > >> > requestAnimationFrame(waitForVideo); > >> > } > >> > > >> > This basically says an implementation is never allowed to generate an > >> > error. > >> > it might not be a perfect test but neither is the current one. In fact > >> > this > >> > should test it just fine > >> > > >> > video = document.createElement("video"); > >> > gl.texImage2D(..., video); > >> > glErrorShouldBe(gl.NO_ERROR); > >> > video.src = "someValidURL": > >> > gl.texImage2D(..., video); > >> > glErrorShouldBe(gl.NO_ERROR); > >> > video.src = "someOtherValidURL": > >> > gl.texImage2D(..., video); > >> > glErrorShouldBe(gl.NO_ERROR); > >> > > >> > Then do the other 'playing' event tests. > >> > > >> >> > >> >> > >> >> > Example > >> >> > > >> >> > video = document.createElement("video"); > >> >> > video.src = "http://mysite.com/myvideo"; > >> >> > video.play(); > >> >> > > >> >> > function render() { > >> >> > gl.bindTexture(...) > >> >> > gl.texImage2D(..., video); > >> >> > gl.drawArrays(...); > >> >> > window.requestAnimationFrame(render); > >> >> > } > >> >> > > >> >> > Chrome right now will synthesize a GL error if the system hasn't > >> >> > actually > >> >> > gotten the video to start (as in if it's still buffering). > >> >> > > >> >> > Off the top of my head, it seems like it would just be friendlier > to > >> >> > make a > >> >> > black texture (0,0,0,1) or (0,0,0,0) 1x1 pixels for videos that > >> >> > aren't > >> >> > loaded yet. > >> >> > > >> >> > Otherwise, having to check that the video is ready before calling > >> >> > texImage2D > >> >> > seems kind of burdensome on the developer. If they want to check > they > >> >> > should > >> >> > use video.addEventListener('playing') or similar. If they were > making > >> >> > a > >> >> > video player they'd have to add a bunch of logic when queuing the > >> >> > next > >> >> > video. > >> >> > > >> >> > Same with images. > >> >> > > >> >> > img = document.createElement("img"); > >> >> > img.src = "http://mysite.com/myimage"; > >> >> > > >> >> > function render() { > >> >> > gl.bindTexture(...) > >> >> > gl.texImage2D(..., img); > >> >> > gl.drawArrays(...); > >> >> > window.requestAnimationFrame(render); > >> >> > } > >> >> > > >> >> > If you want to know if the image has loaded use img.onload but > >> >> > otherwise > >> >> > don't fail the call? > >> >> > > >> >> > What do you think? Good idea? Bad idea? > >> >> > >> >> My initial impression is that this change is not a good idea. It > would > >> >> expose the specification, implementations and applications to a lot > of > >> >> corner case behaviors. For example, if a video's width and height > >> >> hasn't been received yet, then texImage2D(..., video) would have to > >> >> allocate a 1x1 texture; but if the width and height are known, then > it > >> >> would have to allocate the right amount of storage. A naive app might > >> >> call texImage2D only the first time and texSubImage2D subsequently, > so > >> >> if the first texImage2D call was made before the metadata was > >> >> downloaded, it would never render correctly. 
I think the current > >> >> fail-fast behavior is best, and already has the result that it > renders > >> >> a black texture if the upload fails; the call will (implicitly) > >> >> generate INVALID_OPERATION, and the texture will be incomplete. > >> > > >> > > >> > I don't see how that's worse than the current situation which is you > >> > call > >> > texImage2D and pray it works. You have no idea if it's going to work > or > >> > not. > >> > If you do this does it work? > >> > > >> > okToUseVideo = false; > >> > video = document.createElement("video"); > >> > video.src = "movie#1" > >> > video.addEventListener('playing', function() { okToUseVideo = true; > } > >> > > >> > frameCount = 0; > >> > > >> > function render() { > >> > if (okToUseVideo) { > >> > gl.texImage2D(... , video); > >> > > >> > ++frameCount; > >> > if (frameCount > 1000) { > >> > video.src = "movie2"; > >> > } > >> > } > >> > } > >> > > >> > > >> > Basically after some amount of time I switch video.src to a new movie. > >> > Is > >> > the video off limits now? > >> > >> The assumption should be "yes". When the source of the media element > >> is set, it is immediately invalid to use until the onload / playing > >> handler is called. > >> > >> > Will it use the old frame from the old movie until > >> > the new movie is buffered or will it give me INVALID_OPERATION? I have > >> > no > >> > idea and it's not specified. Same with img. > >> > > >> > img.src = "image#1" > >> > img.onload = function() { > >> > img.src = "image#2"; > >> > gl.texImage2D(...., img); // what's this? old image, no image, > >> > INVALID_OPERATION? > >> > } > >> > >> Yes, this should be assumed to produce INVALID_OPERATION. > >> > >> > >> > The texSubImage2D issue is not helped by the current spec. If > video.src > >> > = > >> > useChoosenURL then you have no idea what the width and height are > until > >> > the > >> > 'playing' event (or whatever event) which is no different than if we > >> > changed > >> > it. > >> > > >> > Changing it IMO means far less broken websites and I can't see any > >> > disadvantages. Sure you can get a 1x1 pixel texture to start and call > >> > texSubImage2D now but you can do that already. > >> > >> With the current fail-fast behavior, where INVALID_OPERATION will be > >> generated, the texture will be in one of two states: (1) its previous > >> state, because the texImage2D call failed; or (2) having the width and > >> height of the incoming media element. > >> > >> With the proposed behavior to never generate an error, the texture > >> will be either 1x1 or width x height. Another problem with this > >> proposal is that there is no way with the ES 2.0 or WebGL API to query > >> the size of a level of a texture, because GetTexLevelParameter was > >> removed from the OpenGL ES API (and, unfortunately, not reintroduced > >> in ES 3.0). Therefore the behavior is completely silent -- there is no > >> way for the application developer to find out what happened (no error > >> reported, and still renders like an incomplete texture would). > >> > >> I agree that the failing behavior should be specified and, more > >> importantly, tests written verifying it. If the INVALID_OPERATION > >> error were spec'ed, would that address your primary concern? I'm not > >> convinced that silently making the texture 1x1 is a good path to take. 
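A minimal sketch of the defensive pattern this exchange converges on, assuming the current fail-fast behaviour (texImage2D may generate INVALID_OPERATION until the element has a decoded frame): gate uploads on the 'playing' event, reallocate with texImage2D when the source changes, and use texSubImage2D only while the current source is known to be playing. The gl and tex variables and the switchSource helper are illustrative names, not part of either proposal.

var videoReady = false;
var video = document.createElement("video");
video.addEventListener("playing", function() {
  // Dimensions are known from here on; (re)allocate the texture storage.
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
  videoReady = true;
});
video.src = "movie1.webm";
video.play();

function switchSource(url) {
  videoReady = false;   // uploads are off limits until 'playing' fires again
  video.src = url;
  video.play();
}

function render() {
  if (videoReady) {
    gl.bindTexture(gl.TEXTURE_2D, tex);
    // Per-frame update; safe because 'playing' has fired for the current src.
    gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, gl.RGBA, gl.UNSIGNED_BYTE, video);
  }
  gl.drawArrays(gl.TRIANGLES, 0, 6);
  window.requestAnimationFrame(render);
}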
> >> > >> -Ken > >> > >> > >> >> > >> >> > >> >> If we want to spec this more tightly then we'll need to do more work > >> >> in the conformance suite to forcibly stall HTTP downloads of the > video > >> >> resources in the test suite at well known points. I'm vaguely aware > >> >> that WebKit's HTTP tests do this with a custom server. Requiring a > >> >> custom server in order to run the WebGL conformance suite at all > would > >> >> have pretty significant disadvantages. > >> >> > >> >> -Ken > >> > > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vla...@ Mon Oct 22 08:00:53 2012 From: vla...@ (Vladimir Vukicevic) Date: Mon, 22 Oct 2012 11:00:53 -0400 Subject: [Public WebGL] using the same context with multiple canvases In-Reply-To: <507F5973.10808@mozilla.com> References: <507F5973.10808@mozilla.com> Message-ID: <50855FA5.5060600@mozilla.com> On 10/17/2012 9:20 PM, Benoit Jacob wrote: > Agree that sharing a single context between canvases seems like a more > workable approach than the opposite -- it will lead to less memory > overhead (no need to have N OpenGL contexts for N canvases, and no > need to rely on OpenGL share groups). > > Some random thoughts: > > - what happened to the previously discussed idea of an idea like this: > > gl.bindFramebuffer(othercanvas); Yep, I prefer this instead of a new setCanvas call as well. You should be able to treat canvases as just framebuffers whose configuration you can't modify. Note that we already made this easy on ourselves by having WebGLFramebuffer be an object... so things like getting the currently bound framebuffer work naturally, just returning a Canvas element object instead of a WebGLFramebuffer :) > - a big problem with the approach of having to getContext before > telling that you want to use another existing context, is that it will > force the browser to create a drawing buffer which you may turn out to > never use. We need an API that avoids such waste by allowing to avoid > creating drawing buffers. The earlier-discussed > bindFramebuffer(canvas) proposal, applied to a new canvas that doesn't > already have a context on it, was a possible way to solve that problem. I think you actually want both -- you want to do getContext() on the second canvas, giving it the existing context as part of the params. Brandon's proposal of 'attach' would work well here. > - I have a feeling that allowing multiple sets of context attributes > is going to be a headache. If we settle on an API that formally allows > specifying different context attribs, I'd at least recommend requiring > the all the attributes to agree across all canvases, at least as a > first step -- we can always relax that later, but if we allow it from > the start, we won't be able to change it anymore. If we take the view that each canvas operates like a framebuffer, then the attribs will be used to configure the framebuffer for that canvas. For example, you may want one canvas that has depth but no alpha, and another where you need alpha but don't care about depth. 
So, for example: var gl = canvas1.getContext("webgl", { depth: true, alpha: false }); var gl2 = canvas2.getContext("webgl", { attach: ctx, depth: false, alpha: true }); assert(gl === gl2); gl.clearColor(1, 0, 0, 1); gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); ctx.bindFramebuffer(ctx.FRAMEBUFFER, canvas2); gl.clearColor(0, 1, 0, 1); gl.clear(gl.COLOR_BUFFER_BIT); /* null always returns to the canvas that created the context, * but the canvas object is an alias -- these two lines are identical: */ ctx.bindFramebuffer(ctx.FRAMEBUFFER, null); ctx.bindFramebuffer(ctx.FRAMEBUFFER, canvas1); From Gregg's email: > After the cal to cavnas2.setContext(ctx1) what does ctx2 do? Does it > still draw to cavnas2 as well? Does it get > GL_INVALID_FRAMEBUFFER_OPERATION if a draw/clear function is called > and no FBO is bound? Does it become lost? What does ctx1.canvas > reference and what does ctx2.canvas reference? What does > ctx1.getContextAttributes() return? The attributes for canvas1 or > canvas2? My thoughts... ctx1/ctx2 should be equivalent, so no need to specify draw/clear with no binding behaviour, different loss behaviour, etc. ctx.canvas always references the creating canvas; we don't really have much need to get the canvas from the context anyway. (We can and maybe should introduce something like ctx.getAttachedCanvases() or something?) ctx.getContextAttibutes() we can extend to be ctx.getContextAttributes(canvas), with default of null being the original canvas (i.e. canvas1). - Vlad > > - Likewise, allowing canvases from different principals is going to be > a source of pain, so I'd make it a requirement that canvases sharing a > context must be on the same principal. > > Benoit > > On 12-10-17 09:04 PM, Gregg Tavares (??) wrote: >> I'd like to pursue this idea further that Florian brought up which is >> that, for drawing to multiple canvases, rather than share resources >> across contexts it would be much nicer to just be able to use the >> same context with multiple canvases. Something like >> >> ctx1 = canvas1.getContext("webgl"); >> ctx1.clearColor(0,1,0,1); >> ctx1.clear(gl.COLOR_BUFFER_BIT); >> canvas2.setContext(ctx1); >> ctx1.clear(gl.COLOR_BUFFER_BIT); >> >> >> Clear's both canvas1 and canvas2 to green >> >> Issues: Under the current WebGL the only way to setup a canvas is by >> calling getContext(..., contextCreationParameters). Whether the >> canvas is RGB or RGBA, has depth or stencil, is premultiplied or not, >> has it's backbuffer preserved or not, etc.. >> >> Which leads to some questions. >> >> Is the current API okay. You'd create 2 contexts still but just use >> one of them as in >> >> ctx1 = canvas1.getContext("webgl". { alpha: true } ); >> ctx2 = canvas1.getContext("webgl". { alpha: false } ); >> ctx1.clearColor(0,1,0,1); >> ctx1.clear(gl.COLOR_BUFFER_BIT); >> canvas2.setContext(ctx1); >> ctx1.clear(gl.COLOR_BUFFER_BIT); >> >> >> After the cal to cavnas2.setContext(ctx1) what does ctx2 do? Does it >> still draw to cavnas2 as well? Does it get >> GL_INVALID_FRAMEBUFFER_OPERATION if a draw/clear function is called >> and no FBO is bound? Does it become lost? What does ctx1.canvas >> reference and what does ctx2.canvas reference? What does >> ctx1.getContextAttributes() return? The attributes for canvas1 or >> canvas2? >> >> I think arguably being able to setContext on a canvas is the right >> way to solve drawing to multiple canvas with the same resources but >> there are some unfortunate existing API choices. >> >> Thoughts? 
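For comparison, a sketch of what a per-frame loop over two canvases might look like under the bindFramebuffer(canvas) overload discussed above. The overload itself is only a proposal and does not exist in WebGL 1.0; drawScene and the camera objects are assumed application helpers.

function renderToCanvas(gl, canvas, camera) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, canvas);   // proposed overload, not current API
  gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  drawScene(gl, camera);                        // assumed application helper
}

function render() {
  renderToCanvas(gl, canvas1, mainCamera);
  renderToCanvas(gl, canvas2, pipCamera);
  window.requestAnimationFrame(render);
}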
>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bzb...@ Mon Oct 22 13:05:13 2012 From: bzb...@ (Boris Zbarsky) Date: Mon, 22 Oct 2012 16:05:13 -0400 Subject: [Public WebGL] Bug in context-lost.html conformance test: getContextAttributes should not return null Message-ID: <5085A6F9.6020707@mit.edu> https://www.khronos.org/registry/webgl/sdk/tests/conformance/context/context-lost.html tests that getContextAttributes() returns null if the context has been lost. But per the spec, getContextAttributes never returns null. So as far as I can tell, the test is wrong. And it just started failing in Gecko when I fixed the spec bug we had and stopped returning null in the lost-context case. It would probably be better to test that even in lost-context cases the attributes returned are the ones the context was created with. -Boris ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gma...@ Mon Oct 22 13:42:45 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo56S+55SoKQ==?=) Date: Mon, 22 Oct 2012 13:42:45 -0700 Subject: [Public WebGL] using the same context with multiple canvases In-Reply-To: <50855FA5.5060600@mozilla.com> References: <507F5973.10808@mozilla.com> <50855FA5.5060600@mozilla.com> Message-ID: On Mon, Oct 22, 2012 at 8:00 AM, Vladimir Vukicevic wrote: > On 10/17/2012 9:20 PM, Benoit Jacob wrote: > > Agree that sharing a single context between canvases seems like a more > workable approach than the opposite -- it will lead to less memory overhead > (no need to have N OpenGL contexts for N canvases, and no need to rely on > OpenGL share groups). > > Some random thoughts: > > - what happened to the previously discussed idea of an idea like this: > > gl.bindFramebuffer(othercanvas); > > > Yep, I prefer this instead of a new setCanvas call as well. You should be > able to treat canvases as just framebuffers whose configuration you can't > modify. Note that we already made this easy on ourselves by having > WebGLFramebuffer be an object... so things like getting the currently bound > framebuffer work naturally, just returning a Canvas element object instead > of a WebGLFramebuffer :) > > > - a big problem with the approach of having to getContext before telling > that you want to use another existing context, is that it will force the > browser to create a drawing buffer which you may turn out to never use. We > need an API that avoids such waste by allowing to avoid creating drawing > buffers. The earlier-discussed bindFramebuffer(canvas) proposal, applied to > a new canvas that doesn't already have a context on it, was a possible way > to solve that problem. > > > I think you actually want both -- you want to do getContext() on the > second canvas, giving it the existing context as part of the params. > Brandon's proposal of 'attach' would work well here. > > > - I have a feeling that allowing multiple sets of context attributes is > going to be a headache. If we settle on an API that formally allows > specifying different context attribs, I'd at least recommend requiring the > all the attributes to agree across all canvases, at least as a first step > -- we can always relax that later, but if we allow it from the start, we > won't be able to change it anymore. 
> > > If we take the view that each canvas operates like a framebuffer, then the > attribs will be used to configure the framebuffer for that canvas. For > example, you may want one canvas that has depth but no alpha, and another > where you need alpha but don't care about depth. > > So, for example: > > var gl = canvas1.getContext("webgl", { depth: true, alpha: false }); > var gl2 = canvas2.getContext("webgl", { attach: ctx, depth: false, alpha: > true }); > assert(gl === gl2); > > gl.clearColor(1, 0, 0, 1); > gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); > > ctx.bindFramebuffer(ctx.FRAMEBUFFER, canvas2); > > gl.clearColor(0, 1, 0, 1); > gl.clear(gl.COLOR_BUFFER_BIT); > > /* null always returns to the canvas that created the context, > * but the canvas object is an alias -- these two lines are identical: > */ > ctx.bindFramebuffer(ctx.FRAMEBUFFER, null); > ctx.bindFramebuffer(ctx.FRAMEBUFFER, canvas1); > > > From Gregg's email: > > After the cal to cavnas2.setContext(ctx1) what does ctx2 do? Does it > still draw to cavnas2 as well? Does it get GL_INVALID_FRAMEBUFFER_OPERATION > if a draw/clear function is called and no FBO is bound? Does it become > lost? What does ctx1.canvas reference and what does ctx2.canvas reference? > What does ctx1.getContextAttributes() return? The attributes for canvas1 or > canvas2? > > > My thoughts... ctx1/ctx2 should be equivalent, so no need to specify > draw/clear with no binding behaviour, different loss behaviour, etc. > ctx.canvas always references the creating canvas; we don't really have much > need to get the canvas from the context anyway. > I get the canvas from the context all over the place in my code. Since I can get the width/height of various things that way gl.canvas.width or gl.canvas.clientWidth or function takeScreenshot(gl) { return gl.canvas.toDataURL(); } etc. I means I don't have to pass both a context and a canvas. If ctx.canvas does not point to the canvas currently being rendered to that would break lots of my code. Given 2 canvases of different sizes you'd need to set the viewport after switching. It would arguably be nice to be able to do this ctx.bindFramebuffer(ctx.FRAMEBUFFER, someCanvas); ctx.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight); var aspect = gl.canvas.clientWidth / gl.canvas.clientHeight; perspectiveMatrix = someMathLib.perspective(fieldOfView, aspect, zNear, zFar); I'm also not sure that gl.bindFramebuffer(..., canvas) makes sense vs a new function like gl.setDrawingBuffer(canvas). Conceptually its the same but it's not really because ... gl.bindFramebuffer(gl.FRAMEBUFFER, canvas); gl.framebufferTexture2D(...) // should fail gl.framebufferRenderbuffer(..) // should fail gl.deleteFramebuffer(canvas); // should fail Where as a new function like say gl.setDrawingBuffer(canvas) would make it clear it's "special". Then this gl.bindFramebuffer(gl.FRAMEBUFFER, mybuffer); gl.bindDrawingBuffer(canvas); would still render to mybuffer. To render to the current canvas/drawingBuffer you'd still do gl.bindFramebuffer(gl.FRAMEBUFFER, null); // render to current drawingBuffer. > (We can and maybe should introduce something like > ctx.getAttachedCanvases() or something?) ctx.getContextAttibutes() we can > extend to be ctx.getContextAttributes(canvas), with default of null being > the original canvas (i.e. canvas1). 
> > - Vlad > > > > - Likewise, allowing canvases from different principals is going to be a > source of pain, so I'd make it a requirement that canvases sharing a > context must be on the same principal. > > Benoit > > On 12-10-17 09:04 PM, Gregg Tavares (??) wrote: > > I'd like to pursue this idea further that Florian brought up which is > that, for drawing to multiple canvases, rather than share resources across > contexts it would be much nicer to just be able to use the same context > with multiple canvases. Something like > > ctx1 = canvas1.getContext("webgl"); > ctx1.clearColor(0,1,0,1); > ctx1.clear(gl.COLOR_BUFFER_BIT); > canvas2.setContext(ctx1); > ctx1.clear(gl.COLOR_BUFFER_BIT); > > > Clear's both canvas1 and canvas2 to green > > Issues: Under the current WebGL the only way to setup a canvas is by > calling getContext(..., contextCreationParameters). Whether the canvas is > RGB or RGBA, has depth or stencil, is premultiplied or not, has it's > backbuffer preserved or not, etc.. > > Which leads to some questions. > > Is the current API okay. You'd create 2 contexts still but just use one > of them as in > > ctx1 = canvas1.getContext("webgl". { alpha: true } ); > ctx2 = canvas1.getContext("webgl". { alpha: false } ); > ctx1.clearColor(0,1,0,1); > ctx1.clear(gl.COLOR_BUFFER_BIT); > canvas2.setContext(ctx1); > ctx1.clear(gl.COLOR_BUFFER_BIT); > > > After the cal to cavnas2.setContext(ctx1) what does ctx2 do? Does it > still draw to cavnas2 as well? Does it get GL_INVALID_FRAMEBUFFER_OPERATION > if a draw/clear function is called and no FBO is bound? Does it become > lost? What does ctx1.canvas reference and what does ctx2.canvas reference? > What does ctx1.getContextAttributes() return? The attributes for canvas1 or > canvas2? > > I think arguably being able to setContext on a canvas is the right way > to solve drawing to multiple canvas with the same resources but there are > some unfortunate existing API choices. > > Thoughts? > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Mon Oct 22 13:54:21 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo56S+55SoKQ==?=) Date: Mon, 22 Oct 2012 13:54:21 -0700 Subject: [Public WebGL] Bug in context-lost.html conformance test: getContextAttributes should not return null In-Reply-To: <5085A6F9.6020707@mit.edu> References: <5085A6F9.6020707@mit.edu> Message-ID: That might be a bug in the test but just fyi, the attributes can change when the context is restored. As an example: Find a win box with 2 GPUs. Disable 2nd GPU, run a WebGL program, enable 2nd GPU, disable first GPU. In an idea WebGL impl the context would be lost on the first gpu and restored on the second. The second will have different capabilities (different limits) and might have different bugs (different features disabled like anti-aliasing) On Mon, Oct 22, 2012 at 1:05 PM, Boris Zbarsky wrote: > > https://www.khronos.org/**registry/webgl/sdk/tests/** > conformance/context/context-**lost.htmltests that getContextAttributes() returns null if the context has been lost. > > But per the spec, getContextAttributes never returns null. So as far as I > can tell, the test is wrong. And it just started failing in Gecko when I > fixed the spec bug we had and stopped returning null in the lost-context > case. > > It would probably be better to test that even in lost-context cases the > attributes returned are the ones the context was created with. 
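A small sketch of the application-side consequence of this: because a restored context may live on a different GPU, limits should be re-queried in the webglcontextrestored handler rather than cached once at startup. It assumes the canvas, gl and render of the surrounding discussion; the initResources helper and the cached maxTextureSize value are illustrative.

var rafId;
var maxTextureSize;   // example of a capability that must not be cached across restores

canvas.addEventListener("webglcontextlost", function(e) {
  e.preventDefault();                // signal that we intend to handle restoration
  cancelAnimationFrame(rafId);
}, false);

canvas.addEventListener("webglcontextrestored", function() {
  maxTextureSize = gl.getParameter(gl.MAX_TEXTURE_SIZE);   // may differ on the new GPU
  initResources(gl);                 // assumed helper that rebuilds all GL objects
  rafId = requestAnimationFrame(render);
}, false);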
> > -Boris > > ------------------------------**----------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ------------------------------**----------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bzb...@ Mon Oct 22 14:10:30 2012 From: bzb...@ (Boris Zbarsky) Date: Mon, 22 Oct 2012 17:10:30 -0400 Subject: [Public WebGL] Bug in context-lost.html conformance test: getContextAttributes should not return null In-Reply-To: References: <5085A6F9.6020707@mit.edu> Message-ID: <5085B646.3060003@mit.edu> On 10/22/12 4:54 PM, Gregg Tavares (??) wrote: > That might be a bug in the test but just fyi, the attributes can change > when the context is restored. As an example: Find a win box with 2 GPUs. > Disable 2nd GPU, run a WebGL program, enable 2nd GPU, disable first GPU. > In an idea WebGL impl the context would be lost on the first gpu and > restored on the second. The second will have different capabilities > (different limits) and might have different bugs (different features > disabled like anti-aliasing) OK. So what should getContextAttributes() be returning while in the context-lost state, then? In particular, what should it return if a drawing buffer was never created? The spec doesn't seem to actually define this.... -Boris ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From vla...@ Mon Oct 22 14:57:50 2012 From: vla...@ (Vladimir Vukicevic) Date: Mon, 22 Oct 2012 14:57:50 -0700 Subject: [Public WebGL] using the same context with multiple canvases In-Reply-To: References: <507F5973.10808@mozilla.com> <50855FA5.5060600@mozilla.com> Message-ID: <5085C15E.2030500@mozilla.com> On 10/22/2012 1:42 PM, Gregg Tavares (??) wrote: > > On Mon, Oct 22, 2012 at 8:00 AM, Vladimir Vukicevic > > wrote: > > > My thoughts... ctx1/ctx2 should be equivalent, so no need to > specify draw/clear with no binding behaviour, different loss > behaviour, etc. ctx.canvas always references the creating canvas; > we don't really have much need to get the canvas from the context > anyway. > > > I get the canvas from the context all over the place in my code. Since > I can get the width/height of various things that way > > gl.canvas.width > > or > > gl.canvas.clientWidth > > or > > function takeScreenshot(gl) { > return gl.canvas.toDataURL(); > } > > etc. I means I don't have to pass both a context and a canvas. If > ctx.canvas does not point to the canvas currently being rendered to > that would break lots of my code. Fair enough, I suppose gl.canvas can just switch around to point at the current one. That would at least let libraries treat the gl object in a consistent way regardless of how many canvases the developer is drawing to. > Given 2 canvases of different sizes you'd need to set the viewport > after switching. 
It would arguably be nice to be able to do this > > ctx.bindFramebuffer(ctx.FRAMEBUFFER, someCanvas); > ctx.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight); > var aspect = gl.canvas.clientWidth / gl.canvas.clientHeight; > perspectiveMatrix = someMathLib.perspective(fieldOfView, aspect, > zNear, zFar); > > I'm also not sure that gl.bindFramebuffer(..., canvas) makes sense vs > a new function like gl.setDrawingBuffer(canvas). Conceptually its the > same but it's not really because ... > > gl.bindFramebuffer(gl.FRAMEBUFFER, canvas); > gl.framebufferTexture2D(...) // should fail > gl.framebufferRenderbuffer(..) // should fail > gl.deleteFramebuffer(canvas); // should fail We already have a special case where these all fail -- where the framebuffer is the special object 'null'. It's really just extending that special case... > Where as a new function like say gl.setDrawingBuffer(canvas) would > make it clear it's "special". Then this > > gl.bindFramebuffer(gl.FRAMEBUFFER, mybuffer); > gl.bindDrawingBuffer(canvas); > > would still render to mybuffer. To render to the current > canvas/drawingBuffer you'd still do > > gl.bindFramebuffer(gl.FRAMEBUFFER, null); // render to current > drawingBuffer. This feels a lot more confusing to me -- we already have framebuffer machinery that's well defined, and the current framebuffer is conceptually "where rendering goes". Introducing new API for switching canvases seems like it would just add an unnecessary layer to this. Additionally, if we do, then in the future we'll always have to consider our canvas-targetting functions for how they interact with extensions, etc. If we just define canvases to be framebuffers, and use the regular framebuffer approach, then we reduce the amount of work going forward. It feels much cleaner to me, and a lot less developer-thought required ("Which framebuffer is bound? That's where drawing is going." vs. "Which framebuffer is bound and which canvas is bound? Is the framebuffer null? Ok then it's going to the currently bound canvas, otherwise it's going to the currently bound framebuffer.") - Vlad > > (We can and maybe should introduce something like > ctx.getAttachedCanvases() or something?) > ctx.getContextAttibutes() we can extend to be > ctx.getContextAttributes(canvas), with default of null being the > original canvas (i.e. canvas1). > > - Vlad > > >> >> - Likewise, allowing canvases from different principals is going >> to be a source of pain, so I'd make it a requirement that >> canvases sharing a context must be on the same principal. >> >> Benoit >> >> On 12-10-17 09:04 PM, Gregg Tavares (??) wrote: >>> I'd like to pursue this idea further that Florian brought up >>> which is that, for drawing to multiple canvases, rather than >>> share resources across contexts it would be much nicer to just >>> be able to use the same context with multiple canvases. >>> Something like >>> >>> ctx1 = canvas1.getContext("webgl"); >>> ctx1.clearColor(0,1,0,1); >>> ctx1.clear(gl.COLOR_BUFFER_BIT); >>> canvas2.setContext(ctx1); >>> ctx1.clear(gl.COLOR_BUFFER_BIT); >>> >>> >>> Clear's both canvas1 and canvas2 to green >>> >>> Issues: Under the current WebGL the only way to setup a canvas >>> is by calling getContext(..., contextCreationParameters). >>> Whether the canvas is RGB or RGBA, has depth or stencil, is >>> premultiplied or not, has it's backbuffer preserved or not, etc.. >>> >>> Which leads to some questions. >>> >>> Is the current API okay. 
You'd create 2 contexts still but just >>> use one of them as in >>> >>> ctx1 = canvas1.getContext("webgl". { alpha: true } ); >>> ctx2 = canvas1.getContext("webgl". { alpha: false } ); >>> ctx1.clearColor(0,1,0,1); >>> ctx1.clear(gl.COLOR_BUFFER_BIT); >>> canvas2.setContext(ctx1); >>> ctx1.clear(gl.COLOR_BUFFER_BIT); >>> >>> >>> After the cal to cavnas2.setContext(ctx1) what does ctx2 do? >>> Does it still draw to cavnas2 as well? Does it get >>> GL_INVALID_FRAMEBUFFER_OPERATION if a draw/clear function is >>> called and no FBO is bound? Does it become lost? What does >>> ctx1.canvas reference and what does ctx2.canvas reference? What >>> does ctx1.getContextAttributes() return? The attributes for >>> canvas1 or canvas2? >>> >>> I think arguably being able to setContext on a canvas is the >>> right way to solve drawing to multiple canvas with the same >>> resources but there are some unfortunate existing API choices. >>> >>> Thoughts? >>> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Mon Oct 22 15:40:06 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Tue, 23 Oct 2012 00:40:06 +0200 Subject: [Public WebGL] using the same context with multiple canvases In-Reply-To: <5085C15E.2030500@mozilla.com> References: <507F5973.10808@mozilla.com> <50855FA5.5060600@mozilla.com> <5085C15E.2030500@mozilla.com> Message-ID: I don't object to using the framebuffer functions that already exist. My objection to "anything else than framebuffer" referred to the suggestion to introduce some drawing buffer special functions with framebuffer in the name. On Mon, Oct 22, 2012 at 11:57 PM, Vladimir Vukicevic wrote: > On 10/22/2012 1:42 PM, Gregg Tavares (??) wrote: > > > On Mon, Oct 22, 2012 at 8:00 AM, Vladimir Vukicevic wrote: > >> >> My thoughts... ctx1/ctx2 should be equivalent, so no need to specify >> draw/clear with no binding behaviour, different loss behaviour, etc. >> ctx.canvas always references the creating canvas; we don't really have much >> need to get the canvas from the context anyway. >> > > I get the canvas from the context all over the place in my code. Since I > can get the width/height of various things that way > > gl.canvas.width > > or > > gl.canvas.clientWidth > > or > > function takeScreenshot(gl) { > return gl.canvas.toDataURL(); > } > > etc. I means I don't have to pass both a context and a canvas. If > ctx.canvas does not point to the canvas currently being rendered to that > would break lots of my code. > > > Fair enough, I suppose gl.canvas can just switch around to point at the > current one. That would at least let libraries treat the gl object in a > consistent way regardless of how many canvases the developer is drawing to. > > > Given 2 canvases of different sizes you'd need to set the viewport > after switching. It would arguably be nice to be able to do this > > ctx.bindFramebuffer(ctx.FRAMEBUFFER, someCanvas); > ctx.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight); > var aspect = gl.canvas.clientWidth / gl.canvas.clientHeight; > perspectiveMatrix = someMathLib.perspective(fieldOfView, aspect, > zNear, zFar); > > I'm also not sure that gl.bindFramebuffer(..., canvas) makes sense vs a > new function like gl.setDrawingBuffer(canvas). Conceptually its the same > but it's not really because ... > > gl.bindFramebuffer(gl.FRAMEBUFFER, canvas); > gl.framebufferTexture2D(...) // should fail > gl.framebufferRenderbuffer(..) 
// should fail > gl.deleteFramebuffer(canvas); // should fail > > > We already have a special case where these all fail -- where the > framebuffer is the special object 'null'. It's really just extending that > special case... > > > Where as a new function like say gl.setDrawingBuffer(canvas) would make > it clear it's "special". Then this > > gl.bindFramebuffer(gl.FRAMEBUFFER, mybuffer); > gl.bindDrawingBuffer(canvas); > > would still render to mybuffer. To render to the current > canvas/drawingBuffer you'd still do > > gl.bindFramebuffer(gl.FRAMEBUFFER, null); // render to current > drawingBuffer. > > > This feels a lot more confusing to me -- we already have framebuffer > machinery that's well defined, and the current framebuffer is conceptually > "where rendering goes". Introducing new API for switching canvases seems > like it would just add an unnecessary layer to this. Additionally, if we > do, then in the future we'll always have to consider our canvas-targetting > functions for how they interact with extensions, etc. If we just define > canvases to be framebuffers, and use the regular framebuffer approach, then > we reduce the amount of work going forward. It feels much cleaner to me, > and a lot less developer-thought required ("Which framebuffer is bound? > That's where drawing is going." vs. "Which framebuffer is bound and which > canvas is bound? Is the framebuffer null? Ok then it's going to the > currently bound canvas, otherwise it's going to the currently bound > framebuffer.") > > - Vlad > > > > > > >> (We can and maybe should introduce something like >> ctx.getAttachedCanvases() or something?) ctx.getContextAttibutes() we can >> extend to be ctx.getContextAttributes(canvas), with default of null being >> the original canvas (i.e. canvas1). >> >> - Vlad >> >> >> >> - Likewise, allowing canvases from different principals is going to be a >> source of pain, so I'd make it a requirement that canvases sharing a >> context must be on the same principal. >> >> Benoit >> >> On 12-10-17 09:04 PM, Gregg Tavares (??) wrote: >> >> I'd like to pursue this idea further that Florian brought up which is >> that, for drawing to multiple canvases, rather than share resources across >> contexts it would be much nicer to just be able to use the same context >> with multiple canvases. Something like >> >> ctx1 = canvas1.getContext("webgl"); >> ctx1.clearColor(0,1,0,1); >> ctx1.clear(gl.COLOR_BUFFER_BIT); >> canvas2.setContext(ctx1); >> ctx1.clear(gl.COLOR_BUFFER_BIT); >> >> >> Clear's both canvas1 and canvas2 to green >> >> Issues: Under the current WebGL the only way to setup a canvas is by >> calling getContext(..., contextCreationParameters). Whether the canvas is >> RGB or RGBA, has depth or stencil, is premultiplied or not, has it's >> backbuffer preserved or not, etc.. >> >> Which leads to some questions. >> >> Is the current API okay. You'd create 2 contexts still but just use one >> of them as in >> >> ctx1 = canvas1.getContext("webgl". { alpha: true } ); >> ctx2 = canvas1.getContext("webgl". { alpha: false } ); >> ctx1.clearColor(0,1,0,1); >> ctx1.clear(gl.COLOR_BUFFER_BIT); >> canvas2.setContext(ctx1); >> ctx1.clear(gl.COLOR_BUFFER_BIT); >> >> >> After the cal to cavnas2.setContext(ctx1) what does ctx2 do? Does it >> still draw to cavnas2 as well? Does it get GL_INVALID_FRAMEBUFFER_OPERATION >> if a draw/clear function is called and no FBO is bound? Does it become >> lost? What does ctx1.canvas reference and what does ctx2.canvas reference? 
>> What does ctx1.getContextAttributes() return? The attributes for canvas1 or >> canvas2? >> >> I think arguably being able to setContext on a canvas is the right way >> to solve drawing to multiple canvas with the same resources but there are >> some unfortunate existing API choices. >> >> Thoughts? >> >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Mon Oct 22 17:56:45 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo56S+55SoKQ==?=) Date: Mon, 22 Oct 2012 17:56:45 -0700 Subject: [Public WebGL] using the same context with multiple canvases In-Reply-To: <5085C15E.2030500@mozilla.com> References: <507F5973.10808@mozilla.com> <50855FA5.5060600@mozilla.com> <5085C15E.2030500@mozilla.com> Message-ID: On Mon, Oct 22, 2012 at 2:57 PM, Vladimir Vukicevic wrote: > On 10/22/2012 1:42 PM, Gregg Tavares (??) wrote: > > > On Mon, Oct 22, 2012 at 8:00 AM, Vladimir Vukicevic wrote: > >> >> My thoughts... ctx1/ctx2 should be equivalent, so no need to specify >> draw/clear with no binding behaviour, different loss behaviour, etc. >> ctx.canvas always references the creating canvas; we don't really have much >> need to get the canvas from the context anyway. >> > > I get the canvas from the context all over the place in my code. Since I > can get the width/height of various things that way > > gl.canvas.width > > or > > gl.canvas.clientWidth > > or > > function takeScreenshot(gl) { > return gl.canvas.toDataURL(); > } > > etc. I means I don't have to pass both a context and a canvas. If > ctx.canvas does not point to the canvas currently being rendered to that > would break lots of my code. > > > Fair enough, I suppose gl.canvas can just switch around to point at the > current one. That would at least let libraries treat the gl object in a > consistent way regardless of how many canvases the developer is drawing to. > > > Given 2 canvases of different sizes you'd need to set the viewport > after switching. It would arguably be nice to be able to do this > > ctx.bindFramebuffer(ctx.FRAMEBUFFER, someCanvas); > ctx.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight); > var aspect = gl.canvas.clientWidth / gl.canvas.clientHeight; > perspectiveMatrix = someMathLib.perspective(fieldOfView, aspect, > zNear, zFar); > > I'm also not sure that gl.bindFramebuffer(..., canvas) makes sense vs a > new function like gl.setDrawingBuffer(canvas). Conceptually its the same > but it's not really because ... > > gl.bindFramebuffer(gl.FRAMEBUFFER, canvas); > gl.framebufferTexture2D(...) // should fail > gl.framebufferRenderbuffer(..) // should fail > gl.deleteFramebuffer(canvas); // should fail > > > We already have a special case where these all fail -- where the > framebuffer is the special object 'null'. It's really just extending that > special case... > > > Where as a new function like say gl.setDrawingBuffer(canvas) would make > it clear it's "special". Then this > > gl.bindFramebuffer(gl.FRAMEBUFFER, mybuffer); > gl.bindDrawingBuffer(canvas); > > would still render to mybuffer. To render to the current > canvas/drawingBuffer you'd still do > > gl.bindFramebuffer(gl.FRAMEBUFFER, null); // render to current > drawingBuffer. > > > This feels a lot more confusing to me -- we already have framebuffer > machinery that's well defined, and the current framebuffer is conceptually > "where rendering goes". Introducing new API for switching canvases seems > like it would just add an unnecessary layer to this. 
Additionally, if we > do, then in the future we'll always have to consider our canvas-targetting > functions for how they interact with extensions, etc. If we just define > canvases to be framebuffers, and use the regular framebuffer approach, then > we reduce the amount of work going forward. It feels much cleaner to me, > and a lot less developer-thought required ("Which framebuffer is bound? > That's where drawing is going." vs. "Which framebuffer is bound and which > canvas is bound? Is the framebuffer null? Ok then it's going to the > currently bound canvas, otherwise it's going to the currently bound > framebuffer.") > I see your point. I'm just thinking off all the exceptions you have to write into the spec. gl.bindFramebuffer(gl.FRAMEBUFFER, canvas); gl.bindFramebuffer(gl.FRAMEBUFFER, null); // this is the same as the previous call if that canvas was already bound? gl.bindFramebuffre(gl.FRAMEBUFFER, myFBO); gl.getParamater(gl.FRAMEBUFFER_BINDING); // returns myFBO gl.bindFramebuffre(gl.FRAMEBUFFER, null); gl.getParamater(gl.FRAMEBUFFER_BINDING); // returns null gl.bindFramebuffre(gl.FRAMEBUFFER, canvas); gl.getParamater(gl.FRAMEBUFFER_BINDING); // returns null??? But I just bound something gl.bindFramebuffer(gl.FRAMEBUFFER, myFBO); gl.framebufferTexture2D(gl.FRAMEBUFFER, ...); // works gl.bindFramebuffer(gl.FRAMEBUFFER, null); gl.framebufferTexture2D(gl.FRAMEBUFFER, ...); // fails because there is no FBO gl.bindFramebuffer(gl.FRAMEBUFFER, canvas); gl.framebufferTexture2D(gl.FRAMEBUFFER, ...); // fails?? I just bound something. Seems like It should succeed. I can change the size of an FBO like this gl.bindFramebuffer(gl.FRAMEBUFFER, myFBO); texture = gl.getFramebufferParameter(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_NAME); gl.bindTexture(gl.TEXTURE_2D, texture); gl.texImage2D(gl.TEXTURE, 0, gl.RGBA, newWidth, newHeight, 0, gl.RGBA, gl.UNSIGNED_BYTE, null); But to change the size of the canvas I can't do that. I've got to call other functions. If those are different why not this one? Another thing to consider. If we ever do support shared resources, per the ES spec FBOs can not be shared. So the drawingBuffer would not really be an FBO. It would be a texture (for color) and a 1 or more renderbuffers (for depth/stencil) and would have to be a different FBO in each context that uses it. So it's not really binding the same FBO in that case. Each context internally would probably have an "drawingBufferFBO" and you'd end up internally calling WebGLRenderingContext::bindDrawingBuffer(HTMLCanvasElement* canvas) { // bind the context specific fbo for it's simulated backbuffer. glBindFramebuffer(GL_FRAMEBUFFER, context->drawing_buffer_fbo); // attach the shared texture and renderbuffer to the unshared fbo object. glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, canvas->drawing_buffer_color_texture_, 0); glFramebufferRenderbuffer( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, canvas->drawing_buffer_depth_renderbuffer_); // restore the current fbo in the context (or not?) glBindFramebuffer(GL_FRAMEBUFFER, context->current_fbo ? context->current_fbo : context_drawing_buffer_fbo); } > > - Vlad > > > > > > >> (We can and maybe should introduce something like >> ctx.getAttachedCanvases() or something?) ctx.getContextAttibutes() we can >> extend to be ctx.getContextAttributes(canvas), with default of null being >> the original canvas (i.e. canvas1). 
>> >> - Vlad >> >> >> >> - Likewise, allowing canvases from different principals is going to be a >> source of pain, so I'd make it a requirement that canvases sharing a >> context must be on the same principal. >> >> Benoit >> >> On 12-10-17 09:04 PM, Gregg Tavares (??) wrote: >> >> I'd like to pursue this idea further that Florian brought up which is >> that, for drawing to multiple canvases, rather than share resources across >> contexts it would be much nicer to just be able to use the same context >> with multiple canvases. Something like >> >> ctx1 = canvas1.getContext("webgl"); >> ctx1.clearColor(0,1,0,1); >> ctx1.clear(gl.COLOR_BUFFER_BIT); >> canvas2.setContext(ctx1); >> ctx1.clear(gl.COLOR_BUFFER_BIT); >> >> >> Clear's both canvas1 and canvas2 to green >> >> Issues: Under the current WebGL the only way to setup a canvas is by >> calling getContext(..., contextCreationParameters). Whether the canvas is >> RGB or RGBA, has depth or stencil, is premultiplied or not, has it's >> backbuffer preserved or not, etc.. >> >> Which leads to some questions. >> >> Is the current API okay. You'd create 2 contexts still but just use one >> of them as in >> >> ctx1 = canvas1.getContext("webgl". { alpha: true } ); >> ctx2 = canvas1.getContext("webgl". { alpha: false } ); >> ctx1.clearColor(0,1,0,1); >> ctx1.clear(gl.COLOR_BUFFER_BIT); >> canvas2.setContext(ctx1); >> ctx1.clear(gl.COLOR_BUFFER_BIT); >> >> >> After the cal to cavnas2.setContext(ctx1) what does ctx2 do? Does it >> still draw to cavnas2 as well? Does it get GL_INVALID_FRAMEBUFFER_OPERATION >> if a draw/clear function is called and no FBO is bound? Does it become >> lost? What does ctx1.canvas reference and what does ctx2.canvas reference? >> What does ctx1.getContextAttributes() return? The attributes for canvas1 or >> canvas2? >> >> I think arguably being able to setContext on a canvas is the right way >> to solve drawing to multiple canvas with the same resources but there are >> some unfortunate existing API choices. >> >> Thoughts? >> >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cal...@ Mon Oct 22 19:00:35 2012 From: cal...@ (Mark Callow) Date: Tue, 23 Oct 2012 11:00:35 +0900 Subject: [Public WebGL] texImage2D on unfinished images and video In-Reply-To: References: Message-ID: <5085FA43.7040808@artspark.co.jp> On 2012/10/23 2:32, Kenneth Russell wrote: > On Sun, Oct 21, 2012 at 11:52 PM, Gregg Tavares (??) wrote: > ... > ... (If the app > calls texImage2D all the time, then it doesn't matter whether WebGL > generates INVALID_OPERATION or a 1x1 black texture. The app will never > need to know whether the upload succeeded or failed, because failures > will be transient, and the app will automatically recover from them > when the next video starts playing. If texImage2D generates > INVALID_OPERATION, the previous contents of the texture will be > preserved.) But an INVALID_OPERATION error will be latched so it does matter. If the program does not check and just keeps calling gl.texImage2D then the latched INVALID_OPERATION will likely cause a failure elsewhere in the application. So generating an error causes slightly more work for the application. Regards -Mark Please note that, due to the integration of management operations following establishment of our new holding company, my e-mail address has changed to callow.mark<@>artspark<.>co<.>jp. I can receive messages at the old address for the rest of this year but please update your address book as soon as possible. 
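A short illustration of the latching Mark describes, assuming a gl context, texture and video element as in the earlier examples: getError returns (and clears) the first error recorded since the previous call, so an unchecked failure in texImage2D surfaces later and gets blamed on an unrelated call. The tryUploadFrame helper is illustrative only.

// If no frame is available yet, this may record INVALID_OPERATION silently...
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);

// ...and the latched error then shows up here, far from its real cause.
gl.drawArrays(gl.TRIANGLES, 0, 6);
if (gl.getError() !== gl.NO_ERROR) {
  // looks like the draw failed, but it was the earlier upload
}

// Checking (and thereby clearing) the error right after the upload keeps the
// failure attributable, at the cost of an extra getError call per frame.
function tryUploadFrame(gl, video) {
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
  return gl.getError() === gl.NO_ERROR;
}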
From cal...@ Tue Oct 23 00:01:24 2012
From: cal...@ (Mark Callow)
Date: Tue, 23 Oct 2012 16:01:24 +0900
Subject: [Public WebGL] using the same context with multiple canvases
In-Reply-To: <50855FA5.5060600@mozilla.com>
References: <507F5973.10808@mozilla.com> <50855FA5.5060600@mozilla.com>
Message-ID: <508640C4.7060003@artspark.co.jp>

On 2012/10/23 0:00, Vladimir Vukicevic wrote:
> ...
> If we take the view that each canvas operates like a framebuffer, then
> the attribs will be used to configure the framebuffer for that
> canvas. For example, you may want one canvas that has depth but no
> alpha, and another where you need alpha but don't care about depth.
>
> So, for example:
>
> var gl = canvas1.getContext("webgl", { depth: true, alpha: false });
> var gl2 = canvas2.getContext("webgl", { attach: ctx, depth: false,
> alpha: true });
> assert(gl === gl2);
>
> ... ctx.getContextAttibutes() we can extend to be
> ctx.getContextAttributes(canvas), with default of null being the
> original canvas (i.e. canvas1).

There is a contradiction highlighted by the above where certain attributes
are framebuffer (canvas) attributes yet we continue to set and get them as
context attributes. Which attributes are you allowed to change when getting
gl2 above without affecting canvas1? The canvas is really a color attachment
to an FBO. The only attribute that directly applies is alpha.

Unfortunately context attributes and surface attributes were conflated by
EGL (and its precursors). Because of this conflation, which is behind most
of the issues Gregg raised, I prefer to not re-use bindFramebuffer but
instead have a new command such as ctx.setDefaultFramebuffer or
ctx.setCanvas.

Regards

-Mark

From cal...@ Wed Oct 24 00:20:32 2012
From: cal...@ (Mark Callow)
Date: Wed, 24 Oct 2012 16:20:32 +0900
Subject: [Public WebGL] WEBGL_dynamic_texture redux
Message-ID: <508796C0.8000605@artspark.co.jp>

Hi,

I am updating the WEBGL_dynamic_texture proposal to (a) provide a better
interface to the stream producer (HTMLVideoElement, etc.) and (b) provide
tools for handling timing and synchronization issues. Rather than writing
spec. text I have been playing with sample code to see how various ideas
feel. The entire sample program is attached. Please review it and send your
feedback.
Hopefully the embedded comments and IDL interface definitions give sufficient background for understanding. (a) stemmed from David Sheets comments to this list requesting the stream interface be added to the producer HTML elements. The sample code offers two alternatives shown in the extract below: augmenting the producer element with the stream interface or keeping it as a separate object. For (b) I've added query functions based on a monotonically increasing counter to retrieve the current value and to retrieve the value the last time the canvas was presented (updated to the screen). The first part of the extract shows how the video producer and texture consumer are connected via a new wdtStream interface. The second part, the drawFrame function shows acquire and release of the frames and also how to determine how long it is taking to display the frames, whether any are being missed, etc. Once we're all happy with this, I'll update the spec. text and then I think we'll be able to move it from proposal to draft. // // connectVideo // // Connect video from the passed HTMLVideoElement to the texture // currently bound to TEXTURE_EXTERNAL_OES on the active texture // unit. // // First a wdtStream object is created with its consumer set to // the texture. Once the video is loaded, it is set as the // producer. This could potentially fail, depending on the // video format. // // interface wdtStream { // enum state { // // Consumer connected; waiting for producer to connect // wdtStreamConnecting, // // Producer & consumer connected. No frames yet. */ // wdtStreamEmpty, // wdtStreamNewFrameAvailable, // wdtStreamOldFrameAvailable, // wdtStreamDisconnected // }; // // Time taken from acquireImage to posting drawing buffer; default 0? // readonly int consumerLatency; // // Frame # (aka Media Stream Count) of most recently inserted frame // // Value is 1 at first frame. // readonly int producerFrame; // // MSC of most recently acquired frame. // readonly int consumerFrame; // // timeout for acquireImage; default 0 // int acquireTimeout; // // void setConsumerLatency(int); // }; // // function connectVideo(ctx, video) { g.loadingFiles.push(video); g.videoReady = false; //----------------------------- // Options for connecting to video //----------------------------- // OPTION 1: method on WDT extension augments video element // with a wdtStream object. ctx.dte.createStream(video); assert(video.wdtStream.state == wdtStreamConnecting); //----------------------------- // OPTION 2: method returns a stream object. g.vstream = ctx.dte.createStream(ctx); assert(g.vstream.state == wdtStreamConnecting); //----------------------------- video.onload = function() { g.loadingFiles.splice(g.loadingFiles.indexOf(video), 1); *try* { // OPTION 1: video object augmented with stream video.wdtStream.connect(); assert(video.wdtStream.state == wdtStreamEmpty); //----------------------------- // OPTION 2: separate stream object g.vstream.connectProducer(video); assert(g.stream.state == wdtStreamEmpty); //------------------------------ *if* (!video.autoplay) { video.play(); // Play video } g.videoReady = true; } *catch* (e) { *window*.*alert*("Video texture setup failed: " + e.name); } }; } function drawFrame(gl) { var lastFrame; var syncValues; var latency; var graphicsMSCBase; // Make sure the canvas is sized correctly. reshape(gl); // Clear the canvas gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); // Matrix set-up deleted ... // To avoid duplicating everything below for each option, use a // temporary variable. 
This will not be necessary in the final // code. // OPTION 1: augmented video object var vstream = g.video.wdtStream; // OPTION 2: separate stream object var vstream = g.vstream; // In the following // UST is a monotonically increasing counter never adjusted by NTP etc. // The unit is nanoseconds but the frequency of update will vary from // system to system. The average frequency at which the counter is // updated should be 5x the highest MSC frequency supported. For // example if highest MSC is 48kHz (audio) the update frequency // should be 240kHz. Most OSes have this kind of counter available. // // MSC is the media stream count. It is incremented once/sample; for // video that means once/frame, for audio once/sample. For graphics, // it is incremented once/screen refresh. // // CPC is the canvas presentation count. It is incremented once // each time the canvas is presented. // *if* (graphicsMSCBase == *undefined*) { graphicsMSCBase = gl.dte.getSyncValues().msc; } *if* (lastFrame.msc && vstream.producerFrame > lastFrame.msc + 1) { // Missed a frame! Simplify rendering? } *if* (!latency.frameCount) { // Initialize latency.frameCount = 0; latency.accumTotal = 0; } *if* (lastFrame.ust) { syncValues = gl.dte.getSyncValues(); // interface syncValues { // // UST of last present // readonly attribute long long ust; // // Screen refresh count (aka MSC) at last present // // Initialized to 0 on browser start // readonly attribute long msc; // // Canvas presentation count at last present // // Initialized to 0 at canvas creation. // readonly attribute long cpc; // }; // XXX What happens to cpc when switch to another tab? *if* (syncValues.msc - graphicsMSCBase != syncValues.cpc) { // We are not keeping up with screen refresh! // Or are we? If cpc increment stops when canvas hidden, // will need some way to know canvas was hidden so app // won't just assume its not keeping up and therefore // adjust its rendering. graphicsMSCBase = syncValues.msc; // reset base. } latency.accumValue += syncValues.ust - lastFrame.ust; latency.frameCount++; *if* (latency.frameCount == 30) { vstream.setConsumerLatency(latency.accumValue / 30); latency.frameCount = 0; latency.accumValue = 0; } } *if* (g.videoReady) { *if* (g.video.wdtStream.acquireImage()) { // Record UST of frame acquisition. // No such system function in JS so it is added to extension. lastFrame.ust = gl.dte.ustnow(); lastFrame.msc = vstream.consumerFrame; } // OPTION 2: vstream.acquireImage(); lastFrame = g.stream.consumerFrame; } // Draw the cube gl.drawElements(gl.TRIANGLES, g.box.numIndices, gl.UNSIGNED_BYTE, 0); *if* (g.videoReady) vtream.releaseImage(); // Show the framerate framerate.snapshot(); currentAngle += incAngle; *if* (currentAngle > 360) currentAngle -= 360; } Regards -Mark Please note that, due to the integration of management operations following establishment of our new holding company, my e-mail address has changed to callow.mark<@>artspark<.>co<.>jp. I can receive messages at the old address for the rest of this year but please update your address book as soon as possible. -- ???????????????????????????????????? ?????????????????????????? ?? ??????? ???????????????????????????????????? ??????????????????? ?? ??. NOTE: This electronic mail message may contain confidential and privileged information from HI Corporation. If you are not the intended recipient, any disclosure, photocopying, distribution or use of the contents of the received information is prohibited. 
If you have received this e-mail in error, please notify the sender immediately and permanently delete this message and all related copies. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Wed Oct 24 12:17:30 2012 From: kbr...@ (Kenneth Russell) Date: Wed, 24 Oct 2012 12:17:30 -0700 Subject: [Public WebGL] Bug in oes-texture-float conformance test In-Reply-To: References: <50805264.4060805@mozilla.com> Message-ID: OK. I've updated the oes-texture-float.html conformance suite both in 1.0.1 and top of tree to make the floating-point render target support optional, to match the WebGL extension spec. -Ken On Thu, Oct 18, 2012 at 12:19 PM, Florian B?sch wrote: > I remember that the discussion about the validity of rendertargets has come > up on this list and the concensus has been that unless the FBO you attached > any combination of rendertargets to has passed the validity test you won't > know if those are valid rendertargets on your current platform. So aparts > from some very basic rendertargets, a lot is optional. > > > On Thu, Oct 18, 2012 at 9:03 PM, Benoit Jacob wrote: >> >> >> I'm slightly leaning in favor of 1) because it is much easier to relax a >> conformance test than it is to make a spec change that can effectively >> break real WebGL content on certain hardware. >> >> I don't have any data on how useful float textures are without the >> ability to render to them. >> >> Benoit >> >> On 12-10-18 02:53 PM, Kenneth Russell wrote: >> > Mark Callow has pointed out yet another problem in WebGL's exposure of >> > the OES_texture_float extension: >> > http://www.khronos.org/registry/webgl/extensions/OES_texture_float/ >> > >> > The WebGL version of this extension supports optional rendering to >> > floating-point (FP) textures, even though this is strictly speaking >> > not allowed by OpenGL ES 2.0. >> > >> > Further, to date, the WebGL conformance test for this extension has >> > required that FP render targets be supported, as a >> > quality-of-implementation issue. The expectation has basically been >> > that this extension would only be available on desktop platforms until >> > next-generation (OpenGL ES 3.0) hardware arrives. >> > >> > Now there are some mobile vendors wishing to expose the >> > OES_texture_float extension in their WebGL implementations. They >> > support the underlying GL_OES_texture_float extension but not FP >> > render targets. See >> > http://www.khronos.org/bugzilla/show_bug.cgi?id=729 , which points out >> > quite rightly that the conformance test is testing something the spec >> > doesn't say. >> > >> > Should we: >> > >> > 1) Change the conformance test to make FP render target support optional >> > 2) Change the spec to require FP render target support >> > >> > (1) might break some WebGL demos because they might be incorrectly >> > assuming that the OES_texture_float extension implies FP render target >> > support -- because the conformance test has enforced this to date. >> > (2) would prevent the majority of OpenGL ES 2.0 implementations from >> > exposing OES_texture_float support. I've been told that the extension >> > is useful even without FP render target support, though my opinion is >> > that it's much more useful when FP render targets are supported. >> > >> > Thoughts please? Would like to resolve this issue soon. 
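With the conformance test relaxed as described at the top of this message, applications that want to render to float textures need their own runtime probe rather than assuming the extension implies renderability. A minimal sketch of the usual pattern (checkFloatRenderTarget is just an illustrative name, not part of any spec):

function checkFloatRenderTarget(gl) {
    // OES_texture_float only guarantees sampling from float textures;
    // rendering to them has to be probed separately.
    if (!gl.getExtension("OES_texture_float"))
        return false;
    var tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 4, 4, 0,
                  gl.RGBA, gl.FLOAT, null);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
    var fbo = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                            gl.TEXTURE_2D, tex, 0);
    var ok = gl.checkFramebufferStatus(gl.FRAMEBUFFER) ==
             gl.FRAMEBUFFER_COMPLETE;
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    gl.deleteFramebuffer(fbo);
    gl.deleteTexture(tex);
    return ok;
}

An application that gets false back can fall back to UNSIGNED_BYTE render targets or skip the effect entirely.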
>> > >> > Thanks, >> > >> > -Ken >> > >> > ----------------------------------------------------------- >> > You are currently subscribed to public_webgl...@ >> > To unsubscribe, send an email to majordomo...@ with >> > the following command in the body of your email: >> > unsubscribe public_webgl >> > ----------------------------------------------------------- >> > >> >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From bja...@ Wed Oct 24 12:42:48 2012 From: bja...@ (Benoit Jacob) Date: Wed, 24 Oct 2012 15:42:48 -0400 Subject: [Public WebGL] Bug in context-lost.html conformance test: getContextAttributes should not return null In-Reply-To: <5085B646.3060003@mit.edu> References: <5085A6F9.6020707@mit.edu> <5085B646.3060003@mit.edu> Message-ID: <508844B8.4050208@mozilla.com> On 12-10-22 05:10 PM, Boris Zbarsky wrote: > > On 10/22/12 4:54 PM, Gregg Tavares (??) wrote: >> That might be a bug in the test but just fyi, the attributes can change >> when the context is restored. As an example: Find a win box with 2 GPUs. >> Disable 2nd GPU, run a WebGL program, enable 2nd GPU, disable first GPU. >> In an idea WebGL impl the context would be lost on the first gpu and >> restored on the second. The second will have different capabilities >> (different limits) and might have different bugs (different features >> disabled like anti-aliasing) > > OK. So what should getContextAttributes() be returning while in the > context-lost state, then? > > In particular, what should it return if a drawing buffer was never > created? The spec doesn't seem to actually define this.... Ping! This was an important thread IMO. Benoit > > -Boris > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gma...@ Thu Oct 25 10:38:22 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo56S+55SoKQ==?=) Date: Thu, 25 Oct 2012 10:38:22 -0700 Subject: [Public WebGL] Bug in context-lost.html conformance test: getContextAttributes should not return null In-Reply-To: <508844B8.4050208@mozilla.com> References: <5085A6F9.6020707@mit.edu> <5085B646.3060003@mit.edu> <508844B8.4050208@mozilla.com> Message-ID: Off the top of my head I'd say returning the lost context's attributes is fine? On Wed, Oct 24, 2012 at 12:42 PM, Benoit Jacob wrote: > > On 12-10-22 05:10 PM, Boris Zbarsky wrote: > > > > On 10/22/12 4:54 PM, Gregg Tavares (??) 
wrote: > >> That might be a bug in the test but just fyi, the attributes can change > >> when the context is restored. As an example: Find a win box with 2 GPUs. > >> Disable 2nd GPU, run a WebGL program, enable 2nd GPU, disable first GPU. > >> In an idea WebGL impl the context would be lost on the first gpu and > >> restored on the second. The second will have different capabilities > >> (different limits) and might have different bugs (different features > >> disabled like anti-aliasing) > > > > OK. So what should getContextAttributes() be returning while in the > > context-lost state, then? > > > > In particular, what should it return if a drawing buffer was never > > created? The spec doesn't seem to actually define this.... > > Ping! This was an important thread IMO. > > Benoit > > > > > -Boris > > > > ----------------------------------------------------------- > > You are currently subscribed to public_webgl...@ > > To unsubscribe, send an email to majordomo...@ with > > the following command in the body of your email: > > unsubscribe public_webgl > > ----------------------------------------------------------- > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bzb...@ Thu Oct 25 11:05:12 2012 From: bzb...@ (Boris Zbarsky) Date: Thu, 25 Oct 2012 14:05:12 -0400 Subject: [Public WebGL] Bug in context-lost.html conformance test: getContextAttributes should not return null In-Reply-To: References: <5085A6F9.6020707@mit.edu> <5085B646.3060003@mit.edu> <508844B8.4050208@mozilla.com> Message-ID: <50897F58.7060306@mit.edu> On 10/25/12 1:38 PM, Gregg Tavares (??) wrote: > Off the top of my head I'd say returning the lost context's attributes > is fine? Can we never be in a lost-context state without having created a context at any point? -Boris ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From bja...@ Thu Oct 25 11:28:47 2012 From: bja...@ (Benoit Jacob) Date: Thu, 25 Oct 2012 14:28:47 -0400 Subject: [Public WebGL] Bug in context-lost.html conformance test: getContextAttributes should not return null In-Reply-To: <50897F58.7060306@mit.edu> References: <5085A6F9.6020707@mit.edu> <5085B646.3060003@mit.edu> <508844B8.4050208@mozilla.com> <50897F58.7060306@mit.edu> Message-ID: <508984DF.4080904@mozilla.com> On 12-10-25 02:05 PM, Boris Zbarsky wrote: > > On 10/25/12 1:38 PM, Gregg Tavares (??) wrote: >> Off the top of my head I'd say returning the lost context's attributes >> is fine? > > Can we never be in a lost-context state without having created a > context at any point? This is not currently possible. lost-context state is a property of a WebGL context, so you can't get there without having earlier created that context. 
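The lifecycle being discussed is the one applications already handle through the two context loss events; a minimal sketch of that handling, where canvas and gl are the element and its WebGLRenderingContext and initResources, stopRendering and startRendering stand in for application code:

canvas.addEventListener("webglcontextlost", function (e) {
    // Without preventDefault() the context will never be restored.
    e.preventDefault();
    stopRendering();           // e.g. cancel the requestAnimationFrame loop
}, false);

canvas.addEventListener("webglcontextrestored", function () {
    // Every texture, buffer, shader and program is gone; recreate them
    // before drawing again.
    initResources(gl);
    startRendering();
}, false);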
Benoit > > -Boris > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gma...@ Thu Oct 25 11:29:59 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo56S+55SoKQ==?=) Date: Thu, 25 Oct 2012 11:29:59 -0700 Subject: [Public WebGL] Bug in context-lost.html conformance test: getContextAttributes should not return null In-Reply-To: <50897F58.7060306@mit.edu> References: <5085A6F9.6020707@mit.edu> <5085B646.3060003@mit.edu> <508844B8.4050208@mozilla.com> <50897F58.7060306@mit.edu> Message-ID: Yes, we can actually. Ideally we can call var gl = canvas.getContext("webgl"); and the context is already lost. isContextLost() returns true and the contextlostevent will be delivered once the current event exits Note, the samples on the WebGL wiki and the ones on webglsamples.googlecode.com all handle this case. That does bring up a good point as to why getContextAttrbutes should maybe return null during contextlost. I know that sucks though since you have to be defensive about accessing it but I don't know what it could report since there is no context to give attributes about. On Thu, Oct 25, 2012 at 11:05 AM, Boris Zbarsky wrote: > > On 10/25/12 1:38 PM, Gregg Tavares (??) wrote: > >> Off the top of my head I'd say returning the lost context's attributes >> is fine? >> > > Can we never be in a lost-context state without having created a context > at any point? > > > -Boris > > ------------------------------**----------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ------------------------------**----------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Thu Oct 25 11:38:31 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo56S+55SoKQ==?=) Date: Thu, 25 Oct 2012 11:38:31 -0700 Subject: [Public WebGL] Bug in context-lost.html conformance test: getContextAttributes should not return null In-Reply-To: <508984DF.4080904@mozilla.com> References: <5085A6F9.6020707@mit.edu> <5085B646.3060003@mit.edu> <508844B8.4050208@mozilla.com> <50897F58.7060306@mit.edu> <508984DF.4080904@mozilla.com> Message-ID: Maybe it's a semantic issue but the thing you get back from canvas.getContext is a handle to context, not a context itself. When the context is lost, that handle effectively points to no context. You can call canvas.getContext and have it return a handle (a WebGLRenderingContext) that is lost On Thu, Oct 25, 2012 at 11:28 AM, Benoit Jacob wrote: > > On 12-10-25 02:05 PM, Boris Zbarsky wrote: > > > > On 10/25/12 1:38 PM, Gregg Tavares (??) wrote: > >> Off the top of my head I'd say returning the lost context's attributes > >> is fine? > > > > Can we never be in a lost-context state without having created a > > context at any point? > > This is not currently possible. 
lost-context state is a property of a > WebGL context, so you can't get there without having earlier created > that context. > > Benoit > > > > > -Boris > > > > ----------------------------------------------------------- > > You are currently subscribed to public_webgl...@ > > To unsubscribe, send an email to majordomo...@ with > > the following command in the body of your email: > > unsubscribe public_webgl > > ----------------------------------------------------------- > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Tue Oct 30 10:26:58 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Tue, 30 Oct 2012 18:26:58 +0100 Subject: [Public WebGL] CORS and resource provider awareness Message-ID: I'm recently hitting a problem a lot that shouldn't exist. This is about using cross origin images/videos in WebGL. And I'd like everybody to be aware that 1) there are nasty restrictions that "evolved" and that 2) most resource providers are oblivious to the issue and if perhaps we can rise awareness it would help. A short and probably inaccurate history of cross origin history: 1) canvas and webgl came along, everything was fine, we could get images. 2) Somebody decided that presented a security issue and vendors implemented canvas/webgl/image tainting, things where fine, most legitimate uses wouldn't try to send the image data around. 3) CORS came along and everybody rejoiced, finally a way to share those resources and mark the ones that are not security sensitive 4) Vendors seeing CORS decide that it's now legitimate to drop the old tainting model and just flatly prohibit cross origin access to resources if the CORS headers are not set. What's broken? Most providers after step #4 of resources are oblivious that suddenly resources they intended to be embeddable/sharable are now no longer fully accessible to canvas, and not accessible at all to WebGL. How to solve it (not): get rid of CORS and cross origin restrictions. No really, I would prefer this, but it's not gonna happen. How to solve it (really now): Providers of resources *have* to be aware that they have to set cross origin headers now and implement CORS. There's no way around it. Please, please do it. It sucks if you don't. Recent example: google static maps Other examples: everything everywhere TL;DR Please set CORS headers, your're killing baby seals. Thanks ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Tue Oct 30 11:15:31 2012 From: kbr...@ (Kenneth Russell) Date: Tue, 30 Oct 2012 11:15:31 -0700 Subject: [Public WebGL] CORS and resource provider awareness In-Reply-To: References: Message-ID: At the time the changes were made to the WebGL specification to disallow access to cross-origin resources for security reasons, the Picasa team at Google was most helpful in adding support for anonymous CORS requests for those pictures which are publicly accessible on the web. 
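Concretely, the anonymous CORS request is what a page gets by setting the crossOrigin attribute on the image before assigning src; a minimal sketch of the consumer side (gl is an existing WebGLRenderingContext and the URL is only an example):

var img = new Image();
img.crossOrigin = "anonymous";   // send an Origin: header, no credentials
img.onload = function () {
    var tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
    // Succeeds, and leaves the canvas untainted, only if the response
    // carried an Access-Control-Allow-Origin header covering this origin.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
};
img.src = "http://photos.example.com/kitten.jpg";

If the server does not answer with that header, the image either fails to load (with crossOrigin set) or is rejected by texImage2D with a security error, which is exactly the breakage being described in this thread.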
I would encourage you and others to do a persistent writeup of the issues (on a blog? on the WebGL wiki? If you want to do the latter and run into any problems, let me know) and directly contact the resource providers you care about. One slight gotcha is that due to limitations in browsers' caches, it's essential that the provider watch for cookie-less requests where the "Origin:" header is set, and set the header "Access-Control-Allow-Origin: *" in response. If they set the "Vary: Origin" response header, then most browsers won't cache the result. This was discovered during the work with the Picasa team. Essentially, anonymous CORS requests are all we know how to make work right now -- but hopefully getting them well supported will still enable many interesting kinds of WebGL applications. It is *not* safe for the resource provider just to set the "Access-Control-Allow-Origin: *" header all the time -- they need to be aware of the security consequences. Access-controlled resources must *not* have this header set. Separately, the WebGL community should collectively pursue the idea of checking whether shaders obey the timing restrictions being defined by the CSS shaders specification. If that works, then WebGL applications would once again be able to safely access cross-domain media. -Ken On Tue, Oct 30, 2012 at 10:26 AM, Florian B?sch wrote: > > I'm recently hitting a problem a lot that shouldn't exist. This is > about using cross origin images/videos in WebGL. And I'd like > everybody to be aware that 1) there are nasty restrictions that > "evolved" and that 2) most resource providers are oblivious to the > issue and if perhaps we can rise awareness it would help. > > A short and probably inaccurate history of cross origin history: > 1) canvas and webgl came along, everything was fine, we could get images. > 2) Somebody decided that presented a security issue and vendors > implemented canvas/webgl/image tainting, things where fine, most > legitimate uses wouldn't try to send the image data around. > 3) CORS came along and everybody rejoiced, finally a way to share > those resources and mark the ones that are not security sensitive > 4) Vendors seeing CORS decide that it's now legitimate to drop the old > tainting model and just flatly prohibit cross origin access to > resources if the CORS headers are not set. > > What's broken? Most providers after step #4 of resources are oblivious > that suddenly resources they intended to be embeddable/sharable are > now no longer fully accessible to canvas, and not accessible at all to > WebGL. > > How to solve it (not): get rid of CORS and cross origin restrictions. > No really, I would prefer this, but it's not gonna happen. > How to solve it (really now): Providers of resources *have* to be > aware that they have to set cross origin headers now and implement > CORS. There's no way around it. Please, please do it. It sucks if you > don't. > > Recent example: google static maps > Other examples: everything everywhere > > TL;DR > Please set CORS headers, your're killing baby seals. 
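What "set CORS headers" amounts to on the provider side is small; a sketch of a minimal Node.js handler for a public image, following the advice earlier in this thread about cookie-less requests that carry an Origin: header (the file path and port are invented):

var http = require("http");
var fs = require("fs");

http.createServer(function (req, res) {
    if (req.headers.origin && !req.headers.cookie) {
        // Anonymous CORS request for a public resource: open it up.
        // Access-controlled resources must NOT get this header.
        res.setHeader("Access-Control-Allow-Origin", "*");
        // Deliberately no "Vary: Origin", so browser caches keep working.
    }
    res.setHeader("Content-Type", "image/jpeg");
    fs.createReadStream("./images/kitten.jpg").pipe(res);
}).listen(8080);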
> > Thanks > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > unsubscribe public_webgl > ----------------------------------------------------------- > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Tue Oct 30 11:53:25 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Tue, 30 Oct 2012 19:53:25 +0100 Subject: [Public WebGL] CORS and resource provider awareness In-Reply-To: References: Message-ID: On Tue, Oct 30, 2012 at 7:15 PM, Kenneth Russell wrote: > Separately, the WebGL community should collectively pursue the idea of > checking whether shaders obey the timing restrictions being defined by > the CSS shaders specification. If that works, then WebGL applications > would once again be able to safely access cross-domain media. Far as I could find the CSS shader spec has not yet decided on a restricting mechanism (timing or otherwise). Btw. I think that timing restrictions are not a good idea because it implies obeying a fixed time window for shader execution, which in turn implies that: 1) shaders *may* be aborted, which breaks apps. 2) drawArrays/Elements calls *will* be delayed from returning degrading app performance. To me those issues look worse than what they're trying to solve, i.e. the proverbial cure that kills the patient. ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Tue Oct 30 12:37:45 2012 From: kbr...@ (Kenneth Russell) Date: Tue, 30 Oct 2012 12:37:45 -0700 Subject: [Public WebGL] CORS and resource provider awareness In-Reply-To: References: Message-ID: On Tue, Oct 30, 2012 at 11:53 AM, Florian B?sch wrote: > On Tue, Oct 30, 2012 at 7:15 PM, Kenneth Russell wrote: >> Separately, the WebGL community should collectively pursue the idea of >> checking whether shaders obey the timing restrictions being defined by >> the CSS shaders specification. If that works, then WebGL applications >> would once again be able to safely access cross-domain media. > Far as I could find the CSS shader spec has not yet decided on a > restricting mechanism (timing or otherwise). > > Btw. I think that timing restrictions are not a good idea because it > implies obeying a fixed time window for shader execution, which in > turn implies that: > 1) shaders *may* be aborted, which breaks apps. > 2) drawArrays/Elements calls *will* be delayed from returning > degrading app performance. The timing restriction patch that I intended to refer to was Adobe's contribution to the ANGLE project (http://cs.chromium.org , search for SH_TIMING_RESTRICTIONS) which prevents control flow decisions from being made based on values fetched from textures. I believe that this would defend against the side-channel timing attack which forced the WebGL spec to disallow the use of cross-origin media. -Ken > To me those issues look worse than what they're trying to solve, i.e. > the proverbial cure that kills the patient. 
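To make the ANGLE restriction mentioned above concrete, here is a hedged sketch of the kind of fragment shader it would reject next to a rewrite it would accept, given as WebGL shader source strings (the uniform and varying names are invented for the example):

// Rejected under the timing restriction: the branch, and with it the
// shader's running time, depends on a value fetched from a texture.
var restrictedFS = [
    "precision mediump float;",
    "uniform sampler2D u_height;",
    "varying vec2 v_uv;",
    "void main() {",
    "    if (texture2D(u_height, v_uv).r > 0.5) {",
    "        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);",
    "    } else {",
    "        gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0);",
    "    }",
    "}"].join("\n");

// Essentially the same output expressed without texture-dependent control
// flow, so it would pass: the fetched value feeds step()/mix(), not an if.
var cleanFS = [
    "precision mediump float;",
    "uniform sampler2D u_height;",
    "varying vec2 v_uv;",
    "void main() {",
    "    float t = step(0.5, texture2D(u_height, v_uv).r);",
    "    gl_FragColor = mix(vec4(0.0, 0.0, 1.0, 1.0),",
    "                       vec4(1.0, 0.0, 0.0, 1.0), t);",
    "}"].join("\n");

Whether shaders like the terrain ones mentioned below can always be rewritten in the second style without a performance or readability cost is exactly what is being disputed.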
----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Tue Oct 30 12:40:57 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Tue, 30 Oct 2012 20:40:57 +0100 Subject: [Public WebGL] CORS and resource provider awareness In-Reply-To: References: Message-ID: On Tue, Oct 30, 2012 at 8:37 PM, Kenneth Russell wrote: > The timing restriction patch that I intended to refer to was Adobe's > contribution to the ANGLE project (http://cs.chromium.org , search for > SH_TIMING_RESTRICTIONS) which prevents control flow decisions from > being made based on values fetched from textures. I believe that this > would defend against the side-channel timing attack which forced the > WebGL spec to disallow the use of cross-origin media. I do do code that makes control flow decisions based on values fetched from textures. Thank you for breaking my code. ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From pya...@ Tue Oct 30 12:41:54 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Tue, 30 Oct 2012 20:41:54 +0100 Subject: [Public WebGL] CORS and resource provider awareness In-Reply-To: References: Message-ID: Btw. trigger rally is using such code right now for terrain rendering as well. Thank you for breaking jareikos code as well. On Tue, Oct 30, 2012 at 8:40 PM, Florian B?sch wrote: > On Tue, Oct 30, 2012 at 8:37 PM, Kenneth Russell wrote: >> The timing restriction patch that I intended to refer to was Adobe's >> contribution to the ANGLE project (http://cs.chromium.org , search for >> SH_TIMING_RESTRICTIONS) which prevents control flow decisions from >> being made based on values fetched from textures. I believe that this >> would defend against the side-channel timing attack which forced the >> WebGL spec to disallow the use of cross-origin media. > I do do code that makes control flow decisions based on values fetched > from textures. Thank you for breaking my code. ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From jef...@ Tue Oct 30 12:48:28 2012 From: jef...@ (Jeff Russell) Date: Tue, 30 Oct 2012 15:48:28 -0400 Subject: [Public WebGL] CORS and resource provider awareness In-Reply-To: References: Message-ID: > The timing restriction patch that I intended to refer to was Adobe's > contribution to the ANGLE project (http://cs.chromium.org , search for > SH_TIMING_RESTRICTIONS) which prevents control flow decisions from > being made based on values fetched from textures. I believe that this > would defend against the side-channel timing attack which forced the > WebGL spec to disallow the use of cross-origin media. > Woah? Really? That sounds pretty intrusive. -- Jeff Russell Engineer, Marmoset www.marmoset.co -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kbr...@ Tue Oct 30 12:54:16 2012 From: kbr...@ (Kenneth Russell) Date: Tue, 30 Oct 2012 12:54:16 -0700 Subject: [Public WebGL] CORS and resource provider awareness In-Reply-To: References: Message-ID: Nobody's code would be broken. The idea, a variant of which Gregg Tavares has posted before to this mailing list, would be: - A WebGLRenderingContext would start life with a "shader_clean" bit set. - The "shader_clean" bit continues to be set so long as all shaders loaded into the WebGLRenderingContext obey the timing restrictions set in the ANGLE patch. - While the "shader_clean" bit is set: - It is legal to upload cross-domain media into WebGL textures. Doing so taints the canvas and prevents toDataURL(), readPixels(), etc., as the pre-CORS WebGL spec did. - If the "shader_clean" bit transitions from true to false, and any cross-domain media was uploaded as a texture, then the context would be lost. - If the "shader_clean" bit was false, the existing CORS restrictions would be enforced. This would not break any existing programs. It would, however, allow cross-domain media to be used in conjunction with a set of useful WebGL shaders. -Ken On Tue, Oct 30, 2012 at 12:41 PM, Florian B?sch wrote: > Btw. trigger rally is using such code right now for terrain rendering > as well. Thank you for breaking jareikos code as well. > > On Tue, Oct 30, 2012 at 8:40 PM, Florian B?sch wrote: >> On Tue, Oct 30, 2012 at 8:37 PM, Kenneth Russell wrote: >>> The timing restriction patch that I intended to refer to was Adobe's >>> contribution to the ANGLE project (http://cs.chromium.org , search for >>> SH_TIMING_RESTRICTIONS) which prevents control flow decisions from >>> being made based on values fetched from textures. I believe that this >>> would defend against the side-channel timing attack which forced the >>> WebGL spec to disallow the use of cross-origin media. >> I do do code that makes control flow decisions based on values fetched >> from textures. Thank you for breaking my code. ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From ale...@ Tue Oct 30 15:51:45 2012 From: ale...@ (Aleksandar Rodic) Date: Tue, 30 Oct 2012 15:51:45 -0700 Subject: [Public WebGL] Microsoft Broke the Silence In-Reply-To: References: Message-ID: This is still very ambiguous statement but it sounds like WebGL is not out of the option. http://m.cnet.com/news/web-standards-vet-marches-microsoft-to-the-front-lines--q&a/57541396 Aki @xyz_ak -------------- next part -------------- An HTML attachment was scrubbed... URL: From baj...@ Tue Oct 30 16:06:42 2012 From: baj...@ (Brandon Jones) Date: Tue, 30 Oct 2012 16:06:42 -0700 Subject: [Public WebGL] Microsoft Broke the Silence In-Reply-To: References: Message-ID: I'm not sure if this is "breaking the silence", as it's just a reiteration of their previously stated position. There's a small window of possibility hinted at, but the phrasing is very political: *"If we can solve the security problems, I think we'd seriously look at some way of producing 3D graphics for the Web."* Which indicates that they're interested in 3D on the web, but not necessarily WebGL. 
If they're waiting for security problems to be solved before moving, however, WebGL could very well have built up enough momentum that it might force their hand in a decision to support it. What it means to us is that we need to hammer out the security issues quickly and comprehensively so that there's no excuse *not* to support 3D on the Web. Of interest is how much feedback they apparently get about WebGL: *"Over the lifetime of IE8, IE9, and IE10, every single time we said we added newer features... [a request for WebGL support] is one of the first five responses."* So I guess the community is dogging them about it fairly persistently. (The WebGL community rocks, by the way!) --Brandon Jones On Tue, Oct 30, 2012 at 3:51 PM, Aleksandar Rodic wrote: > This is still very ambiguous statement but it sounds like WebGL is not out > of the option. > > > http://m.cnet.com/news/web-standards-vet-marches-microsoft-to-the-front-lines--q&a/57541396 > > Aki > @xyz_ak > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Tue Oct 30 16:11:22 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Wed, 31 Oct 2012 00:11:22 +0100 Subject: [Public WebGL] Microsoft Broke the Silence In-Reply-To: References: Message-ID: On Tue, Oct 30, 2012 at 11:51 PM, Aleksandar Rodic wrote: > This is still very ambiguous statement but it sounds like WebGL is not out > of the option. On WebGL: Microsoft/Shankland you're still slinging the security FUD heavily. Do you actually have any working exploit to show? On DRM/Video Codecs: Microsoft argues that the DRM blackbox being unspecified is some sort of best practise, just like video/audio codecs where unspecified. Hey, that was the worst idea in web standards history, period. That's the "best practise" and you want to do "more of that", git of my lawn. ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From cal...@ Tue Oct 30 21:11:17 2012 From: cal...@ (Mark Callow) Date: Wed, 31 Oct 2012 13:11:17 +0900 Subject: [Public WebGL] CORS and resource provider awareness In-Reply-To: References: Message-ID: <5090A4E5.2060803@artspark.co.jp> On 2012/10/31 3:53, Florian B?sch wrote: > Far as I could find the CSS shader spec has not yet decided on a > restricting mechanism (timing or otherwise). > > Btw. I think that timing restrictions are not a good idea because it > implies obeying a fixed time window for shader execution, which in > turn implies that: > 1) shaders *may* be aborted, which breaks apps. > 2) drawArrays/Elements calls *will* be delayed from returning > degrading app performance. > > To me those issues look worse than what they're trying to solve, i.e. > the proverbial cure that kills the patient. As Ken notes in his later message, this is not the approach being considered by CSS shaders. I want to point out that, even if preventing timing of drawing was the approach, there would be no need to delay returns from drawArrays/Elements. They return essentially immediately and provide no useful information about rendering time; its the pipeline in action. What would need to be done is to disable finish() and have the browser call requestAnimationFrame at regular intervals, e.g. at screen refresh. 
Unfortunately if some rendering takes more than the screen refresh interval, eventually something eventually has to give and at some point requestAnimationFrame will not be called on schedule. This permits a very coarse level of measurement such as whether the texture contains a pixel of a particular color. However these steps would prevent the Context exploit. Most of the ways to make a shader artificially run for a very long time could be stopped by better GLSL optimizers and loop count limits. Simply aborting execution after a set time would work for sure. Maybe these steps would be less objectionable to those writing shaders for cross-origin textures than the restrictions proposed by the CSS group. Regards -Mark Please note that, due to the integration of management operations following establishment of our new holding company, my e-mail address has changed to callow.mark<@>artspark<.>co<.>jp. I can receive messages at the old address for the rest of this year but please update your address book as soon as possible. -- ???????????????????????????????????? ???????????????????????????? ??????? ???????????????????????????????????? ????????????????????? ??. NOTE: This electronic mail message may contain confidential and privileged information from HI Corporation. If you are not the intended recipient, any disclosure, photocopying, distribution or use of the contents of the received information is prohibited. If you have received this e-mail in error, please notify the sender immediately and permanently delete this message and all related copies. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Wed Oct 31 01:27:05 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Wed, 31 Oct 2012 09:27:05 +0100 Subject: [Public WebGL] CORS and resource provider awareness In-Reply-To: <5090A4E5.2060803@artspark.co.jp> References: <5090A4E5.2060803@artspark.co.jp> Message-ID: On Wed, Oct 31, 2012 at 5:11 AM, Mark Callow wrote: > As Ken notes in his later message, this is not the approach being > considered by CSS shaders. I want to point out that, even if preventing > timing of drawing was the approach, there would be no need to delay returns > from drawArrays/Elements. They return essentially immediately and provide > no useful information about rendering time; its the pipeline in action. > > What would need to be done is to disable finish() and have the browser > call requestAnimationFrame at regular intervals, e.g. at screen refresh. > Unfortunately if some rendering takes more than the screen refresh > interval, eventually something eventually has to give and at some point > requestAnimationFrame will not be called on schedule. This permits a very > coarse level of measurement such as whether the texture contains a pixel of > a particular color. However these steps would prevent the Context exploit. > Finish is an artificial synchronization primitive which can be substituted by any of the natural synchronization primitives (bufferData, texImage2D, etc.). It is possible to build code that won't work without explicit finish. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cal...@ Wed Oct 31 02:20:29 2012 From: cal...@ (Mark Callow) Date: Wed, 31 Oct 2012 18:20:29 +0900 Subject: [Public WebGL] CORS and resource provider awareness In-Reply-To: References: <5090A4E5.2060803@artspark.co.jp> Message-ID: <5090ED5D.7050202@artspark.co.jp> On 2012/10/31 17:27, Florian B?sch wrote: > > Finish is an artificial synchronization primitive which can be > substituted by any of the natural synchronization primitives > (bufferData, texImage2D, etc.). It is possible to build code that > won't work without explicit finish. Where is it written that bufferData and texImage2D do synchronization? Yes it is possible to build code that won't work without explicit finish. It is also possible to build code that won't work without the shader features being restricted by the CSS filters work. The question is which limitation is least objectionable? Regards -Mark Please note that, due to the integration of management operations following establishment of our new holding company, my e-mail address has changed to callow.mark<@>artspark<.>co<.>jp. I can receive messages at the old address for the rest of this year but please update your address book as soon as possible. -- ???????????????????????????????????? ???????????????????????????? ??????? ???????????????????????????????????? ????????????????????? ??. NOTE: This electronic mail message may contain confidential and privileged information from HI Corporation. If you are not the intended recipient, any disclosure, photocopying, distribution or use of the contents of the received information is prohibited. If you have received this e-mail in error, please notify the sender immediately and permanently delete this message and all related copies. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Wed Oct 31 03:00:11 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Wed, 31 Oct 2012 11:00:11 +0100 Subject: [Public WebGL] CORS and resource provider awareness In-Reply-To: <5090ED5D.7050202@artspark.co.jp> References: <5090A4E5.2060803@artspark.co.jp> <5090ED5D.7050202@artspark.co.jp> Message-ID: On Wed, Oct 31, 2012 at 10:20 AM, Mark Callow wrote: > On 2012/10/31 17:27, Florian B?sch wrote: > Where is it written that bufferData and texImage2D do synchronization? > The driver maintains a command queue of things tell the GPU to do. Each command may take an infinite amount of time. In order to speed up overall performance the driver lets the client race ahead of the queue, i.e. calls to the driver return before the commands have finished, this is known as asynchronous execution. If you call finish, then the driver blocks on that call until the command queue is empty. There are 3 different parts of memory that are usually involved in the process of shuffling data around that go with commands: 1) client memory 2) driver memory 3) gpu memory. Driver memory is usually quite fixed and by far and large doesn't implement any elaborate caching scheme, after all, you wouldn't want the driver to segfault with a memory overflow, that would be bad. So what happens if the driver receives a request to upload data from client memory to GPU memory? The driver reads the client memory and puts the bytes onto the GPU. It needs to block while it does that because it needs to be sure to have read all the clients bytes before the client would continue and do things like delete that memory area or change it, resulting in invalid or garbled data. 
So, how can the driver, with a queue full of things, that are not known to execute in finite time, know when the GPU has gotten all the bytes that the client requested to be uploaded? The answer is quite simple. Since the command to upload was the last command to be put in the queue, the driver knows that the GPU has done all the receiving of client bytes when the command queue is empty. Incidentially, that's the same as flush. So there. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Wed Oct 31 03:11:01 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Wed, 31 Oct 2012 11:11:01 +0100 Subject: [Public WebGL] CORS and resource provider awareness In-Reply-To: References: <5090A4E5.2060803@artspark.co.jp> <5090ED5D.7050202@artspark.co.jp> Message-ID: Btw. that's the reason people have come up with stuff like mapBuffer that put a restriction on what the client can do with that memory (don't delete it) so the driver can read in the bytes whenever it feels like, and let the client race ahead of the queue without needing to hold the client up in an upload block. Of course mapBuffer can result in garbled data (where the client has updated the buffer) but this is intentional for such things as vertex streaming and the like where calling bufferData all the time would slow the client down since it'd be like calling finish all the time. On Wed, Oct 31, 2012 at 11:00 AM, Florian B?sch wrote: > On Wed, Oct 31, 2012 at 10:20 AM, Mark Callow > wrote: >> >> On 2012/10/31 17:27, Florian B?sch wrote: >> Where is it written that bufferData and texImage2D do synchronization? > > > The driver maintains a command queue of things tell the GPU to do. Each > command may take an infinite amount of time. In order to speed up overall > performance the driver lets the client race ahead of the queue, i.e. calls > to the driver return before the commands have finished, this is known as > asynchronous execution. If you call finish, then the driver blocks on that > call until the command queue is empty. There are 3 different parts of memory > that are usually involved in the process of shuffling data around that go > with commands: 1) client memory 2) driver memory 3) gpu memory. Driver > memory is usually quite fixed and by far and large doesn't implement any > elaborate caching scheme, after all, you wouldn't want the driver to > segfault with a memory overflow, that would be bad. > > So what happens if the driver receives a request to upload data from client > memory to GPU memory? The driver reads the client memory and puts the bytes > onto the GPU. It needs to block while it does that because it needs to be > sure to have read all the clients bytes before the client would continue and > do things like delete that memory area or change it, resulting in invalid or > garbled data. So, how can the driver, with a queue full of things, that are > not known to execute in finite time, know when the GPU has gotten all the > bytes that the client requested to be uploaded? The answer is quite simple. > Since the command to upload was the last command to be put in the queue, the > driver knows that the GPU has done all the receiving of client bytes when > the command queue is empty. Incidentially, that's the same as flush. > > So there. 
> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From jba...@ Wed Oct 31 09:36:53 2012 From: jba...@ (John Bauman) Date: Wed, 31 Oct 2012 09:36:53 -0700 Subject: [Public WebGL] CORS and resource provider awareness In-Reply-To: References: <5090A4E5.2060803@artspark.co.jp> <5090ED5D.7050202@artspark.co.jp> Message-ID: That's definitely one way to do it, but there are many other ways to implement that. For example doing a copy to another buffer on the CPU, or making a copy-on-write copy of the data. Otherwise, any code that uploaded data to the GPU during the course of a frame would be really slow. On Wed, Oct 31, 2012 at 3:00 AM, Florian B?sch wrote: > On Wed, Oct 31, 2012 at 10:20 AM, Mark Callow wrote: > >> On 2012/10/31 17:27, Florian B?sch wrote: >> Where is it written that bufferData and texImage2D do synchronization? >> > > The driver maintains a command queue of things tell the GPU to do. Each > command may take an infinite amount of time. In order to speed up overall > performance the driver lets the client race ahead of the queue, i.e. calls > to the driver return before the commands have finished, this is known as > asynchronous execution. If you call finish, then the driver blocks on that > call until the command queue is empty. There are 3 different parts of > memory that are usually involved in the process of shuffling data around > that go with commands: 1) client memory 2) driver memory 3) gpu memory. > Driver memory is usually quite fixed and by far and large doesn't implement > any elaborate caching scheme, after all, you wouldn't want the driver to > segfault with a memory overflow, that would be bad. > > So what happens if the driver receives a request to upload data from > client memory to GPU memory? The driver reads the client memory and puts > the bytes onto the GPU. It needs to block while it does that because it > needs to be sure to have read all the clients bytes before the client would > continue and do things like delete that memory area or change it, resulting > in invalid or garbled data. So, how can the driver, with a queue full of > things, that are not known to execute in finite time, know when the GPU has > gotten all the bytes that the client requested to be uploaded? The answer > is quite simple. Since the command to upload was the last command to be put > in the queue, the driver knows that the GPU has done all the receiving of > client bytes when the command queue is empty. Incidentially, that's the > same as flush. > > So there. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Wed Oct 31 10:37:23 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Wed, 31 Oct 2012 18:37:23 +0100 Subject: [Public WebGL] CORS and resource provider awareness In-Reply-To: References: <5090A4E5.2060803@artspark.co.jp> <5090ED5D.7050202@artspark.co.jp> Message-ID: On Wed, Oct 31, 2012 at 5:36 PM, John Bauman wrote: > That's definitely one way to do it, but there are many other ways to > implement that. For example doing a copy to another buffer on the CPU, or > making a copy-on-write copy of the data. Otherwise, any code that uploaded > data to the GPU during the course of a frame would be really slow. 
It would be possible in theory for the driver to keep a cache of copied data around until the command queue catches up with that data. That approach has limitations where you could use this to just feed the driver data until it crashes with an out of memory segfault. That's why it's usually not done in any big way and major upload operations will be equivalent to finish. For uniforms there are mostly no blocks because those size is pretty strictly limited and the driver does avoid the finish with a copy operation. But for things like hundreds of megabytes of texel and vertex data... not really. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Wed Oct 31 10:38:51 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Wed, 31 Oct 2012 18:38:51 +0100 Subject: [Public WebGL] CORS and resource provider awareness In-Reply-To: References: <5090A4E5.2060803@artspark.co.jp> <5090ED5D.7050202@artspark.co.jp> Message-ID: Oh btw. this copy-on-write you can forget on mobiles anyway where you don't have endless gigabytes of client ram. On Wed, Oct 31, 2012 at 6:37 PM, Florian B?sch wrote: > On Wed, Oct 31, 2012 at 5:36 PM, John Bauman wrote: > >> That's definitely one way to do it, but there are many other ways to >> implement that. For example doing a copy to another buffer on the CPU, or >> making a copy-on-write copy of the data. Otherwise, any code that uploaded >> data to the GPU during the course of a frame would be really slow. > > It would be possible in theory for the driver to keep a cache of copied > data around until the command queue catches up with that data. That > approach has limitations where you could use this to just feed the driver > data until it crashes with an out of memory segfault. That's why it's > usually not done in any big way and major upload operations will be > equivalent to finish. For uniforms there are mostly no blocks because those > size is pretty strictly limited and the driver does avoid the finish with a > copy operation. But for things like hundreds of megabytes of texel and > vertex data... not really. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Wed Oct 31 14:02:25 2012 From: kbr...@ (Kenneth Russell) Date: Wed, 31 Oct 2012 14:02:25 -0700 Subject: [Public WebGL] Bug in context-lost.html conformance test: getContextAttributes should not return null In-Reply-To: References: <5085A6F9.6020707@mit.edu> <5085B646.3060003@mit.edu> <508844B8.4050208@mozilla.com> <50897F58.7060306@mit.edu> Message-ID: Sorry for the delay replying; was swamped. After giving this more thought, I would like to change getContextAttributes to return "WebGLContextAttributes?", and define that the context returns null for this value while in the context lost state. The "actual context parameters" are associated with the drawing buffer, and there is no drawing buffer while the context is in the lost state. I agree with Gregg's assessment that there's no reasonable value to be returned in some situations. Some months ago there was a plan to add an asynchronous context creation bit to the WebGLContextAttributes; when set, the context would be created in the lost state and have a webglcontextrestored event fired at it in the future. I think it would still be a good idea to add this API, but at this point I would really like to stabilize the top of tree conformance suite, and take another snapshot of it and the improved spec as "version 1.0.2", before forging ahead with more API additions. 
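For illustration, with the nullable return value proposed here the defensive access on the application side would look roughly like this (the antialias fallback is only an example):

var attrs = gl.getContextAttributes();
if (attrs === null) {
    // Context is lost: there is no drawing buffer, hence no actual
    // attributes to report. Defer decisions that depend on them until
    // the webglcontextrestored event has fired.
} else if (!attrs.antialias) {
    // Example decision: enable an application-side anti-aliasing fallback
    // because the drawing buffer was created without multisampling.
}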
To be concrete: are there any objections to making the return value of getContextAttributes nullable? -Ken On Thu, Oct 25, 2012 at 11:29 AM, Gregg Tavares (??) wrote: > Yes, we can actually. Ideally we can call > > var gl = canvas.getContext("webgl"); > > and the context is already lost. isContextLost() returns true and the > contextlostevent will be delivered once the current event exits > > Note, the samples on the WebGL wiki and the ones on > webglsamples.googlecode.com all handle this case. > > That does bring up a good point as to why getContextAttrbutes should maybe > return null during contextlost. I know that sucks though since you have to > be defensive about accessing it but I don't know what it could report since > there is no context to give attributes about. > > > > On Thu, Oct 25, 2012 at 11:05 AM, Boris Zbarsky wrote: >> >> >> On 10/25/12 1:38 PM, Gregg Tavares (??) wrote: >>> >>> Off the top of my head I'd say returning the lost context's attributes >>> is fine? >> >> >> Can we never be in a lost-context state without having created a context >> at any point? >> >> >> -Boris >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> unsubscribe public_webgl >> ----------------------------------------------------------- >> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From kbr...@ Wed Oct 31 15:00:25 2012 From: kbr...@ (Kenneth Russell) Date: Wed, 31 Oct 2012 15:00:25 -0700 Subject: [Public WebGL] WEBGL_dynamic_texture redux In-Reply-To: <508796C0.8000605@artspark.co.jp> References: <508796C0.8000605@artspark.co.jp> Message-ID: Hi Mark, Thanks for putting together this sample. A few thoughts. - Keeping the video stream separate from the video element seems cleaner. I think we should avoid APIs which require mutating HTML elements, and in particular, adding new properties. - When thinking about creating streams for arbitrary canvases, not just video elements, how will the producerFrame be handled? Today, when JavaScript code modifies a 2D canvas, the results are made available (a) when the web page is composited, (b) when a readback API like toDataURL / getImageData is called, or (c) when WebGL uploads the canvas via texImage2D / texSubImage2D. It's a "pull" model. When a stream is connected, and the canvas is modified in the current JavaScript callback, should acquireImage() cause a flush of the upstream canvas and an update of the producerFrame at the same time? I think it should. Is there any issue with spec'ing this behavior? - You point out that cpc won't update when the tab is backgrounded, but there are other issues: (a) the WebGL app might decide to not produce new frames sometimes (if the scene isn't updating); (b) msc won't update if the browser decides not to repaint because the page didn't update at all; (c) for backgrounded tabs it's unlikely that msc will update, and requestAnimationFrame will stop, but setTimeout based timers will still probably fire. Are there issues with any of these behaviors? I think probably not; applications will use the page visibility API to know when they've been backgrounded and suspend any measurements of frame rate. 
- Should this spec reference http://dvcs.w3.org/hg/webperf/raw-file/tip/specs/HighResolutionTime/Overview.htmlfor the high-resolution timestamps instead of defining another concept? -Ken On Wed, Oct 24, 2012 at 12:20 AM, Mark Callow wrote: > Hi, > > I am updating the WEBGL_dynamic_texture proposal to (a) provide a better > interface to the stream producer (HTMLVideoElement, etc.) and (b) provide > tools for handling timing and synchronization issues. Rather than writing > spec. text I have been playing with sample code to see how various ideas > feel. The entire sample program is attached. Please review it and send your > feedback. Hopefully the embedded comments and IDL interface definitions > give sufficient background for understanding. > > (a) stemmed from David Sheets comments to this list requesting the stream > interface be added to the producer HTML elements. The sample code offers > two alternatives shown in the extract below: augmenting the producer > element with the stream interface or keeping it as a separate object. > > For (b) I've added query functions based on a monotonically increasing > counter to retrieve the current value and to retrieve the value the last > time the canvas was presented (updated to the screen). > > The first part of the extract shows how the video producer and texture > consumer are connected via a new wdtStream interface. The second part, the > drawFrame function shows acquire and release of the frames and also how to > determine how long it is taking to display the frames, whether any are > being missed, etc. > > Once we're all happy with this, I'll update the spec. text and then I > think we'll be able to move it from proposal to draft. > > // > // connectVideo > // > // Connect video from the passed HTMLVideoElement to the texture > // currently bound to TEXTURE_EXTERNAL_OES on the active texture > // unit. > // > // First a wdtStream object is created with its consumer set to > // the texture. Once the video is loaded, it is set as the > // producer. This could potentially fail, depending on the > // video format. > // > // interface wdtStream { > // enum state { > // // Consumer connected; waiting for producer to connect > // wdtStreamConnecting, > // // Producer & consumer connected. No frames yet. */ > // wdtStreamEmpty, > // wdtStreamNewFrameAvailable, > // wdtStreamOldFrameAvailable, > // wdtStreamDisconnected > // }; > // // Time taken from acquireImage to posting drawing buffer; default > 0? > // readonly int consumerLatency; > // // Frame # (aka Media Stream Count) of most recently inserted frame > // // Value is 1 at first frame. > // readonly int producerFrame; > // // MSC of most recently acquired frame. > // readonly int consumerFrame; > // // timeout for acquireImage; default 0 > // int acquireTimeout; > // > // void setConsumerLatency(int); > // }; > // > > // > function connectVideo(ctx, video) > { > g.loadingFiles.push(video); > g.videoReady = false; > > //----------------------------- > // Options for connecting to video > //----------------------------- > // OPTION 1: method on WDT extension augments video element > // with a wdtStream object. > ctx.dte.createStream(video); > assert(video.wdtStream.state == wdtStreamConnecting); > //----------------------------- > // OPTION 2: method returns a stream object. 
> g.vstream = ctx.dte.createStream(ctx); > assert(g.vstream.state == wdtStreamConnecting); > //----------------------------- > > video.onload = function() { > g.loadingFiles.splice(g.loadingFiles.indexOf(video), 1); > *try* { > // OPTION 1: video object augmented with stream > video.wdtStream.connect(); > assert(video.wdtStream.state == wdtStreamEmpty); > //----------------------------- > // OPTION 2: separate stream object > g.vstream.connectProducer(video); > assert(g.stream.state == wdtStreamEmpty); > //------------------------------ > *if* (!video.autoplay) { > video.play(); // Play video > } > g.videoReady = true; > } *catch* (e) { > *window*.*alert*("Video texture setup failed: " + e.name); > } > }; > } > > function drawFrame(gl) > { > var lastFrame; > var syncValues; > var latency; > var graphicsMSCBase; > > // Make sure the canvas is sized correctly. > reshape(gl); > > // Clear the canvas > gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); > > // Matrix set-up deleted ... > > // To avoid duplicating everything below for each option, use a > // temporary variable. This will not be necessary in the final > // code. > // OPTION 1: augmented video object > var vstream = g.video.wdtStream; > // OPTION 2: separate stream object > var vstream = g.vstream; > > // In the following > // UST is a monotonically increasing counter never adjusted by NTP > etc. > // The unit is nanoseconds but the frequency of update will vary from > // system to system. The average frequency at which the counter is > // updated should be 5x the highest MSC frequency supported. For > // example if highest MSC is 48kHz (audio) the update frequency > // should be 240kHz. Most OSes have this kind of counter available. > // > // MSC is the media stream count. It is incremented once/sample; for > // video that means once/frame, for audio once/sample. For graphics, > // it is incremented once/screen refresh. > // > // CPC is the canvas presentation count. It is incremented once > // each time the canvas is presented. > // > > *if* (graphicsMSCBase == *undefined*) { > graphicsMSCBase = gl.dte.getSyncValues().msc; > } > > *if* (lastFrame.msc && vstream.producerFrame > lastFrame.msc + 1) { > // Missed a frame! Simplify rendering? > } > > *if* (!latency.frameCount) { > // Initialize > latency.frameCount = 0; > latency.accumTotal = 0; > } > > *if* (lastFrame.ust) { > syncValues = gl.dte.getSyncValues(); > // interface syncValues { > // // UST of last present > // readonly attribute long long ust; > // // Screen refresh count (aka MSC) at last present > // // Initialized to 0 on browser start > // readonly attribute long msc; > // // Canvas presentation count at last present > // // Initialized to 0 at canvas creation. > // readonly attribute long cpc; > // }; > // XXX What happens to cpc when switch to another tab? > *if* (syncValues.msc - graphicsMSCBase != syncValues.cpc) { > // We are not keeping up with screen refresh! > // Or are we? If cpc increment stops when canvas hidden, > // will need some way to know canvas was hidden so app > // won't just assume its not keeping up and therefore > // adjust its rendering. > graphicsMSCBase = syncValues.msc; // reset base. > } > latency.accumValue += syncValues.ust - lastFrame.ust; > latency.frameCount++; > *if* (latency.frameCount == 30) { > vstream.setConsumerLatency(latency.accumValue / 30); > latency.frameCount = 0; > latency.accumValue = 0; > } > } > > *if* (g.videoReady) { > *if* (g.video.wdtStream.acquireImage()) { > // Record UST of frame acquisition. 
> // No such system function in JS so it is added to extension. > lastFrame.ust = gl.dte.ustnow(); > lastFrame.msc = vstream.consumerFrame; > } > // OPTION 2: > vstream.acquireImage(); > lastFrame = g.stream.consumerFrame; > } > > // Draw the cube > gl.drawElements(gl.TRIANGLES, g.box.numIndices, gl.UNSIGNED_BYTE, 0); > > *if* (g.videoReady) > vtream.releaseImage(); > > // Show the framerate > framerate.snapshot(); > > currentAngle += incAngle; > *if* (currentAngle > 360) > currentAngle -= 360; > } > > > > Regards > > -Mark > Please note that, due to the integration of management operations > following establishment of our new holding company, my e-mail address has > changed to callow.mark<@>artspark<.>co<.>jp. I can receive messages at the > old address for the rest of this year but please update your address book > as soon as possible. > -- > ?????????????????????????????????????????????????????????????? ?? > ?????????????????????????????????????????????????????????????? ?? ??. > > NOTE: This electronic mail message may contain confidential and privileged > information from HI Corporation. If you are not the intended recipient, any > disclosure, photocopying, distribution or use of the contents of the > received information is prohibited. If you have received this e-mail in > error, please notify the sender immediately and permanently delete this > message and all related copies. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Wed Oct 31 15:15:29 2012 From: kbr...@ (Kenneth Russell) Date: Wed, 31 Oct 2012 15:15:29 -0700 Subject: [Public WebGL] texImage2D on unfinished images and video In-Reply-To: References: Message-ID: On Mon, Oct 22, 2012 at 11:28 AM, Gregg Tavares (??) wrote: > So checked. Three.js always uses texImage2D for video > So does this > http://dev.opera.com/static/articles/2012/webgl-postprocessing/webgl-pp/multipass.html > and this > http://badassjs.com/post/16472398856/photobooth-style-live-video-effects-in-javascript-and > and this > https://developer.mozilla.org/en-US/docs/WebGL/Animating_textures_in_WebGL > and this http://videos.mozilla.org/serv/mozhacks/flight-of-the-navigator/ > and this http://radiapp.com/samples/Video_HPtrailer_WebGL_demo.html > and this http://sp0t.org/videoriot/ > > This one use texSubImage2D > http://webglsamples.googlecode.com/hg/color-adjust/color-adjust.html > > I only bring that up to point out that the fact that texSubImage2D usage > would be problematic may not be that important > > On the other hand, both video and WebRTC are planning on dynamic resolution > adjustments based on how fast your internet connection is. > > I bring that up because it means knowing the resolution so you can call > texSubImage2D seems like something you can only count on if you have 100% > control over the video source. > > I seems like to help WebGL fit more uses cases (cases where you don't > control those sources like say a video site or a video chat site) that it > would be more useful if things "just worked" which is relatively easy if > texImage2D just works and seems like it would be rather painful if it could > possibly generate an error in various states of switching resolutions or > changing videos. > > I feel like GetTexLevelParameter is an orthogonal issue. There are plenty of > things you can do without knowing the resolution of the image/video. 
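For reference, the per-frame upload pattern those pages rely on is essentially the following sketch; videoTexture and video are assumed to already exist, and this is not code taken from any of the cited demos.

    // Sketch: common video-to-texture upload loop. Under the current spec
    // this can generate INVALID_OPERATION while the video is still
    // buffering; under the change being discussed it would instead upload
    // a 1x1 black texture and never raise an error.
    function render() {
        gl.bindTexture(gl.TEXTURE_2D, videoTexture);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA,
                      gl.UNSIGNED_BYTE, video);
        gl.drawArrays(gl.TRIANGLES, 0, 6);
        window.requestAnimationFrame(render);
    }
    window.requestAnimationFrame(render);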
> > Also, while an unloaded video/image could return 1x1, off the top of my > head, what it returns while changing resolutions or queuing the next video > could be implementation defined. Some browsers might return 1x1 while > switching. Others might return the last valid frame until such time as a > frame from the new source is ready. As long as there is no error that seems > better to me. It seems that the consensus is to change the WebGL specification as you suggest. The changes would essentially be: - If an incomplete HTMLImageElement or HTMLVideoElement is passed to either texImage2D or texSubImage2D, then the source is treated as a 1x1 texture containing the RGBA value (0, 0, 0, 1). This means that texImage2D will cause the level of the passed texture object to be redefined as size 1x1; texSubImage2D will upload a single black pixel at the given xoffset and yoffset. It would be defined that an OpenGL error would never occur as a result of these calls. Are there any objections to these changes? If not, I'll update the spec. -Ken > > On Mon, Oct 22, 2012 at 10:32 AM, Kenneth Russell wrote: >> >> On Sun, Oct 21, 2012 at 11:52 PM, Gregg Tavares (??) >> wrote: >> > My concern is if you have a video player library that plays a playlist >> > of >> > videos and then you decide later to add some WebGL to it, you have to go >> > through all kinds of contortions just to avoid an INVALID_OPERATION. The >> > framework you're using for the video playlists seems like it would be >> > unlikely to have the needed functionality built in to let you know when >> > top >> > stop calling texImage2D and when to start again. >> > >> > If the error was removed and 1x1 was substituted that problem would go >> > away >> > >> > I don't see what error adds except to make it more complicated. >> >> Let's think through this use case. The playlist library iterates >> through different videos. As it switches between videos it's clearly >> necessary to send some sort of event to the application, because the >> videos might be different resolutions. Unless the application is >> calling texImage2D all the time (instead of texSubImage2D most of the >> time), it will have to watch for these events to figure out when it >> might need to reallocate the texture at a different size. (If the app >> calls texImage2D all the time, then it doesn't matter whether WebGL >> generates INVALID_OPERATION or a 1x1 black texture. The app will never >> need to know whether the upload succeeded or failed, because failures >> will be transient, and the app will automatically recover from them >> when the next video starts playing. If texImage2D generates >> INVALID_OPERATION, the previous contents of the texture will be >> preserved.) >> >> If WebGL generates INVALID_OPERATION for incomplete texture uploads, >> then the app either needs to watch for those errors coming back from >> OpenGL, or watch for the onplay event to know when it's guaranteed to >> be OK to upload a frame. If the app doesn't do this, and mostly calls >> texSubImage2D to upload new frames, then the video can get "stuck" as >> the app tries to upload new frames via texSubImage2D while the >> underlying texture is the wrong size. >> >> If WebGL silently creates a 1x1 black texture for incomplete texture >> uploads, then the app still needs to watch for the onplay event to >> know when to call texImage2D instead of texSubImage2D. Otherwise, the >> video can still get "stuck" displaying a 1x1 black texture. 
>> >> As far as I see it, the app has to do the same work regardless of >> whether incomplete texture uploads generate an error or silently >> produce a 1x1 black texture. The main difference is that the error can >> be observed by the application, but the silent allocation of the 1x1 >> black texture can not, because glGetTexLevelParameter doesn't exist in >> the GLES API. For this reason I continue to think that generating >> INVALID_OPERATION is the more transparent behavior for WebGL. >> >> Regardless of the decision here, I agree with you that the behavior >> should be specified and we collectively should try to write some tests >> for it -- ideally, ones that explicitly force failure or pauses at >> certain download points, rather than just trying to load and play the >> video many times. >> >> -Ken >> >> >> >> > On Fri, Oct 19, 2012 at 3:09 PM, Kenneth Russell wrote: >> >> >> >> On Tue, Oct 16, 2012 at 4:46 PM, Gregg Tavares (??) >> >> wrote: >> >> > >> >> > >> >> > >> >> > On Tue, Oct 16, 2012 at 4:28 PM, Kenneth Russell >> >> > wrote: >> >> >> >> >> >> On Tue, Oct 16, 2012 at 3:54 PM, Gregg Tavares (??) >> >> >> >> >> >> wrote: >> >> >> > I don't think the spec makes this clear what happens when you try >> >> >> > to >> >> >> > call >> >> >> > texImage2D or texSubImage2D on an image or video that is not yet >> >> >> > loaded. >> >> >> >> >> >> Right, the behavior in this case is not defined. I'm pretty sure >> >> >> this >> >> >> was discussed a long time ago in the working group, but it seemed >> >> >> difficult to write a test for any defined behavior, so by consensus >> >> >> implementations generate INVALID_OPERATION when attempting to upload >> >> >> incomplete images or video to WebGL textures. >> >> > >> >> > >> >> > I don't see a problem writing a test. Basically >> >> > >> >> > var playing = false >> >> > video.src = url >> >> > video.play(); >> >> > video.addEventListener('playing', function() { playing = true;}); >> >> > >> >> > waitForVideo() { >> >> > gl.bindTexture(...) >> >> > gl.texImage2D(..., video); >> >> > if (playing) { >> >> > doTestsOnVideoContent(); >> >> > if (gl.getError() != gl.NO_ERROR) { testFailed("there should be >> >> > no >> >> > errors"); } >> >> > } else { >> >> > requestAnimationFrame(waitForVideo); >> >> > } >> >> > >> >> > This basically says an implementation is never allowed to generate an >> >> > error. >> >> > it might not be a perfect test but neither is the current one. In >> >> > fact >> >> > this >> >> > should test it just fine >> >> > >> >> > video = document.createElement("video"); >> >> > gl.texImage2D(..., video); >> >> > glErrorShouldBe(gl.NO_ERROR); >> >> > video.src = "someValidURL": >> >> > gl.texImage2D(..., video); >> >> > glErrorShouldBe(gl.NO_ERROR); >> >> > video.src = "someOtherValidURL": >> >> > gl.texImage2D(..., video); >> >> > glErrorShouldBe(gl.NO_ERROR); >> >> > >> >> > Then do the other 'playing' event tests. >> >> > >> >> >> >> >> >> >> >> >> > Example >> >> >> > >> >> >> > video = document.createElement("video"); >> >> >> > video.src = "http://mysite.com/myvideo"; >> >> >> > video.play(); >> >> >> > >> >> >> > function render() { >> >> >> > gl.bindTexture(...) >> >> >> > gl.texImage2D(..., video); >> >> >> > gl.drawArrays(...); >> >> >> > window.requestAnimationFrame(render); >> >> >> > } >> >> >> > >> >> >> > Chrome right now will synthesize a GL error if the system hasn't >> >> >> > actually >> >> >> > gotten the video to start (as in if it's still buffering). 
>> >> >> > >> >> >> > Off the top of my head, it seems like it would just be friendlier >> >> >> > to >> >> >> > make a >> >> >> > black texture (0,0,0,1) or (0,0,0,0) 1x1 pixels for videos that >> >> >> > aren't >> >> >> > loaded yet. >> >> >> > >> >> >> > Otherwise, having to check that the video is ready before calling >> >> >> > texImage2D >> >> >> > seems kind of burdensome on the developer. If they want to check >> >> >> > they >> >> >> > should >> >> >> > use video.addEventListener('playing') or similar. If they were >> >> >> > making >> >> >> > a >> >> >> > video player they'd have to add a bunch of logic when queuing the >> >> >> > next >> >> >> > video. >> >> >> > >> >> >> > Same with images. >> >> >> > >> >> >> > img = document.createElement("img"); >> >> >> > img.src = "http://mysite.com/myimage"; >> >> >> > >> >> >> > function render() { >> >> >> > gl.bindTexture(...) >> >> >> > gl.texImage2D(..., img); >> >> >> > gl.drawArrays(...); >> >> >> > window.requestAnimationFrame(render); >> >> >> > } >> >> >> > >> >> >> > If you want to know if the image has loaded use img.onload but >> >> >> > otherwise >> >> >> > don't fail the call? >> >> >> > >> >> >> > What do you think? Good idea? Bad idea? >> >> >> >> >> >> My initial impression is that this change is not a good idea. It >> >> >> would >> >> >> expose the specification, implementations and applications to a lot >> >> >> of >> >> >> corner case behaviors. For example, if a video's width and height >> >> >> hasn't been received yet, then texImage2D(..., video) would have to >> >> >> allocate a 1x1 texture; but if the width and height are known, then >> >> >> it >> >> >> would have to allocate the right amount of storage. A naive app >> >> >> might >> >> >> call texImage2D only the first time and texSubImage2D subsequently, >> >> >> so >> >> >> if the first texImage2D call was made before the metadata was >> >> >> downloaded, it would never render correctly. I think the current >> >> >> fail-fast behavior is best, and already has the result that it >> >> >> renders >> >> >> a black texture if the upload fails; the call will (implicitly) >> >> >> generate INVALID_OPERATION, and the texture will be incomplete. >> >> > >> >> > >> >> > I don't see how that's worse than the current situation which is you >> >> > call >> >> > texImage2D and pray it works. You have no idea if it's going to work >> >> > or >> >> > not. >> >> > If you do this does it work? >> >> > >> >> > okToUseVideo = false; >> >> > video = document.createElement("video"); >> >> > video.src = "movie#1" >> >> > video.addEventListener('playing', function() { okToUseVideo = true; >> >> > } >> >> > >> >> > frameCount = 0; >> >> > >> >> > function render() { >> >> > if (okToUseVideo) { >> >> > gl.texImage2D(... , video); >> >> > >> >> > ++frameCount; >> >> > if (frameCount > 1000) { >> >> > video.src = "movie2"; >> >> > } >> >> > } >> >> > } >> >> > >> >> > >> >> > Basically after some amount of time I switch video.src to a new >> >> > movie. >> >> > Is >> >> > the video off limits now? >> >> >> >> The assumption should be "yes". When the source of the media element >> >> is set, it is immediately invalid to use until the onload / playing >> >> handler is called. >> >> >> >> > Will it use the old frame from the old movie until >> >> > the new movie is buffered or will it give me INVALID_OPERATION? I >> >> > have >> >> > no >> >> > idea and it's not specified. Same with img. 
>> >> > >> >> > img.src = "image#1" >> >> > img.onload = function() { >> >> > img.src = "image#2"; >> >> > gl.texImage2D(...., img); // what's this? old image, no image, >> >> > INVALID_OPERATION? >> >> > } >> >> >> >> Yes, this should be assumed to produce INVALID_OPERATION. >> >> >> >> >> >> > The texSubImage2D issue is not helped by the current spec. If >> >> > video.src >> >> > = >> >> > useChoosenURL then you have no idea what the width and height are >> >> > until >> >> > the >> >> > 'playing' event (or whatever event) which is no different than if we >> >> > changed >> >> > it. >> >> > >> >> > Changing it IMO means far less broken websites and I can't see any >> >> > disadvantages. Sure you can get a 1x1 pixel texture to start and call >> >> > texSubImage2D now but you can do that already. >> >> >> >> With the current fail-fast behavior, where INVALID_OPERATION will be >> >> generated, the texture will be in one of two states: (1) its previous >> >> state, because the texImage2D call failed; or (2) having the width and >> >> height of the incoming media element. >> >> >> >> With the proposed behavior to never generate an error, the texture >> >> will be either 1x1 or width x height. Another problem with this >> >> proposal is that there is no way with the ES 2.0 or WebGL API to query >> >> the size of a level of a texture, because GetTexLevelParameter was >> >> removed from the OpenGL ES API (and, unfortunately, not reintroduced >> >> in ES 3.0). Therefore the behavior is completely silent -- there is no >> >> way for the application developer to find out what happened (no error >> >> reported, and still renders like an incomplete texture would). >> >> >> >> I agree that the failing behavior should be specified and, more >> >> importantly, tests written verifying it. If the INVALID_OPERATION >> >> error were spec'ed, would that address your primary concern? I'm not >> >> convinced that silently making the texture 1x1 is a good path to take. >> >> >> >> -Ken >> >> >> >> >> >> >> >> >> >> >> >> >> If we want to spec this more tightly then we'll need to do more work >> >> >> in the conformance suite to forcibly stall HTTP downloads of the >> >> >> video >> >> >> resources in the test suite at well known points. I'm vaguely aware >> >> >> that WebKit's HTTP tests do this with a custom server. Requiring a >> >> >> custom server in order to run the WebGL conformance suite at all >> >> >> would >> >> >> have pretty significant disadvantages. >> >> >> >> >> >> -Ken >> >> > >> >> > >> > >> > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From gma...@ Wed Oct 31 15:49:10 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo56S+55SoKQ==?=) Date: Wed, 31 Oct 2012 15:49:10 -0700 Subject: [Public WebGL] texImage2D on unfinished images and video In-Reply-To: References: Message-ID: Let's discuss this a little more. What if both of those were no-ops? Better? Worse? In that case calling texImage2D on an unfinished image/video would leave your texture as is. For a new texture that would be 0x0 texture which renders as transparent black. This would also mean switching images/videos would potentially keep the last image. On Wed, Oct 31, 2012 at 3:15 PM, Kenneth Russell wrote: > On Mon, Oct 22, 2012 at 11:28 AM, Gregg Tavares (??) > wrote: > > So checked. 
Three.js always uses texImage2D for video > > So does this > > > http://dev.opera.com/static/articles/2012/webgl-postprocessing/webgl-pp/multipass.html > > and this > > > http://badassjs.com/post/16472398856/photobooth-style-live-video-effects-in-javascript-and > > and this > > > https://developer.mozilla.org/en-US/docs/WebGL/Animating_textures_in_WebGL > > and this > http://videos.mozilla.org/serv/mozhacks/flight-of-the-navigator/ > > and this http://radiapp.com/samples/Video_HPtrailer_WebGL_demo.html > > and this http://sp0t.org/videoriot/ > > > > This one use texSubImage2D > > http://webglsamples.googlecode.com/hg/color-adjust/color-adjust.html > > > > I only bring that up to point out that the fact that texSubImage2D usage > > would be problematic may not be that important > > > > On the other hand, both video and WebRTC are planning on dynamic > resolution > > adjustments based on how fast your internet connection is. > > > > I bring that up because it means knowing the resolution so you can call > > texSubImage2D seems like something you can only count on if you have 100% > > control over the video source. > > > > I seems like to help WebGL fit more uses cases (cases where you don't > > control those sources like say a video site or a video chat site) that it > > would be more useful if things "just worked" which is relatively easy if > > texImage2D just works and seems like it would be rather painful if it > could > > possibly generate an error in various states of switching resolutions or > > changing videos. > > > > I feel like GetTexLevelParameter is an orthogonal issue. There are > plenty of > > things you can do without knowing the resolution of the image/video. > > > > Also, while an unloaded video/image could return 1x1, off the top of my > > head, what it returns while changing resolutions or queuing the next > video > > could be implementation defined. Some browsers might return 1x1 while > > switching. Others might return the last valid frame until such time as a > > frame from the new source is ready. As long as there is no error that > seems > > better to me. > > It seems that the consensus is to change the WebGL specification as you > suggest. > > The changes would essentially be: > > - If an incomplete HTMLImageElement or HTMLVideoElement is passed to > either texImage2D or texSubImage2D, then the source is treated as a > 1x1 texture containing the RGBA value (0, 0, 0, 1). This means that > texImage2D will cause the level of the passed texture object to be > redefined as size 1x1; texSubImage2D will upload a single black pixel > at the given xoffset and yoffset. It would be defined that an OpenGL > error would never occur as a result of these calls. > > Are there any objections to these changes? If not, I'll update the spec. > > -Ken > > > > > > > On Mon, Oct 22, 2012 at 10:32 AM, Kenneth Russell > wrote: > >> > >> On Sun, Oct 21, 2012 at 11:52 PM, Gregg Tavares (??) > >> wrote: > >> > My concern is if you have a video player library that plays a playlist > >> > of > >> > videos and then you decide later to add some WebGL to it, you have to > go > >> > through all kinds of contortions just to avoid an INVALID_OPERATION. > The > >> > framework you're using for the video playlists seems like it would be > >> > unlikely to have the needed functionality built in to let you know > when > >> > top > >> > stop calling texImage2D and when to start again. 
> >> > > >> > If the error was removed and 1x1 was substituted that problem would go > >> > away > >> > > >> > I don't see what error adds except to make it more complicated. > >> > >> Let's think through this use case. The playlist library iterates > >> through different videos. As it switches between videos it's clearly > >> necessary to send some sort of event to the application, because the > >> videos might be different resolutions. Unless the application is > >> calling texImage2D all the time (instead of texSubImage2D most of the > >> time), it will have to watch for these events to figure out when it > >> might need to reallocate the texture at a different size. (If the app > >> calls texImage2D all the time, then it doesn't matter whether WebGL > >> generates INVALID_OPERATION or a 1x1 black texture. The app will never > >> need to know whether the upload succeeded or failed, because failures > >> will be transient, and the app will automatically recover from them > >> when the next video starts playing. If texImage2D generates > >> INVALID_OPERATION, the previous contents of the texture will be > >> preserved.) > >> > >> If WebGL generates INVALID_OPERATION for incomplete texture uploads, > >> then the app either needs to watch for those errors coming back from > >> OpenGL, or watch for the onplay event to know when it's guaranteed to > >> be OK to upload a frame. If the app doesn't do this, and mostly calls > >> texSubImage2D to upload new frames, then the video can get "stuck" as > >> the app tries to upload new frames via texSubImage2D while the > >> underlying texture is the wrong size. > >> > >> If WebGL silently creates a 1x1 black texture for incomplete texture > >> uploads, then the app still needs to watch for the onplay event to > >> know when to call texImage2D instead of texSubImage2D. Otherwise, the > >> video can still get "stuck" displaying a 1x1 black texture. > >> > >> As far as I see it, the app has to do the same work regardless of > >> whether incomplete texture uploads generate an error or silently > >> produce a 1x1 black texture. The main difference is that the error can > >> be observed by the application, but the silent allocation of the 1x1 > >> black texture can not, because glGetTexLevelParameter doesn't exist in > >> the GLES API. For this reason I continue to think that generating > >> INVALID_OPERATION is the more transparent behavior for WebGL. > >> > >> Regardless of the decision here, I agree with you that the behavior > >> should be specified and we collectively should try to write some tests > >> for it -- ideally, ones that explicitly force failure or pauses at > >> certain download points, rather than just trying to load and play the > >> video many times. > >> > >> -Ken > >> > >> > >> > >> > On Fri, Oct 19, 2012 at 3:09 PM, Kenneth Russell > wrote: > >> >> > >> >> On Tue, Oct 16, 2012 at 4:46 PM, Gregg Tavares (??) > > >> >> wrote: > >> >> > > >> >> > > >> >> > > >> >> > On Tue, Oct 16, 2012 at 4:28 PM, Kenneth Russell > >> >> > wrote: > >> >> >> > >> >> >> On Tue, Oct 16, 2012 at 3:54 PM, Gregg Tavares (??) > >> >> >> > >> >> >> wrote: > >> >> >> > I don't think the spec makes this clear what happens when you > try > >> >> >> > to > >> >> >> > call > >> >> >> > texImage2D or texSubImage2D on an image or video that is not yet > >> >> >> > loaded. > >> >> >> > >> >> >> Right, the behavior in this case is not defined. 
I'm pretty sure > >> >> >> this > >> >> >> was discussed a long time ago in the working group, but it seemed > >> >> >> difficult to write a test for any defined behavior, so by > consensus > >> >> >> implementations generate INVALID_OPERATION when attempting to > upload > >> >> >> incomplete images or video to WebGL textures. > >> >> > > >> >> > > >> >> > I don't see a problem writing a test. Basically > >> >> > > >> >> > var playing = false > >> >> > video.src = url > >> >> > video.play(); > >> >> > video.addEventListener('playing', function() { playing = true;}); > >> >> > > >> >> > waitForVideo() { > >> >> > gl.bindTexture(...) > >> >> > gl.texImage2D(..., video); > >> >> > if (playing) { > >> >> > doTestsOnVideoContent(); > >> >> > if (gl.getError() != gl.NO_ERROR) { testFailed("there should > be > >> >> > no > >> >> > errors"); } > >> >> > } else { > >> >> > requestAnimationFrame(waitForVideo); > >> >> > } > >> >> > > >> >> > This basically says an implementation is never allowed to generate > an > >> >> > error. > >> >> > it might not be a perfect test but neither is the current one. In > >> >> > fact > >> >> > this > >> >> > should test it just fine > >> >> > > >> >> > video = document.createElement("video"); > >> >> > gl.texImage2D(..., video); > >> >> > glErrorShouldBe(gl.NO_ERROR); > >> >> > video.src = "someValidURL": > >> >> > gl.texImage2D(..., video); > >> >> > glErrorShouldBe(gl.NO_ERROR); > >> >> > video.src = "someOtherValidURL": > >> >> > gl.texImage2D(..., video); > >> >> > glErrorShouldBe(gl.NO_ERROR); > >> >> > > >> >> > Then do the other 'playing' event tests. > >> >> > > >> >> >> > >> >> >> > >> >> >> > Example > >> >> >> > > >> >> >> > video = document.createElement("video"); > >> >> >> > video.src = "http://mysite.com/myvideo"; > >> >> >> > video.play(); > >> >> >> > > >> >> >> > function render() { > >> >> >> > gl.bindTexture(...) > >> >> >> > gl.texImage2D(..., video); > >> >> >> > gl.drawArrays(...); > >> >> >> > window.requestAnimationFrame(render); > >> >> >> > } > >> >> >> > > >> >> >> > Chrome right now will synthesize a GL error if the system hasn't > >> >> >> > actually > >> >> >> > gotten the video to start (as in if it's still buffering). > >> >> >> > > >> >> >> > Off the top of my head, it seems like it would just be > friendlier > >> >> >> > to > >> >> >> > make a > >> >> >> > black texture (0,0,0,1) or (0,0,0,0) 1x1 pixels for videos that > >> >> >> > aren't > >> >> >> > loaded yet. > >> >> >> > > >> >> >> > Otherwise, having to check that the video is ready before > calling > >> >> >> > texImage2D > >> >> >> > seems kind of burdensome on the developer. If they want to check > >> >> >> > they > >> >> >> > should > >> >> >> > use video.addEventListener('playing') or similar. If they were > >> >> >> > making > >> >> >> > a > >> >> >> > video player they'd have to add a bunch of logic when queuing > the > >> >> >> > next > >> >> >> > video. > >> >> >> > > >> >> >> > Same with images. > >> >> >> > > >> >> >> > img = document.createElement("img"); > >> >> >> > img.src = "http://mysite.com/myimage"; > >> >> >> > > >> >> >> > function render() { > >> >> >> > gl.bindTexture(...) > >> >> >> > gl.texImage2D(..., img); > >> >> >> > gl.drawArrays(...); > >> >> >> > window.requestAnimationFrame(render); > >> >> >> > } > >> >> >> > > >> >> >> > If you want to know if the image has loaded use img.onload but > >> >> >> > otherwise > >> >> >> > don't fail the call? > >> >> >> > > >> >> >> > What do you think? Good idea? Bad idea? 
> >> >> >> > >> >> >> My initial impression is that this change is not a good idea. It > >> >> >> would > >> >> >> expose the specification, implementations and applications to a > lot > >> >> >> of > >> >> >> corner case behaviors. For example, if a video's width and height > >> >> >> hasn't been received yet, then texImage2D(..., video) would have > to > >> >> >> allocate a 1x1 texture; but if the width and height are known, > then > >> >> >> it > >> >> >> would have to allocate the right amount of storage. A naive app > >> >> >> might > >> >> >> call texImage2D only the first time and texSubImage2D > subsequently, > >> >> >> so > >> >> >> if the first texImage2D call was made before the metadata was > >> >> >> downloaded, it would never render correctly. I think the current > >> >> >> fail-fast behavior is best, and already has the result that it > >> >> >> renders > >> >> >> a black texture if the upload fails; the call will (implicitly) > >> >> >> generate INVALID_OPERATION, and the texture will be incomplete. > >> >> > > >> >> > > >> >> > I don't see how that's worse than the current situation which is > you > >> >> > call > >> >> > texImage2D and pray it works. You have no idea if it's going to > work > >> >> > or > >> >> > not. > >> >> > If you do this does it work? > >> >> > > >> >> > okToUseVideo = false; > >> >> > video = document.createElement("video"); > >> >> > video.src = "movie#1" > >> >> > video.addEventListener('playing', function() { okToUseVideo = > true; > >> >> > } > >> >> > > >> >> > frameCount = 0; > >> >> > > >> >> > function render() { > >> >> > if (okToUseVideo) { > >> >> > gl.texImage2D(... , video); > >> >> > > >> >> > ++frameCount; > >> >> > if (frameCount > 1000) { > >> >> > video.src = "movie2"; > >> >> > } > >> >> > } > >> >> > } > >> >> > > >> >> > > >> >> > Basically after some amount of time I switch video.src to a new > >> >> > movie. > >> >> > Is > >> >> > the video off limits now? > >> >> > >> >> The assumption should be "yes". When the source of the media element > >> >> is set, it is immediately invalid to use until the onload / playing > >> >> handler is called. > >> >> > >> >> > Will it use the old frame from the old movie until > >> >> > the new movie is buffered or will it give me INVALID_OPERATION? I > >> >> > have > >> >> > no > >> >> > idea and it's not specified. Same with img. > >> >> > > >> >> > img.src = "image#1" > >> >> > img.onload = function() { > >> >> > img.src = "image#2"; > >> >> > gl.texImage2D(...., img); // what's this? old image, no image, > >> >> > INVALID_OPERATION? > >> >> > } > >> >> > >> >> Yes, this should be assumed to produce INVALID_OPERATION. > >> >> > >> >> > >> >> > The texSubImage2D issue is not helped by the current spec. If > >> >> > video.src > >> >> > = > >> >> > useChoosenURL then you have no idea what the width and height are > >> >> > until > >> >> > the > >> >> > 'playing' event (or whatever event) which is no different than if > we > >> >> > changed > >> >> > it. > >> >> > > >> >> > Changing it IMO means far less broken websites and I can't see any > >> >> > disadvantages. Sure you can get a 1x1 pixel texture to start and > call > >> >> > texSubImage2D now but you can do that already. > >> >> > >> >> With the current fail-fast behavior, where INVALID_OPERATION will be > >> >> generated, the texture will be in one of two states: (1) its previous > >> >> state, because the texImage2D call failed; or (2) having the width > and > >> >> height of the incoming media element. 
> >> >> > >> >> With the proposed behavior to never generate an error, the texture > >> >> will be either 1x1 or width x height. Another problem with this > >> >> proposal is that there is no way with the ES 2.0 or WebGL API to > query > >> >> the size of a level of a texture, because GetTexLevelParameter was > >> >> removed from the OpenGL ES API (and, unfortunately, not reintroduced > >> >> in ES 3.0). Therefore the behavior is completely silent -- there is > no > >> >> way for the application developer to find out what happened (no error > >> >> reported, and still renders like an incomplete texture would). > >> >> > >> >> I agree that the failing behavior should be specified and, more > >> >> importantly, tests written verifying it. If the INVALID_OPERATION > >> >> error were spec'ed, would that address your primary concern? I'm not > >> >> convinced that silently making the texture 1x1 is a good path to > take. > >> >> > >> >> -Ken > >> >> > >> >> > >> >> >> > >> >> >> > >> >> >> If we want to spec this more tightly then we'll need to do more > work > >> >> >> in the conformance suite to forcibly stall HTTP downloads of the > >> >> >> video > >> >> >> resources in the test suite at well known points. I'm vaguely > aware > >> >> >> that WebKit's HTTP tests do this with a custom server. Requiring a > >> >> >> custom server in order to run the WebGL conformance suite at all > >> >> >> would > >> >> >> have pretty significant disadvantages. > >> >> >> > >> >> >> -Ken > >> >> > > >> >> > > >> > > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cma...@ Wed Oct 31 15:51:18 2012 From: cma...@ (Chris Marrin) Date: Wed, 31 Oct 2012 15:51:18 -0700 Subject: [Public WebGL] Microsoft Broke the Silence In-Reply-To: References: Message-ID: <0EA9975A-096B-43DC-99B2-731C03F1A0AF@apple.com> On Oct 30, 2012, at 4:06 PM, Brandon Jones wrote: > I'm not sure if this is "breaking the silence", as it's just a reiteration of their previously stated position. There's a small window of possibility hinted at, but the phrasing is very political: > > "If we can solve the security problems, I think we'd seriously look at some way of producing 3D graphics for the Web." Interestingly, they say WE in the first part and yet have never participated in WebGL, nor have they ever made any sort of proposal or even given a direct and specific critique of what their security concerns are. I think it would be healthy for them to make a proposal for 3D on the web, even if it were at odds with WebGL. It would mean they are talking and that's the first step. > > Which indicates that they're interested in 3D on the web, but not necessarily WebGL. If they're waiting for security problems to be solved before moving, however, WebGL could very well have built up enough momentum that it might force their hand in a decision to support it. What it means to us is that we need to hammer out the security issues quickly and comprehensively so that there's no excuse not to support 3D on the Web. I don't think we will ever "hammer out the security issues" to the satisfaction of some people. All we can do is get a few years of reliable operation under our belts. They either come around or they don't. Does Windows Phone 8 do curated apps? Can someone write a WebKit based browser for them? If so, perhaps IE support won't really be relevant :-) ----- ~Chris Marrin cmarrin...@ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From baj...@ Wed Oct 31 16:16:14 2012 From: baj...@ (Brandon Jones) Date: Wed, 31 Oct 2012 16:16:14 -0700 Subject: [Public WebGL] texImage2D on unfinished images and video In-Reply-To: References: Message-ID: The only downside I can see to the no-ops option is that if you are re-using textures as part of your app it can lead to "stranger" behavior than the 1x1 black pixel option. In some circumstances it may mask errors which otherwise would have been apparent by virtue of the black texture. It's a minor thing, but I think that the 1x1 black texture option is ever so slightly more developer friendly. On Wed, Oct 31, 2012 at 3:49 PM, Gregg Tavares (??) wrote: > Let's discuss this a little more. > > What if both of those were no-ops? Better? Worse? > > In that case calling texImage2D on an unfinished image/video would leave > your texture as is. For a new texture that would be 0x0 texture which > renders as transparent black. This would also mean switching images/videos > would potentially keep the last image. > > > > > > On Wed, Oct 31, 2012 at 3:15 PM, Kenneth Russell wrote: > >> On Mon, Oct 22, 2012 at 11:28 AM, Gregg Tavares (??) >> wrote: >> > So checked. Three.js always uses texImage2D for video >> > So does this >> > >> http://dev.opera.com/static/articles/2012/webgl-postprocessing/webgl-pp/multipass.html >> > and this >> > >> http://badassjs.com/post/16472398856/photobooth-style-live-video-effects-in-javascript-and >> > and this >> > >> https://developer.mozilla.org/en-US/docs/WebGL/Animating_textures_in_WebGL >> > and this >> http://videos.mozilla.org/serv/mozhacks/flight-of-the-navigator/ >> > and this http://radiapp.com/samples/Video_HPtrailer_WebGL_demo.html >> > and this http://sp0t.org/videoriot/ >> > >> > This one use texSubImage2D >> > http://webglsamples.googlecode.com/hg/color-adjust/color-adjust.html >> > >> > I only bring that up to point out that the fact that texSubImage2D usage >> > would be problematic may not be that important >> > >> > On the other hand, both video and WebRTC are planning on dynamic >> resolution >> > adjustments based on how fast your internet connection is. >> > >> > I bring that up because it means knowing the resolution so you can call >> > texSubImage2D seems like something you can only count on if you have >> 100% >> > control over the video source. >> > >> > I seems like to help WebGL fit more uses cases (cases where you don't >> > control those sources like say a video site or a video chat site) that >> it >> > would be more useful if things "just worked" which is relatively easy if >> > texImage2D just works and seems like it would be rather painful if it >> could >> > possibly generate an error in various states of switching resolutions or >> > changing videos. >> > >> > I feel like GetTexLevelParameter is an orthogonal issue. There are >> plenty of >> > things you can do without knowing the resolution of the image/video. >> > >> > Also, while an unloaded video/image could return 1x1, off the top of my >> > head, what it returns while changing resolutions or queuing the next >> video >> > could be implementation defined. Some browsers might return 1x1 while >> > switching. Others might return the last valid frame until such time as a >> > frame from the new source is ready. As long as there is no error that >> seems >> > better to me. >> >> It seems that the consensus is to change the WebGL specification as you >> suggest. 
>> >> The changes would essentially be: >> >> - If an incomplete HTMLImageElement or HTMLVideoElement is passed to >> either texImage2D or texSubImage2D, then the source is treated as a >> 1x1 texture containing the RGBA value (0, 0, 0, 1). This means that >> texImage2D will cause the level of the passed texture object to be >> redefined as size 1x1; texSubImage2D will upload a single black pixel >> at the given xoffset and yoffset. It would be defined that an OpenGL >> error would never occur as a result of these calls. >> >> Are there any objections to these changes? If not, I'll update the spec. >> >> -Ken >> >> >> >> > >> > On Mon, Oct 22, 2012 at 10:32 AM, Kenneth Russell >> wrote: >> >> >> >> On Sun, Oct 21, 2012 at 11:52 PM, Gregg Tavares (??) >> >> wrote: >> >> > My concern is if you have a video player library that plays a >> playlist >> >> > of >> >> > videos and then you decide later to add some WebGL to it, you have >> to go >> >> > through all kinds of contortions just to avoid an INVALID_OPERATION. >> The >> >> > framework you're using for the video playlists seems like it would be >> >> > unlikely to have the needed functionality built in to let you know >> when >> >> > top >> >> > stop calling texImage2D and when to start again. >> >> > >> >> > If the error was removed and 1x1 was substituted that problem would >> go >> >> > away >> >> > >> >> > I don't see what error adds except to make it more complicated. >> >> >> >> Let's think through this use case. The playlist library iterates >> >> through different videos. As it switches between videos it's clearly >> >> necessary to send some sort of event to the application, because the >> >> videos might be different resolutions. Unless the application is >> >> calling texImage2D all the time (instead of texSubImage2D most of the >> >> time), it will have to watch for these events to figure out when it >> >> might need to reallocate the texture at a different size. (If the app >> >> calls texImage2D all the time, then it doesn't matter whether WebGL >> >> generates INVALID_OPERATION or a 1x1 black texture. The app will never >> >> need to know whether the upload succeeded or failed, because failures >> >> will be transient, and the app will automatically recover from them >> >> when the next video starts playing. If texImage2D generates >> >> INVALID_OPERATION, the previous contents of the texture will be >> >> preserved.) >> >> >> >> If WebGL generates INVALID_OPERATION for incomplete texture uploads, >> >> then the app either needs to watch for those errors coming back from >> >> OpenGL, or watch for the onplay event to know when it's guaranteed to >> >> be OK to upload a frame. If the app doesn't do this, and mostly calls >> >> texSubImage2D to upload new frames, then the video can get "stuck" as >> >> the app tries to upload new frames via texSubImage2D while the >> >> underlying texture is the wrong size. >> >> >> >> If WebGL silently creates a 1x1 black texture for incomplete texture >> >> uploads, then the app still needs to watch for the onplay event to >> >> know when to call texImage2D instead of texSubImage2D. Otherwise, the >> >> video can still get "stuck" displaying a 1x1 black texture. >> >> >> >> As far as I see it, the app has to do the same work regardless of >> >> whether incomplete texture uploads generate an error or silently >> >> produce a 1x1 black texture. 
The main difference is that the error can >> >> be observed by the application, but the silent allocation of the 1x1 >> >> black texture can not, because glGetTexLevelParameter doesn't exist in >> >> the GLES API. For this reason I continue to think that generating >> >> INVALID_OPERATION is the more transparent behavior for WebGL. >> >> >> >> Regardless of the decision here, I agree with you that the behavior >> >> should be specified and we collectively should try to write some tests >> >> for it -- ideally, ones that explicitly force failure or pauses at >> >> certain download points, rather than just trying to load and play the >> >> video many times. >> >> >> >> -Ken >> >> >> >> >> >> >> >> > On Fri, Oct 19, 2012 at 3:09 PM, Kenneth Russell >> wrote: >> >> >> >> >> >> On Tue, Oct 16, 2012 at 4:46 PM, Gregg Tavares (??) < >> gman...@> >> >> >> wrote: >> >> >> > >> >> >> > >> >> >> > >> >> >> > On Tue, Oct 16, 2012 at 4:28 PM, Kenneth Russell >> >> >> > wrote: >> >> >> >> >> >> >> >> On Tue, Oct 16, 2012 at 3:54 PM, Gregg Tavares (??) >> >> >> >> >> >> >> >> wrote: >> >> >> >> > I don't think the spec makes this clear what happens when you >> try >> >> >> >> > to >> >> >> >> > call >> >> >> >> > texImage2D or texSubImage2D on an image or video that is not >> yet >> >> >> >> > loaded. >> >> >> >> >> >> >> >> Right, the behavior in this case is not defined. I'm pretty sure >> >> >> >> this >> >> >> >> was discussed a long time ago in the working group, but it seemed >> >> >> >> difficult to write a test for any defined behavior, so by >> consensus >> >> >> >> implementations generate INVALID_OPERATION when attempting to >> upload >> >> >> >> incomplete images or video to WebGL textures. >> >> >> > >> >> >> > >> >> >> > I don't see a problem writing a test. Basically >> >> >> > >> >> >> > var playing = false >> >> >> > video.src = url >> >> >> > video.play(); >> >> >> > video.addEventListener('playing', function() { playing = true;}); >> >> >> > >> >> >> > waitForVideo() { >> >> >> > gl.bindTexture(...) >> >> >> > gl.texImage2D(..., video); >> >> >> > if (playing) { >> >> >> > doTestsOnVideoContent(); >> >> >> > if (gl.getError() != gl.NO_ERROR) { testFailed("there should >> be >> >> >> > no >> >> >> > errors"); } >> >> >> > } else { >> >> >> > requestAnimationFrame(waitForVideo); >> >> >> > } >> >> >> > >> >> >> > This basically says an implementation is never allowed to >> generate an >> >> >> > error. >> >> >> > it might not be a perfect test but neither is the current one. In >> >> >> > fact >> >> >> > this >> >> >> > should test it just fine >> >> >> > >> >> >> > video = document.createElement("video"); >> >> >> > gl.texImage2D(..., video); >> >> >> > glErrorShouldBe(gl.NO_ERROR); >> >> >> > video.src = "someValidURL": >> >> >> > gl.texImage2D(..., video); >> >> >> > glErrorShouldBe(gl.NO_ERROR); >> >> >> > video.src = "someOtherValidURL": >> >> >> > gl.texImage2D(..., video); >> >> >> > glErrorShouldBe(gl.NO_ERROR); >> >> >> > >> >> >> > Then do the other 'playing' event tests. >> >> >> > >> >> >> >> >> >> >> >> >> >> >> >> > Example >> >> >> >> > >> >> >> >> > video = document.createElement("video"); >> >> >> >> > video.src = "http://mysite.com/myvideo"; >> >> >> >> > video.play(); >> >> >> >> > >> >> >> >> > function render() { >> >> >> >> > gl.bindTexture(...) 
>> >> >> >> > gl.texImage2D(..., video); >> >> >> >> > gl.drawArrays(...); >> >> >> >> > window.requestAnimationFrame(render); >> >> >> >> > } >> >> >> >> > >> >> >> >> > Chrome right now will synthesize a GL error if the system >> hasn't >> >> >> >> > actually >> >> >> >> > gotten the video to start (as in if it's still buffering). >> >> >> >> > >> >> >> >> > Off the top of my head, it seems like it would just be >> friendlier >> >> >> >> > to >> >> >> >> > make a >> >> >> >> > black texture (0,0,0,1) or (0,0,0,0) 1x1 pixels for videos that >> >> >> >> > aren't >> >> >> >> > loaded yet. >> >> >> >> > >> >> >> >> > Otherwise, having to check that the video is ready before >> calling >> >> >> >> > texImage2D >> >> >> >> > seems kind of burdensome on the developer. If they want to >> check >> >> >> >> > they >> >> >> >> > should >> >> >> >> > use video.addEventListener('playing') or similar. If they were >> >> >> >> > making >> >> >> >> > a >> >> >> >> > video player they'd have to add a bunch of logic when queuing >> the >> >> >> >> > next >> >> >> >> > video. >> >> >> >> > >> >> >> >> > Same with images. >> >> >> >> > >> >> >> >> > img = document.createElement("img"); >> >> >> >> > img.src = "http://mysite.com/myimage"; >> >> >> >> > >> >> >> >> > function render() { >> >> >> >> > gl.bindTexture(...) >> >> >> >> > gl.texImage2D(..., img); >> >> >> >> > gl.drawArrays(...); >> >> >> >> > window.requestAnimationFrame(render); >> >> >> >> > } >> >> >> >> > >> >> >> >> > If you want to know if the image has loaded use img.onload but >> >> >> >> > otherwise >> >> >> >> > don't fail the call? >> >> >> >> > >> >> >> >> > What do you think? Good idea? Bad idea? >> >> >> >> >> >> >> >> My initial impression is that this change is not a good idea. It >> >> >> >> would >> >> >> >> expose the specification, implementations and applications to a >> lot >> >> >> >> of >> >> >> >> corner case behaviors. For example, if a video's width and height >> >> >> >> hasn't been received yet, then texImage2D(..., video) would have >> to >> >> >> >> allocate a 1x1 texture; but if the width and height are known, >> then >> >> >> >> it >> >> >> >> would have to allocate the right amount of storage. A naive app >> >> >> >> might >> >> >> >> call texImage2D only the first time and texSubImage2D >> subsequently, >> >> >> >> so >> >> >> >> if the first texImage2D call was made before the metadata was >> >> >> >> downloaded, it would never render correctly. I think the current >> >> >> >> fail-fast behavior is best, and already has the result that it >> >> >> >> renders >> >> >> >> a black texture if the upload fails; the call will (implicitly) >> >> >> >> generate INVALID_OPERATION, and the texture will be incomplete. >> >> >> > >> >> >> > >> >> >> > I don't see how that's worse than the current situation which is >> you >> >> >> > call >> >> >> > texImage2D and pray it works. You have no idea if it's going to >> work >> >> >> > or >> >> >> > not. >> >> >> > If you do this does it work? >> >> >> > >> >> >> > okToUseVideo = false; >> >> >> > video = document.createElement("video"); >> >> >> > video.src = "movie#1" >> >> >> > video.addEventListener('playing', function() { okToUseVideo = >> true; >> >> >> > } >> >> >> > >> >> >> > frameCount = 0; >> >> >> > >> >> >> > function render() { >> >> >> > if (okToUseVideo) { >> >> >> > gl.texImage2D(... 
, video); >> >> >> > >> >> >> > ++frameCount; >> >> >> > if (frameCount > 1000) { >> >> >> > video.src = "movie2"; >> >> >> > } >> >> >> > } >> >> >> > } >> >> >> > >> >> >> > >> >> >> > Basically after some amount of time I switch video.src to a new >> >> >> > movie. >> >> >> > Is >> >> >> > the video off limits now? >> >> >> >> >> >> The assumption should be "yes". When the source of the media element >> >> >> is set, it is immediately invalid to use until the onload / playing >> >> >> handler is called. >> >> >> >> >> >> > Will it use the old frame from the old movie until >> >> >> > the new movie is buffered or will it give me INVALID_OPERATION? I >> >> >> > have >> >> >> > no >> >> >> > idea and it's not specified. Same with img. >> >> >> > >> >> >> > img.src = "image#1" >> >> >> > img.onload = function() { >> >> >> > img.src = "image#2"; >> >> >> > gl.texImage2D(...., img); // what's this? old image, no image, >> >> >> > INVALID_OPERATION? >> >> >> > } >> >> >> >> >> >> Yes, this should be assumed to produce INVALID_OPERATION. >> >> >> >> >> >> >> >> >> > The texSubImage2D issue is not helped by the current spec. If >> >> >> > video.src >> >> >> > = >> >> >> > useChoosenURL then you have no idea what the width and height are >> >> >> > until >> >> >> > the >> >> >> > 'playing' event (or whatever event) which is no different than if >> we >> >> >> > changed >> >> >> > it. >> >> >> > >> >> >> > Changing it IMO means far less broken websites and I can't see any >> >> >> > disadvantages. Sure you can get a 1x1 pixel texture to start and >> call >> >> >> > texSubImage2D now but you can do that already. >> >> >> >> >> >> With the current fail-fast behavior, where INVALID_OPERATION will be >> >> >> generated, the texture will be in one of two states: (1) its >> previous >> >> >> state, because the texImage2D call failed; or (2) having the width >> and >> >> >> height of the incoming media element. >> >> >> >> >> >> With the proposed behavior to never generate an error, the texture >> >> >> will be either 1x1 or width x height. Another problem with this >> >> >> proposal is that there is no way with the ES 2.0 or WebGL API to >> query >> >> >> the size of a level of a texture, because GetTexLevelParameter was >> >> >> removed from the OpenGL ES API (and, unfortunately, not reintroduced >> >> >> in ES 3.0). Therefore the behavior is completely silent -- there is >> no >> >> >> way for the application developer to find out what happened (no >> error >> >> >> reported, and still renders like an incomplete texture would). >> >> >> >> >> >> I agree that the failing behavior should be specified and, more >> >> >> importantly, tests written verifying it. If the INVALID_OPERATION >> >> >> error were spec'ed, would that address your primary concern? I'm not >> >> >> convinced that silently making the texture 1x1 is a good path to >> take. >> >> >> >> >> >> -Ken >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> If we want to spec this more tightly then we'll need to do more >> work >> >> >> >> in the conformance suite to forcibly stall HTTP downloads of the >> >> >> >> video >> >> >> >> resources in the test suite at well known points. I'm vaguely >> aware >> >> >> >> that WebKit's HTTP tests do this with a custom server. Requiring >> a >> >> >> >> custom server in order to run the WebGL conformance suite at all >> >> >> >> would >> >> >> >> have pretty significant disadvantages. 
>> >> >> >> >> >> >> >> -Ken >> >> >> > >> >> >> > >> >> > >> >> > >> > >> > >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Wed Oct 31 16:44:05 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo56S+55SoKQ==?=) Date: Wed, 31 Oct 2012 16:44:05 -0700 Subject: [Public WebGL] texImage2D on unfinished images and video In-Reply-To: References: Message-ID: Is that more developer friendly? It means if you want to show the same frame between videos you have to do extra work. Conversely, if it's a no-op and you actually do want to show black then you've got to do extra work. Two more things to throw into the fire: Should progressive loading be supported? Example var update = true; img.src = "someProgressiveLarge.jpg"; img.onload = function() { updateTexture(); update = false; } function updateTexture() { if (update) { gl.texImage2D(...., img); } } function render() { updateTexture(); gl.drawXXX(...); requestAnimationFrame(render); } I don't think it should be required. The question is: is it valid for a WebGL implementation to provide intermediate results between setting img.src and receiving the img 'load' event? Should animated images (gif) be supported? I just did a test and animated gif img tags start displaying as soon as one frame is available, but the 'load' event is not delivered until the entire file has been downloaded. I guess that's really a form of progressive loading, but after the file has finished loading, is it okay (though not mandatory) for an implementation to provide a different frame each time texImage2D is called? On Wed, Oct 31, 2012 at 4:16 PM, Brandon Jones wrote: > The only downside I can see to the no-ops option is that if you are > re-using textures as part of your app it can lead to "stranger" behavior > than the 1x1 black pixel option. In some circumstances it may mask errors > which otherwise would have been apparent by virtue of the black texture. > > It's a minor thing, but I think that the 1x1 black texture option is ever > so slightly more developer friendly. > > > On Wed, Oct 31, 2012 at 3:49 PM, Gregg Tavares (??) wrote: > >> Let's discuss this a little more. >> >> What if both of those were no-ops? Better? Worse? >> >> In that case calling texImage2D on an unfinished image/video would leave >> your texture as is. For a new texture that would be 0x0 texture which >> renders as transparent black. This would also mean switching images/videos >> would potentially keep the last image. >> >> >> >> >> >> On Wed, Oct 31, 2012 at 3:15 PM, Kenneth Russell wrote: >> >>> On Mon, Oct 22, 2012 at 11:28 AM, Gregg Tavares (??) >>> wrote: >>> > So checked.
Three.js always uses texImage2D for video >>> > So does this >>> > >>> http://dev.opera.com/static/articles/2012/webgl-postprocessing/webgl-pp/multipass.html >>> > and this >>> > >>> http://badassjs.com/post/16472398856/photobooth-style-live-video-effects-in-javascript-and >>> > and this >>> > >>> https://developer.mozilla.org/en-US/docs/WebGL/Animating_textures_in_WebGL >>> > and this >>> http://videos.mozilla.org/serv/mozhacks/flight-of-the-navigator/ >>> > and this http://radiapp.com/samples/Video_HPtrailer_WebGL_demo.html >>> > and this http://sp0t.org/videoriot/ >>> > >>> > This one use texSubImage2D >>> > http://webglsamples.googlecode.com/hg/color-adjust/color-adjust.html >>> > >>> > I only bring that up to point out that the fact that texSubImage2D >>> usage >>> > would be problematic may not be that important >>> > >>> > On the other hand, both video and WebRTC are planning on dynamic >>> resolution >>> > adjustments based on how fast your internet connection is. >>> > >>> > I bring that up because it means knowing the resolution so you can call >>> > texSubImage2D seems like something you can only count on if you have >>> 100% >>> > control over the video source. >>> > >>> > I seems like to help WebGL fit more uses cases (cases where you don't >>> > control those sources like say a video site or a video chat site) that >>> it >>> > would be more useful if things "just worked" which is relatively easy >>> if >>> > texImage2D just works and seems like it would be rather painful if it >>> could >>> > possibly generate an error in various states of switching resolutions >>> or >>> > changing videos. >>> > >>> > I feel like GetTexLevelParameter is an orthogonal issue. There are >>> plenty of >>> > things you can do without knowing the resolution of the image/video. >>> > >>> > Also, while an unloaded video/image could return 1x1, off the top of my >>> > head, what it returns while changing resolutions or queuing the next >>> video >>> > could be implementation defined. Some browsers might return 1x1 while >>> > switching. Others might return the last valid frame until such time as >>> a >>> > frame from the new source is ready. As long as there is no error that >>> seems >>> > better to me. >>> >>> It seems that the consensus is to change the WebGL specification as you >>> suggest. >>> >>> The changes would essentially be: >>> >>> - If an incomplete HTMLImageElement or HTMLVideoElement is passed to >>> either texImage2D or texSubImage2D, then the source is treated as a >>> 1x1 texture containing the RGBA value (0, 0, 0, 1). This means that >>> texImage2D will cause the level of the passed texture object to be >>> redefined as size 1x1; texSubImage2D will upload a single black pixel >>> at the given xoffset and yoffset. It would be defined that an OpenGL >>> error would never occur as a result of these calls. >>> >>> Are there any objections to these changes? If not, I'll update the spec. >>> >>> -Ken >>> >>> >>> >>> > >>> > On Mon, Oct 22, 2012 at 10:32 AM, Kenneth Russell >>> wrote: >>> >> >>> >> On Sun, Oct 21, 2012 at 11:52 PM, Gregg Tavares (??) >> > >>> >> wrote: >>> >> > My concern is if you have a video player library that plays a >>> playlist >>> >> > of >>> >> > videos and then you decide later to add some WebGL to it, you have >>> to go >>> >> > through all kinds of contortions just to avoid an >>> INVALID_OPERATION. 
The >>> >> > framework you're using for the video playlists seems like it would >>> be >>> >> > unlikely to have the needed functionality built in to let you know >>> when >>> >> > top >>> >> > stop calling texImage2D and when to start again. >>> >> > >>> >> > If the error was removed and 1x1 was substituted that problem would >>> go >>> >> > away >>> >> > >>> >> > I don't see what error adds except to make it more complicated. >>> >> >>> >> Let's think through this use case. The playlist library iterates >>> >> through different videos. As it switches between videos it's clearly >>> >> necessary to send some sort of event to the application, because the >>> >> videos might be different resolutions. Unless the application is >>> >> calling texImage2D all the time (instead of texSubImage2D most of the >>> >> time), it will have to watch for these events to figure out when it >>> >> might need to reallocate the texture at a different size. (If the app >>> >> calls texImage2D all the time, then it doesn't matter whether WebGL >>> >> generates INVALID_OPERATION or a 1x1 black texture. The app will never >>> >> need to know whether the upload succeeded or failed, because failures >>> >> will be transient, and the app will automatically recover from them >>> >> when the next video starts playing. If texImage2D generates >>> >> INVALID_OPERATION, the previous contents of the texture will be >>> >> preserved.) >>> >> >>> >> If WebGL generates INVALID_OPERATION for incomplete texture uploads, >>> >> then the app either needs to watch for those errors coming back from >>> >> OpenGL, or watch for the onplay event to know when it's guaranteed to >>> >> be OK to upload a frame. If the app doesn't do this, and mostly calls >>> >> texSubImage2D to upload new frames, then the video can get "stuck" as >>> >> the app tries to upload new frames via texSubImage2D while the >>> >> underlying texture is the wrong size. >>> >> >>> >> If WebGL silently creates a 1x1 black texture for incomplete texture >>> >> uploads, then the app still needs to watch for the onplay event to >>> >> know when to call texImage2D instead of texSubImage2D. Otherwise, the >>> >> video can still get "stuck" displaying a 1x1 black texture. >>> >> >>> >> As far as I see it, the app has to do the same work regardless of >>> >> whether incomplete texture uploads generate an error or silently >>> >> produce a 1x1 black texture. The main difference is that the error can >>> >> be observed by the application, but the silent allocation of the 1x1 >>> >> black texture can not, because glGetTexLevelParameter doesn't exist in >>> >> the GLES API. For this reason I continue to think that generating >>> >> INVALID_OPERATION is the more transparent behavior for WebGL. >>> >> >>> >> Regardless of the decision here, I agree with you that the behavior >>> >> should be specified and we collectively should try to write some tests >>> >> for it -- ideally, ones that explicitly force failure or pauses at >>> >> certain download points, rather than just trying to load and play the >>> >> video many times. >>> >> >>> >> -Ken >>> >> >>> >> >>> >> >>> >> > On Fri, Oct 19, 2012 at 3:09 PM, Kenneth Russell >>> wrote: >>> >> >> >>> >> >> On Tue, Oct 16, 2012 at 4:46 PM, Gregg Tavares (??) < >>> gman...@> >>> >> >> wrote: >>> >> >> > >>> >> >> > >>> >> >> > >>> >> >> > On Tue, Oct 16, 2012 at 4:28 PM, Kenneth Russell >> > >>> >> >> > wrote: >>> >> >> >> >>> >> >> >> On Tue, Oct 16, 2012 at 3:54 PM, Gregg Tavares (??) 
>>> >> >> >> >>> >> >> >> wrote: >>> >> >> >> > I don't think the spec makes this clear what happens when you >>> try >>> >> >> >> > to >>> >> >> >> > call >>> >> >> >> > texImage2D or texSubImage2D on an image or video that is not >>> yet >>> >> >> >> > loaded. >>> >> >> >> >>> >> >> >> Right, the behavior in this case is not defined. I'm pretty sure >>> >> >> >> this >>> >> >> >> was discussed a long time ago in the working group, but it >>> seemed >>> >> >> >> difficult to write a test for any defined behavior, so by >>> consensus >>> >> >> >> implementations generate INVALID_OPERATION when attempting to >>> upload >>> >> >> >> incomplete images or video to WebGL textures. >>> >> >> > >>> >> >> > >>> >> >> > I don't see a problem writing a test. Basically >>> >> >> > >>> >> >> > var playing = false >>> >> >> > video.src = url >>> >> >> > video.play(); >>> >> >> > video.addEventListener('playing', function() { playing = true;}); >>> >> >> > >>> >> >> > waitForVideo() { >>> >> >> > gl.bindTexture(...) >>> >> >> > gl.texImage2D(..., video); >>> >> >> > if (playing) { >>> >> >> > doTestsOnVideoContent(); >>> >> >> > if (gl.getError() != gl.NO_ERROR) { testFailed("there >>> should be >>> >> >> > no >>> >> >> > errors"); } >>> >> >> > } else { >>> >> >> > requestAnimationFrame(waitForVideo); >>> >> >> > } >>> >> >> > >>> >> >> > This basically says an implementation is never allowed to >>> generate an >>> >> >> > error. >>> >> >> > it might not be a perfect test but neither is the current one. In >>> >> >> > fact >>> >> >> > this >>> >> >> > should test it just fine >>> >> >> > >>> >> >> > video = document.createElement("video"); >>> >> >> > gl.texImage2D(..., video); >>> >> >> > glErrorShouldBe(gl.NO_ERROR); >>> >> >> > video.src = "someValidURL": >>> >> >> > gl.texImage2D(..., video); >>> >> >> > glErrorShouldBe(gl.NO_ERROR); >>> >> >> > video.src = "someOtherValidURL": >>> >> >> > gl.texImage2D(..., video); >>> >> >> > glErrorShouldBe(gl.NO_ERROR); >>> >> >> > >>> >> >> > Then do the other 'playing' event tests. >>> >> >> > >>> >> >> >> >>> >> >> >> >>> >> >> >> > Example >>> >> >> >> > >>> >> >> >> > video = document.createElement("video"); >>> >> >> >> > video.src = "http://mysite.com/myvideo"; >>> >> >> >> > video.play(); >>> >> >> >> > >>> >> >> >> > function render() { >>> >> >> >> > gl.bindTexture(...) >>> >> >> >> > gl.texImage2D(..., video); >>> >> >> >> > gl.drawArrays(...); >>> >> >> >> > window.requestAnimationFrame(render); >>> >> >> >> > } >>> >> >> >> > >>> >> >> >> > Chrome right now will synthesize a GL error if the system >>> hasn't >>> >> >> >> > actually >>> >> >> >> > gotten the video to start (as in if it's still buffering). >>> >> >> >> > >>> >> >> >> > Off the top of my head, it seems like it would just be >>> friendlier >>> >> >> >> > to >>> >> >> >> > make a >>> >> >> >> > black texture (0,0,0,1) or (0,0,0,0) 1x1 pixels for videos >>> that >>> >> >> >> > aren't >>> >> >> >> > loaded yet. >>> >> >> >> > >>> >> >> >> > Otherwise, having to check that the video is ready before >>> calling >>> >> >> >> > texImage2D >>> >> >> >> > seems kind of burdensome on the developer. If they want to >>> check >>> >> >> >> > they >>> >> >> >> > should >>> >> >> >> > use video.addEventListener('playing') or similar. If they were >>> >> >> >> > making >>> >> >> >> > a >>> >> >> >> > video player they'd have to add a bunch of logic when queuing >>> the >>> >> >> >> > next >>> >> >> >> > video. >>> >> >> >> > >>> >> >> >> > Same with images. 
>>> >> >> >> > >>> >> >> >> > img = document.createElement("img"); >>> >> >> >> > img.src = "http://mysite.com/myimage"; >>> >> >> >> > >>> >> >> >> > function render() { >>> >> >> >> > gl.bindTexture(...) >>> >> >> >> > gl.texImage2D(..., img); >>> >> >> >> > gl.drawArrays(...); >>> >> >> >> > window.requestAnimationFrame(render); >>> >> >> >> > } >>> >> >> >> > >>> >> >> >> > If you want to know if the image has loaded use img.onload but >>> >> >> >> > otherwise >>> >> >> >> > don't fail the call? >>> >> >> >> > >>> >> >> >> > What do you think? Good idea? Bad idea? >>> >> >> >> >>> >> >> >> My initial impression is that this change is not a good idea. It >>> >> >> >> would >>> >> >> >> expose the specification, implementations and applications to a >>> lot >>> >> >> >> of >>> >> >> >> corner case behaviors. For example, if a video's width and >>> height >>> >> >> >> hasn't been received yet, then texImage2D(..., video) would >>> have to >>> >> >> >> allocate a 1x1 texture; but if the width and height are known, >>> then >>> >> >> >> it >>> >> >> >> would have to allocate the right amount of storage. A naive app >>> >> >> >> might >>> >> >> >> call texImage2D only the first time and texSubImage2D >>> subsequently, >>> >> >> >> so >>> >> >> >> if the first texImage2D call was made before the metadata was >>> >> >> >> downloaded, it would never render correctly. I think the current >>> >> >> >> fail-fast behavior is best, and already has the result that it >>> >> >> >> renders >>> >> >> >> a black texture if the upload fails; the call will (implicitly) >>> >> >> >> generate INVALID_OPERATION, and the texture will be incomplete. >>> >> >> > >>> >> >> > >>> >> >> > I don't see how that's worse than the current situation which is >>> you >>> >> >> > call >>> >> >> > texImage2D and pray it works. You have no idea if it's going to >>> work >>> >> >> > or >>> >> >> > not. >>> >> >> > If you do this does it work? >>> >> >> > >>> >> >> > okToUseVideo = false; >>> >> >> > video = document.createElement("video"); >>> >> >> > video.src = "movie#1" >>> >> >> > video.addEventListener('playing', function() { okToUseVideo = >>> true; >>> >> >> > } >>> >> >> > >>> >> >> > frameCount = 0; >>> >> >> > >>> >> >> > function render() { >>> >> >> > if (okToUseVideo) { >>> >> >> > gl.texImage2D(... , video); >>> >> >> > >>> >> >> > ++frameCount; >>> >> >> > if (frameCount > 1000) { >>> >> >> > video.src = "movie2"; >>> >> >> > } >>> >> >> > } >>> >> >> > } >>> >> >> > >>> >> >> > >>> >> >> > Basically after some amount of time I switch video.src to a new >>> >> >> > movie. >>> >> >> > Is >>> >> >> > the video off limits now? >>> >> >> >>> >> >> The assumption should be "yes". When the source of the media >>> element >>> >> >> is set, it is immediately invalid to use until the onload / playing >>> >> >> handler is called. >>> >> >> >>> >> >> > Will it use the old frame from the old movie until >>> >> >> > the new movie is buffered or will it give me INVALID_OPERATION? I >>> >> >> > have >>> >> >> > no >>> >> >> > idea and it's not specified. Same with img. >>> >> >> > >>> >> >> > img.src = "image#1" >>> >> >> > img.onload = function() { >>> >> >> > img.src = "image#2"; >>> >> >> > gl.texImage2D(...., img); // what's this? old image, no image, >>> >> >> > INVALID_OPERATION? >>> >> >> > } >>> >> >> >>> >> >> Yes, this should be assumed to produce INVALID_OPERATION. >>> >> >> >>> >> >> >>> >> >> > The texSubImage2D issue is not helped by the current spec. 
If >>> >> >> > video.src >>> >> >> > = >>> >> >> > useChoosenURL then you have no idea what the width and height are >>> >> >> > until >>> >> >> > the >>> >> >> > 'playing' event (or whatever event) which is no different than >>> if we >>> >> >> > changed >>> >> >> > it. >>> >> >> > >>> >> >> > Changing it IMO means far less broken websites and I can't see >>> any >>> >> >> > disadvantages. Sure you can get a 1x1 pixel texture to start and >>> call >>> >> >> > texSubImage2D now but you can do that already. >>> >> >> >>> >> >> With the current fail-fast behavior, where INVALID_OPERATION will >>> be >>> >> >> generated, the texture will be in one of two states: (1) its >>> previous >>> >> >> state, because the texImage2D call failed; or (2) having the width >>> and >>> >> >> height of the incoming media element. >>> >> >> >>> >> >> With the proposed behavior to never generate an error, the texture >>> >> >> will be either 1x1 or width x height. Another problem with this >>> >> >> proposal is that there is no way with the ES 2.0 or WebGL API to >>> query >>> >> >> the size of a level of a texture, because GetTexLevelParameter was >>> >> >> removed from the OpenGL ES API (and, unfortunately, not >>> reintroduced >>> >> >> in ES 3.0). Therefore the behavior is completely silent -- there >>> is no >>> >> >> way for the application developer to find out what happened (no >>> error >>> >> >> reported, and still renders like an incomplete texture would). >>> >> >> >>> >> >> I agree that the failing behavior should be specified and, more >>> >> >> importantly, tests written verifying it. If the INVALID_OPERATION >>> >> >> error were spec'ed, would that address your primary concern? I'm >>> not >>> >> >> convinced that silently making the texture 1x1 is a good path to >>> take. >>> >> >> >>> >> >> -Ken >>> >> >> >>> >> >> >>> >> >> >> >>> >> >> >> >>> >> >> >> If we want to spec this more tightly then we'll need to do more >>> work >>> >> >> >> in the conformance suite to forcibly stall HTTP downloads of the >>> >> >> >> video >>> >> >> >> resources in the test suite at well known points. I'm vaguely >>> aware >>> >> >> >> that WebKit's HTTP tests do this with a custom server. >>> Requiring a >>> >> >> >> custom server in order to run the WebGL conformance suite at all >>> >> >> >> would >>> >> >> >> have pretty significant disadvantages. >>> >> >> >> >>> >> >> >> -Ken >>> >> >> > >>> >> >> > >>> >> > >>> >> > >>> > >>> > >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Wed Oct 31 16:55:55 2012 From: bja...@ (Benoit Jacob) Date: Wed, 31 Oct 2012 19:55:55 -0400 Subject: [Public WebGL] Bug in context-lost.html conformance test: getContextAttributes should not return null In-Reply-To: References: <5085A6F9.6020707@mit.edu> <5085B646.3060003@mit.edu> <508844B8.4050208@mozilla.com> <50897F58.7060306@mit.edu> Message-ID: <5091BA8B.1040801@mozilla.com> No objection from me. Benoit On 12-10-31 05:02 PM, Kenneth Russell wrote: > Sorry for the delay replying; was swamped. > > After giving this more thought, I would like to change > getContextAttributes to return "WebGLContextAttributes?", and define > that the context returns null for this value while in the context lost > state. The "actual context parameters" are associated with the drawing > buffer, and there is no drawing buffer while the context is in the > lost state. I agree with Gregg's assessment that there's no reasonable > value to be returned in some situations. 
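A minimal sketch of what defensive application code would look like if this nullable return is adopted; the null check and the recovery path shown here are only illustrative, not proposed spec text:

var canvas = document.createElement("canvas");
var gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");

var attrs = gl.getContextAttributes();
if (attrs === null) {
  // Context is lost: there is no drawing buffer, so there are no actual
  // context parameters to report. Wait for restoration and query again.
  canvas.addEventListener("webglcontextrestored", function() {
    attrs = gl.getContextAttributes();
  }, false);
} else {
  // Normal path: the attributes describe the drawing buffer that was created.
  console.log("antialias: " + attrs.antialias + ", alpha: " + attrs.alpha);
}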
> > Some months ago there was a plan to add an asynchronous context > creation bit to the WebGLContextAttributes; when set, the context > would be created in the lost state and have a webglcontextrestored > event fired at it in the future. I think it would still be a good idea > to add this API, but at this point I would really like to stabilize > the top of tree conformance suite, and take another snapshot of it and > the improved spec as "version 1.0.2", before forging ahead with more > API additions. > > To be concrete: are there any objections to making the return value of > getContextAttributes nullable? > > -Ken > > > On Thu, Oct 25, 2012 at 11:29 AM, Gregg Tavares (??) wrote: >> Yes, we can actually. Ideally we can call >> >> var gl = canvas.getContext("webgl"); >> >> and the context is already lost. isContextLost() returns true and the >> contextlost event will be delivered once the current event exits. >> >> Note, the samples on the WebGL wiki and the ones on >> webglsamples.googlecode.com all handle this case. >> >> That does bring up a good point as to why getContextAttributes should maybe >> return null during context loss. I know that sucks though since you have to >> be defensive about accessing it, but I don't know what it could report since >> there is no context to give attributes about. >> >> >> >> On Thu, Oct 25, 2012 at 11:05 AM, Boris Zbarsky wrote: >>> >>> On 10/25/12 1:38 PM, Gregg Tavares (??) wrote: >>>> Off the top of my head I'd say returning the lost context's attributes >>>> is fine? >>> >>> Can we never be in a lost-context state without having created a context >>> at any point? >>> >>> >>> -Boris >>> >>> ----------------------------------------------------------- >>> You are currently subscribed to public_webgl...@ >>> To unsubscribe, send an email to majordomo...@ with >>> the following command in the body of your email: >>> unsubscribe public_webgl >>> ----------------------------------------------------------- >>> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From baj...@ Wed Oct 31 17:03:40 2012 From: baj...@ (Brandon Jones) Date: Wed, 31 Oct 2012 17:03:40 -0700 Subject: [Public WebGL] texImage2D on unfinished images and video In-Reply-To: References: Message-ID: On Wed, Oct 31, 2012 at 4:44 PM, Gregg Tavares (??) wrote: > Is that more developer friendly? It means if you want to show the same > frame between videos you have to do extra work. > I'm a little unclear on what you're referring to here. I assume you mean "show the last frame from a previous video in a playlist while the next video is loading?" I can see how that may be a nice side effect, but it also feels a bit hacky. If I want my video player to nicely transition from one video to the next I have all the events at my disposal to monitor that and do the Right Thing. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sin...@ Wed Oct 31 17:09:11 2012 From: sin...@ (Colin Mackenzie) Date: Wed, 31 Oct 2012 20:09:11 -0400 Subject: [Public WebGL] texImage2D on unfinished images and video In-Reply-To: References: Message-ID: Whether we go with no-ops or 1x1 black, either way the developer would have to recognize the result for what it is; that is to say, they'd have to be aware that incomplete images or videos can cause this result, because the result itself is not self-explanatory. There are cases to be made that the developer _should_ always know what they're doing, but the reality is that many will not remember or even consult the spec. They'll follow a tutorial or Q&A site and may or may not have complete information to go by. So, I think it'd be nice if this scenario also logs a warning to the console. I don't know that this is must-have, but it's certainly nice to have, in my opinion. On Wed, Oct 31, 2012 at 7:16 PM, Brandon Jones wrote: > The only downside I can see to the no-ops option is that if you are > re-using textures as part of your app it can lead to "stranger" behavior > than the 1x1 black pixel option. In some circumstances it may mask errors > which otherwise would have been apparent by virtue of the black texture. > > It's a minor thing, but I think that the 1x1 black texture option is ever > so slightly more developer friendly. > > > On Wed, Oct 31, 2012 at 3:49 PM, Gregg Tavares (??) wrote: > >> Let's discuss this a little more. >> >> What if both of those were no-ops? Better? Worse? >> >> In that case calling texImage2D on an unfinished image/video would leave >> your texture as is. For a new texture that would be 0x0 texture which >> renders as transparent black. This would also mean switching images/videos >> would potentially keep the last image. >> >> >> >> >> >> On Wed, Oct 31, 2012 at 3:15 PM, Kenneth Russell wrote: >> >>> On Mon, Oct 22, 2012 at 11:28 AM, Gregg Tavares (??) >>> wrote: >>> > So checked. Three.js always uses texImage2D for video >>> > So does this >>> > >>> http://dev.opera.com/static/articles/2012/webgl-postprocessing/webgl-pp/multipass.html >>> > and this >>> > >>> http://badassjs.com/post/16472398856/photobooth-style-live-video-effects-in-javascript-and >>> > and this >>> > >>> https://developer.mozilla.org/en-US/docs/WebGL/Animating_textures_in_WebGL >>> > and this >>> http://videos.mozilla.org/serv/mozhacks/flight-of-the-navigator/ >>> > and this http://radiapp.com/samples/Video_HPtrailer_WebGL_demo.html >>> > and this http://sp0t.org/videoriot/ >>> > >>> > This one use texSubImage2D >>> > http://webglsamples.googlecode.com/hg/color-adjust/color-adjust.html >>> > >>> > I only bring that up to point out that the fact that texSubImage2D >>> usage >>> > would be problematic may not be that important >>> > >>> > On the other hand, both video and WebRTC are planning on dynamic >>> resolution >>> > adjustments based on how fast your internet connection is. >>> > >>> > I bring that up because it means knowing the resolution so you can call >>> > texSubImage2D seems like something you can only count on if you have >>> 100% >>> > control over the video source. 
>>> > >>> > I seems like to help WebGL fit more uses cases (cases where you don't >>> > control those sources like say a video site or a video chat site) that >>> it >>> > would be more useful if things "just worked" which is relatively easy >>> if >>> > texImage2D just works and seems like it would be rather painful if it >>> could >>> > possibly generate an error in various states of switching resolutions >>> or >>> > changing videos. >>> > >>> > I feel like GetTexLevelParameter is an orthogonal issue. There are >>> plenty of >>> > things you can do without knowing the resolution of the image/video. >>> > >>> > Also, while an unloaded video/image could return 1x1, off the top of my >>> > head, what it returns while changing resolutions or queuing the next >>> video >>> > could be implementation defined. Some browsers might return 1x1 while >>> > switching. Others might return the last valid frame until such time as >>> a >>> > frame from the new source is ready. As long as there is no error that >>> seems >>> > better to me. >>> >>> It seems that the consensus is to change the WebGL specification as you >>> suggest. >>> >>> The changes would essentially be: >>> >>> - If an incomplete HTMLImageElement or HTMLVideoElement is passed to >>> either texImage2D or texSubImage2D, then the source is treated as a >>> 1x1 texture containing the RGBA value (0, 0, 0, 1). This means that >>> texImage2D will cause the level of the passed texture object to be >>> redefined as size 1x1; texSubImage2D will upload a single black pixel >>> at the given xoffset and yoffset. It would be defined that an OpenGL >>> error would never occur as a result of these calls. >>> >>> Are there any objections to these changes? If not, I'll update the spec. >>> >>> -Ken >>> >>> >>> >>> > >>> > On Mon, Oct 22, 2012 at 10:32 AM, Kenneth Russell >>> wrote: >>> >> >>> >> On Sun, Oct 21, 2012 at 11:52 PM, Gregg Tavares (??) >> > >>> >> wrote: >>> >> > My concern is if you have a video player library that plays a >>> playlist >>> >> > of >>> >> > videos and then you decide later to add some WebGL to it, you have >>> to go >>> >> > through all kinds of contortions just to avoid an >>> INVALID_OPERATION. The >>> >> > framework you're using for the video playlists seems like it would >>> be >>> >> > unlikely to have the needed functionality built in to let you know >>> when >>> >> > top >>> >> > stop calling texImage2D and when to start again. >>> >> > >>> >> > If the error was removed and 1x1 was substituted that problem would >>> go >>> >> > away >>> >> > >>> >> > I don't see what error adds except to make it more complicated. >>> >> >>> >> Let's think through this use case. The playlist library iterates >>> >> through different videos. As it switches between videos it's clearly >>> >> necessary to send some sort of event to the application, because the >>> >> videos might be different resolutions. Unless the application is >>> >> calling texImage2D all the time (instead of texSubImage2D most of the >>> >> time), it will have to watch for these events to figure out when it >>> >> might need to reallocate the texture at a different size. (If the app >>> >> calls texImage2D all the time, then it doesn't matter whether WebGL >>> >> generates INVALID_OPERATION or a 1x1 black texture. The app will never >>> >> need to know whether the upload succeeded or failed, because failures >>> >> will be transient, and the app will automatically recover from them >>> >> when the next video starts playing. 
If texImage2D generates >>> >> INVALID_OPERATION, the previous contents of the texture will be >>> >> preserved.) >>> >> >>> >> If WebGL generates INVALID_OPERATION for incomplete texture uploads, >>> >> then the app either needs to watch for those errors coming back from >>> >> OpenGL, or watch for the onplay event to know when it's guaranteed to >>> >> be OK to upload a frame. If the app doesn't do this, and mostly calls >>> >> texSubImage2D to upload new frames, then the video can get "stuck" as >>> >> the app tries to upload new frames via texSubImage2D while the >>> >> underlying texture is the wrong size. >>> >> >>> >> If WebGL silently creates a 1x1 black texture for incomplete texture >>> >> uploads, then the app still needs to watch for the onplay event to >>> >> know when to call texImage2D instead of texSubImage2D. Otherwise, the >>> >> video can still get "stuck" displaying a 1x1 black texture. >>> >> >>> >> As far as I see it, the app has to do the same work regardless of >>> >> whether incomplete texture uploads generate an error or silently >>> >> produce a 1x1 black texture. The main difference is that the error can >>> >> be observed by the application, but the silent allocation of the 1x1 >>> >> black texture can not, because glGetTexLevelParameter doesn't exist in >>> >> the GLES API. For this reason I continue to think that generating >>> >> INVALID_OPERATION is the more transparent behavior for WebGL. >>> >> >>> >> Regardless of the decision here, I agree with you that the behavior >>> >> should be specified and we collectively should try to write some tests >>> >> for it -- ideally, ones that explicitly force failure or pauses at >>> >> certain download points, rather than just trying to load and play the >>> >> video many times. >>> >> >>> >> -Ken >>> >> >>> >> >>> >> >>> >> > On Fri, Oct 19, 2012 at 3:09 PM, Kenneth Russell >>> wrote: >>> >> >> >>> >> >> On Tue, Oct 16, 2012 at 4:46 PM, Gregg Tavares (??) < >>> gman...@> >>> >> >> wrote: >>> >> >> > >>> >> >> > >>> >> >> > >>> >> >> > On Tue, Oct 16, 2012 at 4:28 PM, Kenneth Russell >> > >>> >> >> > wrote: >>> >> >> >> >>> >> >> >> On Tue, Oct 16, 2012 at 3:54 PM, Gregg Tavares (??) >>> >> >> >> >>> >> >> >> wrote: >>> >> >> >> > I don't think the spec makes this clear what happens when you >>> try >>> >> >> >> > to >>> >> >> >> > call >>> >> >> >> > texImage2D or texSubImage2D on an image or video that is not >>> yet >>> >> >> >> > loaded. >>> >> >> >> >>> >> >> >> Right, the behavior in this case is not defined. I'm pretty sure >>> >> >> >> this >>> >> >> >> was discussed a long time ago in the working group, but it >>> seemed >>> >> >> >> difficult to write a test for any defined behavior, so by >>> consensus >>> >> >> >> implementations generate INVALID_OPERATION when attempting to >>> upload >>> >> >> >> incomplete images or video to WebGL textures. >>> >> >> > >>> >> >> > >>> >> >> > I don't see a problem writing a test. Basically >>> >> >> > >>> >> >> > var playing = false >>> >> >> > video.src = url >>> >> >> > video.play(); >>> >> >> > video.addEventListener('playing', function() { playing = true;}); >>> >> >> > >>> >> >> > waitForVideo() { >>> >> >> > gl.bindTexture(...) 
>>> >> >> > gl.texImage2D(..., video); >>> >> >> > if (playing) { >>> >> >> > doTestsOnVideoContent(); >>> >> >> > if (gl.getError() != gl.NO_ERROR) { testFailed("there >>> should be >>> >> >> > no >>> >> >> > errors"); } >>> >> >> > } else { >>> >> >> > requestAnimationFrame(waitForVideo); >>> >> >> > } >>> >> >> > >>> >> >> > This basically says an implementation is never allowed to >>> generate an >>> >> >> > error. >>> >> >> > it might not be a perfect test but neither is the current one. In >>> >> >> > fact >>> >> >> > this >>> >> >> > should test it just fine >>> >> >> > >>> >> >> > video = document.createElement("video"); >>> >> >> > gl.texImage2D(..., video); >>> >> >> > glErrorShouldBe(gl.NO_ERROR); >>> >> >> > video.src = "someValidURL": >>> >> >> > gl.texImage2D(..., video); >>> >> >> > glErrorShouldBe(gl.NO_ERROR); >>> >> >> > video.src = "someOtherValidURL": >>> >> >> > gl.texImage2D(..., video); >>> >> >> > glErrorShouldBe(gl.NO_ERROR); >>> >> >> > >>> >> >> > Then do the other 'playing' event tests. >>> >> >> > >>> >> >> >> >>> >> >> >> >>> >> >> >> > Example >>> >> >> >> > >>> >> >> >> > video = document.createElement("video"); >>> >> >> >> > video.src = "http://mysite.com/myvideo"; >>> >> >> >> > video.play(); >>> >> >> >> > >>> >> >> >> > function render() { >>> >> >> >> > gl.bindTexture(...) >>> >> >> >> > gl.texImage2D(..., video); >>> >> >> >> > gl.drawArrays(...); >>> >> >> >> > window.requestAnimationFrame(render); >>> >> >> >> > } >>> >> >> >> > >>> >> >> >> > Chrome right now will synthesize a GL error if the system >>> hasn't >>> >> >> >> > actually >>> >> >> >> > gotten the video to start (as in if it's still buffering). >>> >> >> >> > >>> >> >> >> > Off the top of my head, it seems like it would just be >>> friendlier >>> >> >> >> > to >>> >> >> >> > make a >>> >> >> >> > black texture (0,0,0,1) or (0,0,0,0) 1x1 pixels for videos >>> that >>> >> >> >> > aren't >>> >> >> >> > loaded yet. >>> >> >> >> > >>> >> >> >> > Otherwise, having to check that the video is ready before >>> calling >>> >> >> >> > texImage2D >>> >> >> >> > seems kind of burdensome on the developer. If they want to >>> check >>> >> >> >> > they >>> >> >> >> > should >>> >> >> >> > use video.addEventListener('playing') or similar. If they were >>> >> >> >> > making >>> >> >> >> > a >>> >> >> >> > video player they'd have to add a bunch of logic when queuing >>> the >>> >> >> >> > next >>> >> >> >> > video. >>> >> >> >> > >>> >> >> >> > Same with images. >>> >> >> >> > >>> >> >> >> > img = document.createElement("img"); >>> >> >> >> > img.src = "http://mysite.com/myimage"; >>> >> >> >> > >>> >> >> >> > function render() { >>> >> >> >> > gl.bindTexture(...) >>> >> >> >> > gl.texImage2D(..., img); >>> >> >> >> > gl.drawArrays(...); >>> >> >> >> > window.requestAnimationFrame(render); >>> >> >> >> > } >>> >> >> >> > >>> >> >> >> > If you want to know if the image has loaded use img.onload but >>> >> >> >> > otherwise >>> >> >> >> > don't fail the call? >>> >> >> >> > >>> >> >> >> > What do you think? Good idea? Bad idea? >>> >> >> >> >>> >> >> >> My initial impression is that this change is not a good idea. It >>> >> >> >> would >>> >> >> >> expose the specification, implementations and applications to a >>> lot >>> >> >> >> of >>> >> >> >> corner case behaviors. 
For example, if a video's width and >>> height >>> >> >> >> hasn't been received yet, then texImage2D(..., video) would >>> have to >>> >> >> >> allocate a 1x1 texture; but if the width and height are known, >>> then >>> >> >> >> it >>> >> >> >> would have to allocate the right amount of storage. A naive app >>> >> >> >> might >>> >> >> >> call texImage2D only the first time and texSubImage2D >>> subsequently, >>> >> >> >> so >>> >> >> >> if the first texImage2D call was made before the metadata was >>> >> >> >> downloaded, it would never render correctly. I think the current >>> >> >> >> fail-fast behavior is best, and already has the result that it >>> >> >> >> renders >>> >> >> >> a black texture if the upload fails; the call will (implicitly) >>> >> >> >> generate INVALID_OPERATION, and the texture will be incomplete. >>> >> >> > >>> >> >> > >>> >> >> > I don't see how that's worse than the current situation which is >>> you >>> >> >> > call >>> >> >> > texImage2D and pray it works. You have no idea if it's going to >>> work >>> >> >> > or >>> >> >> > not. >>> >> >> > If you do this does it work? >>> >> >> > >>> >> >> > okToUseVideo = false; >>> >> >> > video = document.createElement("video"); >>> >> >> > video.src = "movie#1" >>> >> >> > video.addEventListener('playing', function() { okToUseVideo = >>> true; >>> >> >> > } >>> >> >> > >>> >> >> > frameCount = 0; >>> >> >> > >>> >> >> > function render() { >>> >> >> > if (okToUseVideo) { >>> >> >> > gl.texImage2D(... , video); >>> >> >> > >>> >> >> > ++frameCount; >>> >> >> > if (frameCount > 1000) { >>> >> >> > video.src = "movie2"; >>> >> >> > } >>> >> >> > } >>> >> >> > } >>> >> >> > >>> >> >> > >>> >> >> > Basically after some amount of time I switch video.src to a new >>> >> >> > movie. >>> >> >> > Is >>> >> >> > the video off limits now? >>> >> >> >>> >> >> The assumption should be "yes". When the source of the media >>> element >>> >> >> is set, it is immediately invalid to use until the onload / playing >>> >> >> handler is called. >>> >> >> >>> >> >> > Will it use the old frame from the old movie until >>> >> >> > the new movie is buffered or will it give me INVALID_OPERATION? I >>> >> >> > have >>> >> >> > no >>> >> >> > idea and it's not specified. Same with img. >>> >> >> > >>> >> >> > img.src = "image#1" >>> >> >> > img.onload = function() { >>> >> >> > img.src = "image#2"; >>> >> >> > gl.texImage2D(...., img); // what's this? old image, no image, >>> >> >> > INVALID_OPERATION? >>> >> >> > } >>> >> >> >>> >> >> Yes, this should be assumed to produce INVALID_OPERATION. >>> >> >> >>> >> >> >>> >> >> > The texSubImage2D issue is not helped by the current spec. If >>> >> >> > video.src >>> >> >> > = >>> >> >> > useChoosenURL then you have no idea what the width and height are >>> >> >> > until >>> >> >> > the >>> >> >> > 'playing' event (or whatever event) which is no different than >>> if we >>> >> >> > changed >>> >> >> > it. >>> >> >> > >>> >> >> > Changing it IMO means far less broken websites and I can't see >>> any >>> >> >> > disadvantages. Sure you can get a 1x1 pixel texture to start and >>> call >>> >> >> > texSubImage2D now but you can do that already. >>> >> >> >>> >> >> With the current fail-fast behavior, where INVALID_OPERATION will >>> be >>> >> >> generated, the texture will be in one of two states: (1) its >>> previous >>> >> >> state, because the texImage2D call failed; or (2) having the width >>> and >>> >> >> height of the incoming media element. 
>>> >> >> >>> >> >> With the proposed behavior to never generate an error, the texture >>> >> >> will be either 1x1 or width x height. Another problem with this >>> >> >> proposal is that there is no way with the ES 2.0 or WebGL API to >>> query >>> >> >> the size of a level of a texture, because GetTexLevelParameter was >>> >> >> removed from the OpenGL ES API (and, unfortunately, not >>> reintroduced >>> >> >> in ES 3.0). Therefore the behavior is completely silent -- there >>> is no >>> >> >> way for the application developer to find out what happened (no >>> error >>> >> >> reported, and still renders like an incomplete texture would). >>> >> >> >>> >> >> I agree that the failing behavior should be specified and, more >>> >> >> importantly, tests written verifying it. If the INVALID_OPERATION >>> >> >> error were spec'ed, would that address your primary concern? I'm >>> not >>> >> >> convinced that silently making the texture 1x1 is a good path to >>> take. >>> >> >> >>> >> >> -Ken >>> >> >> >>> >> >> >>> >> >> >> >>> >> >> >> >>> >> >> >> If we want to spec this more tightly then we'll need to do more >>> work >>> >> >> >> in the conformance suite to forcibly stall HTTP downloads of the >>> >> >> >> video >>> >> >> >> resources in the test suite at well known points. I'm vaguely >>> aware >>> >> >> >> that WebKit's HTTP tests do this with a custom server. >>> Requiring a >>> >> >> >> custom server in order to run the WebGL conformance suite at all >>> >> >> >> would >>> >> >> >> have pretty significant disadvantages. >>> >> >> >> >>> >> >> >> -Ken >>> >> >> > >>> >> >> > >>> >> > >>> >> > >>> > >>> > >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Wed Oct 31 17:41:35 2012 From: gma...@ (=?UTF-8?B?R3JlZ2cgVGF2YXJlcyAo56S+55SoKQ==?=) Date: Wed, 31 Oct 2012 17:41:35 -0700 Subject: [Public WebGL] texImage2D on unfinished images and video In-Reply-To: References: Message-ID: On Wed, Oct 31, 2012 at 5:03 PM, Brandon Jones wrote: > On Wed, Oct 31, 2012 at 4:44 PM, Gregg Tavares (??) wrote: > >> Is that more developer friendly? It means if you want to show the same >> frame between videos you have to do extra work. >> > > I'm a little unclear on what you're referring to here. I assume you mean > "show the last frame from a previous video in a playlist while a the next > video is loading?" I can see how that may be a nice side effect, but it > also feels a bit hacky. If I want my video player to nicely transition from > one video to the next I have all the events at my disposal to monitor that > and do the Right Thing. > Sorry, I'm assuming you don't have all the events. That you are not running the entire thing yourself. That some site like youtube, vimeo or G+ hangouts provides a javascript API that gives you video tag controlled by their Javascript and that you're trying incorporate it into some WebGL app and it may or may not give you all the events you need to know every detail. I'm also assuming that even if you write it yourself, most developers would prefer to show the last frame of a video than black. var cue = 0; var playlist = [ "video1.mp4", "video2.mp4", "video3.mp4", "video4.mp4", ]; video.addEventListener('ended', cueNextVideo); function cueNextVideo() { if (cue >= playlist.length) { return; } video.src = playlist[cue++]; video.play(); } function render() { gl.texImage2D(...., video); gl.drawXXX(...); requestAnimationFrame(render); } cueNextVideo(); render(); That seems developer friendly to me. It just works. 
(assumes my fiction that developers would prefer the last frame between videos ;-p) Here's what I guess the alternative is var cue = 0; var playlist = [ "video1.mp4", "video2.mp4", "video3.mp4", "video4.mp4", ]; var update = false; video.addEventListener('ended', cueNextVideo); video.addEventListener('playing', function() { update = true; }); function cueNextVideo() { if (cue >= playlist.length) { return; } update = false; video.src = playlist[cue++]; video.play(); } function render() { if (update) { gl.texImage2D(...., video); } gl.drawXXX(...); requestAnimationFrame(render); } cueNextVideo(); render(); Not that burdensome either. I feel strongly that tex*Sub*Image2D should be a no-op if nothing is available. For texImage2D it seems like it should be a no-op or a 0x0 texture. Not 1x1. My preference is no-op but I don't feel that strongly. I agree with Colin, a console warning might would be useful either way. Although it could also get in the way of legit apps. Especially if we decide to allow progressive loading. -------------- next part -------------- An HTML attachment was scrubbed... URL: From baj...@ Wed Oct 31 18:49:41 2012 From: baj...@ (Brandon Jones) Date: Wed, 31 Oct 2012 18:49:41 -0700 Subject: [Public WebGL] texImage2D on unfinished images and video In-Reply-To: References: Message-ID: I agree with Colin as well, no matter what we do we should work on making sure the developer understands the results they are getting. Unfortunately if you do that via a console.log you may spam the console with warnings about intended behavior in the case of Gregg's no-op video playlist example. On Oct 31, 2012 5:41 PM, "Gregg Tavares (??)" wrote: > > > > On Wed, Oct 31, 2012 at 5:03 PM, Brandon Jones wrote: > >> On Wed, Oct 31, 2012 at 4:44 PM, Gregg Tavares (??) wrote: >> >>> Is that more developer friendly? It means if you want to show the same >>> frame between videos you have to do extra work. >>> >> >> I'm a little unclear on what you're referring to here. I assume you mean >> "show the last frame from a previous video in a playlist while a the next >> video is loading?" I can see how that may be a nice side effect, but it >> also feels a bit hacky. If I want my video player to nicely transition from >> one video to the next I have all the events at my disposal to monitor that >> and do the Right Thing. >> > > Sorry, I'm assuming you don't have all the events. That you are not > running the entire thing yourself. That some site like youtube, vimeo or G+ > hangouts provides a javascript API that gives you video tag controlled by > their Javascript and that you're trying incorporate it into some WebGL app > and it may or may not give you all the events you need to know every detail. > > I'm also assuming that even if you write it yourself, most developers > would prefer to show the last frame of a video than black. > > > var cue = 0; > var playlist = [ > "video1.mp4", > "video2.mp4", > "video3.mp4", > "video4.mp4", > ]; > > video.addEventListener('ended', cueNextVideo); > > function cueNextVideo() { > if (cue >= playlist.length) { > return; > } > video.src = playlist[cue++]; > video.play(); > } > > function render() { > gl.texImage2D(...., video); > gl.drawXXX(...); > requestAnimationFrame(render); > } > > cueNextVideo(); > render(); > > That seems developer friendly to me. It just works. 
(assumes my fiction > that developers would prefer the last frame between videos ;-p) > > Here's what I guess the alternative is > > > var cue = 0; > var playlist = [ > "video1.mp4", > "video2.mp4", > "video3.mp4", > "video4.mp4", > ]; > var update = false; > > video.addEventListener('ended', cueNextVideo); > video.addEventListener('playing', function() { > update = true; > }); > > function cueNextVideo() { > if (cue >= playlist.length) { > return; > } > update = false; > video.src = playlist[cue++]; > video.play(); > } > > function render() { > if (update) { > gl.texImage2D(...., video); > } > gl.drawXXX(...); > requestAnimationFrame(render); > } > > cueNextVideo(); > render(); > > Not that burdensome either. I feel strongly that tex*Sub*Image2D should > be a no-op if nothing is available. For texImage2D it seems like it should > be a no-op or a 0x0 texture. Not 1x1. My preference is no-op but I don't > feel that strongly. > > I agree with Colin, a console warning might would be useful either way. > Although it could also get in the way of legit apps. Especially if we > decide to allow progressive loading. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Wed Oct 31 19:03:43 2012 From: kbr...@ (Kenneth Russell) Date: Wed, 31 Oct 2012 19:03:43 -0700 Subject: [Public WebGL] texImage2D on unfinished images and video In-Reply-To: References: Message-ID: For reference, the 2D Canvas API says about drawImage: "If the image argument is an HTMLImageElement object that is not fully decodable, or if the image argument is an HTMLVideoElement object whose readyState attribute is either HAVE_NOTHING or HAVE_METADATA, then the implementation must return without drawing anything." http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#drawing-images-to-the-canvas For symmetry, that would argue to make both texImage2D and texSubImage2D be no-ops if passed incomplete images or video elements. Note that the current unspecified behavior of generating an INVALID_OPERATION error already does this. If we get rid of this error and simply log a warning to the console, are we improving the situation for developers? Or are we removing the only avenue by which they can tell via the API that something is flaky in their application? -Ken On Wed, Oct 31, 2012 at 6:49 PM, Brandon Jones wrote: > I agree with Colin as well, no matter what we do we should work on making > sure the developer understands the results they are getting. > > Unfortunately if you do that via a console.log you may spam the console with > warnings about intended behavior in the case of Gregg's no-op video playlist > example. > > On Oct 31, 2012 5:41 PM, "Gregg Tavares (??)" wrote: >> >> >> >> >> On Wed, Oct 31, 2012 at 5:03 PM, Brandon Jones wrote: >>> >>> On Wed, Oct 31, 2012 at 4:44 PM, Gregg Tavares (??) >>> wrote: >>>> >>>> Is that more developer friendly? It means if you want to show the same >>>> frame between videos you have to do extra work. >>> >>> >>> I'm a little unclear on what you're referring to here. I assume you mean >>> "show the last frame from a previous video in a playlist while a the next >>> video is loading?" I can see how that may be a nice side effect, but it also >>> feels a bit hacky. If I want my video player to nicely transition from one >>> video to the next I have all the events at my disposal to monitor that and >>> do the Right Thing. >> >> >> Sorry, I'm assuming you don't have all the events. 
That you are not >> running the entire thing yourself. That some site like youtube, vimeo or G+ >> hangouts provides a javascript API that gives you video tag controlled by >> their Javascript and that you're trying incorporate it into some WebGL app >> and it may or may not give you all the events you need to know every detail. >> >> I'm also assuming that even if you write it yourself, most developers >> would prefer to show the last frame of a video than black. >> >> >> var cue = 0; >> var playlist = [ >> "video1.mp4", >> "video2.mp4", >> "video3.mp4", >> "video4.mp4", >> ]; >> >> video.addEventListener('ended', cueNextVideo); >> >> function cueNextVideo() { >> if (cue >= playlist.length) { >> return; >> } >> video.src = playlist[cue++]; >> video.play(); >> } >> >> function render() { >> gl.texImage2D(...., video); >> gl.drawXXX(...); >> requestAnimationFrame(render); >> } >> >> cueNextVideo(); >> render(); >> >> That seems developer friendly to me. It just works. (assumes my fiction >> that developers would prefer the last frame between videos ;-p) >> >> Here's what I guess the alternative is >> >> >> var cue = 0; >> var playlist = [ >> "video1.mp4", >> "video2.mp4", >> "video3.mp4", >> "video4.mp4", >> ]; >> var update = false; >> >> video.addEventListener('ended', cueNextVideo); >> video.addEventListener('playing', function() { >> update = true; >> }); >> >> function cueNextVideo() { >> if (cue >= playlist.length) { >> return; >> } >> update = false; >> video.src = playlist[cue++]; >> video.play(); >> } >> >> function render() { >> if (update) { >> gl.texImage2D(...., video); >> } >> gl.drawXXX(...); >> requestAnimationFrame(render); >> } >> >> cueNextVideo(); >> render(); >> >> Not that burdensome either. I feel strongly that texSubImage2D should be >> a no-op if nothing is available. For texImage2D it seems like it should be a >> no-op or a 0x0 texture. Not 1x1. My preference is no-op but I don't feel >> that strongly. >> >> I agree with Colin, a console warning might would be useful either way. >> Although it could also get in the way of legit apps. Especially if we decide >> to allow progressive loading. >> >> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: unsubscribe public_webgl ----------------------------------------------------------- From cal...@ Wed Oct 31 19:13:32 2012 From: cal...@ (Mark Callow) Date: Thu, 01 Nov 2012 11:13:32 +0900 Subject: [Public WebGL] CORS and resource provider awareness In-Reply-To: References: <5090A4E5.2060803@artspark.co.jp> <5090ED5D.7050202@artspark.co.jp> Message-ID: <5091DACC.5070600@artspark.co.jp> On 2012/11/01 1:36, John Bauman wrote: > That's definitely one way to do it, but there are many other ways to > implement that. For example doing a copy to another buffer on the CPU, > or making a copy-on-write copy of the data. Otherwise, any code that > uploaded data to the GPU during the course of a frame would be really > slow. > Or have the CPU copy the data to the GPU memory while the GPU continues working on the command buffer. Regards -Mark Please note that, due to the integration of management operations following establishment of our new holding company, my e-mail address has changed to callow.mark<@>artspark<.>co<.>jp. I can receive messages at the old address for the rest of this year but please update your address book as soon as possible. 
-- ???????????????????????????????????? ???????????????????????????? ??????? ???????????????????????????????????? ????????????????????? ??. NOTE: This electronic mail message may contain confidential and privileged information from HI Corporation. If you are not the intended recipient, any disclosure, photocopying, distribution or use of the contents of the received information is prohibited. If you have received this e-mail in error, please notify the sender immediately and permanently delete this message and all related copies. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sin...@ Wed Oct 31 19:15:22 2012 From: sin...@ (Colin Mackenzie) Date: Wed, 31 Oct 2012 22:15:22 -0400 Subject: [Public WebGL] texImage2D on unfinished images and video In-Reply-To: References: Message-ID: > Unfortunately if you do that via a console.log you may spam the console > with warnings about intended behavior in the case of Gregg's no-op video > playlist example. Would it be a viable approach to log the warning only once, or only once per image/video? I don't really know what's happening behind the scenes but it seems to me that when a browser decides to create a 1x1 texture (or perform a no-op) then it should still have a handle to the offending incomplete image. So, maybe it can attach some "already logged" state to the image to avoid spamming the console. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Wed Oct 31 23:44:05 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Thu, 1 Nov 2012 07:44:05 +0100 Subject: [Public WebGL] Microsoft Broke the Silence In-Reply-To: <0EA9975A-096B-43DC-99B2-731C03F1A0AF@apple.com> References: <0EA9975A-096B-43DC-99B2-731C03F1A0AF@apple.com> Message-ID: On Wed, Oct 31, 2012 at 11:51 PM, Chris Marrin wrote: > Interestingly, they say WE in the first part and yet have never > participated in WebGL, nor have they ever made any sort of proposal or even > given a direct and specific critique of what their security concerns are. I > think it would be healthy for them to make a proposal for 3D on the web, > even if it were at odds with WebGL. It would mean they are talking and > that's the first step. > Yeah the first step in the conversation with Microsoft would have to be them specifying what problems they specifically have. If they're just gonna reiterate their security FUD without any means to discuss it specifically, there's no progress. > Does Windows Phone 8 do curated apps? Can someone write a WebKit based > browser for them? If so, perhaps IE support won't really be relevant :-) > Windows 8 phones (like WART) does only allow Windows app store apps. The windows app-stores TOS read much like Apple ones in that interpreters are out, and even if they aren't, the APIs to allow you to set a pointer to memory and execute it (jused by JITing engines like V8, Safari etc.) is no longer present in Modern UI. It does look like Microsoft wants to prevent any other browser than IE being present on Windows 8 phones and WART. So since it looks like nobody's gonna be able to write a truly competing browser for Windows 8 phones/WART (quite like iOS, *ahem*) IE support does seem to be quite relevant. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pya...@ Wed Oct 31 23:44:05 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Thu, 1 Nov 2012 07:44:05 +0100 Subject: [Public WebGL] Microsoft Broke the Silence In-Reply-To: <0EA9975A-096B-43DC-99B2-731C03F1A0AF@apple.com> References: <0EA9975A-096B-43DC-99B2-731C03F1A0AF@apple.com> Message-ID: On Wed, Oct 31, 2012 at 11:51 PM, Chris Marrin wrote: > Interestingly, they say WE in the first part and yet have never > participated in WebGL, nor have they ever made any sort of proposal or even > given a direct and specific critique of what their security concerns are. I > think it would be healthy for them to make a proposal for 3D on the web, > even if it were at odds with WebGL. It would mean they are talking and > that's the first step. > Yeah, the first step in the conversation with Microsoft would have to be them specifying what problems they specifically have. If they're just going to reiterate their security FUD without any means to discuss it specifically, there's no progress. > Does Windows Phone 8 do curated apps? Can someone write a WebKit based > browser for them? If so, perhaps IE support won't really be relevant :-) > Windows 8 phones (like WART) only allow Windows app store apps. The Windows app store's TOS reads much like Apple's in that interpreters are out, and even if they aren't, the APIs that allow you to set a pointer to memory and execute it (used by JITing engines like V8, Safari, etc.) are no longer present in Modern UI. It does look like Microsoft wants to prevent any browser other than IE from being present on Windows 8 phones and WART. So since it looks like nobody's going to be able to write a truly competing browser for Windows 8 phones/WART (quite like iOS, *ahem*), IE support does seem to be quite relevant. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Wed Oct 31 23:53:48 2012 From: pya...@ (=?ISO-8859-1?Q?Florian_B=F6sch?=) Date: Thu, 1 Nov 2012 07:53:48 +0100 Subject: [Public WebGL] CORS and resource provider awareness In-Reply-To: <5091DACC.5070600@artspark.co.jp> References: <5090A4E5.2060803@artspark.co.jp> <5090ED5D.7050202@artspark.co.jp> <5091DACC.5070600@artspark.co.jp> Message-ID: On Thu, Nov 1, 2012 at 3:13 AM, Mark Callow wrote: > Or have the CPU copy the data to the GPU memory while the GPU continues > working on the command buffer. > Yeah, but any subsequent command could depend on those bytes. So unless you've uploaded the bytes, the next command after the upload can't continue safely. And unless you block, you can't fulfill your guarantee that the memory used to pass the bytes is now back in the ownership of the client. And unless you have plenty of free, unused RAM and a willingness to nuke the page cache table and blow up the driver memory footprint, you can't do a RAM-to-RAM copy on write. The fact of the matter is, you don't know, and can't control, what the driver does and when it syncs. Finish is just one function guaranteed to sync. There is no guarantee that any (or all) of the others won't sync. It's up to the driver. OpenGL leaves this entirely unspecified. So trying to shoehorn syncing restrictions into prohibitions on flush is like trying to plug a hole with a sieve. -------------- next part -------------- An HTML attachment was scrubbed... URL:
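A minimal sketch of the synchronization point above, using only standard WebGL calls (the buffer size and timings are purely illustrative, and how much work the driver defers is implementation-dependent): the upload call may return before the data has actually been moved, and finish() is the only call guaranteed to block until all previously issued commands have completed.

var canvas = document.createElement("canvas");
var gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);

// 4 MB of pixel data. texImage2D must behave as if it copied the array,
// but when the driver really transfers it to the GPU is unspecified.
var pixels = new Uint8Array(1024 * 1024 * 4);

var t0 = Date.now();
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1024, 1024, 0, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
var t1 = Date.now();

gl.finish(); // the one call guaranteed to block until all prior commands have completed
var t2 = Date.now();

console.log("texImage2D returned after " + (t1 - t0) + " ms; finish took another " + (t2 - t1) + " ms");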