From ste...@ Sun Aug 1 16:51:30 2010
From: ste...@ (Steve Baker)
Date: Sun, 01 Aug 2010 18:51:30 -0500
Subject: [Public WebGL] Canvas resize.
Message-ID: <4C560882.3010906@sjbaker.org>

So I have an HTML5 canvas - and I used canvas.getContext (yaddayadda) to get
the GL rendering context...but I don't understand how canvas and context are
related to one another.

If something resizes the canvas using:

  canvas.width=whatever;
  canvas.height=whatever;

...the GL context doesn't resize with it - instead, it stays the same number
of pixels and is rendered into the bottom-left corner of an up-sized canvas,
or cropped if it's a down-sized canvas.

I guess I'm supposed to destroy the GL context, reload textures and shaders
and create a new one? But that's a time-consuming thing.

I'd kinda hoped that the old GL context could be stretched or squashed to
fit the canvas until I could recreate and replace it, so that the user
experience would be that the image might go a bit blurry or aliassy for a
few seconds after a resize. Is there a way to do that - or am I just going
about this all wrong?

  -- Steve

-----------------------------------------------------------
You are currently subscribed to public_webgl...@
To unsubscribe, send an email to majordomo...@ with
the following command in the body of your email:

From squ...@ Sun Aug 1 15:33:20 2010
From: squ...@ (squ...@)
Date: Sun, 01 Aug 2010 22:33:20 +0000
Subject: [Public WebGL] Canvas resize.
In-Reply-To: <4C560882.3010906@sjbaker.org>
Message-ID: <00163645838cc86464048ccaabdf@google.com>

WebGL follows the same rules as other GLs; that is, you have to manage the
viewport size yourself. What you need to do is add a resize event listener,
and then call the viewport method on the context:
glContext.viewport(x, y, width, height).
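[Editor's note] A self-contained sketch of the listener-plus-viewport pattern just described. All names here are illustrative, not from this thread, and one caveat applies: browsers deliver "resize" events to the window object rather than to canvas elements, so the listener is attached to window below.

```javascript
// Sketch (assumed names): keep the canvas backing store and the GL
// viewport in sync whenever the displayed size changes.
function installResizeHandler(gl, canvas) {
  function syncViewport() {
    // x and y are relative to the bottom-left corner of the drawing buffer
    gl.viewport(0, 0, canvas.width, canvas.height);
  }
  window.addEventListener("resize", function () {
    // grow/shrink the backing store, then match the viewport to it
    canvas.width = canvas.clientWidth;
    canvas.height = canvas.clientHeight;
    syncViewport();
  }, false);
  syncViewport(); // set the initial viewport as well
}
```

Any object exposing a WebGL-style viewport(x, y, w, h) method works here, which also makes the function easy to exercise outside a browser.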
The x and y are relative to the canvas's bottom-left corner, so if you want
to maintain a render viewport the same size as the canvas, something like
this would work:

  canvas.addEventListener("resize", function() {
    glContext.viewport(0, 0, canvas.width, canvas.height);
  });

So no, you don't have to destroy the context and recreate it. (I'm not aware
of a way to destroy the context, actually...as further getContext() calls
return the original context.)

On Aug 1, 2010 5:51pm, Steve Baker wrote:
> So I have an HTML5 canvas - and I used canvas.getContext (yaddayadda)
> to get the GL rendering context...but I don't understand how canvas and
> context are related to one another.
> If something resizes the canvas using:
> canvas.width=whatever;
> canvas.height=whatever;
> ...the GL context doesn't resize with it - instead, it stays the same
> number of pixels and is rendered into the bottom-left corner of an
> up-sized canvas or cropped if it's a down-sized canvas.
> I guess I'm supposed to destroy the GL context, reload textures and
> shaders and create a new one? But that's a time-consuming thing.
> I'd kinda hoped that the old GL context could be stretched or squashed
> to fit the canvas until I could recreate and replace it so that the user
> experience would be that the image might go a bit blurry or aliassy for
> a few seconds after a resize. Is there a way to do that - or am I just
> going about this all wrong?
> -- Steve

From vla...@ Sun Aug 1 15:36:32 2010
From: vla...@ (Vladimir Vukicevic)
Date: Sun, 1 Aug 2010 15:36:32 -0700 (PDT)
Subject: [Public WebGL] Canvas resize.
In-Reply-To: <4C560882.3010906@sjbaker.org> Message-ID: <78818488.152450.1280702192085.JavaMail.root@cm-mail03.mozilla.org> If you're doing this in Minefield, we had a bug for a few days where we regressed sizing. However, even now with that fixed, you have to manually set the viewport to the new size via viewport(), as per the WebGL spec. If you're either running a nightly build with that bug, or haven't reset the viewport, you'll see the behaviour that you're describing. - Vlad ----- Original Message ----- > So I have an HTML5 - and I used canvas.getContext > (yaddayadda) > to get the GL rendering context...but I don't understand how canvas > and > context a relating to one-another. > > If something resizes the canvas using: > > canvas.width=whatever; > canvas.height=whatever; > > ...the GL context doesn't resize with it - instead, it stays the same > number of pixels and is rendered into the bottom-left corner of an > up-sized canvas or cropped if it's a down-sized canvas. > > I guess I'm supposed to destroy the GL context, reload textures and > shaders and create a new one? But that's a time-consuming thing. > > I'd kinda hoped that the old GL context could be stretched or squashed > to fit the canvas until I could recreate and replace it so that the > user > experience would be that the image might go a bit blurry or aliassy > for > a few seconds after a resize. Is there a way to do that - or am I just > going about this all wrong? 
> > -- Steve > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Sun Aug 1 18:41:23 2010 From: ste...@ (Steve Baker) Date: Sun, 01 Aug 2010 20:41:23 -0500 Subject: [Public WebGL] Canvas resize. In-Reply-To: <78818488.152450.1280702192085.JavaMail.root@cm-mail03.mozilla.org> References: <78818488.152450.1280702192085.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <4C562243.2090500@sjbaker.org> Aha! I am indeed running an older version of Minefield. The nightly builds over the last few days have been flakey - crashing for reasons other than WebGL when doing nothing more complicated than showing Wikipedia pages - so I had reverted back to an older version that I still had lying around on my hard drive that seemed to be more stable...when I run with last night's build, my canvas resize works as expected. Many thanks! -- Steve Vladimir Vukicevic wrote: > If you're doing this in Minefield, we had a bug for a few days where we regressed sizing. However, even now with that fixed, you have to manually set the viewport to the new size via viewport(), as per the WebGL spec. If you're either running a nightly build with that bug, or haven't reset the viewport, you'll see the behaviour that you're describing. > > - Vlad > > ----- Original Message ----- > >> So I have an HTML5 - and I used canvas.getContext >> (yaddayadda) >> to get the GL rendering context...but I don't understand how canvas >> and >> context a relating to one-another. 
>> >> If something resizes the canvas using: >> >> canvas.width=whatever; >> canvas.height=whatever; >> >> ...the GL context doesn't resize with it - instead, it stays the same >> number of pixels and is rendered into the bottom-left corner of an >> up-sized canvas or cropped if it's a down-sized canvas. >> >> I guess I'm supposed to destroy the GL context, reload textures and >> shaders and create a new one? But that's a time-consuming thing. >> >> I'd kinda hoped that the old GL context could be stretched or squashed >> to fit the canvas until I could recreate and replace it so that the >> user >> experience would be that the image might go a bit blurry or aliassy >> for >> a few seconds after a resize. Is there a way to do that - or am I just >> going about this all wrong? >> >> -- Steve >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From gma...@ Mon Aug 2 13:39:59 2010 From: gma...@ (Gregg Tavares (wrk)) Date: Mon, 2 Aug 2010 13:39:59 -0700 Subject: [Public WebGL] Re: maximum length of identifiers in WebGL GLSL In-Reply-To: References: Message-ID: ping. Anyone have an opinion on this? There are 2 issues. Issue #1: It's possible a GL driver could be exploited by passing it giant identifiers. Examples. ctx.bindAttribLocation(program, 4MegIdentifier, location); ctx.shaderSource(shader, shaderWith4MegIdentifiers, ...); Issue #2: Having a Min/Max identifier size of WebGL means programs will be more likely to work across platforms. Right now one platform might have a 64 character limit, another a 128 character limit. 
If you're developing on the 128-character-limit system, you won't know that
your WebGL program is not going to work on some other system. If instead the
spec says an implementation must support 128 characters to be WebGL
conformant, then a system whose OpenGL driver does not support 128-character
identifiers will either need a new driver, or the WebGL implementation will
need to alias the identifiers. In either case WebGL becomes more compatible
across systems.

Note that the GLSL translator everyone is currently using already has a 128
character limit, but it would be nice if that limit (or some limit) was
formalized in the spec.

On Thu, Jul 29, 2010 at 3:11 PM, Gregg Tavares (wrk) wrote:
> Do we want to specify a maximum identifier length for WebGL GLSL?
>
> I didn't see one in the GLSL spec. I was going to write a test with really
> long identifiers (4meg) to see if I could find some drivers that had
> problems with them but it might be better to just require WebGL to enforce
> some maximum length. 64 chars? 128 chars? 256 chars which will make shaders
> less likely to fail on some drivers.
>
> Thoughts?

From vla...@ Mon Aug 2 13:55:57 2010
From: vla...@ (Vladimir Vukicevic)
Date: Mon, 2 Aug 2010 13:55:57 -0700 (PDT)
Subject: [Public WebGL] Re: maximum length of identifiers in WebGL GLSL
In-Reply-To:
Message-ID: <688728492.161209.1280782557243.JavaMail.root@cm-mail03.mozilla.org>

I would be fine with a 128 character limit; 64 might also be fine, but I
worry that someone making extensive use of structs could end up with some
fairly lengthy identifiers. 64 is still a good bit even with structs though;
does anyone have any experience with long identifiers in actual real-world
GLSL?

- Vlad

----- Original Message -----
> ping.
>
> Anyone have an opinion on this?
>
> There are 2 issues.
>
> Issue #1: It's possible a GL driver could be exploited by passing it
> giant identifiers. Examples.
> > > ctx.bindAttribLocation(program, 4MegIdentifier, location); > ctx.shaderSource(shader, shaderWith4MegIdentifiers, ...); > > > Issue #2: Having a Min/Max identifier size of WebGL means programs > will be more likely to work across platforms. > > > Right now one platform might have a 64 character limit, another a 128 > character limit. If you're developing on the 128 character limit > system you won't know your WebGL program is not going to work on some > other system. If instead the spec says it must support 128 characters > to be WebGL conformance than an system who's OpenGL driver does not > support 128 character will either need a new driver or the WebGL > implementation will need to alias the ids. In either case WebGL > becomes more compatible across system. > > > Note that the GLSL translator everyone is currently using already has > a 128 character limit but it would be nice if that limit (or some > limit) was formalized in the spec. > > > > > > > > On Thu, Jul 29, 2010 at 3:11 PM, Gregg Tavares (wrk) < gman...@ > > wrote: > > > Do we want to specify a maximum identifier length for WebGL GLSL? > > > I didn't see one in the GLSL spec. I was going to write a test with > really long identifiers (4meg) to see if I could find some drivers > that had problems with them but it might be better to just require > WebGL to enforce some maximum length. 64 chars? 128 chars? 256 chars > which will make shaders less likely to fail on some drivers. > > > Thoughts? ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From tu...@ Mon Aug 2 14:03:01 2010 From: tu...@ (Thatcher Ulrich) Date: Mon, 2 Aug 2010 17:03:01 -0400 Subject: [Public WebGL] Re: maximum length of identifiers in WebGL GLSL In-Reply-To: References: Message-ID: 2 cents: * a fixed specified limit is good. 
* it should be a generous limit, since some devs will probably machine-generate their shaders (think C++ name mangling etc). On the other hand, a huge limit won't be exercised much. How about 256 chars? That should encompass all human-written shaders, and then some. For machine-written shaders, it could be exceeded but it's plenty of chars to generate a unique ID by hashing down a very long id. -T On Mon, Aug 2, 2010 at 4:39 PM, Gregg Tavares (wrk) wrote: > ping. > Anyone have an opinion on this? > There are 2 issues. > Issue #1: It's possible a GL driver could be exploited by passing it giant > identifiers. ?Examples. > ctx.bindAttribLocation(program, 4MegIdentifier, location); > ctx.shaderSource(shader, shaderWith4MegIdentifiers, ...); > Issue #2: Having a Min/Max identifier size of WebGL means programs will be > more likely to work across platforms. > Right now one platform might have a 64 character limit, another a 128 > character limit. If you're developing on the 128 character limit system you > won't know your WebGL program is not going to work on some other system. ?If > instead the spec says it must support 128 characters to be WebGL conformance > than an system who's OpenGL driver does not support 128 character will > either need a new driver or the WebGL implementation will need to alias the > ids. In either case WebGL becomes more compatible across system. > Note that the GLSL translator everyone is currently using already has a 128 > character limit but it would be nice if that limit (or some limit) was > formalized in the spec. > > > > On Thu, Jul 29, 2010 at 3:11 PM, Gregg Tavares (wrk) > wrote: >> >> Do we want to specify a maximum identifier length for WebGL GLSL? >> I didn't see one in the GLSL spec. ?I was going to write a test with >> really long identifiers (4meg) to see if I could find some drivers that had >> problems with them but it might be better to just require WebGL to enforce >> some maximum length. ?64 chars? 128 chars? 
256 chars which will make shaders >> less likely to fail on some drivers. >> Thoughts? > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Mon Aug 2 15:37:28 2010 From: kbr...@ (Kenneth Russell) Date: Mon, 2 Aug 2010 15:37:28 -0700 Subject: [Public WebGL] Re: maximum length of identifiers in WebGL GLSL In-Reply-To: References: Message-ID: On Mon, Aug 2, 2010 at 2:03 PM, Thatcher Ulrich wrote: > 2 cents: > > * a fixed specified limit is good. > > * it should be a generous limit, since some devs will probably > machine-generate their shaders (think C++ name mangling etc). ?On the > other hand, a huge limit won't be exercised much. ?How about 256 > chars? ?That should encompass all human-written shaders, and then > some. ?For machine-written shaders, it could be exceeded but it's > plenty of chars to generate a unique ID by hashing down a very long > id. It seems reasonable to specify a maximum identifier length. The only issue would be if some drivers do not even support the length we decide to specify. I was also going to suggest a 256 character maximum to better handle autogenerated shaders. We could spec that, implement it in the shader validator and test on several drivers to see whether it appears to be widely supportable. -Ken > -T > > On Mon, Aug 2, 2010 at 4:39 PM, Gregg Tavares (wrk) wrote: >> ping. >> Anyone have an opinion on this? >> There are 2 issues. >> Issue #1: It's possible a GL driver could be exploited by passing it giant >> identifiers. ?Examples. >> ctx.bindAttribLocation(program, 4MegIdentifier, location); >> ctx.shaderSource(shader, shaderWith4MegIdentifiers, ...); >> Issue #2: Having a Min/Max identifier size of WebGL means programs will be >> more likely to work across platforms. >> Right now one platform might have a 64 character limit, another a 128 >> character limit. 
If you're developing on the 128 character limit system you >> won't know your WebGL program is not going to work on some other system. ?If >> instead the spec says it must support 128 characters to be WebGL conformance >> than an system who's OpenGL driver does not support 128 character will >> either need a new driver or the WebGL implementation will need to alias the >> ids. In either case WebGL becomes more compatible across system. >> Note that the GLSL translator everyone is currently using already has a 128 >> character limit but it would be nice if that limit (or some limit) was >> formalized in the spec. >> >> >> >> On Thu, Jul 29, 2010 at 3:11 PM, Gregg Tavares (wrk) >> wrote: >>> >>> Do we want to specify a maximum identifier length for WebGL GLSL? >>> I didn't see one in the GLSL spec. ?I was going to write a test with >>> really long identifiers (4meg) to see if I could find some drivers that had >>> problems with them but it might be better to just require WebGL to enforce >>> some maximum length. ?64 chars? 128 chars? 256 chars which will make shaders >>> less likely to fail on some drivers. >>> Thoughts? 
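[Editor's note] The "hash down a very long id" workaround Thatcher mentions, against the 256-character limit Ken suggests testing, could look roughly like this. The helper and its hash are entirely hypothetical, sketched only to show the shape of the idea:

```javascript
// Hypothetical sketch: identifiers longer than the limit are replaced by
// a readable prefix plus a hash of the full name, keeping the result
// unique with high probability.
var MAX_IDENTIFIER_LENGTH = 256; // the limit under discussion in this thread

function hashString(s) {
  var h = 0;
  for (var i = 0; i < s.length; i++) {
    h = ((h << 5) - h + s.charCodeAt(i)) | 0; // h = h * 31 + c, kept in 32 bits
  }
  return (h >>> 0).toString(16); // at most 8 hex digits
}

function shortenIdentifier(name) {
  if (name.length <= MAX_IDENTIFIER_LENGTH) {
    return name; // already fits; leave it alone
  }
  // 9 = 1 for the separator + up to 8 hex digits of hash
  return name.substring(0, MAX_IDENTIFIER_LENGTH - 9) + "_" + hashString(name);
}
```

A shader generator would apply this consistently both to the generated GLSL source and to the strings later passed to getAttribLocation/getUniformLocation, so the two always agree.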
>> > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Tue Aug 3 06:33:11 2010 From: ste...@ (Steve Baker) Date: Tue, 03 Aug 2010 08:33:11 -0500 Subject: [Public WebGL] Re: maximum length of identifiers in WebGL GLSL In-Reply-To: <688728492.161209.1280782557243.JavaMail.root@cm-mail03.mozilla.org> References: <688728492.161209.1280782557243.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <4C581A97.5050009@sjbaker.org> OK - so we have: abcdefg.hijklmnop.qrstuv.wxyz Are we talking about the length of the whole thing - or just the length of one part (like 'qrstuv' in the example above)? * If it's the latter, then (IMHO) a 64 character limit is fine - nobody likes to type 64 character variable names! * If it's the former then 64 is perhaps a little tight and we should consider requiring 128...although I guess I could live with 64 if it were absolutely necessary. -- Steve Vladimir Vukicevic wrote: > I would be fine with a 128 character limit; 64 might also be fine, but I worry that someone making extensive use of structs could end up with some fairly lengthy identifiers. 64 is still a good bit even with structs though; does anyone have any experience with long identifiers in actual real-world GLSL? > > - Vlad > > ----- Original Message ----- > >> ping. >> >> >> Anyone have an opinion on this? >> >> >> There are 2 issues. >> >> >> Issue #1: It's possible a GL driver could be exploited by passing it >> giant identifiers. Examples. 
>> >> >> ctx.bindAttribLocation(program, 4MegIdentifier, location); >> ctx.shaderSource(shader, shaderWith4MegIdentifiers, ...); >> >> >> Issue #2: Having a Min/Max identifier size of WebGL means programs >> will be more likely to work across platforms. >> >> >> Right now one platform might have a 64 character limit, another a 128 >> character limit. If you're developing on the 128 character limit >> system you won't know your WebGL program is not going to work on some >> other system. If instead the spec says it must support 128 characters >> to be WebGL conformance than an system who's OpenGL driver does not >> support 128 character will either need a new driver or the WebGL >> implementation will need to alias the ids. In either case WebGL >> becomes more compatible across system. >> >> >> Note that the GLSL translator everyone is currently using already has >> a 128 character limit but it would be nice if that limit (or some >> limit) was formalized in the spec. >> >> >> >> >> >> >> >> On Thu, Jul 29, 2010 at 3:11 PM, Gregg Tavares (wrk) < gman...@ >> >>> wrote: >>> >> Do we want to specify a maximum identifier length for WebGL GLSL? >> >> >> I didn't see one in the GLSL spec. I was going to write a test with >> really long identifiers (4meg) to see if I could find some drivers >> that had problems with them but it might be better to just require >> WebGL to enforce some maximum length. 64 chars? 128 chars? 256 chars >> which will make shaders less likely to fail on some drivers. >> >> >> Thoughts? 
>> > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Tue Aug 3 11:02:55 2010 From: vla...@ (Vladimir Vukicevic) Date: Tue, 3 Aug 2010 11:02:55 -0700 (PDT) Subject: [Public WebGL] Re: maximum length of identifiers in WebGL GLSL In-Reply-To: <4C581A97.5050009@sjbaker.org> Message-ID: <1315692653.170772.1280858575723.JavaMail.root@cm-mail03.mozilla.org> The whole string, I believe -- basically the thing that you'd pass to getUniformLocation(). - Vlad ----- Original Message ----- > OK - so we have: > > abcdefg.hijklmnop.qrstuv.wxyz > > Are we talking about the length of the whole thing - or just the > length > of one part (like 'qrstuv' in the example above)? > > * If it's the latter, then (IMHO) a 64 character limit is fine - > nobody > likes to type 64 character variable names! > * If it's the former then 64 is perhaps a little tight and we should > consider requiring 128...although I guess I could live with 64 if it > were absolutely necessary. > > -- Steve > > Vladimir Vukicevic wrote: > > I would be fine with a 128 character limit; 64 might also be fine, > > but I worry that someone making extensive use of structs could end > > up with some fairly lengthy identifiers. 64 is still a good bit even > > with structs though; does anyone have any experience with long > > identifiers in actual real-world GLSL? > > > > - Vlad > > > > ----- Original Message ----- > > > >> ping. > >> > >> > >> Anyone have an opinion on this? > >> > >> > >> There are 2 issues. > >> > >> > >> Issue #1: It's possible a GL driver could be exploited by passing > >> it > >> giant identifiers. Examples. 
> >> > >> > >> ctx.bindAttribLocation(program, 4MegIdentifier, location); > >> ctx.shaderSource(shader, shaderWith4MegIdentifiers, ...); > >> > >> > >> Issue #2: Having a Min/Max identifier size of WebGL means programs > >> will be more likely to work across platforms. > >> > >> > >> Right now one platform might have a 64 character limit, another a > >> 128 > >> character limit. If you're developing on the 128 character limit > >> system you won't know your WebGL program is not going to work on > >> some > >> other system. If instead the spec says it must support 128 > >> characters > >> to be WebGL conformance than an system who's OpenGL driver does not > >> support 128 character will either need a new driver or the WebGL > >> implementation will need to alias the ids. In either case WebGL > >> becomes more compatible across system. > >> > >> > >> Note that the GLSL translator everyone is currently using already > >> has > >> a 128 character limit but it would be nice if that limit (or some > >> limit) was formalized in the spec. > >> > >> > >> > >> > >> > >> > >> > >> On Thu, Jul 29, 2010 at 3:11 PM, Gregg Tavares (wrk) < > >> gman...@ > >> > >>> wrote: > >>> > >> Do we want to specify a maximum identifier length for WebGL GLSL? > >> > >> > >> I didn't see one in the GLSL spec. I was going to write a test with > >> really long identifiers (4meg) to see if I could find some drivers > >> that had problems with them but it might be better to just require > >> WebGL to enforce some maximum length. 64 chars? 128 chars? 256 > >> chars > >> which will make shaders less likely to fail on some drivers. > >> > >> > >> Thoughts? 
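[Editor's note] To make Vlad's "whole string" answer concrete, here is the kind of fully qualified name that gets passed to getUniformLocation for a nested struct member. The names are invented for illustration:

```javascript
// Invented example: the whole dotted/indexed path counts toward the limit,
// not just the final component.
var name = "sceneLighting.directionalLights[3].shadowParameters.filterKernelOffsets[15]";
// Each component is short, yet the full string is already well past 64
// characters, which is why a 64-character limit on the whole string could
// feel tight while a per-component limit would not.
```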
From cal...@ Tue Aug 3 16:36:54 2010
From: cal...@ (Mark Callow)
Date: Tue, 03 Aug 2010 16:36:54 -0700
Subject: [Public WebGL] maximum length of identifiers in WebGL GLSL
Message-ID: <4C58A816.6080707@hicorp.co.jp>

Hi Gregg,

As I wrote on this topic, what feels like several months ago,
MAX_VARYING_FLOATS is deprecated. The translator should do

  /* parenthesized so that an expression like MIN(a, b) / 4 groups correctly */
  #define MIN(x,y) ((x) > (y) ? (y) : (x))

  glGetIntegerv(GL_MAX_VERTEX_OUTPUT_COMPONENTS, &max_vertex_output_comps);
  glGetIntegerv(GL_MAX_FRAGMENT_INPUT_COMPONENTS, &max_fragment_input_comps);
  max_varying_vectors_ = MIN(max_vertex_output_comps, max_fragment_input_comps) / 4;

Regards

-Mark

On 2010/07/30 9:25, Gregg Tavares (wrk) wrote:
> Coincidently I was going to post a similar question with regards
> to gl_MaxVertexUniformVectors, gl_MaxFragmentUniformVectors and
> gl_MaxVaryingVectors ; they do not exist on GL desktop so I guess
> a GLSL ES to GLSL validator/translator needs to do something here.
>
> Should the translator replace references to those with a literal
> value as in ES 2.0 spec (ie. 256, 256 and 15 respectively) or use
> a calculated value using the desktop's gl_Max**Components* with
> some formula to find the **Vector* equivalent ?
> > The translator already does this and uses values queried from GL
> >
> >   glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_COMPONENTS, &max_fragment_uniform_vectors_);
> >   max_fragment_uniform_vectors_ /= 4;
> >
> >   glGetIntegerv(GL_MAX_VARYING_FLOATS, &max_varying_vectors_);
> >   max_varying_vectors_ /= 4;
> >
> >   glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS, &max_vertex_uniform_vectors_);
> >   max_vertex_uniform_vectors_ /= 4;
> >
> > Regards,

From gma...@ Tue Aug 3 17:14:45 2010
From: gma...@ (Gregg Tavares (wrk))
Date: Tue, 3 Aug 2010 17:14:45 -0700
Subject: [Public WebGL] maximum varying vectors / uniforms etc.
Message-ID:

On Tue, Aug 3, 2010 at 4:36 PM, Mark Callow wrote:
> Hi Gregg,
>
> As I wrote on this topic, what feels like several months ago,
> MAX_VARYING_FLOATS is deprecated. The translator should do

I mis-stated. The translator does not look up these values. It expects them
to be passed in by whatever code uses it, so if that code is running on
OpenGL it uses certain code, and if it's running on OpenGL ES 2.0 it uses
different code. It's up to the particular implementation to figure out
what's appropriate for its own platform(s).

> #define MIN(x,y) x > y ? y : x
>
> glGetIntegerv(GL_MAX_VERTEX_OUTPUT_COMPONENTS, &max_vertex_output_comps);
> glGetIntegerv(GL_MAX_FRAGMENT_INPUT_COMPONENTS, &max_fragment_input_comps);
> max_varying_vectors_ = MIN(max_vertex_output_comps,
> max_fragment_input_comps) / 4;

Most of the WebGL implementations are running on top of OpenGL 2.1. These
constants don't appear to exist in OpenGL 2.1. Am I missing something?
Should these constants be used on OpenGL 2.1 or are we talking about some
other OpenGL version?
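[Editor's note] For reference, the arithmetic both code fragments in this exchange perform can be written as one small pure function. This is a JavaScript sketch rather than the translator's actual C; note that in the C macro form, the ternary must be parenthesized, or an expression like MIN(a, b) / 4 parses as a > b ? b : (a / 4):

```javascript
// Sketch of the computation under discussion: the ES 2.0 "vectors" limits
// are derived from desktop GL "components" queries by taking the smaller
// of the two component counts and dividing by 4 (components per vec4).
function maxVaryingVectors(maxVertexOutputComps, maxFragmentInputComps) {
  var comps = Math.min(maxVertexOutputComps, maxFragmentInputComps);
  return Math.floor(comps / 4);
}
```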
> > Regards > > -Mark > > > On 2010/07/30 9:25, Gregg Tavares (wrk) wrote: > > > > >> Coincidently I was going to post a similar question with regards >> to gl_MaxVertexUniformVectors, gl_MaxFragmentUniformVectors and >> gl_MaxVaryingVectors ; they do not exist on GL desktop so I guess a GLSL ES >> to GLSL validator/translator needs to do something here. >> >> Should the translator replace references to those with a literal value >> as in ES 2.0 spec (ie. 256, 256 and 15 respectively) or use a calculated >> value using the desktop's gl_Max**Components* with some formula to find the >> **Vector* equivalent ? >> > > The translator already does this and uses values queried from GL > > glGetIntegerv( > GL_MAX_FRAGMENT_UNIFORM_COMPONENTS, &max_fragment_uniform_vectors_); > max_fragment_uniform_vectors_ /= 4; > > glGetIntegerv(GL_MAX_VARYING_FLOATS, &max_varying_vectors_); > max_varying_vectors_ /= 4; > > glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS, > &max_vertex_uniform_vectors_); > max_vertex_uniform_vectors_ /= 4; > > > >> >> >> Regards, >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cal...@ Tue Aug 3 17:52:24 2010 From: cal...@ (Mark Callow) Date: Tue, 03 Aug 2010 17:52:24 -0700 Subject: [Public WebGL] maximum varying vectors / uniforms etc. In-Reply-To: References: Message-ID: <4C58B9C8.8090300@hicorp.co.jp> They are in OpenGL 3.x and 4.x. MAX_VARYING_FLOATS was deprecated in OpenGL 3.1. Regards -Mark On 2010/08/03 17:14, Gregg Tavares (wrk) wrote: > > On Tue, Aug 3, 2010 at 4:36 PM, Mark Callow > wrote: > > Hi Gregg, > > As I wrote on this topic, what feels like several months ago, > MAX_VARYING_FLOATS is deprecated. The translator should do > > > I mis-stated. > > The translator does not look up these values. It expects them to be > passed in by whatever code uses it so if that code is running on > OpenGL then it uses certain code. If it's running on OpenGL ES 2.0 it > uses different code. 
It's up to the particular implementation to > figure out what's appropriate for it's own platform/s > > > > #define MIN(x,y) x > y ? y : x > > glGetIntegerv(GL_MAX_VERTEX_OUTPUT_COMPONENTS, > &max_vertex_output_comps); > glGetIntegerv(GL_MAX_FRAGMENT_INPUT_COMPONENTS, > &max_fragment_input_comps); > max_varying_vectors_ = MIN(max_vertex_output_comps, > max_fragment_input_comps) / 4; > > > Most of the WebGL implementations are running on top of OpenGL 2.1 > These constants don't appear to exist in OpenGL 2.1. Am I missing > something? Should these constants be used on OpenGL 2.1 or are we > talking about some other OpenGL version? > > > > Regards > > -Mark > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 378 bytes Desc: not available URL: From ced...@ Wed Aug 4 20:46:15 2010 From: ced...@ (Cedric Vivier) Date: Thu, 5 Aug 2010 11:46:15 +0800 Subject: [Public WebGL] TypedArrays and mapped memory regions Message-ID: Hi, TypedArrays and their underlying ArrayBuffer gives efficient native/direct access and interoperability with low-level languages to memory regions. Native memory regions do not always have read/write access like assumed right now by the spec though, some native APIs give back read-only or write-only memory regions. Currently these APIs (eg. such as mmap but also closer to WebGL glMapBuffer, or buffer locking APIs) are not usable from Javascript without defining custom 'illegal access' behavior to TypedArrays and/or, worse, writing new custom buffer types altogether to be able to safeguard such invalid access attempts. IMO being able to write Javascript wrappers for such native functions with a common, consistent, well-specified interface such as TypedArrays would be beneficial. Potential users for this are WebGL extensions, File API, Node.JS(?), etc... 
I propose a minimally intrusive addition to the spec to define behavior that makes TypedArrays usage compatible with any mapped memory region: [addition to ArrayBuffer type spec] readonly attribute boolean readable; Read-only property. True if the ArrayBuffer contents can be read from, false otherwise. readonly attribute boolean writable; Read-only property. True if the ArrayBuffer contents can be written to, false otherwise. [addition to ArrayBuffer constructor spec] The contents of the ArrayBuffer are initialized to 0 and both readable and writable attributes are true. [addition to TypedArray getter spec] If the underlying ArrayBuffer is not readable (readable attribute set to false) an INVALID_ACCESS_ERR is raised. [addition to TypedArray setter spec] If the underlying ArrayBuffer is not writable (writable attribute set to false) an INVALID_ACCESS_ERR is raised. It's the responsibility of Javascript native implementors to manage these attributes appropriately with the wrapped API and the Javascript engine internally (the access attributes being read-only from Javascript's user perspective). Benefits: - frontend APIs have consistent illegal access behavior and aren't required to implement/document custom types and behavior over existing TypedArray implementations. - for some APIs memory usage and GC churning can be greatly reduced as this allows zero-copy mechanisms.
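A userland sketch of the proposed semantics can be built today with a Proxy. This is only an illustration of the intended INVALID_ACCESS_ERR behavior, not the proposal itself; real TypedArrays expose no readable/writable attributes:

```javascript
// Userland sketch of the proposed readable/writable semantics using a Proxy
// around a typed array. Index reads/writes are guarded; everything else
// passes through to the underlying array.
function guardedView(typedArray, { readable, writable }) {
  const isIndex = (prop) => typeof prop === 'string' && /^\d+$/.test(prop);
  return new Proxy(typedArray, {
    get(target, prop) {
      if (isIndex(prop) && !readable) {
        throw new Error('INVALID_ACCESS_ERR: buffer is not readable');
      }
      const value = target[prop];
      return typeof value === 'function' ? value.bind(target) : value;
    },
    set(target, prop, value) {
      if (isIndex(prop) && !writable) {
        throw new Error('INVALID_ACCESS_ERR: buffer is not writable');
      }
      target[prop] = value;
      return true;
    },
  });
}

const writeOnly = guardedView(new Float32Array(4), { readable: false, writable: true });
writeOnly[0] = 1.5; // allowed: the view is writable
let readFailed = false;
try { writeOnly[0]; } catch (e) { readFailed = true; } // reads raise
console.log(readFailed); // true
```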
Prospective example with glMapBuffer: (NB: not the only potential user and I do not support its inclusion in WebGL 1.0 of course ;) array = gl.mapBuffer(GL_ARRAY_BUFFER, GL_WRITEONLY); //array is writable only //fill the array with, say, procedurally generated data from an audio waveform gl.unmapBuffer(); //array is now neither readable nor writable as backend storage is unmapped //any attempt to access its contents will raise an INVALID_ACCESS_ERR Proposed patch to support this spec addition in Mozilla: http://neonux.com/webgl/js_readable_writable.patch Proposed patch to add a helper for native JS code to create an ArrayBuffer from a mapped region: http://neonux.com/webgl/js_createarraybuffermapped.patch (patch not required to support the spec addition of course) Thoughts? Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Thu Aug 5 11:30:47 2010 From: bja...@ (Benoit Jacob) Date: Thu, 5 Aug 2010 11:30:47 -0700 (PDT) Subject: [Public WebGL] texParameteri and texParameterf Message-ID: <719180765.196026.1281033047333.JavaMail.root@cm-mail03.mozilla.org> Hi, Do you think that texParameteri and texParameterf could be merged into a single texParameter method that would interpret its 'param' argument either as an int or as a float, depending on the 'pname' argument? Maybe that would be more in line with the rest of our API, and more javascript-ish? Note that in standard OpenGL ES / WebGL there is currently no float parameter at all, although extensions might add some.
Benoit ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Thu Aug 5 12:10:46 2010 From: kbr...@ (Kenneth Russell) Date: Thu, 5 Aug 2010 12:10:46 -0700 Subject: [Public WebGL] TypedArrays and mapped memory regions In-Reply-To: References: Message-ID: On Wed, Aug 4, 2010 at 8:46 PM, Cedric Vivier wrote: > Hi, > > TypedArrays and their underlying ArrayBuffer gives efficient native/direct > access and interoperability with low-level languages to memory regions. > Native memory regions do not always have read/write access like assumed > right now by the spec though, some native APIs give back read-only or > write-only memory regions. > Currently these APIs (eg. such as mmap but also closer to WebGL glMapBuffer, > or buffer locking APIs) are not usable from Javascript without defining > custom 'illegal access' behavior to TypedArrays and/or, worse, writing new > custom buffer types altogether to be able to safeguard such invalid access > attempts. > IMO being able to write Javascript wrappers for such native functions with a > common, consistent, well-specified interface such as TypedArrays would be > beneficial. > Potential users for this are WebGL extensions, File API, Node.JS(?), etc... > > I propose a minimally intrusive addition to the spec to define behavior that > makes TypedArrays usage compatible with any mapped memory region : > > [addition to ArrayBuffer type spec] > readonly attribute boolean readable; > Read-only property. > True if the the ArrayBuffer contents can be read from, false otherwise. > readonly attribute boolean writable; > Read-only property. > True if the the ArrayBuffer contents can be written to, false otherwise. > > [addition to ArrayBuffer constructor spec] > The contents of the ArrayBuffer are initialized to 0 and both readable and > writable attributes are true. 
> [addition to TypedArray getter spec] > If the underlying ArrayBuffer is not readable (readable attribute set to > false) an INVALID_ACCESS_ERR is raised. > > [addition to TypedArray setter spec] > If the underlying ArrayBuffer is not writable (writable attribute set to > false) an INVALID_ACCESS_ERR is raised. > > > It's the responsibility of Javascript native implementors to manage these > attributes appropriately with the wrapped API and the Javascript engine > internally (the access attributes being read-only from Javascript's user > perspective). > Benefits : > - frontend APIs have consistent illegal access behavior and aren't required > to implement/document custom types and behavior over existing TypedArray > implementations. > - for some APIs memory usage and GC churning can be greatly reduced as this > allows zero-copy mechanisms. > > Prospective example with glMapBuffer : (NB: not the only potential user and > I do not support its inclusion in WebGL 1.0 of course ;) > array = gl.mapBuffer(GL_ARRAY_BUFFER, GL_WRITEONLY); > //array is writable only > //fill the array with, say, procedurally generated data from an audio > waveform > gl.unmapBuffer(); > //array is now neither readable or writable as backend storage is unmapped > //any attempt to access its contents will raise an INVALID_ACCESS_ERR > > > Proposed patch to support this spec addition in Mozilla : > http://neonux.com/webgl/js_readable_writable.patch > Proposed patch to add a helper for native JS code to create an ArrayBuffer > from a mapped region : > http://neonux.com/webgl/js_createarraybuffermapped.patch > (patch not required to support the spec addition of course) > > Thoughts ? I've had some experience with support for read-only buffers and dynamically unmapping buffers in Java's New I/O API and implementation. In that API, the read-only attribute was only one of a few that could be toggled per buffer. 
Others were whether or not the buffer was in big-endian mode, and whether the buffer was properly aligned to support fast stores on CPU architectures that didn't support unaligned stores. In order to maintain high performance, it is essential to avoid run-time checks for these attributes on each read and write. For this reason, in the Java implementation of these concepts, many subclasses of abstract base classes like FloatBuffer and IntBuffer were generated behind the scenes. There were something like eight subclasses generated per type. Unfortunately, the proliferation of these subclasses was also a performance disaster; they defeated the devirtualization techniques used at the time, so in many cases, every read and write turned into a virtual call. This led to a factor of ten performance hit in some common use cases. I think we need to be very, very careful about adding functionality like this to the TypedArray specification to avoid similar performance disasters. Even though some good optimization work has already been done for TypedArrays, the performance of the current spec in current ECMAScript virtual machines is not near what is desired. Until we have achieved acceptable baseline performance of the current TypedArrays, and then can do good before-and-after performance comparisons of feature additions such as this one, I am not in favor of adding this functionality. APIs like this one can still be bound to ECMAScript, just with an additional memory copy upon map / unmap. 
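The copying fallback Ken mentions can be sketched as follows; the function names are hypothetical, not a real WebGL API:

```javascript
// Sketch of the copy-on-map alternative: map returns a fresh copy of the
// backing store, unmap writes it back, so no read-only or unmapped
// TypedArray states are ever needed.
function mapBufferCopy(backingStore) {
  return backingStore.slice(); // copy out on map
}
function unmapBufferCopy(backingStore, mapped) {
  backingStore.set(mapped); // copy back on unmap
}

const store = new Float32Array([0, 0, 0]);
const mapped = mapBufferCopy(store);
mapped[0] = 42; // mutating the copy does not touch the store yet
unmapBufferCopy(store, mapped);
console.log(store[0]); // 42
```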
-Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Thu Aug 5 12:16:36 2010 From: kbr...@ (Kenneth Russell) Date: Thu, 5 Aug 2010 12:16:36 -0700 Subject: [Public WebGL] texParameteri and texParameterf In-Reply-To: <719180765.196026.1281033047333.JavaMail.root@cm-mail03.mozilla.org> References: <719180765.196026.1281033047333.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Thu, Aug 5, 2010 at 11:30 AM, Benoit Jacob wrote: > Hi, > > Do you think that texParameteri and texParameterf could be merged into a single texParameter method that would interprete its 'param' argument either as an int or as a float, depending on the 'pname' argument' ? > > Maybe that would be more in line with the rest of our API, and more javascript-ish? > > Note that in standard OpenGL ES / WebGL there is currently no float parameter at all, although extensions might add some. I'm pretty sure there was a specific reason we preserved the i and f suffixes -- some parameters which can accept either an integer or floating point value and have slightly different semantics. Jon Leech would probably remember exactly which ones. They might exist only in desktop GL. Regardless, I think we should preserve both entry points, because in JavaScript it's impossible in all cases to determine whether a given number is supposed to be treated semantically as an integer or floating-point value. 
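Ken's point can be shown in two lines: JavaScript has a single Number type, so a merged texParameter could not recover the caller's intent from the value alone:

```javascript
// There is no distinct float literal in JavaScript; 1 and 1.0 are the
// same value of the same type, so the API cannot tell them apart.
console.log(1 === 1.0);             // true
console.log(Number.isInteger(1.0)); // true: "1.0" carries no float-ness
const typesMatch = typeof 1 === typeof 1.0; // both "number"
console.log(typesMatch); // true
```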
-Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From bja...@ Thu Aug 5 13:15:17 2010 From: bja...@ (Benoit Jacob) Date: Thu, 5 Aug 2010 13:15:17 -0700 (PDT) Subject: [Public WebGL] texParameteri and texParameterf In-Reply-To: Message-ID: <1604952789.197071.1281039317482.JavaMail.root@cm-mail03.mozilla.org> ----- Original Message ----- > On Thu, Aug 5, 2010 at 11:30 AM, Benoit Jacob > wrote: > > Hi, > > > > Do you think that texParameteri and texParameterf could be merged > > into a single texParameter method that would interprete its 'param' > > argument either as an int or as a float, depending on the 'pname' > > argument' ? > > > > Maybe that would be more in line with the rest of our API, and more > > javascript-ish? > > > > Note that in standard OpenGL ES / WebGL there is currently no float > > parameter at all, although extensions might add some. > > I'm pretty sure there was a specific reason we preserved the i and f > suffixes -- some parameters which can accept either an integer or > floating point value and have slightly different semantics. Thanks for the speedy answer. I have one more question: what should be the behavior when calling texParameterf with a pname that expects an integer value? 1. should it just fail with INVALID_ENUM? 2. or should it succeed provided that the float param happens to be integral (e.g. 123.000f)? I feel much in favor of 1. and what you say above seems to go in this direction, since you say that some pname might eventually have different semantics in texParameterf. But I just need to ask, because WebKit's current implementation is apparently allowing the int pnames with texParameterf, http://trac.webkit.org/browser/trunk/WebCore/html/canvas/WebGLRenderingContext.cpp#L2338 Cheers, Benoit > Jon Leech > would probably remember exactly which ones. 
They might exist only in > desktop GL. Regardless, I think we should preserve both entry points, > because in JavaScript it's impossible in all cases to determine > whether a given number is supposed to be treated semantically as an > integer or floating-point value. > > -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From bja...@ Thu Aug 5 13:34:02 2010 From: bja...@ (Benoit Jacob) Date: Thu, 5 Aug 2010 13:34:02 -0700 (PDT) Subject: [Public WebGL] Re: texParameteri and texParameterf In-Reply-To: <20100805202352.GA3038@brad.oddhack.org> Message-ID: <2062332599.197330.1281040442824.JavaMail.root@cm-mail03.mozilla.org> (re-adding the list in CC) ----- Original Message ----- > On Thu, Aug 05, 2010 at 01:15:17PM -0700, Benoit Jacob wrote: > > Thanks for the speedy answer. I have one more question: what should > > be the behavior when calling texParameterf with a pname that expects > > an integer value? > > 1. should it just fail with INVALID_ENUM? > > 2. or should it succeed provided that the float param happens to be > > integral (e.g. 123.000f)? > > It does not fail in GL, there are defined conversion rules. > > Jon Ah OK. I hadn't read the desktop GL spec. I have read the WebGL and GL ES specs and haven't found any clue about whether this should fail or not. Does someone know where this is defined, or should the spec be clarified? 
Benoit ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From bja...@ Thu Aug 5 13:56:19 2010 From: bja...@ (Benoit Jacob) Date: Thu, 5 Aug 2010 13:56:19 -0700 (PDT) Subject: [Public WebGL] Re: texParameteri and texParameterf In-Reply-To: <20100805204900.GA3146@brad.oddhack.org> Message-ID: <1393972496.197580.1281041779789.JavaMail.root@cm-mail03.mozilla.org> (You don't mind that I re-add the list in CC, do you?) ----- Original Message ----- > On Thu, Aug 05, 2010 at 01:34:02PM -0700, Benoit Jacob wrote: > > Ah OK. I hadn't read the desktop GL spec. > > > > I have read the WebGL and GL ES specs and haven't found any clue > > about whether this should fail or not. Does someone know where this > > is defined, or should the spec be clarified? > > Need to clarify exactly what you're asking about. You said "an > integer value". But glTexParameter* can specify either float, int, or > enum values. For a float or int value, there are well-defined numeric > conversions; you can specify int values for a parameter that's > internally float, or vice-versa, and get conversions as described in > chapter 2 of the GL/ES specs. Thanks, but this Chapter 2 is 40 pages long; could you please refer to a more specific part of it? > > For an enum value, you in principle could specify it as a float but > it should only succeed if the conversion is lossless and the result is > an accepted value for that parameter, per the general description of > INVALID_ENUM errors also in ch. 2. 
In practice I wouldn't be surprised > for there to be some variability in implementations as this is a > really > obscure case - nobody in C is likely to write something like > > GLfloat enumVal = (GLfloat)GL_LINEAR; > glTexParameterfv(GL_TEXTURE2D, GL_TEXTURE_MIN_FILTER, &enumVal); > > or > > glTexParameterf(GL_TEXTURE2D, GL_TEXTURE_MIN_FILTER, > (GLfloat)GL_LINEAR); Ah, thanks, this is exactly what I was inquiring about. I need to have a closer look at that chapter 2. Cheers, Benoit > > Jon ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From bja...@ Thu Aug 5 14:36:29 2010 From: bja...@ (Benoit Jacob) Date: Thu, 5 Aug 2010 14:36:29 -0700 (PDT) Subject: [Public WebGL] Re: texParameteri and texParameterf In-Reply-To: <20100805210732.GA3318@brad.oddhack.org> Message-ID: <1965967659.198130.1281044188981.JavaMail.root@cm-mail03.mozilla.org> ----- Original Message ----- > On Thu, Aug 05, 2010 at 01:56:19PM -0700, Benoit Jacob wrote: > > (You don't mind that I re-add the list in CC, do you?) > > That's fine. I am not subscribed => probably can't post to it and > don't want to get bounces. > > > Thanks, but this Chapter 2 is 40 pages long; could you please refer > > to a more specific part of it? > > "Data Conversion" and "GL Errors" subsections of the ES spec. Thanks, I have read these sections now, but they don't seem to imply that texParameterf should accept integer parameters if the float value happens to be integral. Indeed, in 2.1.2 Data Conversion, it's said that float-to-integer conversions are done with the formula int = (float * (2^b - 1) - 1) / 2 where b is the bit width of the integer type, i.e. they are talking about a different kind of conversion. 
To be honest it sounds saner to me, and it makes my life easier, to just let texParameterf reject pnames that are meant to take integers (unless in the future someone explicitly says it accepts floats) so I think I'll stick with that until someone shows me some spec proving me wrong! Thanks for your patience, Benoit > > Jon ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kwa...@ Thu Aug 5 15:04:02 2010 From: kwa...@ (Kenneth Waters) Date: Thu, 5 Aug 2010 15:04:02 -0700 Subject: [Public WebGL] Re: texParameteri and texParameterf In-Reply-To: <1965967659.198130.1281044188981.JavaMail.root@cm-mail03.mozilla.org> References: <20100805210732.GA3318@brad.oddhack.org> <1965967659.198130.1281044188981.JavaMail.root@cm-mail03.mozilla.org> Message-ID: > > To be honest it sounds saner to me, and it makes my life easier, to just > let texParameterf reject pnames that are meant to take integers (unless in > the future someone explicitly says it accepts floats) so I think I'll stick with > that until someone shows me some spec proving me wrong! > Section 6.1.2 defines the state conversion rules for queries. I admit that it's unclear from the GL 2.0 and GL-ES 2.0 specs what you do when you set state; however, it's always been my understanding that the same conversion rules apply. -- Kenneth Waters -------------- next part -------------- An HTML attachment was scrubbed...
URL: From bja...@ Thu Aug 5 17:25:28 2010 From: bja...@ (Benoit Jacob) Date: Thu, 5 Aug 2010 17:25:28 -0700 (PDT) Subject: [Public WebGL] Re: texParameteri and texParameterf In-Reply-To: Message-ID: <1703325848.199319.1281054328910.JavaMail.root@cm-mail03.mozilla.org> To be honest it sounds saner to me, and it makes my life easier, to just let texParameterf reject pnames that are meant to take integers (unless in the future someone explicitly says it accepts floats) so I think I'll stick with that until someone shows me some spec proving me wrong! Section 6.1.2 defines the state conversion rules for queries. I admit that it's unclear from the GL 2.0 and GL-ES 2.0 specs what you do when you set state; however, it's always been my understanding that the same conversion rules apply. -- Kenneth Waters Ah, thanks, I hadn't seen this section 6.1.2 about queries. So I have to admit that it makes it legitimate to apply the same conversion rules to setters. Since it seems to be the consensus among people here, I'll just follow your interpretation! Thanks, Benoit -------------- next part -------------- An HTML attachment was scrubbed... URL: From cal...@ Fri Aug 6 10:44:42 2010 From: cal...@ (Mark Callow) Date: Fri, 06 Aug 2010 10:44:42 -0700 Subject: [Public WebGL] OpenGL ES 2.0 driver for desktop Message-ID: <4C5C4A0A.4090100@hicorp.co.jp> Of direct interest to WebGL implementers, and likely of interest to others on this list, AMD last week at Siggraph announced OpenGL ES 2.0 drivers for their desktop graphics cards. Paradoxically it will probably complicate things for WebGL implementations running on the desktop that wish to use this driver when available. Hopefully other desktop graphics vendors will follow suit, thus removing the complication. BTW, these AMD drivers pass the OpenGL ES 2.0 conformance tests. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 392 bytes Desc: not available URL: From cal...@ Fri Aug 6 10:56:38 2010 From: cal...@ (Mark Callow) Date: Fri, 06 Aug 2010 10:56:38 -0700 Subject: [Public WebGL] KTX file format Message-ID: <4C5C4CD6.20109@hicorp.co.jp> Some time ago there was discussion on this list about downloading textures in a compressed format. Well, last week at Siggraph, Khronos announced the KTX file format and tools. KTX (Khronos Texture) is a lightweight file format for OpenGL textures, designed around how textures are loaded in OpenGL. KTX files contain all the parameters needed for texture loading. A single file can contain everything from a simple base-level 2D texture through to an array texture with all mipmap levels. Textures can be stored in one of the compressed formats, e.g. ETC1, supported by OpenGL family APIs and extensions or can be stored uncompressed. We expect most texture compression tools will add support for KTX and we expect ETC (Ericsson Texture Compression) to become more widely available in the future. Thus together these two standards will ultimately provide a solution to downloading compressed texture data for WebGL. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 392 bytes Desc: not available URL: From cal...@ Fri Aug 6 11:03:58 2010 From: cal...@ (Mark Callow) Date: Fri, 06 Aug 2010 11:03:58 -0700 Subject: [Public WebGL] Why GL 2.x? Message-ID: <4C5C4E8E.40804@hicorp.co.jp> In several discussions on this list recently, it has emerged that people are using GL 2.x drivers and documentation. I am curious why this is. Is it (a) because it is hard to get the latest 3.x or 4.x GL drivers from the desktop vendors?
Or (b) is it because GPUs that can support GL 3.x are still relatively rare in the real world? Or (c) some other reason? Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 392 bytes Desc: not available URL: From cma...@ Fri Aug 6 11:17:11 2010 From: cma...@ (Chris Marrin) Date: Fri, 06 Aug 2010 11:17:11 -0700 Subject: [Public WebGL] KTX file format In-Reply-To: <4C5C4CD6.20109@hicorp.co.jp> References: <4C5C4CD6.20109@hicorp.co.jp> Message-ID: On Aug 6, 2010, at 10:56 AM, Mark Callow wrote: > Some time ago there was discussion on this list about downloading textures in a compressed format. Well last week at Siggraph, Khronos announced the KTX file format and tools. > > KTX (Khronos Texture) is a lightweight file format for OpenGL textures, designed around how textures are loaded in OpenGL. KTX files contain all the parameters needed for texture loading. A single file can contain everything from a simple base-level 2D texture through to an array texture with all mipmap levels. Textures can be stored in one of the compressed formats, e.g. ETC1, supported by OpenGL family APIs and extensions or can be stored uncompressed. > > We expect most texture compression tools will add support for KTX and we expect ETC (Ericsson Texture Compression) to become more widely available in the future. Thus together these two standards will ultimately provide a solution to downloading compressed texture data for WebGL. It's sad that ETC1 has an oppressive license (http://www.khronos.org/opengles/sdk/tools/KTX/doc/libktx/licensing.html) which will probably make it unusable for most WebGL implementations. IANAL, but it looks to me like an implementation of WebGL on top of D3D will not be able to use ETC1, since it would not be "a middleware API that is built on top of a Khronos API". Too bad.
----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cal...@ Fri Aug 6 11:27:30 2010 From: cal...@ (Mark Callow) Date: Fri, 06 Aug 2010 11:27:30 -0700 Subject: [Public WebGL] KTX file format In-Reply-To: References: <4C5C4CD6.20109@hicorp.co.jp> Message-ID: <4C5C5412.5020408@hicorp.co.jp> Ericsson's intent is that the code be usable for purposes related to Khronos Group APIs which WebGL surely is. I see that the license as currently written would not cover WebGL on D3D. This case is very similar to using the decompressor in a software implementation of OpenGL ES which they definitely intended to be permissible. So I am confident we can get Ericsson to amend the license. Thanks for pointing out the problem. Regards -Mark On 2010/08/06 11:17, Chris Marrin wrote: > On Aug 6, 2010, at 10:56 AM, Mark Callow wrote: > > >> Some time ago there was discussion on this list downloading textures in a compressed format. Well last week at Siggraph, Khronos announced the KTX file format and tools. >> >> KTX (Khronos Texture) is a lightweight file format for OpenGL? textures, designed around how textures are loaded in OpenGL. KTX files contain all the parameters needed for texture loading. A single file can contain everything from a simple base-level 2D texture through to an array texture with all mipmap levels. Textures can be stored in one of the compressed formats, e.g. ETC1, supported by OpenGL family APIs and extensions or can be stored uncompressed. >> >> We expect most texture compression tools will add support for KTX and we expect ETC (Ericsson Texture Compression) to become more widely available in the future. Thus together these two standards will ultimately provide a solution to downloading compressed texture data for WebGL. 
>> > It's sad that ETC1 has a oppressive license (http://www.khronos.org/opengles/sdk/tools/KTX/doc/libktx/licensing.html) which will probably make it unusable for most WebGL implementations. IANAL, but it looks to me like an implementation of WebGL on top of D3D will not be able to use ETC1, since it would not be "a middleware API that is built on top of a Khronos API". > > Too bad. > > ----- > ~Chris > cmarrin...@ > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 378 bytes Desc: not available URL: From cal...@ Fri Aug 6 11:54:41 2010 From: cal...@ (Mark Callow) Date: Fri, 06 Aug 2010 11:54:41 -0700 Subject: [Public WebGL] KTX file format In-Reply-To: <4C5C5412.5020408@hicorp.co.jp> References: <4C5C4CD6.20109@hicorp.co.jp> <4C5C5412.5020408@hicorp.co.jp> Message-ID: <4C5C5A71.3020309@hicorp.co.jp> My contact at Ericsson is on vacation until the end of the month so there will be a delay in resolving this. Regards -Mark On 2010/08/06 11:27, Mark Callow wrote: > > Ericsson's intent is that the code be usable for purposes related to > Khronos Group APIs which WebGL surely is. I see that the license as > currently written would not cover WebGL on D3D. This case is very > similar to using the decompressor in a software implementation of > OpenGL ES which they definitely intended to be permissible. So I am > confident we can get Ericsson to amend the license. Thanks for > pointing out the problem. > > Regards > > -Mark > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: callow_mark.vcf Type: text/x-vcard Size: 378 bytes Desc: not available URL: From cma...@ Fri Aug 6 12:02:59 2010 From: cma...@ (Chris Marrin) Date: Fri, 06 Aug 2010 12:02:59 -0700 Subject: [Public WebGL] KTX file format In-Reply-To: <4C5C5412.5020408@hicorp.co.jp> References: <4C5C4CD6.20109@hicorp.co.jp> <4C5C5412.5020408@hicorp.co.jp> Message-ID: <1278B449-A7D1-4583-89B9-CEDFCC0F932E@apple.com> On Aug 6, 2010, at 11:27 AM, Mark Callow wrote: > Ericsson's intent is that the code be usable for purposes related to Khronos Group APIs which WebGL surely is. I see that the license as currently written would not cover WebGL on D3D. This case is very similar to using the decompressor in a software implementation of OpenGL ES which they definitely intended to be permissible. So I am confident we can get Ericsson to amend the license. Thanks for pointing out the problem. Unless they amend it to be BSD compatible, I think it's a showstopper for companies like Mozilla. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Fri Aug 6 12:05:00 2010 From: cma...@ (Chris Marrin) Date: Fri, 06 Aug 2010 12:05:00 -0700 Subject: [Public WebGL] KTX file format In-Reply-To: <4C5C5A71.3020309@hicorp.co.jp> References: <4C5C4CD6.20109@hicorp.co.jp> <4C5C5412.5020408@hicorp.co.jp> <4C5C5A71.3020309@hicorp.co.jp> Message-ID: On Aug 6, 2010, at 11:54 AM, Mark Callow wrote: > > My contact at Ericsson is on vacation until the end of the month so there will be a delay in resolving this. Thanks for working on this Mark. I really do hope it can be changed to be a simple BSD compatible license. But it seems unlikely since their piece was called out specifically from the general Khronos license, which is (apparently) BSD compatible. But I can always hope! Thanks again... 
----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Fri Aug 6 14:49:06 2010 From: vla...@ (Vladimir Vukicevic) Date: Fri, 6 Aug 2010 14:49:06 -0700 (PDT) Subject: [Public WebGL] Why GL 2.x? In-Reply-To: <4C5C4E8E.40804@hicorp.co.jp> Message-ID: <156942840.210106.1281131346215.JavaMail.root@cm-mail03.mozilla.org> Hmm, what do you mean by "people are using GL 2.x drivers and documentation"? I would think people are using whatever driver they have and in general ignoring the actual max GL version supported; I'd expect most people working webgl to have relatively recent drivers (which in the windows case are at least GL 3.x, unless the hardware just can't do it). For the documentation piece, I'm not sure what's being referenced, though. - Vlad ----- Original Message ----- > In several discussions on this list recently, it has emerged that > people are using GL 2.x drivers and documentation. I am curious why > this is. Is it (a) because it is hard to get the lastest 3.x or 4.x GL > drivers from the desktop vendors. Or (b) is it because GPUs that can > support GL 3.x are still relatively rare in the real world. Or (c) > some other reason? 
> > > Regards > > > > -Mark ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Fri Aug 6 14:51:02 2010 From: vla...@ (Vladimir Vukicevic) Date: Fri, 6 Aug 2010 14:51:02 -0700 (PDT) Subject: [Public WebGL] KTX file format In-Reply-To: <1278B449-A7D1-4583-89B9-CEDFCC0F932E@apple.com> Message-ID: <1755816650.210129.1281131462340.JavaMail.root@cm-mail03.mozilla.org> ----- Original Message ----- > On Aug 6, 2010, at 11:27 AM, Mark Callow wrote: > > > Ericsson's intent is that the code be usable for purposes related to > > Khronos Group APIs which WebGL surely is. I see that the license as > > currently written would not cover WebGL on D3D. This case is very > > similar to using the decompressor in a software implementation of > > OpenGL ES which they definitely intended to be permissible. So I am > > confident we can get Ericsson to amend the license. Thanks for > > pointing out the problem. > > Unless they amend it to be BSD compatible, I think it's a showstopper > for companies like Mozilla. Yep, it would definitely be a challenge, though it would depend on what the license is; doesn't necessarily have to be BSD, but still has to be fairly open. - Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cal...@ Fri Aug 6 16:03:05 2010 From: cal...@ (Mark Callow) Date: Fri, 06 Aug 2010 16:03:05 -0700 Subject: [Public WebGL] Why GL 2.x? In-Reply-To: <156942840.210106.1281131346215.JavaMail.root@cm-mail03.mozilla.org> References: <156942840.210106.1281131346215.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <4C5C94A9.3060806@hicorp.co.jp> An example of "using" is the message from Greg about the shader validator. 
He stated he is linking with an OpenGL 2.x library and therefore didn't find certain #defines. As for documentation, there have been several recent messages containing words to the effect of "I looked in the 2.x spec." or "I couldn't find Y in the 2.x spec." Regards -Mark On 2010/08/06 14:49, Vladimir Vukicevic wrote: > Hmm, what do you mean by "people are using GL 2.x drivers and documentation"? I would think people are using whatever driver they have and in general ignoring the actual max GL version supported; I'd expect most people working on WebGL to have relatively recent drivers (which in the Windows case are at least GL 3.x, unless the hardware just can't do it). For the documentation piece, I'm not sure what's being referenced, though. > > - Vlad > > ----- Original Message ----- > >> In several discussions on this list recently, it has emerged that >> people are using GL 2.x drivers and documentation. I am curious why >> this is. Is it (a) because it is hard to get the latest 3.x or 4.x GL >> drivers from the desktop vendors. Or (b) is it because GPUs that can >> support GL 3.x are still relatively rare in the real world. Or (c) >> some other reason? >> >> >> Regards >> >> >> >> -Mark >> > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 392 bytes Desc: not available URL: From cal...@ Fri Aug 6 16:16:05 2010 From: cal...@ (Mark Callow) Date: Fri, 06 Aug 2010 16:16:05 -0700 Subject: [Public WebGL] Shader validator / demos Message-ID: <4C5C97B5.2050003@hicorp.co.jp> The shader validator still is not on by default in the Firefox 4 beta 4 nightly that I installed an hour ago. 
The majority of the demos in the public wiki do not run in Firefox. The ones that originate from Apple all give, among other things, a "console is not defined" error. -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 392 bytes Desc: not available URL: From cal...@ Fri Aug 6 16:22:39 2010 From: cal...@ (Mark Callow) Date: Fri, 06 Aug 2010 16:22:39 -0700 Subject: [Public WebGL] download/activity indicator Message-ID: <4C5C993F.9060400@hicorp.co.jp> When WebGL apps are loading, there are often times when the browser status bar says "Done" but nothing is apparently happening. E.g., the Spore demo. In the case of Spore it is downloading the model data. In other cases nothing is in fact happening because the app is broken. This is very user-unfriendly. Can browser vendors make the download/activity indicator operate when WebGL apps are loading model data? Then if the browser says "Done" and nothing has appeared, users will know there is a problem. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 392 bytes Desc: not available URL: From vla...@ Fri Aug 6 16:26:06 2010 From: vla...@ (Vladimir Vukicevic) Date: Fri, 6 Aug 2010 16:26:06 -0700 (PDT) Subject: [Public WebGL] Shader validator / demos In-Reply-To: <4C5C97B5.2050003@hicorp.co.jp> Message-ID: <278157373.210905.1281137166620.JavaMail.root@cm-mail03.mozilla.org> Yup, that is correct. I was planning on turning things on yesterday, but we had some build infrastructure problems preventing me from doing so. You should be able to flip the pref as mentioned in my original mail and things should work. 
As far as console is not defined, some of the demos depend on a non-standard console object always being present; I'll go through and see about fixing those. You can get a console defined in Fx by opening up the (work-in-progress) console from the tools menu, though. - Vlad ----- Original Message ----- > The shader validator still is not on by default in the Firefox 4 beta > 4 nightly that I installed an hour ago. > > > The majority of the demos in the public wiki do not run in Firefox. > The ones that originate from Apple all give, among other things, a > "console is not defined" error. > > > > > -Mark ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cal...@ Fri Aug 6 17:39:55 2010 From: cal...@ (Mark Callow) Date: Fri, 06 Aug 2010 17:39:55 -0700 Subject: [Public WebGL] Shader validator / demos In-Reply-To: <278157373.210905.1281137166620.JavaMail.root@cm-mail03.mozilla.org> References: <278157373.210905.1281137166620.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <4C5CAB5B.2020702@hicorp.co.jp> Thanks Vlad. I did turn on the shader validator via about:config and it does work. The error message box that pops up when a shader has an error would be meaningless to 99.8% of users. Maybe a heading like "This page's WebGL shader has the following errors:" would help. Regards -Mark On 2010/08/06 16:26, Vladimir Vukicevic wrote: > Yup, that is correct. I was planning on turning things on yesterday, but we had some build infrastructure problems preventing me from doing so. You should be able to flip the pref as mentioned in my original mail and things should work. > > As far as console is not defined, some of the demos depend on a non-standard console object always being present; I'll go through and see about fixing those. 
You can get a console defined in Fx by opening up the (work-in-progress) console from the tools menu, though. > > - Vlad > > ----- Original Message ----- > >> The shader validator still is not on by default in the Firefox 4 beta >> 4 nightly that I installed an hour ago. >> >> >> The majority of the demos in the public wiki do not run in Firefox. >> The ones that originate from Apple all give, among other things, a >> "console is not defined" error. >> >> >> >> >> -Mark >> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 392 bytes Desc: not available URL: From vla...@ Fri Aug 6 17:45:07 2010 From: vla...@ (Vladimir Vukicevic) Date: Fri, 6 Aug 2010 17:45:07 -0700 (PDT) Subject: [Public WebGL] Shader validator / demos In-Reply-To: <4C5CAB5B.2020702@hicorp.co.jp> Message-ID: <2114977624.211179.1281141907232.JavaMail.root@cm-mail03.mozilla.org> The problem with this (and the network case, incidentally) is that there's really no way for the browser to know -- any of the error messages being displayed here, including the message box, are all from the content app itself and not being generated by the browser. I guess the browser could log some kind of error if compileShader() ever fails, but it definitely wouldn't pop up any kind of dialog, so the best it could do is just write something to its JS console, which most users won't think to look at anyway. Even then, telling them that the WebGL shader has errors wouldn't be of much use to them, because it's not something that they can fix. The same is true for resource loads; the browser could indicate network traffic is happening when there are any outstanding background requests, but to users that just looks like "this page has never finished loading". Instead, the apps themselves should inform users about what's going on; e.g. gmail's loading screen is a good example. 
- Vlad ----- Original Message ----- > Thanks Vlad. I did turn on the shader validator via about:config and > it does work. The error message box that pops up when a shader has an > error would be meaningless to 99.8% of users. Maybe a heading like > "This page's WebGL shader has the following errors:" would help. > > > Regards > > > > -Mark > > On 2010/08/06 16:26, Vladimir Vukicevic wrote: > > Yup, that is correct. I was planning on turning things on yesterday, > but we had some build infrastructure problems preventing me from doing > so. You should be able to flip the pref as mentioned in my original > mail and things should work. > > As far as console is not defined, some of the demos depend on a > non-standard console object always being present; I'll go through and > see about fixing those. You can get a console defined in Fx by opening > up the (work-in-progress) console from the tools menu, though. > > - Vlad > > ----- Original Message ----- > > The shader validator still is not on by default in the Firefox 4 beta > 4 nightly that I installed an hour ago. > > > The majority of the demos in the public wiki do not run in Firefox. > The ones that originate from Apple all give, among other things, a > "console is not defined" error. > > > > > -Mark ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cal...@ Fri Aug 6 18:07:30 2010 From: cal...@ (Mark Callow) Date: Fri, 06 Aug 2010 18:07:30 -0700 Subject: [Public WebGL] Shader validator / demos In-Reply-To: <2114977624.211179.1281141907232.JavaMail.root@cm-mail03.mozilla.org> References: <2114977624.211179.1281141907232.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <4C5CB1D2.3080204@hicorp.co.jp> A network activity indicator is preferable to "Done". At least then I might wait instead of immediately deciding the page is useless and moving on. 
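[Editor's note: the `compileShader()` failure check Vlad mentions above can be done by the page itself. A minimal sketch — `compileShaderChecked` is an illustrative helper name, not an API from this thread; the `gl` calls are standard WebGL:]

```javascript
// Sketch: compile a shader and surface the driver's error log ourselves,
// since the browser won't pop up a dialog on failure.
function compileShaderChecked(gl, type, source) {
  var shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    var log = gl.getShaderInfoLog(shader);   // human-readable compiler output
    gl.deleteShader(shader);
    throw new Error("Shader failed to compile:\n" + log);
  }
  return shader;
}
```

The page can catch this and show its own user-friendly message, which is Vlad's point: only the app knows what to tell its users.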
Regards -Mark On 2010/08/06 17:45, Vladimir Vukicevic wrote: > ... > > The same is true for resource loads; the browser could indicate network traffic is happening when there are any outstanding background requests, but to users that just looks like "this page has never finished loading". Instead, the apps themselves should inform users about what's going on; e.g. gmail's loading screen is a good example. > > - Vlad > > ----- Original Message ----- > >> Thanks Vlad. I did turn on the shader validator via about:config and >> it does work. The error message box that pops up when a shader has an >> error would be meaningless to 99.8% of users. Maybe a heading like >> "This page's WebGL shader has the following errors:" would help. >> >> >> Regards >> >> >> >> -Mark >> >> On 2010/08/06 16:26, Vladimir Vukicevic wrote: >> >> Yup, that is correct. I was planning on turning things on yesterday, >> but we had some build infrastructure problems preventing me from doing >> so. You should be able to flip the pref as mentioned in my original >> mail and things should work. >> >> As far as console is not defined, some of the demos depend on a >> non-standard console object always being present; I'll go through and >> see about fixing those. You can get a console defined in Fx by opening >> up the (work-in-progress) console from the tools menu, though. >> >> - Vlad >> >> ----- Original Message ----- >> >> The shader validator still is not on by default in the Firefox 4 beta >> 4 nightly that I installed an hour ago. >> >> >> The majority of the demos in the public wiki do not run in Firefox. >> The ones that originate from Apple all give, among other things, a >> "console is not defined" error. >> >> >> >> >> -Mark >> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: callow_mark.vcf Type: text/x-vcard Size: 392 bytes Desc: not available URL: From squ...@ Fri Aug 6 22:10:52 2010 From: squ...@ (Chris Long) Date: Fri, 6 Aug 2010 23:10:52 -0600 Subject: [Public WebGL] download/activity indicator In-Reply-To: <4C5C993F.9060400@hicorp.co.jp> References: <4C5C993F.9060400@hicorp.co.jp> Message-ID: This is less of a browser issue and more of an issue for the application writers, who should be providing their own indicators of the progress (or lack of progress) of downloading. The browser really doesn't know if that background download is essential for the 3D to run, or if it's being loaded for use later, or even if it's being streamed on the fly (say, displaying a mesh without textures while still downloading the image data). It's no different from a desktop GL app, which needs to inform its users how things are going as well... the OS doesn't handle it, the app does. The only difference is the addition of network latency on top of disk and memory latency. 2010/8/6 Mark Callow : > When WebGL apps are loading, there are often times when the browser status bar > says "Done" but nothing is apparently happening. E.g., the Spore demo. In the > case of Spore it is downloading the model data. In other cases nothing is > in fact happening because the app is broken. > > This is very user-unfriendly. Can browser vendors make the download/activity > indicator operate when WebGL apps are loading model data? Then if the > browser says "Done" and nothing has appeared, users will know there is a > problem. 
> > Regards > > -Mark > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Sat Aug 7 05:58:46 2010 From: cma...@ (Chris Marrin) Date: Sat, 07 Aug 2010 05:58:46 -0700 Subject: [Public WebGL] download/activity indicator In-Reply-To: <4C5C993F.9060400@hicorp.co.jp> References: <4C5C993F.9060400@hicorp.co.jp> Message-ID: <8EA03D15-4A4D-4837-8C78-DC1380B5305C@apple.com> On Aug 6, 2010, at 4:22 PM, Mark Callow wrote: > When WebGL apps are loading there often be times when the browser status bar says "Done" but nothing is apparently happening. E.g, the Spore demo. In the case of Spore it is downloading the model data. In other cases in fact nothing is happening because the app. is broken. > > This is very user unfriendly. Can browser vendors make the download/activity indicator operate when WebGL apps are loading model data? Then if the browser says "Done" and nothing has appeared, users will know there is a problem. > The Spore model is being downloaded with XHR, which gives plenty of events telling you its progress. But it's up to the author to use these events to give the user feedback. The fact that Vlad is not doing this is just shameful :-) (I don't do it either!) You do see some demos that popup a dialog if they fail to get a WebGL context and that give a busy indicator while data is being loaded. You're starting to see WebGL libraries that have this functionality built in. So in the future I think you'll see the feedback more... ----- ~Chris cmarrin...@ -------------- next part -------------- An HTML attachment was scrubbed... 
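[Editor's note: the XHR progress events Chris mentions are straightforward to wire up. A sketch — `updateIndicator` and `onDone` are hypothetical callbacks supplied by the page, not names from this thread:]

```javascript
// Sketch: download model data with XHR and report progress to the page's
// own loading indicator, rather than relying on the browser chrome.
function fetchModel(xhr, url, updateIndicator, onDone) {
  xhr.open("GET", url, true);
  xhr.onprogress = function (e) {
    if (e.lengthComputable) {
      updateIndicator(e.loaded / e.total);  // fraction downloaded, 0..1
    }
  };
  xhr.onload = function () {
    onDone(xhr.responseText);
  };
  xhr.send(null);
}
// In a page: fetchModel(new XMLHttpRequest(), "model.json", showBar, start);
```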
URL: From gma...@ Sun Aug 8 09:55:16 2010 From: gma...@ (Gregg Tavares (wrk)) Date: Sun, 8 Aug 2010 09:55:16 -0700 Subject: [Public WebGL] Re: texParameteri and texParameterf In-Reply-To: <1703325848.199319.1281054328910.JavaMail.root@cm-mail03.mozilla.org> References: <1703325848.199319.1281054328910.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Thu, Aug 5, 2010 at 5:25 PM, Benoit Jacob wrote: > > > ------------------------------ > > To be honest it sounds saner to me, and it makes my life easier, to just >> let texParameterf reject pnames that are meant to take integers (unless in >> the future something explicitly says it accepts floats) so I think I'll stick with >> that until someone shows me some spec proving me wrong! >> > > Section 6.1.2 defines the state conversion rules for queries. I admit that > it's unclear from the GL 2.0 and GL-ES 2.0 specs what you do when you set > state; however, it's always been my understanding that the same conversion > rules apply. > > -- Kenneth Waters > > Ah, thanks, I hadn't seen this section 6.1.2 about queries. > > So I have to admit that it makes it legitimate to apply the same conversion > rules to setters. > > Since it seems to be the consensus among people here, I'll just follow > your interpretation! > I think the conversion is fine. The i/f just basically says whether you call the i or f function. There is sometimes a difference between functions. For example you can only set a sampler uniform with the i functions, not the f functions. > > Thanks, > Benoit > -------------- next part -------------- An HTML attachment was scrubbed... 
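[Editor's note: Gregg's sampler example in code form. A sketch with illustrative names; the point is that sampler uniforms take the integer variant:]

```javascript
// Sketch: sampler uniforms must be set with the integer functions —
// uniform1i binds a texture unit index to the sampler; uniform1f is an error.
function bindSamplerToUnit(gl, program, name, unit) {
  var loc = gl.getUniformLocation(program, name);
  gl.uniform1i(loc, unit);   // texture unit index, always an integer
  return loc;
}
```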
URL: From gma...@ Sun Aug 8 10:05:40 2010 From: gma...@ (Gregg Tavares (wrk)) Date: Sun, 8 Aug 2010 10:05:40 -0700 Subject: [Public WebGL] OpenGL ES 2.0 driver for desktop In-Reply-To: <4C5C4A0A.4090100@hicorp.co.jp> References: <4C5C4A0A.4090100@hicorp.co.jp> Message-ID: On Fri, Aug 6, 2010 at 10:44 AM, Mark Callow wrote: > Of direct interest to WebGL implementers, and likely of interest to > others on this list, AMD last week at Siggraph announced OpenGL ES 2.0 > drivers for their desktop graphics cards. > > Paradoxically it will probably complicate things for WebGL implementations > running on the desktop that wish to use this driver when available. > Hopefully other desktop graphics vendors will follow suit, thus removing the > complication. > > BTW, these AMD drivers pass the OpenGL ES 2.0 conformance tests. > That's awesome that this is progressing. Unfortunately WebGL is far stricter than the OpenGL ES 2.0 conformance tests, since we want WebGL to support only 2.0 with no extensions. That means the amount of work required to implement WebGL on top of OpenGL vs OpenGL ES 2.0 is really about the same. All enums have to be checked to reject extensions, NPOT textures have to be checked for and 2.0 restrictions enforced even in the presence of GL_OES_texture_npot, buffers have to be checked for out-of-range access on draw calls, shaders have to be checked to ensure they are not using any non-2.0 features, etc... > Regards > > -Mark > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
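[Editor's note: one of the checks Gregg lists — enforcing the ES 2.0 NPOT restrictions — starts with a power-of-two test, a one-liner with bit tricks:]

```javascript
// Sketch: ES 2.0 only guarantees mipmapping and REPEAT wrapping for
// power-of-two texture dimensions, so implementations (and apps) test this.
// A power of two has exactly one bit set, so n & (n - 1) clears it to zero.
function isPowerOfTwo(n) {
  return n > 0 && (n & (n - 1)) === 0;
}
```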
URL: From gma...@ Sun Aug 8 10:12:52 2010 From: gma...@ (Gregg Tavares (wrk)) Date: Sun, 8 Aug 2010 10:12:52 -0700 Subject: [Public WebGL] Shader validator / demos In-Reply-To: <278157373.210905.1281137166620.JavaMail.root@cm-mail03.mozilla.org> References: <4C5C97B5.2050003@hicorp.co.jp> <278157373.210905.1281137166620.JavaMail.root@cm-mail03.mozilla.org> Message-ID: I'll fix these right now On Fri, Aug 6, 2010 at 4:26 PM, Vladimir Vukicevic wrote: > Yup, that is correct. I was planning on turning things on yesterday, but > we had some build infrastructure problems preventing me from doing so. You > should be able to flip the pref as mentioned in my original mail and things > should work. > > As far as console is not defined, some of the demos depend on a > non-standard console object always being present; I'll go through and see > about fixing those. You can get a console defined in Fx by opening up the > (work-in-progress) console from the tools menu, though. > > - Vlad > > ----- Original Message ----- > > The shader validator still is not on by default in the Firefox 4 beta > > 4 nightly that I installed an hour ago. > > > > > > The majority of the demos in the public wiki do not run in Firefox. > > The ones that originate from Apple all give, among other things, a > > "console is not defined" error. > > > > > > > > > > -Mark > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > -------------- next part -------------- An HTML attachment was scrubbed... 
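[Editor's note: the "console is not defined" failures can also be worked around in the demos themselves. A minimal sketch — `safeConsole` is an illustrative name, not from this thread:]

```javascript
// Sketch: return a console-like object that is safe to call even in
// browsers that only define console while developer tools are open.
function safeConsole(host) {
  if (host && host.console && typeof host.console.log === "function") {
    return host.console;           // use the real console when present
  }
  return { log: function () {}, error: function () {} };  // no-op fallback
}
// In a demo: var console = safeConsole(window); console.log("loaded");
```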
URL: From gma...@ Sun Aug 8 10:57:47 2010 From: gma...@ (Gregg Tavares (wrk)) Date: Sun, 8 Aug 2010 10:57:47 -0700 Subject: [Public WebGL] Shader validator / demos In-Reply-To: References: <4C5C97B5.2050003@hicorp.co.jp> <278157373.210905.1281137166620.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Sun, Aug 8, 2010 at 10:12 AM, Gregg Tavares (wrk) wrote: > I'll fix these right now Okay, I removed or refactored all calls to console.log but, all the Webkit samples use TypedArray.set which apparently firefox has not implemented yet? In other words: a = Float32Array(3); a.set([1,2,3]); // exception: a.set is not a function > > > On Fri, Aug 6, 2010 at 4:26 PM, Vladimir Vukicevic wrote: > >> Yup, that is correct. I was planning on turning things on yesterday, but >> we had some build infrastructure problems preventing me from doing so. You >> should be able to flip the pref as mentioned in my original mail and things >> should work. >> >> As far as console is not defined, some of the demos depend on a >> non-standard console object always being present; I'll go through and see >> about fixing those. You can get a console defined in Fx by opening up the >> (work-in-progress) console from the tools menu, though. >> >> - Vlad >> >> ----- Original Message ----- >> > The shader validator still is not on by default in the Firefox 4 beta >> > 4 nightly that I installed an hour ago. >> > >> > >> > The majority of the demos in the public wiki do not run in Firefox. >> > The ones that originate from Apple all give, among other things, a >> > "console is not defined" error. >> > >> > >> > >> > >> > -Mark >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
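[Editor's note: until TypedArray.set landed in Firefox, the WebKit samples could be kept portable with a copy fallback. A sketch — `typedArraySet` is an illustrative helper name:]

```javascript
// Sketch: use the native set() where the engine implements it, otherwise
// fall back to an element-by-element copy.
function typedArraySet(dst, src, offset) {
  offset = offset || 0;
  if (typeof dst.set === "function") {
    dst.set(src, offset);
  } else {
    for (var i = 0; i < src.length; i++) {
      dst[offset + i] = src[i];
    }
  }
  return dst;
}
```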
URL: From ste...@ Sun Aug 8 21:48:12 2010 From: ste...@ (Steve Baker) Date: Sun, 08 Aug 2010 23:48:12 -0500 Subject: [Public WebGL] Why GL 2.x? In-Reply-To: <4C5C4E8E.40804@hicorp.co.jp> References: <4C5C4E8E.40804@hicorp.co.jp> Message-ID: <4C5F888C.9090903@sjbaker.org> If WebGL can be made to work on top of 2.x - then that extends the range of ancient hardware that it'll work on. If we can make that happen without compromise to the new standard - then it's a good thing that people are checking the details. -- Steve Mark Callow wrote: > > In several discussions on this list recently, it has emerged that > people are using GL 2.x drivers and documentation. I am curious why > this is. Is it (a) because it is hard to get the lastest 3.x or 4.x GL > drivers from the desktop vendors. Or (b) is it because GPUs that can > support GL 3.x are still relatively rare in the real world. Or (c) > some other reason? > > Regards > > -Mark > > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Sun Aug 8 22:07:28 2010 From: ste...@ (Steve Baker) Date: Mon, 09 Aug 2010 00:07:28 -0500 Subject: [Public WebGL] download/activity indicator In-Reply-To: <8EA03D15-4A4D-4837-8C78-DC1380B5305C@apple.com> References: <4C5C993F.9060400@hicorp.co.jp> <8EA03D15-4A4D-4837-8C78-DC1380B5305C@apple.com> Message-ID: <4C5F8D10.4020900@sjbaker.org> It's easy enough for the application writer to handle this. WebGL (like OpenGL) is supposed to be a low level API - you have to expect to handle some of these things for yourself. Having the browser tell the user "We're not ready to play yet because stuff is downloading" is a really bad idea. My present app pre-loads things in background while getting the user signed in, choosing what (s)he wants to play, designing an avatar, etc. 
I *REALLY* don't want the browser telling them that they have to wait when I've gone to a hell of a lot of trouble to make a smooth user experience that specifically avoids them having to wait! Doing that makes my application look like it takes longer to start up than it really does. JavaScript/HTTP supports background loading - so the browser can't possibly know when we're done. The job of a low level API like OpenGL/WebGL is to make it possible for carefully written applications to behave nicely - the job of middleware is to make that easy to do for developers who don't have the time to do a proper job of it themselves. Hence, it is entirely inappropriate for the browser/WebGL to tell the user things that aren't necessarily true. It's OK to have some kind of built-in activity indicator - but it shouldn't be over-specific about whether the user should wait or not. Some things have to be left for the application...this is one of them. If your application is going to take more than a couple of seconds to get to the point where the user can interact with it - then it should provide a load indicator of its own. That said - there is room for a layer of middleware on top of WebGL that might provide those kinds of services automatically. -- Steve Chris Marrin wrote: > > On Aug 6, 2010, at 4:22 PM, Mark Callow wrote: > >> When WebGL apps are loading there often be times when the browser >> status bar says "Done" but nothing is apparently happening. E.g, the >> Spore demo. In the case of Spore it is downloading the model data. In >> other cases in fact nothing is happening because the app. is broken. >> >> This is very user unfriendly. Can browser vendors make the >> download/activity indicator operate when WebGL apps are loading model >> data? Then if the browser says "Done" and nothing has appeared, users >> will know there is a problem. >> > The Spore model is being downloaded with XHR, which gives plenty of > events telling you its progress. 
But it's up to the author to use > these events to give the user feedback. The fact that Vlad is not > doing this is just shameful :-) (I don't do it either!) > > You do see some demos that popup a dialog if they fail to get a WebGL > context and that give a busy indicator while data is being loaded. > You're starting to see WebGL libraries that have this functionality > built in. So in the future I think you'll see the feedback more... > > ----- > ~Chris > cmarrin...@ > > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Mon Aug 9 11:13:05 2010 From: ste...@ (ste...@) Date: Mon, 9 Aug 2010 11:13:05 -0700 Subject: [Public WebGL] texImage2D changed... Message-ID: I loaded up last night's Minefield-for-Windows 32 bit build (20100809 Win 4.0b4pre) - and it crashed on loading my WebGL app (which had been working just fine 30 seconds earlier on a nightly build from a few days ago!) :-( In desperation I tried turning *off* the webgl.shader_validator - and doing that got me a decent error message - which was that this line of JavaScript code: gl.texImage2D( gl.TEXTURE_2D, 0, image, true ) ; ...has too few parameters. I can't imagine that the validator would know or care about that - so probably the validator is crashing for some unrelated reason. Anyway - I guess that form of texImage2D got obsoleted somehow. I looked at the WebGL spec and some of the working example code to see how we load images from URL's nowadays...and as a quick test, I tried this: gl.texImage2D ( gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image); That gets my program running again...but now my textures are all screwed up. I'm using PNG and my textures are a mix of RGB's and RGBA's - with the occasional monochrome PNG tossed in for good measure. 
So presumably I need to set the two format parameters according to what's actually in the PNG file. However, that's a major pain in the butt. I certainly don't want to have to write JavaScript code to read the PNG header (libPNG for JavaScript?!?) just to find out whether it's RGB, RGBA, Luminance or Luminance/Alpha. My art path automagically chooses the most efficient PNG format - so it's more or less on the whim of the art guys how my textures show up. This newer form of texImage2D simply isn't very helpful. If we are extending the traditional OpenGL API to include image loading (which I think is "A Good Thing" because parsing image files in client-side JavaScript is painful) - then we should do it properly and have it set the format parameters intelligently by examining the incoming image data rather than having everyone have to write JavaScript code to do that. Am I missing something here? Is there still a way to get WebGL to auto-detect the format? This evidently worked before (like, yesterday!) - so clearly it could be made to work again...I think it should. Thanks. -- Steve ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From gma...@ Mon Aug 9 11:24:37 2010 From: gma...@ (Gregg Tavares (wrk)) Date: Mon, 9 Aug 2010 11:24:37 -0700 Subject: [Public WebGL] texImage2D changed... In-Reply-To: References: Message-ID: On Mon, Aug 9, 2010 at 11:13 AM, wrote: > I loaded up last night's Minefield-for-Windows 32 bit build (20100809 Win > 4.0b4pre) - and it crashed on loading my WebGL app (which had been working > just fine 30 seconds earlier on a nightly build from a few days ago!) 
> > :-( > > In desperation I tried turning *off* the webgl.shader_validator - and > doing that got me a decent error message - which was that this line of > JavaScript code: > > gl.texImage2D( gl.TEXTURE_2D, 0, image, true ) ; > > ...has too few parameters. I can't imagine that the validator would know > or care about that - so probably the validator is crashing for some > unrelated reason. > > Anyway - I guess that form of texImage2D got obsoleted somehow. I looked > at the WebGL spec and some of the working example code to see how we load > images from URL's nowadays...and as a quick test, I tried this: > > gl.texImage2D ( gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, > gl.UNSIGNED_BYTE, image); > > That gets my program running again...but now my textures are all screwed > up. I'm using PNG and my textures are a mix of RGB's and RGBA's - with > the occasional monochrome PNG tossed in for good measure. So presumably I > need to set the two format parameters according to what's actually in the > PNG file. > No, you don't have to set those to match the png file. The WebGL spec details that WebGL will convert the image to the format you specify. The new API is far more flexible than the old one. > > However, that's a major pain in the butt. I certainly don't want to have > to write JavaScript code to read the PNG header (libPNG for JavaScript?!?) > just to find out whether it's RGB, RGBA, Luminance or Luminance/Alpha. My > art path automagically chooses the most efficient PNG format - so it's > more or less on the whim of the art guys how my textures show up. > > This newer form of texImage2D simply isn't very helpful. 
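[Editor's note: Gregg's point — that WebGL converts the image to whatever format the call names — means one fixed call covers RGB, RGBA, and luminance PNGs alike. A sketch of the migrated upload; per the WebGL 1.0 spec, the flip-Y behaviour of the old trailing boolean argument is available as the UNPACK_FLIP_Y_WEBGL pixel-store flag:]

```javascript
// Sketch: new-style texImage2D upload. The browser decodes the PNG and
// converts it to RGBA/UNSIGNED_BYTE regardless of the file's own format.
function uploadImage(gl, image) {
  gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);  // old 4th argument `true`
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
}
```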
If we are > extending the traditional OpenGL API to include image loading (which I > think is "A Good Thing" because parsing image files in client-side > JavaScript is painful) - then we should do it properly and have it set the > format parameters intelligently by examining the incoming image data > rather than having everyone have to write JavaScript code to do that. > > Am I missing something here? Is there still a way to get WebGL to > auto-detect the format? This evidently worked before (like, yesterday!) - > so clearly it could be made to work again...I think it should. > Thanks. > > -- Steve > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Mon Aug 9 11:32:53 2010 From: bja...@ (Benoit Jacob) Date: Mon, 9 Aug 2010 11:32:53 -0700 (PDT) Subject: [Public WebGL] texImage2D changed... In-Reply-To: Message-ID: <129981355.228596.1281378772997.JavaMail.root@cm-mail03.mozilla.org> ----- Original Message ----- > I loaded up last night's Minefield-for-Windows 32 bit build (20100809 > Win > 4.0b4pre) - and it crashed on loading my WebGL app > (which had been > working > just fine 30 seconds earlier on a nightly build from a few days ago!) > > :-( > > In desperation I tried turning *off* the webgl.shader_validator Ah OK, crash in the shader validator, you wouldn't be the only one hitting that: https://bugzilla.mozilla.org/show_bug.cgi?id=585502 > - and > doing that got me a decent error message - which was that this line of > JavaScript code: > > gl.texImage2D( gl.TEXTURE_2D, 0, image, true ) ; So this is a completely separate, unrelated issue. The crash you mentioned above was Firefox's fault; this on the other hand is that your code is still using the old texImage2D API. > > ...has too few parameters. 
I can't imagine that the validator would > know > or care about that - so probably the validator is crashing for some > unrelated reason. Yes. > > Anyway - I guess that form of texImage2D got obsoleted somehow. Yes. Did you read this e-mail that Vlad recently sent to this list? https://www.khronos.org/webgl/public-mailing-list/archives/1007/msg00034.html > I > looked > at the WebGL spec and some of the working example code to see how we > load > images from URL's nowadays...and as a quick test, I tried this: > > gl.texImage2D ( gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, > gl.UNSIGNED_BYTE, image); > > That gets my program running again...but now my textures are all > screwed > up. > I'm using PNG and my textures are a mix of RGB's and RGBA's - with > the occasional monochrome PNG tossed in for good measure. OK, so my guess would be that it's because we in Firefox are currently only supporting RGB and RGBA with 8 bits per channel. Can you try this in Chromium? If it works there, that would confirm it. We are looking at reusing the nice reusable code that Ken wrote there for Chromium. > So > presumably I > need to set the two format parameters according to what's actually in > the > PNG file. No, no - you don't need to. First of all, like in OpenGL ES, the format and internalformat parameters are just required to be equal, so yes, they are redundant. If your image is an actual Image object or an HTML element, then you really don't have to specify the format it's in; the format parameter passed to texImage2D only specifies the format in which you want it to be passed to the GL - the WebGL implementation does the conversion for you. The only reason you'd have to care about the actual format of your image data is if you're passing this data in a plain buffer/array. (Disclaimer: I could be wrong - anyone here who knows better, please correct me.) > Am I missing something here? Is there still a way to get WebGL to > auto-detect the format?
As explained above, it does (except of course if you just pass a plain buffer). Cheers, Benoit ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Mon Aug 9 11:53:55 2010 From: vla...@ (Vladimir Vukicevic) Date: Mon, 9 Aug 2010 11:53:55 -0700 (PDT) Subject: [Public WebGL] texImage2D changed... In-Reply-To: <129981355.228596.1281378772997.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <2100662345.228927.1281380035845.JavaMail.root@cm-mail03.mozilla.org> ----- Original Message ----- > ----- Original Message ----- > > I loaded up last night's Minefield-for-Windows 32 bit build > > (20100809 > > Win > > 4.0b4pre) - and it crashed on loading my WebGL app > > (which had been > > working > > just fine 30 seconds earlier on a nightly build from a few days > > ago!) > > > > :-( > > > > In desperation I tried turning *off* the webgl.shader_validator > > Ah OK, crash in the shader validator, you wouldn't be the only one > hitting that: > https://bugzilla.mozilla.org/show_bug.cgi?id=585502 Yep, sorry about that -- I missed that in testing as I was rushing to get a few things checked in. A fix should be in shortly, and should be fixed in tomorrow's nightly. - Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Mon Aug 9 11:57:58 2010 From: ste...@ (ste...@) Date: Mon, 9 Aug 2010 11:57:58 -0700 Subject: [Public WebGL] texImage2D changed... 
In-Reply-To: References: Message-ID: <6128c95bf048a0edcddf60e0df7573a7.squirrel@webmail.sjbaker.org> > On Mon, Aug 9, 2010 at 11:13 AM, wrote: >> I looked >> at the WebGL spec and some of the working example code to see how we >> load >> images from URL's nowadays...and as a quick test, I tried this: >> >> gl.texImage2D ( gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, >> gl.UNSIGNED_BYTE, image); >> >> That gets my program running again...but now my textures are all screwed >> up. I'm using PNG and my textures are a mix of RGB's and RGBA's - with >> the occasional monochrome PNG tossed in for good measure. So presumably >> I >> need to set the two format parameters according to what's actually in >> the >> PNG file. > > No, you don't have to set those to match the png file. The WebGL spec > details that WebGL will convert the image to the format you specify. The > new > API is far more flexible than the old one. Ah! Thank goodness for that! But the documentation is far from clear - and what I think it says to do doesn't work. The closest I could find to an explanation is: "The source image data is conceptually first converted to the data type and format specified by the format and type arguments, and then transferred to the OpenGL implementation." ...so I suppose it's saying: Actual File Format ==> externalFormat ==> internalFormat Which would suggest that if I unconditionally set both format parameters to gl.RGBA - then it should convert everything to 4 bytes per texel no matter whether my PNG has 1,2,3 or 4 bytes - presumably setting A=1 if the source image is an RGB-only PNG and spreading greyscale PNG's out into full RGBA's. If so, that's kinda wasteful if the file is really a 1 byte luminance-only thing. But perhaps the word "conceptually" in the spec means that it's not REALLY going to allocate 4 bytes per texel - but merely arrange that when the shader reads it, it'll appear as if there were 4 bytes present. 
That would make sense...but it really ought to be clearer on what it'll actually do. If we're planning on making this work on itty-bitty cellphones, we can't afford to waste texture memory - so the spec needs to be really clear on what will happen. The wording should make it clear whether I can be lazy and always say gl.RGBA and rely on the underlying implementation not to waste texture memory - or whether I still have to parse the image file in order to ask for the internalFormat that's efficient for whatever file format I happen to have been handed by my art tools. Also, if the conversion is automatic, then why do I have to provide both an internalFormat and externalFormat parameter? Seems like it should ignore the externalFormat and assume that from the file header. Anyway - in practice, this isn't working. Setting them both to gl.RGBA produces textures that are squashed up (like it's trying to read 4 bytes per texel when there are only 3)...setting them both to gl.RGB produces some other screwed up mess. Is there some magic value I need to use to tell it "Do this automatically"? Bottom line: HELP!! What exactly do I type to get back the behavior I had yesterday? Thanks in Advance... -- Steve ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Mon Aug 9 12:13:28 2010 From: ste...@ (ste...@) Date: Mon, 9 Aug 2010 12:13:28 -0700 Subject: [Public WebGL] texImage2D changed... In-Reply-To: <129981355.228596.1281378772997.JavaMail.root@cm-mail03.mozilla.org> References: <129981355.228596.1281378772997.JavaMail.root@cm-mail03.mozilla.org> Message-ID: >> In desperation I tried turning *off* the webgl.shader_validator > > Ah OK, crash in the shader validator, you wouldn't be the only one hitting > that: > https://bugzilla.mozilla.org/show_bug.cgi?id=585502 Ah - OK - that explains that part! 
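(An aside on the "parse the image file" worry above: if you ever did need to know a PNG's channel layout, it doesn't take a libPNG port - the colour-type byte sits at a fixed offset in the IHDR chunk. A hedged sketch; the helper name is mine:)

```javascript
// A PNG file starts with an 8-byte signature, then the IHDR chunk:
// 4-byte length, "IHDR", width (4), height (4), bit depth (1),
// colour type (1) - so the colour type is always byte 25.
// Colour types: 0 = greyscale, 2 = RGB, 3 = palette,
// 4 = greyscale+alpha, 6 = RGBA.
function pngColorType(bytes) {
  const sig = [0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a];
  if (!sig.every((b, i) => bytes[i] === b)) {
    throw new Error("not a PNG");
  }
  return bytes[25];
}
```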
>> Anyway - I guess that form of texImage2D got obsoleted somehow. > > Yes. Did you read this e-mail that Vlad recently sent to this list? > https://www.khronos.org/webgl/public-mailing-list/archives/1007/msg00034.html Oooohhh! My bad! I did read it - but I got so engrossed in the bit about the validator that I kinda zoned out on the last part. I must remember to get some sleep sometime! > OK so my guess would be that it's because we in Firefox are currently only > supporting RGB and RGBA with 8 bytes per channel. Can you try this in > Chromium, if it works there then that would confirm it. We are looking at > reusing the nice reusable that Ken wrote there for Chromium. Yeah - but it'll have to wait for a day or so - I'm not tracking the Chromium builds currently. >> So presumably I >> need to set the two format parameters according to what's actually in >> the PNG file. > > No no, you don't need to. First of all, like in OpenGL ES, the format and > internalformat parameters are just required to be equal, so yes they are > redundant. > > If your image is an actual Image object or a HTML element, then you really > don't have to specify the format it's in, the format parameter passed to > texImage2D only specifies the format in which you want it to be passed to > the GL; the WebGL implementation does the conversion for you. Right - but am I wasting texture memory if I set the internalFormat to RGBA when my file is really a 1 byte monochrome format? If so, then I guess I still have to know what file format I have in order to set that parameter correctly for efficient memory use? Many thanks! -- Steve ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From bja...@ Mon Aug 9 12:15:51 2010 From: bja...@ (Benoit Jacob) Date: Mon, 9 Aug 2010 12:15:51 -0700 (PDT) Subject: [Public WebGL] texImage2D changed... 
In-Reply-To: <6128c95bf048a0edcddf60e0df7573a7.squirrel@webmail.sjbaker.org> Message-ID: <528271134.229211.1281381351443.JavaMail.root@cm-mail03.mozilla.org> ----- Original Message ----- > > On Mon, Aug 9, 2010 at 11:13 AM, wrote: > > > >> I looked > >> at the WebGL spec and some of the working example code to see how > >> we > >> load > >> images from URL's nowadays...and as a quick test, I tried this: > >> > >> gl.texImage2D ( gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, > >> gl.UNSIGNED_BYTE, image); > >> > >> That gets my program running again...but now my textures are all > >> screwed > >> up. I'm using PNG and my textures are a mix of RGB's and RGBA's - > >> with > >> the occasional monochrome PNG tossed in for good measure. So > >> presumably > >> I > >> need to set the two format parameters according to what's actually > >> in > >> the > >> PNG file. > > > > No, you don't have to set those to match the png file. The WebGL > > spec > > details that WebGL will convert the image to the format you specify. > > The > > new > > API is far more flexible than the old one. > > Ah! Thank goodness for that! But the documentation is far from clear - > and what I think it says to do doesn't work. > > The closest I could find to an explanation is: > > "The source image data is conceptually first converted > to the data type and format specified by the format > and type arguments, and then transferred to the OpenGL > implementation." > > ...so I suppose it's saying: > > Actual File Format ==> externalFormat ==> internalFormat > > Which would suggest that if I unconditionally set both format > parameters > to gl.RGBA - then it should convert everything to 4 bytes per texel no > matter whether my PNG has 1,2,3 or 4 bytes - presumably setting A=1 if > the > source image is an RGB-only PNG and spreading greyscale PNG's out into > full RGBA's. If so, that's kinda wasteful if the file is really a 1 > byte > luminance-only thing. 
> > But perhaps the word "conceptually" in the spec means that it's not > REALLY > going to allocate 4 bytes per texel - but merely arrange that when the > shader reads it, it'll appear as if there were 4 bytes present. That > would make sense...but it really ought to be clearer on what it'll > actually do. If we're planning on making this work on itty-bitty > cellphones, we can't afford to waste texture memory - so the spec > needs to > be really clear on what will happen. > > The wording should make it clear whether I can be lazy and always say > gl.RGBA and rely on the underlying implementation not to waste texture > memory - or whether I still have to parse the image file in order to > ask > for the internalFormat that's efficient for whatever file format I > happen > to have been handed by my art tools. > > Also, if the conversion is automatic, then why do I have to provide > both > an internalFormat and externalFormat parameter? Seems like it should > ignore the externalFormat and assume that from the file header. > > Anyway - in practice, this isn't working. Setting them both to gl.RGBA > produces textures that are squashed up (like it's trying to read 4 > bytes > per texel when there are only 3)...setting them both to gl.RGB > produces > some other screwed up mess. Is there some magic value I need to use to > tell it "Do this automatically"? > > Bottom line: HELP!! What exactly do I type to get back the behavior I > had > yesterday? As I said in my previous e-mail: could you first try in Chromium just to make sure that your issues aren't just caused by Firefox's current lack of support for certain texture formats? Within a couple of weeks we should be there too. Cheers, Benoit > > Thanks in Advance... 
> > -- Steve > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Mon Aug 9 12:22:26 2010 From: kbr...@ (Kenneth Russell) Date: Mon, 9 Aug 2010 12:22:26 -0700 Subject: [Public WebGL] texImage2D changed... In-Reply-To: <6128c95bf048a0edcddf60e0df7573a7.squirrel@webmail.sjbaker.org> References: <6128c95bf048a0edcddf60e0df7573a7.squirrel@webmail.sjbaker.org> Message-ID: On Mon, Aug 9, 2010 at 11:57 AM, wrote: >> On Mon, Aug 9, 2010 at 11:13 AM, wrote: > > >>> I looked >>> at the WebGL spec and some of the working example code to see how we >>> load >>> images from URL's nowadays...and as a quick test, I tried this: >>> >>> gl.texImage2D ( gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, >>> gl.UNSIGNED_BYTE, image); >>> >>> That gets my program running again...but now my textures are all screwed >>> up. I'm using PNG and my textures are a mix of RGB's and RGBA's - with >>> the occasional monochrome PNG tossed in for good measure. So presumably >>> I >>> need to set the two format parameters according to what's actually in >>> the >>> PNG file. >> >> No, you don't have to set those to match the png file. The WebGL spec >> details that WebGL will convert the image to the format you specify. The >> new >> API is far more flexible than the old one. > > Ah! Thank goodness for that! But the documentation is far from clear - > and what I think it says to do doesn't work. > > The closest I could find to an explanation is: > > "The source image data is conceptually first converted > to the data type and format specified by the format > and type arguments, and then transferred to the OpenGL > implementation." > > ...so I suppose it's saying: > > Actual File Format ==> externalFormat ==> internalFormat The reason the spec is worded like this is as follows: if you specify UNSIGNED_SHORT_5_5_5_1 as the type and RGBA as the format, then it would be an error if the WebGL implementation actually preserved all 8 bits per channel. A compliant implementation needs to convert to the 5_5_5_1 packed format before uploading to the GL, so that behavior is reproducible across all platforms. > Which would suggest that if I unconditionally set both format parameters > to gl.RGBA - then it should convert everything to 4 bytes per texel no > matter whether my PNG has 1,2,3 or 4 bytes - presumably setting A=1 if the > source image is an RGB-only PNG and spreading greyscale PNG's out into > full RGBA's. If so, that's kinda wasteful if the file is really a 1 byte > luminance-only thing. > > But perhaps the word "conceptually" in the spec means that it's not REALLY > going to allocate 4 bytes per texel - but merely arrange that when the > shader reads it, it'll appear as if there were 4 bytes present. That > would make sense...but it really ought to be clearer on what it'll > actually do. If we're planning on making this work on itty-bitty > cellphones, we can't afford to waste texture memory - so the spec needs to > be really clear on what will happen. If you want to explicitly use less texture memory then you should consider using UNSIGNED_SHORT_5_6_5 for RGB textures and UNSIGNED_SHORT_5_5_5_1 for RGBA textures. > The wording should make it clear whether I can be lazy and always say > gl.RGBA and rely on the underlying implementation not to waste texture > memory - or whether I still have to parse the image file in order to ask > for the internalFormat that's efficient for whatever file format I happen > to have been handed by my art tools.
> > Also, if the conversion is automatic, then why do I have to provide both > an internalFormat and externalFormat parameter? ?Seems like it should > ignore the externalFormat and assume that from the file header. The function signature was chosen to match the OpenGL signature, and to accommodate present and future semantics of OpenGL ES. > Anyway - in practice, this isn't working. ?Setting them both to gl.RGBA > produces textures that are squashed up (like it's trying to read 4 bytes > per texel when there are only 3)...setting them both to gl.RGB produces > some other screwed up mess. ?Is there some magic value I need to use to > tell it "Do this automatically"? > > Bottom line: HELP!! What exactly do I type to get back the behavior I had > yesterday? As Benoit mentioned, could you please try the top of tree Chromium builds? I am pretty sure they implement the correct semantics for format specification during texture uploads. -Ken > Thanks in Advance... > > ?-- Steve > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Mon Aug 9 12:39:06 2010 From: ste...@ (ste...@) Date: Mon, 9 Aug 2010 12:39:06 -0700 Subject: [Public WebGL] ETC compression method. In-Reply-To: <4C5C4CD6.20109@hicorp.co.jp> References: <4C5C4CD6.20109@hicorp.co.jp> Message-ID: It's great to hear that we'll have a compressed texture format available. The weakest link in doing web-based 3D graphics is the time the textures take to load - so this is a critical issue. Sadly, I have never worked with ETC images (although I know a lot about DXT). 
Does anyone know of a readable online description of how ETC compresses data? I hate to have to figure it out from the 'etcpack' stuff. I presume it's a lossy scheme - and I need to understand how it loses so that I can advise my artists on when they can usefully use compression and when they cannot. eg: Can I compress normal maps without getting horrible artifacting? Is it better to store RGB and A in separate maps rather than trying to compress them together? What is the trade-off between using lower resolution uncompressed maps versus higher resolution compressed maps? A Google search doesn't turn up anything interesting (because 'ETC' matches 'etcetera') and the Wikipedia article on ETC is about one line long! (cf gallons of available documentation on DXTn compression). -- Steve ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Mon Aug 9 12:40:42 2010 From: vla...@ (Vladimir Vukicevic) Date: Mon, 9 Aug 2010 12:40:42 -0700 (PDT) Subject: [Public WebGL] texImage2D changed... In-Reply-To: <528271134.229211.1281381351443.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <1538843621.229458.1281382842634.JavaMail.root@cm-mail03.mozilla.org> I don't really understand why gl.RGBA isn't working, though; there should be no conversion necessary, as internally all images get expanded out to RGBA anyway (currently). So that should work, there should be no format conversion needed at all in this case... Steve, what's the format of the original image that you're loading? 
- Vlad ----- Original Message ----- > ----- Original Message ----- > > > On Mon, Aug 9, 2010 at 11:13 AM, wrote: > > > > > > >> I looked > > >> at the WebGL spec and some of the working example code to see how > > >> we > > >> load > > >> images from URL's nowadays...and as a quick test, I tried this: > > >> > > >> gl.texImage2D ( gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, > > >> gl.UNSIGNED_BYTE, image); > > >> > > >> That gets my program running again...but now my textures are all > > >> screwed > > >> up. I'm using PNG and my textures are a mix of RGB's and RGBA's - > > >> with > > >> the occasional monochrome PNG tossed in for good measure. So > > >> presumably > > >> I > > >> need to set the two format parameters according to what's > > >> actually > > >> in > > >> the > > >> PNG file. > > > > > > No, you don't have to set those to match the png file. The WebGL > > > spec > > > details that WebGL will convert the image to the format you > > > specify. > > > The > > > new > > > API is far more flexible than the old one. > > > > Ah! Thank goodness for that! But the documentation is far from clear > > - > > and what I think it says to do doesn't work. > > > > The closest I could find to an explanation is: > > > > "The source image data is conceptually first converted > > to the data type and format specified by the format > > and type arguments, and then transferred to the OpenGL > > implementation." > > > > ...so I suppose it's saying: > > > > Actual File Format ==> externalFormat ==> internalFormat > > > > Which would suggest that if I unconditionally set both format > > parameters > > to gl.RGBA - then it should convert everything to 4 bytes per texel > > no > > matter whether my PNG has 1,2,3 or 4 bytes - presumably setting A=1 > > if > > the > > source image is an RGB-only PNG and spreading greyscale PNG's out > > into > > full RGBA's. If so, that's kinda wasteful if the file is really a 1 > > byte > > luminance-only thing. 
> > > > But perhaps the word "conceptually" in the spec means that it's not > > REALLY > > going to allocate 4 bytes per texel - but merely arrange that when > > the > > shader reads it, it'll appear as if there were 4 bytes present. That > > would make sense...but it really ought to be clearer on what it'll > > actually do. If we're planning on making this work on itty-bitty > > cellphones, we can't afford to waste texture memory - so the spec > > needs to > > be really clear on what will happen. > > > > The wording should make it clear whether I can be lazy and always > > say > > gl.RGBA and rely on the underlying implementation not to waste > > texture > > memory - or whether I still have to parse the image file in order to > > ask > > for the internalFormat that's efficient for whatever file format I > > happen > > to have been handed by my art tools. > > > > Also, if the conversion is automatic, then why do I have to provide > > both > > an internalFormat and externalFormat parameter? Seems like it should > > ignore the externalFormat and assume that from the file header. > > > > Anyway - in practice, this isn't working. Setting them both to > > gl.RGBA > > produces textures that are squashed up (like it's trying to read 4 > > bytes > > per texel when there are only 3)...setting them both to gl.RGB > > produces > > some other screwed up mess. Is there some magic value I need to use > > to > > tell it "Do this automatically"? > > > > Bottom line: HELP!! What exactly do I type to get back the behavior > > I > > had > > yesterday? > > As I said in my previous e-mail: could you first try in Chromium just > to make sure that your issues aren't just caused by Firefox's current > lack of support for certain texture formats? Within a couple of weeks > we should be there too. > > Cheers, > Benoit > > > > > Thanks in Advance... 
> > > > -- Steve > > > > > > ----------------------------------------------------------- > > You are currently subscribed to public_webgl...@ > > To unsubscribe, send an email to majordomo...@ with > > the following command in the body of your email: > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From gma...@ Mon Aug 9 13:23:05 2010 From: gma...@ (Gregg Tavares (wrk)) Date: Mon, 9 Aug 2010 13:23:05 -0700 Subject: [Public WebGL] texImage2D changed... In-Reply-To: <6128c95bf048a0edcddf60e0df7573a7.squirrel@webmail.sjbaker.org> References: <6128c95bf048a0edcddf60e0df7573a7.squirrel@webmail.sjbaker.org> Message-ID: On Mon, Aug 9, 2010 at 11:57 AM, wrote: > > On Mon, Aug 9, 2010 at 11:13 AM, wrote: > > > >> I looked > >> at the WebGL spec and some of the working example code to see how we > >> load > >> images from URL's nowadays...and as a quick test, I tried this: > >> > >> gl.texImage2D ( gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, > >> gl.UNSIGNED_BYTE, image); > >> > >> That gets my program running again...but now my textures are all screwed > >> up. I'm using PNG and my textures are a mix of RGB's and RGBA's - with > >> the occasional monochrome PNG tossed in for good measure. So presumably > >> I > >> need to set the two format parameters according to what's actually in > >> the > >> PNG file. > > > > No, you don't have to set those to match the png file. The WebGL spec > > details that WebGL will convert the image to the format you specify. The > > new > > API is far more flexible than the old one. > > Ah! Thank goodness for that! 
But the documentation is far from clear - > and what I think it says to do doesn't work. > > The closest I could find to an explanation is: > > "The source image data is conceptually first converted > to the data type and format specified by the format > and type arguments, and then transferred to the OpenGL > implementation." > > ...so I suppose it's saying: > > Actual File Format ==> externalFormat ==> internalFormat > > Which would suggest that if I unconditionally set both format parameters > to gl.RGBA - then it should convert everything to 4 bytes per texel no > matter whether my PNG has 1,2,3 or 4 bytes - presumably setting A=1 if the > source image is an RGB-only PNG and spreading greyscale PNG's out into > full RGBA's. Correct > If so, that's kinda wasteful if the file is really a 1 byte > luminance-only thing. > Well, if you know it's a luminance texture then you can choose GL_LUMINANCE as your format. WebGL couldn't do this automagically because the way channels are provided to the shader is different depending on the format. So, regardless, you have to tell it the format you want, since only you know how you want to use the texture. This also means you can load RGBA images as GL_LUMINANCE, which gives you more flexibility, because many browsers don't support 1-channel textures internally - they always expand to RGBA. So now you can even take a JPG, a GIF, a VIDEO, and convert to 1 channel or convert to RGBA and UNSIGNED_SHORT_5_6_5 or whatever if you want. That's far more flexible than it was. > But perhaps the word "conceptually" in the spec means that it's not REALLY > going to allocate 4 bytes per texel - but merely arrange that when the > shader reads it, it'll appear as if there were 4 bytes present. That > would make sense...but it really ought to be clearer on what it'll > actually do. If we're planning on making this work on itty-bitty > cellphones, we can't afford to waste texture memory - so the spec needs to > be really clear on what will happen.
> > The wording should make it clear whether I can be lazy and always say > gl.RGBA and rely on the underlying implementation not to waste texture > memory - or whether I still have to parse the image file in order to ask > for the internalFormat that's efficient for whatever file format I happen > to have been handed by my art tools. > > Also, if the conversion is automatic, then why do I have to provide both > an internalFormat and externalFormat parameter? Seems like it should > ignore the externalFormat and assume that from the file header. > Because we are following the OpenGL ES 2.0 spec. In a future version of OpenGL ES those parameters will be allowed to be different and OpenGL ES itself will be doing conversions between format and internal_format. > > Anyway - in practice, this isn't working. Setting them both to gl.RGBA > produces textures that are squashed up (like it's trying to read 4 bytes > per texel when there are only 3)...setting them both to gl.RGB produces > some other screwed up mess. Is there some magic value I need to use to > tell it "Do this automatically"? > That sounds like a bug. I've converted all the demos on the wiki and all the conformance test. Most of them required changing gl.texImage2D(target, level, img) to gl.texImage2D(target, level, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img) and that was it. A few others required adding gl.PixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true); But that was it. They all worked. Hope we can figure out what issue is causing trouble for you. > > Bottom line: HELP!! What exactly do I type to get back the behavior I had > yesterday? > > Thanks in Advance... > > -- Steve > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ste...@ Mon Aug 9 18:59:27 2010 From: ste...@ (Steve Baker) Date: Mon, 09 Aug 2010 20:59:27 -0500 Subject: [Public WebGL] texImage2D changed... 
In-Reply-To: References: <6128c95bf048a0edcddf60e0df7573a7.squirrel@webmail.sjbaker.org> Message-ID: <4C60B27F.3060308@sjbaker.org> Gregg Tavares (wrk) wrote: > A few others required adding > gl.PixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true); > > But that was it. They all worked. Hope we can figure out what issue > is causing trouble for you. Aha! That was it - the last parameter of the old call (which I confess, I shamelessly cut/pasted from one of the demos without looking too closely) was flipping the image upside-down (evidently that's needed for PNG images) - when I do a gl.pixelStorei ( gl.UNPACK_FLIP_Y_WEBGL, true ) ; ...everything looks OK. (Most of my textures are heavily atlassed - which makes it hard to see things like that!) Anyway - aside from the known bug with the validator under Windows (which I've temporarily turned off), I'm back with my graphics looking good. I'm still concerned about memory consumption...but maybe the wonders of KTX and ETC will make that moot. **MANY** thanks guys! (My game is gonna need a bigger "Thanks to..." page!) -- Steve ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cal...@ Mon Aug 9 18:02:39 2010 From: cal...@ (Mark Callow) Date: Mon, 09 Aug 2010 18:02:39 -0700 Subject: [Public WebGL] Re: ETC compression method. In-Reply-To: References: <4C5C4CD6.20109@hicorp.co.jp> Message-ID: <4C60A52F.5080408@hicorp.co.jp> Hi Steve, The only public description I know of is in the OES_compressed_ETC1_RGB8_texture extension spec. There are presentations that Jacob made to the Khronos Group which would give you the information you are looking for, but I cannot turn them loose. Jacob is on vacation until near the end of this month. Once he returns I'll ask him if he can release one of his presentations. Note that ETC1 is for RGB. For alpha you'll have to wait for ETC2.
ETC is certainly an unfortunate name from the point of view of web searches.

Regards

-Mark

On 2010/08/09 12:39, steve...@ wrote:
> It's great to hear that we'll have a compressed texture format available.
> The weakest link in doing web-based 3D graphics is the time the textures
> take to load - so this is a critical issue.
>
> Sadly, I have never worked with ETC images (although I know a lot about DXT).
>
> Does anyone know of a readable online description of how ETC compresses
> data? I hate to have to figure it out from the 'etcpack' stuff. I
> presume it's a lossy scheme - and I need to understand how it loses so
> that I can advise my artists on when they can usefully use compression and
> when they cannot. eg: Can I compress normal maps without getting horrible
> artifacting? Is it better to store RGB and A in separate maps rather than
> trying to compress them together? What is the trade-off between using
> lower resolution uncompressed maps versus higher resolution compressed
> maps?
>
> A Google search doesn't turn up anything interesting (because 'ETC'
> matches 'etcetera') and the Wikipedia article on ETC is about one line
> long! (cf gallons of available documentation on DXTn compression).
>
> -- Steve

From ste...@ Tue Aug 10 01:09:44 2010
From: ste...@ (Steve Baker)
Date: Tue, 10 Aug 2010 03:09:44 -0500
Subject: [Public WebGL] Re: ETC compression method.
In-Reply-To: <4C60A52F.5080408@hicorp.co.jp>
References: <4C5C4CD6.20109@hicorp.co.jp> <4C60A52F.5080408@hicorp.co.jp>
Message-ID: <4C610948.3080301@sjbaker.org>

Thanks Mark. That's great info! I can figure it out from there.
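Since both ETC1 and DXT1 pack a 4x4 texel block into 64 bits, the compression ratios discussed in this thread can be sanity-checked with simple block-size arithmetic. A quick check (editorial sketch, not from the original posts):

```javascript
// Bytes per 4x4 texel block, uncompressed vs. compressed.
// Both ETC1 and DXT1 encode one 4x4 block in 64 bits (8 bytes).
const texelsPerBlock = 4 * 4;
const rgb888Bytes = texelsPerBlock * 3;   // 48 bytes of uncompressed 8/8/8 RGB
const rgba8888Bytes = texelsPerBlock * 4; // 64 bytes of uncompressed 8/8/8/8 RGBA
const compressedBytes = 8;                // one 64-bit compressed block

console.log(rgb888Bytes / compressedBytes);   // 6 -> ETC1 is 6x vs. 8/8/8 RGB
console.log(rgba8888Bytes / compressedBytes); // 8 -> DXT1's usual "8x" figure is vs. 32-bit RGBA
```

Note the two ratios use different uncompressed baselines: against plain 8/8/8 RGB, DXT1 is also 6x; the familiar "8x" counts the alpha byte DXT1 can carry.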
So if I'm reading this correctly, basically, ETC1 gives us 6x compression on 8/8/8 RGB data (compared to - for comparison - 8x compression with the DXT1 method) - with two unique 4/4/4 colors or two rather similar 5/5/5 colors per 4x4 texel block - split into the top and bottom (or left and right) halves of the block and some intensity (sort-of) variation between the texels within each half. But there doesn't seem to be any blending of color within the 4x2 block so the color changes will alias rather badly - while intensity variations should be reasonably well represented. I'm guessing it'll look sharper than DXT1 but with poorer color resolution/precision. That should work reasonably well when the image isn't magnified because our eyes don't need color at such high frequency as they need monochrome data. But when the texture is magnified, ETC color is likely to look pretty blocky. I presume ETC1 does a terrible job of encoding normal maps because the per-texel "intensity" variations will just have to be re-normalized out again in the shader - leaving just two 4/4/4 normals per 4x4 block. On paper you'd be way better off just dropping the texture resolution by a factor of two and storing your normals as 5/6/5 (not using compression at all) than using ETC1. It'll be interesting to see what happens in ETC2. -- Steve Mark Callow wrote: > > Hi Steve, > > The only public description I know of is in the > OES_compressed_ETC1_RGB8_texture > > extension spec. There are presentations that Jacob made to the Khronos > Group which would give you the information you are looking for but I > cannot turn them lose. Jacob is on vacation until the near the end of > this month. Once he returns I'll ask him if he can release one of his > presentations. > > Note that ETC1 is for RGB. For alpha you'll have to wait for ETC2. > > ETC is certainly an unfortunate name from the point of view of web > searches. 
> > Regards > > -Mark > > > > On 2010/08/09 12:39, steve...@ wrote: >> It's great to hear that we'll have a compressed texture format available. >> The weakest link in doing web-based 3D graphics is the time the textures >> take to load - so this is a critical issue. >> >> Sadly, I have never worked with ETC images (although I know a lot about DXT). >> >> Does anyone know of a readable online description of how ETC compresses >> data? I hate to have to figure it out from the 'etcpack' stuff. I >> presume it's a lossy scheme - and I need to understand how it loses so >> that I can advise my artists on when they can usefully use compression and >> when they cannot. eg: Can I compress normal maps without getting horrible >> artifacting? Is it better to store RGB and A in separate maps rather than >> trying to compress them together? What is the trade-off between using >> lower resolution uncompressed maps versus higher resolution compressed >> maps? >> >> A Google search doesn't turn up anything interesting (because 'ETC' >> matches 'etcetera') and the Wikipedia article on ETC is about one line >> long! (cf gallons of available documentation on DXTn compression). >> >> -- Steve >> >> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cal...@ Tue Aug 10 11:05:42 2010 From: cal...@ (Mark Callow) Date: Tue, 10 Aug 2010 11:05:42 -0700 Subject: [Public WebGL] download/activity indicator In-Reply-To: <4C5F8D10.4020900@sjbaker.org> References: <4C5C993F.9060400@hicorp.co.jp> <8EA03D15-4A4D-4837-8C78-DC1380B5305C@apple.com> <4C5F8D10.4020900@sjbaker.org> Message-ID: <4C6194F6.4060308@hicorp.co.jp> I was not for a moment suggesting the browser should tell the user "We're not ready to play yet." 
I am simply saying that a blank page with Done written in the status bar is less useful than a blank page with an activity indicator in the status bar. I agree it is ideally the application's responsibility to provide the user with appropriate feedback during loading but many applications fall short, even some written, as Chris wittily pointed out, by people close to WebGL. Such applications could tarnish the WebGL name because people on seeing a blank page and Done will say "this is crap" or "this is broken", move on and then tell their friends WebGL doesn't work. If an activity indicator is showing while the page is blank they may (a) wait a few more seconds and so actually see something and (b) if their patience is exhausted before anything appears, they may be more likely to blame the app rather than WebGL. Having the browser status bar displaying an activity indicator does not detract from any application provided feedback. If the application provides good feedback the user probably won't even notice the browser status bar. Regards -Mark On 2010/08/08 22:07, Steve Baker wrote: > It's easy enough for the application writer to handle this. WebGL (like > OpenGL) is supposed to be a low level API - you have to expect to handle > some of these things for yourself. > > Having the browser tell the user "We're not ready to play yet because > stuff is downloading" is a really bad idea. My present app pre-loads > things in background while getting the user signed in, choosing what > (s)he wants to play, designing an avatar, etc. I *REALLY* don't want > the browser telling them that they have to wait when I've gone to a hell > of a lot of trouble to make a smooth user experience that specifically > avoids them having to wait! Doing that makes my application look like > it takes longer to start up than it really does. JavaScript/HTTP > supports background loading - so the browser can't possibly know when > we're done. 
> > The job of a low level API like OpenGL/WebGL is to make it possible for > carefully written applications to behave nicely - the job of middleware > is to make that easy to do for developers who don't have the time to do > a proper job of it themselves. > > Hence, it is entirely inappropriate for the browser/WebGL to tell the > user things that aren't necessarily true. It's OK to have some kind of > built-in activity indicator - but it shouldn't be over-specific about > whether the user should wait or not. Some things have to be left for > the application...this is one of them. If your application is going to > take more than a couple of seconds to get to the point where the user > can interact with it - then it should provide a load indicator of its own. > > That said - there is room for a layer of middleware on top of WebGL that > might provide those kinds of services automatically. > > -- Steve > > > Chris Marrin wrote: > >> On Aug 6, 2010, at 4:22 PM, Mark Callow wrote: >> >> >>> When WebGL apps are loading there often be times when the browser >>> status bar says "Done" but nothing is apparently happening. E.g, the >>> Spore demo. In the case of Spore it is downloading the model data. In >>> other cases in fact nothing is happening because the app. is broken. >>> >>> This is very user unfriendly. Can browser vendors make the >>> download/activity indicator operate when WebGL apps are loading model >>> data? Then if the browser says "Done" and nothing has appeared, users >>> will know there is a problem. >>> >>> >> The Spore model is being downloaded with XHR, which gives plenty of >> events telling you its progress. But it's up to the author to use >> these events to give the user feedback. The fact that Vlad is not >> doing this is just shameful :-) (I don't do it either!) >> >> You do see some demos that popup a dialog if they fail to get a WebGL >> context and that give a busy indicator while data is being loaded. 
>> You're starting to see WebGL libraries that have this functionality
>> built in. So in the future I think you'll see the feedback more...
>>
>> -----
>> ~Chris
>> cmarrin...@
>
> -----------------------------------------------------------
> You are currently subscribed to public_webgl...@
> To unsubscribe, send an email to majordomo...@ with
> the following command in the body of your email:

From gma...@ Tue Aug 10 11:42:14 2010
From: gma...@ (Gregg Tavares (wrk))
Date: Tue, 10 Aug 2010 11:42:14 -0700
Subject: [Public WebGL] download/activity indicator
In-Reply-To: <4C6194F6.4060308@hicorp.co.jp>
References: <4C5C993F.9060400@hicorp.co.jp> <8EA03D15-4A4D-4837-8C78-DC1380B5305C@apple.com> <4C5F8D10.4020900@sjbaker.org> <4C6194F6.4060308@hicorp.co.jp>
Message-ID:

On Tue, Aug 10, 2010 at 11:05 AM, Mark Callow wrote:
> I was not for a moment suggesting the browser should tell the user "We're
> not ready to play yet." I am simply saying that a blank page with Done
> written in the status bar is less useful than a blank page with an activity
> indicator in the status bar. I agree it is ideally the application's
> responsibility to provide the user with appropriate feedback during loading
> but many applications fall short, even some written, as Chris wittily
> pointed out, by people close to WebGL.
>
> Such applications could tarnish the WebGL name because people on seeing a
> blank page and Done will say "this is crap" or "this is broken", move on and
> then tell their friends WebGL doesn't work.
> If an activity indicator is showing while the page is blank they may (a)
> wait a few more seconds and so actually see something and (b) if their
> patience is exhausted before anything appears, they may be more likely to
> blame the app rather than WebGL.
>
> Having the browser status bar displaying an activity indicator does not
> detract from any application provided feedback. If the application provides
> good feedback the user probably won't even notice the browser status bar.

Unfortunately it would affect lots of existing apps. Email apps are
constantly doing requests behind the scenes. So are chat apps and several
image viewing apps. Whether it's appropriate for that activity to be
displayed to the user is something that only the app can know.

I'll be happy to add progress indicators to any of the demos on the wiki if
no one has any objections. I'll try to add them to any new demos as well.

> Regards
>
> -Mark
>
> On 2010/08/08 22:07, Steve Baker wrote:
> > It's easy enough for the application writer to handle this. WebGL (like
> > OpenGL) is supposed to be a low level API - you have to expect to handle
> > some of these things for yourself.
> >
> > Having the browser tell the user "We're not ready to play yet because
> > stuff is downloading" is a really bad idea. My present app pre-loads
> > things in background while getting the user signed in, choosing what
> > (s)he wants to play, designing an avatar, etc. I *REALLY* don't want
> > the browser telling them that they have to wait when I've gone to a hell
> > of a lot of trouble to make a smooth user experience that specifically
> > avoids them having to wait! Doing that makes my application look like
> > it takes longer to start up than it really does. JavaScript/HTTP
> > supports background loading - so the browser can't possibly know when
> > we're done.
> > The job of a low level API like OpenGL/WebGL is to make it possible for > carefully written applications to behave nicely - the job of middleware > is to make that easy to do for developers who don't have the time to do > a proper job of it themselves. > > Hence, it is entirely inappropriate for the browser/WebGL to tell the > user things that aren't necessarily true. It's OK to have some kind of > built-in activity indicator - but it shouldn't be over-specific about > whether the user should wait or not. Some things have to be left for > the application...this is one of them. If your application is going to > take more than a couple of seconds to get to the point where the user > can interact with it - then it should provide a load indicator of its own. > > That said - there is room for a layer of middleware on top of WebGL that > might provide those kinds of services automatically. > > -- Steve > > > Chris Marrin wrote: > > > On Aug 6, 2010, at 4:22 PM, Mark Callow wrote: > > > > When WebGL apps are loading there often be times when the browser > status bar says "Done" but nothing is apparently happening. E.g, the > Spore demo. In the case of Spore it is downloading the model data. In > other cases in fact nothing is happening because the app. is broken. > > This is very user unfriendly. Can browser vendors make the > download/activity indicator operate when WebGL apps are loading model > data? Then if the browser says "Done" and nothing has appeared, users > will know there is a problem. > > > > The Spore model is being downloaded with XHR, which gives plenty of > events telling you its progress. But it's up to the author to use > these events to give the user feedback. The fact that Vlad is not > doing this is just shameful :-) (I don't do it either!) > > You do see some demos that popup a dialog if they fail to get a WebGL > context and that give a busy indicator while data is being loaded. 
> You're starting to see WebGL libraries that have this functionality
> built in. So in the future I think you'll see the feedback more...
>
> -----
> ~Chris
> cmarrin...@
>
> -----------------------------------------------------------
> You are currently subscribed to public_webgl...@
> To unsubscribe, send an email to majordomo...@ with
> the following command in the body of your email:

From ste...@ Tue Aug 10 13:48:11 2010
From: ste...@ (ste...@)
Date: Tue, 10 Aug 2010 13:48:11 -0700
Subject: [Public WebGL] download/activity indicator
In-Reply-To: References: <4C5C993F.9060400@hicorp.co.jp> <8EA03D15-4A4D-4837-8C78-DC1380B5305C@apple.com> <4C5F8D10.4020900@sjbaker.org> <4C6194F6.4060308@hicorp.co.jp>
Message-ID:

> On Tue, Aug 10, 2010 at 11:05 AM, Mark Callow wrote:
>
>> I was not for a moment suggesting the browser should tell the user "We're
>> not ready to play yet." I am simply saying that a blank page with Done
>> written in the status bar is less useful than a blank page with an
>> activity indicator in the status bar. I agree it is ideally the
>> application's responsibility to provide the user with appropriate
>> feedback during loading but many applications fall short, even some
>> written, as Chris wittily pointed out, by people close to WebGL.
>>
>> Such applications could tarnish the WebGL name because people on seeing
>> a blank page and Done will say "this is crap" or "this is broken", move
>> on and then tell their friends WebGL doesn't work. If an activity
>> indicator is showing while the page is blank they may (a) wait a few
>> more seconds and so actually see something and (b) if their patience is
>> exhausted before anything appears, they may be more likely to blame the
>> app rather than WebGL.
>>
>> Having the browser status bar displaying an activity indicator does not
>> detract from any application provided feedback.
If the application >> provides >> good feedback the user probably won't even notice the browser status >> bar. >> > Unfortunately it would effect lots of existing apps. Email apps are > constantly doing requests behind the scenes. So are chat apps and several > image viewing apps. Whether it's appropriate for that activity to be > displayed to the user is something that only the app can know. > > I'll be happy to add progress indicators to any of the demos on the wiki > if > no one has any objections. I'll try to add them to any new demos as well. But if the 'activity indicator' doesn't necessarily mean "don't worry - the application is just waiting for the network" - then what does it mean? * If the application is doing a bunch of heavy JavaScript - then the indicator won't show that and you'll still see "DONE" when the app is still working. * If the application is still loading stuff while trying to keep the user interested while it does so then the activity indicator will indicate that we're not "DONE" when in fact we are ready for user interaction. * If the application continually streams stuff - then it will never be "DONE" and the browser can't tell whether it is or not. The heart of the problem is that "DONE" is just the wrong word here. Nobody can tell whether the application has finished loading or not...perhaps not even the application itself. IMHO, this is not a WebGL issue. There are at least three or four things the browser could show the user down in the bottom corners of the display: 1) That there is network activity. 2) That JavaScript code is actively executing. 3) That there have been JavaScript fatal errors flagged in the Error Console. 4) That the application has registered mouse and/or keyboard event notification - and that the user might try wiggling the mouse, clicking or typing. But sadly, none of those necessarily means that we're "DONE" or "BROKEN" or "WAITING" - which is what the end user would actually like to know. 
These apply equally to all kinds of pages - not just WebGL. The same exact problem exists for Canvas applications, Flash, HTML5 audio and video tags...you name it! This is a wider problem than WebGL. It's a systemic problem now that browsers are not simply displayers of static content.

The best thing we can do to sell people on the idea that WebGL is "cool" is to launch the new browsers all on the same day to make a media splash and show that there is broad support (except for IE of course). And to announce that with at least a handful of really kick-ass applications - not just spinning teapots - actual working, useful/interesting/compelling things, all nicely polished, tested to death and running on fast, solid servers. Once everyone has seen that it can do super-amazing-cool-stuff, subsequent crappy experiences from poorly-written applications will be squarely blamed on the application author - because everyone will have seen that WebGL does actually work. 3D graphics isn't easy - there WILL be crappy applications - there is nothing we can do about that other than to set a high bar to start with.

-- Steve

-----------------------------------------------------------
You are currently subscribed to public_webgl...@
To unsubscribe, send an email to majordomo...@ with
the following command in the body of your email:

From ala...@ Wed Aug 11 10:05:29 2010
From: ala...@ (ala...@)
Date: Wed, 11 Aug 2010 10:05:29 -0700
Subject: [Public WebGL] Problems with vertexAttrib4fv
Message-ID: <4C62D859.2060205@mechnicality.com>

Hi

I'm seeing a problem with vertexAttrib4fv which manifests on Linux x86_64/ATI Radeon 5770 but not on my WinXP32/GeForce 8800 GTX.

The pseudo code is something like:

    if (! colorVertexAttributeData) {
        int colorIndex = getAttribLocation(shader, "a_color")
        if (colorIndex != -1) {
            disableVertexAttribArray(colorIndex)
            vertexAttrib4fv(colorIndex, defaultColor)
        }
    }

where my shader has:

    attribute vec3 a_color;

in the attribute definitions.
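[The pseudo code above, written out as callable WebGL. A sketch: the attribute name and the calls come from the post; the helper's name, signature and return value are assumptions.]

```javascript
// If the mesh carries no per-vertex color channel, disable the attribute
// array and supply a constant ("current") vertex attribute value instead.
// Returns the attribute's location, or -1 if nothing was done.
function applyDefaultColor(gl, program, hasColorArray, defaultColor) {
  if (hasColorArray) return -1;          // per-vertex data exists; nothing to do
  var colorIndex = gl.getAttribLocation(program, "a_color");
  if (colorIndex !== -1) {
    gl.disableVertexAttribArray(colorIndex); // use the constant, not an array
    gl.vertexAttrib4fv(colorIndex, defaultColor);
  }
  return colorIndex;
}
```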
The purpose behind the code is for the cases where the data doesn't have a color attribute channel (using Max speak) so that it is displayed with a default color, so I use a constant vertex attribute.

This works fine in the win setup, but on the linux box Chrome 6.0.472.25 dev ignores the color and Firefox (3.7 -> 4.0 beta 2) just produces a black image. By "ignore the color" I've set a default value of vec4(0.2,0.2,0.2,1.0) in the shader so I can see if anything is happening. In chrome, the objects are this color rather than the color provided in the vertexAttrib statement.

It seems to me that the likely candidates are either the graphics driver or some issue with WebGL and x86_64 linux.

When there is a color attribute channel in the vertex array then the vertexAttribPointer correctly finds and displays the color on all platforms.

Regards

Alan

-----------------------------------------------------------
You are currently subscribed to public_webgl...@
To unsubscribe, send an email to majordomo...@ with
the following command in the body of your email:

From kbr...@ Wed Aug 11 10:34:53 2010
From: kbr...@ (Kenneth Russell)
Date: Wed, 11 Aug 2010 10:34:53 -0700
Subject: [Public WebGL] Problems with vertexAttrib4fv
In-Reply-To: <4C62D859.2060205@mechnicality.com>
References: <4C62D859.2060205@mechnicality.com>
Message-ID:

On Wed, Aug 11, 2010 at 10:05 AM, alan...@ wrote:
> Hi
>
> I'm seeing a problem with vertexAttrib4fv which manifests on Linux
> x86_64/ATI Radeon 5770 but not on my WinXP32/GeForce 8800 GTX
>
> The pseudo code is something like:
>
>     if (! colorVertexAttributeData) {
>         int colorIndex = getAttribLocation(shader, "a_color")
>         if (colorIndex != -1) {
>             disableVertexAttribArray(colorIndex)
>             vertexAttrib4fv(colorIndex, defaultColor)
>         }
>     }
>
> where my shader has:
>
> attribute vec3 a_color;
>
> in the attribute definitions.
> > The purpose behind the code is for the cases where the data doesn't have a > color attribute channel (using Max speak) so that it is displayed with a > default color, so I use a constant vertex attribute. > > This works fine in the win setup, but on the linux box Chrome 6.0.472.25 dev > ignores the color and Firefox (3.7 -> 4.0 beta 2) just produces a black > image. By "ignore the color" I've set a default value of > vec4(0.2,0.2,0.2,1.0) in the shader so I can see if anything is happening. > In chrome, the objects are this color rather than the color provided in the > vertexAttrib statement. I don't have a really useful suggestion except to please try the top of tree Chromium build rather than the dev channel build. WebGL is still under rapid development. See the WebGL wiki for download instructions. If you can boil this down into a small test case please post it. -Ken > It seems to me that the likely candidates are either the graphics driver or > some issue with WebGL and X86_64 linux. > > When there is a color attribute channel in the vertex attray then the > vertexAttribPointer correctly finds and displays the color on all platforms. 
> Regards
>
> Alan
>
> -----------------------------------------------------------
> You are currently subscribed to public_webgl...@
> To unsubscribe, send an email to majordomo...@ with
> the following command in the body of your email:

-----------------------------------------------------------
You are currently subscribed to public_webgl...@
To unsubscribe, send an email to majordomo...@ with
the following command in the body of your email:

From vla...@ Wed Aug 11 11:21:09 2010
From: vla...@ (Vladimir Vukicevic)
Date: Wed, 11 Aug 2010 11:21:09 -0700 (PDT)
Subject: [Public WebGL] Problems with vertexAttrib4fv
In-Reply-To: Message-ID: <1457501774.250559.1281550869466.JavaMail.root@cm-mail03.mozilla.org>

Hmm, would be interesting to know what attrib location getAttribLocation is returning, though of course, you can't know that :/ I wonder if this is something to do with attribute 0 needing to be an enabled array? Is a_color the first attribute definition in your shader? Though I thought we had error checks for 0 not being an enabled array.

- Vlad

----- Original Message -----
> On Wed, Aug 11, 2010 at 10:05 AM, alan...@ wrote:
> > Hi
> >
> > I'm seeing a problem with vertexAttrib4fv which manifests on Linux
> > x86_64/ATI Radeon 5770 but not on my WinXP32/GeForce 8800 GTX
> >
> > The pseudo code is something like:
> >
> >     if (! colorVertexAttributeData) {
> >         int colorIndex = getAttribLocation(shader, "a_color")
> >         if (colorIndex != -1) {
> >             disableVertexAttribArray(colorIndex)
> >             vertexAttrib4fv(colorIndex, defaultColor)
> >         }
> >     }
> >
> > where my shader has:
> >
> > attribute vec3 a_color;
> >
> > in the attribute definitions.
> >
> > The purpose behind the code is for the cases where the data doesn't
> > have a color attribute channel (using Max speak) so that it is displayed
> > with a default color, so I use a constant vertex attribute.
> > > > This works fine in the win setup, but on the linux box Chrome > > 6.0.472.25 dev > > ignores the color and Firefox (3.7 -> 4.0 beta 2) just produces a > > black > > image. By "ignore the color" I've set a default value of > > vec4(0.2,0.2,0.2,1.0) in the shader so I can see if anything is > > happening. > > In chrome, the objects are this color rather than the color provided > > in the > > vertexAttrib statement. > > I don't have a really useful suggestion except to please try the top > of tree Chromium build rather than the dev channel build. WebGL is > still under rapid development. See the WebGL wiki for download > instructions. > > If you can boil this down into a small test case please post it. > > -Ken > > > It seems to me that the likely candidates are either the graphics > > driver or > > some issue with WebGL and X86_64 linux. > > > > When there is a color attribute channel in the vertex attray then > > the > > vertexAttribPointer correctly finds and displays the color on all > > platforms. 
> > Regards
> >
> > Alan
> >
> > -----------------------------------------------------------
> > You are currently subscribed to public_webgl...@
> > To unsubscribe, send an email to majordomo...@ with
> > the following command in the body of your email:
>
> -----------------------------------------------------------
> You are currently subscribed to public_webgl...@
> To unsubscribe, send an email to majordomo...@ with
> the following command in the body of your email:

-----------------------------------------------------------
You are currently subscribed to public_webgl...@
To unsubscribe, send an email to majordomo...@ with
the following command in the body of your email:

From ala...@ Wed Aug 11 12:37:26 2010
From: ala...@ (Alan Chaney)
Date: Wed, 11 Aug 2010 12:37:26 -0700
Subject: [Public WebGL] Problems with vertexAttrib4fv
In-Reply-To: <1457501774.250559.1281550869466.JavaMail.root@cm-mail03.mozilla.org>
References: <1457501774.250559.1281550869466.JavaMail.root@cm-mail03.mozilla.org>
Message-ID: <4C62FBF6.9020205@mechnicality.com>

Hi Vlad

On 8/11/2010 11:21 AM, Vladimir Vukicevic wrote:
> Hmm, would be interesting to know what attrib location getAttribLocation
> is returning, though of course, you can't know that :/

Well, when I print it out, the attrib index is a number. I'm using the pattern of compiling and linking the shaders and then using the locations thus derived. I've checked, and it seems that on the linux box the attribute index for a_color is 0, but for the windows box the a_color index is 2. This is easily explained in that presumably the graphics driver determines the index during the linking process, and the drivers are ATI and Nvidia respectively. Of course, I could always set the attribute indices before linking, and for example make '0' correspond to 'a_position' which is pretty much always likely to be bound to an array.
I notice that the Nvidia driver appears to have made a_position the 0th index. The actual shader code is:

    attribute vec3 a_position;
    attribute vec3 a_normal;
    attribute vec4 a_color;
    attribute vec3 a_texcoord;

and this is the order that the nvidia driver appears to use.

> I wonder if this is something to do with attribute 0 needing to be an
> enabled array? Is a_color the first attribute definition in your shader?
> Though I thought we had error checks for 0 not being an enabled array.

I don't understand your comment about attribute 0 needing to be an enabled array. It seems from reading the spec that vertex attributes are handled slightly differently in WebGL than ES 2, but basically what I'm doing is to use disableVertexAttribArray to tell webgl that it isn't bound to an array. The only specific reference I can find is:

6.2 Enabled Vertex Attributes and Range Checking

and the way that's written implies to me that if you call enableVertexAttribArray(n) then you need to make sure that n is bound with a vertex attribute array pointer statement. But I'm specifically calling disableVertexAttribArray because it isn't.

My feeling at this point is that I'll rewrite the code to set the attrib locations prior to linking. If I do that, the logical thing is to call disableVertexAttribArray as I create them and assign specific default values (ie make them 'constant attributes') and then when I'm setting up a vertex attrib array specifically enable the ones that I'm using. The nature of my application is such that different objects can use different combinations of attributes. I assume that if I enable them, draw them (drawArrays/drawElements) and then disable them that the GL state machine will handle that properly. Alternatively, I suppose I can track which ones are enabled/disabled and just make sure that as the scenegraph is traversed then the attributes are enabled/disabled according to the needs of the specific scene element.
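[The "set the attrib locations prior to linking" plan above maps onto gl.bindAttribLocation, which only takes effect if called before linkProgram. A sketch using the attribute names from the shader; the setup function itself and its signature are assumptions.]

```javascript
// Pin attribute indices before linking so a_position - which is always
// backed by an enabled array - lands on index 0 on every driver.
function linkWithFixedLocations(gl, program, vertexShader, fragmentShader) {
  gl.attachShader(program, vertexShader);
  gl.attachShader(program, fragmentShader);
  ["a_position", "a_normal", "a_color", "a_texcoord"].forEach(function (name, i) {
    gl.bindAttribLocation(program, i, name); // must precede linkProgram
  });
  gl.linkProgram(program);
  return program;
}
```

After linking, getAttribLocation(program, "a_position") should report 0 on both the ATI and Nvidia drivers, sidestepping the driver-chosen ordering.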
I'll report back but it probably won't be today because I have to do some other things this afternoon. Thanks Alan > - Vlad > > ----- Original Message ----- > >> On Wed, Aug 11, 2010 at 10:05 AM, alan...@ >> wrote: >> >>> Hi >>> >>> I'm seeing a problem with vertexAttrib4fv which manifests on Linux >>> x86_64/ATI Radeon 5770 but not on my WinXP32/GForce 8800 GTX >>> >>> The pseudo code is something like: >>> >>> >>> if (! colorVertexAttributeData) { >>> >>> int colorIndex = getAttribLocation(shader, "a_color") >>> if (colorIndex != -1) { >>> disableVertexAttribArray(colorIndex) >>> vertexAttrib4fv(colorIndex, defaultColor) >>> } >>> } >>> >>> >>> where my shader has: >>> >>> attribute vec3 a_color; >>> >>> in the attribute definitions. >>> >>> The purpose behind the code is for the cases where the data doesn't >>> have a >>> color attribute channel (using Max speak) so that it is displayed >>> with a >>> default color, so I use a constant vertex attribute. >>> >>> This works fine in the win setup, but on the linux box Chrome >>> 6.0.472.25 dev >>> ignores the color and Firefox (3.7 -> 4.0 beta 2) just produces a >>> black >>> image. By "ignore the color" I've set a default value of >>> vec4(0.2,0.2,0.2,1.0) in the shader so I can see if anything is >>> happening. >>> In chrome, the objects are this color rather than the color provided >>> in the >>> vertexAttrib statement. >>> >> I don't have a really useful suggestion except to please try the top >> of tree Chromium build rather than the dev channel build. WebGL is >> still under rapid development. See the WebGL wiki for download >> instructions. >> >> If you can boil this down into a small test case please post it. >> >> -Ken >> >> >>> It seems to me that the likely candidates are either the graphics >>> driver or >>> some issue with WebGL and X86_64 linux. 
>>> When there is a color attribute channel in the vertex array then the
>>> vertexAttribPointer correctly finds and displays the color on all
>>> platforms.
>>>
>>> Regards
>>>
>>> Alan

-----------------------------------------------------------
You are currently subscribed to public_webgl...@
To unsubscribe, send an email to majordomo...@ with
the following command in the body of your email:

From ala...@ Wed Aug 11 12:38:03 2010
From: ala...@ (Alan Chaney)
Date: Wed, 11 Aug 2010 12:38:03 -0700
Subject: [Public WebGL] Problems with vertexAttrib4fv
In-Reply-To:
References: <4C62D859.2060205@mechnicality.com>
Message-ID: <4C62FC1B.6060305@mechnicality.com>

Hi Ken,

Vlad has indicated that it may be because on the linux box the color attribute is coming out as index 0. I'm going to proceed as I've indicated in my reply to him. If it still doesn't work, I'll try and produce a small JavaScript-only test case.

thanks

Alan

On 8/11/2010 10:34 AM, Kenneth Russell wrote:
> On Wed, Aug 11, 2010 at 10:05 AM, alan...@ wrote:
>
>> Hi
>>
>> I'm seeing a problem with vertexAttrib4fv which manifests on Linux
>> x86_64/ATI Radeon 5770 but not on my WinXP32/GForce 8800 GTX
>>
>> The pseudo code is something like:
>>
>>     if (!colorVertexAttributeData) {
>>         int colorIndex = getAttribLocation(shader, "a_color")
>>         if (colorIndex != -1) {
>>             disableVertexAttribArray(colorIndex)
>>             vertexAttrib4fv(colorIndex, defaultColor)
>>         }
>>     }
>>
>> where my shader has:
>>
>>     attribute vec3 a_color;
>>
>> in the attribute definitions.
>>
>> The purpose behind the code is for the cases where the data doesn't have a
>> color attribute channel (using Max speak) so that it is displayed with a
>> default color, so I use a constant vertex attribute.
>>
>> This works fine in the win setup, but on the linux box Chrome 6.0.472.25 dev
>> ignores the color and Firefox (3.7 -> 4.0 beta 2) just produces a black
>> image. By "ignore the color" I've set a default value of
>> vec4(0.2,0.2,0.2,1.0) in the shader so I can see if anything is happening.
>> In Chrome, the objects are this color rather than the color provided in the
>> vertexAttrib statement.
>>
> I don't have a really useful suggestion except to please try the top
> of tree Chromium build rather than the dev channel build. WebGL is
> still under rapid development. See the WebGL wiki for download
> instructions.
>
> If you can boil this down into a small test case please post it.
>
> -Ken
>
>> It seems to me that the likely candidates are either the graphics driver or
>> some issue with WebGL and X86_64 linux.
>>
>> When there is a color attribute channel in the vertex array then the
>> vertexAttribPointer correctly finds and displays the color on all
>> platforms.
>> Regards
>>
>> Alan

-----------------------------------------------------------
You are currently subscribed to public_webgl...@
To unsubscribe, send an email to majordomo...@ with
the following command in the body of your email:

From vla...@ Wed Aug 11 12:54:45 2010
From: vla...@ (Vladimir Vukicevic)
Date: Wed, 11 Aug 2010 12:54:45 -0700 (PDT)
Subject: [Public WebGL] Problems with vertexAttrib4fv
In-Reply-To: <4C62FBF6.9020205@mechnicality.com>
Message-ID: <1981268605.251589.1281556485410.JavaMail.root@cm-mail03.mozilla.org>

Oh right, I keep forgetting attrib indices are (have to be) just numbers. I can't remember where we had this discussion, but I /think/ we decided that this was a driver bug (requiring attrib index 0 to be an enabled array). At least, we didn't add any spec language requiring it, and I see that we don't do any explicit checks for it -- and that there are conformance tests that require a constant vertex attrib 0 to not fail (gl-bind-attrib-location-test.html).

    - Vlad

----- Original Message -----
> Hi Vlad
>
> On 8/11/2010 11:21 AM, Vladimir Vukicevic wrote:
> > Hmm, would be interesting to know what attrib location
> > getAttribLocation is returning, though of course, you can't know
> > that :/
>
> Well, when I print it out, the attrib index is a number. I'm using the
> pattern of compiling and linking the shaders and then using the
> locations thus derived. I've checked, and it seems that on the linux box
> the attribute index for a_color is 0, but for the windows box the
> a_color index is 2.
>
> This is easily explained in that presumably the graphics driver
> determines the index during the linking process, and the drivers are ATI
> and Nvidia respectively.
> > Of course, I could always set the attribute indices before linking, > and > for example make '0' correspond to 'a_position' which is pretty much > always likely to be bound to an array. I notice that the Nvidia driver > appears to have made a_position the 0th index > > The actual shader code is: > > attribute vec3 a_position; > attribute vec3 a_normal; > attribute vec4 a_color; > attribute vec3 a_texcoord; > > and this is the order that the nvidia driver appears to use. > > > I wonder if this is something to do with attribute 0 needing to be > > an enabled array? Is a_color the first attribute definition in your > > shader? Though I thought we had error checks for 0 not being an > > enabled array. > > > I don't understand your comment about attribute 0 needing to be an > enabled array. It seems from reading the spec that vertex attributes > are > handled slightly differently in WebGL than ES 2, but basically what > I'm > doing is to use disableVertexAttribArray to tell webgl that it isn't > bound to an array. > > The only specific reference I can find is: > > 6.2 Enabled Vertex Attributes and Range Checking > > and the way that's written implies to me that if you call > enableVertexAttribArray(n) then you need to make sure that n is bound > with a vertex attribute array pointer statement. But I'm specifically > calling disableVertexAttribArray because it isn't. > > My feeling at this point is that I'll rewrite the code to set the > attrib > locations prior to linking. If I do that, the logical thing is to call > disableVertexAttribArray as I create them and assign specific default > values (ie make them 'constant attributes') and then when I'm setting > up > a vertexatttrib array specifically enable the ones that I'm using. The > nature of my application is such that different objects can use > different combinations of attributes. 
I assume that if I enable them, > draw them (drawArrays/drawElements) and then disable them that the GL > state machine will handle that properly. Alternatively, I suppose I > can > track which ones are enabled/disabled and just make sure that as the > scenegraph is traversed then the attributes are enabled/disabled > according to the needs of the specific scene element. > > I'll report back but it probably won't be today because I have to do > some other things this afternoon. > > > Thanks > > Alan > > > > - Vlad > > > > ----- Original Message ----- > > > >> On Wed, Aug 11, 2010 at 10:05 AM, alan...@ > >> wrote: > >> > >>> Hi > >>> > >>> I'm seeing a problem with vertexAttrib4fv which manifests on Linux > >>> x86_64/ATI Radeon 5770 but not on my WinXP32/GForce 8800 GTX > >>> > >>> The pseudo code is something like: > >>> > >>> > >>> if (! colorVertexAttributeData) { > >>> > >>> int colorIndex = getAttribLocation(shader, "a_color") > >>> if (colorIndex != -1) { > >>> disableVertexAttribArray(colorIndex) > >>> vertexAttrib4fv(colorIndex, defaultColor) > >>> } > >>> } > >>> > >>> > >>> where my shader has: > >>> > >>> attribute vec3 a_color; > >>> > >>> in the attribute definitions. > >>> > >>> The purpose behind the code is for the cases where the data > >>> doesn't > >>> have a > >>> color attribute channel (using Max speak) so that it is displayed > >>> with a > >>> default color, so I use a constant vertex attribute. > >>> > >>> This works fine in the win setup, but on the linux box Chrome > >>> 6.0.472.25 dev > >>> ignores the color and Firefox (3.7 -> 4.0 beta 2) just produces a > >>> black > >>> image. By "ignore the color" I've set a default value of > >>> vec4(0.2,0.2,0.2,1.0) in the shader so I can see if anything is > >>> happening. > >>> In chrome, the objects are this color rather than the color > >>> provided > >>> in the > >>> vertexAttrib statement. 
> >>> > >> I don't have a really useful suggestion except to please try the > >> top > >> of tree Chromium build rather than the dev channel build. WebGL is > >> still under rapid development. See the WebGL wiki for download > >> instructions. > >> > >> If you can boil this down into a small test case please post it. > >> > >> -Ken > >> > >> > >>> It seems to me that the likely candidates are either the graphics > >>> driver or > >>> some issue with WebGL and X86_64 linux. > >>> > >>> When there is a color attribute channel in the vertex attray then > >>> the > >>> vertexAttribPointer correctly finds and displays the color on all > >>> platforms. > >>> > >>> Regards > >>> > >>> Alan > >>> > >>> > >>> > >>> > >>> > >>> > >>> ----------------------------------------------------------- > >>> You are currently subscribed to public_webgl...@ > >>> To unsubscribe, send an email to majordomo...@ with > >>> the following command in the body of your email: > >>> > >>> > >>> > >> ----------------------------------------------------------- > >> You are currently subscribed to public_webgl...@ > >> To unsubscribe, send an email to majordomo...@ with > >> the following command in the body of your email: > >> > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ala...@ Wed Aug 11 13:04:17 2010 From: ala...@ (Alan Chaney) Date: Wed, 11 Aug 2010 13:04:17 -0700 Subject: [Public WebGL] Problems with vertexAttrib4fv In-Reply-To: <1981268605.251589.1281556485410.JavaMail.root@cm-mail03.mozilla.org> References: <1981268605.251589.1281556485410.JavaMail.root@cm-mail03.mozilla.org> Message-ID: 
<4C630241.5090505@mechnicality.com>

Thanks Vlad,

Well, that would explain things. Looks like application writers will have to beware of this one. I'll modify my code as I outlined below and see if that fixes it on the Linux/ATI box. With my Khronos member hat on, it looks like there will have to be work by some of the driver manufacturers if they want to claim that their drivers are WebGL compliant.

Regards

Alan

On 8/11/2010 12:54 PM, Vladimir Vukicevic wrote:
> Oh right, I keep forgetting attrib indices are (have to be) just numbers. I can't remember where we had this discussion, but I /think/ we decided that this was a driver bug (requiring attrib index 0 to be an enabled array). At least, we didn't add any spec language requiring it, and I see that we don't do any explicit checks for it -- and that there are conformance tests that require a constant vertex attrib 0 to not fail (gl-bind-attrib-location-test.html).
>
> - Vlad
>
> ----- Original Message -----
>
>> Hi Vlad
>>
>> On 8/11/2010 11:21 AM, Vladimir Vukicevic wrote:
>>
>>> Hmm, would be interesting to know what attrib location
>>> getAttribLocation is returning, though of course, you can't know
>>> that :/
>>
>> Well, when I print it out, the attrib index is a number. I'm using the
>> pattern of compiling and linking the shaders and then using the
>> locations thus derived. I've checked, and it seems that on the linux box
>> the attribute index for a_color is 0, but for the windows box the
>> a_color index is 2.
>>
>> This is easily explained in that presumably the graphics driver
>> determines the index during the linking process, and the drivers are ATI
>> and Nvidia respectively.
>>
>> Of course, I could always set the attribute indices before linking, and
>> for example make '0' correspond to 'a_position', which is pretty much
>> always likely to be bound to an array.
I notice that the Nvidia driver >> appears to have made a_position the 0th index >> >> The actual shader code is: >> >> attribute vec3 a_position; >> attribute vec3 a_normal; >> attribute vec4 a_color; >> attribute vec3 a_texcoord; >> >> and this is the order that the nvidia driver appears to use. >> >> >>> I wonder if this is something to do with attribute 0 needing to be >>> an enabled array? Is a_color the first attribute definition in your >>> shader? Though I thought we had error checks for 0 not being an >>> enabled array. >>> >>> >> I don't understand your comment about attribute 0 needing to be an >> enabled array. It seems from reading the spec that vertex attributes >> are >> handled slightly differently in WebGL than ES 2, but basically what >> I'm >> doing is to use disableVertexAttribArray to tell webgl that it isn't >> bound to an array. >> >> The only specific reference I can find is: >> >> 6.2 Enabled Vertex Attributes and Range Checking >> >> and the way that's written implies to me that if you call >> enableVertexAttribArray(n) then you need to make sure that n is bound >> with a vertex attribute array pointer statement. But I'm specifically >> calling disableVertexAttribArray because it isn't. >> >> My feeling at this point is that I'll rewrite the code to set the >> attrib >> locations prior to linking. If I do that, the logical thing is to call >> disableVertexAttribArray as I create them and assign specific default >> values (ie make them 'constant attributes') and then when I'm setting >> up >> a vertexatttrib array specifically enable the ones that I'm using. The >> nature of my application is such that different objects can use >> different combinations of attributes. I assume that if I enable them, >> draw them (drawArrays/drawElements) and then disable them that the GL >> state machine will handle that properly. 
Alternatively, I suppose I >> can >> track which ones are enabled/disabled and just make sure that as the >> scenegraph is traversed then the attributes are enabled/disabled >> according to the needs of the specific scene element. >> >> I'll report back but it probably won't be today because I have to do >> some other things this afternoon. >> >> >> Thanks >> >> Alan >> >> >> >>> - Vlad >>> >>> ----- Original Message ----- >>> >>> >>>> On Wed, Aug 11, 2010 at 10:05 AM, alan...@ >>>> wrote: >>>> >>>> >>>>> Hi >>>>> >>>>> I'm seeing a problem with vertexAttrib4fv which manifests on Linux >>>>> x86_64/ATI Radeon 5770 but not on my WinXP32/GForce 8800 GTX >>>>> >>>>> The pseudo code is something like: >>>>> >>>>> >>>>> if (! colorVertexAttributeData) { >>>>> >>>>> int colorIndex = getAttribLocation(shader, "a_color") >>>>> if (colorIndex != -1) { >>>>> disableVertexAttribArray(colorIndex) >>>>> vertexAttrib4fv(colorIndex, defaultColor) >>>>> } >>>>> } >>>>> >>>>> >>>>> where my shader has: >>>>> >>>>> attribute vec3 a_color; >>>>> >>>>> in the attribute definitions. >>>>> >>>>> The purpose behind the code is for the cases where the data >>>>> doesn't >>>>> have a >>>>> color attribute channel (using Max speak) so that it is displayed >>>>> with a >>>>> default color, so I use a constant vertex attribute. >>>>> >>>>> This works fine in the win setup, but on the linux box Chrome >>>>> 6.0.472.25 dev >>>>> ignores the color and Firefox (3.7 -> 4.0 beta 2) just produces a >>>>> black >>>>> image. By "ignore the color" I've set a default value of >>>>> vec4(0.2,0.2,0.2,1.0) in the shader so I can see if anything is >>>>> happening. >>>>> In chrome, the objects are this color rather than the color >>>>> provided >>>>> in the >>>>> vertexAttrib statement. >>>>> >>>>> >>>> I don't have a really useful suggestion except to please try the >>>> top >>>> of tree Chromium build rather than the dev channel build. WebGL is >>>> still under rapid development. 
See the WebGL wiki for download >>>> instructions. >>>> >>>> If you can boil this down into a small test case please post it. >>>> >>>> -Ken >>>> >>>> >>>> >>>>> It seems to me that the likely candidates are either the graphics >>>>> driver or >>>>> some issue with WebGL and X86_64 linux. >>>>> >>>>> When there is a color attribute channel in the vertex attray then >>>>> the >>>>> vertexAttribPointer correctly finds and displays the color on all >>>>> platforms. >>>>> >>>>> Regards >>>>> >>>>> Alan >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> ----------------------------------------------------------- >>>>> You are currently subscribed to public_webgl...@ >>>>> To unsubscribe, send an email to majordomo...@ with >>>>> the following command in the body of your email: >>>>> >>>>> >>>>> >>>>> >>>> ----------------------------------------------------------- >>>> You are currently subscribed to public_webgl...@ >>>> To unsubscribe, send an email to majordomo...@ with >>>> the following command in the body of your email: >>>> >>>> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Wed Aug 11 13:37:50 2010 From: kbr...@ (Kenneth Russell) Date: Wed, 11 Aug 2010 13:37:50 -0700 Subject: [Public WebGL] Problems with vertexAttrib4fv In-Reply-To: <4C630241.5090505@mechnicality.com> References: <1981268605.251589.1281556485410.JavaMail.root@cm-mail03.mozilla.org> <4C630241.5090505@mechnicality.com> Message-ID: Current Chromium builds, at least, should be emulating the OpenGL ES 2.0 behavior for vertex attribute 0. You should not need to modify your application code. 
Any other behavior of a WebGL implementation is not spec compliant and a bug should be filed with the appropriate browser vendor.

-Ken

On Wed, Aug 11, 2010 at 1:04 PM, Alan Chaney wrote:
> Thanks Vlad,
>
> Well, that would explain things. Looks like application writers will have to
> beware of this one. I'll modify my code as I outlined below and see if that
> fixes it on the Linux/ATI box. With my Khronos member hat on it looks like
> there will have to be work by some of the driver manufacturers if they want
> to claim that their drivers are WebGL compliant.
>
> Regards
>
> Alan
>
> On 8/11/2010 12:54 PM, Vladimir Vukicevic wrote:
>> Oh right, I keep forgetting attrib indices are (have to be) just numbers.
>> I can't remember where we had this discussion, but I /think/ we decided
>> that this was a driver bug (requiring attrib index 0 to be an enabled
>> array). At least, we didn't add any spec language requiring it, and I see
>> that we don't do any explicit checks for it -- and that there are
>> conformance tests that require a constant vertex attrib 0 to not fail
>> (gl-bind-attrib-location-test.html).
>>
>> - Vlad
>>
>> ----- Original Message -----
>>
>>> Hi Vlad
>>>
>>> On 8/11/2010 11:21 AM, Vladimir Vukicevic wrote:
>>>
>>>> Hmm, would be interesting to know what attrib location
>>>> getAttribLocation is returning, though of course, you can't know
>>>> that :/
>>>
>>> Well, when I print it out, the attrib index is a number. I'm using the
>>> pattern of compiling and linking the shaders and then using the
>>> locations thus derived. I've checked, and it seems that on the linux box
>>> the attribute index for a_color is 0, but for the windows box the
>>> a_color index is 2.
>>>
>>> This is easily explained in that presumably the graphics driver
>>> determines the index during the linking process, and the drivers are ATI
>>> and Nvidia respectively.
>>> >>> Of course, I could always set the attribute indices before linking, >>> and >>> for example make '0' correspond to 'a_position' which is pretty much >>> always likely to be bound to an array. I notice that the Nvidia driver >>> appears to have made a_position the 0th index >>> >>> The actual shader code is: >>> >>> attribute vec3 a_position; >>> attribute vec3 a_normal; >>> attribute vec4 a_color; >>> attribute vec3 a_texcoord; >>> >>> and this is the order that the nvidia driver appears to use. >>> >>> >>>> >>>> I wonder if this is something to do with attribute 0 needing to be >>>> an enabled array? Is a_color the first attribute definition in your >>>> shader? Though I thought we had error checks for 0 not being an >>>> enabled array. >>>> >>>> >>> >>> I don't understand your comment about attribute 0 needing to be an >>> enabled array. It seems from reading the spec that vertex attributes >>> are >>> handled slightly differently in WebGL than ES 2, but basically what >>> I'm >>> doing is to use disableVertexAttribArray to tell webgl that it isn't >>> bound to an array. >>> >>> The only specific reference I can find is: >>> >>> 6.2 Enabled Vertex Attributes and Range Checking >>> >>> and the way that's written implies to me that if you call >>> enableVertexAttribArray(n) then you need to make sure that n is bound >>> with a vertex attribute array pointer statement. But I'm specifically >>> calling disableVertexAttribArray because it isn't. >>> >>> My feeling at this point is that I'll rewrite the code to set the >>> attrib >>> locations prior to linking. If I do that, the logical thing is to call >>> disableVertexAttribArray as I create them and assign specific default >>> values (ie make them 'constant attributes') and then when I'm setting >>> up >>> a vertexatttrib array specifically enable the ones that I'm using. The >>> nature of my application is such that different objects can use >>> different combinations of attributes. 
I assume that if I enable them, >>> draw them (drawArrays/drawElements) and then disable them that the GL >>> state machine will handle that properly. Alternatively, I suppose I >>> can >>> track which ones are enabled/disabled and just make sure that as the >>> scenegraph is traversed then the attributes are enabled/disabled >>> according to the needs of the specific scene element. >>> >>> I'll report back but it probably won't be today because I have to do >>> some other things this afternoon. >>> >>> >>> Thanks >>> >>> Alan >>> >>> >>> >>>> >>>> ? ? ?- Vlad >>>> >>>> ----- Original Message ----- >>>> >>>> >>>>> >>>>> On Wed, Aug 11, 2010 at 10:05 AM, alan...@ >>>>> ?wrote: >>>>> >>>>> >>>>>> >>>>>> Hi >>>>>> >>>>>> I'm seeing a problem with vertexAttrib4fv which manifests on Linux >>>>>> x86_64/ATI Radeon 5770 but not on my WinXP32/GForce 8800 GTX >>>>>> >>>>>> The pseudo code is something like: >>>>>> >>>>>> >>>>>> ? ? if (! colorVertexAttributeData) { >>>>>> >>>>>> ? ? ? ? ?int colorIndex = getAttribLocation(shader, "a_color") >>>>>> ? ? ? ? ?if (colorIndex != -1) { >>>>>> ? ? ? ? ? ? ? ? ?disableVertexAttribArray(colorIndex) >>>>>> ? ? ? ? ? ? ? ? ?vertexAttrib4fv(colorIndex, defaultColor) >>>>>> ? ? ? ? ?} >>>>>> ? ?} >>>>>> >>>>>> >>>>>> where my shader has: >>>>>> >>>>>> attribute vec3 a_color; >>>>>> >>>>>> in the attribute definitions. >>>>>> >>>>>> The purpose behind the code is for the cases where the data >>>>>> doesn't >>>>>> have a >>>>>> color attribute channel (using Max speak) so that it is displayed >>>>>> with a >>>>>> default color, so I use a constant vertex attribute. >>>>>> >>>>>> This works fine in the win setup, but on the linux box Chrome >>>>>> 6.0.472.25 dev >>>>>> ignores the color and Firefox (3.7 -> ?4.0 beta 2) just produces a >>>>>> black >>>>>> image. By "ignore the color" I've set a default value of >>>>>> vec4(0.2,0.2,0.2,1.0) in the shader so I can see if anything is >>>>>> happening. 
>>>>>> In chrome, the objects are this color rather than the color >>>>>> provided >>>>>> in the >>>>>> vertexAttrib statement. >>>>>> >>>>>> >>>>> >>>>> I don't have a really useful suggestion except to please try the >>>>> top >>>>> of tree Chromium build rather than the dev channel build. WebGL is >>>>> still under rapid development. See the WebGL wiki for download >>>>> instructions. >>>>> >>>>> If you can boil this down into a small test case please post it. >>>>> >>>>> -Ken >>>>> >>>>> >>>>> >>>>>> >>>>>> It seems to me that the likely candidates are either the graphics >>>>>> driver or >>>>>> some issue with WebGL and X86_64 linux. >>>>>> >>>>>> When there is a color attribute channel in the vertex attray then >>>>>> the >>>>>> vertexAttribPointer correctly finds and displays the color on all >>>>>> platforms. >>>>>> >>>>>> Regards >>>>>> >>>>>> Alan >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> ----------------------------------------------------------- >>>>>> You are currently subscribed to public_webgl...@ >>>>>> To unsubscribe, send an email to majordomo...@ with >>>>>> the following command in the body of your email: >>>>>> >>>>>> >>>>>> >>>>>> >>>>> >>>>> ----------------------------------------------------------- >>>>> You are currently subscribed to public_webgl...@ >>>>> To unsubscribe, send an email to majordomo...@ with >>>>> the following command in the body of your email: >>>>> >>>>> >>> >>> ----------------------------------------------------------- >>> You are currently subscribed to public_webgl...@ >>> To unsubscribe, send an email to majordomo...@ with >>> the following command in the body of your email: >>> > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To 
unsubscribe, send an email to majordomo...@ with
the following command in the body of your email:

From gma...@ Wed Aug 11 13:56:45 2010
From: gma...@ (Gregg Tavares (wrk))
Date: Wed, 11 Aug 2010 13:56:45 -0700
Subject: [Public WebGL] Problems with vertexAttrib4fv
In-Reply-To: <4C630241.5090505@mechnicality.com>
References: <1981268605.251589.1281556485410.JavaMail.root@cm-mail03.mozilla.org> <4C630241.5090505@mechnicality.com>
Message-ID:

Hi Alan,

Can you please tell me if these tests PASS for you?

https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/conformance/gl-vertex-attrib.html
https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/conformance/gl-bind-attrib-location-test.html
https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/conformance/gl-object-get-calls.html

(Note: on top-of-tree Chromium (6.0.491.0 (55567)), 3 of the tests in this last one don't pass at this time, but those are not related to the issue you are seeing.)

If the rest of those tests don't pass for you, then I can hopefully use them to track down the issue. Otherwise, if they do pass, please make a small sample and send it to me, and I'll be happy to try to track down the issue.

On Wed, Aug 11, 2010 at 1:04 PM, Alan Chaney wrote:
> Thanks Vlad,
>
> Well, that would explain things. Looks like application writers will have
> to beware of this one. I'll modify my code as I outlined below and see if
> that fixes it on the Linux/ATI box. With my Khronos member hat on it looks
> like there will have to be work by some of the driver manufacturers if they
> want to claim that their drivers are WebGL compliant.
>
> Regards
>
> Alan
>
> On 8/11/2010 12:54 PM, Vladimir Vukicevic wrote:
>
>> Oh right, I keep forgetting attrib indices are (have to be) just numbers.
>> I can't remember where we had this discussion, but I /think/ we decided
>> that this was a driver bug (requiring attrib index 0 to be an enabled
>> array).
At least, we didn't add any spec language requiring it, and I see >> that we don't do any explicit checks for it -- and that there are >> conformance tests that require a constant vertex attrib 0 to not fail >> (gl-bind-attrib-location-test.html). >> >> - Vlad >> >> ----- Original Message ----- >> >> >>> Hi Vlad >>> >>> On 8/11/2010 11:21 AM, Vladimir Vukicevic wrote: >>> >>> >>>> Hmm, would be interesting to know what attrib location >>>> getAttribLocation is returning, though of course, you can't know >>>> that :/ >>>> >>>> >>>> >>> Well, when I print it out, the attrib index is a number . I'm using >>> the >>> pattern of compiling and linking the shaders and then using the >>> locations thus derived. I've checked, and it seems that on the linux >>> box >>> the attribute index for a_color is 0, but for the windows box the >>> a_color index is 2. >>> >>> This is easily explained in that presumably the graphics driver >>> determines the index during the linking process, and the drivers are >>> ATI >>> and Nvidia respectively. >>> >>> Of course, I could always set the attribute indices before linking, >>> and >>> for example make '0' correspond to 'a_position' which is pretty much >>> always likely to be bound to an array. I notice that the Nvidia driver >>> appears to have made a_position the 0th index >>> >>> The actual shader code is: >>> >>> attribute vec3 a_position; >>> attribute vec3 a_normal; >>> attribute vec4 a_color; >>> attribute vec3 a_texcoord; >>> >>> and this is the order that the nvidia driver appears to use. >>> >>> >>> >>>> I wonder if this is something to do with attribute 0 needing to be >>>> an enabled array? Is a_color the first attribute definition in your >>>> shader? Though I thought we had error checks for 0 not being an >>>> enabled array. >>>> >>>> >>>> >>> I don't understand your comment about attribute 0 needing to be an >>> enabled array. 
It seems from reading the spec that vertex attributes >>> are >>> handled slightly differently in WebGL than ES 2, but basically what >>> I'm >>> doing is to use disableVertexAttribArray to tell webgl that it isn't >>> bound to an array. >>> >>> The only specific reference I can find is: >>> >>> 6.2 Enabled Vertex Attributes and Range Checking >>> >>> and the way that's written implies to me that if you call >>> enableVertexAttribArray(n) then you need to make sure that n is bound >>> with a vertex attribute array pointer statement. But I'm specifically >>> calling disableVertexAttribArray because it isn't. >>> >>> My feeling at this point is that I'll rewrite the code to set the >>> attrib >>> locations prior to linking. If I do that, the logical thing is to call >>> disableVertexAttribArray as I create them and assign specific default >>> values (ie make them 'constant attributes') and then when I'm setting >>> up >>> a vertexatttrib array specifically enable the ones that I'm using. The >>> nature of my application is such that different objects can use >>> different combinations of attributes. I assume that if I enable them, >>> draw them (drawArrays/drawElements) and then disable them that the GL >>> state machine will handle that properly. Alternatively, I suppose I >>> can >>> track which ones are enabled/disabled and just make sure that as the >>> scenegraph is traversed then the attributes are enabled/disabled >>> according to the needs of the specific scene element. >>> >>> I'll report back but it probably won't be today because I have to do >>> some other things this afternoon. 
>>> >>> >>> Thanks >>> >>> Alan >>> >>> >>> >>> >>>> - Vlad >>>> >>>> ----- Original Message ----- >>>> >>>> >>>> >>>>> On Wed, Aug 11, 2010 at 10:05 AM, alan...@ >>>>> wrote: >>>>> >>>>> >>>>> >>>>>> Hi >>>>>> >>>>>> I'm seeing a problem with vertexAttrib4fv which manifests on Linux >>>>>> x86_64/ATI Radeon 5770 but not on my WinXP32/GForce 8800 GTX >>>>>> >>>>>> The pseudo code is something like: >>>>>> >>>>>> >>>>>> if (! colorVertexAttributeData) { >>>>>> >>>>>> int colorIndex = getAttribLocation(shader, "a_color") >>>>>> if (colorIndex != -1) { >>>>>> disableVertexAttribArray(colorIndex) >>>>>> vertexAttrib4fv(colorIndex, defaultColor) >>>>>> } >>>>>> } >>>>>> >>>>>> >>>>>> where my shader has: >>>>>> >>>>>> attribute vec3 a_color; >>>>>> >>>>>> in the attribute definitions. >>>>>> >>>>>> The purpose behind the code is for the cases where the data >>>>>> doesn't >>>>>> have a >>>>>> color attribute channel (using Max speak) so that it is displayed >>>>>> with a >>>>>> default color, so I use a constant vertex attribute. >>>>>> >>>>>> This works fine in the win setup, but on the linux box Chrome >>>>>> 6.0.472.25 dev >>>>>> ignores the color and Firefox (3.7 -> 4.0 beta 2) just produces a >>>>>> black >>>>>> image. By "ignore the color" I've set a default value of >>>>>> vec4(0.2,0.2,0.2,1.0) in the shader so I can see if anything is >>>>>> happening. >>>>>> In chrome, the objects are this color rather than the color >>>>>> provided >>>>>> in the >>>>>> vertexAttrib statement. >>>>>> >>>>>> >>>>>> >>>>> I don't have a really useful suggestion except to please try the >>>>> top >>>>> of tree Chromium build rather than the dev channel build. WebGL is >>>>> still under rapid development. See the WebGL wiki for download >>>>> instructions. >>>>> >>>>> If you can boil this down into a small test case please post it. 
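For readers following along, the fallback Alan describes can be sketched as a small WebGL helper. This is a minimal sketch, not code from the thread: the `a_color` lookup mirrors the pseudo code quoted above, while the function name and return value are illustrative additions.

```javascript
// Use a constant (non-array) vertex attribute as a fallback color when
// the mesh has no per-vertex color channel. Assumes `gl` is a WebGL
// rendering context and `program` is an already linked WebGLProgram.
// (Illustrative helper; names are not from the original post.)
function applyDefaultColor(gl, program, defaultColor) {
  var colorIndex = gl.getAttribLocation(program, "a_color");
  if (colorIndex !== -1) {
    // Tell GL this attribute is not sourced from an enabled array...
    gl.disableVertexAttribArray(colorIndex);
    // ...and supply one constant value that applies to every vertex.
    gl.vertexAttrib4fv(colorIndex, defaultColor);
  }
  return colorIndex;
}
```

Note the thread's caveat: if the linker happens to assign `a_color` index 0, some desktop GL drivers expect attribute 0 to be an enabled array, which is exactly the cross-platform difference being debugged here.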
>>>>> -Ken >>>>> >>>>>> It seems to me that the likely candidates are either the graphics >>>>>> driver or >>>>>> some issue with WebGL and x86_64 Linux. >>>>>> >>>>>> When there is a color attribute channel in the vertex array then >>>>>> the >>>>>> vertexAttribPointer correctly finds and displays the color on all >>>>>> platforms. >>>>>> >>>>>> Regards >>>>>> >>>>>> Alan >>>>>> ----------------------------------------------------------- >>>>>> You are currently subscribed to public_webgl...@ >>>>>> To unsubscribe, send an email to majordomo...@ with >>>>>> the following command in the body of your email: From vla...@ Wed Aug 11 14:14:10 2010 From: vla...@ (Vladimir Vukicevic) Date: Wed, 11 Aug 2010 14:14:10 -0700 (PDT) Subject: [Public WebGL] Problems with vertexAttrib4fv In-Reply-To: Message-ID: <2062755321.252423.1281561250305.JavaMail.root@cm-mail03.mozilla.org> I'm trying to find the description of GL ES 2.0 behaviour for attrib 0 -- what section is it in?
- Vlad ----- Original Message ----- > Current Chromium builds, at least, should be emulating the OpenGL ES > 2.0 behavior for vertex attribute 0. You should not need to modify > your application code. Any other behavior of a WebGL implementation is > not spec compliant and a bug should be filed with the appropriate > browser vendor. > > -Ken > > On Wed, Aug 11, 2010 at 1:04 PM, Alan Chaney > wrote: > > Thanks Vlad, > > > > Well, that would explain things. Looks like application writers will > > have to > > beware of this one. I'll modify my code as I outlined below and see > > if that > > fixes it on the Linux/ATI box. With my Khronos member hat on it > > looks like > > there will have to be work by some of the driver manufacturers if > > they want > > to claim that their drivers are WebGL compliant. > > > > Regards > > > > Alan > > > > > > On 8/11/2010 12:54 PM, Vladimir Vukicevic wrote: > >> > >> Oh right, I keep forgetting attrib indices are (have to be) just > >> numbers. > >> I can't remember where we had this discussion, but I /think/ we > >> decided > >> that this was a driver bug (requiring attrib index 0 to be an > >> enabled > >> array). At least, we didn't add any spec language requiring it, and > >> I see > >> that we don't do any explicit checks for it -- and that there are > >> conformance tests that require a constant vertex attrib 0 to not > >> fail > >> (gl-bind-attrib-location-test.html). > >> > >> - Vlad > >> > >> ----- Original Message ----- > >> > >>> > >>> Hi Vlad > >>> > >>> On 8/11/2010 11:21 AM, Vladimir Vukicevic wrote: > >>> > >>>> > >>>> Hmm, would be interesting to know what attrib location > >>>> getAttribLocation is returning, though of course, you can't know > >>>> that :/ > >>>> > >>>> > >>> > >>> Well, when I print it out, the attrib index is a number. I'm > >>> using > >>> the > >>> pattern of compiling and linking the shaders and then using the > >>> locations thus derived.
I've checked, and it seems that on the > >>> linux > >>> box > >>> the attribute index for a_color is 0, but for the windows box the > >>> a_color index is 2. > >>> > >>> This is easily explained in that presumably the graphics driver > >>> determines the index during the linking process, and the drivers > >>> are > >>> ATI > >>> and Nvidia respectively. > >>> > >>> Of course, I could always set the attribute indices before > >>> linking, > >>> and > >>> for example make '0' correspond to 'a_position' which is pretty > >>> much > >>> always likely to be bound to an array. I notice that the Nvidia > >>> driver > >>> appears to have made a_position the 0th index > >>> > >>> The actual shader code is: > >>> > >>> attribute vec3 a_position; > >>> attribute vec3 a_normal; > >>> attribute vec4 a_color; > >>> attribute vec3 a_texcoord; > >>> > >>> and this is the order that the nvidia driver appears to use. > >>> > >>> > >>>> > >>>> I wonder if this is something to do with attribute 0 needing to > >>>> be > >>>> an enabled array? Is a_color the first attribute definition in > >>>> your > >>>> shader? Though I thought we had error checks for 0 not being an > >>>> enabled array. > >>>> > >>>> > >>> > >>> I don't understand your comment about attribute 0 needing to be an > >>> enabled array. It seems from reading the spec that vertex > >>> attributes > >>> are > >>> handled slightly differently in WebGL than ES 2, but basically > >>> what > >>> I'm > >>> doing is to use disableVertexAttribArray to tell webgl that it > >>> isn't > >>> bound to an array. > >>> > >>> The only specific reference I can find is: > >>> > >>> 6.2 Enabled Vertex Attributes and Range Checking > >>> > >>> and the way that's written implies to me that if you call > >>> enableVertexAttribArray(n) then you need to make sure that n is > >>> bound > >>> with a vertex attribute array pointer statement. But I'm > >>> specifically > >>> calling disableVertexAttribArray because it isn't. 
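The approach Alan floats here -- fixing the indices himself so that '0' corresponds to 'a_position' -- is done with bindAttribLocation, which must be called before linkProgram to take effect. A sketch, assuming the four attribute names from the shader quoted above (the helper function itself is illustrative, not from the thread):

```javascript
// Pin attribute indices before linking so that index 0 always maps to
// a_position, which is essentially always sourced from an enabled
// array. bindAttribLocation only takes effect at the next linkProgram.
// (Illustrative helper; attribute names are from the quoted shader.)
function linkWithFixedLocations(gl, program) {
  var names = ["a_position", "a_normal", "a_color", "a_texcoord"];
  for (var i = 0; i < names.length; i++) {
    gl.bindAttribLocation(program, i, names[i]);
  }
  gl.linkProgram(program);
  return gl.getProgramParameter(program, gl.LINK_STATUS);
}
```

This sidesteps the driver-dependent ordering Alan observed, where ATI assigned a_color index 0 and Nvidia assigned it index 2.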
> >>> My feeling at this point is that I'll rewrite the code to set the > >>> attrib > >>> locations prior to linking. If I do that, the logical thing is to > >>> call > >>> disableVertexAttribArray as I create them and assign specific > >>> default > >>> values (ie make them 'constant attributes') and then when I'm > >>> setting > >>> up > >>> a vertexAttrib array specifically enable the ones that I'm using. > >>> The > >>> nature of my application is such that different objects can use > >>> different combinations of attributes. I assume that if I enable > >>> them, > >>> draw them (drawArrays/drawElements) and then disable them that the > >>> GL > >>> state machine will handle that properly. Alternatively, I suppose > >>> I > >>> can > >>> track which ones are enabled/disabled and just make sure that as > >>> the > >>> scenegraph is traversed then the attributes are enabled/disabled > >>> according to the needs of the specific scene element. > >>> > >>> I'll report back but it probably won't be today because I have to > >>> do > >>> some other things this afternoon. > >>> > >>> > >>> Thanks > >>> > >>> Alan > >>> > >>> > >>> > >>>> > >>>> - Vlad > >>>> > >>>> ----- Original Message ----- > >>>> > >>>> > >>>>> > >>>>> On Wed, Aug 11, 2010 at 10:05 AM, alan...@ > >>>>> wrote: > >>>>> > >>>>> > >>>>>> > >>>>>> Hi > >>>>>> > >>>>>> I'm seeing a problem with vertexAttrib4fv which manifests on > >>>>>> Linux > >>>>>> x86_64/ATI Radeon 5770 but not on my WinXP32/GeForce 8800 GTX > >>>>>> > >>>>>> The pseudo code is something like: > >>>>>> > >>>>>> > >>>>>> if (! colorVertexAttributeData) { > >>>>>> int colorIndex = getAttribLocation(shader, "a_color") > >>>>>> if (colorIndex != -1) { > >>>>>> disableVertexAttribArray(colorIndex) > >>>>>> vertexAttrib4fv(colorIndex, defaultColor) > >>>>>> } > >>>>>> } > >>>>>> > >>>>>> > >>>>>> where my shader has: > >>>>>> > >>>>>> attribute vec3 a_color; > >>>>>> > >>>>>> in the attribute definitions. > >>>>>> > >>>>>> The purpose behind the code is for the cases where the data > >>>>>> doesn't > >>>>>> have a > >>>>>> color attribute channel (using Max speak) so that it is > >>>>>> displayed > >>>>>> with a > >>>>>> default color, so I use a constant vertex attribute. > >>>>>> > >>>>>> This works fine in the win setup, but on the linux box Chrome > >>>>>> 6.0.472.25 dev > >>>>>> ignores the color and Firefox (3.7 -> 4.0 beta 2) just produces > >>>>>> a > >>>>>> black > >>>>>> image. By "ignore the color" I've set a default value of > >>>>>> vec4(0.2,0.2,0.2,1.0) in the shader so I can see if anything is > >>>>>> happening. > >>>>>> In chrome, the objects are this color rather than the color > >>>>>> provided > >>>>>> in the > >>>>>> vertexAttrib statement.
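One way to narrow down where the constant value is being lost is to read the current attribute value back with getVertexAttrib and compare it against what was passed to vertexAttrib4fv. This is a hypothetical debugging helper, not code from the thread:

```javascript
// Read back a constant vertex attribute and compare it against the
// value previously passed to vertexAttrib4fv. If the driver silently
// dropped the value (as suspected on the Linux/ATI box above), the
// mismatch shows up here. (Illustrative helper, not from the thread.)
function constantAttribMatches(gl, index, expected) {
  var actual = gl.getVertexAttrib(index, gl.CURRENT_VERTEX_ATTRIB);
  for (var i = 0; i < 4; i++) {
    if (Math.abs(actual[i] - expected[i]) > 1e-6) {
      return false;
    }
  }
  return true;
}
```

The gl-vertex-attrib.html conformance test discussed later in this thread performs essentially this readback on attribute 0.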
> >>>>>> > >>>>>> Regards > >>>>>> > >>>>>> Alan From kbr...@ Wed Aug 11 14:33:09 2010 From: kbr...@ (Kenneth Russell) Date: Wed, 11 Aug 2010 14:33:09 -0700 Subject: [Public WebGL] Problems with vertexAttrib4fv In-Reply-To: <2062755321.252423.1281561250305.JavaMail.root@cm-mail03.mozilla.org> References: <2062755321.252423.1281561250305.JavaMail.root@cm-mail03.mozilla.org> Message-ID: OpenGL ES 2.0 doesn't have any special semantics for vertex attribute 0. I don't see anything in section 2.7 of the OpenGL ES 2.0 spec that specifically mentions this.
http://www.opengl.org/sdk/docs/man/xhtml/glVertexAttrib.xml mentions the desktop OpenGL semantics briefly at the bottom of the Description section, but it doesn't explicitly mention that you get no rendered output if you don't enable vertex attribute 0 as an array. See http://src.chromium.org/viewvc/chrome/trunk/src/gpu/command_buffer/service/gles2_cmd_decoder.cc?view=markup, GLES2DecoderImpl::SimulateAttrib0, for an example of how this can be emulated. -Ken On Wed, Aug 11, 2010 at 2:14 PM, Vladimir Vukicevic wrote: > I'm trying to find the description of GL ES 2.0 behaviour for attrib 0 -- what section is it in? > > - Vlad > > ----- Original Message ----- >> Current Chromium builds, at least, should be emulating the OpenGL ES >> 2.0 behavior for vertex attribute 0. You should not need to modify >> your application code. Any other behavior of a WebGL implementation is >> not spec compliant and a bug should be filed with the appropriate >> browser vendor. >> >> -Ken >> >> On Wed, Aug 11, 2010 at 1:04 PM, Alan Chaney >> wrote: >> > Thanks Vlad, >> > >> > Well, that would explain things. Looks like application writers will >> > have to >> > beware of this one. I'll modify my code as I outlined below and see >> > if that >> > fixes it on the Linux/ATI box. With my Khronos member hat on it >> > looks like >> > there will have to be work by some of the driver manufacturers if >> > they want >> > to claim that their drivers are WebGL compliant. >> > >> > Regards >> > >> > Alan >> > >> > >> > On 8/11/2010 12:54 PM, Vladimir Vukicevic wrote: >> >> >> >> Oh right, I keep forgetting attrib indices are (have to be) just >> >> numbers. >> >> I can't remember where we had this discussion, but I /think/ we >> >> decided >> >> that this was a driver bug (requiring attrib index 0 to be an >> >> enabled >> >> array).
At least, we didn't add any spec language requiring it, and >> >> I see >> >> that we don't do any explicit checks for it -- and that there are >> >> conformance tests that require a constant vertex attrib 0 to not >> >> fail >> >> (gl-bind-attrib-location-test.html). >> >> >> >> - Vlad >> >> >> >> ----- Original Message ----- >> >> >> >>> >> >>> Hi Vlad >> >>> >> >>> On 8/11/2010 11:21 AM, Vladimir Vukicevic wrote: >> >>> >> >>>> >> >>>> Hmm, would be interesting to know what attrib location >> >>>> getAttribLocation is returning, though of course, you can't know >> >>>> that :/ >> >>>> >> >>>> >> >>> >> >>> Well, when I print it out, the attrib index is a number. I'm >> >>> using >> >>> the >> >>> pattern of compiling and linking the shaders and then using the >> >>> locations thus derived. I've checked, and it seems that on the >> >>> linux >> >>> box >> >>> the attribute index for a_color is 0, but for the windows box the >> >>> a_color index is 2. >> >>> >> >>> This is easily explained in that presumably the graphics driver >> >>> determines the index during the linking process, and the drivers >> >>> are >> >>> ATI >> >>> and Nvidia respectively. >> >>> >> >>> Of course, I could always set the attribute indices before >> >>> linking, >> >>> and >> >>> for example make '0' correspond to 'a_position' which is pretty >> >>> much >> >>> always likely to be bound to an array. I notice that the Nvidia >> >>> driver >> >>> appears to have made a_position the 0th index. >> >>> >> >>> The actual shader code is: >> >>> >> >>> attribute vec3 a_position; >> >>> attribute vec3 a_normal; >> >>> attribute vec4 a_color; >> >>> attribute vec3 a_texcoord; >> >>> >> >>> and this is the order that the nvidia driver appears to use. >> >>> >> >>> >> >>>> >> >>>> I wonder if this is something to do with attribute 0 needing to >> >>>> be >> >>>> an enabled array? Is a_color the first attribute definition in >> >>>> your >> >>>> shader?
Though I thought we had error checks for 0 not being an >> >>>> enabled array. >> >>>> >> >>>> >> >>> >> >>> I don't understand your comment about attribute 0 needing to be an >> >>> enabled array. It seems from reading the spec that vertex >> >>> attributes >> >>> are >> >>> handled slightly differently in WebGL than ES 2, but basically >> >>> what >> >>> I'm >> >>> doing is to use disableVertexAttribArray to tell webgl that it >> >>> isn't >> >>> bound to an array. >> >>> >> >>> The only specific reference I can find is: >> >>> >> >>> 6.2 Enabled Vertex Attributes and Range Checking >> >>> >> >>> and the way that's written implies to me that if you call >> >>> enableVertexAttribArray(n) then you need to make sure that n is >> >>> bound >> >>> with a vertex attribute array pointer statement. But I'm >> >>> specifically >> >>> calling disableVertexAttribArray because it isn't. >> >>> >> >>> My feeling at this point is that I'll rewrite the code to set the >> >>> attrib >> >>> locations prior to linking. If I do that, the logical thing is to >> >>> call >> >>> disableVertexAttribArray as I create them and assign specific >> >>> default >> >>> values (ie make them 'constant attributes') and then when I'm >> >>> setting >> >>> up >> >>> a vertexatttrib array specifically enable the ones that I'm using. >> >>> The >> >>> nature of my application is such that different objects can use >> >>> different combinations of attributes. I assume that if I enable >> >>> them, >> >>> draw them (drawArrays/drawElements) and then disable them that the >> >>> GL >> >>> state machine will handle that properly. Alternatively, I suppose >> >>> I >> >>> can >> >>> track which ones are enabled/disabled and just make sure that as >> >>> the >> >>> scenegraph is traversed then the attributes are enabled/disabled >> >>> according to the needs of the specific scene element. 
>> >>> >> >>> I'll report back but it probably won't be today because I have to >> >>> do >> >>> some other things this afternoon. >> >>> >> >>> >> >>> Thanks >> >>> >> >>> Alan >> >>> >> >>> >> >>> >> >>>> >> >>>> - Vlad >> >>>> >> >>>> ----- Original Message ----- >> >>>> >> >>>> >> >>>>> >> >>>>> On Wed, Aug 11, 2010 at 10:05 AM, alan...@ >> >>>>> wrote: >> >>>>> >> >>>>> >> >>>>>> >> >>>>>> Hi >> >>>>>> >> >>>>>> I'm seeing a problem with vertexAttrib4fv which manifests on >> >>>>>> Linux >> >>>>>> x86_64/ATI Radeon 5770 but not on my WinXP32/GeForce 8800 GTX >> >>>>>> >> >>>>>> The pseudo code is something like: >> >>>>>> >> >>>>>> >> >>>>>> if (! colorVertexAttributeData) { >> >>>>>> int colorIndex = getAttribLocation(shader, "a_color") >> >>>>>> if (colorIndex != -1) { >> >>>>>> disableVertexAttribArray(colorIndex) >> >>>>>> vertexAttrib4fv(colorIndex, defaultColor) >> >>>>>> } >> >>>>>> } >> >>>>>> >> >>>>>> >> >>>>>> where my shader has: >> >>>>>> >> >>>>>> attribute vec3 a_color; >> >>>>>> >> >>>>>> in the attribute definitions. >> >>>>>> >> >>>>>> The purpose behind the code is for the cases where the data >> >>>>>> doesn't >> >>>>>> have a >> >>>>>> color attribute channel (using Max speak) so that it is >> >>>>>> displayed >> >>>>>> with a >> >>>>>> default color, so I use a constant vertex attribute. >> >>>>>> >> >>>>>> This works fine in the win setup, but on the linux box Chrome >> >>>>>> 6.0.472.25 dev >> >>>>>> ignores the color and Firefox (3.7 -> 4.0 beta 2) just produces >> >>>>>> a >> >>>>>> black >> >>>>>> image. By "ignore the color" I've set a default value of >> >>>>>> vec4(0.2,0.2,0.2,1.0) in the shader so I can see if anything is >> >>>>>> happening. >> >>>>>> In chrome, the objects are this color rather than the color >> >>>>>> provided >> >>>>>> in the >> >>>>>> vertexAttrib statement.
>> >>>>>> >> >>>>>> >> >>>>> >> >>>>> I don't have a really useful suggestion except to please try the >> >>>>> top >> >>>>> of tree Chromium build rather than the dev channel build. WebGL >> >>>>> is >> >>>>> still under rapid development. See the WebGL wiki for download >> >>>>> instructions. >> >>>>> >> >>>>> If you can boil this down into a small test case please post it. >> >>>>> >> >>>>> -Ken >> >>>>> >> >>>>>> >> >>>>>> It seems to me that the likely candidates are either the >> >>>>>> graphics >> >>>>>> driver or >> >>>>>> some issue with WebGL and x86_64 Linux. >> >>>>>> >> >>>>>> When there is a color attribute channel in the vertex array >> >>>>>> then >> >>>>>> the >> >>>>>> vertexAttribPointer correctly finds and displays the color on >> >>>>>> all >> >>>>>> platforms. >> >>>>>> >> >>>>>> Regards >> >>>>>> >> >>>>>> Alan
----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Wed Aug 11 18:25:04 2010 From: ste...@ (Steve Baker) Date: Wed, 11 Aug 2010 20:25:04 -0500 Subject: [Public WebGL] Problems with vertexAttrib4fv In-Reply-To: References: <1981268605.251589.1281556485410.JavaMail.root@cm-mail03.mozilla.org> <4C630241.5090505@mechnicality.com> Message-ID: <4C634D70.4090000@sjbaker.org> Gregg Tavares (wrk) wrote: > Hi Alan, > > Can you please tell me if these tests PASS for you? > > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/conformance/gl-vertex-attrib.html > > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/conformance/gl-bind-attrib-location-test.html > > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/conformance/gl-object-get-calls.html > (note: On top of tree Chromium (6.0.491.0 (55567)) 3 of the tests in > this last one don't pass at this time but those are not related to the > issue you are seeing) FYI: All three fail in some way on my Linux/Firefox/nVidia-6800 test box: --------------------------------------------------------- This test ensures WebGL implementations vertexAttrib can be set and read. On success, you will see a series of "PASS" messages, followed by "TEST COMPLETE". Canvas.getContext PASS context exists Checking gl.vertexAttrib. FAIL gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[0] should be 1. Was 0. FAIL gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[1] should be 2. Was 0. FAIL gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[2] should be 3. Was 0. FAIL gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[3] should be 4. Was 0. FAIL gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[0] should be 5. Was 0. 
PASS gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[1] is 0 PASS gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[2] is 0 FAIL gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[3] should be 1. Was 0. PASS successfullyParsed is true TEST COMPLETE --------------------------------------------------------------------- This test ensures WebGL implementations don't allow names that start with 'gl_' when calling bindAttribLocation. On success, you will see a series of "PASS" messages, followed by "TEST COMPLETE". Canvas.getContext PASS [object CanvasRenderingContextWebGL] is non-null. Checking gl.bindAttribLocation. PASS bindAttribLocation should return INVALID_OPERATION if name starts with 'gl_' PASS bindAttribLocation should return INVALID_OPERATION if name starts with 'gl_' PASS program linked successfully vPosition:3 vColor :2 PASS location of vPositon should be 3 PASS location of vColor should be 2 FAIL pixel at (20,15) is (0,0,0,255), should be (0,255,0,255) PASS program linked successfully vPosition:3 vColor :0 PASS location of vPositon should be 3 PASS location of vColor should be 0 FAIL pixel at (20,15) is (0,0,0,255), should be (255,0,0,255) PASS gl.getError() is gl.NO_ERROR PASS successfullyParsed is true TEST COMPLETE ----------------------------------------------------------------------------------- Test of get calls against GL objects like getBufferParameter, etc. On success, you will see a series of "PASS" messages, followed by "TEST COMPLETE". PASS shaders.length is 2 PASS shaders[0] == standardVert && shaders[1] == standardFrag || shaders[1] == standardVert && shaders[0] == standardFrag is true PASS gl.getError() is gl.NO_ERROR FAIL gl.getAttachedShaders(null) should be undefined. Was null PASS gl.getError() is gl.INVALID_VALUE FAIL gl.getAttachedShaders(standardVert) should be undefined. Threw exception [Exception... 
"Could not convert JavaScript argument arg 0 [nsICanvasRenderingContextWebGL.getAttachedShaders]" nsresult: "0x80570009 (NS_ERROR_XPC_BAD_CONVERT_JS)" location: "JS frame :: https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/resources/js-test-pre.js :: shouldBeUndefined :: line 256" data: no] FAIL gl.getError() should be 1281. Was 0. PASS gl.getBufferParameter(gl.ARRAY_BUFFER, gl.BUFFER_SIZE) is 16 PASS gl.getBufferParameter(gl.ARRAY_BUFFER, gl.BUFFER_USAGE) is gl.DYNAMIC_DRAW PASS getError was expected value: NO_ERROR : PASS getError was expected value: NO_ERROR : PASS getError was expected value: NO_ERROR : PASS gl.checkFramebufferStatus(gl.FRAMEBUFFER) is gl.FRAMEBUFFER_COMPLETE PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE) is gl.TEXTURE PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_NAME) is texture PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.FRAMEBUFFER_ATTACHMENT_TEXTURE_LEVEL) is 0 PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.FRAMEBUFFER_ATTACHMENT_TEXTURE_CUBE_MAP_FACE) is 0 PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE) is gl.RENDERBUFFER PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_NAME) is renderbuffer PASS gl.getProgramParameter(standardProgram, gl.DELETE_STATUS) is false PASS gl.getProgramParameter(standardProgram, gl.LINK_STATUS) is true PASS typeof gl.getProgramParameter(standardProgram, gl.VALIDATE_STATUS) is "boolean" PASS typeof gl.getProgramParameter(standardProgram, gl.INFO_LOG_LENGTH) is "number" PASS gl.getProgramParameter(standardProgram, gl.ATTACHED_SHADERS) is 2 PASS gl.getProgramParameter(standardProgram, gl.ACTIVE_ATTRIBUTES) is 2 PASS 
gl.getProgramParameter(standardProgram, gl.ACTIVE_ATTRIBUTE_MAX_LENGTH) is non-zero. PASS gl.getProgramParameter(standardProgram, gl.ACTIVE_UNIFORMS) is 1 PASS gl.getProgramParameter(standardProgram, gl.ACTIVE_UNIFORM_MAX_LENGTH) is non-zero. PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_WIDTH) is 2 PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_HEIGHT) is 2 PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_INTERNAL_FORMAT) is non-zero. PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_DEPTH_SIZE) is non-zero. FAIL getError expected: NO_ERROR. Was INVALID_OPERATION : PASS getError was expected value: NO_ERROR : PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_RED_SIZE) is non-zero. PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_GREEN_SIZE) is non-zero. PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_BLUE_SIZE) is non-zero. PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_ALPHA_SIZE) is non-zero. PASS gl.getShaderParameter(standardVert, gl.SHADER_TYPE) is gl.VERTEX_SHADER PASS gl.getShaderParameter(standardVert, gl.DELETE_STATUS) is false PASS gl.getShaderParameter(standardVert, gl.COMPILE_STATUS) is true PASS typeof gl.getShaderParameter(standardVert, gl.INFO_LOG_LENGTH) is "number" PASS gl.getShaderParameter(standardVert, gl.SHADER_SOURCE_LENGTH) is non-zero. 
PASS gl.getTexParameter(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER) is gl.NEAREST
PASS gl.getTexParameter(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER) is gl.NEAREST
PASS gl.getTexParameter(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S) is gl.CLAMP_TO_EDGE
PASS gl.getTexParameter(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T) is gl.CLAMP_TO_EDGE
PASS gl.getProgramParameter(boolProgram, gl.LINK_STATUS) is true
PASS getError was expected value: NO_ERROR :
PASS gl.getUniform(boolProgram, bvalLoc) is true
PASS gl.getUniform(boolProgram, bval2Loc) is [1, 0]
PASS gl.getUniform(boolProgram, bval3Loc) is [1, 0, 1]
PASS gl.getUniform(boolProgram, bval4Loc) is [1, 0, 1, 0]
PASS gl.getProgramParameter(intProgram, gl.LINK_STATUS) is true
PASS getError was expected value: NO_ERROR :
PASS gl.getUniform(intProgram, ivalLoc) is 1
PASS gl.getUniform(intProgram, ival2Loc) is [2, 3]
PASS gl.getUniform(intProgram, ival3Loc) is [4, 5, 6]
PASS gl.getUniform(intProgram, ival4Loc) is [7, 8, 9, 10]
PASS gl.getProgramParameter(floatProgram, gl.LINK_STATUS) is true
PASS getError was expected value: NO_ERROR :
PASS gl.getUniform(floatProgram, fvalLoc) is 11
PASS gl.getUniform(floatProgram, fval2Loc) is [12, 13]
PASS gl.getUniform(floatProgram, fval3Loc) is [14, 15, 16]
PASS gl.getUniform(floatProgram, fval4Loc) is [17, 18, 19, 20]
PASS gl.getProgramParameter(matProgram, gl.LINK_STATUS) is true
PASS getError was expected value: NO_ERROR :
PASS gl.getUniform(matProgram, mval2Loc) is [1, 2, 3, 4]
PASS gl.getUniform(matProgram, mval3Loc) is [5, 6, 7, 8, 9, 10, 11, 12, 13]
PASS gl.getUniform(matProgram, mval4Loc) is [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29]
FAIL gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_BUFFER_BINDING) should be [object WebGLBuffer] (of type object). Was 3 (of type number).
PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_ENABLED) is true
PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_SIZE) is 4
PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_STRIDE) is 0
PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_TYPE) is gl.FLOAT
PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_NORMALIZED) is false
PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_STRIDE) is 36
FAIL gl.getVertexAttribOffset(1, gl.VERTEX_ATTRIB_ARRAY_POINTER) should be 12. Threw exception [Exception... "Component returned failure code: 0x80004001 (NS_ERROR_NOT_IMPLEMENTED) [nsICanvasRenderingContextWebGL.getVertexAttribOffset]" nsresult: "0x80004001 (NS_ERROR_NOT_IMPLEMENTED)" location: "JS frame :: https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/resources/js-test-pre.js :: shouldBe :: line 151" data: no]
PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_ENABLED) is false
PASS gl.getVertexAttrib(1, gl.CURRENT_VERTEX_ATTRIB) is [5, 6, 7, 8]
PASS getError was expected value: NO_ERROR :
FAIL gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_NAME) should be null. Threw exception [Exception... "Component returned failure code: 0x80004005 (NS_ERROR_FAILURE) [nsICanvasRenderingContextWebGL.getFramebufferAttachmentParameter]" nsresult: "0x80004005 (NS_ERROR_FAILURE)" location: "JS frame :: https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/resources/js-test-pre.js :: shouldBe :: line 151" data: no]
FAIL gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_NAME) should be null. Threw exception [Exception... "Component returned failure code: 0x80004005 (NS_ERROR_FAILURE) [nsICanvasRenderingContextWebGL.getFramebufferAttachmentParameter]" nsresult: "0x80004005 (NS_ERROR_FAILURE)" location: "JS frame :: https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/resources/js-test-pre.js :: shouldBe :: line 151" data: no]
FAIL gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_BUFFER_BINDING) should be null (of type object). Was 0 (of type number).
PASS getError was expected value: NO_ERROR :
PASS successfullyParsed is true

TEST COMPLETE
--------------------------------------------------------------

-----------------------------------------------------------
You are currently subscribed to public_webgl...@
To unsubscribe, send an email to majordomo...@ with
the following command in the body of your email:

From ala...@ Wed Aug 11 14:51:49 2010
From: ala...@ (ala...@)
Date: Wed, 11 Aug 2010 14:51:49 -0700
Subject: [Public WebGL] Problems with vertexAttrib4fv
In-Reply-To: <4C634D70.4090000@sjbaker.org>
References: <1981268605.251589.1281556485410.JavaMail.root@cm-mail03.mozilla.org> <4C630241.5090505@mechnicality.com> <4C634D70.4090000@sjbaker.org>
Message-ID: <4C631B75.7020806@mechnicality.com>

Hi Steve

Which version of Linux?

Alan

On 08/11/2010 06:25 PM, Steve Baker wrote:
> Gregg Tavares (wrk) wrote:
>
>> Hi Alan,
>>
>> Can you please tell me if these tests PASS for you?
>> >> https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/conformance/gl-vertex-attrib.html >> >> https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/conformance/gl-bind-attrib-location-test.html >> >> https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/conformance/gl-object-get-calls.html >> (note: On top of tree Chromium (6.0.491.0 (55567)) 3 of the tests in >> this last one don't pass at this time but those are not related to the >> issue you are seeing) >> > FYI: All three fail in some way on my Linux/Firefox/nVidia-6800 test box: > --------------------------------------------------------- > This test ensures WebGL implementations vertexAttrib can be set and read. > > On success, you will see a series of "PASS" messages, followed by "TEST > COMPLETE". > > Canvas.getContext > PASS context exists > > Checking gl.vertexAttrib. > FAIL gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[0] should be 1. Was 0. > FAIL gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[1] should be 2. Was 0. > FAIL gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[2] should be 3. Was 0. > FAIL gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[3] should be 4. Was 0. > FAIL gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[0] should be 5. Was 0. > PASS gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[1] is 0 > PASS gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[2] is 0 > FAIL gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[3] should be 1. Was 0. > > PASS successfullyParsed is true > > TEST COMPLETE > --------------------------------------------------------------------- > This test ensures WebGL implementations don't allow names that start > with 'gl_' when calling bindAttribLocation. > > On success, you will see a series of "PASS" messages, followed by "TEST > COMPLETE". > > > Canvas.getContext > PASS [object CanvasRenderingContextWebGL] is non-null. > > Checking gl.bindAttribLocation. 
> PASS bindAttribLocation should return INVALID_OPERATION if name starts > with 'gl_' > PASS bindAttribLocation should return INVALID_OPERATION if name starts > with 'gl_' > PASS program linked successfully > vPosition:3 > vColor :2 > PASS location of vPositon should be 3 > PASS location of vColor should be 2 > FAIL pixel at (20,15) is (0,0,0,255), should be (0,255,0,255) > PASS program linked successfully > vPosition:3 > vColor :0 > PASS location of vPositon should be 3 > PASS location of vColor should be 0 > FAIL pixel at (20,15) is (0,0,0,255), should be (255,0,0,255) > PASS gl.getError() is gl.NO_ERROR > > PASS successfullyParsed is true > > TEST COMPLETE > ----------------------------------------------------------------------------------- > Test of get calls against GL objects like getBufferParameter, etc. > > On success, you will see a series of "PASS" messages, followed by "TEST > COMPLETE". > > PASS shaders.length is 2 > PASS shaders[0] == standardVert&& shaders[1] == standardFrag || > shaders[1] == standardVert&& shaders[0] == standardFrag is true > PASS gl.getError() is gl.NO_ERROR > FAIL gl.getAttachedShaders(null) should be undefined. Was null > PASS gl.getError() is gl.INVALID_VALUE > FAIL gl.getAttachedShaders(standardVert) should be undefined. Threw > exception [Exception... "Could not convert JavaScript argument arg 0 > [nsICanvasRenderingContextWebGL.getAttachedShaders]" nsresult: > "0x80570009 (NS_ERROR_XPC_BAD_CONVERT_JS)" location: "JS frame :: > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/resources/js-test-pre.js > :: shouldBeUndefined :: line 256" data: no] > FAIL gl.getError() should be 1281. Was 0. 
> PASS gl.getBufferParameter(gl.ARRAY_BUFFER, gl.BUFFER_SIZE) is 16 > PASS gl.getBufferParameter(gl.ARRAY_BUFFER, gl.BUFFER_USAGE) is > gl.DYNAMIC_DRAW > PASS getError was expected value: NO_ERROR : > PASS getError was expected value: NO_ERROR : > PASS getError was expected value: NO_ERROR : > PASS gl.checkFramebufferStatus(gl.FRAMEBUFFER) is gl.FRAMEBUFFER_COMPLETE > PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, > gl.COLOR_ATTACHMENT0, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE) is gl.TEXTURE > PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, > gl.COLOR_ATTACHMENT0, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_NAME) is texture > PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, > gl.COLOR_ATTACHMENT0, gl.FRAMEBUFFER_ATTACHMENT_TEXTURE_LEVEL) is 0 > PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, > gl.COLOR_ATTACHMENT0, gl.FRAMEBUFFER_ATTACHMENT_TEXTURE_CUBE_MAP_FACE) is 0 > PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, > gl.DEPTH_ATTACHMENT, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE) is > gl.RENDERBUFFER > PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, > gl.DEPTH_ATTACHMENT, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_NAME) is renderbuffer > PASS gl.getProgramParameter(standardProgram, gl.DELETE_STATUS) is false > PASS gl.getProgramParameter(standardProgram, gl.LINK_STATUS) is true > PASS typeof gl.getProgramParameter(standardProgram, gl.VALIDATE_STATUS) > is "boolean" > PASS typeof gl.getProgramParameter(standardProgram, gl.INFO_LOG_LENGTH) > is "number" > PASS gl.getProgramParameter(standardProgram, gl.ATTACHED_SHADERS) is 2 > PASS gl.getProgramParameter(standardProgram, gl.ACTIVE_ATTRIBUTES) is 2 > PASS gl.getProgramParameter(standardProgram, > gl.ACTIVE_ATTRIBUTE_MAX_LENGTH) is non-zero. > PASS gl.getProgramParameter(standardProgram, gl.ACTIVE_UNIFORMS) is 1 > PASS gl.getProgramParameter(standardProgram, > gl.ACTIVE_UNIFORM_MAX_LENGTH) is non-zero. 
> PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_WIDTH) > is 2 > PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, > gl.RENDERBUFFER_HEIGHT) is 2 > PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, > gl.RENDERBUFFER_INTERNAL_FORMAT) is non-zero. > PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, > gl.RENDERBUFFER_DEPTH_SIZE) is non-zero. > FAIL getError expected: NO_ERROR. Was INVALID_OPERATION : > PASS getError was expected value: NO_ERROR : > PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, > gl.RENDERBUFFER_RED_SIZE) is non-zero. > PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, > gl.RENDERBUFFER_GREEN_SIZE) is non-zero. > PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, > gl.RENDERBUFFER_BLUE_SIZE) is non-zero. > PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, > gl.RENDERBUFFER_ALPHA_SIZE) is non-zero. > PASS gl.getShaderParameter(standardVert, gl.SHADER_TYPE) is gl.VERTEX_SHADER > PASS gl.getShaderParameter(standardVert, gl.DELETE_STATUS) is false > PASS gl.getShaderParameter(standardVert, gl.COMPILE_STATUS) is true > PASS typeof gl.getShaderParameter(standardVert, gl.INFO_LOG_LENGTH) is > "number" > PASS gl.getShaderParameter(standardVert, gl.SHADER_SOURCE_LENGTH) is > non-zero. 
> PASS gl.getTexParameter(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER) is gl.NEAREST > PASS gl.getTexParameter(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER) is gl.NEAREST > PASS gl.getTexParameter(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S) is > gl.CLAMP_TO_EDGE > PASS gl.getTexParameter(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T) is > gl.CLAMP_TO_EDGE > PASS gl.getProgramParameter(boolProgram, gl.LINK_STATUS) is true > PASS getError was expected value: NO_ERROR : > PASS gl.getUniform(boolProgram, bvalLoc) is true > PASS gl.getUniform(boolProgram, bval2Loc) is [1, 0] > PASS gl.getUniform(boolProgram, bval3Loc) is [1, 0, 1] > PASS gl.getUniform(boolProgram, bval4Loc) is [1, 0, 1, 0] > PASS gl.getProgramParameter(intProgram, gl.LINK_STATUS) is true > PASS getError was expected value: NO_ERROR : > PASS gl.getUniform(intProgram, ivalLoc) is 1 > PASS gl.getUniform(intProgram, ival2Loc) is [2, 3] > PASS gl.getUniform(intProgram, ival3Loc) is [4, 5, 6] > PASS gl.getUniform(intProgram, ival4Loc) is [7, 8, 9, 10] > PASS gl.getProgramParameter(floatProgram, gl.LINK_STATUS) is true > PASS getError was expected value: NO_ERROR : > PASS gl.getUniform(floatProgram, fvalLoc) is 11 > PASS gl.getUniform(floatProgram, fval2Loc) is [12, 13] > PASS gl.getUniform(floatProgram, fval3Loc) is [14, 15, 16] > PASS gl.getUniform(floatProgram, fval4Loc) is [17, 18, 19, 20] > PASS gl.getProgramParameter(matProgram, gl.LINK_STATUS) is true > PASS getError was expected value: NO_ERROR : > PASS gl.getUniform(matProgram, mval2Loc) is [1, 2, 3, 4] > PASS gl.getUniform(matProgram, mval3Loc) is [5, 6, 7, 8, 9, 10, 11, 12, 13] > PASS gl.getUniform(matProgram, mval4Loc) is [14, 15, 16, 17, 18, 19, 20, > 21, 22, 23, 24, 25, 26, 27, 28, 29] > FAIL gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_BUFFER_BINDING) should > be [object WebGLBuffer] (of type object). Was 3 (of type number). 
> PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_ENABLED) is true > PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_SIZE) is 4 > PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_STRIDE) is 0 > PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_TYPE) is gl.FLOAT > PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_NORMALIZED) is false > PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_STRIDE) is 36 > FAIL gl.getVertexAttribOffset(1, gl.VERTEX_ATTRIB_ARRAY_POINTER) should > be 12. Threw exception [Exception... "Component returned failure code: > 0x80004001 (NS_ERROR_NOT_IMPLEMENTED) > [nsICanvasRenderingContextWebGL.getVertexAttribOffset]" nsresult: > "0x80004001 (NS_ERROR_NOT_IMPLEMENTED)" location: "JS frame :: > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/resources/js-test-pre.js > :: shouldBe :: line 151" data: no] > PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_ENABLED) is false > PASS gl.getVertexAttrib(1, gl.CURRENT_VERTEX_ATTRIB) is [5, 6, 7, 8] > PASS getError was expected value: NO_ERROR : > FAIL gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, > gl.COLOR_ATTACHMENT0, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_NAME) should be > null. Threw exception [Exception... "Component returned failure code: > 0x80004005 (NS_ERROR_FAILURE) > [nsICanvasRenderingContextWebGL.getFramebufferAttachmentParameter]" > nsresult: "0x80004005 (NS_ERROR_FAILURE)" location: "JS frame :: > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/resources/js-test-pre.js > :: shouldBe :: line 151" data: no] > FAIL gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, > gl.DEPTH_ATTACHMENT, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_NAME) should be > null. Threw exception [Exception... 
"Component returned failure code:
> 0x80004005 (NS_ERROR_FAILURE)
> [nsICanvasRenderingContextWebGL.getFramebufferAttachmentParameter]"
> nsresult: "0x80004005 (NS_ERROR_FAILURE)" location: "JS frame ::
> https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/resources/js-test-pre.js
> :: shouldBe :: line 151" data: no]
> FAIL gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_BUFFER_BINDING) should
> be null (of type object). Was 0 (of type number).
> PASS getError was expected value: NO_ERROR :
> PASS successfullyParsed is true
>
> TEST COMPLETE
> --------------------------------------------------------------

From ala...@ Wed Aug 11 14:53:32 2010
From: ala...@ (ala...@)
Date: Wed, 11 Aug 2010 14:53:32 -0700
Subject: [Public WebGL] Problems with vertexAttrib4fv
In-Reply-To:
References: <1981268605.251589.1281556485410.JavaMail.root@cm-mail03.mozilla.org> <4C630241.5090505@mechnicality.com>
Message-ID: <4C631BDC.1050404@mechnicality.com>

Hi Gregg

Platform is Linux x86_64 Ubuntu 10.04 with ATI Radeon 5770 - I can send you the glxinfo off-list if you'd like (it's rather long to just paste here). The ATI information gives the driver as: 8.723.1-100408a-098580C-ATI, 3.2.9756 compatibility profile context.

The results aren't really conclusive, because almost all the combinations have some failures, whereas my results are pretty consistent - it works on the Nvidia/Win platform and not on the x86_64/ATI platform. I'm just about to try everything out on my laptop (HP/Vista64/Nvidia) but I have to update some things so it will take a while.

Thanks

Alan

On 08/11/2010 01:56 PM, Gregg Tavares (wrk) wrote:
> Hi Alan,
>
> Can you please tell me if these tests PASS for you?
> > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/conformance/gl-vertex-attrib.html Fails on FF 3.7a6pre and 4.0b4pre () Canvas.getContext PASS context exists Checking gl.vertexAttrib. FAIL gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[0] should be 1. Was 0. FAIL gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[1] should be 2. Was 0. FAIL gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[2] should be 3. Was 0. FAIL gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[3] should be 4. Was 0. FAIL gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[0] should be 5. Was 0. PASS gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[1] is 0 PASS gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[2] is 0 FAIL gl.getVertexAttrib(0, gl.CURRENT_VERTEX_ATTRIB)[3] should be 1. Was 0. PASS successfullyParsed is true However, on the latest Chromium build 6.0.492.0 (55767) it all works (but my app. doesn't) > > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/conformance/gl-bind-attrib-location-test.html ************ FF 3.7a6pre and FF 4.0b4pre give the following: Chromium 6.0.492.0 (55767) Canvas.getContext PASS [object WebGLRenderingContext] is non-null. Checking gl.bindAttribLocation. PASS bindAttribLocation should return INVALID_OPERATION if name starts with 'gl_' PASS bindAttribLocation should return INVALID_OPERATION if name starts with 'gl_' PASS program linked successfully vPosition:3 vColor :2 PASS location of vPositon should be 3 PASS location of vColor should be 2 FAIL pixel at (20,15) is (0,0,0,255), should be (0,255,0,255) PASS program linked successfully vPosition:3 vColor :0 PASS location of vPositon should be 3 PASS location of vColor should be 0 PASS drawing is correct FAIL gl.getError() should be 0. Was 1285. 
PASS successfullyParsed is true > > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/conformance/gl-object-get-calls.html > (note: On top of tree Chromium (6.0.491.0 (55567)) 3 of the tests in > this last one don't pass at this time but those are not related to the > issue you are seeing) I found top of tree to be Chromium 6.0.492.0 (55767) ??? actually latest Win is 6.0.492.0 (following the links on the Khronos site as Ken suggested.) ******** This is what I got on FF 3.7a6pre Test of get calls against GL objects like getBufferParameter, etc. On success, you will see a series of "PASS" messages, followed by "TEST COMPLETE". FAIL successfullyParsed should be true. Threw exception ReferenceError: successfullyParsed is not defined ******** This is what I got on FF 4.0b4pre Test of get calls against GL objects like getBufferParameter, etc. On success, you will see a series of "PASS" messages, followed by "TEST COMPLETE". PASS shaders.length is 2 PASS shaders[0] == standardVert && shaders[1] == standardFrag || shaders[1] == standardVert && shaders[0] == standardFrag is true PASS gl.getError() is gl.NO_ERROR FAIL gl.getAttachedShaders(null) should be undefined. Was null PASS gl.getError() is gl.INVALID_VALUE FAIL gl.getAttachedShaders(standardVert) should be undefined. Threw exception [Exception... "Could not convert JavaScript argument arg 0 [nsICanvasRenderingContextWebGL.getAttachedShaders]" nsresult: "0x80570009 (NS_ERROR_XPC_BAD_CONVERT_JS)" location: "JS frame :: https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/resources/js-test-pre.js :: shouldBeUndefined :: line 256" data: no] FAIL gl.getError() should be 1281. Was 0. 
PASS gl.getBufferParameter(gl.ARRAY_BUFFER, gl.BUFFER_SIZE) is 16 PASS gl.getBufferParameter(gl.ARRAY_BUFFER, gl.BUFFER_USAGE) is gl.DYNAMIC_DRAW PASS getError was expected value: NO_ERROR : PASS getError was expected value: NO_ERROR : PASS getError was expected value: NO_ERROR : PASS gl.checkFramebufferStatus(gl.FRAMEBUFFER) is gl.FRAMEBUFFER_COMPLETE PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE) is gl.TEXTURE PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_NAME) is texture PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.FRAMEBUFFER_ATTACHMENT_TEXTURE_LEVEL) is 0 PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.FRAMEBUFFER_ATTACHMENT_TEXTURE_CUBE_MAP_FACE) is 0 PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE) is gl.RENDERBUFFER PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_NAME) is renderbuffer PASS gl.getProgramParameter(standardProgram, gl.DELETE_STATUS) is false PASS gl.getProgramParameter(standardProgram, gl.LINK_STATUS) is true PASS typeof gl.getProgramParameter(standardProgram, gl.VALIDATE_STATUS) is "boolean" PASS typeof gl.getProgramParameter(standardProgram, gl.INFO_LOG_LENGTH) is "number" PASS gl.getProgramParameter(standardProgram, gl.ATTACHED_SHADERS) is 2 PASS gl.getProgramParameter(standardProgram, gl.ACTIVE_ATTRIBUTES) is 2 PASS gl.getProgramParameter(standardProgram, gl.ACTIVE_ATTRIBUTE_MAX_LENGTH) is non-zero. PASS gl.getProgramParameter(standardProgram, gl.ACTIVE_UNIFORMS) is 1 PASS gl.getProgramParameter(standardProgram, gl.ACTIVE_UNIFORM_MAX_LENGTH) is non-zero. 
PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_WIDTH) is 2 PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_HEIGHT) is 2 PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_INTERNAL_FORMAT) is non-zero. PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_DEPTH_SIZE) is non-zero. PASS getError was expected value: NO_ERROR : PASS getError was expected value: NO_ERROR : PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_RED_SIZE) is non-zero. PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_GREEN_SIZE) is non-zero. PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_BLUE_SIZE) is non-zero. FAIL gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_ALPHA_SIZE) should be non-zero. Was 0 PASS gl.getShaderParameter(standardVert, gl.SHADER_TYPE) is gl.VERTEX_SHADER PASS gl.getShaderParameter(standardVert, gl.DELETE_STATUS) is false PASS gl.getShaderParameter(standardVert, gl.COMPILE_STATUS) is true PASS typeof gl.getShaderParameter(standardVert, gl.INFO_LOG_LENGTH) is "number" PASS gl.getShaderParameter(standardVert, gl.SHADER_SOURCE_LENGTH) is non-zero. 
PASS gl.getTexParameter(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER) is gl.NEAREST PASS gl.getTexParameter(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER) is gl.NEAREST PASS gl.getTexParameter(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S) is gl.CLAMP_TO_EDGE PASS gl.getTexParameter(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T) is gl.CLAMP_TO_EDGE PASS gl.getProgramParameter(boolProgram, gl.LINK_STATUS) is true PASS getError was expected value: NO_ERROR : PASS gl.getUniform(boolProgram, bvalLoc) is true PASS gl.getUniform(boolProgram, bval2Loc) is [1, 0] PASS gl.getUniform(boolProgram, bval3Loc) is [1, 0, 1] PASS gl.getUniform(boolProgram, bval4Loc) is [1, 0, 1, 0] PASS gl.getProgramParameter(intProgram, gl.LINK_STATUS) is true PASS getError was expected value: NO_ERROR : PASS gl.getUniform(intProgram, ivalLoc) is 1 PASS gl.getUniform(intProgram, ival2Loc) is [2, 3] PASS gl.getUniform(intProgram, ival3Loc) is [4, 5, 6] PASS gl.getUniform(intProgram, ival4Loc) is [7, 8, 9, 10] PASS gl.getProgramParameter(floatProgram, gl.LINK_STATUS) is true PASS getError was expected value: NO_ERROR : PASS gl.getUniform(floatProgram, fvalLoc) is 11 PASS gl.getUniform(floatProgram, fval2Loc) is [12, 13] PASS gl.getUniform(floatProgram, fval3Loc) is [14, 15, 16] PASS gl.getUniform(floatProgram, fval4Loc) is [17, 18, 19, 20] PASS gl.getProgramParameter(matProgram, gl.LINK_STATUS) is true PASS getError was expected value: NO_ERROR : PASS gl.getUniform(matProgram, mval2Loc) is [1, 2, 3, 4] PASS gl.getUniform(matProgram, mval3Loc) is [5, 6, 7, 8, 9, 10, 11, 12, 13] PASS gl.getUniform(matProgram, mval4Loc) is [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29] FAIL gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_BUFFER_BINDING) should be [object WebGLBuffer] (of type object). Was 3 (of type number). 
PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_ENABLED) is true PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_SIZE) is 4 PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_STRIDE) is 0 PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_TYPE) is gl.FLOAT PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_NORMALIZED) is false PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_STRIDE) is 36 FAIL gl.getVertexAttribOffset(1, gl.VERTEX_ATTRIB_ARRAY_POINTER) should be 12. Threw exception [Exception... "Component returned failure code: 0x80004001 (NS_ERROR_NOT_IMPLEMENTED) [nsICanvasRenderingContextWebGL.getVertexAttribOffset]" nsresult: "0x80004001 (NS_ERROR_NOT_IMPLEMENTED)" location: "JS frame :: https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/resources/js-test-pre.js :: shouldBe :: line 151" data: no] PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_ENABLED) is false PASS gl.getVertexAttrib(1, gl.CURRENT_VERTEX_ATTRIB) is [5, 6, 7, 8] PASS getError was expected value: NO_ERROR : FAIL gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_NAME) should be null. Threw exception [Exception... "Component returned failure code: 0x80004005 (NS_ERROR_FAILURE) [nsICanvasRenderingContextWebGL.getFramebufferAttachmentParameter]" nsresult: "0x80004005 (NS_ERROR_FAILURE)" location: "JS frame :: https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/resources/js-test-pre.js :: shouldBe :: line 151" data: no] FAIL gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_NAME) should be null. Threw exception [Exception... 
"Component returned failure code: 0x80004005 (NS_ERROR_FAILURE) [nsICanvasRenderingContextWebGL.getFramebufferAttachmentParameter]" nsresult: "0x80004005 (NS_ERROR_FAILURE)" location: "JS frame :: https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/resources/js-test-pre.js :: shouldBe :: line 151" data: no] FAIL gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_BUFFER_BINDING) should be null (of type object). Was 3 (of type number). PASS getError was expected value: NO_ERROR : PASS successfullyParsed is true TEST COMPLETE **************** And this is the Chromium On success, you will see a series of "PASS" messages, followed by "TEST COMPLETE". PASS shaders.length is 2 PASS shaders[0] == standardVert && shaders[1] == standardFrag || shaders[1] == standardVert && shaders[0] == standardFrag is true PASS gl.getError() is gl.NO_ERROR PASS gl.getAttachedShaders(null) is undefined. PASS gl.getError() is gl.INVALID_VALUE PASS gl.getAttachedShaders(standardVert) is undefined. PASS gl.getError() is gl.INVALID_VALUE PASS gl.getBufferParameter(gl.ARRAY_BUFFER, gl.BUFFER_SIZE) is 16 PASS gl.getBufferParameter(gl.ARRAY_BUFFER, gl.BUFFER_USAGE) is gl.DYNAMIC_DRAW PASS getError was expected value: NO_ERROR : PASS getError was expected value: NO_ERROR : PASS getError was expected value: NO_ERROR : PASS gl.checkFramebufferStatus(gl.FRAMEBUFFER) is gl.FRAMEBUFFER_COMPLETE PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE) is gl.TEXTURE FAIL gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_NAME) should be [object WebGLTexture]. Was null. 
PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.FRAMEBUFFER_ATTACHMENT_TEXTURE_LEVEL) is 0 PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.FRAMEBUFFER_ATTACHMENT_TEXTURE_CUBE_MAP_FACE) is 0 PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE) is gl.RENDERBUFFER FAIL gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_NAME) should be [object WebGLRenderbuffer]. Was null. PASS gl.getProgramParameter(standardProgram, gl.DELETE_STATUS) is false PASS gl.getProgramParameter(standardProgram, gl.LINK_STATUS) is true PASS typeof gl.getProgramParameter(standardProgram, gl.VALIDATE_STATUS) is "boolean" PASS typeof gl.getProgramParameter(standardProgram, gl.INFO_LOG_LENGTH) is "number" PASS gl.getProgramParameter(standardProgram, gl.ATTACHED_SHADERS) is 2 PASS gl.getProgramParameter(standardProgram, gl.ACTIVE_ATTRIBUTES) is 2 PASS gl.getProgramParameter(standardProgram, gl.ACTIVE_ATTRIBUTE_MAX_LENGTH) is non-zero. PASS gl.getProgramParameter(standardProgram, gl.ACTIVE_UNIFORMS) is 1 PASS gl.getProgramParameter(standardProgram, gl.ACTIVE_UNIFORM_MAX_LENGTH) is non-zero. PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_WIDTH) is 2 PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_HEIGHT) is 2 PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_INTERNAL_FORMAT) is non-zero. PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_DEPTH_SIZE) is non-zero. PASS getError was expected value: NO_ERROR : PASS getError was expected value: NO_ERROR : PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_RED_SIZE) is non-zero. PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_GREEN_SIZE) is non-zero. PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_BLUE_SIZE) is non-zero. 
PASS gl.getRenderbufferParameter(gl.RENDERBUFFER, gl.RENDERBUFFER_ALPHA_SIZE) is non-zero. PASS gl.getShaderParameter(standardVert, gl.SHADER_TYPE) is gl.VERTEX_SHADER PASS gl.getShaderParameter(standardVert, gl.DELETE_STATUS) is false PASS gl.getShaderParameter(standardVert, gl.COMPILE_STATUS) is true PASS typeof gl.getShaderParameter(standardVert, gl.INFO_LOG_LENGTH) is "number" PASS gl.getShaderParameter(standardVert, gl.SHADER_SOURCE_LENGTH) is non-zero. PASS gl.getTexParameter(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER) is gl.NEAREST PASS gl.getTexParameter(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER) is gl.NEAREST PASS gl.getTexParameter(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S) is gl.CLAMP_TO_EDGE PASS gl.getTexParameter(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T) is gl.CLAMP_TO_EDGE PASS gl.getProgramParameter(boolProgram, gl.LINK_STATUS) is true PASS getError was expected value: NO_ERROR : PASS gl.getUniform(boolProgram, bvalLoc) is true PASS gl.getUniform(boolProgram, bval2Loc) is [1, 0] PASS gl.getUniform(boolProgram, bval3Loc) is [1, 0, 1] PASS gl.getUniform(boolProgram, bval4Loc) is [1, 0, 1, 0] PASS gl.getProgramParameter(intProgram, gl.LINK_STATUS) is true PASS getError was expected value: NO_ERROR : PASS gl.getUniform(intProgram, ivalLoc) is 1 PASS gl.getUniform(intProgram, ival2Loc) is [2, 3] PASS gl.getUniform(intProgram, ival3Loc) is [4, 5, 6] PASS gl.getUniform(intProgram, ival4Loc) is [7, 8, 9, 10] PASS gl.getProgramParameter(floatProgram, gl.LINK_STATUS) is true PASS getError was expected value: NO_ERROR : PASS gl.getUniform(floatProgram, fvalLoc) is 11 PASS gl.getUniform(floatProgram, fval2Loc) is [12, 13] PASS gl.getUniform(floatProgram, fval3Loc) is [14, 15, 16] PASS gl.getUniform(floatProgram, fval4Loc) is [17, 18, 19, 20] PASS gl.getProgramParameter(matProgram, gl.LINK_STATUS) is true PASS getError was expected value: NO_ERROR : PASS gl.getUniform(matProgram, mval2Loc) is [1, 2, 3, 4] PASS gl.getUniform(matProgram, mval3Loc) is [5, 6, 7, 8, 9, 10, 11, 12, 13] PASS 
gl.getUniform(matProgram, mval4Loc) is [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29] PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_BUFFER_BINDING) is buffer PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_ENABLED) is true PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_SIZE) is 4 PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_STRIDE) is 0 PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_TYPE) is gl.FLOAT PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_NORMALIZED) is false PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_STRIDE) is 36 FAIL gl.getVertexAttribOffset(1, gl.VERTEX_ATTRIB_ARRAY_POINTER) should be 12. Was 0. PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_ENABLED) is false PASS gl.getVertexAttrib(1, gl.CURRENT_VERTEX_ATTRIB) is [5, 6, 7, 8] PASS getError was expected value: NO_ERROR : PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_NAME) is null PASS gl.getFramebufferAttachmentParameter(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.FRAMEBUFFER_ATTACHMENT_OBJECT_NAME) is null PASS gl.getVertexAttrib(1, gl.VERTEX_ATTRIB_ARRAY_BUFFER_BINDING) is null PASS getError was expected value: NO_ERROR : PASS successfullyParsed is true TEST COMPLETE > > If the rest of those tests don't pass for you then I can hopefully use > them to track down the issue. Otherwise if they do pass for you if you > could make a small sample and send it to me I'll be happy to try to > track down the issue. > > > On Wed, Aug 11, 2010 at 1:04 PM, Alan Chaney > wrote: > > Thanks Vlad, > > Well, that would explain things. Looks like application writers > will have to beware of this one. I'll modify my code as I outlined > below and see if that fixes it on the Linux/ATI box. With my > Khronos member hat on it looks like there will have to be work by > some of the driver manufacturers if they want to claim that their > drivers are WebGL compliant. 
> > Regards > > Alan > > > > On 8/11/2010 12:54 PM, Vladimir Vukicevic wrote: > > Oh right, I keep forgetting attrib indices are (have to be) > just numbers. I can't remember where we had this discussion, > but I /think/ we decided that this was a driver bug (requiring > attrib index 0 to be an enabled array). At least, we didn't > add any spec language requiring it, and I see that we don't do > any explicit checks for it -- and that there are conformance > tests that require a constant vertex attrib 0 to not fail > (gl-bind-attrib-location-test.html). > > - Vlad > > ----- Original Message ----- > > Hi Vlad > > On 8/11/2010 11:21 AM, Vladimir Vukicevic wrote: > > Hmm, would be interesting to know what attrib location > getAttribLocation is returning, though of course, you > can't know > that :/ > > > Well, when I print it out, the attrib index is a number . > I'm using > the > pattern of compiling and linking the shaders and then > using the > locations thus derived. I've checked, and it seems that on > the linux > box > the attribute index for a_color is 0, but for the windows > box the > a_color index is 2. > > This is easily explained in that presumably the graphics > driver > determines the index during the linking process, and the > drivers are > ATI > and Nvidia respectively. > > Of course, I could always set the attribute indices before > linking, > and > for example make '0' correspond to 'a_position' which is > pretty much > always likely to be bound to an array. I notice that the > Nvidia driver > appears to have made a_position the 0th index > > The actual shader code is: > > attribute vec3 a_position; > attribute vec3 a_normal; > attribute vec4 a_color; > attribute vec3 a_texcoord; > > and this is the order that the nvidia driver appears to use. > > > I wonder if this is something to do with attribute 0 > needing to be > an enabled array? Is a_color the first attribute > definition in your > shader? 
> Though I thought we had error checks for 0 not being an enabled array.
>
> I don't understand your comment about attribute 0 needing to be an
> enabled array. It seems from reading the spec that vertex attributes
> are handled slightly differently in WebGL than in ES 2, but basically
> what I'm doing is to use disableVertexAttribArray to tell WebGL that
> it isn't bound to an array.
>
> The only specific reference I can find is:
>
>     6.2 Enabled Vertex Attributes and Range Checking
>
> and the way that's written implies to me that if you call
> enableVertexAttribArray(n) then you need to make sure that n is bound
> with a vertex attribute array pointer statement. But I'm specifically
> calling disableVertexAttribArray because it isn't.
>
> My feeling at this point is that I'll rewrite the code to set the
> attrib locations prior to linking. If I do that, the logical thing is
> to call disableVertexAttribArray as I create them and assign specific
> default values (i.e. make them 'constant attributes'), and then when
> I'm setting up a vertex attrib array, specifically enable the ones
> that I'm using. The nature of my application is such that different
> objects can use different combinations of attributes. I assume that if
> I enable them, draw them (drawArrays/drawElements), and then disable
> them, the GL state machine will handle that properly. Alternatively, I
> suppose I can track which ones are enabled/disabled and just make sure
> that as the scenegraph is traversed the attributes are
> enabled/disabled according to the needs of the specific scene element.
>
> I'll report back, but it probably won't be today because I have to do
> some other things this afternoon.
> Thanks
>
> Alan
>
> - Vlad
>
> ----- Original Message -----
> On Wed, Aug 11, 2010 at 10:05 AM, alan...@ wrote:
> Hi
>
> I'm seeing a problem with vertexAttrib4fv which manifests on Linux
> x86_64/ATI Radeon 5770 but not on my WinXP32/GeForce 8800 GTX.
>
> The pseudo code is something like:
>
>     if (! colorVertexAttributeData) {
>         int colorIndex = getAttribLocation(shader, "a_color")
>         if (colorIndex != -1) {
>             disableVertexAttribArray(colorIndex)
>             vertexAttrib4fv(colorIndex, defaultColor)
>         }
>     }
>
> where my shader has:
>
>     attribute vec3 a_color;
>
> in the attribute definitions.
>
> The purpose behind the code is for the cases where the data doesn't
> have a color attribute channel (using Max speak), so that it is
> displayed with a default color; I use a constant vertex attribute.
>
> This works fine in the win setup, but on the linux box Chrome
> 6.0.472.25 dev ignores the color and Firefox (3.7 -> 4.0 beta 2) just
> produces a black image. By "ignores the color" I mean I've set a
> default value of vec4(0.2,0.2,0.2,1.0) in the shader so I can see if
> anything is happening; in Chrome, the objects are this color rather
> than the color provided in the vertexAttrib statement.
>
> I don't have a really useful suggestion except to please try the top
> of tree Chromium build rather than the dev channel build. WebGL is
> still under rapid development. See the WebGL wiki for download
> instructions.
>
> If you can boil this down into a small test case please post it.
>
> -Ken
>
> It seems to me that the likely candidates are either the graphics
> driver or some issue with WebGL and x86_64 linux.
>
> When there is a color attribute channel in the vertex array then
> vertexAttribPointer correctly finds and displays the color on all
> platforms.
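The pseudo-code quoted above can be fleshed out as a small WebGL helper. This is a hedged sketch, not code from the thread: the function name `applyDefaultColor` is hypothetical, and only the attribute name "a_color" and the disableVertexAttribArray / vertexAttrib4fv pattern come from Alan's description.

```javascript
// Hedged sketch of the constant-attribute fallback described above.
// When a mesh has no per-vertex color channel, supply a constant
// ("current") vertex attribute value instead of an array.
// The helper name is our own; the pattern follows the thread.
function applyDefaultColor(gl, program, defaultColor) {
  const colorIndex = gl.getAttribLocation(program, "a_color");
  if (colorIndex !== -1) {
    // The attribute must NOT be sourced from an array...
    gl.disableVertexAttribArray(colorIndex);
    // ...so this constant value is used for every vertex drawn.
    gl.vertexAttrib4fv(colorIndex, defaultColor);
  }
  return colorIndex;
}
```

Usage would be something like `applyDefaultColor(gl, shaderProgram, [0.2, 0.2, 0.2, 1.0]);` after linking. As the rest of the thread shows, this can still misbehave on desktop GL drivers that require attribute index 0 to be an enabled array.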
> Regards
>
> Alan

-----------------------------------------------------------
You are currently subscribed to public_webgl...@
To unsubscribe, send an email to majordomo...@ with
the following command in the body of your email:

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ala...@ Wed Aug 11 14:56:34 2010
From: ala...@ (ala...@)
Date: Wed, 11 Aug 2010 14:56:34 -0700
Subject: [Public WebGL] Problems with vertexAttrib4fv
In-Reply-To:
References: <1981268605.251589.1281556485410.JavaMail.root@cm-mail03.mozilla.org> <4C630241.5090505@mechnicality.com>
Message-ID: <4C631C92.5060708@mechnicality.com>

On 08/11/2010 01:37 PM, Kenneth Russell wrote:
> Current Chromium builds, at least, should be emulating the OpenGL ES
> 2.0 behavior for vertex attribute 0. You should not need to modify
> your application code.

I've just downloaded 6.0.492.0 (55767), which I believe is "LATEST", and
it behaves the same way. Unfortunately I can't run it in GWT dev mode, so
I can't easily see the vertex attribute indices.

> Any other behavior of a WebGL implementation is not spec compliant and
> a bug should be filed with the appropriate browser vendor.
> -Ken

From ala...@ Wed Aug 11 15:19:37 2010
From: ala...@ (ala...@)
Date: Wed, 11 Aug 2010 15:19:37 -0700
Subject: [Public WebGL] Problems with vertexAttrib4fv
In-Reply-To:
References: <2062755321.252423.1281561250305.JavaMail.root@cm-mail03.mozilla.org>
Message-ID: <4C6321F9.1020505@mechnicality.com>

Hmm

I suspect this may be an ATI issue. I added the following line:

    gl.bindAttribLocation(shaderProgram, ATTR_INDEX_POSITION, ATTR_POSITION);

where ATTR_INDEX_POSITION = 0 and ATTR_POSITION = "a_position"

between attaching the shaders and linking the program, and it all burst
into life on the aforementioned linux box. I'll just double-check it on
the windows setup. Incidentally, the 'color' index is now 1 (it was 0
before, and is 2 on nvidia).

I realize of course that this is a workaround, but at least I can get on
with my stuff. Happy to continue debugging the original problem, because
the way I see it Ken is right: I shouldn't have to do what I've done
above.

Alan

On 08/11/2010 02:33 PM, Kenneth Russell wrote:
> OpenGL ES 2.0 doesn't have any special semantics for vertex attribute
> 0. I don't see anything in section 2.7 of the OpenGL ES 2.0 spec that
> specifically mentions this.
> http://www.opengl.org/sdk/docs/man/xhtml/glVertexAttrib.xml mentions
> the desktop OpenGL semantics briefly at the bottom of the Description
> section, but it doesn't explicitly mention that if you don't enable
> vertex attribute 0 as an array you don't get any rendered output.
>
> See http://src.chromium.org/viewvc/chrome/trunk/src/gpu/command_buffer/service/gles2_cmd_decoder.cc?view=markup
> , GLES2DecoderImpl::SimulateAttrib0, for an example of how this can be
> emulated.
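Alan's workaround above can be sketched end to end. Only the call order (attach the shaders, call bindAttribLocation, then linkProgram) and the names ATTR_INDEX_POSITION / ATTR_POSITION / "a_position" come from the thread; the helper function is hypothetical.

```javascript
// Sketch of the workaround: pin "a_position" to attribute index 0
// before linking, so the driver never assigns index 0 to an attribute
// that is later used as a constant (non-array) value.
const ATTR_INDEX_POSITION = 0;
const ATTR_POSITION = "a_position";

function linkWithPinnedPosition(gl, program, vertexShader, fragmentShader) {
  gl.attachShader(program, vertexShader);
  gl.attachShader(program, fragmentShader);
  // bindAttribLocation only takes effect at the next linkProgram call,
  // so it must be issued before linking.
  gl.bindAttribLocation(program, ATTR_INDEX_POSITION, ATTR_POSITION);
  gl.linkProgram(program);
  return gl.getAttribLocation(program, ATTR_POSITION);
}
```

After linking this way, `getAttribLocation(program, "a_position")` should report 0 regardless of which index the driver would otherwise have chosen.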
> > -Ken
> >
> > On Wed, Aug 11, 2010 at 2:14 PM, Vladimir Vukicevic wrote:
> >> I'm trying to find the description of GL ES 2.0 behaviour for attrib
> >> 0 -- what section is it in?
> >>
> >> - Vlad
> >>
> >> [earlier quoted messages trimmed]

From gma...@ Wed Aug 11 15:35:13
2010 From: gma...@ (Gregg Tavares (wrk))
Date: Wed, 11 Aug 2010 15:35:13 -0700
Subject: [Public WebGL] Problems with vertexAttrib4fv
In-Reply-To: <4C631C92.5060708@mechnicality.com>
References: <1981268605.251589.1281556485410.JavaMail.root@cm-mail03.mozilla.org> <4C630241.5090505@mechnicality.com> <4C631C92.5060708@mechnicality.com>
Message-ID:

On Wed, Aug 11, 2010 at 2:56 PM, alan...@ <alan...@> wrote:
> On 08/11/2010 01:37 PM, Kenneth Russell wrote:
>> Current Chromium builds, at least, should be emulating the OpenGL ES
>> 2.0 behavior for vertex attribute 0. You should not need to modify
>> your application code.
>
> I've just downloaded 6.0.492.0 (55767), which I believe is "LATEST",
> and it behaves the same way. Unfortunately I can't run it in GWT dev
> mode, so I can't easily see the vertex attribute indices.

Which switches are you passing to Chromium? Specifically, you might see
different results if you use --in-process-webgl or not. With the switch
the bug is in WebKit; without the switch the bug is in Chromium. Either
way we'll look into it, but it would help if you could try both switches.

> Any other behavior of a WebGL implementation is not spec compliant and
> a bug should be filed with the appropriate browser vendor.
>
> -Ken

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ste...@ Wed Aug 11 19:23:49 2010
From: ste...@ (Steve Baker)
Date: Wed, 11 Aug 2010 21:23:49 -0500
Subject: [Public WebGL] Problems with vertexAttrib4fv
In-Reply-To: <4C631B75.7020806@mechnicality.com>
References: <1981268605.251589.1281556485410.JavaMail.root@cm-mail03.mozilla.org> <4C630241.5090505@mechnicality.com> <4C634D70.4090000@sjbaker.org> <4C631B75.7020806@mechnicality.com>
Message-ID: <4C635B35.2090108@sjbaker.org>

alan...@ wrote:
> Hi Steve
>
> Which version of linux?

This is a machine I keep around with older software, hardware, etc. to
test with. It was once OpenSuSE 11.something 64-bit, but it's been
patched about quite a bit. uname -a says:

    Linux gnosis 2.6.25.20-0.7-default #1 SMP 2010-02-26 20:32:57 +0100 x86_64 x86_64 x86_64 GNU/Linux

glxinfo says:

    OpenGL vendor string: NVIDIA Corporation
    OpenGL renderer string: GeForce 6800/AGP/SSE2
    OpenGL version string: 2.1.2 NVIDIA 173.14.18

Minefield says:

    Mozilla/5.0 (X11; Linux x86_64; rv:2.0b4pre) Gecko/20100810 Minefield/4.0b4pre

It's quite impressive that WebGL runs on it at all!

-- Steve

From gma...@ Wed Aug 11 15:37:20 2010
From: gma...@ (Gregg Tavares (wrk))
Date: Wed, 11 Aug 2010 15:37:20 -0700
Subject: [Public WebGL] Problems with vertexAttrib4fv
In-Reply-To: <4C6321F9.1020505@mechnicality.com>
References: <2062755321.252423.1281561250305.JavaMail.root@cm-mail03.mozilla.org> <4C6321F9.1020505@mechnicality.com>
Message-ID:

On Wed, Aug 11, 2010 at 3:19 PM, alan...@ <alan...@> wrote:
> Hmm
>
> I suspect this may be an ATI issue.
> I added the following line:
>
>     gl.bindAttribLocation(shaderProgram, ATTR_INDEX_POSITION, ATTR_POSITION);
>
> where ATTR_INDEX_POSITION = 0 and ATTR_POSITION = "a_position"
>
> between attaching the shaders and linking the program, and it all
> burst into life on the aforementioned linux box. I'll just double-check
> it on the windows setup. Incidentally, the 'color' index is now 1 (it
> was 0 before, and is 2 on nvidia).
>
> I realize of course that this is a workaround, but at least I can get
> on with my stuff. Happy to continue debugging the original problem,
> because the way I see it Ken is right: I shouldn't have to do what I've
> done above.

Yes, you should not have to do this; Chromium and WebKit should already
be handling this case. Clearly there is a bug.

This is not a problem with ATI drivers. It's a difference between OpenGL
and OpenGL ES 2.0. OpenGL requires attrib 0 to be on. OpenGL ES 2.0 does
not.

> Alan
>
> On 08/11/2010 02:33 PM, Kenneth Russell wrote:
>> OpenGL ES 2.0 doesn't have any special semantics for vertex attribute
>> 0. I don't see anything in section 2.7 of the OpenGL ES 2.0 spec that
>> specifically mentions this.
>> http://www.opengl.org/sdk/docs/man/xhtml/glVertexAttrib.xml mentions
>> the desktop OpenGL semantics briefly at the bottom of the Description
>> section, but it doesn't explicitly mention that if you don't enable
>> vertex attribute 0 as an array you don't get any rendered output.
>>
>> See http://src.chromium.org/viewvc/chrome/trunk/src/gpu/command_buffer/service/gles2_cmd_decoder.cc?view=markup
>> , GLES2DecoderImpl::SimulateAttrib0, for an example of how this can be
>> emulated.
>>
>> -Ken
>>
>> On Wed, Aug 11, 2010 at 2:14 PM, Vladimir Vukicevic wrote:
>>> I'm trying to find the description of GL ES 2.0 behaviour for attrib
>>> 0 -- what section is it in?
>>> - Vlad
>>>
>>> [earlier quoted messages trimmed]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ala...@ Wed Aug 11 15:46:40 2010
From: ala...@ (Alan Chaney)
Date: Wed, 11 Aug 2010 15:46:40 -0700
Subject: [Public WebGL] Problems with vertexAttrib4fv
In-Reply-To:
References: <2062755321.252423.1281561250305.JavaMail.root@cm-mail03.mozilla.org> <4C6321F9.1020505@mechnicality.com>
Message-ID: <4C632850.7030201@mechnicality.com>

Hi Gregg

On 8/11/2010 3:37 PM, Gregg Tavares (wrk) wrote:
> On Wed, Aug 11, 2010 at 3:19 PM, alan...@ <alan...@> wrote:
>> Hmm
>>
>> I suspect this may be an ATI issue. I added the following line:
>>
>>     gl.bindAttribLocation(shaderProgram, ATTR_INDEX_POSITION, ATTR_POSITION);
>>
>> where ATTR_INDEX_POSITION = 0 and ATTR_POSITION = "a_position"
>>
>> between attaching the shaders and linking the program, and it all
>> burst into life on the aforementioned linux box. I'll just
>> double-check it on the windows setup. Incidentally, the 'color' index
>> is now 1 (it was 0 before, and is 2 on nvidia).
>
> Yes, you should not have to do this; Chromium and WebKit should
> already be handling this case. Clearly there is a bug.
>
> This is not a problem with ATI drivers. It's a difference between
> OpenGL and OpenGL ES 2.0. OpenGL requires attrib 0 to be on. OpenGL ES
> 2.0 does not.

It may not be a specific *bug* with the ATI drivers, but it seems that it
manifests on ATI because for some reason it chose to make the color index
0. Of course, it may manifest on Nvidia with a different shader -- it's
just that I've tried it with two Nvidia drivers and it seems to work.
Anyway, thanks to everybody for helping me work out how to work around it. "molly" is moving on. I hope to have a web demo online by the end of the month.

Regards

Alan

On 08/11/2010 02:33 PM, Kenneth Russell wrote:
> OpenGL ES 2.0 doesn't have any special semantics for vertex attribute 0.
> I don't see anything in section 2.7 of the OpenGL ES 2.0 spec that
> specifically mentions this.
> http://www.opengl.org/sdk/docs/man/xhtml/glVertexAttrib.xml mentions
> the desktop OpenGL semantics briefly at the bottom of the Description
> section, but it doesn't explicitly mention that if you don't enable
> vertex attribute 0 as an array you don't get any rendered output.
>
> See GLES2DecoderImpl::SimulateAttrib0 in
> http://src.chromium.org/viewvc/chrome/trunk/src/gpu/command_buffer/service/gles2_cmd_decoder.cc?view=markup
> for an example of how this can be emulated.
>
> -Ken

-----------------------------------------------------------
You are currently subscribed to public_webgl...@
To unsubscribe, send an email to majordomo...@ with
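The constant ("current") vertex attribute pattern at the root of this thread can be written out as a small helper. This is a sketch with illustrative names, not code from the thread: when the mesh has no per-vertex color data, the attribute array is disabled and a single default color supplied via vertexAttrib4fv.

```javascript
// Sketch: if there is no per-vertex color channel, disable the "a_color"
// attribute array and set a constant default color instead. The shader
// is assumed to declare an "a_color" attribute.
function applyDefaultColor(gl, program, defaultColor) {
  const colorIndex = gl.getAttribLocation(program, "a_color");
  if (colorIndex !== -1) {
    gl.disableVertexAttribArray(colorIndex);
    gl.vertexAttrib4fv(colorIndex, defaultColor);
  }
  return colorIndex;
}
```

Note that if the driver happens to assign a_color location 0, desktop OpenGL requires attribute 0 to be an enabled array, which is exactly the GL-versus-GL-ES incompatibility discussed in this thread.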
the following command in the body of your email:

From vla...@ Wed Aug 11 16:05:16 2010
From: vla...@ (Vladimir Vukicevic)
Date: Wed, 11 Aug 2010 16:05:16 -0700 (PDT)
Subject: [Public WebGL] Problems with vertexAttrib4fv
Message-ID: <746275680.253502.1281567916908.JavaMail.root@cm-mail03.mozilla.org>

Ah right, I had the GL/GL ES differences backwards. Argh, I thought we already handled this, but I guess not. Will fix!

- Vlad

From kbr...@ Wed Aug 11 17:25:35 2010
From: kbr...@ (Kenneth Russell)
Date: Wed, 11 Aug 2010 17:25:35 -0700
Subject: [Public WebGL] WebKit and Chromium WebGL updates
Message-ID: 

A brief note on recent updates to the WebKit WebGL implementation shared by Safari and Chromium, and one on
Chromium in particular.

- WebGL shader validation and translation via the ANGLE project is now
enabled by default in Chromium continuous builds. This means that all
shaders loaded into WebGL must conform to the WebGL shading language
requirements, which are a superset of the OpenGL ES 2.0 shading language
specification. From a practical standpoint, you should only have to add
the following line to your fragment shader to make it compliant:

    precision highp float;

or

    precision mediump float;

If you need to make your shader temporarily work both on compliant and
non-compliant WebGL implementations, you can add the following:

    #ifdef GL_ES
    precision highp float;
    #endif

If you encounter any problems with this change, please post to the list.
You can temporarily disable the shader translator if necessary by
passing the command-line argument --disable-glsl-translator.

The shader translator should be enabled imminently in WebKit nightly
builds as well.

- Support for the obsolete texImage2D and texSubImage2D variants, which
did not include the format, internal format or type arguments and which
accepted premultiplyAlpha and flipY as optional arguments, has been
removed.

- The obsolete WebGLArray type names, as well as WebGLArrayBuffer, have
been removed. The current WebGL and TypedArray draft specifications
describe the supported names.

All of these changes are either present in the Chromium continuous
builds and WebKit nightly builds, or will show up in the next one.
Please post if you run into any problems.
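A complete minimal fragment shader using the guarded form above might look like the following (the shader body is an arbitrary example, held in a JS string as it would be before being passed to shaderSource):

```javascript
// Fragment shader source following the guarded pattern from the
// announcement: builds that define GL_ES get the now-required default
// float precision; older non-compliant builds skip the statement.
const fragmentSrc = [
  "#ifdef GL_ES",
  "precision mediump float;",
  "#endif",
  "void main() {",
  "  gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);",
  "}",
].join("\n");
```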
Thanks,

-Ken

From zhe...@ Wed Aug 11 18:55:22 2010
From: zhe...@ (Mo, Zhenyao)
Date: Wed, 11 Aug 2010 18:55:22 -0700
Subject: [Public WebGL] js binding: function argument type checking
Message-ID: 

Currently, for a function's signature in WebKit, if an argument's type is a wrapper type (those JS objects that wrap C++ objects, for example JSWebGLProgram, JSCSSRule, etc.) and the input object's type does not match the signature, the input is cast to null and no TypeError is raised.

Even though WebKit doesn't use Web IDL directly, I think we can look to the Web IDL spec for guidance on what the behavior should be. According to the Web IDL spec (http://dev.w3.org/2006/webapi/WebIDL/), unless [AllowAny] is put in the signature, a TypeError should be raised if an argument type does not match its signature. The new automatic code generation for overloaded functions in WebKit DOES raise TypeError when it fails to determine which overloaded variant to call.

We definitely need to do the strict type checking for WebGL functions. However, changing the default behavior of the IDL code generators might have a significant compatibility impact. It isn't clear to us whether the current behavior is intentional. If yes, please let us know and we will try to fix the WebGL part only. Otherwise we will modify the general rule instead.
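The strict checking being proposed can be illustrated with a plain-JS sketch. This is not WebKit binding code; the stub class and function are hypothetical stand-ins showing the behavioral difference: raise TypeError on a wrapper-type mismatch instead of silently coercing the argument to null.

```javascript
// Illustration only: a binding that rejects mismatched wrapper types.
class WebGLProgramStub {} // stands in for a JS wrapper around a C++ object

function checkedUseProgram(gl, program) {
  // Null stays legal (it unbinds the program); anything else must be
  // the right wrapper type, otherwise raise TypeError per Web IDL.
  if (program !== null && !(program instanceof WebGLProgramStub)) {
    throw new TypeError("Argument 1 of useProgram is not a WebGLProgram.");
  }
  gl.useProgram(program);
}
```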
From zhe...@ Wed Aug 11 19:07:03 2010
From: zhe...@ (Mo, Zhenyao)
Date: Wed, 11 Aug 2010 19:07:03 -0700
Subject: [Public WebGL] Re: js binding: function argument type checking
Message-ID: 

Sorry, the email was meant for webkit-dev.

Mo
From ced...@ Wed Aug 11 20:12:42 2010
From: ced...@ (Cedric Vivier)
Date: Thu, 12 Aug 2010 11:12:42 +0800
Subject: [Public WebGL] WebKit and Chromium WebGL updates
Message-ID: 

On Thu, Aug 12, 2010 at 08:25, Kenneth Russell wrote:
> A brief note on recent updates to the WebKit WebGL implementation
> shared by Safari and Chromium, and one on Chromium in particular.

Awesome. Is there any other planned breaking change pending (I can't think of any right now), or are we pretty much "done"?

Regards,

From ala...@ Thu Aug 12 06:02:58 2010
From: ala...@ (ala...@)
Date: Thu, 12 Aug 2010 06:02:58 -0700
Subject: [Public WebGL] WebKit and Chromium WebGL updates
Message-ID: <4C63F102.5090706@mechnicality.com>

Hi Ken

This is exciting news. Do you know when these changes are likely to reach the 'dev' channel?

Regards

Alan

From kbr...@ Thu Aug 12 14:04:07 2010
From: kbr...@ (Kenneth Russell)
Date: Thu, 12 Aug 2010 14:04:07 -0700
Subject: [Public WebGL] WebKit and Chromium WebGL updates
Message-ID: 

On Wed, Aug 11, 2010 at 8:12 PM, Cedric Vivier wrote:
> Is there any other planned breaking change pending (can't think of any
> now) or we're pretty much "done"?

This should be it for the incompatible signature changes, though there will probably be semantic changes to a few APIs, in particular clarification around error reporting behavior.

-Ken

From kbr...@ Thu Aug 12 14:05:29 2010
From: kbr...@ (Kenneth Russell)
Date: Thu, 12 Aug 2010 14:05:29 -0700
Subject: [Public WebGL] WebKit and Chromium WebGL updates
In-Reply-To: <4C63F102.5090706@mechnicality.com>
Message-ID: 

On Thu, Aug 12, 2010 at 6:02 AM, alan...@ wrote:
> This is exciting news. Do you know when these changes are likely to
> reach the 'dev' channel?

I'm not sure, but probably within one or two weeks.
If you're developing with WebGL on Chromium, I can't stress strongly enough: please use the top-of-tree builds as documented on http://khronos.org/webgl/wiki/Getting_a_WebGL_Implementation , or, if you're on Windows, the new Canary build: http://blog.chromium.org/2010/08/google-chrome-in-coal-mine.html .

-Ken

> Regards
>
> Alan
>
> On 08/11/2010 05:25 PM, Kenneth Russell wrote:
>> [announcement quoted in full above; snipped]

From enn...@ Fri Aug 13 11:03:23 2010
From: enn...@ (Adrienne Walker)
Date: Fri, 13 Aug 2010 11:03:23 -0700
Subject: [Public WebGL] texImage2D calls with HTMLVideoElement
Message-ID:

I was looking at the WebGL spec about the tex(Sub)Image2D calls and had two questions:

1) Why do the texImage2D calls that take HTMLMediaElement parameters allow the specification of format and type? Along those same lines, why is the spec so insistent that image data must be downconverted to lower precision if it is specified? The only use case I can imagine is that with a packed format you save on upload bandwidth and texture memory at the expense of conversion, but only if the HTMLMediaElement is being composited on the CPU, which is likely not to be the case for most of those element types in the future. The element's width and height are already implicitly used as parameters in the call--why not the format as well?

2) Regarding the texImage2D call with the HTMLVideoElement, the spec is unclear about how to handle poster attributes. Is this intentionally left as an implementation detail?

Forgive me if either of these has been discussed before. I tried searching the email archives, but couldn't turn anything up.
Cheers,
-Adrienne

From bja...@ Fri Aug 13 12:08:11 2010
From: bja...@ (Benoit Jacob)
Date: Fri, 13 Aug 2010 12:08:11 -0700 (PDT)
Subject: [Public WebGL] Effects of Completeness on Texture Image Specification
Message-ID: <892318809.272264.1281726491598.JavaMail.root@cm-mail03.mozilla.org>

Hi List,

I am reading the GL ES 2.0.24 spec and find this at the end of section 3.7.10:

"An implementation may allow a texture image array of level one or greater to be created only if a complete set of image arrays consistent with the requested array can be supported."

If we want WebGL to run identically on all GL ES implementations, I suppose that means we have to be strictest here, that is, require that condition in texImage2D and friends. Do you agree?

Possibly related question: the test conformance/texture-active-bind.html is doing this sub-test:

gl.copyTexImage2D(gl.TEXTURE_2D, 1, gl.RGBA, 0, 0, 5, 3, 0);
glErrorShouldBe(gl, gl.INVALID_VALUE,
    "copyTexImage2D with NPOT texture with level > 0 should return INVALID_VALUE.");

I can't find such a condition being required anywhere else in the GL ES spec. Do I correctly infer that the author of this test was already enforcing what I am proposing at the beginning of this email?
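For reference, the "complete set of image arrays" in section 3.7.10 means every level from the base size down to 1x1, halving (and flooring) each dimension per level; computing that chain is straightforward (illustrative helper, not a WebGL API):

```javascript
// Compute the full mipmap chain implied by a w x h base level:
// each level halves both dimensions (rounding down, clamping at 1)
// until a 1x1 level is reached.
function mipChain(w, h) {
  var levels = [[w, h]];
  while (w > 1 || h > 1) {
    w = Math.max(1, w >> 1);
    h = Math.max(1, h >> 1);
    levels.push([w, h]);
  }
  return levels;
}

// e.g. mipChain(8, 4) → [[8,4],[4,2],[2,1],[1,1]]
```

Under the strict reading proposed above, an implementation could reject a texImage2D at level 1 or greater unless all of these levels can be supported together.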
Cheers,
Benoit

From cma...@ Fri Aug 13 12:24:57 2010
From: cma...@ (Chris Marrin)
Date: Fri, 13 Aug 2010 12:24:57 -0700
Subject: [Public WebGL] Re: [Public WebGL] Effects of Completeness on Texture Image Specification
In-Reply-To: <892318809.272264.1281726491598.JavaMail.root@cm-mail03.mozilla.org>
References: <892318809.272264.1281726491598.JavaMail.root@cm-mail03.mozilla.org>
Message-ID:

On Aug 13, 2010, at 12:08 PM, Benoit Jacob wrote:
> [message quoted in full above; snipped]

The NPOT support in WebGL (as in GL ES 2.0) is restricted to non-mipmapped textures. That's the reason for that test...
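A common way to stay inside that restriction is to check for power-of-two dimensions before asking for mipmaps; a sketch (the helper and the parameter object are illustrative, though the enum names mirror the standard GL ones):

```javascript
// WebGL 1.0 (like GL ES 2.0) only supports mipmapping and REPEAT
// wrapping for power-of-two textures. This helper picks safe texture
// parameters based on the image dimensions.
function isPowerOfTwo(n) {
  return n > 0 && (n & (n - 1)) === 0;
}

function chooseTexParams(width, height) {
  if (isPowerOfTwo(width) && isPowerOfTwo(height)) {
    return { minFilter: "LINEAR_MIPMAP_LINEAR", wrap: "REPEAT", mipmap: true };
  }
  // NPOT: no mipmaps, clamp to edge, non-mipmapped filtering only.
  return { minFilter: "LINEAR", wrap: "CLAMP_TO_EDGE", mipmap: false };
}

// Usage against a real context would look like:
//   var p = chooseTexParams(img.width, img.height);
//   gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl[p.minFilter]);
//   gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl[p.wrap]);
//   gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl[p.wrap]);
//   if (p.mipmap) gl.generateMipmap(gl.TEXTURE_2D);
```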
-----
~Chris
cmarrin...@

From kbr...@ Fri Aug 13 12:27:48 2010
From: kbr...@ (Kenneth Russell)
Date: Fri, 13 Aug 2010 12:27:48 -0700
Subject: [Public WebGL] texImage2D calls with HTMLVideoElement
In-Reply-To: References: Message-ID:

On Fri, Aug 13, 2010 at 11:03 AM, Adrienne Walker wrote:
> I was looking at the WebGL spec about the tex(Sub)Image2D calls and
> had two questions:
>
> 1) Why do the texImage2D calls that take HTMLMediaElement parameters
> allow the specification of format and type? Along those same lines,
> why is the spec so insistent that image data must be downconverted to
> lower precision if it is specified?

Format (and internalformat) and type are specifiable so that authors can choose to use less VRAM for a given texture. The spec requires that the incoming image be converted to the given precision for repeatability across platforms; it would be bad if on one platform, uploading to an UNSIGNED_SHORT_5_5_5_1 texture preserved 8 bits of data per channel, but on another, it dropped bits. An author might develop on the first platform, expect a certain rendering fidelity, and then later run on the second platform and find that their game unexpectedly looked bad.

> The only use case I can imagine is that with a packed format you save
> on upload bandwidth and texture memory at the expense of conversion,
> but only if the HTMLMediaElement is being composited on the CPU, which
> is likely not to be the case for most of those element types in the
> future. The element's width and height are already implicitly used as
> parameters in the call--why not the format as well?
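The repeatability point above can be made concrete: uploading to an UNSIGNED_SHORT_5_5_5_1 texture quantizes each color channel to 5 bits and alpha to 1 bit. A sketch of that quantization in plain arithmetic (illustrative helpers, not the WebGL API itself):

```javascript
// Quantize one RGBA8 texel the way an upload to an
// UNSIGNED_SHORT_5_5_5_1 texture must: 5 bits per color channel,
// 1 bit of alpha, packed into a 16-bit value.
function packRGBA5551(r8, g8, b8, a8) {
  var r = r8 >> 3, g = g8 >> 3, b = b8 >> 3, a = a8 >> 7;
  return (r << 11) | (g << 6) | (b << 1) | a;
}

function unpackRGBA5551(p) {
  var r = (p >> 11) & 31, g = (p >> 6) & 31, b = (p >> 1) & 31, a = p & 1;
  // Expand the 5-bit channels back to 8 bits by bit replication.
  return [(r << 3) | (r >> 2), (g << 3) | (g >> 2), (b << 3) | (b >> 2), a * 255];
}
```

The round trip is lossy by design; the spec's insistence is that it be lossy in the same way everywhere.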
If a packed format is requested, the conversion can be done on the GPU, although doing so is a little more involved; the source image or video element would be bound to a framebuffer object and glCopyTexSubImage2D used to populate the destination texture with the given internalformat. Once it was decided (during a recent F2F of the WebGL working group) to allow the type of the texture to be specified, adding both the format and internalformat parameters was required for future compatibility.

> 2) Regarding the texImage2D call with the HTMLVideoElement, the spec
> is unclear about how to handle poster attributes. Is this
> intentionally left as an implementation detail?

I'm sure this is an oversight on our part. If you have any suggestions on additional spec text please let us know.

-Ken

> Forgive me if either of these has been discussed before. I tried
> searching the email archives, but couldn't turn anything up.
>
> Cheers,
> -Adrienne

From bja...@ Fri Aug 13 12:35:57 2010
From: bja...@ (Benoit Jacob)
Date: Fri, 13 Aug 2010 12:35:57 -0700 (PDT)
Subject: [Public WebGL] Re: [Public WebGL] Effects of Completeness on Texture Image Specification
In-Reply-To: Message-ID: <607888621.272528.1281728157914.JavaMail.root@cm-mail03.mozilla.org>

----- Original Message -----
> On Aug 13, 2010, at 12:08 PM, Benoit Jacob wrote:
> > [message quoted in full above; snipped]
>
> The NPOT support in WebGL (as in GL ES 2.0) is restricted to
> non-mipmapped textures. That's the reason for that test...

The GL ES 2.0.24 says, in section 3.8.2, that a NPOT mipmapped texture should be rendered as if it were opaque black. (Even though such a NPOT mipmapped texture can still be "complete" in the sense of section 3.7.10.)

I can't see where in the spec it is said that such a texture creation call should generate an error?

Benoit

From ala...@ Fri Aug 13 13:02:09 2010
From: ala...@ (ala...@)
Date: Fri, 13 Aug 2010 13:02:09 -0700
Subject: [Public WebGL] Issue with enable vertex attributes and re-binding VBOs
Message-ID: <4C65A4C1.4000903@mechnicality.com>

Hi List

This may be a very dumb question, but is it actually specifically stated anywhere that you must enable vertex attributes each time you bind to a buffer target?
I've found (empirically) that unless you re-enable all the vertex attributes each time you *re-use* a buffer, the following drawElements/drawArrays displays the data from the last VBO originally created. In other words, if you create VBOs A, B and C and then redraw A, B or C, each time you redraw them you must go through the enableVertexAttribArray calls, even if the bindings are unchanged. If you don't, when you re-draw A or B you get the data from C.

What happened was that the first time I drew a scene everything was fine, but as soon as it was redrawn (e.g. camera move) all the rendered objects changed to the geometry of the last object originally created. I've fixed it by now specifically enabling vertex attributes as required (and then disabling them again). My inspiration in this was the San Angeles demo, which appears to work this way.

I have no idea whether this is GL ES 2.0 behavior - I have found nothing in either the spec or the man pages to indicate that the above would happen. There's an example in the GL 3.0/3.1 Programmer's Guide which only enables the vertex attributes once, but of course, that's not GL ES 2/WebGL.

Regards

Alan

From vla...@ Fri Aug 13 13:20:50 2010
From: vla...@ (Vladimir Vukicevic)
Date: Fri, 13 Aug 2010 13:20:50 -0700 (PDT)
Subject: [Public WebGL] Issue with enable vertex attributes and re-binding VBOs
In-Reply-To: <4C65A4C1.4000903@mechnicality.com>
Message-ID: <1251269623.272942.1281730849992.JavaMail.root@cm-mail03.mozilla.org>

Hmm, do you have a testcase or a code snippet that shows what you're trying to do? You shouldn't need to re-enable vertex attributes, unless there's a bug somewhere...
- Vlad

----- Original Message -----
> [message quoted in full above; snipped]

From mar...@ Fri Aug 13 13:31:10 2010
From: mar...@ (Martin Kliehm)
Date: Fri, 13 Aug 2010 22:31:10 +0200
Subject: [Public WebGL] Participation on a WebGL panel at SXSW 2011
Message-ID: <07351BF5-0930-46FE-B16C-B5D3FFF66783@kliehm.com>

Apologies for being slightly off-topic, but at least it's about a public discussion of WebGL: I submitted a panel proposal for SXSW in March 2011 in Austin/TX.[1] It's an excellent opportunity to reach out to hundreds of early-adopting web developers. I believe it would be essential to have a member of this working group on the panel. Any volunteers or suggestions? BTW, voting and comments on the proposals are open until end of August, I'd appreciate your feedback. ;)

Regards, Martin

[1] http://panelpicker.sxsw.com/ideas/view/6777

From ala...@ Fri Aug 13 14:02:41 2010
From: ala...@ (ala...@)
Date: Fri, 13 Aug 2010 14:02:41 -0700
Subject: [Public WebGL] Issue with enable vertex attributes and re-binding VBOs
In-Reply-To: <1251269623.272942.1281730849992.JavaMail.root@cm-mail03.mozilla.org>
References: <1251269623.272942.1281730849992.JavaMail.root@cm-mail03.mozilla.org>
Message-ID: <4C65B2F1.2010908@mechnicality.com>

On 08/13/2010 01:20 PM, Vladimir Vukicevic wrote:
> Hmm, do you have a testcase or a code snippet that shows what you're trying to do? You shouldn't need to re-enable vertex attributes, unless there's a bug somewhere...
>
> - Vlad

Here's the generated Javascript for the most relevant part.
I've tried clarify with comments (*not* machine-generated :) ) Originally, the bit which is between XXXXs was where ===>> is in the code below. It gave the behavior as originally described. Where it is now gives the expected results. This snippet is a method which is called when the scenegraph is traversed. Regards Alan function $doHandleGeometry(this$static, node, pass){ var arrayBuffer, at, at$iterator, attributeIndex, colorFound, colorIndex, defColor, elementBuffer, enabledAttributes, enabledCount, gl, i, mvf, pmf, strideOffset, ulmodelView, ulperspective, vbo, vertexBuffer; vbo = dynamicCast(node.instance, 91); gl = this$static.displayContext.canvas_0.context; switch (pass.ordinal) { case 0: return; case 1: // Stuff to set up the transform. { ulperspective = dynamicCastJso(this$static.displayContext.vertexUniformMap.get('u_perspectiveMatrix')); ulmodelView = dynamicCastJso(this$static.displayContext.vertexUniformMap.get('u_modelViewMatrix')); pmf = convertToFloatArray(this$static.transformManager.perspectiveMatrix); gl.uniformMatrix4fv(ulperspective, false, create_2(pmf)); mvf = convertToFloatArray(this$static.transformManager.modelViewMatrix); gl.uniformMatrix4fv(ulmodelView, false, create_2(mvf)); checkError(gl, 'after setting matrices in doHandleGeometry'); // First time around vertexBuffer and elementBuffer are null, so they are created // and references stored so that subsequent times they can be re-used. vertexBuffer = null; if (!vbo.arrayBuffer) { vertexBuffer = gl.createBuffer(); gl.bindBuffer(34962, vertexBuffer); gl.bufferData(34962, create_2(vbo.arrayData), 35044); vbo.arrayBuffer = vertexBuffer; } // the VBO is taken from the 'arrayBuffer' property of the 'vbo' object. else { vertexBuffer = vbo.arrayBuffer; gl.bindBuffer(34962, vertexBuffer); } // Same again but with the element buffer. 
elementBuffer = null; if (!vbo.elementBuffer) { elementBuffer = gl.createBuffer(); gl.bindBuffer(34963, elementBuffer); gl.bufferData(34963, create_3(vbo.elementData), 35044); vbo.elementBuffer = elementBuffer; // ===>> XXXXXX.... stuff was here. } else { elementBuffer = vbo.elementBuffer; gl.bindBuffer(34963, elementBuffer); } checkError(gl, 'after binding to buffers'); // XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXxx // The next bit enables those attributes which are *used* by this buffer and *required* by the shader. // allows one generic shader to be used with objects with a 'mixed' attribute set. strideOffset = 0; colorFound = false; enabledAttributes = initDim(_3I_classLit, 241, -1, vbo.attributes.size_1(), 1); enabledCount = 0; for (at$iterator = vbo.attributes.iterator(); at$iterator.hasNext();) { at = dynamicCast(at$iterator.next_0(), 46); attributeIndex = gl.getAttribLocation(this$static.displayContext.shaderProgram, at.name_0); if (attributeIndex >= 0) { colorFound = $equals_3('a_color', at.name_0); gl.enableVertexAttribArray(attributeIndex); if (($clinit_354() , FLOAT_0) == at.type_0) { gl.vertexAttribPointer(attributeIndex, at.stride, 5126, false, vbo.vertexStride * 4, strideOffset); enabledAttributes[enabledCount] = attributeIndex; ++enabledCount; } else throw $UnsupportedOperationException(new UnsupportedOperationException, 'Can only handle FLOAT attributes'); } strideOffset += at.stride * 4; } // this bit sets a default color if there is no COLLADA COLOR semantic in the vertex data. 
if (!colorFound) { defColor = initValues(_3F_classLit, 240, -1, [0.20000000298023224, 0.800000011920929, 0.800000011920929, 1]); colorIndex = gl.getAttribLocation(this$static.displayContext.shaderProgram, 'a_color'); if (colorIndex != -1) { gl.disableVertexAttribArray(colorIndex); gl.vertexAttrib4fv(colorIndex, create_2(defColor)); } } checkError(gl, 'after bindingAttributes'); // XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX // The actual drawing happens here. gl.drawElements(4, vbo.elementData.length, 5123, 0); // Then disable the vertex attributes we enabled above for (i = 0; i < enabledCount; ++i) { gl.disableVertexAttribArray(enabledAttributes[i]); } checkError(gl, 'after drawArrays'); // and unbind the buffers. gl.bindBuffer(34962, null); gl.bindBuffer(34963, null); } break; // this is for a 'clean' pass which deletes buffers when the scenegraph is being changed. case 3: { arrayBuffer = vbo.arrayBuffer; if (arrayBuffer) { gl.deleteBuffer(arrayBuffer); vbo.arrayBuffer = null; } elementBuffer = vbo.elementBuffer; if (elementBuffer) { gl.deleteBuffer(elementBuffer); vbo.elementBuffer = null; } } } }

> ----- Original Message -----
> [message quoted in full above; snipped]

From vla...@ Fri Aug 13 14:47:08 2010
From: vla...@ (Vladimir Vukicevic)
Date: Fri, 13 Aug 2010 14:47:08 -0700 (PDT)
Subject: [Public WebGL] Issue with enable vertex attributes and re-binding VBOs
In-Reply-To: <4C65B2F1.2010908@mechnicality.com>
Message-ID: <616611915.273663.1281736028081.JavaMail.root@cm-mail03.mozilla.org>

Ugh, your generated code is extremely hard to read, but I think I understand; I think what you had before was just incorrect. The current attribute bindings are part of global state, not tied to a shader. The current buffer binding is only used to set up attribute bindings; that is, the current binding itself is not tied to the attribute arrays. Your code before was setting up and enabling attributes only the first time, when you were creating the VBOs, and as such they would have always pointed to the first VBO that was created.
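This description of the state machine can be modeled with a toy object: `vertexAttribPointer` snapshots whichever buffer is bound at call time, and later `bindBuffer` calls do not retroactively change it. A simplified model for illustration, not real WebGL:

```javascript
// Toy model of the GL vertex-attribute state machine. Real WebGL
// behaves analogously: vertexAttribPointer captures the buffer bound
// to ARRAY_BUFFER at call time; the attribute binding is global state,
// independent of any shader program.
function makeFakeGL() {
  return {
    boundArrayBuffer: null,
    attribs: {},  // attribute index -> buffer captured at pointer time
    bindBuffer: function (buf) { this.boundArrayBuffer = buf; },
    vertexAttribPointer: function (index) {
      this.attribs[index] = this.boundArrayBuffer;  // snapshot, not a live link
    }
  };
}

var gl = makeFakeGL();
gl.bindBuffer("VBO-A");
gl.vertexAttribPointer(0);  // attribute 0 now sources from VBO-A
gl.bindBuffer("VBO-B");     // re-binding alone changes nothing for attribute 0
```

This is why setting up the pointers only at VBO-creation time leaves every attribute pointing at whichever buffer happened to be bound then; each draw from a different VBO must call `vertexAttribPointer` again while that VBO is bound.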
If I didn't understand that correctly, you'll have to write out some hand-written code or some pseudocode :) - Vlad ----- Original Message ----- > On 08/13/2010 01:20 PM, Vladimir Vukicevic wrote: > > Hmm, do you have a testcase or a code snippet that shows what you're > > trying to do? You shouldn't need to re-enable vertex attributes, > > unless there's a bug somewhere.. > > > > - Vlad > > > Here's the generated Javascript for the most relevant part. I've tried > clarify with comments (*not* machine-generated :) ) > Originally, the bit which is between XXXXs was where ===>> is in the > code below. It gave the behavior as originally described. Where it is > now gives the expected results. > > This snippet is a method which is called when the scenegraph is > traversed. > > Regards > > Alan > > > function $doHandleGeometry(this$static, node, pass){ > var arrayBuffer, at, at$iterator, attributeIndex, colorFound, > colorIndex, defColor, elementBuffer, enabledAttributes, enabledCount, > gl, i, mvf, pmf, strideOffset, ulmodelView, ulperspective, vbo, > vertexBuffer; > vbo = dynamicCast(node.instance, 91); > gl = this$static.displayContext.canvas_0.context; > switch (pass.ordinal) { > case 0: > return; > case 1: > // Stuff to set up the transform. > { > ulperspective = > dynamicCastJso(this$static.displayContext.vertexUniformMap.get('u_perspectiveMatrix')); > ulmodelView = > dynamicCastJso(this$static.displayContext.vertexUniformMap.get('u_modelViewMatrix')); > pmf = > convertToFloatArray(this$static.transformManager.perspectiveMatrix); > gl.uniformMatrix4fv(ulperspective, false, create_2(pmf)); > mvf = > convertToFloatArray(this$static.transformManager.modelViewMatrix); > gl.uniformMatrix4fv(ulmodelView, false, create_2(mvf)); > checkError(gl, 'after setting matrices in doHandleGeometry'); > > // First time around vertexBuffer and elementBuffer are null, so they > are created > // and references stored so that subsequent times they can be re-used. 
> > vertexBuffer = null; > if (!vbo.arrayBuffer) { > vertexBuffer = gl.createBuffer(); > gl.bindBuffer(34962, vertexBuffer); > gl.bufferData(34962, create_2(vbo.arrayData), 35044); > vbo.arrayBuffer = vertexBuffer; > } > > // the VBO is taken from the 'arrayBuffer' property of the 'vbo' > object. > else { > vertexBuffer = vbo.arrayBuffer; > gl.bindBuffer(34962, vertexBuffer); > } > > // Same again but with the element buffer. > > elementBuffer = null; > if (!vbo.elementBuffer) { > elementBuffer = gl.createBuffer(); > gl.bindBuffer(34963, elementBuffer); > gl.bufferData(34963, create_3(vbo.elementData), 35044); > vbo.elementBuffer = elementBuffer; > > // ===>> XXXXXX.... stuff was here. > } > else { > elementBuffer = vbo.elementBuffer; > gl.bindBuffer(34963, elementBuffer); > } > checkError(gl, 'after binding to buffers'); > > // XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXxx > // The next bit enables those attributes which are *used* by this > buffer > and *required* by the shader. > // allows one generic shader to be used with objects with a 'mixed' > attribute set. 
> > strideOffset = 0; > colorFound = false; > enabledAttributes = initDim(_3I_classLit, 241, -1, > vbo.attributes.size_1(), 1); > enabledCount = 0; > for (at$iterator = vbo.attributes.iterator(); > at$iterator.hasNext();) { > at = dynamicCast(at$iterator.next_0(), 46); > attributeIndex = > gl.getAttribLocation(this$static.displayContext.shaderProgram, > at.name_0); > if (attributeIndex >= 0) { > colorFound = $equals_3('a_color', at.name_0); > gl.enableVertexAttribArray(attributeIndex); > if (($clinit_354() , FLOAT_0) == at.type_0) { > gl.vertexAttribPointer(attributeIndex, at.stride, 5126, > false, vbo.vertexStride * 4, strideOffset); > enabledAttributes[enabledCount] = attributeIndex; > ++enabledCount; > } > else > throw $UnsupportedOperationException(new > UnsupportedOperationException, 'Can only handle FLOAT attributes'); > } > strideOffset += at.stride * 4; > } > > // this bit sets a default color if there is no COLLADA COLOR semantic > in the vertex data. > > if (!colorFound) { > defColor = initValues(_3F_classLit, 240, -1, > [0.20000000298023224, 0.800000011920929, 0.800000011920929, 1]); > colorIndex = > gl.getAttribLocation(this$static.displayContext.shaderProgram, > 'a_color'); > if (colorIndex != -1) { > gl.disableVertexAttribArray(colorIndex); > gl.vertexAttrib4fv(colorIndex, create_2(defColor)); > } > } > checkError(gl, 'after bindingAttributes'); > // XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX > > > // The actual drawing happens here. > gl.drawElements(4, vbo.elementData.length, 5123, 0); > > // Then disable the vertex attributes we enabled above > for (i = 0; i < enabledCount; ++i) { > gl.disableVertexAttribArray(enabledAttributes[i]); > } > checkError(gl, 'after drawArrays'); > > // and unbind the buffers. > gl.bindBuffer(34962, null); > gl.bindBuffer(34963, null); > } > > break; > > // this is for a 'clean' pass which deletes buffers when the > scenegraph > is being changed. 
> case 3: > { > arrayBuffer = vbo.arrayBuffer; > if (arrayBuffer) { > gl.deleteBuffer(arrayBuffer); > vbo.arrayBuffer = null; > } > elementBuffer = vbo.elementBuffer; > if (elementBuffer) { > gl.deleteBuffer(elementBuffer); > vbo.elementBuffer = null; > } > } > > } > } > > > ----- Original Message ----- > > > >> Hi List > >> > >> This may be a very dumb question, but is it actually specifically > >> stated > >> anywhere that you must enable vertex attributes each time you bind > >> to > >> a > >> buffer target? > >> > >> I've found (empirically) that unless you re-enable all the vertex > >> attributes each time you *re-use* a buffer, the following > >> drawElements/drawArrays displays the data from the last VBO > >> originally > >> created. In other words, if you create VBO A, B and C and then > >> redraw > >> A, > >> B or C, each time you redraw them you must go through the > >> enableVertexAttribute stuff, even if the bindings are unchanged. If > >> you > >> don't when you re-draw A or B you get the data from C. > >> > >> What happened was the the first time I drew a scene everything was > >> fine, > >> but as soon as it was redrawn (e.g. camera move) all the rendered > >> objects changed to the geometry of the last object originally > >> created. > >> I've fixed it by now specifically enabling vertex attributes as > >> required > >> (and then disabling them again). My inspiration in this was the San > >> Angeles demo which appears to work this way. > >> > >> I have no idea whether this is GL ES 2.0 behavior - I have found > >> nothing > >> in either the spec or the man pages to indicate that the above > >> would > >> happen. There's an example in the GL3.0/3.1 Programmer's guide > >> which > >> only enables the vertex attributes once, but of course, that's not > >> GL > >> ES > >> 2/WebGL. 
> >> > >> Regards > >> > >> Alan > >> > >> > >> > >> > >> ----------------------------------------------------------- > >> You are currently subscribed to public_webgl...@ > >> To unsubscribe, send an email to majordomo...@ with > >> the following command in the body of your email: > >> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cal...@ Fri Aug 13 15:03:54 2010 From: cal...@ (Mark Callow) Date: Fri, 13 Aug 2010 15:03:54 -0700 Subject: [Public WebGL] Re: [Public WebGL] Re: [Public WebGL] Effects of Completeness on Texture Image Specification In-Reply-To: <607888621.272528.1281728157914.JavaMail.root@cm-mail03.mozilla.org> References: <607888621.272528.1281728157914.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <4C65C14A.1050003@hicorp.co.jp> On 2010/08/13 12:35, Benoit Jacob wrote: > ... > The GL ES 2.0.24 spec says, in section 3.8.2, that an NPOT mipmapped texture should be rendered as if it were opaque black. Section 3.8.2 is talking about fragment shader behavior in the event of sampling improperly specified textures. > (Even though such an NPOT mipmapped texture can still be "complete" in the sense of section 3.7.10). > > I can't see where in the spec it is said that such a texture creation call should generate an error? The test you mentioned is not creating a texture, it is attempting to copy to the level 1 texture image of an NPOT texture. Section 3.7.1 says glTexImage2D returns INVALID_VALUE if level > 0 for NPOT textures. Section 3.7.2 says the level parameter of glCopyTexImage2D is interpreted in exactly the same way as the level parameter of glTexImage2D. Hence the check for INVALID_VALUE in the test. Section 3.7.11 says glGenerateMipmap returns INVALID_OPERATION for NPOT textures. What is not clear about this?
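(Aside: the power-of-two check underlying all of these NPOT rules is a one-liner. The sketch below is illustrative only; the helper name is invented and is not part of the spec or of any WebGL implementation.)

```javascript
// Invented helper, not from the spec: a texture is NPOT if either
// dimension is not a power of two. Per ES 2.0 section 3.7.1, texImage2D
// with level > 0 on such a texture generates INVALID_VALUE, and per
// section 3.7.11, generateMipmap on it generates INVALID_OPERATION.
function isNpot(width, height) {
  function isPot(n) { return n > 0 && (n & (n - 1)) === 0; }
  return !isPot(width) || !isPot(height);
}
```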
There is no way to load images to anything other than level 0 of an NPOT texture. Section 3.8.2 comes into play if the application has specified a mipmapped minification filter or a wrap mode for an NPOT texture. TexParameter does not flag an error because the texture image(s) might well be changed prior to use in rendering. Regards -Mark -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 392 bytes Desc: not available URL: From ste...@ Fri Aug 13 19:51:13 2010 From: ste...@ (Steve Baker) Date: Fri, 13 Aug 2010 21:51:13 -0500 Subject: [Public WebGL] Participation on a WebGL panel at SXSW 2011 In-Reply-To: <07351BF5-0930-46FE-B16C-B5D3FFF66783@kliehm.com> References: <07351BF5-0930-46FE-B16C-B5D3FFF66783@kliehm.com> Message-ID: <4C6604A1.6070202@sjbaker.org> I'm not sure I qualify as "a member of this working group" - but I do at least live in Austin, TX...and I'm definitely an early-adopting web developer. Any help I can provide will be willingly given. -- Steve Martin Kliehm wrote: > Apologies for being slightly off-topic, but at least it's about a > public discussion of WebGL: I submitted a panel proposal for SXSW in > March 2011 in Austin/TX.[1] It's an excellent opportunity to reach out > to hundreds of early-adopting web developers. I believe it would be > essential to have a member of this working group on the panel. Any > volunteers or suggestions? BTW, voting and comments on the proposals > are open until end of August, I'd appreciate your feedback.
;) > > Regards, > Martin > > [1] http://panelpicker.sxsw.com/ideas/view/6777 > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Fri Aug 13 18:43:40 2010 From: kbr...@ (Kenneth Russell) Date: Fri, 13 Aug 2010 18:43:40 -0700 Subject: [Public WebGL] Demos for Khronos WebGL repository Message-ID: All, During this week's WebGL working group conference call, it was noted that the current demo repository ( http://www.khronos.org/webgl/wiki/Demo_Repository ) doesn't really reflect the state of the art of WebGL. Would any of you be willing to contribute demonstration code showing off what WebGL can do? Ideally the demos would be small, self-contained and flashy, and good starting points for learning WebGL. Jacob Seidelin's Worlds of WebGL or Charles Ying's port of Chocolux are two examples which come to mind. In order for us to be able to incorporate the code into the Khronos repository, we'd need the following: - All of the resources would need to be able to be checked in to the Khronos tree. No external dependencies. Links to sites containing more up-to-date code, etc. would be fine. - All code would need to be BSD licensed. - All art assets would need appropriate licensing terms (presumably the Creative Commons "Attribution" license, for parity with the BSD license). - The code would need to be fully compliant with the WebGL specification and not rely on any bugs or lack of input validation in current WebGL implementations. Ideally it would be well-commented to also act as a tutorial. We'll gladly offer commit access to the public WebGL repository to any contributor, which should ease updates of any contributed code. This would be a big help in making the coming launch of WebGL 1.0 a success. Thanks in advance for your help.
-Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Sat Aug 14 01:20:53 2010 From: ste...@ (Steve Baker) Date: Sat, 14 Aug 2010 03:20:53 -0500 Subject: [Public WebGL] Demos for Khronos WebGL repository In-Reply-To: References: Message-ID: <4C6651E5.6040201@sjbaker.org> I'd definitely like to help - and I'm pretty sure I can make that happen. But I'd prefer to keep our plans quiet until we're a bit closer to being done. I'll send you details off-list. What are the timescales here? -- Steve Kenneth Russell wrote: > All, > > During this week's WebGL working group conference call, it was noted > that the current demo repository ( > http://www.khronos.org/webgl/wiki/Demo_Repository ) doesn't really > reflect the state of the art of WebGL. Would any of you be willing to > contribute demonstration code showing off what WebGL can do? Ideally > the demos would be small, self-contained and flashy, and good starting > points for learning WebGL. Jacob Seidelin's Worlds of WebGL or Charles > Ying's port of Chocolux are two examples which come to mind. > > In order for us to be able to incorporate the code into the Khronos > repository, we'd need the following: > > - All of the resources would need to be able to be checked in to the > Khronos tree. No external dependencies. Links to sites containing more > up-to-date code, etc. would be fine. > - All code would need to be BSD licensed. > - All art assets would need appropriate licensing terms (presumably > the Creative Commons "Attribution" license, for parity with the BSD > license). > - The code would need to be fully compliant with the WebGL > specification and not rely on any bugs or lack of input validation in > current WebGL implementations. Ideally it would be well-commented to > also act as a tutorial.
> > We'll gladly offer commit access to the public WebGL repository to any > contributor, which should ease updates of any contributed code. > > This would be a big help in making the coming launch of WebGL 1.0 a > success. Thanks in advance for your help. > > -Ken > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From bja...@ Sat Aug 14 09:00:21 2010 From: bja...@ (Benoit Jacob) Date: Sat, 14 Aug 2010 09:00:21 -0700 (PDT) Subject: [Public WebGL] Re: [Public WebGL] Re: [Public WebGL] Effects of Completeness on Texture Image Specification In-Reply-To: <4C65C14A.1050003@hicorp.co.jp> Message-ID: <1763906254.277713.1281801621117.JavaMail.root@cm-mail03.mozilla.org> ----- Original Message ----- > On 2010/08/13 12:35, Benoit Jacob wrote: > > ... > > The GL ES 2.0.24 spec says, in section 3.8.2, that an NPOT mipmapped > > texture should be rendered as if it were opaque black. > Section 3.8.2 is talking about fragment shader behavior in the event > of > sampling improperly specified textures. > > (Even though such an NPOT mipmapped texture can still be "complete" > > in the sense of section 3.7.10). > > > > I can't see where in the spec it is said that such a texture > > creation call should generate an error? > The test you mentioned is not creating a texture, it is attempting to > copy to the level 1 texture image of an NPOT texture. Right, right, sorry for the imprecise phrasing. > > Section 3.7.1 says glTexImage2D returns INVALID_VALUE if level > 0 for > NPOT textures.
Thanks, I had missed that (in my defense, section 3.7.1 is long and it is written 3 pages into it; also the glTexImage2D man page is missing this information). It is indeed written at the top of page 67 in the 2.0.24 spec. > > Section 3.7.2 says the level parameter of glCopyTexImage2D is > interpreted in exactly the same way as the level parameter of > glTexImage2D. Hence the check for INVALID_VALUE in the test. Thanks a lot, I had missed that too. (Also, the glCopyTexImage2D man page is missing this information) > > Section 3.7.11 says glGenerateMipmap returns INVALID_OPERATION for > NPOT > textures. Yes, that I had seen. > > What is not clear about this? Nothing, I guess! Thanks for walking me through these parts of the spec! > 3.8.2 comes into play if the application has specified a mipmapped > minification filter or a wrap mode for an NPOT texture. TexParameter > does not flag an error because the texture image(s) might well be > changed prior to use in rendering. Yes, that part I understood. Thanks, Benoit > > Regards > > -Mark ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Sat Aug 14 15:31:47 2010 From: ste...@ (Steve Baker) Date: Sat, 14 Aug 2010 17:31:47 -0500 Subject: [Public WebGL] Shader reload. Message-ID: <4C671953.6080301@sjbaker.org> I've had a growing suspicion (which I finally verified today) that sometimes Firefox/Minefield does not reload shader files via HTTP when you force a reload - either by clicking on the Reload icon with the shift-key held down - or by typing Ctrl+Shift+R. Several times now I've uploaded a new shader file onto the server and the browser has continued to use the old one - even after forcing a reload. The only thing that seems to un-wedge it is to clear the shader cache. 
I have no special reload directives in my HTML file - and I'm loading shaders using: var shader = gl.createShader(type); gl.shaderSource ( shader, shaderSrc ) ; gl.compileShader ( shader ) ; I've definitely seen it happen several times with Vertex shaders...I'm not 100% sure whether I've seen it with Fragment shaders or not. HTML and JS files reload just fine - and I haven't noticed issues with textures. -- Steve ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Sat Aug 14 11:19:14 2010 From: vla...@ (Vladimir Vukicevic) Date: Sat, 14 Aug 2010 11:19:14 -0700 (PDT) Subject: [Public WebGL] Shader reload. In-Reply-To: <4C671953.6080301@sjbaker.org> Message-ID: <89238328.279130.1281809954290.JavaMail.root@cm-mail03.mozilla.org> Hmm, the browser doesn't have a concept of shader files -- how are you loading your shader data, and how is it being served? - Vlad ----- Original Message ----- > I've had a growing suspicion (which I finally verified today) that > sometimes Firefox/Minefield does not reload shader files via HTTP when > you force a reload - either by clicking on the Reload icon with the > shift-key held down - or by typing Ctrl+Shift+R. Several times now > I've uploaded a new shader file onto the server and the browser has > continued to use the old one - even after forcing a reload. The only > thing that seems to un-wedge it is to clear the shader cache. > > I have no special reload directives in my HTML file - and I'm loading > shaders using: > > var shader = gl.createShader(type); > gl.shaderSource ( shader, shaderSrc ) ; > gl.compileShader ( shader ) ; > > I've definitely seen it happen several times with Vertex shaders...I'm > not 100% sure whether I've seen it with Fragment shaders or not. HTML > and JS files reload just fine - and I haven't noticed issues with > textures. 
> > -- Steve > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Sat Aug 14 11:31:52 2010 From: vla...@ (Vladimir Vukicevic) Date: Sat, 14 Aug 2010 11:31:52 -0700 (PDT) Subject: [Public WebGL] Shader reload. In-Reply-To: <4C671953.6080301@sjbaker.org> Message-ID: <854152364.279167.1281810712606.JavaMail.root@cm-mail03.mozilla.org> Whoops, hit send too soon. If you're using XMLHttpRequest to load shader data via GET requests, the browser is likely caching the results -- you'll have to either send cache headers from the server, e.g.: Cache-Control: no-cache, must-revalidate Pragma: no-cache and/or add an If-Modified-Since header to the request with a time in the past. Reload/Shift-Reload won't make any difference here, because that only applies to the resources in use by the page itself, and not any network requests it might make. - Vlad ----- Original Message ----- > I've had a growing suspicion (which I finally verified today) that > sometimes Firefox/Minefield does not reload shader files via HTTP when > you force a reload - either by clicking on the Reload icon with the > shift-key held down - or by typing Ctrl+Shift+R. Several times now > I've uploaded a new shader file onto the server and the browser has > continued to use the old one - even after forcing a reload. The only > thing that seems to un-wedge it is to clear the shader cache. 
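As an aside, if you can't change the server configuration, a cache-busting query string on each request URL has the same effect. This is an untested sketch and the helper name is invented:

```javascript
// Invented helper: make every request URL unique so browser and
// intermediate caches can never return a stale copy of the shader source.
function cacheBustedUrl(url) {
  // Append with '?' or '&' depending on whether a query string exists.
  var sep = url.indexOf('?') === -1 ? '?' : '&';
  return url + sep + 'nocache=' + new Date().getTime();
}

// Browser-only usage with XMLHttpRequest (synchronous, for brevity):
//   var xhr = new XMLHttpRequest();
//   xhr.open('GET', cacheBustedUrl('shaders/example.vert'), false);
//   xhr.send(null);
//   var shaderSrc = xhr.responseText;
```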
> > I have no special reload directives in my HTML file - and I'm loading > shaders using: > > var shader = gl.createShader(type); > gl.shaderSource ( shader, shaderSrc ) ; > gl.compileShader ( shader ) ; > > I've definitely seen it happen several times with Vertex shaders...I'm > not 100% sure whether I've seen it with Fragment shaders or not. HTML > and JS files reload just fine - and I haven't noticed issues with > textures. > > -- Steve > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Sun Aug 15 01:12:40 2010 From: ste...@ (Steve Baker) Date: Sun, 15 Aug 2010 03:12:40 -0500 Subject: [Public WebGL] Shader reload. In-Reply-To: <854152364.279167.1281810712606.JavaMail.root@cm-mail03.mozilla.org> References: <854152364.279167.1281810712606.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <4C67A178.9040004@sjbaker.org> Ah - yes, I snipped the shader loader code from one of the example progs...thanks - that clears that one up. (I think I'll just embed my shader sources as string constants in the JavaScript and avoid the whole messy issue). -- Steve Vladimir Vukicevic wrote: > Whoops, hit send too soon. > > If you're using XMLHttpRequest to load shader data via GET requests, the browser is likely caching the results -- you'll have to either send cache headers from the server, e.g.: > > Cache-Control: no-cache, must-revalidate > Pragma: no-cache > > and/or add an If-Modified-Since header to the request with a time in the past. 
Reload/Shift-Reload won't make any difference here, because that only applies to the resources in use by the page itself, and not any network requests it might make. > > - Vlad > > ----- Original Message ----- > >> I've had a growing suspicion (which I finally verified today) that >> sometimes Firefox/Minefield does not reload shader files via HTTP when >> you force a reload - either by clicking on the Reload icon with the >> shift-key held down - or by typing Ctrl+Shift+R. Several times now >> I've uploaded a new shader file onto the server and the browser has >> continued to use the old one - even after forcing a reload. The only >> thing that seems to un-wedge it is to clear the shader cache. >> >> I have no special reload directives in my HTML file - and I'm loading >> shaders using: >> >> var shader = gl.createShader(type); >> gl.shaderSource ( shader, shaderSrc ) ; >> gl.compileShader ( shader ) ; >> >> I've definitely seen it happen several times with Vertex shaders...I'm >> not 100% sure whether I've seen it with Fragment shaders or not. HTML >> and JS files reload just fine - and I haven't noticed issues with >> textures. 
>> >> -- Steve >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kwa...@ Mon Aug 16 14:01:34 2010 From: kwa...@ (Kenneth Waters) Date: Mon, 16 Aug 2010 14:01:34 -0700 Subject: [Public WebGL] Issue with enable vertex attributes and re-binding VBOs In-Reply-To: <4C65A4C1.4000903@mechnicality.com> References: <4C65A4C1.4000903@mechnicality.com> Message-ID: Alan, If I'm understanding you correctly this is the intended behavior, and OpenGL 3 works the same way. BindBuffer(ARRAY_BUFFER, buffer) only sets the ARRAY_BUFFER state, it does not change where data is going to be fetched from. When you call VertexAttribPointer it binds the current ARRAY_BUFFER to that attribute. This allows you to take attributes from multiple different VBOs in one draw call. If you only call BindBuffer and not VertexAttribPointer you will continue to fetch from the buffer that was bound when you called VertexAttribPointer. -- Kenneth Waters -------------- next part -------------- An HTML attachment was scrubbed... URL: From ala...@ Mon Aug 16 14:33:01 2010 From: ala...@ (ala...@) Date: Mon, 16 Aug 2010 14:33:01 -0700 Subject: [Public WebGL] Issue with enable vertex attributes and re-binding VBOs In-Reply-To: References: <4C65A4C1.4000903@mechnicality.com> Message-ID: <4C69AE8D.9080403@mechnicality.com> Thanks Ken, I got it working fine, I just couldn't quite understand why...
I work largely on my own, so sometimes it's possible to not quite see something. Your answer, together with Vlad's earlier answer, has now clarified things for me. Regards Alan On 08/16/2010 02:01 PM, Kenneth Waters wrote: > Alan, > > If I'm understanding you correctly this is the intended behavior, and > OpenGL 3 works the same way. BindBuffer(ARRAY_BUFFER, buffer) only sets the > ARRAY_BUFFER state, it does not change where data is going to be > fetched from. When you call VertexAttribPointer it binds the current > ARRAY_BUFFER to that attribute. This allows you to take attributes > from multiple different VBOs in one draw call. > > If you only call BindBuffer and not VertexAttribPointer you will > continue to fetch from the buffer that was bound when you called > VertexAttribPointer. > > -- Kenneth Waters ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From lee...@ Tue Aug 17 04:20:56 2010 From: lee...@ (Lee Sandberg) Date: Tue, 17 Aug 2010 13:20:56 +0200 Subject: [Public WebGL] Issue with enable vertex attributes and re-binding VBOs In-Reply-To: <4C69AE8D.9080403@mechnicality.com> References: <4C65A4C1.4000903@mechnicality.com> <4C69AE8D.9080403@mechnicality.com> Message-ID: Sorry for a slightly off-topic question. Does the PowerVR chip in the Galaxy S i9000 support OpenGL ES 2.0 and WebGL? -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Tue Aug 17 06:02:01 2010 From: bja...@ (Benoit Jacob) Date: Tue, 17 Aug 2010 06:02:01 -0700 (PDT) Subject: [Public WebGL] Issue with enable vertex attributes and re-binding VBOs In-Reply-To: Message-ID: <1516348048.299389.1282050121645.JavaMail.root@cm-mail03.mozilla.org> Sorry for a slightly off-topic question. Does the PowerVR chip in the Galaxy S i9000 support OpenGL ES 2.0 and WebGL?
A Google search gave this: http://en.wikipedia.org/wiki/Samsung_i9000_Galaxy_S#Hardware So yes, it does support OpenGL ES 2.0, and consequently it supports WebGL, provided you can install a web browser that supports WebGL on it. Benoit -------------- next part -------------- An HTML attachment was scrubbed... URL: From enn...@ Tue Aug 17 13:58:17 2010 From: enn...@ (Adrienne Walker) Date: Tue, 17 Aug 2010 13:58:17 -0700 Subject: [Public WebGL] texImage2D calls with HTMLVideoElement In-Reply-To: References: Message-ID: On 13 August 2010 12:27, Kenneth Russell wrote: > On Fri, Aug 13, 2010 at 11:03 AM, Adrienne Walker wrote: >> 2) Regarding the texImage2D call with the HTMLVideoElement, the spec >> is unclear about how to handle poster attributes. Is this >> intentionally left as an implementation detail? > > I'm sure this is an oversight on our part. If you have any suggestions > on additional spec text please let us know. After some thought, I think this is more general than just the poster attribute. Video controls have a similar problem. Also, border, width, and height attributes on an image tag bring up similar questions. My expectation for the use case of these texImage2D calls is for a user to create an out-of-DOM element, using the src attribute to coerce the browser into doing the heavy lifting for the media loading or using the canvas API for programmatic texture generation. I don't expect users to want a pixel-for-pixel copy of, for example, the video controls that would appear if they set controls=true on a video element. Following that line of thought, it would be useful if the spec listed a minimum set of attributes per element that are required to be supported when generating source image data from an element. If you're asking for suggestions, I would add something like the following paragraph: --snip-- The source image data specified by an HTML element is not guaranteed to respect all attributes on that element.
texImage2D from an HTMLImageElement is only required to support the src attribute. texImage2D from an HTMLCanvasElement is only required to support the width and height attributes. texImage2D from an HTMLVideoElement will upload the current frame of the video specified by the src attribute. --snip-- I'll admit that I'm a little unsure on how to word that last sentence. I think attributes like autoplay, loop, or playbackrate should be respected, but I also didn't want to list the entire grab bag of attributes that affect what frame is currently playing. Regards, -Adrienne ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Tue Aug 17 14:07:52 2010 From: vla...@ (Vladimir Vukicevic) Date: Tue, 17 Aug 2010 14:07:52 -0700 (PDT) Subject: [Public WebGL] texImage2D calls with HTMLVideoElement In-Reply-To: Message-ID: <480111716.304436.1282079272752.JavaMail.root@cm-mail03.mozilla.org> We would want to (need to?) follow what the 2D canvas does in all of these cases, for consistency. I don't know that it's defined in all that much detail there, but browsers seem to have mostly identical behavior so far. - Vlad ----- Original Message ----- > On 13 August 2010 12:27, Kenneth Russell > wrote: > > On Fri, Aug 13, 2010 at 11:03 AM, Adrienne Walker > > wrote: > >> 2) Regarding the texImage2D call with the HTMLVideoElement, the > >> spec > >> is unclear about how to handle poster attributes. Is this > >> intentionally left as an implementation detail? > > > > I'm sure this is an oversight on our part. If you have any > > suggestions > > on additional spec text please let us know. > > After some thought, I think this is more general than just the poster > attribute. Video controls have a similar problem. Also, border, > width, and height attributes on an image tag bring up similar > questions.
> > My expectation for the use case of these texImage2D calls is for a > user to create an out-of-DOM element, using the src attribute to > coerce the browser into doing the heavy lifting for the media loading > or using the canvas API for programmatic texture generation. I don't > expect users to want a pixel-for-pixel copy of, for example, the video > controls that would appear if they set controls=true on a video > element. > > Following that line of thought, it would be useful if the spec listed > a minimum set of attributes per element that are required to be > supported when generating source image data from an element. If > you're asking for suggestions, I would add something like the > following paragraph: > > --snip-- > The source image data specified by an HTML element is not guaranteed > to respect all attributes on that element. texImage2D from an > HTMLImageElement is only required to support the src attribute. > texImage2D from an HTMLCanvasElement is only required to support the > width and height attributes. texImage2D from an HTMLVideoElement will > upload the current frame of the video specified by the src attribute. > --snip-- > > I'll admit that I'm a little unsure on how to word that last sentence. > I think attributes like autoplay, loop, or playbackrate should be > respected, but I also didn't want to list the entire grab bag of > attributes that affect what frame is currently playing. 
> > Regards, > -Adrienne > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From m.s...@ Wed Aug 18 02:15:24 2010 From: m.s...@ (Mehmet Sirin) Date: Wed, 18 Aug 2010 11:15:24 +0200 Subject: [Public WebGL] Get Viewport for converting coordinates In-Reply-To: References: Message-ID: Hi, I'm trying to convert pixel coordinates to object coordinates by using the inverse of projection*modelview. Actually it's similar to gluUnProject, which uses some viewport values like view[0]. First question: Which value is represented in view[0],view[1],.. if my viewport is 700x500px in size? Second question: I tried to get the viewport via getParameter(gl.VIEWPORT) so I can pass it to the manually built unproject-method, but it returns "undefined". Why? Third question: By now I've been playing around with converting the pixel coordinates but without success: mvPushMatrix(); ... //Viewport and Pixel var v1=(2*(xCoord-0)/700)-1; var v2=(2*(yCoord-0)/530)-1; var v3=2*(-1)-1 //4D-Vector var v = Vector.create([v1,v2,v3,1]); //Projection and mvMatrix var pM=pMatrix; var mvM=mvMatrix; var temp= pM.multiply(mvM); var temp_inv=temp.inverse().multiply(v); alert(pixel.flatten()[0]/pixel.flatten()[3]+" "+pixel.flatten()[1]/pixel.flatten()[3]); mvPopMatrix(); The alerted value is the one that I used for the last translation. Then when changing the pixel position, all that changes is the value after the comma in the alerted result. I don't know why. Maybe you know what's wrong? ...would appreciate your help!
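(For reference, the window-to-NDC step being attempted here can be written as follows. This is an illustrative sketch: the helper name is invented, a 700x530 canvas is assumed, and the Y flip assumes xCoord/yCoord come from a mouse event with a top-left origin; a missing Y flip would be one thing to check.)

```javascript
// Invented sketch: map canvas pixel coordinates to normalized device
// coordinates (NDC) before multiplying by inverse(projection * modelview).
// Pixel y grows downward while NDC y grows upward, hence the flip.
function pixelToNdc(px, py, width, height) {
  return [
    (2 * px / width) - 1,   // x in [-1, 1]
    1 - (2 * py / height),  // y in [-1, 1], flipped
    -1,                     // z on the near plane
    1                       // homogeneous w
  ];
}
```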
----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From m.s...@ Wed Aug 18 05:34:01 2010 From: m.s...@ (Mehmet Sirin) Date: Wed, 18 Aug 2010 14:34:01 +0200 Subject: [Public WebGL] Re: Get Viewport for converting coordinates In-Reply-To: References: Message-ID: > First question: Which value is represented in view[0],view[1],.. if my > viewport is 700x500px in size? Ok, forget this question. Of course it is an array that first represents the origin and then the width and height. But in that case, the maths I posted is correct? Then it is even more strange that it is not working properly.. > mvPushMatrix(); > ... //stuff like rotate and translate > > //Viewport and Pixel > var v1=(2*(xCoord-0)/700)-1; //xCoord= pixel coordinate > var v2=(2*(yCoord-0)/530)-1; //yCoord= pixel coordinate > var v3=2*(-1)-1 > > //4D-Vector > var v = Vector.create([v1,v2,v3,1]); > //Projection and mvMatrix > var pM=pMatrix; > var mvM=mvMatrix; > var temp= pM.multiply(mvM); > > var temp_inv=temp.inverse().multiply(v); > ... > The alerted value is the one that I used for the last translation. > Then when changing the pixel position, all that changes is the value > after the comma in the alerted result. The values change a lot (the numbers in front of the comma) only if I interact with the models, rotating or translating them. When clicking somewhere on the canvas after such a transformation the values still won't change correctly.
I am confused :) ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From dan...@ Wed Aug 18 07:33:41 2010 From: dan...@ (Dan Lecocq) Date: Wed, 18 Aug 2010 08:33:41 -0600 Subject: [Public WebGL] Re: Get Viewport for converting coordinates In-Reply-To: References: Message-ID: <76A29D32-25BE-40C3-B4D3-D28AD672079A@kaust.edu.sa> Are you having trouble getting the model to show up with the transform you're expecting? I was having those sorts of problems and thought it was in my model view matrix, but it turns out there was a disconnect about how the viewport was handled by webkit and the size of the canvas. If the canvas has fixed width and height it shouldn't be a problem, but if the canvas is getting resized to fit the window or something along those lines, this is something to check. If you're basing your calculations on widths and heights that you think it should have, and/or canvas.clientWidth, then check what canvas.width is. I found that webkit would only let me display canvas.width x canvas.height pixels in the viewport, no matter what I set the viewport to be. The consequence was that the image looked grainy and translated up and to the right. Not sure if yours could be a related problem or not, but it took me a while to figure out so I thought I'd at least mention it. The solution ended up being to change canvas.width to canvas.clientWidth, etc. in my resize function. - Dan Sent from my iPad On Aug 18, 2010, at 6:34 AM, Mehmet Sirin wrote: >> First question: Which value is represented in view[0],view[1],.. if my >> viewport is 700x500px in size? > Ok, forget this question. Of course it is an array that first > represents the origin and then the width and height. > > But in that case, the maths I posted is correct? > > Then it is even more strange that it is not working properly..
> [remainder of quoted text snipped]

From m.s...@ Wed Aug 18 08:59:54 2010
From: m.s...@ (Mehmet Sirin)
Date: Wed, 18 Aug 2010 17:59:54 +0200
Subject: [Public WebGL] Re: Get Viewport for converting coordinates
In-Reply-To: <76A29D32-25BE-40C3-B4D3-D28AD672079A@kaust.edu.sa>
References: <76A29D32-25BE-40C3-B4D3-D28AD672079A@kaust.edu.sa>
Message-ID:

hi

2010/8/18 Dan Lecocq:
> Are you having trouble getting the model to show up with the transform
> you're expecting?

No, models are rendered as they should be.

> If the canvas has fixed width and height it shouldn't be a problem,
> but if the canvas is getting resized to fit the window or something
> along those lines, this is something to check.
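Dan's fix — keeping canvas.width/height in sync with canvas.clientWidth/clientHeight — can be sketched as follows. The helper name and the boolean return convention are illustrative, not from the thread:

```javascript
// Keep the WebGL backbuffer size (canvas.width/height) in sync with the
// CSS display size (canvas.clientWidth/clientHeight). Returns true when
// the backbuffer size actually changed, so the caller knows to update
// gl.viewport and the projection's aspect ratio.
function resizeCanvasToDisplaySize(canvas) {
  var displayWidth = canvas.clientWidth;
  var displayHeight = canvas.clientHeight;
  if (canvas.width !== displayWidth || canvas.height !== displayHeight) {
    canvas.width = displayWidth;   // note: resizing also clears the canvas
    canvas.height = displayHeight;
    return true;
  }
  return false;
}
```

In a render loop this would be used roughly as: `if (resizeCanvasToDisplaySize(canvas)) gl.viewport(0, 0, canvas.width, canvas.height);`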
yes, the canvas has fixed width and height (700x530px)

> Not sure if yours is a related problem or not, but it took me a while
> to figure out so I thought I'd at least mention it. The solution ended
> up being to change canvas.width to canvas.clientWidth, etc. in my
> resize function.

thank you for answering :)

..I think it must have something to do with the viewport, because when I
remove the part that multiplies by the viewport matrix, at least some
values of the calculated object coordinates change. But they are not
correctly converted.

And I don't understand the part about transformations: for example, I
click on a blue point in my WebGL scene and the converted coordinates
are 2000,3000. When I do a translate and then again click on that blue
point, the coordinates are not 2000,3000 again; they change.

..something is going wrong with all of this :D

> - Dan
>
> Sent from my iPad
>
> On Aug 18, 2010, at 6:34 AM, Mehmet Sirin wrote:
> [remainder of quoted text snipped]
>>> Then when changing the pixel position, all that changes is the value
>>> after the comma in the alerted result.
> [remainder of quoted text snipped]

From gma...@ Wed Aug 18 09:11:04 2010
From: gma...@ (Gregg Tavares (wrk))
Date: Wed, 18 Aug 2010 09:11:04 -0700
Subject: [Public WebGL] Re: Get Viewport for converting coordinates
In-Reply-To: <76A29D32-25BE-40C3-B4D3-D28AD672079A@kaust.edu.sa>
References: <76A29D32-25BE-40C3-B4D3-D28AD672079A@kaust.edu.sa>
Message-ID:

The size the canvas is rendered to is independent of the size it is
displayed at.

<canvas width="100" height="100" style="width: 300px; height: 400px"></canvas>

will create a canvas that renders to a 100x100 pixel backbuffer but is
stretched and displayed at 300x400 pixels, so you should use the size it
is displayed at to do your projection and click computations.

Assuming you are doing standard world * view * projection math for your
models, here's how you should calculate the correct projection for your
projection matrix, based on the size the backbuffer is displayed at:

var projection = perspective(
    angleInRadians, canvas.clientWidth / canvas.clientHeight, zNear, zFar);

And here is converting a place you click on the canvas into a ray that
goes through the world:

function clientPositionToWorldRay(mouseXPosition, mouseYPosition) {
  // normScreenX, normScreenY are in frustum coordinates.
  var normScreenX = mouseXPosition / (canvas.width * 0.5) - 1;
  var normScreenY = -(mouseYPosition / (canvas.height * 0.5) - 1);

  // Apply the inverse view-projection matrix to get the ray in world
  // coordinates.
  return {
    near: transformPoint(viewProjectionInverse, [normScreenX, normScreenY, 0]),
    far:  transformPoint(viewProjectionInverse, [normScreenX, normScreenY, 1])
  };
};

Here are the definitions of the functions referenced above:

function perspective(angle, aspect, near, far) {
  var f = Math.tan(0.5 * (Math.PI - angle));
  var range = near - far;
  return [
    f / aspect, 0, 0, 0,
    0, f, 0, 0,
    0, 0, far / range, -1,
    0, 0, near * far / range, 0
  ];
};

function transformPoint(m, v) {
  var v0 = v[0];
  var v1 = v[1];
  var v2 = v[2];
  var d = v0 * m[0*4+3] + v1 * m[1*4+3] + v2 * m[2*4+3] + m[3*4+3];
  return [(v0 * m[0*4+0] + v1 * m[1*4+0] + v2 * m[2*4+0] + m[3*4+0]) / d,
          (v0 * m[0*4+1] + v1 * m[1*4+1] + v2 * m[2*4+1] + m[3*4+1]) / d,
          (v0 * m[0*4+2] + v1 * m[1*4+2] + v2 * m[2*4+2] + m[3*4+2]) / d];
};

If you need code for viewProjectionInverse I can post that as well.

From m.s...@ Wed Aug 18 09:47:21 2010
From: m.s...@ (Mehmet Sirin)
Date: Wed, 18 Aug 2010 18:47:21 +0200
Subject: [Public WebGL] Re: Get Viewport for converting coordinates
In-Reply-To:
References: <76A29D32-25BE-40C3-B4D3-D28AD672079A@kaust.edu.sa>
Message-ID:

hi, thanks for your answer!

2010/8/18 Gregg Tavares (wrk):
> The size the canvas is rendered to is independent of the size it is
> displayed at.
>
> will create a canvas that renders to a 100x100 pixel backbuffer but is
> stretched and displayed at 300x400 pixels, so you should use the size
> it's displayed at to do your projection and click computations.

I have this one:

> Assuming you are doing standard world * view * projection math for
> your models

yes, I do that standard.
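Gregg's clientPositionToWorldRay above expects mouse coordinates relative to the canvas, while click events report coordinates relative to the viewport. A sketch of the missing glue; the helper name and the use of getBoundingClientRect are my choice, not from the thread (the rect is passed in so the math is testable on its own):

```javascript
// Convert event client coordinates to canvas-relative coordinates.
// rect would be canvas.getBoundingClientRect() in a browser.
function eventToCanvasXY(clientX, clientY, rect) {
  return { x: clientX - rect.left, y: clientY - rect.top };
}

// In a browser this would be wired up roughly like:
// canvas.addEventListener("click", function (e) {
//   var pos = eventToCanvasXY(e.clientX, e.clientY,
//                             canvas.getBoundingClientRect());
//   var ray = clientPositionToWorldRay(pos.x, pos.y);
//   // ...intersect the segment ray.near -> ray.far with the scene
// });
```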
> here's how you should calculate the correct projection for your
> projection matrix, based on the size the backbuffer is displayed at

is it necessary to use the perspective matrix you posted? I'm using the
matrix functions which Vladimir Y. has written:

perspective(45, gl.viewportWidth / gl.viewportHeight, 0.1, 20000000.0);

var pMatrix;
function perspective(fovy, aspect, znear, zfar) {
  pMatrix = makePerspective(fovy, aspect, znear, zfar);
}

function makePerspective(fovy, aspect, znear, zfar)
{
  var ymax = znear * Math.tan(fovy * Math.PI / 360.0);
  var ymin = -ymax;
  var xmin = ymin * aspect;
  var xmax = ymax * aspect;

  return makeFrustum(xmin, xmax, ymin, ymax, znear, zfar);
}

and

function makeFrustum(left, right, bottom, top, znear, zfar)
{
  var X = 2*znear/(right-left);
  var Y = 2*znear/(top-bottom);
  var A = (right+left)/(right-left);
  var B = (top+bottom)/(top-bottom);
  var C = -(zfar+znear)/(zfar-znear);
  var D = -2*zfar*znear/(zfar-znear);

  return $M([[X, 0, A, 0],
             [0, Y, B, 0],
             [0, 0, C, D],
             [0, 0, -1, 0]]);
}

> If you need code for viewProjectionInverse I can post that as well.

you mean inverse(projectionMatrix*mvMatrix)? Got that.. but sure, I
would be glad if you posted that, too :)

From gma...@ Wed Aug 18 10:09:50 2010
From: gma...@ (Gregg Tavares (wrk))
Date: Wed, 18 Aug 2010 10:09:50 -0700
Subject: [Public WebGL] Re: Get Viewport for converting coordinates
In-Reply-To:
References: <76A29D32-25BE-40C3-B4D3-D28AD672079A@kaust.edu.sa>
Message-ID:

On Wed, Aug 18, 2010 at 9:47 AM, Mehmet Sirin wrote:
> hi, thanks for your answer!
>
> 2010/8/18 Gregg Tavares (wrk):
> > The size the canvas is rendered to is independent of the size it is
> > displayed at.
> >
> > Will create a canvas that renders to a 100x100 pixel backbuffer but
> > is stretched and displayed at 300x400 pixels, so you should use the
> > size it's displayed at to do your projection and click computations.
> I have this one:
>
> > Assuming you are doing standard world * view * projection math for
> > your models
> yes, I do that standard.
>
> > here's how you should calculate the correct projection for your
> > projection matrix, based on the size the backbuffer is displayed at
>
> is it necessary to use the perspective matrix you posted? I'm using
> the matrix functions which Vladimir Y. has written:

I'm sure his are just fine. The point is that you don't want
gl.viewportWidth / gl.viewportHeight; you should use
canvas.clientWidth / canvas.clientHeight.

> [quoted code snipped]
>
> > If you need code for viewProjectionInverse I can post that as well.
>
> you mean inverse(projectionMatrix*mvMatrix)? Got that.. but sure, I
> would be glad if you posted that, too :)

If you're using these functions:
http://webgl-mjs.googlecode.com/hg/docs/files/mjs-js.html

Then I'd guess it's something like

var projection = M4x4.makePerspective(45,
    canvas.clientWidth / canvas.clientHeight, 0.1, 20000000.0);
var view = M4x4.makeLookAt(eye, center, up);
var viewProjection = M4x4.mul(view, projection);  // (or is it projection, view?)

It doesn't appear he provided a matrix inverse function.

var viewProjectionInverse = inverse(viewProjection);

where inverse is

function inverse(m) {
  var tmp_0 = m[2*4+2] * m[3*4+3];
  var tmp_1 = m[3*4+2] * m[2*4+3];
  var tmp_2 = m[1*4+2] * m[3*4+3];
  var tmp_3 = m[3*4+2] * m[1*4+3];
  var tmp_4 = m[1*4+2] * m[2*4+3];
  var tmp_5 = m[2*4+2] * m[1*4+3];
  var tmp_6 = m[0*4+2] * m[3*4+3];
  var tmp_7 = m[3*4+2] * m[0*4+3];
  var tmp_8 = m[0*4+2] * m[2*4+3];
  var tmp_9 = m[2*4+2] * m[0*4+3];
  var tmp_10 = m[0*4+2] * m[1*4+3];
  var tmp_11 = m[1*4+2] * m[0*4+3];
  var tmp_12 = m[2*4+0] * m[3*4+1];
  var tmp_13 = m[3*4+0] * m[2*4+1];
  var tmp_14 = m[1*4+0] * m[3*4+1];
  var tmp_15 = m[3*4+0] * m[1*4+1];
  var tmp_16 = m[1*4+0] * m[2*4+1];
  var tmp_17 = m[2*4+0] * m[1*4+1];
  var tmp_18 = m[0*4+0] * m[3*4+1];
  var tmp_19 = m[3*4+0] * m[0*4+1];
  var tmp_20 = m[0*4+0] * m[2*4+1];
  var tmp_21 = m[2*4+0] * m[0*4+1];
  var tmp_22 = m[0*4+0] * m[1*4+1];
  var tmp_23 = m[1*4+0] * m[0*4+1];
  var t0 = (tmp_0 * m[1*4+1] + tmp_3 * m[2*4+1] + tmp_4 * m[3*4+1]) -
      (tmp_1 * m[1*4+1] + tmp_2 * m[2*4+1] + tmp_5 * m[3*4+1]);
  var t1 = (tmp_1 * m[0*4+1] + tmp_6 * m[2*4+1] + tmp_9 * m[3*4+1]) -
      (tmp_0 * m[0*4+1] + tmp_7 * m[2*4+1] + tmp_8 * m[3*4+1]);
  var t2 = (tmp_2 * m[0*4+1] + tmp_7 * m[1*4+1] + tmp_10 * m[3*4+1]) -
      (tmp_3 * m[0*4+1] + tmp_6 * m[1*4+1] + tmp_11 * m[3*4+1]);
  var t3 = (tmp_5 * m[0*4+1] + tmp_8 * m[1*4+1] + tmp_11 * m[2*4+1]) -
      (tmp_4 * m[0*4+1] + tmp_9 * m[1*4+1] + tmp_10 * m[2*4+1]);
  var d = 1.0 / (m[0*4+0] * t0 + m[1*4+0] * t1 + m[2*4+0] * t2 + m[3*4+0] * t3);
  return [d * t0, d * t1, d * t2, d * t3,
      d * ((tmp_1 * m[1*4+0] + tmp_2 * m[2*4+0] + tmp_5 * m[3*4+0]) -
          (tmp_0 * m[1*4+0] + tmp_3 *
m[2*4+0] + tmp_4 * m[3*4+0])),
      d * ((tmp_0 * m[0*4+0] + tmp_7 * m[2*4+0] + tmp_8 * m[3*4+0]) -
          (tmp_1 * m[0*4+0] + tmp_6 * m[2*4+0] + tmp_9 * m[3*4+0])),
      d * ((tmp_3 * m[0*4+0] + tmp_6 * m[1*4+0] + tmp_11 * m[3*4+0]) -
          (tmp_2 * m[0*4+0] + tmp_7 * m[1*4+0] + tmp_10 * m[3*4+0])),
      d * ((tmp_4 * m[0*4+0] + tmp_9 * m[1*4+0] + tmp_10 * m[2*4+0]) -
          (tmp_5 * m[0*4+0] + tmp_8 * m[1*4+0] + tmp_11 * m[2*4+0])),
      d * ((tmp_12 * m[1*4+3] + tmp_15 * m[2*4+3] + tmp_16 * m[3*4+3]) -
          (tmp_13 * m[1*4+3] + tmp_14 * m[2*4+3] + tmp_17 * m[3*4+3])),
      d * ((tmp_13 * m[0*4+3] + tmp_18 * m[2*4+3] + tmp_21 * m[3*4+3]) -
          (tmp_12 * m[0*4+3] + tmp_19 * m[2*4+3] + tmp_20 * m[3*4+3])),
      d * ((tmp_14 * m[0*4+3] + tmp_19 * m[1*4+3] + tmp_22 * m[3*4+3]) -
          (tmp_15 * m[0*4+3] + tmp_18 * m[1*4+3] + tmp_23 * m[3*4+3])),
      d * ((tmp_17 * m[0*4+3] + tmp_20 * m[1*4+3] + tmp_23 * m[2*4+3]) -
          (tmp_16 * m[0*4+3] + tmp_21 * m[1*4+3] + tmp_22 * m[2*4+3])),
      d * ((tmp_14 * m[2*4+2] + tmp_17 * m[3*4+2] + tmp_13 * m[1*4+2]) -
          (tmp_16 * m[3*4+2] + tmp_12 * m[1*4+2] + tmp_15 * m[2*4+2])),
      d * ((tmp_20 * m[3*4+2] + tmp_12 * m[0*4+2] + tmp_19 * m[2*4+2]) -
          (tmp_18 * m[2*4+2] + tmp_21 * m[3*4+2] + tmp_13 * m[0*4+2])),
      d * ((tmp_18 * m[1*4+2] + tmp_23 * m[3*4+2] + tmp_15 * m[0*4+2]) -
          (tmp_22 * m[3*4+2] + tmp_14 * m[0*4+2] + tmp_19 * m[1*4+2])),
      d * ((tmp_22 * m[2*4+2] + tmp_16 * m[0*4+2] + tmp_21 * m[1*4+2]) -
          (tmp_20 * m[1*4+2] + tmp_23 * m[2*4+2] + tmp_17 * m[0*4+2]))];
};

From ste...@ Thu Aug 19 01:45:14 2010
From: ste...@ (Steve Baker)
Date: Thu, 19 Aug 2010 03:45:14 -0500
Subject: [Public WebGL] Linux/Firefox performance.
Message-ID: <4C6CEF1A.5080403@sjbaker.org>

A couple of months ago, we talked about how the compositing of the WebGL
rendering context into the browser canvas in Firefox/Linux (64-bit) was
being done in software - resulting in even a simple clear-screen
operation running at between 30Hz on a 320x200 canvas and 6Hz on a
1280x1024 canvas. By contrast, the same software running on the same
fairly old computer - with the same browser version (last night's build)
under WinXP - gets a more or less constant 30Hz irrespective of canvas
size.

I'm in the process of deciding what I can afford to allow in terms of
window size and rendering complexity in my final game. Is there any
prospect of this performance hole under Linux getting fixed? I don't
need it done immediately - but I'd like to know whether there is some
reasonable expectation that it'll be fixed before WebGL "goes
mainstream".

  -- Steve

From vla...@ Wed Aug 18 23:37:56 2010
From: vla...@ (Vladimir Vukicevic)
Date: Wed, 18 Aug 2010 23:37:56 -0700 (PDT)
Subject: [Public WebGL] Linux/Firefox performance.
In-Reply-To: <4C6CEF1A.5080403@sjbaker.org>
Message-ID: <228563064.319985.1282199876259.JavaMail.root@cm-mail03.mozilla.org>

Interestingly, neither one has hardware compositing enabled yet. The
performance issues that you're seeing on Linux are largely due to us
actually trying to use various X features/extensions (RENDER in
particular), instead of just assuming that doing it in software is going
to be faster; 99% of the time it actually is. If we switch to using
OpenGL for compositing on Linux, all of that goes away, and we'd like to
do that for Firefox 4, but as I said there are a few other things in the
pipeline ahead of it.
However, at this point we're probably talking about timelines on the
order of weeks, as a bunch of stuff is coming together now :-) Linux is
our lowest priority for this, though, so it might lag behind the other
platforms.

    - Vlad

----- Original Message -----
> [quoted text snipped]

From m.s...@ Thu Aug 19 04:56:31 2010
From: m.s...@ (Mehmet Sirin)
Date: Thu, 19 Aug 2010 13:56:31 +0200
Subject: [Public WebGL] Re: Get Viewport for converting coordinates
In-Reply-To:
References: <76A29D32-25BE-40C3-B4D3-D28AD672079A@kaust.edu.sa>
Message-ID:

hi Gregg,

> If you're using these functions:
> http://webgl-mjs.googlecode.com/hg/docs/files/mjs-js.html

For matrix operations I use http://sylvester.jcoglan.com/api/matrix.

Hmm, still having problems here. Actually I'm doing exactly the things
you posted in the recent emails..

Maybe I should post the results which are calculated in the single
steps; somewhere there must be a mistake:

I have some object coordinates like 3489860.0, 5557180.0,
457.0790023803711. I do the standard viewing pipeline with perspective
projection and see my scene on the screen. Now I describe what the
coordinates look like when I go backwards from screen coordinates to
object coordinates.

1.) Transform screen coordinates to normalized device coordinates
(where the scene is in a [-1,1] cube):

//take the inverse of the viewport matrix and multiply with screen coordinates
var v = Vector.create([xclick,yclick,0,1]); //xclick,yclick = pixel coordinates
var a = getViewportMatrix(canvas.clientWidth,canvas.clientHeight).inverse();
var npc_coord = a.x(v);
alert(npc_coord.flatten())

--> value for clicking lower left of canvas: -1,1,0,1
    //fixed values in source code: xclick=canvas.clientWidth,..
--> value for clicking lower left of canvas: 1,-1,0,1
--> clicking in the middle of the scene: 0.0057, 0.1207, 0, 1
    //without fixed values..

..strange.

2.) Transform normalized coordinates to view coordinates:

//take the inverse projection matrix and multiply with normalized coordinates
var p_Inv = pMatrix.inverse(); //inverse projection matrix
var vrc_coord = p_Inv.x(npc_coord);
alert(vrc_coord.flatten()) //for testing purposes, without dividing by the 4d component

--> lower left: -0.5336, 0.4040, -1, 5.0000

..I thought the view coordinates should look like object coordinates?

3.) ..makes no sense because the stuff above is terribly wrong.
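The "without dividing by the 4d component" test in step 2 is a likely culprit: after multiplying by an inverse projection (or inverse view-projection) matrix, the result is a homogeneous 4-vector and must be divided by its w component before it can be read as a 3D point. A minimal sketch; the helper name is mine, not from the thread:

```javascript
// Convert a homogeneous 4-vector [x, y, z, w] to a Cartesian 3D point
// via the perspective divide. Skipping this step leaves values that are
// meaningless as view or object coordinates.
function homogeneousToCartesian(v) {
  var w = v[3];
  return [v[0] / w, v[1] / w, v[2] / w];
}

console.log(homogeneousToCartesian([2, 4, 6, 2])); // [1, 2, 3]
```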
---------

All in one step:

var pixel = Vector.create([0,530,0,1]);
var view = getViewportMatrix(canvas.clientWidth,canvas.clientHeight);
var pM = pMatrix.dup();
var mvM = mvMatrix.dup();

var allMultiplied = view.multiply(pM).multiply(mvM);
var allInversed = allMultiplied.inverse();
var multiplyWithPixel = allInversed.multiply(pixel)

var w = multiplyWithPixel.flatten()[3];
var objectCoordinate = ...;
alert(objectCoordinate);

--> lower left: 3489859.8932, 5557180.0808, 7236.7802
--> upper right: 3489860.1067, 5557179.9191, 7236.7802

but it should be something like 3489940.00, .. and when clicking around
on screen nothing but the values after the dot of the number changes.

:)

thank you for reading (and thinking "what the **** is he doing?!").

> Then I'd guess it's something like
>
> var projection = M4x4.makePerspective(45,
>     canvas.clientWidth / canvas.clientHeight, 0.1, 20000000.0);
> var view = M4x4.makeLookAt(eye, center, up);
> var viewProjection = M4x4.mul(view, projection);  (or is it projection,
> view?)
> It doesn't appear he provided a matrix inverse function
>
> var viewProjectionInverse = inverse(viewProjection);
>
> where inverse is
>
> [quoted inverse() code snipped]

From bja...@ Thu Aug 19 08:50:26 2010
From: bja...@ (Benoit Jacob)
Date: Thu, 19 Aug 2010 08:50:26 -0700 (PDT)
Subject: [Public WebGL] about RenderBufferStorage
Message-ID: <159460751.322283.1282233026387.JavaMail.root@cm-mail03.mozilla.org>

Hi,

Correct me if I am wrong, but it seems to me that the test

    conformance/renderbuffer-initialization.html

expects renderBufferStorage() to initialize the buffer's contents to 0.
The OpenGL ES 2.0.24 spec says that it leaves the buffer's contents
uninitialized, and the WebGL spec doesn't seem to say anything more
about this function (?), so my question is: am I missing something here,
or does the WebGL spec need to be updated to specify that
renderBufferStorage() initializes to 0, or should this test be
corrected?

Note: I stumbled upon this while running the test suite under Valgrind,
which complained about uninitialized data, originally created by
glRenderBufferStorage, being subsequently used in a conditional jump.

Cheers,
Benoit

From zhe...@ Thu Aug 19 09:14:42 2010
From: zhe...@ (Mo, Zhenyao)
Date: Thu, 19 Aug 2010 09:14:42 -0700
Subject: [Public WebGL] about RenderBufferStorage
In-Reply-To: <159460751.322283.1282233026387.JavaMail.root@cm-mail03.mozilla.org>
References: <159460751.322283.1282233026387.JavaMail.root@cm-mail03.mozilla.org>
Message-ID:

In WebGL spec section 4.1 it mentions that a WebGL implementation must
initialize resources' contents to 0.
Although only textures and VBOs are used as examples in 4.1, I think a
renderbuffer is also one of the resources that should be initialized to
0. Maybe we should add renderbuffers to section 4.1 for clarity?

Zhenyao

On Thu, Aug 19, 2010 at 8:50 AM, Benoit Jacob wrote:
> [quoted text snipped]

From gma...@ Thu Aug 19 09:18:41 2010
From: gma...@ (Gregg Tavares (wrk))
Date: Thu, 19 Aug 2010 09:18:41 -0700
Subject: [Public WebGL] Re: Get Viewport for converting coordinates
In-Reply-To:
References: <76A29D32-25BE-40C3-B4D3-D28AD672079A@kaust.edu.sa>
Message-ID:

On Thu, Aug 19, 2010 at 4:56 AM, Mehmet Sirin wrote:
> hi Gregg,
>
> > If you're using these functions:
> > http://webgl-mjs.googlecode.com/hg/docs/files/mjs-js.html
>
> For matrix operations I use http://sylvester.jcoglan.com/api/matrix.
> > Hmm, still having problems here. actually I'm doing exactly the things > you posted in the recent emails.. > Clearly you are not doing exactly the same things or you'd be getting exactly the same results :-D > > maybe i should post the result which are calculated in the single > steps, somewhere must be the mistake: > I got some object coordinates like 3489860.0, 5557180.0, > 457.0790023803711. Then I do the standard viewing pipeline with > perspective projection and see my scene on the screen. > Now I describe what the coordinates look like when i go backwards from > screen coordinates to object coordinates. > > 1.) Transform screen coordinates to normalized device (where the scene > is in a [1,1] cube): > //take the inverse of viewport matrix and multiply with screen > coordinates > > var v = Vector.create([xclick,yclick,0,1]); //xclick,yclick=pixel > coordinates > var > a=getViewportMatrix(canvas.clientWidth,canvas.clientHeight).inverse(); > var npc_coord=a.x(v); > I don't quite follow the x function and if it's really a matrix multiply but assuming it is it's not dividing by W as the function I posted was doing. > > alert(npc_coord.flatten()) > --> value for clicking lower left of canvas: -1,1,0,1 //fixed values > in source code: xclick=canvas.clentWidth,.. > --> value for clicking lower left of canvas: 1,-1,0,1 > --> clicking in the middle of the scene: 0.0057, 0.1207, 0, 1 > //without fixed values.. > ..strange. > > 2.) Transform normalized coordinates to view coordinates: > //take inverse projection matrix and multiply with normalized coordinates > var p_Inv=pMatrix.inverse(); //inverse projection matrix > var vrc_coord=p_Inv.x(npc_coord); > > alert(vrc_coord.flatten()) //for testing purposes > without dividing with 4dComponent > --> lower left: -0.5336,0.4040,-1, 5.0000 > ..i thought the view coordinates should look like object coordinates? > > 3.) ..makes no sense because stuff above is terribly wrong. 
> --------- > > All in one step: > > var pixel = Vector.create([0,530,0,1]); > var view=getViewportMatrix(canvas.clientWidth,canvas.clientHeight); > var pM=pMatrix.dup(); > var mvM=mvMatrix.dup(); > > var allMultiplied= view.multiply(pM).multiply(mvM); > var allInversed=allMultiplied.inverse(); > var multiplyWithPixel=allInversed.multiply(pixel) > > var w=multiplyWithPixel.flatten()[3]; > var objectCoordinate=...; > alert(objectCoordinate); > --> lower left: 3489859.8932, 5557180.0808, 7236.7802 > --> upper right: 3489860.1067, 5557179.9191, 7236.7802 but it should > be something like 3489940.00, .. > ..and when clicking around on screen nothing but the values after the > dot of the number changes. > > :) > > thank you for reading (and thinking "what the **** is he doing?! "). > > > > Then I'd guess it's something like > > var projection = M4x4.makePerspective(45, canvas.clientWidth / > > canvas.clientHeight, 0.1, 20000000.0); > > var view = M4x4.makeLookAt(eye, center, up); > > var viewProjection = M4x4.mul(view, projection); (or is it projection, > > view?) 
> > It doesn't appear he provided an matrix inverse function > > var viewInverseProjection = inverse(viewProjection); > > where inverse is > > function inverse(m) { > > var tmp_0 = m[2*4+2] * m[3*4+3]; > > var tmp_1 = m[3*4+2] * m[2*4+3]; > > var tmp_2 = m[1*4+2] * m[3*4+3]; > > var tmp_3 = m[3*4+2] * m[1*4+3]; > > var tmp_4 = m[1*4+2] * m[2*4+3]; > > var tmp_5 = m[2*4+2] * m[1*4+3]; > > var tmp_6 = m[0*4+2] * m[3*4+3]; > > var tmp_7 = m[3*4+2] * m[0*4+3]; > > var tmp_8 = m[0*4+2] * m[2*4+3]; > > var tmp_9 = m[2*4+2] * m[0*4+3]; > > var tmp_10 = m[0*4+2] * m[1*4+3]; > > var tmp_11 = m[1*4+2] * m[0*4+3]; > > var tmp_12 = m[2*4+0] * m[3*4+1]; > > var tmp_13 = m[3*4+0] * m[2*4+1]; > > var tmp_14 = m[1*4+0] * m[3*4+1]; > > var tmp_15 = m[3*4+0] * m[1*4+1]; > > var tmp_16 = m[1*4+0] * m[2*4+1]; > > var tmp_17 = m[2*4+0] * m[1*4+1]; > > var tmp_18 = m[0*4+0] * m[3*4+1]; > > var tmp_19 = m[3*4+0] * m[0*4+1]; > > var tmp_20 = m[0*4+0] * m[2*4+1]; > > var tmp_21 = m[2*4+0] * m[0*4+1]; > > var tmp_22 = m[0*4+0] * m[1*4+1]; > > var tmp_23 = m[1*4+0] * m[0*4+1]; > > var t0 = (tmp_0 * m[1*4+1] + tmp_3 * m[2*4+1] + tmp_4 * m[3*4+1]) - > > (tmp_1 * m[1*4+1] + tmp_2 * m[2*4+1] + tmp_5 * m[3*4+1]); > > var t1 = (tmp_1 * m[0*4+1] + tmp_6 * m[2*4+1] + tmp_9 * m[3*4+1]) - > > (tmp_0 * m[0*4+1] + tmp_7 * m[2*4+1] + tmp_8 * m[3*4+1]); > > var t2 = (tmp_2 * m[0*4+1] + tmp_7 * m[1*4+1] + tmp_10 * m[3*4+1]) - > > (tmp_3 * m[0*4+1] + tmp_6 * m[1*4+1] + tmp_11 * m[3*4+1]); > > var t3 = (tmp_5 * m[0*4+1] + tmp_8 * m[1*4+1] + tmp_11 * m[2*4+1]) - > > (tmp_4 * m[0*4+1] + tmp_9 * m[1*4+1] + tmp_10 * m[2*4+1]); > > var d = 1.0 / (m[0*4+0] * t0 + m[1*4+0] * t1 + m[2*4+0] * t2 + m[3*4+0] > * > > t3); > > return [d * t0, d * t1, d * t2, d * t3, > > d * ((tmp_1 * m[1*4+0] + tmp_2 * m[2*4+0] + tmp_5 * m[3*4+0]) - > > (tmp_0 * m[1*4+0] + tmp_3 * m[2*4+0] + tmp_4 * m[3*4+0])), > > d * ((tmp_0 * m[0*4+0] + tmp_7 * m[2*4+0] + tmp_8 * m[3*4+0]) - > > (tmp_1 * m[0*4+0] + tmp_6 * m[2*4+0] + tmp_9 * 
m[3*4+0])), > > d * ((tmp_3 * m[0*4+0] + tmp_6 * m[1*4+0] + tmp_11 * m[3*4+0]) - > > (tmp_2 * m[0*4+0] + tmp_7 * m[1*4+0] + tmp_10 * m[3*4+0])), > > d * ((tmp_4 * m[0*4+0] + tmp_9 * m[1*4+0] + tmp_10 * m[2*4+0]) - > > (tmp_5 * m[0*4+0] + tmp_8 * m[1*4+0] + tmp_11 * m[2*4+0])), > > d * ((tmp_12 * m[1*4+3] + tmp_15 * m[2*4+3] + tmp_16 * m[3*4+3]) - > > (tmp_13 * m[1*4+3] + tmp_14 * m[2*4+3] + tmp_17 * m[3*4+3])), > > d * ((tmp_13 * m[0*4+3] + tmp_18 * m[2*4+3] + tmp_21 * m[3*4+3]) - > > (tmp_12 * m[0*4+3] + tmp_19 * m[2*4+3] + tmp_20 * m[3*4+3])), > > d * ((tmp_14 * m[0*4+3] + tmp_19 * m[1*4+3] + tmp_22 * m[3*4+3]) - > > (tmp_15 * m[0*4+3] + tmp_18 * m[1*4+3] + tmp_23 * m[3*4+3])), > > d * ((tmp_17 * m[0*4+3] + tmp_20 * m[1*4+3] + tmp_23 * m[2*4+3]) - > > (tmp_16 * m[0*4+3] + tmp_21 * m[1*4+3] + tmp_22 * m[2*4+3])), > > d * ((tmp_14 * m[2*4+2] + tmp_17 * m[3*4+2] + tmp_13 * m[1*4+2]) - > > (tmp_16 * m[3*4+2] + tmp_12 * m[1*4+2] + tmp_15 * m[2*4+2])), > > d * ((tmp_20 * m[3*4+2] + tmp_12 * m[0*4+2] + tmp_19 * m[2*4+2]) - > > (tmp_18 * m[2*4+2] + tmp_21 * m[3*4+2] + tmp_13 * m[0*4+2])), > > d * ((tmp_18 * m[1*4+2] + tmp_23 * m[3*4+2] + tmp_15 * m[0*4+2]) - > > (tmp_22 * m[3*4+2] + tmp_14 * m[0*4+2] + tmp_19 * m[1*4+2])), > > d * ((tmp_22 * m[2*4+2] + tmp_16 * m[0*4+2] + tmp_21 * m[1*4+2]) - > > (tmp_20 * m[1*4+2] + tmp_23 * m[2*4+2] + tmp_17 * m[0*4+2]))]; > > }; > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Thu Aug 19 10:50:01 2010 From: gma...@ (Gregg Tavares (wrk)) Date: Thu, 19 Aug 2010 10:50:01 -0700 Subject: [Public WebGL] about RenderBufferStorage In-Reply-To: References: <159460751.322283.1282233026387.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Thu, Aug 19, 2010 at 9:14 AM, Mo, Zhenyao wrote: > In WebGL spec 4.1 it mentions that WebGL implementation must > initialize resources' contents to 0. 
Although only texture and vbo > are used as examples in 4.1, I think renderbuffer is also one of the > resources that should be initialized to 0. > > Maybe we should add renderbuffer into section 4.1 for clarity? > What Mo said. If you don't clear the render buffers, then it would be possible to OCR the contents of VRAM and possibly grab user info. > > Zhenyao > > On Thu, Aug 19, 2010 at 8:50 AM, Benoit Jacob wrote: > > Hi, > > > > Correct me if I am wrong, but it seems to me that the test, > > > > conformance/renderbuffer-initialization.html > > > > expects renderBufferStorage() to initialize the buffer's contents to 0. > The OpenGL ES 2.0.24 spec says that it leaves the buffer's contents > uninitialized, and the WebGL spec doesn't seem to say anything more about > this function (?) so my question is, am I missing something here, or does > the WebGL spec need to be updated to specify that renderBufferStorage() > initializes to 0, or should this test be corrected? > > > > Note, I stumbled upon this while running the test suite in Valgrind, and > it complained about uninitialized data, originally created by > glRenderBufferStorage, being subsequently used in a conditional jump. > > > > Cheers, > > Benoit > > ----------------------------------------------------------- > > You are currently subscribed to public_webgl...@ > > To unsubscribe, send an email to majordomo...@ with > > the following command in the body of your email: > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > -------------- next part -------------- An HTML attachment was scrubbed...
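Whatever the spec ends up saying, an application that cannot rely on zero-initialized renderbuffers can clear them itself right after allocating and attaching them. A minimal sketch, assuming `gl` is a WebGL context (the helper name and the RGBA4 format choice are illustrative):

```javascript
// Allocate a color renderbuffer, attach it to a framebuffer, and
// clear it so its initial contents are known zeros rather than
// whatever was left in VRAM.
function createClearedColorBuffer(gl, width, height) {
  const fb = gl.createFramebuffer();
  const rb = gl.createRenderbuffer();
  gl.bindRenderbuffer(gl.RENDERBUFFER, rb);
  gl.renderbufferStorage(gl.RENDERBUFFER, gl.RGBA4, width, height);
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                             gl.RENDERBUFFER, rb);
  gl.clearColor(0, 0, 0, 0);
  gl.clear(gl.COLOR_BUFFER_BIT);
  return { framebuffer: fb, renderbuffer: rb };
}
```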
URL: From m.s...@ Thu Aug 19 10:53:29 2010 From: m.s...@ (Mehmet Sirin) Date: Thu, 19 Aug 2010 19:53:29 +0200 Subject: [Public WebGL] Re: Get Viewport for converting coordinates In-Reply-To: References: <76A29D32-25BE-40C3-B4D3-D28AD672079A@kaust.edu.sa> Message-ID:

Hi, I think you were right! It seems to be a projection-matrix problem, because now that I'm testing the same procedure with an orthographic projection it works fine. I will test some other projection matrices. Thank you.

2010/8/19 Gregg Tavares (wrk) : > On Thu, Aug 19, 2010 at 4:56 AM, Mehmet Sirin wrote: >> hi Gregg, >> > If you're using these functions? >> > http://webgl-mjs.googlecode.com/hg/docs/files/mjs-js.html >> For matrix operations I use http://sylvester.jcoglan.com/api/matrix. [...]

----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email:

From m.s...@ Thu Aug 19 11:08:34 2010 From: m.s...@ (Mehmet Sirin) Date: Thu, 19 Aug 2010 20:08:34 +0200 Subject: [Public WebGL] Re: Get Viewport for converting coordinates In-Reply-To: References: <76A29D32-25BE-40C3-B4D3-D28AD672079A@kaust.edu.sa> Message-ID:

Hi, I think you were right! It seems to be a projection-matrix problem, because now that I'm testing the same procedure with an orthographic projection it works fine (though if I rotate, everything gets strange..). I will test some other projection matrices. Thank you.

2010/8/19 Gregg Tavares (wrk) : [...]

----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email:

From cal...@ Thu Aug 19 12:20:31 2010 From: cal...@ (Mark Callow) Date: Thu, 19 Aug 2010 12:20:31 -0700 Subject: [Public WebGL] WebGLArray / Firefox / Minefield Message-ID: <4C6D83FF.6050809@hicorp.co.jp>

Hi, There are several demos around that run on Minefield but not Chrome. One of the most common reasons is that Minefield still supports WebGLArray. Can Mozilla please remove WebGLArray support to stop any more people falling into the trap of using this obsolete feature? Regards -Mark

From cal...@ Thu Aug 19 13:00:07 2010 From: cal...@ (Mark Callow) Date: Thu, 19 Aug 2010 13:00:07 -0700 Subject: [Public WebGL] Re: [Public WebGL] Effects of Completeness on Texture Image Specification In-Reply-To: <1763906254.277713.1281801621117.JavaMail.root@cm-mail03.mozilla.org> References: <1763906254.277713.1281801621117.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <4C6D8D47.5090505@hicorp.co.jp>

Hi Benoit, Please file bugs about these omissions from the man pages in the Khronos public Bugzilla. Regards -Mark On 2010/08/14 9:00, Benoit Jacob wrote: > ...
> Thanks, I had missed that (in my defense, section 3.7.1 is long and it is written 3 pages into it; also the glTexImage2D man page is missing this information). It is indeed written at the top of page 67 in the 2.0.24 spec. > > ... > Thanks a lot, I had missed that too. (Also, the glCopyTexImage2D man page is missing this information) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 392 bytes Desc: not available URL: From cal...@ Thu Aug 19 13:17:49 2010 From: cal...@ (Mark Callow) Date: Thu, 19 Aug 2010 13:17:49 -0700 Subject: [Public WebGL] Differences between Chrome & Minefield Message-ID: <4C6D916D.7000000@hicorp.co.jp> Hi, The WebGL sample at www.c3dl.org does not run on Chrome but does run on Minefield. On Minefield I see in the error console "enable: invalid enum 0x8642" and the sample runs. On Chrome I see "Uncaught TypeError: Cannot call method 'getProgramID' of null". The latter is caused by a createProgram call returning null. I am curious why a createProgram call could fail on Chrome and succeed on Minefield. They are the very latest builds, downloaded this morning and running on the same machine. The problem is 100% repeatable. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Thu Aug 19 15:19:32 2010 From: gma...@ (Gregg Tavares (wrk)) Date: Thu, 19 Aug 2010 15:19:32 -0700 Subject: [Public WebGL] Differences between Chrome & Minefield In-Reply-To: <4C6D916D.7000000@hicorp.co.jp> References: <4C6D916D.7000000@hicorp.co.jp> Message-ID: I don't know why that site is getting that specific error but I tried the second tutorial and the code is using non-WebGL stuff.
(ie, Desktop OpenGL stuff) His shaders are referencing gl_FrontColor and gl_TexCoord, neither of which are part of WebGL. On Thu, Aug 19, 2010 at 1:17 PM, Mark Callow wrote: > Hi, > > The WebGL sample at www.c3dl.org does not run on Chrome but does run on > Minefield. > > On Minefield I see in the error console "enable: invalid enum 0x8642" and > the sample runs. > > On Chrome I see "Uncaught TypeError: Cannot call method 'getProgramID' of > null. > > The latter is caused by a createProgram call returning null. I am curious > why a createProgram call could fails on Chrome and succeed on Minefield. > They are the very latest builds downloaded this morning and are running on > the same machine. The problem is 100% repeatable. > > Regards > > -Mark > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Thu Aug 19 15:24:33 2010 From: gma...@ (Gregg Tavares (wrk)) Date: Thu, 19 Aug 2010 15:24:33 -0700 Subject: [Public WebGL] Differences between Chrome & Minefield In-Reply-To: References: <4C6D916D.7000000@hicorp.co.jp> Message-ID: I see effectively the same error in Firefox 4 beta 2 as in Chromium. In Firefox I get this error: Error: this.createProgram(c3dl.psys_vs, c3dl.psys_fs) is null Source File: http://www.c3dl.org/wp-content/2.0Release/canvas3dapi/renderer/rendererwebgl.js Line: 502 In Chromium I get 1. Uncaught TypeError: Cannot call method 'getProgramID' of null 1. initrendererwebgl.js:502 2. initscene.js:827 3. J3DXplorer_mainJ3DXplorer.js:32 4. c3dl.ColladaQueue.popFrontcolladaqueue.js:84 5. parsecolladaloader.js:359 6. xmlhttp.onreadystatechange Same line in both browsers. On Thu, Aug 19, 2010 at 3:19 PM, Gregg Tavares (wrk) wrote: > I don't know why that site is getting that specific error but I tried the > second tutorial and the code is using non WebGL stuff. (ie, Desktop OpenGL > stuff) > > His shaders are referencing gl_FrontColor and gl_TexCoord, neither of which > are part of WebGL. 
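A null program from a building helper is much easier to diagnose when compile and link status are checked and the info logs are surfaced; shaders that reference desktop-GL builtins like gl_FrontColor then fail loudly in validating browsers instead of silently returning null. A sketch (not the c3dl code, assuming `gl` is a WebGL context):

```javascript
// Compile both shaders and link a program, throwing with the driver's
// info log on any failure instead of returning null.
function buildProgram(gl, vsSource, fsSource) {
  function compile(type, source) {
    const shader = gl.createShader(type);
    gl.shaderSource(shader, source);
    gl.compileShader(shader);
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
      throw new Error("shader compile failed: " + gl.getShaderInfoLog(shader));
    }
    return shader;
  }
  const program = gl.createProgram();
  gl.attachShader(program, compile(gl.VERTEX_SHADER, vsSource));
  gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fsSource));
  gl.linkProgram(program);
  if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    throw new Error("program link failed: " + gl.getProgramInfoLog(program));
  }
  return program;
}
```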
> > > On Thu, Aug 19, 2010 at 1:17 PM, Mark Callow wrote: > >> Hi, >> >> The WebGL sample at www.c3dl.org does not run on Chrome but does run on >> Minefield. >> >> On Minefield I see in the error console "enable: invalid enum 0x8642" and >> the sample runs. >> >> On Chrome I see "Uncaught TypeError: Cannot call method 'getProgramID' of >> null". >> >> The latter is caused by a createProgram call returning null. I am curious >> why a createProgram call could fail on Chrome and succeed on Minefield. >> They are the very latest builds downloaded this morning and are running on >> the same machine. The problem is 100% repeatable. >> >> Regards >> >> -Mark >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cal...@ Thu Aug 19 16:05:03 2010 From: cal...@ (Mark Callow) Date: Thu, 19 Aug 2010 16:05:03 -0700 Subject: [Public WebGL] Differences between Chrome & Minefield In-Reply-To: References: <4C6D916D.7000000@hicorp.co.jp> Message-ID: <4C6DB89F.6010006@hicorp.co.jp> I am using Minefield 4.0b5pre (Mozilla/5.0 (Windows NT 5.1; rv:2.0b5pre) Gecko/20100819 Minefield/4.0b5pre). I do not see any error from createProgram, although the shader validator is supposedly enabled, and the sample runs. I just double-checked, and I do have the shader validator enabled, which is shown as the default value. It looks like some bug has crept into Minefield/Firefox since beta 2. When I wrote my first message, I was a bit confused by this.createProgram. It's his own, not gl.createProgram. Because of this, I was thinking compilation would come later, so the shader content would not be related to the error. Regards -Mark On 2010/08/19 15:24, Gregg Tavares (wrk) wrote: > I see effectively the same error in Firefox 4 beta 2 as in Chromium.
> > In Firefox I get this error: > Error: this.createProgram(c3dl.psys_vs, c3dl.psys_fs) is null > Source File: > http://www.c3dl.org/wp-content/2.0Release/canvas3dapi/renderer/rendererwebgl.js > Line: 502 > > In Chromium I get > 1. Uncaught TypeError: Cannot call method 'getProgramID' of null > 1. init rendererwebgl.js:502 > 2. init scene.js:827 > 3. J3DXplorer_main J3DXplorer.js:32 > 4. c3dl.ColladaQueue.popFront colladaqueue.js:84 > 5. parse colladaloader.js:359 > 6. xmlhttp.onreadystatechange > > Same line in both browsers. > > On Thu, Aug 19, 2010 at 3:19 PM, Gregg Tavares (wrk) > wrote: > > I don't know why that site is getting that specific error but I > tried the second tutorial and the code is using non-WebGL stuff. > (ie, Desktop OpenGL stuff) > > His shaders are referencing gl_FrontColor and gl_TexCoord, neither > of which are part of WebGL. > > On Thu, Aug 19, 2010 at 1:17 PM, Mark Callow > wrote: > > Hi, > > The WebGL sample at www.c3dl.org does > not run on Chrome but does run on > Minefield. > > On Minefield I see in the error console "enable: invalid enum > 0x8642" and the sample runs. > > On Chrome I see "Uncaught TypeError: Cannot call method > 'getProgramID' of null". > > The latter is caused by a createProgram call returning null. I > am curious why a createProgram call could fail on Chrome and > succeed on Minefield. They are the very latest builds > downloaded this morning and are running on the same machine. > The problem is 100% repeatable. > > Regards > > -Mark > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: callow_mark.vcf Type: text/x-vcard Size: 378 bytes Desc: not available URL: From vla...@ Thu Aug 19 16:24:37 2010 From: vla...@ (Vladimir Vukicevic) Date: Thu, 19 Aug 2010 16:24:37 -0700 (PDT) Subject: [Public WebGL] Differences between Chrome & Minefield In-Reply-To: <4C6DB89F.6010006@hicorp.co.jp> Message-ID: <537345818.328613.1282260277525.JavaMail.root@cm-mail03.mozilla.org> Are you sure you're running the right build? 20100819 gives the enum warning and the createProgram error here. - Vlad ----- Original Message ----- > I am using Minefield 4.0b5pre Mozilla/5.0 (Windows NT 5.1; rv:2.0b5pre > Gecko/20100819 Minefield/4.0b5pre. I do not see any error from create > program, although the shader validator is supposedly enabled, and the > sample runs. I just double checked and I do have the shader validator > enabled which is shown as the default value. It looks like some bug > has crept into Minefield/Firefox since beta 2. > > > When I wrote my first message, I was a bit confused by > this.createProgram. It's his own, not gl.createProgram. Because of > this, I was thinking compilation would come later so shader content > would not related to the error. > > > Regards > > > > -Mark > > On 2010/08/19 15:24, Gregg Tavares (wrk) wrote: > > I see effectively the same error in Firefox 4 beta 2 as in Chromium. > > > In Firefox I get this error: > > Error: this.createProgram(c3dl.psys_vs, c3dl.psys_fs) is null > Source File: > http://www.c3dl.org/wp-content/2.0Release/canvas3dapi/renderer/rendererwebgl.js > Line: 502 > > > > > In Chromium I get > > > 1. > Uncaught TypeError: Cannot call method 'getProgramID' of null > > > 1. > init rendererwebgl.js:502 > 2. > init scene.js:827 > 3. > J3DXplorer_main J3DXplorer.js:32 > 4. > c3dl.ColladaQueue.popFront colladaqueue.js:84 > 5. > parse colladaloader.js:359 > 6. > xmlhttp.onreadystatechange > > > > > > > Same line in both browsers. 
> > On Thu, Aug 19, 2010 at 3:19 PM, Gregg Tavares (wrk) < gman...@ > > wrote: > > > I don't know why that site is getting that specific error but I tried > the second tutorial and the code is using non WebGL stuff. (ie, > Desktop OpenGL stuff) > > > His shaders are referencing gl_FrontColor and gl_TexCoord, neither of > which are part of WebGL. > > > > > > > > On Thu, Aug 19, 2010 at 1:17 PM, Mark Callow < > callow_mark...@ > wrote: > > > > > > Hi, > > > The WebGL sample at www.c3dl.org does not run on Chrome but does run > on Minefield. > > > On Minefield I see in the error console "enable: invalid enum 0x8642" > and the sample runs. > > > On Chrome I see "Uncaught TypeError: Cannot call method 'getProgramID' > of null. > > > The latter is caused by a createProgram call returning null. I am > curious why a createProgram call could fails on Chrome and succeed on > Minefield. They are the very latest builds downloaded this morning and > are running on the same machine. The problem is 100% repeatable. > > > Regards > > > > -Mark ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Thu Aug 19 17:32:16 2010 From: vla...@ (Vladimir Vukicevic) Date: Thu, 19 Aug 2010 17:32:16 -0700 (PDT) Subject: [Public WebGL] Differences between Chrome & Minefield In-Reply-To: <537345818.328613.1282260277525.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <1693353474.329191.1282264336313.JavaMail.root@cm-mail03.mozilla.org> Ah ha! You're right Mark, there was a bug. Something was static that shouldn't have been, which meant we were only checking the pref for the first context to get created (and why it didn't work for me). The default value was also incorrectly false (in the code, not the default of the pref), thus leading to the problem. I'll fix it and get it in tomorrow's nightly. Thanks for catching this! 
- Vlad ----- Original Message ----- > Are you sure you're running the right build? 20100819 gives the enum > warning and the createProgram error here. > > - Vlad > > ----- Original Message ----- > > I am using Minefield 4.0b5pre Mozilla/5.0 (Windows NT 5.1; > > rv:2.0b5pre > > Gecko/20100819 Minefield/4.0b5pre. I do not see any error from > > create > > program, although the shader validator is supposedly enabled, and > > the > > sample runs. I just double checked and I do have the shader > > validator > > enabled which is shown as the default value. It looks like some bug > > has crept into Minefield/Firefox since beta 2. > > > > > > When I wrote my first message, I was a bit confused by > > this.createProgram. It's his own, not gl.createProgram. Because of > > this, I was thinking compilation would come later so shader content > > would not related to the error. > > > > > > Regards > > > > > > > > -Mark > > > > On 2010/08/19 15:24, Gregg Tavares (wrk) wrote: > > > > I see effectively the same error in Firefox 4 beta 2 as in Chromium. > > > > > > In Firefox I get this error: > > > > Error: this.createProgram(c3dl.psys_vs, c3dl.psys_fs) is null > > Source File: > > http://www.c3dl.org/wp-content/2.0Release/canvas3dapi/renderer/rendererwebgl.js > > Line: 502 > > > > > > > > > > In Chromium I get > > > > > > 1. > > Uncaught TypeError: Cannot call method 'getProgramID' of null > > > > > > 1. > > init rendererwebgl.js:502 > > 2. > > init scene.js:827 > > 3. > > J3DXplorer_main J3DXplorer.js:32 > > 4. > > c3dl.ColladaQueue.popFront colladaqueue.js:84 > > 5. > > parse colladaloader.js:359 > > 6. > > xmlhttp.onreadystatechange > > > > > > > > > > > > > > Same line in both browsers. > > > > On Thu, Aug 19, 2010 at 3:19 PM, Gregg Tavares (wrk) < > > gman...@ > > > wrote: > > > > > > I don't know why that site is getting that specific error but I > > tried > > the second tutorial and the code is using non WebGL stuff. 
(ie, > > Desktop OpenGL stuff) > > > > > > His shaders are referencing gl_FrontColor and gl_TexCoord, neither > > of > > which are part of WebGL. > > > > > > > > > > > > > > > > On Thu, Aug 19, 2010 at 1:17 PM, Mark Callow < > > callow_mark...@ > wrote: > > > > > > > > > > > > Hi, > > > > > > The WebGL sample at www.c3dl.org does not run on Chrome but does run > > on Minefield. > > > > > > On Minefield I see in the error console "enable: invalid enum > > 0x8642" > > and the sample runs. > > > > > > On Chrome I see "Uncaught TypeError: Cannot call method > > 'getProgramID' > > of null. > > > > > > The latter is caused by a createProgram call returning null. I am > > curious why a createProgram call could fails on Chrome and succeed > > on > > Minefield. They are the very latest builds downloaded this morning > > and > > are running on the same machine. The problem is 100% repeatable. > > > > > > Regards > > > > > > > > -Mark > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From tim...@ Fri Aug 20 01:35:18 2010 From: tim...@ (Tim Johansson) Date: Fri, 20 Aug 2010 10:35:18 +0200 Subject: [Public WebGL] texImage2D calls with HTMLVideoElement In-Reply-To: <480111716.304436.1282079272752.JavaMail.root@cm-mail03.mozilla.org> References: <480111716.304436.1282079272752.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <4C6E3E46.9000806@opera.com> On 2010-08-17 23:07, Vladimir Vukicevic wrote: > We would want to (need to?) follow what the 2D canvas does in all of these cases, for consistency. 
I don't know that it's defined in all that much detail there, but browsers seem to have mostly identical behavior so far. > I agree, I think it would be a mistake to define it differently than the 2d canvas since it would be kind of confusing to everyone. If we define how it should work now and 2d canvas later defines it differently we will have the same problem. I would prefer that if we want to clarify the behavior we essentially just say it should work the same way it works in 2d canvas. We could also push to have it better specified in 2d canvas if required. //Tim > - Vlad > > ----- Original Message ----- >> On 13 August 2010 12:27, Kenneth Russell >> wrote: >>> On Fri, Aug 13, 2010 at 11:03 AM, Adrienne Walker >>> wrote: >>>> 2) Regarding the texImage2D call with the HTMLVideoElement, the >>>> spec >>>> is unclear about how to handle poster attributes. Is this >>>> intentionally left as an implementation detail? >>> I'm sure this is an oversight on our part. If you have any >>> suggestions >>> on additional spec text please let us know. >> After some thought, I think this is more general than just the poster >> attribute. Video controls have a similar problem. Also, border, >> width, and height attributes on an image tag bring up similar >> questions. >> >> My expectation for the use case of these texImage2D calls is for a >> user to create an out-of-DOM element, using the src attribute to >> coerce the browser into doing the heavy lifting for the media loading >> or using the canvas API for programmatic texture generation. I don't >> expect users to want a pixel-for-pixel copy of, for example, the video >> controls that would appear if they set controls=true on a video >> element. >> >> Following that line of thought, it would be useful if the spec listed >> a minimum set of attributes per element that are required to be >> supported when generating source image data from an element.
If >> you're asking for suggestions, I would add something like the >> following paragraph: >> >> --snip-- >> The source image data specified by an HTML element is not guaranteed >> to respect all attributes on that element. texImage2D from an >> HTMLImageElement is only required to support the src attribute. >> texImage2D from an HTMLCanvasElement is only required to support the >> width and height attributes. texImage2D from an HTMLVideoElement will >> upload the current frame of the video specified by the src attribute. >> --snip-- >> >> I'll admit that I'm a little unsure on how to word that last sentence. >> I think attributes like autoplay, loop, or playbackrate should be >> respected, but I also didn't want to list the entire grab bag of >> attributes that affect what frame is currently playing. >> >> Regards, >> -Adrienne >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From enn...@ Fri Aug 20 09:31:39 2010 From: enn...@ (Adrienne Walker) Date: Fri, 20 Aug 2010 09:31:39 -0700 Subject: [Public WebGL] texImage2D calls with HTMLVideoElement In-Reply-To: <4C6E3E46.9000806@opera.com> References: <480111716.304436.1282079272752.JavaMail.root@cm-mail03.mozilla.org> <4C6E3E46.9000806@opera.com> Message-ID: On 20 August 2010 01:35, Tim Johansson wrote: > I agree, I think it would be a mistake to define it differently than the 2d > canvas since it would be
kind of confusing to everyone. If we define how it > should work now and 2d canvas later defines it differently we will have the > same problem. > > I would prefer that if we want to clarify the behavior we essentially just > say it should work the same way it works in 2d canvas. We could also push to > have it better specified in 2d canvas if required. Staying consistent with the 2d canvas spec is a good goal. Upon rereading it, the 2d canvas spec appears to be much clearer about where the source data comes from and what its implicit size is. I like the idea of clarifying the WebGL behavior by saying that texImage2D calls for html elements are consistent with how they are treated during drawImage on a 2d canvas. -Adrienne ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Fri Aug 20 09:57:29 2010 From: cma...@ (Chris Marrin) Date: Fri, 20 Aug 2010 09:57:29 -0700 Subject: [Public WebGL] Proposed change to WebGL Event definition Message-ID: <33F15A1B-FF87-44F1-ABA3-632494FA649A@apple.com> Currently the spec shows two event interfaces: WebGLContextLostEvent and WebGLContextRestoredEvent. Other Event generators in HTML tend to use a single Event object with many event types. For instance, there is a MouseEvent object with event types like "click", "mousedown", "mouseup", etc. I propose we change the spec to have a single WebGLContextEvent object with 3 event types: contextlost - The rendering context has lost its state. contextrestored - The rendering context state can be restored. 
contextcreationerror - An error occurred when an attempt was made to create the context The IDL would be similar to the existing objects, with error information added: interface WebGLContextEvent : Event { // Status codes const unsigned short CONTEXT_LOST = 1; const unsigned short CONTEXT_RESTORED = 2; const unsigned short NOT_AVAILABLE = 3; // WebGL is not supported or not enabled const unsigned short NOT_SUPPORTED = 4; // Graphics hardware does not support WebGL const unsigned short OTHER_ERROR = 5; // Some other error occurred when creating context, details in statusMessage readonly attribute WebGLRenderingContext context; readonly attribute unsigned short statusCode; readonly attribute DOMString statusMessage; void initWebGLContextEvent(DOMString typeArg, boolean canBubbleArg, boolean cancelableArg, WebGLRenderingContext contextArg, unsigned short errorCodeArg, DOMString statusMessageArg); }; The contextlost event type would always have a statusCode of CONTEXT_LOST and contextrestored would always have a statusCode of CONTEXT_RESTORED. But both of these events can have a statusMessage which gives more details. I've included status codes for the two most common reasons a context could not be created and then a catch-all. I don't think we should go too far down the path of defining error codes. Many are platform specific and can be returned as OTHER_ERROR with a statusMessage. Comments?
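For readers following the proposal, here is a minimal sketch of how a page might consume the proposed interface. The event type names, status codes, and fields are the ones proposed in this message, not a shipped API, and the helper names are illustrative only:

```javascript
// Status codes from the proposed WebGLContextEvent IDL above
// (the proposal's values, not a shipping API).
const CONTEXT_LOST = 1;
const CONTEXT_RESTORED = 2;
const NOT_AVAILABLE = 3;
const NOT_SUPPORTED = 4;
const OTHER_ERROR = 5;

// Turn a statusCode into a human-readable description for logging.
function describeStatus(statusCode) {
  switch (statusCode) {
    case CONTEXT_LOST:     return "context lost";
    case CONTEXT_RESTORED: return "context restored";
    case NOT_AVAILABLE:    return "WebGL not supported or not enabled";
    case NOT_SUPPORTED:    return "graphics hardware does not support WebGL";
    default:               return "other error";
  }
}

// In a browser, the three proposed event types would be wired up on the
// canvas element like this (not executed here; `canvas` is a
// hypothetical <canvas> element):
function wireContextEvents(canvas) {
  const types = ["contextlost", "contextrestored", "contextcreationerror"];
  for (const type of types) {
    canvas.addEventListener(type, function (e) {
      // statusMessage is optional extra detail per the proposal.
      console.log(type + ": " + describeStatus(e.statusCode) +
                  (e.statusMessage ? " (" + e.statusMessage + ")" : ""));
    });
  }
}
```

One handler per event type, switching on `statusCode`, is the single-Event-object pattern the proposal borrows from MouseEvent.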
----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Fri Aug 20 10:02:33 2010 From: vla...@ (Vladimir Vukicevic) Date: Fri, 20 Aug 2010 10:02:33 -0700 (PDT) Subject: [Public WebGL] Proposed change to WebGL Event definition In-Reply-To: <33F15A1B-FF87-44F1-ABA3-632494FA649A@apple.com> Message-ID: <582483422.335609.1282323753557.JavaMail.root@cm-mail03.mozilla.org> Looks good to me, and nicely solves the "give more info if we have to return null". It might be nice to add a status code that says something like "isn't available, might be soon" and then another for "became available" -- I'm thinking of the case where the browser wants to query the user if they want to allow WebGL. - Vlad ----- Original Message ----- > Currently the spec shows two event interfaces: WebGLContextLostEvent > and WebGLContextRestoredEvent. Other Event generators in HTML tend to > use a single Event object with many event types. For instance, there > is a MouseEvent object with event types like "click", "mousedown", > "mouseup", etc. > > I propose we change the spec to have a single WebGLContextEvent object > with 3 event types: > > contextlost - The rendering context has lost its state. > > contextrestored - The rendering context state can be restored. 
> > contextcreationerror - An error occured when an attempt was made to > create the context > > The IDL would be similar to the existing objects, with error > information added: > > interface WebGLContextEvent : Event { > // Status codes > const unsigned short CONTEXT_LOST = 1; > const unsigned short CONTEXT_RESTORED = 2; > const unsigned short NOT_AVAILABLE = 3; // WebGL is not supported or > not enabled > const unsigned short NOT_SUPPORTED = 4; // Graphics hardware does not > support WebGL > const unsigned short OTHER_ERROR = 5; // Some other error occurred > when creating context, details in statusMessage > > readonly attribute WebGLRenderingContext context; > readonly attribute unsigned short statusCode; > readonly attribute DOMString statusMessage; > > void initWebGLContextEvent(DOMString typeArg, > boolean canBubbleArg, > boolean cancelableArg, > WebGLRenderingContext contextArg, > unsigned short errorCodeArg, > DOMString statusMessageArg); > }; > > The contextlost event type would always have a statusCode of > CONTEXT_LOST and contextrestored would always have a statusCode of > CONTEXT_RESTORED. But both of these events can have a statusMessage > which gives more details. I've included status codes for the two most > common reasons a context could not be created and then a catch-all. I > don't think we should go too far down the path of defining error > codes. Many are platform specific and can be returned as OTHER_ERROR > with a statusMessage. > > Comments? 
> > ----- > ~Chris > cmarrin...@ > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Fri Aug 20 10:51:18 2010 From: kbr...@ (Kenneth Russell) Date: Fri, 20 Aug 2010 10:51:18 -0700 Subject: [Public WebGL] Proposed change to WebGL Event definition In-Reply-To: <33F15A1B-FF87-44F1-ABA3-632494FA649A@apple.com> References: <33F15A1B-FF87-44F1-ABA3-632494FA649A@apple.com> Message-ID: Generally looks good, but the names of the events need to be prefixed with "webgl", because these event names are going to be passed to the Canvas during addEventListener and need to be scoped so that they don't collide with other context types (present or future). -Ken On Fri, Aug 20, 2010 at 9:57 AM, Chris Marrin wrote: > > Currently the spec shows two event interfaces: WebGLContextLostEvent and WebGLContextRestoredEvent. Other Event generators in HTML tend to use a single Event object with many event types. For instance, there is a MouseEvent object with event types like "click", "mousedown", "mouseup", etc. > > I propose we change the spec to have a single WebGLContextEvent object with 3 event types: > > contextlost - The rendering context has lost its state. > > contextrestored - The rendering context state can be restored. > > contextcreationerror - An error occured when an attempt was made to create the context > > The IDL would be similar to the existing objects, with error information added: > > interface WebGLContextEvent : Event { > // Status codes > const unsigned short CONTEXT_LOST = 1; > const unsigned short CONTEXT_RESTORED = 2; > const unsigned short NOT_AVAILABLE = 3; // WebGL is not supported or not enabled > const unsigned short NOT_SUPPORTED = 4; // Graphics hardware does not support WebGL > const unsigned short OTHER_ERROR = 5; // Some other error occurred when creating context, details in statusMessage > > readonly attribute WebGLRenderingContext context; > readonly attribute unsigned short statusCode; > readonly attribute DOMString statusMessage; > > void initWebGLContextEvent(DOMString typeArg, > boolean canBubbleArg, > boolean cancelableArg, > WebGLRenderingContext contextArg, > unsigned short errorCodeArg, > DOMString statusMessageArg); > }; > > The contextlost event type would always have a statusCode of CONTEXT_LOST and contextrestored would always have a statusCode of CONTEXT_RESTORED. But both of these events can have a statusMessage which gives more details. I've included status codes for the two most common reasons a context could not be created and then a catch-all. I don't think we should go too far down the path of defining error codes. Many are platform specific and can be returned as OTHER_ERROR with a statusMessage. > > Comments?
> > ----- > ~Chris > cmarrin...@ > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Fri Aug 20 11:10:39 2010 From: cma...@ (Chris Marrin) Date: Fri, 20 Aug 2010 11:10:39 -0700 Subject: [Public WebGL] Proposed change to WebGL Event definition In-Reply-To: References: <33F15A1B-FF87-44F1-ABA3-632494FA649A@apple.com> Message-ID: <3FFDA280-2649-45FB-BCCE-0A6257085DA7@apple.com> On Aug 20, 2010, at 10:51 AM, Kenneth Russell wrote: > Generally looks good, but the names of the events need to be prefixed > with "webgl", because these event names are going to be passed to the > Canvas during addEventListener and need to be scoped so that they > don't collide with other context types (present or future). Good point: webglcontextlost, webglcontextrestored and webglcontextcreationerror. FYI event message types are case insensitive, which is why you typically see them in the spec as all lower case. For readability writing this as "WebGLContextCreationError", for instance, works just fine. > > -Ken > > On Fri, Aug 20, 2010 at 9:57 AM, Chris Marrin wrote: >> >> Currently the spec shows two event interfaces: WebGLContextLostEvent and WebGLContextRestoredEvent. Other Event generators in HTML tend to use a single Event object with many event types. For instance, there is a MouseEvent object with event types like "click", "mousedown", "mouseup", etc. >> >> I propose we change the spec to have a single WebGLContextEvent object with 3 event types: >> >> contextlost - The rendering context has lost its state. >> >> contextrestored - The rendering context state can be restored. 
>> >> contextcreationerror - An error occured when an attempt was made to create the context >> >> The IDL would be similar to the existing objects, with error information added: >> >> interface WebGLContextEvent : Event { >> // Status codes >> const unsigned short CONTEXT_LOST = 1; >> const unsigned short CONTEXT_RESTORED = 2; >> const unsigned short NOT_AVAILABLE = 3; // WebGL is not supported or not enabled >> const unsigned short NOT_SUPPORTED = 4; // Graphics hardware does not support WebGL >> const unsigned short OTHER_ERROR = 5; // Some other error occurred when creating context, details in statusMessage >> >> readonly attribute WebGLRenderingContext context; >> readonly attribute unsigned short statusCode; >> readonly attribute DOMString statusMessage; >> >> void initWebGLContextEvent(DOMString typeArg, >> boolean canBubbleArg, >> boolean cancelableArg, >> WebGLRenderingContext contextArg, >> unsigned short errorCodeArg, >> DOMString statusMessageArg); >> }; >> >> The contextlost event type would always have a statusCode of CONTEXT_LOST and contextrestored would always have a statusCode of CONTEXT_RESTORED. But both of these events can have a statusMessage which gives more details. I've included status codes for the two most common reasons a context could not be created and then a catch-all. I don't think we should go too far down the path of defining error codes. Many are platform specific and can be returned as OTHER_ERROR with a statusMessage. >> >> Comments? 
>> >> ----- >> ~Chris >> cmarrin...@ >> >> >> >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> >> ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Fri Aug 20 11:18:03 2010 From: cma...@ (Chris Marrin) Date: Fri, 20 Aug 2010 11:18:03 -0700 Subject: [Public WebGL] Proposed change to WebGL Event definition In-Reply-To: <582483422.335609.1282323753557.JavaMail.root@cm-mail03.mozilla.org> References: <582483422.335609.1282323753557.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <2C95F2D0-7C9F-4803-8939-75CB016ADAED@apple.com> On Aug 20, 2010, at 10:02 AM, Vladimir Vukicevic wrote: > Looks good to me, and nicely solves the "give more info if we have to return null". It might be nice to add a status code that says something like "isn't available, might be soon" and then another for "became available" -- I'm thinking of the case where the browser wants to query the user if they want to allow WebGL. My small concern there is that such a message might be a security and/or privacy violation. It would give the author information that the user is being asked a question and has responded negatively. Seems like you want to tell the author nothing when the dialog comes up. But I'm no security expert... 
----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Fri Aug 20 11:22:02 2010 From: vla...@ (Vladimir Vukicevic) Date: Fri, 20 Aug 2010 11:22:02 -0700 (PDT) Subject: [Public WebGL] Proposed change to WebGL Event definition In-Reply-To: <2C95F2D0-7C9F-4803-8939-75CB016ADAED@apple.com> Message-ID: <606766238.336202.1282328522649.JavaMail.root@cm-mail03.mozilla.org> ----- Original Message ----- > On Aug 20, 2010, at 10:02 AM, Vladimir Vukicevic wrote: > > > Looks good to me, and nicely solves the "give more info if we have > > to return null". It might be nice to add a status code that says > > something like "isn't available, might be soon" and then another for > > "became available" -- I'm thinking of the case where the browser > > wants to query the user if they want to allow WebGL. > > My small concern there is that such a message might be a security > and/or privacy violation. It would give the author information that > the user is being asked a question and has responded negatively. Seems > like you want to tell the author nothing when the dialog comes up. But > I'm no security expert... Hmm, so maybe a better way to do it would be to just add a context_available message; initially, you would get a not_available without necessarily knowing why. Only if the user accepts would you get the context_available event, and can try again. That lets the app put up some kind of UI on not available, but get notified if it does become available. 
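A sketch of the retry flow Vlad describes: treat an initial failure as "not available (yet)", show some UI, and try `getContext` again if the browser later signals availability. Everything here is hypothetical — `"webglcontextavailable"` stands in for the suggested context_available notification, and `"experimental-webgl"` was the context id string commonly used at the time:

```javascript
// Attempt to create a WebGL context; if it fails, wait for a
// hypothetical availability event (e.g. the user accepted a browser
// prompt allowing WebGL) and retry once.
function createContextWithRetry(canvas, onReady, onUnavailable) {
  const gl = canvas.getContext("experimental-webgl");
  if (gl) {
    onReady(gl);
    return;
  }
  // No context and no reason given: per the privacy concern above,
  // the page only learns "not available", not why.
  onUnavailable(); // e.g. show "WebGL unavailable" UI
  canvas.addEventListener("webglcontextavailable", function retry() {
    canvas.removeEventListener("webglcontextavailable", retry);
    const gl2 = canvas.getContext("experimental-webgl");
    if (gl2) onReady(gl2);
  });
}
```

The one-shot listener keeps the page from re-registering on every retry; the app's "unavailable" UI can simply be torn down inside `onReady`.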
- Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Fri Aug 20 11:26:27 2010 From: ste...@ (ste...@) Date: Fri, 20 Aug 2010 11:26:27 -0700 Subject: [Public WebGL] Shader validator issue. Message-ID: <3221895a1924737a69201d00376a899f.squirrel@webmail.sjbaker.org> I think I may have found a validator bug. This shader is boiled down to the bare essentials from a much bigger example... #ifdef GL_ES precision highp float; #endif uniform sampler2D diffuseMap ; void main() { gl_FragColor = texture2DLod ( diffuseMap, vec2(0.5,0.5), 0.0 ) ; } It compiles and runs just fine under last nights' Firefox/Minefield in both Linux & Windows with the validator disabled - but fails with this error with the validator turned on: Error compiling shader: ERROR: 0:10: 'texture2DLod' : no matching overloaded function found ERROR: 0:10: 'assign' : cannot convert from 'const mediump float' to 'FragColor mediump 4-component vector of float' ERROR: 2 compilation errors. No code generated. GL Error: 1281 in loadShaderPairFromString (It also fails in more sane cases, like when the second parameter is a simple varying texCoord and when the third parameter is a float variable). Is there a problem with texture2DLod? I think I'm using it correctly per page 77 of http://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf It says: vec4 texture2DLod (sampler2D sampler, vec2 coord, float lod) Eh? -- Steve ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Fri Aug 20 11:33:24 2010 From: vla...@ (Vladimir Vukicevic) Date: Fri, 20 Aug 2010 11:33:24 -0700 (PDT) Subject: [Public WebGL] Shader validator issue. 
In-Reply-To: <3221895a1924737a69201d00376a899f.squirrel@webmail.sjbaker.org> Message-ID: <1633547377.336340.1282329204703.JavaMail.root@cm-mail03.mozilla.org> From a bit further up on page 77.. > The built-ins suffixed with "Lod" are allowed only in a vertex shader. That sounds like the problem :-) - Vlad ----- Original Message ----- > I think I may have found a validator bug. > > This shader is boiled down to the bare essentials from a much bigger > example... > > #ifdef GL_ES > precision highp float; > #endif > > uniform sampler2D diffuseMap ; > > void main() > { > gl_FragColor = texture2DLod ( diffuseMap, vec2(0.5,0.5), 0.0 ) ; > } > > It compiles and runs just fine under last nights' Firefox/Minefield in > both Linux & Windows with the validator disabled - but fails with this > error with the validator turned on: > > Error compiling shader: > ERROR: 0:10: 'texture2DLod' : no matching overloaded function found > ERROR: 0:10: 'assign' : cannot convert from 'const mediump float' > to 'FragColor mediump 4-component vector of float' > ERROR: 2 compilation errors. No code generated. > GL Error: 1281 in loadShaderPairFromString > > (It also fails in more sane cases, like when the second parameter is a > simple varying texCoord and when the third parameter is a float > variable). > > Is there a problem with texture2DLod? I think I'm using it correctly > per > page 77 of > http://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf > > It says: > > vec4 texture2DLod (sampler2D sampler, vec2 coord, float lod) > > Eh?
> > -- Steve > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Fri Aug 20 11:44:38 2010 From: ste...@ (ste...@) Date: Fri, 20 Aug 2010 11:44:38 -0700 Subject: [Public WebGL] Shader validator issue. In-Reply-To: <1633547377.336340.1282329204703.JavaMail.root@cm-mail03.mozilla.org> References: <1633547377.336340.1282329204703.JavaMail.root@cm-mail03.mozilla.org> Message-ID: Oooohhhh...ikky! So I can't control the LOD of a texture lookup in the frag shader?...Urgh! That's a pain in the butt! There is a significant class of shader algorithms related to image atlassing that kinda rely on that function. I may have to go talk to Microsoft about getting "WebD3D" working! :-) Oh well, it's a really good thing that the validator works. I guess I owe someone my humblest apologies...again! -- Steve > From a bit further up on page 77.. > >> The built-ins suffixed with "Lod" are allowed only in a vertex >> shader. > > That sounds like the problem :-) > > - Vlad > > ----- Original Message ----- >> I think I may have found a validator bug. >> >> This shader is boiled down to the bare essentials from a much bigger >> example... 
>> >> #ifdef GL_ES >> precision highp float; >> #endif >> >> uniform sampler2D diffuseMap ; >> >> void main() >> { >> gl_FragColor = texture2DLod ( diffuseMap, vec2(0.5,0.5), 0.0 ) ; >> } >> >> It compiles and runs just fine under last nights' Firefox/Minefield in >> both Linux & Windows with the validator disabled - but fails with this >> error with the validator turned on: >> >> Error compiling shader: >> ERROR: 0:10: 'texture2DLod' : no matching overloaded function found >> ERROR: 0:10: 'assign' : cannot convert from 'const mediump float' >> to 'FragColor mediump 4-component vector of float' >> ERROR: 2 compilation errors. No code generated. >> GL Error: 1281 in loadShaderPairFromString >> >> (It also fails in more sane cases, like when the second parameter is a >> simple varying texCoord and when the third parameter is a float >> variable). >> >> Is there a problem with texture2DLod? I think I'm using it correctly >> per >> page 77 of >> http://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf >> >> It says: >> >> vec4 texture2DLod (sampler2D sampler, vec2 coord, float lod) >> >> Eh? >> >> -- Steve >> >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Fri Aug 20 11:53:38 2010 From: vla...@ (Vladimir Vukicevic) Date: Fri, 20 Aug 2010 11:53:38 -0700 (PDT) Subject: [Public WebGL] Shader validator issue. In-Reply-To: Message-ID: <1105581613.336461.1282330417967.JavaMail.root@cm-mail03.mozilla.org> No, don't apologize, this is all good feedback that the validator is working :-) - Vlad ----- Original Message ----- > Oooohhhh...ikky! 
> So I can't control the LOD of a texture lookup in the frag
> shader?...Urgh! That's a pain in the butt! There is a significant
> class of shader algorithms related to image atlassing that kinda rely
> on that function.
>
> [...]

From gma...@ Fri Aug 20 11:55:59 2010
From: gma...@ (Gregg Tavares (wrk))
Date: Fri, 20 Aug 2010 11:55:59 -0700
Subject: [Public WebGL] Shader validator issue.
In-Reply-To:
References: <1633547377.336340.1282329204703.JavaMail.root@cm-mail03.mozilla.org>
Message-ID:

On Fri, Aug 20, 2010 at 11:44 AM, wrote:

> Oooohhhh...ikky!
>
> So I can't control the LOD of a texture lookup in the frag
> shader?...Urgh! That's a pain in the butt! There is a significant
> class of shader algorithms related to image atlassing that kinda rely
> on that function.

texture2D takes an optional bias parameter which you might be able to
use depending on what you are trying to do.

> [...]

From ste...@ Fri Aug 20 14:15:55 2010
From: ste...@ (ste...@)
Date: Fri, 20 Aug 2010 14:15:55 -0700
Subject: [Public WebGL] Shader validator issue.
In-Reply-To:
References: <1633547377.336340.1282329204703.JavaMail.root@cm-mail03.mozilla.org>
Message-ID: <7bf2c3806ad74a50ebc0bf922caa1ccb.squirrel@webmail.sjbaker.org>

> On Fri, Aug 20, 2010 at 11:44 AM, wrote:
>
>> Oooohhhh...ikky!
>>
>> So I can't control the LOD of a texture lookup in the frag
>> shader?...Urgh! That's a pain in the butt! There is a significant
>> class of shader algorithms related to image atlassing that kinda
>> rely on that function.
>
> texture2D takes an optional bias parameter which you might be able to
> use depending on what you are trying to do.

Maybe. This is one of those situations where I'm packing a bunch of
little textures into an atlas - but those sub-textures need to wrap so
they can repeat across a polygon. So I take the texture coordinates
modulo some number so they repeat over (say) 0 to 0.25 instead of 0 to
1. The problem with doing that is that when the texture coordinate jumps
from 0.249 to 0.001, the texture hardware assumes that the texture has
been massively minified and drops you down to virtually the lowest LOD
of the map. That causes bits of the other sub-maps to be blended into
the one you're rendering, and you get a horrible flickery seam around
the edges of your sub-maps.

However, if you have texture2DLod, you can figure out how fast you would
have been stepping across the map had you not done the modulo thing -
and force the system to pick the MIPmap LOD you actually wanted...and
the flickery seam magically 'goes away' (well, assuming you are careful
to generate your MIPmaps with a box filter and keep the sub-maps to
power-of-two boundaries, etc).

It's a bit of a hack - but for most things it works beautifully...except
when it's not supported.

Of course, now that I actually RTFM instead of stupidly assuming that
GLSL/ES == GLSL, I realize that the all-important ddx() and ddy()
functions have also gone AWOL...so the other way to do this (which is to
filter the texture yourself) won't work either. Argh!

:-(

Still - we have to make do with what we've got...so I'll have to figure
out something else to do. I don't immediately see how the bias thing is
going to help...but at least it has possibilities.

Anyway - this is straying off-topic.

From Ben...@ Fri Aug 20 14:23:24 2010
From: Ben...@ (Ben Vanik)
Date: Fri, 20 Aug 2010 21:23:24 +0000
Subject: [Public WebGL] Shader validator issue.
In-Reply-To: <7bf2c3806ad74a50ebc0bf922caa1ccb.squirrel@webmail.sjbaker.org>
References: <1633547377.336340.1282329204703.JavaMail.root@cm-mail03.mozilla.org> <7bf2c3806ad74a50ebc0bf922caa1ccb.squirrel@webmail.sjbaker.org>
Message-ID: <86A8C2D991159E449E63BD326AAF923D0219D7D4@TK5EX14MBXC104.redmond.corp.microsoft.com>

yeah I really wish that OES_standard_derivatives
(http://www.khronos.org/registry/gles/extensions/OES/OES_standard_derivatives.txt)
was part of the WebGL base spec.

iPhone has it and I haven't seen a desktop GPU that doesn't. *very*
handy for virtual texturing/other advanced tricks.

--
Ben Vanik
Live Labs / Seadragon

> [quoted message trimmed]
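Steve's atlas trick, wrap the coordinate yourself and then feed the LOD the unwrapped coordinate would have produced into texture2DLod, can be sketched as plain JavaScript math (the helper names are mine, not from the thread; in a real shader this would be GLSL):

```javascript
// Wrap a tiling coordinate into a sub-rectangle of the atlas, e.g. the
// region starting at subOrigin with width subSize (say [0, 0.25)),
// instead of letting it tile over the whole [0, 1) atlas.
function wrapIntoSubTexture(u, subOrigin, subSize) {
  const frac = u - Math.floor(u);      // repeat over [0, 1)
  return subOrigin + frac * subSize;   // squeeze into the sub-texture
}

// Pick the mip LOD from the step size the *unwrapped* coordinate takes
// per pixel, so the 0.249 -> 0.001 jump at the wrap seam cannot fool
// mip selection into choosing the smallest level.
function lodForStep(stepPerPixel, textureSizeTexels) {
  const texelsPerPixel = Math.abs(stepPerPixel) * textureSizeTexels;
  return Math.max(0, Math.log2(texelsPerPixel));
}
```

For example, a coordinate stepping 1/256th of the texture per pixel across a 1024-texel map covers 4 texels per pixel, so mip level 2 is the right choice regardless of where the wrap seam falls.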
From ced...@ Fri Aug 20 20:35:41 2010
From: ced...@ (Cedric Vivier)
Date: Sat, 21 Aug 2010 11:35:41 +0800
Subject: [Public WebGL] Proposed change to WebGL Event definition
In-Reply-To: <33F15A1B-FF87-44F1-ABA3-632494FA649A@apple.com>
References: <33F15A1B-FF87-44F1-ABA3-632494FA649A@apple.com>
Message-ID:

On Sat, Aug 21, 2010 at 00:57, Chris Marrin wrote:

> Currently the spec shows two event interfaces: WebGLContextLostEvent
> and WebGLContextRestoredEvent. Other Event generators in HTML tend to
> use a single Event object with many event types. For instance, there
> is a MouseEvent object with event types like "click", "mousedown",
> "mouseup", etc.

Looks good, but I think event type and status code should be separated
into two different fields. Consider using one handler for all three
events: the context creation error case would require 3 comparisons
instead of one, which is confusing and error-prone (easy to forget one)
when you only care about the context creation error itself.

unsigned short type - one of CONTEXT_LOST, CONTEXT_RESTORED or CONTEXT_ERROR
unsigned short statusCode - when type is CONTEXT_ERROR, one of NOT_AVAILABLE, NOT_ENABLED or OTHER_ERROR

The CONTEXT_ERROR wording has the advantage of being a little more
general than a creation-time-only error. For instance, such an error
event could be raised when a context could not be restored or the user
just disabled WebGL.
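Cedric's proposed type/statusCode split can be sketched as a single handler (the numeric constant values below are made up for illustration; the proposal names the constants but assigns no values):

```javascript
// Hypothetical values -- the proposal only names these constants.
const CONTEXT_LOST = 0, CONTEXT_RESTORED = 1, CONTEXT_ERROR = 2;
const NOT_AVAILABLE = 0, NOT_ENABLED = 1, OTHER_ERROR = 2;

// One comparison on `type` answers "did something go wrong?", and
// `statusCode` then says why -- instead of comparing against three
// separate error event types.
function describeContextEvent(ev) {
  switch (ev.type) {
    case CONTEXT_LOST:     return "context lost";
    case CONTEXT_RESTORED: return "context restored";
    case CONTEXT_ERROR:
      if (ev.statusCode === NOT_AVAILABLE) return "error: not available";
      if (ev.statusCode === NOT_ENABLED)   return "error: not enabled";
      return "error: other";
    default:
      return "unknown";
  }
}
```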
Having a separate statusCode also has the advantage that we can use the
same organization wrt statusCode to deliver advanced status information
about context loss/restoration in a future revision of the spec, where
we might support partial context restoration (eg. statusCode PARTIAL vs
FULL - we discussed this a few months ago, for the case where some GL
resources are still available and thus it's not necessary to re-upload
everything).

Regards,

From gma...@ Sat Aug 21 18:25:41 2010
From: gma...@ (Gregg Tavares (wrk))
Date: Sat, 21 Aug 2010 18:25:41 -0700
Subject: [Public WebGL] Proposed change to WebGL Event definition
In-Reply-To: <606766238.336202.1282328522649.JavaMail.root@cm-mail03.mozilla.org>
References: <2C95F2D0-7C9F-4803-8939-75CB016ADAED@apple.com> <606766238.336202.1282328522649.JavaMail.root@cm-mail03.mozilla.org>
Message-ID:

On Fri, Aug 20, 2010 at 11:22 AM, Vladimir Vukicevic wrote:

> ----- Original Message -----
> > On Aug 20, 2010, at 10:02 AM, Vladimir Vukicevic wrote:
> >
> > > Looks good to me, and nicely solves the "give more info if we
> > > have to return null". It might be nice to add a status code that
> > > says something like "isn't available, might be soon" and then
> > > another for "became available" -- I'm thinking of the case where
> > > the browser wants to query the user if they want to allow WebGL.
> >
> > My small concern there is that such a message might be a security
> > and/or privacy violation. It would give the author information that
> > the user is being asked a question and has responded negatively.
> > Seems like you want to tell the author nothing when the dialog
> > comes up. But I'm no security expert...
>
> Hmm, so maybe a better way to do it would be to just add a
> context_available message; initially, you would get a not_available
> without necessarily knowing why. Only if the user accepts would you
> get the context_available event, and can try again. That lets the app
> put up some kind of UI on not available, but get notified if it does
> become available.

Do we really need messages for WebGL? Is WebGL more special than 2d
canvas or Flash?

Can't this happen behind the scenes? If the browser wants to give the
user an option for WebGL it should freeze the JavaScript on that page,
ask the question, and then unfreeze the JavaScript, letting context
creation succeed or fail.

I understand there are a few issues with WebGL, but users are unlikely
to care about those issues enough to want a question on every page that
uses WebGL. I'd suspect they are more likely to want a permission
message for canvas in general, to stop annoying ads. Since this
solution won't work for canvas in general, it seems like the wrong
solution.

From gma...@ Sat Aug 21 18:28:43 2010
From: gma...@ (Gregg Tavares (wrk))
Date: Sat, 21 Aug 2010 18:28:43 -0700
Subject: [Public WebGL] Proposed change to WebGL Event definition
In-Reply-To:
References: <2C95F2D0-7C9F-4803-8939-75CB016ADAED@apple.com> <606766238.336202.1282328522649.JavaMail.root@cm-mail03.mozilla.org>
Message-ID:

On Sat, Aug 21, 2010 at 6:25 PM, Gregg Tavares (wrk) wrote:

> [...]

To be clear, I don't have a problem with an event for failure. I have a
problem, if I understand the suggestion above, with the idea that the
correct way to use WebGL will require asynchronous initialization.
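The synchronous pattern Gregg is arguing for, where getContext simply succeeds or returns null, can be sketched as follows (exercised against a stub canvas below, since no real browser context is available here; "experimental-webgl" was the vendor-prefixed context name in use at the time):

```javascript
// Synchronous WebGL setup: getContext returns the context or null, so
// no event round-trip is needed just to learn whether WebGL exists.
function initWebGL(canvas) {
  const ctx =
    canvas.getContext("webgl") || canvas.getContext("experimental-webgl");
  if (!ctx) {
    return { ok: false, reason: "WebGL not available" };
  }
  return { ok: true, gl: ctx };
}

// Stub standing in for a real <canvas> element, for illustration only.
const fakeCanvas = {
  getContext(name) {
    return name === "experimental-webgl" ? { isStubContext: true } : null;
  },
};
```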
From cma...@ Sun Aug 22 08:01:04 2010
From: cma...@ (Chris Marrin)
Date: Sun, 22 Aug 2010 08:01:04 -0700
Subject: [Public WebGL] Proposed change to WebGL Event definition
In-Reply-To: <606766238.336202.1282328522649.JavaMail.root@cm-mail03.mozilla.org>
References: <606766238.336202.1282328522649.JavaMail.root@cm-mail03.mozilla.org>
Message-ID: <8CBC59B5-2F9B-4ABF-A913-5CA3910C9EA1@apple.com>

On Aug 20, 2010, at 11:22 AM, Vladimir Vukicevic wrote:

> Hmm, so maybe a better way to do it would be to just add a
> context_available message; initially, you would get a not_available
> without necessarily knowing why. Only if the user accepts would you
> get the context_available event, and can try again. That lets the app
> put up some kind of UI on not available, but get notified if it does
> become available.

I'm not sure what the use case is here. Isn't a return of null or
non-null from getContext all you need to indicate "context available"?

-----
~Chris
cmarrin...@

From cma...@ Sun Aug 22 08:05:30 2010
From: cma...@ (Chris Marrin)
Date: Sun, 22 Aug 2010 08:05:30 -0700
Subject: [Public WebGL] Proposed change to WebGL Event definition
In-Reply-To:
References: <33F15A1B-FF87-44F1-ABA3-632494FA649A@apple.com>
Message-ID: <817D1DD8-125C-44B0-95FF-C288EECB06E0@apple.com>

On Aug 20, 2010, at 8:35 PM, Cedric Vivier wrote:

> Looks good but I think event type and status code should be separated
> into two different fields. [...]
>
> unsigned short type - one of CONTEXT_LOST, CONTEXT_RESTORED or CONTEXT_ERROR
> unsigned short statusCode - when type is CONTEXT_ERROR, one of NOT_AVAILABLE, NOT_ENABLED or OTHER_ERROR

In DOM events the type is always the DOMString of the event type (e.g.,
"webglcontextlost"), so you do get what you're looking for, right?

> CONTEXT_ERROR wording has the advantage of being a little bit more
> general than creation-time only error. For instance such error event
> could be raised when a context could not be restored or the user just
> disabled WebGL.

Yes, we could make the names more general.

> Having a separate statusCode also has the advantage that we can use
> the same organization wrt statusCode to deliver advanced status
> information about context loss/restoration in a future revision of
> the spec [...]

There is a separate status code. The reason you don't see the 'type'
property in the WebGLContextEvent is that it's in the parent Event
interface.

-----
~Chris
cmarrin...@

From kbr...@ Mon Aug 23 13:41:59 2010
From: kbr...@ (Kenneth Russell)
Date: Mon, 23 Aug 2010 13:41:59 -0700
Subject: [Public WebGL] Shader validator issue.
In-Reply-To: <86A8C2D991159E449E63BD326AAF923D0219D7D4@TK5EX14MBXC104.redmond.corp.microsoft.com>
References: <1633547377.336340.1282329204703.JavaMail.root@cm-mail03.mozilla.org> <7bf2c3806ad74a50ebc0bf922caa1ccb.squirrel@webmail.sjbaker.org> <86A8C2D991159E449E63BD326AAF923D0219D7D4@TK5EX14MBXC104.redmond.corp.microsoft.com>
Message-ID:

Support for OES_standard_derivatives is in the process of being added
to the ANGLE shader validator. See
http://code.google.com/p/angleproject/issues/detail?id=25 . Once this
is in place, it will be possible to enable OES_standard_derivatives in
a WebGL app. (We need to start a WebGL extension registry...part of
this is still on my plate.)

-Ken

On Fri, Aug 20, 2010 at 2:23 PM, Ben Vanik wrote:

> yeah I really wish that OES_standard_derivatives
> (http://www.khronos.org/registry/gles/extensions/OES/OES_standard_derivatives.txt)
> was part of the WebGL base spec.
>
> iPhone has it and I haven't seen a desktop GPU that doesn't. *very*
> handy for virtual texturing/other advanced tricks.
>
> --
> Ben Vanik
> Live Labs / Seadragon
>
> [...]

From m.s...@ Mon Aug 23 14:07:42 2010
From: m.s...@ (M.Sirin)
Date: Mon, 23 Aug 2010 23:07:42 +0200
Subject: [Public WebGL] New Rendering Pipeline ?
Message-ID:

Hi, I was wondering if there is anything special about the WebGL
rendering pipeline, or is it still working exactly like the one of
OpenGL ES 2.0?

That means: API -> VBO -> Vertex Shader -> Primitive Processing ->
Primitive Assembly -> Rasterizer -> Fragment Shader -> Depth Stencil ->
Color Buffer Blend -> Dither -> Framebuffer

Right? There's nothing to read about it on the internet?
regards

From ste...@ Mon Aug 23 03:05:29 2010
From: ste...@ (Steve Baker)
Date: Mon, 23 Aug 2010 05:05:29 -0500
Subject: [Public WebGL] New Rendering Pipeline ?
In-Reply-To:
References:
Message-ID: <4C7247E9.5000805@sjbaker.org>

WebGL is (essentially) built on top of either OpenGL or OpenGL-ES - and
behaves more or less exactly like OpenGL-ES. The rendering pipeline is
therefore precisely the same as in those graphics libraries. The only
places where there are really any differences are at the API level:

* Restrictions due to the fact that we're running in a web browser (eg,
for the moment, there are no OpenGL extensions).
* Extensions to make things more convenient (eg the ability to load a
texture from disk with a single command).
* Things made necessary by the need to interface to JavaScript rather
than C/C++ (eg the Float32Array stuff).

So you should read OpenGL or OpenGL ES documentation - and you'll have a
pretty solid understanding of what's going on.

Your description is essentially correct - except, perhaps, that the last
step ("Framebuffer") is really the HTML5 canvas - which is then
composited into the real frame buffer by the browser.

-- Steve

M.Sirin wrote:
> Hi, I was wondering if there is anything special about the WebGL
> rendering pipeline, or is it still working exactly like the one of
> OpenGL ES 2.0?
>
> That means:
> API -> VBO -> Vertex Shader -> Primitive Processing -> Primitive Assembly
> -> Rasterizer -> Fragment Shader -> Depth Stencil -> Color Buffer Blend
> -> Dither -> Framebuffer
>
> Right? There's nothing to read about it on the internet?
>
> regards

From Ben...@ Mon Aug 23 18:25:07 2010
From: Ben...@ (Ben Vanik)
Date: Tue, 24 Aug 2010 01:25:07 +0000
Subject: [Public WebGL] Shader validator issue.
In-Reply-To:
References: <1633547377.336340.1282329204703.JavaMail.root@cm-mail03.mozilla.org> <7bf2c3806ad74a50ebc0bf922caa1ccb.squirrel@webmail.sjbaker.org> <86A8C2D991159E449E63BD326AAF923D0219D7D4@TK5EX14MBXC104.redmond.corp.microsoft.com>
Message-ID: <86A8C2D991159E449E63BD326AAF923D02AA675E@tk5ex14mbxc105.redmond.corp.microsoft.com>

Great news! Thanks for the info! Now I can continue scheming on some
interesting demos that use it :)

--
Ben Vanik
Live Labs / Seadragon

> [quoted thread trimmed]

From cma...@ Tue Aug 24 09:30:46 2010
From: cma...@ (Chris Marrin)
Date: Tue, 24 Aug 2010 09:30:46 -0700
Subject: [Public WebGL] New Rendering Pipeline ?
In-Reply-To: <4C7247E9.5000805@sjbaker.org>
References: <4C7247E9.5000805@sjbaker.org>
Message-ID:

On Aug 23, 2010, at 3:05 AM, Steve Baker wrote:

> WebGL is (essentially) built on top of either OpenGL, or OpenGL-ES - and
> behaves more or less exactly like OpenGL-ES.
>
> The rendering pipeline is therefore precisely the same as those graphics
> libraries.
The only places where there are really any differences are > at the API level: > > * Restrictions due to the fact we're running in a web browser (eg, for > the moment, there are no OpenGL extensions). > * Extensions to make things more convenient (eg the ability to load a > texture from disk with a single command) > * Things made necessary by the need to interface to JavaScript rather > than C/C++ (eg The Float32Array stuff). There are also a small number of restrictions added to make it possible to use D3D as a backend for WebGL. I believe there was an intent to put such differences on the Wiki, similar to http://khronos.org/webgl/wiki/WebGL_and_OpenGL_Differences, but I don't see it. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Tue Aug 24 09:31:28 2010 From: cma...@ (Chris Marrin) Date: Tue, 24 Aug 2010 09:31:28 -0700 Subject: [Public WebGL] Shader validator issue. In-Reply-To: References: <1633547377.336340.1282329204703.JavaMail.root@cm-mail03.mozilla.org> <7bf2c3806ad74a50ebc0bf922caa1ccb.squirrel@webmail.sjbaker.org> <86A8C2D991159E449E63BD326AAF923D0219D7D4@TK5EX14MBXC104.redmond.corp.microsoft.com> Message-ID: <0DC4E55F-7E18-402C-9C24-577DDC263C7B@apple.com> On Aug 23, 2010, at 1:41 PM, Kenneth Russell wrote: > Support for OES_standard_derivatives is in the process of being added > to the ANGLE shader validator. See > http://code.google.com/p/angleproject/issues/detail?id=25 . Once this > is in place, it will be possible to enable OES_standard_derivatives in > a WebGL app. (We need to start a WebGL extension registry...part of > this is still on my plate.) Extensions aren't enabled in WebKit yet, are they? How about Firefox? 
----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Tue Aug 24 09:52:13 2010 From: ste...@ (Steve Baker) Date: Tue, 24 Aug 2010 11:52:13 -0500 Subject: [Public WebGL] ETC texture compression. Message-ID: <4C73F8BD.2050601@sjbaker.org> Texture compression is now rising to the top of my list of priorities... I just got an email from Jacob Ström (at Ericsson) giving further details of the ETC compression scheme - and a link to some useful papers on the subject. For anyone who is interested, I've added them as links to the Wikipedia article on ETC compression: http://en.wikipedia.org/wiki/Ericsson_Texture_Compression Anyway - a couple of highlights from Jacob's email that pretty much confirm my intuition: 1) Jacob agrees that ETC1 isn't useful for compressing normal maps. 2) ETC1 can't compress RGBA maps, for that you need ETC2 - which isn't finalized yet. 3) They believe that ETC1 quality is slightly better than S3TC (aka DXT1) - but the artifacts are of a different nature and some test images don't show an improvement. (Since the compression rates for ETC1 (6x) are not as great as DXT1 (8x), that's about what you'd expect). Anyway, on to my question...some graphics hardware (like my ancient 6800 card and my relatively new GeForce 285) don't support the OpenGL ETC1 extension. Both of those nVidia cards have GL_ARB_texture_compression, GL_EXT_texture_compression_s3tc and GL_NV_texture_compression_vtc - and the 285 card has 'latc' and 'rgtc'...but neither support GL_OES_compressed_ETC1_RGB8_texture - which is the ETC1 extension.
-- Steve ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Tue Aug 24 10:13:44 2010 From: cma...@ (Chris Marrin) Date: Tue, 24 Aug 2010 10:13:44 -0700 Subject: [Public WebGL] ETC texture compression. In-Reply-To: <4C73F8BD.2050601@sjbaker.org> References: <4C73F8BD.2050601@sjbaker.org> Message-ID: <48171ECE-5ED4-479C-A110-BA6659E2D291@apple.com> On Aug 24, 2010, at 9:52 AM, Steve Baker wrote: > Texture compression is now rising to the top of my list of priorities... > > I just got an email from Jacob Ström (at Ericsson) giving further > details of the ETC compression scheme - and a link to some useful papers > on the subject. For anyone who is interested, I've added them as links > to the Wikipedia article on ETC compression: > > http://en.wikipedia.org/wiki/Ericsson_Texture_Compression > > Anyway - a couple of highlights from Jacob's email that pretty much > confirm my intuition: > > 1) Jacob agrees that ETC1 isn't useful for compressing normal maps. > 2) ETC1 can't compress RGBA maps, for that you need ETC2 - which isn't > finalized yet. > 3) They believe that ETC1 quality is slightly better than S3TC (aka > DXT1) - but the artifacts are of a different nature and some test images > don't show an improvement. (Since the compression rates for ETC1 (6x) > are not as great as DXT1 (8x), that's about what you'd expect). > > Anyway, on to my question...some graphics hardware (like my ancient 6800 > card and my relatively new GeForce 285) don't support the OpenGL ETC1 > extension. Both of those nVidia cards have GL_ARB_texture_compression, > GL_EXT_texture_compression_s3tc and GL_NV_texture_compression_vtc - and > the 285 card has 'latc' and 'rgtc'...but neither support > GL_OES_compressed_ETC1_RGB8_texture - which is the ETC1 extension.
I > presume cellphones are much more likely to provide this support - but > evidently, it's far from universal. > > So how will WebGL handle this on the desktop in practice? This is where the licensing for the software ETC1 decompressor becomes an issue. If such a thing were unencumbered a WebGL implementation could use it to decompress ETC1 textures on systems without direct support and send the texture to the hardware uncompressed. An implementation could also recompress the image into a format supported by the hardware, but this might not be desirable because it would produce unexpected artifacts. The bottom line is that a WebGL implementation supporting ETC1 would have to accept these textures as input, what it does with them is implementation specific. But for universal support an unencumbered ETC1 decompressor is needed. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ced...@ Tue Aug 24 10:33:10 2010 From: ced...@ (Cedric Vivier) Date: Wed, 25 Aug 2010 01:33:10 +0800 Subject: [Public WebGL] ETC texture compression. In-Reply-To: <48171ECE-5ED4-479C-A110-BA6659E2D291@apple.com> References: <4C73F8BD.2050601@sjbaker.org> <48171ECE-5ED4-479C-A110-BA6659E2D291@apple.com> Message-ID: On Wed, Aug 25, 2010 at 01:13, Chris Marrin wrote: > The bottom line is that a WebGL implementation supporting ETC1 would have > to accept these textures as input, what it does with them is implementation > specific. But for universal support an unencumbered ETC1 decompressor is > needed. > Not if we limit ourselves just to providing access to low-level ES 2.0's compressedTex* functions. I still do not understand the fears of incompatibilities it generates, as using it is essentially exactly the same mechanism as for using any extension (ie.
may not be available so developer has to be careful about it - ie. getExtension("SOME_FORMAT") doesn't give the format enum). Providing access to compressedTex* functions makes compressed formats completely transparent to the WebGL implementations, developers just provide the functions with an UnsignedByteArray obtained elsewhere (eg. XMLHTTPRequest), therefore not requiring unencumbered decompressors and the likes. Libraries and/or web services can provide high level wrappers for easy developer usage and interop guarantees. Last but not least, there's no technical advantage in space or time at all at using ETC1 (or any decompression-optimized GPU-supported format) over the wire when it needs to be decompressed in software because hardware does not support it. Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Tue Aug 24 11:13:14 2010 From: kbr...@ (Kenneth Russell) Date: Tue, 24 Aug 2010 11:13:14 -0700 Subject: [Public WebGL] New Rendering Pipeline ? In-Reply-To: References: <4C7247E9.5000805@sjbaker.org> Message-ID: On Tue, Aug 24, 2010 at 9:30 AM, Chris Marrin wrote: > > On Aug 23, 2010, at 3:05 AM, Steve Baker wrote: > >> WebGL is (essentially) built on top of either OpenGL, or OpenGL-ES - and >> behaves more or less exactly like OpenGL-ES. >> >> The rendering pipeline is therefore precisely the same as those graphics >> libraries. The only places where there are really any differences are >> at the API level: >> >> * Restrictions due to the fact we're running in a web browser (eg, for >> the moment, there are no OpenGL extensions). >> * Extensions to make things more convenient (eg the ability to load a >> texture from disk with a single command) >> * Things made necessary by the need to interface to JavaScript rather >> than C/C++ (eg The Float32Array stuff). > > There are also a small number of restrictions added to make it possible to use D3D as a backend for WebGL.
I believe there was an intent to put such differences on the Wiki, similar to http://khronos.org/webgl/wiki/WebGL_and_OpenGL_Differences, but I don't see it. The differences between WebGL and OpenGL ES 2.0, which were intended to allow it to run on top of D3D, are in the WebGL specification. The differences between WebGL and desktop OpenGL, which may be surprising to desktop OpenGL programmers, are listed on the wiki as you mentioned above. -Ken > > ----- > ~Chris > cmarrin...@ > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Tue Aug 24 11:14:09 2010 From: kbr...@ (Kenneth Russell) Date: Tue, 24 Aug 2010 11:14:09 -0700 Subject: [Public WebGL] Shader validator issue. In-Reply-To: <0DC4E55F-7E18-402C-9C24-577DDC263C7B@apple.com> References: <1633547377.336340.1282329204703.JavaMail.root@cm-mail03.mozilla.org> <7bf2c3806ad74a50ebc0bf922caa1ccb.squirrel@webmail.sjbaker.org> <86A8C2D991159E449E63BD326AAF923D0219D7D4@TK5EX14MBXC104.redmond.corp.microsoft.com> <0DC4E55F-7E18-402C-9C24-577DDC263C7B@apple.com> Message-ID: On Tue, Aug 24, 2010 at 9:31 AM, Chris Marrin wrote: > > On Aug 23, 2010, at 1:41 PM, Kenneth Russell wrote: > >> Support for OES_standard_derivatives is in the process of being added >> to the ANGLE shader validator. See >> http://code.google.com/p/angleproject/issues/detail?id=25 . Once this >> is in place, it will be possible to enable OES_standard_derivatives in >> a WebGL app. (We need to start a WebGL extension registry...part of >> this is still on my plate.) > > Extensions aren't enabled in WebKit yet, are they? How about Firefox? 
They aren't enabled in WebKit yet. I think we need to specify them before adding support. -Ken > ----- > ~Chris > cmarrin...@ > > > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From gma...@ Tue Aug 24 11:24:41 2010 From: gma...@ (Gregg Tavares (wrk)) Date: Tue, 24 Aug 2010 11:24:41 -0700 Subject: [Public WebGL] ETC texture compression. In-Reply-To: References: <4C73F8BD.2050601@sjbaker.org> <48171ECE-5ED4-479C-A110-BA6659E2D291@apple.com> Message-ID: On Tue, Aug 24, 2010 at 10:33 AM, Cedric Vivier wrote: > On Wed, Aug 25, 2010 at 01:13, Chris Marrin wrote: > >> The bottom line is that a WebGL implementation supporting ETC1 would have >> to accept these textures as input, what it does with them is implementation >> specific. But for universal support an unencumbered ETC1 decompressor is >> needed. >> > > > Not if we limit ourselves just to provide access to low-level ES 2.0's > compressedTex* functions. > I still do not understand the fears of incompatibilities it generates as > using it is essentially exactly the same mechanism as for using any > extension > The environment is not the same. When you take a Windows app and port it to OSX there's a ton of work that needs to be done. If you then port the app to iPhone you also have a ton of work. You have to use ObjectiveC, you have to call different OS functions. None of that is true on the Web. When you create a page it's supposed to run everywhere. That's mostly true with very few exceptions, which is why cellphones can browse the web. There is no "porting". Sure pages often have to check for a few incompatibilities in browsers but those are arguably failures of the browser makers to agree rather than by design such as with OpenGL extensions. > (ie. may not be available so developer has to be careful about it - ie.
> getExtension("SOME_FORMAT") doesn't give the format enum). > > Providing access to compressedTex* functions makes compressed formats > completely transparent to the WebGL implementations, developers just provide > the functions with an UnsignedByteArray obtained elsewhere (eg. > XMLHTTPRequest), therefore not requiring unencumbered decompressors and the > likes. > > Libraries and/or web services can provide high level wrappers for easy > developer usage and interop guarantees. > > Last but not least, there's no technical advantage in space or time at all > at using ETC1 (or any decompression-optimized GPU-supported format) over the > wire when it needs to be decompressed in software because hardware does not > support it. > > > Regards, > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ced...@ Tue Aug 24 11:28:56 2010 From: ced...@ (Cedric Vivier) Date: Wed, 25 Aug 2010 02:28:56 +0800 Subject: [Public WebGL] ETC texture compression. In-Reply-To: References: <4C73F8BD.2050601@sjbaker.org> <48171ECE-5ED4-479C-A110-BA6659E2D291@apple.com> Message-ID: On Wed, Aug 25, 2010 at 02:24, Gregg Tavares (wrk) wrote: > Sure pages often have to check for a few incompatibilities in browsers but > those are arguably failures of the browser makers to agree rather than by > design such as with OpenGL extensions. > Yeah but by this argument why we do allow and plan for WebGL extensions then? By definition they won't be available on all implementations either (thus needing special care by its consumers not to break on non-supported platforms...). Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From oli...@ Tue Aug 24 11:32:20 2010 From: oli...@ (Oliver Hunt) Date: Tue, 24 Aug 2010 11:32:20 -0700 Subject: [Public WebGL] ETC texture compression. 
In-Reply-To: References: <4C73F8BD.2050601@sjbaker.org> <48171ECE-5ED4-479C-A110-BA6659E2D291@apple.com> Message-ID: <8AC51E94-13F5-4729-B396-C94CE8585031@apple.com> Which is basically why I opposed them some time ago (I still do, but I have no new arguments and merely repeating them is a waste of everyones time) --Oliver On Aug 24, 2010, at 11:28 AM, Cedric Vivier wrote: > On Wed, Aug 25, 2010 at 02:24, Gregg Tavares (wrk) wrote: > Sure pages often have to check for a few incompatibilities in browsers but those are arguably failures of the browser makers to agree rather than by design such as with OpenGL extensions. > > Yeah but by this argument why we do allow and plan for WebGL extensions then? By definition they won't be available on all implementations either (thus needing special care by its consumers not to break on non-supported platforms...). > > > Regards, > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cal...@ Tue Aug 24 11:35:26 2010 From: cal...@ (Mark Callow) Date: Tue, 24 Aug 2010 11:35:26 -0700 Subject: [Public WebGL] ETC texture compression. In-Reply-To: <48171ECE-5ED4-479C-A110-BA6659E2D291@apple.com> References: <4C73F8BD.2050601@sjbaker.org> <48171ECE-5ED4-479C-A110-BA6659E2D291@apple.com> Message-ID: <4C7410EE.3070601@hicorp.co.jp> Use in implementation of a Khronos API should be no problem as, under the Khronos membership agreement terms, Ericsson implicitly granted such a license when the ETC extension spec. was ratified. The license text in the source code does not reflect this. We are working on it with Ericsson's legal department to correct it. Please be patient. Regards -Mark Chris Marrin wrote: > This is where the licensing for the software ETC1 decompressor becomes an issue. If such a thing were unencumbered a WebGL implementation could use it to decompress ETC1 textures on systems without direct support and send the texture to the hardware uncompressed. 
An implementation could also recompress the image into a format supported by the hardware, but this might not be desirable because it would produce unexpected artifacts. > > The bottom line is that a WebGL implementation supporting ETC1 would have to accept these textures as input, what it does with them is implementation specific. But for universal support an unencumbered ETC1 decompressor is needed. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 378 bytes Desc: not available URL: From van...@ Tue Aug 24 11:44:13 2010 From: van...@ (Vangelis Kokkevis) Date: Tue, 24 Aug 2010 11:44:13 -0700 Subject: [Public WebGL] New Rendering Pipeline ? In-Reply-To: References: <4C7247E9.5000805@sjbaker.org> Message-ID: On Tue, Aug 24, 2010 at 9:30 AM, Chris Marrin wrote: > > On Aug 23, 2010, at 3:05 AM, Steve Baker wrote: > > > WebGL is (essentially) built on top of either OpenGL, or OpenGL-ES - and > > behaves more or less exactly like OpenGL-ES. > > > > The rendering pipeline is therefore precisely the same as those graphics > > libraries. The only places where there are really any differences are > > at the API level: > > > > * Restrictions due to the fact we're running in a web browser (eg, for > > the moment, there are no OpenGL extensions). > > * Extensions to make things more convenient (eg the ability to load a > > texture from disk with a single command) > > * Things made necessary by the need to interface to JavaScript rather > > than C/C++ (eg The Float32Array stuff). > > There are also a small number of restrictions added to make it possible to > use D3D as a backend for WebGL. I believe there was an intent to put such > differences on the Wiki, similar to > http://khronos.org/webgl/wiki/WebGL_and_OpenGL_Differences, but I don't > see it. > They are in Section 6 of the WebGL spec, specifically 6.5 through 6.9 . 
Vangelis > > ----- > ~Chris > cmarrin...@ > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ced...@ Tue Aug 24 11:49:50 2010 From: ced...@ (Cedric Vivier) Date: Wed, 25 Aug 2010 02:49:50 +0800 Subject: [Public WebGL] New Rendering Pipeline ? In-Reply-To: References: <4C7247E9.5000805@sjbaker.org> Message-ID: On Wed, Aug 25, 2010 at 02:13, Kenneth Russell wrote: > > The differences between WebGL and OpenGL ES 2.0, which were intended > to allow it to run on top of D3D, are in the WebGL specification. May be off-topic but I could not find any similar document for Native Client's OpenGL ES 2.0 ? I assume both WebGL and NaCl's ES 2.0 could be using the same backend, therefore NaCl's ES 2.0 should have the same limitations ? Afaik NaCl documentation currently claims to be "pure" ES 2.0. Any insight on this ? Kenneth ? Gregg ? Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From ste...@ Tue Aug 24 12:12:57 2010 From: ste...@ (Steve Baker) Date: Tue, 24 Aug 2010 14:12:57 -0500 Subject: [Public WebGL] ETC texture compression. In-Reply-To: <8AC51E94-13F5-4729-B396-C94CE8585031@apple.com> References: <4C73F8BD.2050601@sjbaker.org> <48171ECE-5ED4-479C-A110-BA6659E2D291@apple.com> <8AC51E94-13F5-4729-B396-C94CE8585031@apple.com> Message-ID: <4C7419B9.6010002@sjbaker.org> From the app developer's point of view, there are really three separate situations with extensions. 1) Sometimes they are used to add features to OpenGL that lots of hardware supports - but which the standard has yet to encompass. 2) Sometimes they are used when one particular implementation has a sexy new feature that the others don't - and which is therefore unlikely to make it into the spec.
3) Sometimes they are used to cover an annoying situation where some feature is widely supported but which is somehow legally encumbered and cannot be a part of the standard. In the case of (1), you can probably just assume the existence of the feature - perhaps some ancient or odd-ball hardware doesn't support it - but that's life. I prefer that old hardware doesn't advertise the feature if using it causes a fallback on software emulation of the feature - so this is OK. In the case of (2), it's rare that I'll use the extension. Even though the feature might provide some massive speedup, it's likely that it does that on platforms that are already on the cutting edge of performance where I don't NEED a massive speedup. About the only time I'll do something here is when nVidia have implemented some feature one way and ATI implemented more or less the same feature in a slightly different way. Since both major brands implement more or less the same thing, I can use the extension mechanism to trigger special case code to cover the subtle differences. In the case of (3), you swear and curse at the stupidity of software patents and probably use the feature without a fallback. S3TC texture compression is a good example of that. Oliver Hunt wrote: > Which is basically why I opposed them some time ago (I still do, but I > have no new arguments and merely repeating them is a waste of > everyones time) > > --Oliver > > On Aug 24, 2010, at 11:28 AM, Cedric Vivier wrote: > >> On Wed, Aug 25, 2010 at 02:24, Gregg Tavares (wrk) > > wrote: >> >> Sure pages often have to check for a few incompatibilities in >> browsers but those are arguably failures of the browser makers to >> agree rather than by design such as with OpenGL extensions. >> >> >> Yeah but by this argument why we do allow and plan for WebGL >> extensions then? 
By definition they won't be available on all >> implementations either (thus needing special care by its consumers >> not to break on non-supported platforms...). >> >> >> Regards, >> > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From oli...@ Tue Aug 24 12:20:42 2010 From: oli...@ (Oliver Hunt) Date: Tue, 24 Aug 2010 12:20:42 -0700 Subject: [Public WebGL] ETC texture compression. In-Reply-To: <4C7419B9.6010002@sjbaker.org> References: <4C73F8BD.2050601@sjbaker.org> <48171ECE-5ED4-479C-A110-BA6659E2D291@apple.com> <8AC51E94-13F5-4729-B396-C94CE8585031@apple.com> <4C7419B9.6010002@sjbaker.org> Message-ID: <04B7E9CA-FE8B-4D0B-B3EF-A03BDD3BC36E@apple.com> On Aug 24, 2010, at 12:12 PM, Steve Baker wrote: > From the app developer's point of view, there are really three separate > situations with extensions. > > 1) Sometimes they are used to add features to OpenGL that lots of > hardware supports - but which the standard has yet to encompass. No extension in hardware can be supported in webgl without webgl standardising the interface to the extension. > 2) Sometimes they are used when one particular implementation has a sexy > new feature that the others don't - and which is therefore unlikely to > make it into the spec. eg. making a website that is not compatible across multiple platforms. > 3) Sometimes they are used to cover an annoying situation where some > feature is widely supported but which is somehow legally encumbered and > cannot be a part of the standard. As in (1) the extension interface has to be spec'd, as the binding to any given extension has to be standardised in webgl. If GL can't support it, the chances are that WebGL can't either. 
--Oliver ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Tue Aug 24 12:39:16 2010 From: vla...@ (Vladimir Vukicevic) Date: Tue, 24 Aug 2010 12:39:16 -0700 (PDT) Subject: [Public WebGL] ETC texture compression. In-Reply-To: Message-ID: <1892571691.368690.1282678756224.JavaMail.root@cm-mail03.mozilla.org> ----- Original Message ----- > On Wed, Aug 25, 2010 at 02:24, Gregg Tavares (wrk) < gman...@ > > wrote: > > Sure pages often have to check for a few incompatibilities in browsers > but those are arguably failures of the browser makers to agree rather > than by design such as with OpenGL extensions. > > > Yeah but by this argument why we do allow and plan for WebGL > extensions then? By definition they won't be available on all > implementations either (thus needing special care by its consumers not > to break on non-supported platforms...). The extensions that we're looking at are things that are defined as extensions, but generally have very wide (or universal) availability. So that argument doesn't apply there -- it's very different defining an extension for webgl for something that will be present on 95% (if not 100%) of target hardware, vs. something that is known to not be present on half or more. - Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From bja...@ Tue Aug 24 13:27:10 2010 From: bja...@ (Benoit Jacob) Date: Tue, 24 Aug 2010 13:27:10 -0700 (PDT) Subject: [Public WebGL] Shader validator issue. 
In-Reply-To: <0DC4E55F-7E18-402C-9C24-577DDC263C7B@apple.com> Message-ID: <1406103368.369268.1282681630153.JavaMail.root@cm-mail03.mozilla.org> ----- Original Message ----- > On Aug 23, 2010, at 1:41 PM, Kenneth Russell wrote: > > > Support for OES_standard_derivatives is in the process of being > > added > > to the ANGLE shader validator. See > > http://code.google.com/p/angleproject/issues/detail?id=25 . Once > > this > > is in place, it will be possible to enable OES_standard_derivatives > > in > > a WebGL app. (We need to start a WebGL extension registry...part of > > this is still on my plate.) > > Extensions aren't enabled in WebKit yet, are they? How about Firefox? They're not enabled in Firefox either. Benoit > > ----- > ~Chris > cmarrin...@ > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cal...@ Tue Aug 24 13:51:36 2010 From: cal...@ (Mark Callow) Date: Tue, 24 Aug 2010 13:51:36 -0700 Subject: [Public WebGL] New Rendering Pipeline ? In-Reply-To: References: <4C7247E9.5000805@sjbaker.org> Message-ID: <4C7430D8.2080301@hicorp.co.jp> > On Wed, Aug 25, 2010 at 02:13, Kenneth Russell > wrote: > > The differences between WebGL and OpenGL ES 2.0, which were intended > to allow it to run on top of D3D, are in the WebGL specification. > > > May be off-topic but I could not find any similar document for Native > Client's OpenGL ES 2.0 ? > I assume both WebGL and NaCl's ES 2.0 could be using the same backend > therefore NaCl's ES 2.0 should have the same limitations ? > Afaik NaCl documentation currently pretends to be "pure" ES 2.0. > > Any insight on this ? Kenneth ? Gregg ? 
To use the OpenGL ES 2.0 name or mark an implementation is supposed to pass the OpenGL 2.0 ES conformance tests. So NaCl (a.k.a. salt) ought to be "pure" ES 2.0. If it is not, they need to call it something else. That this question arises at all demonstrates that something is lacking in Khronos's marketing and enforcement of its marks. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 392 bytes Desc: not available URL: From gma...@ Tue Aug 24 13:59:38 2010 From: gma...@ (Gregg Tavares (wrk)) Date: Tue, 24 Aug 2010 13:59:38 -0700 Subject: [Public WebGL] New Rendering Pipeline ? In-Reply-To: <4C7430D8.2080301@hicorp.co.jp> References: <4C7247E9.5000805@sjbaker.org> <4C7430D8.2080301@hicorp.co.jp> Message-ID: On Tue, Aug 24, 2010 at 1:51 PM, Mark Callow wrote: > On Wed, Aug 25, 2010 at 02:13, Kenneth Russell wrote: >> >> The differences between WebGL and OpenGL ES 2.0, which were intended >> to allow it to run on top of D3D, are in the WebGL specification. > > > May be off-topic but I could not find any similar document for Native > Client's OpenGL ES 2.0 ? > I assume both WebGL and NaCl's ES 2.0 could be using the same backend > therefore NaCl's ES 2.0 should have the same limitations ? > Afaik NaCl documentation currently pretends to be "pure" ES 2.0. > > Any insight on this ? Kenneth ? Gregg ? > > To use the OpenGL ES 2.0 name or mark an implementation is supposed to pass > the OpenGL 2.0 ES conformance tests. So NaCl (a.k.a. salt) ought to be > "pure" ES 2.0. If it is not, they need to call it something else. > NaCl does pass all of the OpenGL ES 2.0 conformance tests or will by v1.0 > > That this question arises at all demonstrates that something is lacking in > Khronos's marketing and enforcement of its marks. 
> > Regards > > -Mark > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cma...@ Tue Aug 24 16:14:07 2010 From: cma...@ (Chris Marrin) Date: Tue, 24 Aug 2010 16:14:07 -0700 Subject: [Public WebGL] Shader validator issue. In-Reply-To: References: <1633547377.336340.1282329204703.JavaMail.root@cm-mail03.mozilla.org> <7bf2c3806ad74a50ebc0bf922caa1ccb.squirrel@webmail.sjbaker.org> <86A8C2D991159E449E63BD326AAF923D0219D7D4@TK5EX14MBXC104.redmond.corp.microsoft.com> <0DC4E55F-7E18-402C-9C24-577DDC263C7B@apple.com> Message-ID: <96D2264E-0A94-41D4-A4D9-9FC559FB846C@apple.com> On Aug 24, 2010, at 11:14 AM, Kenneth Russell wrote: > On Tue, Aug 24, 2010 at 9:31 AM, Chris Marrin wrote: >> >> On Aug 23, 2010, at 1:41 PM, Kenneth Russell wrote: >> >>> Support for OES_standard_derivatives is in the process of being added >>> to the ANGLE shader validator. See >>> http://code.google.com/p/angleproject/issues/detail?id=25 . Once this >>> is in place, it will be possible to enable OES_standard_derivatives in >>> a WebGL app. (We need to start a WebGL extension registry...part of >>> this is still on my plate.) >> >> Extensions aren't enabled in WebKit yet, are they? How about Firefox? > > They aren't enabled in WebKit yet. I think we need to specify them > before adding support. The extension mechanism is defined in the spec, but the functions are not yet in WebKit, I see. Once they are, it seems reasonable to add experimental support for something like standard_derivatives. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Tue Aug 24 17:59:10 2010 From: ste...@ (Steve Baker) Date: Tue, 24 Aug 2010 19:59:10 -0500 Subject: [Public WebGL] ETC texture compression. 
In-Reply-To: <1892571691.368690.1282678756224.JavaMail.root@cm-mail03.mozilla.org> References: <1892571691.368690.1282678756224.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <4C746ADE.5080308@sjbaker.org> Vladimir Vukicevic wrote: > The extensions that we're looking at are things that are defined as extensions, but generally have very wide (or universal) availability. So that argument doesn't apply there -- it's very different defining an extension for webgl for something that will be present on 95% (if not 100%) of target hardware, vs. something that is known to not be present on half or more. > > - Vlad > The trouble is that if the extension is "nearly" universal - and useful enough to matter, many application authors are going to be tempted to use it - and not bother to write fallbacks for it because the additional revenue from (say) 5% of the market would not pay for the additional development cost. Also, a lot of ground-breaking web sites are produced by amateurs who may well find themselves unable to dig up a sufficiently ancient machine to test on to debug all of these possibilities. If WebGL programs don't run exactly the same everywhere, the web will become full of apps that simply fail for ill-explained reasons on random machines. That will reflect badly on the standard. I think we should avoid extensions in at least the first version of WebGL for exactly that reason. On balance, I would prefer that we didn't ever provide extensions. I believe that we should be aggressive about the hardware you need in order to be able to run WebGL. If 5% of machines don't have some important and "nearly" universal extension then let's make that a part of the core API and declare the 5% of machines obsolete and unable to support the API. Let's drive the community rather than lagging it. Right now - even without extensions - we have problems. 
Sometimes the underlying OpenGL will silently fall back on software emulation to implement a feature rather than admit that the hardware can't do it - and the result is too slow to use (Vertex textures...to pick a particularly nasty example). The spec claims that you can use this feature - but in practice it's not always usable - and the only way to detect that there is a problem and provide a fall-back is to measure in-game frame rates (which given garbage collection and the general randomness of timing in a browser environment is tricky at best)...argh! -- Steve ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cal...@ Tue Aug 24 18:08:48 2010 From: cal...@ (Mark Callow) Date: Tue, 24 Aug 2010 18:08:48 -0700 Subject: [Public WebGL] Extensions (was Re: ETC texture compression.) In-Reply-To: <4C746ADE.5080308@sjbaker.org> References: <1892571691.368690.1282678756224.JavaMail.root@cm-mail03.mozilla.org> <4C746ADE.5080308@sjbaker.org> Message-ID: <4C746D20.2080702@hicorp.co.jp> > If WebGL programs don't run exactly the same everywhere, the web will > become full of apps that simply fail for ill-explained reasons on random > machines. That will reflect badly on the standard. I think we should > avoid extensions in at least the first version of WebGL for exactly that > reason. I agree. Regards -Mark -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 392 bytes Desc: not available URL: From joe...@ Tue Aug 24 19:12:31 2010 From: joe...@ (Joe D Williams) Date: Tue, 24 Aug 2010 19:12:31 -0700 Subject: [Public WebGL] Shader validator issue. 
References: <1633547377.336340.1282329204703.JavaMail.root@cm-mail03.mozilla.org> <7bf2c3806ad74a50ebc0bf922caa1ccb.squirrel@webmail.sjbaker.org> <86A8C2D991159E449E63BD326AAF923D0219D7D4@TK5EX14MBXC104.redmond.corp.microsoft.com> <0DC4E55F-7E18-402C-9C24-577DDC263C7B@apple.com> <96D2264E-0A94-41D4-A4D9-9FC559FB846C@apple.com> Message-ID: > support for something like standard_derivitives. Is that a math thing, a first connection with OCL? Thanks and Best, Joe ----- Original Message ----- From: "Chris Marrin" To: "Kenneth Russell" Cc: "Ben Vanik" ; ; "Gregg Tavares (wrk)" ; "Vladimir Vukicevic" ; "public webgl" Sent: Tuesday, August 24, 2010 4:14 PM Subject: Re: [Public WebGL] Shader validator issue. > > On Aug 24, 2010, at 11:14 AM, Kenneth Russell wrote: > >> On Tue, Aug 24, 2010 at 9:31 AM, Chris Marrin >> wrote: >>> >>> On Aug 23, 2010, at 1:41 PM, Kenneth Russell wrote: >>> >>>> Support for OES_standard_derivatives is in the process of being >>>> added >>>> to the ANGLE shader validator. See >>>> http://code.google.com/p/angleproject/issues/detail?id=25 . Once >>>> this >>>> is in place, it will be possible to enable >>>> OES_standard_derivatives in >>>> a WebGL app. (We need to start a WebGL extension registry...part >>>> of >>>> this is still on my plate.) >>> >>> Extensions aren't enabled in WebKit yet, are they? How about >>> Firefox? >> >> They aren't enabled in WebKit yet. I think we need to specify them >> before adding support. > > The extension mechanism is defined in the spec, but the functions > are not yet in WebKit I see. Once they are, it seems reasonable to > add experimental support for something like standard_derivitives. 
> > ----- > ~Chris > cmarrin...@ > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ced...@ Tue Aug 24 19:56:54 2010 From: ced...@ (Cedric Vivier) Date: Wed, 25 Aug 2010 10:56:54 +0800 Subject: [Public WebGL] ETC texture compression. In-Reply-To: <4C746ADE.5080308@sjbaker.org> References: <1892571691.368690.1282678756224.JavaMail.root@cm-mail03.mozilla.org> <4C746ADE.5080308@sjbaker.org> Message-ID: On Wed, Aug 25, 2010 at 08:59, Steve Baker wrote: > The trouble is that if the extension is "nearly" universal - and useful > enough to matter, many application authors are going to be tempted to > use it - and not bother to write fallbacks for it because the additional > revenue from (say) 5% of the market would not pay for the additional > development cost. I agree. Counter-intuitively, I'd expect an extension available on only 50% of systems to be better supported for everyone (i.e. to have fallbacks) than an extension that runs for almost everyone (90%+), since the developer is less motivated to make the effort. In the end there might well be 10% of people left out of an app using a 90%-available extension, whereas nobody would be left out of an app using a 50%-available extension. Also, the specific case of compression extensions is the easiest kind of extension to provide a fallback for, e.g. 
: var format; if (format = gl.getExtension("ETC2")) { loadETC2Texture(format, name); } else if (format = gl.getExtension("ETC1")) { loadETC1Texture(format, name); } else if (format = gl.getExtension("PVRTC")) { loadPVRTCTexture(format, name); } else { //fallback loadPNGTexture(name); } That said, I like the concept of a lowest-common denominator WebGL that guarantees WebGL code to run anywhere, especially for 1.0, on the other hand for the future of WebGL I'm wary of privileging policy decisions over technical decisions when it comes to give more power to advanced developers (indeed - more power means more responsibilities, there's always ways to make an app fail on some platforms) Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From ste...@ Tue Aug 24 20:04:20 2010 From: ste...@ (Steve Baker) Date: Tue, 24 Aug 2010 22:04:20 -0500 Subject: [Public WebGL] Shader validator issue. In-Reply-To: References: <1633547377.336340.1282329204703.JavaMail.root@cm-mail03.mozilla.org> <7bf2c3806ad74a50ebc0bf922caa1ccb.squirrel@webmail.sjbaker.org> <86A8C2D991159E449E63BD326AAF923D0219D7D4@TK5EX14MBXC104.redmond.corp.microsoft.com> <0DC4E55F-7E18-402C-9C24-577DDC263C7B@apple.com> <96D2264E-0A94-41D4-A4D9-9FC559FB846C@apple.com> Message-ID: <4C748834.8080102@sjbaker.org> Joe D Williams wrote: >> support for something like standard_derivitives. > Is that a math thing, a first connection with OCL? > Thanks and Best, > Joe Not exactly...but it's going to take a bit of explaining...I'll take a shot at it! Although each fragment shader program looks like a completely separate program running in splendid isolation on a little computer inside the GPU, there are in fact a bunch of fragment shaders running in parallel - each working on a different pixel of the same triangle. (Technically: on a different "fragment" - but it's OK to think "pixel".) 
Typically, these processors all run in perfect lock-step - instruction-by-instruction...and most GPUs have dozens to hundreds of them running along in parallel. When you are doing texturing, the shader needs to figure out which MIPmap level to read - which depends on how much the texture is magnified or minified. But each little shader processor only knows the texture coordinate at the precise location of the screen pixel that it's processing. So how does it know whether the texture is being stretched or squashed? The answer is that it 'cheats' - it takes a sneaky peek at what the other fragment shaders are doing on nearby fragments. By taking a sneaky peek at the next fragment above, below and to either side, it can figure out how much difference there is in the texture coordinates across the fragment - which tells it how much magnification and/or minification is going on...and that gets you the MIPmap level. Because the fragment shader processors run in perfect lockstep, they are always ready with their values for the texture coordinates at precisely the same moment. In math terms, the rate of change of a variable is expressed as a 'derivative' (as in 'differential calculus') - indicated with the letter 'd' (actually, the Greek 'delta') and in calculus you see things like 'dx/dy' - the rate of change of x as a function of y. What the MIPmap level calculator is figuring out is the rate of change of the texture S and T coordinates as a function of the X and Y screen coordinates: ds/dx, dt/dx, ds/dy and dt/dy. Now - inside shader code, it's often convenient to know how the values of other variables are changing across the screen - this is handy when you want to avoid aliasing in procedurally-generated stuff, for example. Because this is so useful, most (but evidently not all) graphics cards allow you to use the "take a sneaky peek at your neighbours' work" trick to find the rate of change of any variable with respect to screen-space X or Y. 
In shader code, you call the 'ddx(thing)' and 'ddy(thing)' functions - which tell you the rate of change of 'thing' with respect to the screen coordinates...d(thing)/dx and d(thing)/dy. Evidently some embedded processors don't (or at least, didn't) support this mechanism (probably because they don't have many shader processors - so the opportunity to look at their neighbours isn't there). Hence this functionality is an extension in OpenGL-ES - although it is in the core standard for mainstream OpenGL and Direct3D. It's a shame it's not in WebGL because quite a few algorithms rely on it...but c'est la vie. -- Steve > > ----- Original Message ----- From: "Chris Marrin" > To: "Kenneth Russell" > Cc: "Ben Vanik" ; ; "Gregg > Tavares (wrk)" ; "Vladimir Vukicevic" > ; "public webgl" > Sent: Tuesday, August 24, 2010 4:14 PM > Subject: Re: [Public WebGL] Shader validator issue. > > >> >> On Aug 24, 2010, at 11:14 AM, Kenneth Russell wrote: >> >>> On Tue, Aug 24, 2010 at 9:31 AM, Chris Marrin >>> wrote: >>>> >>>> On Aug 23, 2010, at 1:41 PM, Kenneth Russell wrote: >>>> >>>>> Support for OES_standard_derivatives is in the process of being added >>>>> to the ANGLE shader validator. See >>>>> http://code.google.com/p/angleproject/issues/detail?id=25 . Once this >>>>> is in place, it will be possible to enable >>>>> OES_standard_derivatives in >>>>> a WebGL app. (We need to start a WebGL extension registry...part of >>>>> this is still on my plate.) >>>> >>>> Extensions aren't enabled in WebKit yet, are they? How about Firefox? >>> >>> They aren't enabled in WebKit yet. I think we need to specify them >>> before adding support. >> >> The extension mechanism is defined in the spec, but the functions are >> not yet in WebKit I see. Once they are, it seems reasonable to add >> experimental support for something like standard_derivitives. 
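The MIPmap-level selection Steve describes reduces to a little arithmetic once you have the cross-fragment deltas. The sketch below is a simplified model of that computation, not any particular GPU's exact formula (note that in GLSL the "peek" functions are spelled dFdx/dFdy, exposed in ES via OES_standard_derivatives; ddx/ddy is the HLSL spelling):

```javascript
// Simplified model of mipmap level-of-detail selection. dudx etc. are
// the texture-coordinate deltas between neighbouring fragments - the
// values the hardware obtains by "peeking" at adjacent shader
// invocations (dFdx/dFdy in GLSL terms).
function mipmapLod(dudx, dvdx, dudy, dvdy, texWidth, texHeight) {
  // How many texels we step per screen pixel, along screen X and Y.
  const stepX = Math.hypot(dudx * texWidth, dvdx * texHeight);
  const stepY = Math.hypot(dudy * texWidth, dvdy * texHeight);
  // The level comes from the larger rate of change: log2 of texels-per-pixel.
  return Math.max(0, Math.log2(Math.max(stepX, stepY)));
}

// A 256x256 texture mapped 1:1 texel-per-pixel selects level 0;
// minified 4x in each direction, it selects level 2.
console.log(mipmapLod(1 / 256, 0, 0, 1 / 256, 256, 256)); // 0
console.log(mipmapLod(4 / 256, 0, 0, 4 / 256, 256, 256)); // 2
```

Real hardware evaluates this per 2x2 fragment quad and may use cheaper approximations of the max, but the shape of the calculation is the same.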
>> >> ----- >> ~Chris >> cmarrin...@ >> >> >> >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ced...@ Tue Aug 24 20:22:33 2010 From: ced...@ (Cedric Vivier) Date: Wed, 25 Aug 2010 11:22:33 +0800 Subject: [Public WebGL] New Rendering Pipeline ? In-Reply-To: References: <4C7247E9.5000805@sjbaker.org> <4C7430D8.2080301@hicorp.co.jp> Message-ID: On Wed, Aug 25, 2010 at 04:59, Gregg Tavares (wrk) wrote: > NaCl does pass all of the OpenGL ES 2.0 conformance tests or will by v1.0 > Interesting. Does this mean NaCl cannot run (fully) on top of Direct3D / Angle ?? If it can, why do we keep limitations on WebGL when NaCl can do without them ? Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From vla...@ Tue Aug 24 20:43:02 2010 From: vla...@ (Vladimir Vukicevic) Date: Tue, 24 Aug 2010 20:43:02 -0700 (PDT) Subject: [Public WebGL] New Rendering Pipeline ? In-Reply-To: Message-ID: <1696763797.372412.1282707782422.JavaMail.root@cm-mail03.mozilla.org> ----- Original Message ----- > On Wed, Aug 25, 2010 at 04:59, Gregg Tavares (wrk) < gman...@ > > wrote: > > NaCl does pass all of the OpenGL ES 2.0 conformance tests or will by > v1.0 > > Interesting. > > Does this mean NaCl cannot run (fully) on top of Direct3D / Angle ?? > If it can, why do we keep limitations on WebGL when NaCl can do > without them ? 
What does NaCl have to do with WebGL, or for that matter, with Direct3D? It is, AFAIK, currently primarily an API for Chrome OS extensibility, so neither WebGL nor Direct3D apply... (we might be getting a little off topic here...) - Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ced...@ Tue Aug 24 21:00:09 2010 From: ced...@ (Cedric Vivier) Date: Wed, 25 Aug 2010 12:00:09 +0800 Subject: [Public WebGL] New Rendering Pipeline ? In-Reply-To: <1696763797.372412.1282707782422.JavaMail.root@cm-mail03.mozilla.org> References: <1696763797.372412.1282707782422.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Wed, Aug 25, 2010 at 11:43, Vladimir Vukicevic wrote: > What does NaCl have to do with WebGL, or for that matter, with Direct3D? > It is, AFAIK, currently primarily an API for Chrome OS extensibility, so > neither WebGL nor Direct3D apply... (we might be getting a little off topic > here...) > > Nope NaCl is supposed to work with any Chrome browser, not only Chrome OS. Both NaCl and WebGL are implementing (to some extent) OpenGL ES 2.0 in the browser, they share a lot of similar constraints when it comes to safety / portability ; Chrome browser, a WebGL implementation, might even share quite a bit of backend code between these two APIs. IMHO it is interesting to learn how and why NaCl can fully implement ES 2.0 without subtle limitations when WebGL cannot, isn't it ? Of course if NaCl on Windows Chrome is _not_ supposed to ever run on top of Direct3D/Angle then my question is moot and I apologize for the being a bit off-topic. Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From ste...@ Tue Aug 24 21:02:06 2010 From: ste...@ (Steve Baker) Date: Tue, 24 Aug 2010 23:02:06 -0500 Subject: [Public WebGL] ETC texture compression. 
In-Reply-To: References: <1892571691.368690.1282678756224.JavaMail.root@cm-mail03.mozilla.org> <4C746ADE.5080308@sjbaker.org> Message-ID: <4C7495BE.2080107@sjbaker.org> Cedric Vivier wrote: > On Wed, Aug 25, 2010 at 08:59, Steve Baker > wrote: > > The trouble is that if the extension is "nearly" universal - and > useful > enough to matter, many application authors are going to be tempted to > use it - and not bother to write fallbacks for it because the > additional > revenue from (say) 5% of the market would not pay for the additional > development cost. > > > I agree, counter-intuitively I'd think an extension with available on > only 50% systems to be better supported for everyone (ie. have > fallbacks) than an extension that runs for almost everyone (90%+) thus > the developer might be less interested to do the effort. Maybe - but what tends to happen is that the 50% of hardware that has the extension is the 50% which are the latest, greatest, fastest GPU's - and the ones that don't support it are the slowest, oldest, crappiest ones. So supporting the extension in your application makes the machines that are already plenty fast enough to run your app run yet faster - and leaves the slower machines running the cruddy fallback code - which is likely to be slower. Enthusiasm for using such extensions is generally small...especially if it takes significant art changes to make use of it. Having said that, the ETC extension isn't like that - it's (presumably) supported on cellphones but not on desktops - the cellphones are short of RAM - so they need compression - but most desktops have at least 6x more video RAM than cellphones do - so perhaps they don't need 6x compression in order to run the same applications. That makes it worth supporting ETC as an extension. > Also the specific case of compression extensions is the easiest kind > of extension do fallback for, eg. 
: > > var format; > if (format = gl.getExtension("ETC2")) { > loadETC2Texture(format, name); > } else if (format = gl.getExtension("ETC1")) { > loadETC1Texture(format, name); > } else if (format = gl.getExtension("PVRTC")) { > loadPVRTCTexture(format, name); > } else { //fallback > loadPNGTexture(name); > } Well...kinda. The trouble is that not all maps 'survive' a particular compression scheme - often artists and shader programmers have to make a case-by-case call. If you need ETC2 to support alpha - then falling back to ETC1 isn't going to work. Similarly, we're not going to be using ETC1 for normal maps - no matter whether it's supported or not. If you know that the underlying hardware is going to crunch your beautiful 8/8/8/8 textures into 4/4/4/4 and your 8/8/8's into 5/6/5 no matter what - then maybe you don't mind that ETC1 is going to reduce your color precision...but maybe you figure that now it's only giving you 3x compression instead of 6x and you might prefer to stick with uncompressed maps...it's a really tricky decision. I've been wondering how hard it would be to generate a compression scheme that ran in shader code using only standard texture facilities. Maybe one could emulate ETC1 inside the shader for desktop machines without ETC support...but without access to texture2DLod and ddx/ddy functions, it would be tough. But if we had WebGL generate extra shader code to do it on hardware that doesn't support ETC1 but does support texture2DLod/ddx/ddy then maybe we'd end up with universal ETC1 coverage. Are there any machines that don't support ETC1 AND don't support texture2DLod/ddx/ddy in the fragment shader? 
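Because survivability is a case-by-case call, a fallback chain can't be purely mechanical; per map, it ends up looking something like the sketch below. The format names, the map descriptors, and the chooser itself are all invented for illustration - the point is only that the decision depends on the map's content, not just on what the hardware exposes:

```javascript
// Hypothetical per-map format chooser: a blind ETC2 -> ETC1 -> PNG
// chain doesn't work because whether a map "survives" a scheme depends
// on what is in it. All names here are made up for the example.
function pickFormat(map, available) {
  if (map.isNormalMap) return "uncompressed"; // block compression mangles normals
  if (map.hasAlpha) {
    // ETC1 has no alpha channel, so it is never an acceptable fallback here.
    return available.has("ETC2") ? "ETC2" : "uncompressed";
  }
  return available.has("ETC1") ? "ETC1" : "uncompressed";
}

const caps = new Set(["ETC1"]); // e.g. a phone without ETC2 support
console.log(pickFormat({ isNormalMap: true }, caps)); // "uncompressed"
console.log(pickFormat({ hasAlpha: true }, caps));    // "uncompressed"
console.log(pickFormat({}, caps));                    // "ETC1"
```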
> That said, I like the concept of a lowest-common denominator WebGL > that guarantees WebGL code to run anywhere, especially for 1.0, on the > other hand for the future of WebGL I'm wary of privileging policy > decisions over technical decisions when it comes to give more power to > advanced developers (indeed - more power means more responsibilities, > there's always ways to make an app fail on some platforms) I understand completely. ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From gus...@ Wed Aug 25 01:31:07 2010 From: gus...@ (gustav) Date: Wed, 25 Aug 2010 10:31:07 +0200 Subject: [Public WebGL] ETC texture compression. In-Reply-To: <4C746ADE.5080308@sjbaker.org> References: <1892571691.368690.1282678756224.JavaMail.root@cm-mail03.mozilla.org> <4C746ADE.5080308@sjbaker.org> Message-ID: On 25 August 2010 02:59, Steve Baker wrote: > Vladimir Vukicevic wrote: > > The extensions that we're looking at are things that are defined as > extensions, but generally have very wide (or universal) availability. So > that argument doesn't apply there -- it's very different defining an > extension for webgl for something that will be present on 95% (if not 100%) > of target hardware, vs. something that is known to not be present on half or > more. > > > > - Vlad > > > The trouble is that if the extension is "nearly" universal - and useful > enough to matter, many application authors are going to be tempted to > use it - and not bother to write fallbacks for it because the additional > revenue from (say) 5% of the market would not pay for the additional > development cost. Also, a lot of ground-breaking web sites are produced > by amateurs who may well find themselves unable to dig up a sufficiently > ancient machine to test on to debug all of these possibilities. 
> > If WebGL programs don't run exactly the same everywhere, the web will > become full of apps that simply fail for ill-explained reasons on random > machines. That will reflect badly on the standard. I think we should > avoid extensions in at least the first version of WebGL for exactly that > reason. > > On balance, I would prefer that we didn't ever provide extensions. I > believe that we should be aggressive about the hardware you need in > order to be able to run WebGL. If 5% of machines don't have some > important and "nearly" universal extension then let's make that a part > of the core API and declare the 5% of machines obsolete and unable to > support the API. Let's drive the community rather than lagging it. > > Right now - even without extensions - we have problems. Sometimes the > underlying OpenGL will silently fall back on software emulation to > implement a feature rather than admit that the hardware can't do it - > and the result is too slow to use (Vertex textures...to pick a > particularly nasty example). The spec claims that you can use this > feature - but in practice it's not always usable - and the only way to > detect that there is a problem and provide a fall-back is to measure > in-game frame rates (which given garbage collection and the general > randomness of timing in a browser environment is tricky at best)...argh! > > So does the fear and assumption that programmers will screw up extension management mean preventing anybody from using extensions? Extensions are just that - what the word states. If people have modern graphics cards in their PCs, they expect a certain level of graphics, and for some products it can be worth the man-hours to implement and QA extensions to make the app look better and allow for a nice user experience. 
The same product can benefit from being able to run on less capable devices while still using the same development environment; it doesn't look as good, but it works, and users of those devices don't expect the same level of graphics - so all is fine. It's IMO important to allow for somewhat competitive graphics via extensions, to make it viable to choose the web platform as the only development target. The alternative is to develop traditional desktop versions alongside a web version - if that's deemed worth the trouble. I'd rather handle extensions than multiple development environments and delivery platforms. If programmers feel insecure about extensions, it's reasonable for them not to use them - or to use a 3D engine; that's what they are for =). Incompetent people will find 1000 other ways to screw up; you can't prevent that. If you're trying to remove every theoretical possibility for programmers to screw up, you need to give them a tightly controlled, limited point-and-click tool and nothing more - people who can handle more can't be allowed to, because the others will use it too and screw up, right? It's a strategic business decision to design a product that adapts transparently via the discrete state-machine logic that correct extension usage amounts to - gracefully degrading in visual quality but still working. It's not up to you to make that decision for us by removing the option! I think it's wrong for fear and generalising assumptions - attempts to quantify programmers' incompetence - to be what settles a new standard. It would come at a price: it hurts adoption by making the platform less viable than it really needs to be. regards gustav trede -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ced...@ Wed Aug 25 19:02:45 2010 From: ced...@ (Cedric Vivier) Date: Thu, 26 Aug 2010 10:02:45 +0800 Subject: [Public WebGL] Proposed change to WebGL Event definition In-Reply-To: <817D1DD8-125C-44B0-95FF-C288EECB06E0@apple.com> References: <33F15A1B-FF87-44F1-ABA3-632494FA649A@apple.com> <817D1DD8-125C-44B0-95FF-C288EECB06E0@apple.com> Message-ID: On Sun, Aug 22, 2010 at 23:05, Chris Marrin wrote: > In DOM events the type is always the DOMString of the event type (e.g., > "webglcontextlost"), so you do get what you're looking for, right? > (...) > There is a separate status code. You don't see the 'type' property in the > WebGLContextEvent is because it's in the parent Event interface. > Oopsie, indeed. Then shouldn't we dump CONTEXT_LOST/CONTEXT_RESTORED status codes ? They are redundant and confusing, statusCode can be 0/undefined for the corresponding events for now. Their mere presence guarantees people will mistakenly use them as marker of event type and it will break code whenever we may introduce "real" status codes additions to "webglcontextlost" or "webglcontextrestored" events (eg. full vs partial loss for instance). Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From cma...@ Thu Aug 26 08:54:04 2010 From: cma...@ (Chris Marrin) Date: Thu, 26 Aug 2010 08:54:04 -0700 Subject: [Public WebGL] Proposed change to WebGL Event definition In-Reply-To: References: <33F15A1B-FF87-44F1-ABA3-632494FA649A@apple.com> <817D1DD8-125C-44B0-95FF-C288EECB06E0@apple.com> Message-ID: I'm not concerned about this confusion. Status codes are enums, types are strings. The status codes for lost and restored context allow you to have a single handler with a simple switch for handling the codes. 
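Such a single handler might look like the sketch below. The numeric values of the status codes are placeholders (the spec text was still in flux at this point), and the event objects are modelled as plain objects so the dispatch logic is visible on its own:

```javascript
// Sketch of the single-handler pattern: one listener serves both
// webglcontextlost and webglcontextrestored by switching on the
// event's statusCode. The enum values below are placeholders, not
// the spec's actual constants.
const CONTEXT_LOST = 1;     // placeholder value
const CONTEXT_RESTORED = 2; // placeholder value

function handleContextEvent(event) {
  switch (event.statusCode) {
    case CONTEXT_LOST:
      return "suspend";  // stop the render loop, note what must be rebuilt
    case CONTEXT_RESTORED:
      return "rebuild";  // recreate textures/shaders/buffers, resume
    default:
      return "ignore";   // future status codes we don't yet understand
  }
}

console.log(handleContextEvent({ statusCode: CONTEXT_LOST }));     // "suspend"
console.log(handleContextEvent({ statusCode: CONTEXT_RESTORED })); // "rebuild"
```

In a real page the same function would be registered via canvas.addEventListener for both "webglcontextlost" and "webglcontextrestored".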
Sent from my iPad On Aug 25, 2010, at 7:02 PM, Cedric Vivier wrote: > On Sun, Aug 22, 2010 at 23:05, Chris Marrin wrote: > In DOM events the type is always the DOMString of the event type (e.g., "webglcontextlost"), so you do get what you're looking for, right? > (...) > There is a separate status code. You don't see the 'type' property in the WebGLContextEvent is because it's in the parent Event interface. > > > Oopsie, indeed. > Then shouldn't we dump CONTEXT_LOST/CONTEXT_RESTORED status codes ? They are redundant and confusing, statusCode can be 0/undefined for the corresponding events for now. > Their mere presence guarantees people will mistakenly use them as marker of event type and it will break code whenever we may introduce "real" status codes additions to "webglcontextlost" or "webglcontextrestored" events (eg. full vs partial loss for instance). > > Regards, > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cma...@ Thu Aug 26 09:23:09 2010 From: cma...@ (Chris Marrin) Date: Thu, 26 Aug 2010 09:23:09 -0700 Subject: [Public WebGL] ETC texture compression. In-Reply-To: <4C7495BE.2080107@sjbaker.org> References: <1892571691.368690.1282678756224.JavaMail.root@cm-mail03.mozilla.org> <4C746ADE.5080308@sjbaker.org> <4C7495BE.2080107@sjbaker.org> Message-ID: > Assuming the copyright issues for ETC1 get sorted out I think we should just make it a part of the spec rather than an extension. The availability of a software decoder would allow it to be implemented on platforms without ETC1. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ste...@ Thu Aug 26 17:25:21 2010 From: ste...@ (Steve Baker) Date: Thu, 26 Aug 2010 19:25:21 -0500 Subject: [Public WebGL] ETC texture compression. 
In-Reply-To: References: <1892571691.368690.1282678756224.JavaMail.root@cm-mail03.mozilla.org> <4C746ADE.5080308@sjbaker.org> <4C7495BE.2080107@sjbaker.org> Message-ID: <4C7705F1.6060008@sjbaker.org> Chris Marrin wrote: > Assuming the copyright issues for ETC1 get sorted out I think we > should just make it a part of the spec rather than an extension. The > availability of a software decoder would allow it to be implemented on > platforms without ETC1. The problem with resorting to software decoders is that most (if not all) desktop systems don't provide support for ETC1. You'd package your textures into ETC1, suffer the horrific loss of image quality...and then discover that you're actually not saving any video memory or improving texture cache coherency at all! ETC1 does help your network bandwidth (the files are 6:1 compressed) - but the average file compression rate I get from PNG's zlib compression is 3.6:1 (averaged over all the maps in my game) - so the network bandwidth savings from ETC are less than a factor of two over PNG. Given that, I can't imagine many people preferring ETC1 over PNG for desktop systems. Of course on phones, (where it's implemented) ETC1 will be very useful indeed - and on those teeny-tiny displays, the loss of quality isn't so critical as it is on a 28" 1600x1400 monitor. So many implementors will end up having to treat ETC1 as if it was an extension and avoid using it on desktop hardware. But here is the problem...if ETC1 becomes a core feature - but doesn't actually do any compression on desktop platforms - how the heck does the implementation know whether it's useful or not? At the very least - if we make ETC1 into core feature - there needs to be a way to ask the API "Hypothetically: What compression ratio would I get for such-and-such format texture in such-and-such internal-format pixels on this hardware?" * On desktops & laptops that don't support ETC, this function would report 1.0 for both PNG and ETC. 
* On phones that support ETC and 8/8/8/8 textures, it would report 1.0 for PNG and 6.0 for ETC.
* On phones that can only do ETC or 5/6/5/0, 5/5/5/1 or 4/4/4/4, it would report 2.0 for PNG and 6.0 for ETC.

That would allow the application to make rational decisions about which format of file to grab from the server. If we're going to handle ETC2 similarly in the future, and we know for sure that no current system supports it - then we'd need to decide whether to use that too. But simply making ETC1 magically work if the file extension is ".etc" without any other help would be a bad way to handle it. -- Steve ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From oli...@ Thu Aug 26 17:31:22 2010 From: oli...@ (Oliver Hunt) Date: Thu, 26 Aug 2010 17:31:22 -0700 Subject: [Public WebGL] ETC texture compression. In-Reply-To: <4C7705F1.6060008@sjbaker.org> References: <1892571691.368690.1282678756224.JavaMail.root@cm-mail03.mozilla.org> <4C746ADE.5080308@sjbaker.org> <4C7495BE.2080107@sjbaker.org> Message-ID: <5B60A84D-FE5F-4236-AB77-7F21A6FB18C8@apple.com> On Aug 26, 2010, at 5:25 PM, Steve Baker wrote: > Chris Marrin wrote: >> Assuming the copyright issues for ETC1 get sorted out I think we >> should just make it a part of the spec rather than an extension. The >> availability of a software decoder would allow it to be implemented on >> platforms without ETC1.
ETC1 does help your network bandwidth > (the files are 6:1 compressed) - but the average file compression rate I > get from PNG's zlib compression is 3.6:1 (averaged over all the maps in > my game) - so the network bandwidth savings from ETC are less than a > factor of two over PNG. Given that, I can't imagine many people > preferring ETC1 over PNG for desktop systems. How does it compare to jpeg? Given the current texture loading model is to load from Image (or Canvas) objects you need to transmit your data to the UA in a form the UA understands. In all honesty I find myself wondering if the API should simply be something akin to telling the WebGL implementation to use a compressed texture if possible, then leaving it up to the implementation to determine the best format on the current platform. --Oliver ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Thu Aug 26 18:40:02 2010 From: ste...@ (Steve Baker) Date: Thu, 26 Aug 2010 20:40:02 -0500 Subject: [Public WebGL] ETC texture compression. In-Reply-To: <5B60A84D-FE5F-4236-AB77-7F21A6FB18C8@apple.com> References: <1892571691.368690.1282678756224.JavaMail.root@cm-mail03.mozilla.org> <4C746ADE.5080308@sjbaker.org> <4C7495BE.2080107@sjbaker.org> <4C7705F1.6060008@sjbaker.org> <5B60A84D-FE5F-4236-AB77-7F21A6FB18C8@apple.com> Message-ID: <4C771772.2000706@sjbaker.org> Oliver Hunt wrote: > On Aug 26, 2010, at 5:25 PM, Steve Baker wrote: > > >> Chris Marrin wrote: >> >>> Assuming the copyright issues for ETC1 get sorted out I think we >>> should just make it a part of the spec rather than an extension. The >>> availability of a software decoder would allow it to be implemented on >>> platforms without ETC1. >>> >> The problem with resorting to software decoders is that most (if not >> all) desktop systems don't provide support for ETC1. 
You'd package your >> textures into ETC1, suffer the horrific loss of image quality...and then >> discover that you're actually not saving any video memory or improving >> texture cache coherency at all! ETC1 does help your network bandwidth >> (the files are 6:1 compressed) - but the average file compression rate I >> get from PNG's zlib compression is 3.6:1 (averaged over all the maps in >> my game) - so the network bandwidth savings from ETC are less than a >> factor of two over PNG. Given that, I can't imagine many people >> preferring ETC1 over PNG for desktop systems. >> > > How does it compare to jpeg? JPEG is incredibly compact - something like 10:1 on a fairly high quality setting, maybe even 100:1 on the lower quality settings! Vastly more compact than any of the specifically "texture" compression schemes. But despite that, JPEG is a simply awful format for texture! The problem is that it's based on a human perceptual model that presumes things about the amplitude and frequency response of your eye - and that assumes you're looking at the image square-on under normal room lighting, that the image resolution is about the same as a typical screen resolution, and that you have the gamma setting of the screen set right. But none of those special conditions hold for textures - we squash them, stretch them, MIP them, lighten and darken them. If you look closely at a JPEG image you'll see that you tend to get odd random texels that are wildly "wrong" in hue. Bright green or magenta or something. When you're viewing under optimal conditions those colors are displayed at higher resolution than the color-perception cells in your eye can resolve, so they blend nicely to an intermediate hue and brightness. Its assumption of correct gamma presentation means that it can shave bits off of some brightness ranges and pack more precision into others. But when you stretch and squash and illuminate that, you get REALLY weird shit coming out of it.
Lossy "texture" compression systems are careful to avoid such assumptions - and that's why they can't get to such high densities. So "Just Say No" to JPEG. > Given the current texture loading model is to load from Image (or Canvas) objects you need to transmit your data to the UA in a form the UA understands. > > In all honesty I find myself wondering if the API should simply be something akin to telling the WebGL implementation to use a compressed texture if possible, then leaving it up to the implementation to determine the best format on the current platform. > Yeah - but it's not just a platform decision. You might choose only to compress your largest textures (on the grounds that they are 90% of the problem) - and you certainly don't want to compress normal maps using ETC1. There are also many reasons for using textures that are utterly unrelated to RGB data - or even "texels" in the conventional sense. For those kinds of thing, you CERTAINLY don't want the hardware messing with your data. -- Steve ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Thu Aug 26 18:47:08 2010 From: ste...@ (Steve Baker) Date: Thu, 26 Aug 2010 20:47:08 -0500 Subject: [Public WebGL] On the lack of ddx/ddy. Message-ID: <4C77191C.1020001@sjbaker.org> I had an idea today for adding support for something very similar to the ddx/ddy shader functions on systems that don't support them natively. It's exceedingly hackish - and I hesitate to suggest it - but it will be incredibly useful for one particular effect in my application. So I thought I'd share it. 
If you make a 256x1 1D monochrome texture map (myMagicTexture) that has each MIPmap level painted in a brightness that is pow(2.0,MIPlevel)/256.0 - then you can deduce what ddx or ddy are by judiciously looking up that texture with the parameter that you were planning to pass to ddx or ddy. The MIP level that the hardware calculates is (theoretically) log-to-base-2 of the worst case minification direction of the map...so we should be able to do something like:

sampler1D myMagicTexture;
float ddxy(float thing){return texture1D(myMagicTexture,thing).g*256.0;}

CAVEATS:
* I'm still trying to figure out a way to distinguish between ddx and ddy (this function returns something like max(ddx(thing),ddy(thing))).
* It'll only work over the range of values set by the number of MIP levels in 'myMagicTexture'.
* If you wanted to pass a vec4 to it, it would have to do four texture lookups(!)
* On systems that can't support 8 bit monochrome maps, we'd need to do something a bit more complicated...maybe doing the pow(2,...) in the shader instead of offline.

Now - we also don't have texture2DLod in the fragment shader - but we do have a version of texture2D that has a LOD bias parameter. Without this trick, that doesn't help us much - but now we can use myMagicTexture to figure out the LOD that texture2D would have used without the bias - so we can calculate the amount of bias we need to get the LOD we actually want! That is hackish-squared...but maybe it'll work. -- Steve ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From oli...@ Thu Aug 26 18:56:02 2010 From: oli...@ (Oliver Hunt) Date: Thu, 26 Aug 2010 18:56:02 -0700 Subject: [Public WebGL] ETC texture compression.
In-Reply-To: <4C771772.2000706@sjbaker.org> References: <1892571691.368690.1282678756224.JavaMail.root@cm-mail03.mozilla.org> <4C746ADE.5080308@sjbaker.org> <4C7495BE.2080107@sjbaker.org> <4C7705F1.6060008@sjbaker.org> <5B60A84D-FE5F-4236-AB77-7F21A6FB18C8@apple.com> <4C771772.2000706@sjbaker.org> Message-ID: On Aug 26, 2010, at 6:40 PM, Steve Baker wrote: > Oliver Hunt wrote: >> On Aug 26, 2010, at 5:25 PM, Steve Baker wrote: >> >> >>> Chris Marrin wrote: >>> >>>> Assuming the copyright issues for ETC1 get sorted out I think we >>>> should just make it a part of the spec rather than an extension. The >>>> availability of a software decoder would allow it to be implemented on >>>> platforms without ETC1. >>>> >>> The problem with resorting to software decoders is that most (if not >>> all) desktop systems don't provide support for ETC1. You'd package your >>> textures into ETC1, suffer the horrific loss of image quality...and then >>> discover that you're actually not saving any video memory or improving >>> texture cache coherency at all! ETC1 does help your network bandwidth >>> (the files are 6:1 compressed) - but the average file compression rate I >>> get from PNG's zlib compression is 3.6:1 (averaged over all the maps in >>> my game) - so the network bandwidth savings from ETC are less than a >>> factor of two over PNG. Given that, I can't imagine many people >>> preferring ETC1 over PNG for desktop systems. >>> >> >> How does it compare to jpeg? > JPEG is incredibly compact - something like 10:1 on a fairly high > quality setting, maybe even 100:1 on the lower quality settings! Vastly > more compact than any of the specifically "texture" compression > schemes. But despite that, JPEG is a simply awful format for texture! 
> > The problem is that it's based on a human perceptual model that presumes > things about the amplitude and frequency response of your eye - and that > assumes you're looking at the image square-on under normal room lighting > and such that the image resolution is about what a typical screen > resolution is and that you have the gamma setting of the screen set right. > > But none of those special conditions hold for textures - we squash them, > stretch them, MIP them, lighten and darken them. If you look closely at > a JPEG image you'll see that you tend to get odd random texels that are > wildly "wrong" in hue. Bright green or magenta or something. When > you're viewing under optimal conditions those colors are displayed at > higher resolution than the color-perception cells in your eye can > resolve them so they blend nicely to an intermediate hue and > brightness. It's assumption of correct gamma presentation means that it > can shave bits off of some brightness ranges and pack more precision > into others. But when you stretch and squash and illuminate that, you > get REALLY wierd shit coming out of it. > > Lossy "texture" compression systems are careful to avoid such > assumptions - and that's why they can't get to such high densities. > > So "Just Say No" to JPEG. That would imply that all browsers would need to natively support ETC texture compression outside of webgl, which is what i was seeing as a problem. >> Given the current texture loading model is to load from Image (or Canvas) objects you need to transmit your data to the UA in a form the UA understands. >> >> In all honesty I find myself wondering if the API should simply be something akin to telling the WebGL implementation to use a compressed texture if possible, then leaving it up to the implementation to determine the best format on the current platform. >> > Yeah - but it's not just a platform decision. 
You might choose only to > compress your largest textures (on the grounds that they are 90% of the > problem) - and you certainly don't want to compress normal maps using > ETC1. There are also many reasons for using textures that are utterly > unrelated to RGB data - or even "texels" in the conventional sense. For > those kinds of thing, you CERTAINLY don't want the hardware messing with > your data. I wasn't suggesting that the UA make a decision for all textures, I was meaning that when loading a texture you could tell the UA that the texture could be compressed, and whether you're willing to accept lossy compression (I honestly have no idea if there are any lossless compressed texture formats) --Oliver ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Thu Aug 26 20:10:16 2010 From: ste...@ (Steve Baker) Date: Thu, 26 Aug 2010 22:10:16 -0500 Subject: [Public WebGL] ETC texture compression. In-Reply-To: References: <1892571691.368690.1282678756224.JavaMail.root@cm-mail03.mozilla.org> <4C746ADE.5080308@sjbaker.org> <4C7495BE.2080107@sjbaker.org> <4C7705F1.6060008@sjbaker.org> <5B60A84D-FE5F-4236-AB77-7F21A6FB18C8@apple.com> <4C771772.2000706@sjbaker.org> Message-ID: <4C772C98.8010904@sjbaker.org> Oliver Hunt wrote: > I wasn't suggesting that the UA make a decision for all textures, I was meaning that when loading a texture you could tell the UA that the texture could be compressed, and whether you're willing to accept lossy compression Yeah - but I really do need to know what KIND of lossy compression. For example, if I have a normal map and the system is planning to do ETC1 compression - then I'm definitely going to say "No compression!" 
- but if it's going to do S3TC (aka DXT1) then "Maybe" - and if it's able to do DXT3/5 then I'll shuffle the blue component into the alpha and happily use that. Also, if the underlying hardware is going to unconditionally crunch my 8/8/8 bit PNG to 5/6/5 or something then ETC1 is only compressing the texture to 3x better than an uncompressed map - so, perhaps then I'd prefer to have my quality higher and not bother with compression. This whole business is very touchy...and if you have to spend any amount of time working with passionate 3D artists, you'll understand that the nature of the compression is very important to them. They'll want to play around with image filters to pre-enhance edges and stuff like that if they know that their image is going to be compressed with a certain algorithm. They understand that you can get 4x image "compression" just by halving the resolution of the map and storing it uncompressed - and for some textures that produces much better results than (for example) DXT3/5 compression...but for others, much worse. Sadly, the application really needs full access to the facts and full control here. Each compression algorithm (since none of them are lossless) has different problems and benefits. It's not nice that texture loaders have to be so careful - but incurring the wrath of your artists is no fun! The idea of hiding the underlying compression scheme from me in the interests of uniformity and portability will just drive me to trying to find ever more devious and hackish ways to extract that data from the system...so why not just come out and tell me what you're doing under the hood so I can make an informed decision? I guess it's OK to have a simplified interface for people who don't know or care - but there really needs to be a way to query exactly what algorithm the GPU is sticking us with this time - or my artists may be likely to form a lynch mob! 
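The "informed decision" Steve is asking for is ultimately a small piece of client logic. A minimal sketch, written against a purely hypothetical capability query (WebGL exposes no such API - the `caps` object here just stands in for "what would the GPU really do to my texels?"):

```javascript
// Hypothetical app-side format selection: pick the file format to fetch
// based on texture kind and on what compression the GPU would actually apply.
function chooseTextureFormat(caps, texture) {
  // Never let lossy block compression near non-color data (normal maps, etc.).
  if (texture.kind === "normalMap" || texture.kind === "data") {
    return "png";
  }
  // Only bother with ETC1 if the GPU genuinely stores it compressed.
  if (caps.etc1Ratio > caps.pngRatio) {
    return "etc1";
  }
  return "png";
}

// Steve's desktop case: ETC1 decoded in software, no real memory saving.
console.log(chooseTextureFormat({ pngRatio: 1.0, etc1Ratio: 1.0 },
                                { kind: "diffuse" })); // "png"
// His phone case: 6:1 on-GPU compression is worth the quality loss.
console.log(chooseTextureFormat({ pngRatio: 1.0, etc1Ratio: 6.0 },
                                { kind: "diffuse" })); // "etc1"
```

The function and the ratio numbers are illustrative only - they mirror the hypothetical 1.0/2.0/6.0 reports from Steve's earlier bullet list, not any real query.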
> (I honestly have no idea if there are any lossless compressed texture formats) I very much doubt it...and it's somewhat unlikely that anyone will ever implement one in hardware because the mapping of addresses in the original texture to addresses in a lossless compressed map would not likely be a simple calculation. Lossless compression is (more or less by definition) something that compresses some parts of the image better than others. (If it could guarantee to compress all of any image by the same amount - then it could also compress arbitrary random data by the same amount - and if it could do that, then it would be violating the laws of thermodynamics!) With things like ETC1 and S3TC, there is a precise mapping from original pixel coordinates to texture data addresses. In ETC1, for example, every 4x4 pixel block compresses to precisely 64 bits of binary data - so the graphics hardware can easily look up any texel in the map with just one 64 bit memory access. If you used something like run-length encoding or zlib compression (such as PNG uses), then the hardware would have a very hard time figuring out where in memory to fetch the texels for coordinate (0.1234, 0.5678). Texturing a triangle is a somewhat random-access type of processing - and traditional image compression is typically linear...start uncompressing at the top of the file - work down to the bottom.
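The fixed-rate addressing described above is easy to see in code. A sketch of the arithmetic (an illustration, not any real driver's implementation; width is assumed to be a multiple of 4, since ETC1 pads to block boundaries anyway):

```javascript
// ETC1 stores each 4x4 texel block in exactly 64 bits (8 bytes), so the
// byte offset of the block holding texel (x, y) is a two-line calculation -
// one 64-bit fetch per texel lookup, no decompression of anything else.
function etc1BlockOffset(x, y, width) {
  const blocksPerRow = width / 4;
  const blockIndex = Math.floor(y / 4) * blocksPerRow + Math.floor(x / 4);
  return blockIndex * 8;
}

// A zlib-style stream has no such formula: finding the bytes for an
// arbitrary texel would mean decompressing everything before it.
console.log(etc1BlockOffset(5, 6, 16)); // block (1,1) of a 4-block-wide map: 40
```

At 8 bytes per 16 texels this is 4 bits per texel - which is exactly where the 6:1 figure against uncompressed 24-bit RGB comes from.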
-- Steve ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ced...@ Thu Aug 26 20:13:12 2010 From: ced...@ (Cedric Vivier) Date: Fri, 27 Aug 2010 11:13:12 +0800 Subject: [Public WebGL] Proposed change to WebGL Event definition In-Reply-To: References: <33F15A1B-FF87-44F1-ABA3-632494FA649A@apple.com> <817D1DD8-125C-44B0-95FF-C288EECB06E0@apple.com> Message-ID: On Thu, Aug 26, 2010 at 23:54, Chris Marrin wrote: > I'm not concerned about this confusion. Status codes are enums, types are > strings. The status codes for lost and restored context allow you to have a > single handler with a simple switch for handling the codes. > I don't get your argument, this is exactly the problem. You can have a single handler with a simple switch on "type" as well ... JavaScript supports switch on strings. Status codes are not event types, being able to currently switch on status codes will just break existing code in subtle ways whenever we add new status codes to event types. Status codes are supposed to give _additional_ status information for a given event type, it should be clear that these are secondary and treating them as first-class "type" just makes things confusing and more fragile in the long term. Consider:

switch (e.type) {
  case "webglcontextlost":
    stopAnimation();
    break;
  ...
}

vs.

switch (e.statusCode) {
  case CONTEXT_LOST:
    stopAnimation();
    break;
  ...
}

The former code will always work as intended even if we add status codes to give more information about the context loss (eg. full vs partial vs temporary [resource reclaim by browser for another newer tab] vs longterm [suspend], whatever...). The latter will break existing code if we ever decide to give more information. Regards, -------------- next part -------------- An HTML attachment was scrubbed...
URL: From cma...@ Fri Aug 27 11:52:12 2010 From: cma...@ (Chris Marrin) Date: Fri, 27 Aug 2010 11:52:12 -0700 Subject: [Public WebGL] ETC texture compression. In-Reply-To: <4C771772.2000706@sjbaker.org> References: <1892571691.368690.1282678756224.JavaMail.root@cm-mail03.mozilla.org> <4C746ADE.5080308@sjbaker.org> <4C7495BE.2080107@sjbaker.org> <4C7705F1.6060008@sjbaker.org> <5B60A84D-FE5F-4236-AB77-7F21A6FB18C8@apple.com> <4C771772.2000706@sjbaker.org> Message-ID: On Aug 26, 2010, at 6:40 PM, Steve Baker wrote: >> ... >> In all honesty I find myself wondering if the API should simply be something akin to telling the WebGL implementation to use a compressed texture if possible, then leaving it up to the implementation to determine the best format on the current platform. >> > Yeah - but it's not just a platform decision. You might choose only to > compress your largest textures (on the grounds that they are 90% of the > problem) - and you certainly don't want to compress normal maps using > ETC1. There are also many reasons for using textures that are utterly > unrelated to RGB data - or even "texels" in the conventional sense. For > those kinds of thing, you CERTAINLY don't want the hardware messing with > your data. We could certainly let the author specify some parameters. We recently discussed adding a flag to say that we don't want the image to be changed (color space converted, gamma corrected, etc.) to allow data to be sent to the GPU via images. We could have another that lets the author favor space over quality for a given texture. This would be a hint so the implementation could ignore it if no such space savings is possible. But all this was moot. At the last WG meeting we decided that the 1.0 spec would include no extensions at all, required or allowed. It will simply have the extension mechanism and we will leave the definition of specific extensions to later. 
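Under that extension mechanism, probing for a compressed-texture format becomes the application's job. A sketch - the extension name is hypothetical (the 1.0 spec defines only the mechanism, not any particular extension), and a stub object stands in for a real WebGLRenderingContext so the logic is self-contained:

```javascript
// Check whether the context reports the named extension.
// getSupportedExtensions() returns an array of extension name strings.
function supportsExtension(gl, name) {
  const names = gl.getSupportedExtensions();
  return names.indexOf(name) !== -1;
}

// Stub standing in for a real WebGLRenderingContext:
const stubGl = {
  getSupportedExtensions: () => ["OES_texture_float"],
};

// "WEBGL_compressed_texture_etc1" is a made-up name for illustration:
console.log(supportsExtension(stubGl, "WEBGL_compressed_texture_etc1")); // false
```

An app would call this once at startup and fall back to PNG (or another plain format) when the probe fails.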
----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Fri Aug 27 11:58:05 2010 From: cma...@ (Chris Marrin) Date: Fri, 27 Aug 2010 11:58:05 -0700 Subject: [Public WebGL] Proposed change to WebGL Event definition In-Reply-To: References: <33F15A1B-FF87-44F1-ABA3-632494FA649A@apple.com> <817D1DD8-125C-44B0-95FF-C288EECB06E0@apple.com> Message-ID: <10C92111-C1E5-429F-AF48-D4737C07BCEE@apple.com> On Aug 26, 2010, at 8:13 PM, Cedric Vivier wrote: > On Thu, Aug 26, 2010 at 23:54, Chris Marrin wrote: > I'm not concerned about this confusion. Status codes are enums, types are strings. The status codes for lost and restored context allow you to have a single handler with a simple switch for handling the codes. > > I don't get your argument, this is exactly the problem. > You can have a single handler with a simple switch on "type" as well ... JavaScript supports switch on strings. > > Status codes are not event types, being able to currently switch on status codes will just break existing code in subtle ways whenever we add new status codes to event types. > Status codes are supposed to give _additional_ status information for a given event type, it should be clear that these are secondary and treating them as first-class "type" just makes things confusing and more fragile in the long term : > > > Consider : > > switch (e.type) { > case "webglcontextlost": > stopAnimation(); > break; > ... > } > > vs. > > switch (e.statusCode) { > case CONTEXT_LOST: > stopAnimation(); > break; > ... > } > > > The former code will always work as intended even if we add status code to give more information about the context loss (eg. full vs partial vs temporary [resource reclaim by browser for another newer tab] vs longterm [suspend], whatever...). 
> > The latter will break existing code if we ever decide to give more information. Ok, I see your point. I will get rid of the two status codes. ----- ~Chris cmarrin...@ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ced...@ Fri Aug 27 22:28:25 2010 From: ced...@ (Cedric Vivier) Date: Sat, 28 Aug 2010 13:28:25 +0800 Subject: [Public WebGL] Is section 6.3 still needed ? (was: New Rendering Pipeline ?) Message-ID: On Wed, Aug 25, 2010 at 12:00, Cedric Vivier wrote: > Of course if NaCl on Windows Chrome is _not_ supposed to ever run on top of > Direct3D/Angle then my question is moot and I apologize for being a bit > off-topic. > With the news of Chrome jumping into the GPU-accelerated HTML compositing train as well, I stumbled upon an interesting document about how Chrome handles accelerated compositing, WebGL content and NaCl 3D rendering within the same "GPU process" : https://sites.google.com/a/chromium.org/dev/developers/design-documents/gpu-accelerated-compositing-in-chrome After reading this article (and glancing at the code) I'm still very curious how NaCl can be fully ES 2.0 conformant with no restriction similar to WebGL spec section 6.3 when running on top of OpenGL desktop and Direct3D/ANGLE. Has a solution to reliably emulate the ES 2.0 semantics been developed somehow ? If so, assuming the same trick can be done on any WebGL implementation, do we still need to keep section 6.3 (and the divergence from ES 2.0 it brings) ? Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From oli...@ Fri Aug 27 22:59:46 2010 From: oli...@ (Oliver Hunt) Date: Fri, 27 Aug 2010 22:59:46 -0700 Subject: [Public WebGL] Is section 6.3 still needed ? (was: New Rendering Pipeline ?)
In-Reply-To: References: Message-ID: <9218E434-29DD-4BB9-BA02-B5E4977FDFEA@apple.com> On Aug 27, 2010, at 10:28 PM, Cedric Vivier wrote: > On Wed, Aug 25, 2010 at 12:00, Cedric Vivier wrote: > Of course if NaCl on Windows Chrome is _not_ supposed to ever run on top of Direct3D/Angle then my question is moot and I apologize for the being a bit off-topic. > > With the news of Chrome jumping into the GPU-accelerated HTML compositing train as well, I stumbled upon an interesting document about how Chrome handles accelerated compositing, WebGL content and NaCl 3D rendering within the same "GPU process" : > https://sites.google.com/a/chromium.org/dev/developers/design-documents/gpu-accelerated-compositing-in-chrome > > After reading this article (and glancing at the code) I'm still very curious how NaCl can be fully ES 2.0 conformant with no restriction similar to WebGL spec section 6.3 when running on top of OpenGL desktop and Direct3D/ANGLE. > > Has a solution to reliably emulate the ES 2.0 semantics been developed somehow ? > If so, assuming the same trick can be done on any WebGL implementation, do we still need to keep section 6.3 (and the divergence from ES 2.0 it brings) ? Compositing is a highly restricted use case, it is quite literally the act of drawing one texture on top of another, with no effects or anything. If there were compatibility problems doing something that simple then i don't think we'd have much in the way of consumer products that used 3d :D As far as compositing goes webkit determines what content should be placed in a compositing layer then, then asks the compositing engine for a new layer, draws the appropriate content into that layer, and from that point onwards relies on the compositing engine to composite that layer correctly. This can even be done in software (and has been done by Nokia in the Qt backend - it does actually provide a perf win vs the non-compositing model in some cases). 
--Oliver -------------- next part -------------- An HTML attachment was scrubbed... URL: From ced...@ Fri Aug 27 23:27:33 2010 From: ced...@ (Cedric Vivier) Date: Sat, 28 Aug 2010 14:27:33 +0800 Subject: [Public WebGL] Is section 6.3 still needed ? (was: New Rendering Pipeline ?) In-Reply-To: <9218E434-29DD-4BB9-BA02-B5E4977FDFEA@apple.com> References: <9218E434-29DD-4BB9-BA02-B5E4977FDFEA@apple.com> Message-ID: On Sat, Aug 28, 2010 at 13:59, Oliver Hunt wrote: > Compositing is a highly restricted use case, it is quite literally the act > of drawing one texture on top of another, with no effects or anything. If > there were compatibility problems doing something that simple then i don't > think we'd have much in the way of consumer products that used 3d :D > > Yes indeed, but let's forget about the compositing part actually ;-) The interesting point is that the underlying GPU process (and the command buffer API) used is the same for both WebGL and NaCl 3D rendering. Both WebGL and NaCl expose ES 2.0 functionality to user code (WebGL doing it with just a bit more restrictions and adaptations with regards to JavaScript). Months ago we decided to add section 6.3 divergence to the WebGL specification because it was thought impossible to implement ES 2.0's framebuffer attachment semantics on top of OpenGL desktop and Direct3D... however it seems that a workaround has been found since ES 2.0 as exposed by NaCl is said to be able to support ES 2.0 semantics regardless of the underlying platform's 3D API. This makes me wonder whether we might be able to remove 6.3, hence have one less divergence from ES 2.0, which is one of the goals of WebGL ("conforms closely to the ES 2.0 API") and helps porting. Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From oli...@ Fri Aug 27 23:38:39 2010 From: oli...@ (Oliver Hunt) Date: Fri, 27 Aug 2010 23:38:39 -0700 Subject: [Public WebGL] Is section 6.3 still needed ? (was: New Rendering Pipeline ?) 
In-Reply-To: References: <9218E434-29DD-4BB9-BA02-B5E4977FDFEA@apple.com> Message-ID: <18BD5675-A47A-4A0A-8BD6-18355C220FC4@apple.com> On Aug 27, 2010, at 11:27 PM, Cedric Vivier wrote: > On Sat, Aug 28, 2010 at 13:59, Oliver Hunt wrote: > Compositing is a highly restricted use case, it is quite literally the act of drawing one texture on top of another, with no effects or anything. If there were compatibility problems doing something that simple then i don't think we'd have much in the way of consumer products that used 3d :D > > > > Yes indeed, but let's forget about the compositing part actually ;-) > > The interesting point is that the underlying GPU process (and the command buffer API) used is the same for both WebGL and NaCl 3D rendering. > > Both WebGL and NaCl expose ES 2.0 functionality to user code (WebGL doing it with just a bit more restrictions and adaptations with regards to JavaScript). > > Months ago we decided to add section 6.3 divergence to the WebGL specification because it was thought impossible to implement ES 2.0's framebuffer attachment semantics on top of OpenGL desktop and Direct3D... however it seems that a workaround has been found since ES 2.0 as exposed by NaCl is said to be able to support ES 2.0 semantics regardless of the underlying platform's 3D API. > > This makes me wonder whether we might be able to remove 6.3, hence have one less divergence from ES 2.0, which is one of the goals of WebGL ("conforms closely to the ES 2.0 API") and helps porting. The fact that they share a process is irrelevant -- the only reason that GPU stuff runs in a separate process is because historically graphics drivers have been buggy so it makes sense to isolate them from anything else, in much the same way (and for basically the same reasons) that browsers run plugins in a separate process now. Given the model taken by NaCl I doubt that they are as concerned about compatibility as we need to be. 
--Oliver -------------- next part -------------- An HTML attachment was scrubbed... URL: From ste...@ Sat Aug 28 09:55:55 2010 From: ste...@ (Steve Baker) Date: Sat, 28 Aug 2010 11:55:55 -0500 Subject: [Public WebGL] Is section 6.3 still needed ? (was: New Rendering Pipeline ?) In-Reply-To: References: Message-ID: <4C793F9B.1090105@sjbaker.org> Cedric Vivier wrote: > With the news of Chrome jumping into the GPU-accelerated HTML > compositing train as well, I stumbled upon an interesting document > about how Chrome handles accelerated compositing, WebGL content and > NaCl 3D rendering within the same "GPU process" : > https://sites.google.com/a/chromium.org/dev/developers/design-documents/gpu-accelerated-compositing-in-chrome Woohoo! Compositing is taking something like 40% to 85% of the frame time in my app (at high screen resolutions - and depending on OS, browser and hardware). Doing it using the GPU should get that down to the roughly 5% that it should be and result in a doubling to quadrupling of frame rates! That is a MAJOR win. If I can spend 4x longer rendering content then that's literally the difference between something that looks like Mario64 and something that looks like Red Dead Redemption! I strongly encourage all of the browser writers to do it as a matter of priority. However, it's only going to be such a huge win if the image that WebGL just rendered STAYS IN THE GPU. If you have to pull it back into main memory and push it back into the GPU later for reasons of threading or who-knows-what-else - then you just blew away most of that benefit. If we really want to see cutting-edge interactive 3D on the web - this is a "must have" thing. Concerns about the reliability of graphics drivers should be pretty minimal - I work with 3D all the time - I've been doing it for 20 years, it's my job. Over the past 4 or 5 years, I've seen very few driver problems - and they've mostly been of the kind that only affects very obscure corners of the architecture.
When you have WindowsVista/Win7 doing their window compositing using the GPU, you know it's going to be pretty solid.

-- Steve

-----------------------------------------------------------
You are currently subscribed to public_webgl...@
To unsubscribe, send an email to majordomo...@ with
the following command in the body of your email:

From ste...@ Mon Aug 30 08:29:48 2010 From: ste...@ (ste...@) Date: Mon, 30 Aug 2010 08:29:48 -0700 Subject: [Public WebGL] Render to texture Message-ID: <019958f0ade0171e9a34380ac6dd2ed1.squirrel@webmail.sjbaker.org>

Are there any examples out there of the correct way to set up for rendering to texture - and (especially) setting up rendering to and reading back from depth buffer textures? Getting the latter to work portably is always a bitch...I'd like to get the 'official' way to do it. I'm guessing there might be something in the conformance suite that I could steal from? TIA

-- Steve

From kbr...@ Mon Aug 30 10:31:37 2010 From: kbr...@ (Kenneth Russell) Date: Mon, 30 Aug 2010 10:31:37 -0700 Subject: [Public WebGL] Render to texture In-Reply-To: <019958f0ade0171e9a34380ac6dd2ed1.squirrel@webmail.sjbaker.org> References: <019958f0ade0171e9a34380ac6dd2ed1.squirrel@webmail.sjbaker.org> Message-ID:

On Mon, Aug 30, 2010 at 8:29 AM, wrote:
> Are there any examples out there of the correct way to set up for
> rendering to texture - and (especially) setting up rendering to and
> reading back from depth buffer textures? Getting the latter to work
> portably is always a bitch...I'd like to get the 'official' way to do it.

Rendering to a depth texture isn't supported in core OpenGL ES 2.0 and therefore not in WebGL without an extension (GL_OES_depth_texture).
However, you can approximate it by writing your normalized depth value into the color channels of an RGBA texture. One example of this technique is in the Shadow Mapping sample of the O3D/WebGL library; see http://code.google.com/p/o3d/wiki/Samples . There's another in SpiderGL; see http://www.spidergl.org/example.php?id=6 .

-Ken
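The arithmetic behind this pack-depth-into-RGBA trick, sketched here in plain JavaScript so the round trip is easy to check. The function names are illustrative; in a real renderer the pack step lives in the fragment shader and the unpack wherever the texture gets sampled. Assumes 8 bits per channel:

```javascript
// Pack a normalized depth value (0 <= d < 1) into four 8-bit RGBA
// channels, base-255 style: each channel keeps the fraction that the
// previous, coarser channel could not represent.
function packDepthToRGBA(d) {
  const r = d % 1;
  const g = (d * 255) % 1;
  const b = (d * 255 * 255) % 1;
  const a = (d * 255 * 255 * 255) % 1;
  // Subtract the part that was carried into the next channel.
  return [r - g / 255, g - b / 255, b - a / 255, a];
}

// Reassemble the depth value from the four channels.
function unpackDepthFromRGBA([r, g, b, a]) {
  return r + g / 255 + b / (255 * 255) + a / (255 * 255 * 255);
}
```

In GLSL the same thing is usually written with fract() on a vec4 multiply for the pack and a dot() against the reciprocal weights for the unpack; the terms telescope, so the round trip is exact up to floating-point precision.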
From kbr...@ Mon Aug 30 10:43:37 2010 From: kbr...@ (Kenneth Russell) Date: Mon, 30 Aug 2010 10:43:37 -0700 Subject: [Public WebGL] Is section 6.3 still needed ? (was: New Rendering Pipeline ?) In-Reply-To: References: <9218E434-29DD-4BB9-BA02-B5E4977FDFEA@apple.com> Message-ID:

On Fri, Aug 27, 2010 at 11:27 PM, Cedric Vivier wrote:
> On Sat, Aug 28, 2010 at 13:59, Oliver Hunt wrote:
>>
>> Compositing is a highly restricted use case, it is quite literally the act
>> of drawing one texture on top of another, with no effects or anything. If
>> there were compatibility problems doing something that simple then i don't
>> think we'd have much in the way of consumer products that used 3d :D
>
> Yes indeed, but let's forget about the compositing part actually ;-)
> The interesting point is that the underlying GPU process (and the command
> buffer API) used is the same for both WebGL and NaCl 3D rendering.
> Both WebGL and NaCl expose ES 2.0 functionality to user code (WebGL doing it
> with just a bit more restrictions and adaptations with regards to
> JavaScript).
> Months ago we decided to add section 6.3 divergence to the WebGL
> specification because it was thought impossible to implement ES 2.0's
> framebuffer attachment semantics on top of OpenGL desktop and Direct3D...
> however it seems that a workaround has been found since ES 2.0 as exposed by
> NaCl is said to be able to support ES 2.0 semantics regardless of the
> underlying platform's 3D API.
> This makes me wonder whether we might be able to remove 6.3, hence have one
> less divergence from ES 2.0, which is one of the goals of WebGL ("conforms
> closely to the ES 2.0 API") and helps porting.

Section 6.3 of the WebGL specification is still needed.
Adding the extra depth+stencil virtual attachment point works around problems when building WebGL on top of desktop GL; in particular, avoiding the need to allocate a packed depth+stencil renderbuffer all the time behind the scenes. I believe this is what ANGLE does.

-Ken

From oli...@ Mon Aug 30 10:44:31 2010 From: oli...@ (Oliver Hunt) Date: Mon, 30 Aug 2010 10:44:31 -0700 Subject: [Public WebGL] Render to texture In-Reply-To: References: <019958f0ade0171e9a34380ac6dd2ed1.squirrel@webmail.sjbaker.org> Message-ID: <83F856FA-1A58-4481-BAFE-99A04C82489F@apple.com>

Of course we're using WebGL so you have the DOM available. You could _technically_ render to one webgl canvas, and then load that webgl canvas as a texture into another one (and presumably if it were deemed important enough implementations would optimise that use case to avoid too much copying)

--Oliver

On Aug 30, 2010, at 10:31 AM, Kenneth Russell wrote:
> On Mon, Aug 30, 2010 at 8:29 AM, wrote:
>> Are there any examples out there of the correct way to set up for
>> rendering to texture - and (especially) setting up rendering to and
>> reading back from depth buffer textures? Getting the latter to work
>> portably is always a bitch...I'd like to get the 'official' way to do it.
>
> Rendering to a depth texture isn't supported in core OpenGL ES 2.0 and
> therefore not in WebGL without an extension (GL_OES_depth_texture).
> However, you can approximate it by writing your normalized depth value
> into the color channels of an RGBA texture. One example of this
> technique is in the Shadow Mapping sample of the O3D/WebGL library;
> see http://code.google.com/p/o3d/wiki/Samples . There's another in
> SpiderGL; see http://www.spidergl.org/example.php?id=6 .
>
> -Ken

From ste...@ Mon Aug 30 12:20:00 2010 From: ste...@ (ste...@) Date: Mon, 30 Aug 2010 12:20:00 -0700 Subject: [Public WebGL] Render to texture In-Reply-To: <83F856FA-1A58-4481-BAFE-99A04C82489F@apple.com> References: <019958f0ade0171e9a34380ac6dd2ed1.squirrel@webmail.sjbaker.org> <83F856FA-1A58-4481-BAFE-99A04C82489F@apple.com> Message-ID: <00b95b32c31b014ec921bc7a1c17dad9.squirrel@webmail.sjbaker.org>

So is going via the DOM/canvas mechanism likely to hurt performance with present implementations? Would the image be copied from place to place? (I confess my knowledge of what's going on here is a little vague - please educate me!) Is there no way to render directly into a texture without involving a canvas?

The render-depth-into-RGBA approach is perfectly OK for shadow mapping (because you already need a separate render pass in order to render from the point of view of the light source). But for the other uses of depth textures, you need an entire extra render pass from the perspective of the camera to capture the Z data.

That's not a bad idea on a C++-based OpenGL system, since the savings you get from occlusion testing pay for the CPU & vertex cost of the extra pass, and the benefits of using a cheaper shader on that pass - and hitting fewer pixels on subsequent passes - pay for the extra fill rate costs. But in WebGL, we're stuck with JavaScript, which makes the software overhead of that extra pass kinda large - and without the benefit of occlusion testing to win that back, it's a painful thing.
-- Steve

> Of course we're using WebGL so you have the DOM available. You could
> _technically_ render to one webgl canvas, and then load that webgl
> canvas as a texture into another one (and presumably if it were deemed
> important enough implementations would optimise that use case to avoid too
> much copying)
>
> --Oliver
>
> On Aug 30, 2010, at 10:31 AM, Kenneth Russell wrote:
>
>> On Mon, Aug 30, 2010 at 8:29 AM, wrote:
>>> Are there any examples out there of the correct way to set up for
>>> rendering to texture - and (especially) setting up rendering to and
>>> reading back from depth buffer textures? Getting the latter to work
>>> portably is always a bitch...I'd like to get the 'official' way to do
>>> it.
>>
>> Rendering to a depth texture isn't supported in core OpenGL ES 2.0 and
>> therefore not in WebGL without an extension (GL_OES_depth_texture).
>> However, you can approximate it by writing your normalized depth value
>> into the color channels of an RGBA texture.
>>
>> -Ken

From kbr...@ Mon Aug 30 12:30:20 2010 From: kbr...@ (Kenneth Russell) Date: Mon, 30 Aug 2010 12:30:20 -0700 Subject: [Public WebGL] Render to texture In-Reply-To: <00b95b32c31b014ec921bc7a1c17dad9.squirrel@webmail.sjbaker.org> References: <019958f0ade0171e9a34380ac6dd2ed1.squirrel@webmail.sjbaker.org> <83F856FA-1A58-4481-BAFE-99A04C82489F@apple.com> <00b95b32c31b014ec921bc7a1c17dad9.squirrel@webmail.sjbaker.org> Message-ID:

On Mon, Aug 30, 2010 at 12:20 PM, wrote:
> So is going via the DOM/canvas mechanism likely to hurt performance with
> present implementations? Would the image be copied from place to place?
> (I confess my knowledge of what's going on here is a little vague - please
> educate me!) Is there no way to render directly into a texture without
> involving a canvas?

Yes, there will currently be a significant performance penalty involved with rendering WebGL to one canvas and then uploading it as a texture to another one (or even within the same context -- although you can get the same effect with copyTexSubImage2D). This is true at least in the WebKit WebGL implementation.

The recommended way to perform render-to-texture of the color buffers in WebGL is to allocate an FBO and attach a texture as the color attachment. This functionality is supposed to be guaranteed on any platform claiming WebGL support. Note though the OpenGL ES 2.0 NPOT restrictions; see http://khronos.org/webgl/wiki/WebGL_and_OpenGL_Differences .
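A minimal sketch of that recommended FBO-plus-texture setup. The names are mine, error handling is reduced to a completeness check, and the target is rounded up to a power of two to stay clear of the NPOT restrictions mentioned above:

```javascript
// Round a dimension up to a power of two so the render target avoids
// the OpenGL ES 2.0 NPOT restrictions (no mipmapping or REPEAT wrap).
function nextPowerOfTwo(n) {
  let p = 1;
  while (p < n) p *= 2;
  return p;
}

// Create a framebuffer with a texture color attachment and a
// renderbuffer depth attachment. 'gl' is a WebGL rendering context.
function createRenderTarget(gl, width, height) {
  const size = nextPowerOfTwo(Math.max(width, height));

  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
  // Allocate storage without uploading any pixel data.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, size, size, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, null);

  const depth = gl.createRenderbuffer();
  gl.bindRenderbuffer(gl.RENDERBUFFER, depth);
  gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, size, size);

  const fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, texture, 0);
  gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
                             gl.RENDERBUFFER, depth);

  if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE)
    throw new Error("render target incomplete");

  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return { fbo: fbo, texture: texture, size: size };
}
```

To use it: bind target.fbo, set the viewport to size x size, draw the pass, then bind null again and sample target.texture like any other texture.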
-Ken

> The render-depth-into-RGBA approach is perfectly OK for shadow mapping -
> (because you already need a separate render pass in order to render from
> the point of view of the light source). But for the other uses of depth
> textures, you need an entire extra render pass from the perspective of the
> camera to capture the Z data.
>
> That's not a bad idea on a C++-based OpenGL system, since the savings you
> get from occlusion testing pay for the CPU & vertex cost of the extra
> pass, and the benefits of using a cheaper shader on that pass - and
> hitting fewer pixels on subsequent passes - pay for the extra fill rate
> costs. But in WebGL, we're stuck with JavaScript, which makes the software
> overhead of that extra pass kinda large - and without the benefit of
> occlusion testing to win that back, it's a painful thing.
>
> -- Steve
>
>> Of course we're using WebGL so you have the DOM available. You could
>> _technically_ render to one webgl canvas, and then load that webgl
>> canvas as a texture into another one (and presumably if it were deemed
>> important enough implementations would optimise that use case to avoid too
>> much copying)
>>
>> --Oliver
>>
>> On Aug 30, 2010, at 10:31 AM, Kenneth Russell wrote:
>>
>>> On Mon, Aug 30, 2010 at 8:29 AM, wrote:
>>>> Are there any examples out there of the correct way to set up for
>>>> rendering to texture - and (especially) setting up rendering to and
>>>> reading back from depth buffer textures? Getting the latter to work
>>>> portably is always a bitch...I'd like to get the 'official' way to do
>>>> it.
>>>
>>> Rendering to a depth texture isn't supported in core OpenGL ES 2.0 and
>>> therefore not in WebGL without an extension (GL_OES_depth_texture).
>>> However, you can approximate it by writing your normalized depth value
>>> into the color channels of an RGBA texture.
>>> One example of this
>>> technique is in the Shadow Mapping sample of the O3D/WebGL library;
>>> see http://code.google.com/p/o3d/wiki/Samples . There's another in
>>> SpiderGL; see http://www.spidergl.org/example.php?id=6 .
>>>
>>> -Ken

From ste...@ Mon Aug 30 13:25:04 2010 From: ste...@ (ste...@) Date: Mon, 30 Aug 2010 13:25:04 -0700 Subject: [Public WebGL] Render to texture In-Reply-To: References: <019958f0ade0171e9a34380ac6dd2ed1.squirrel@webmail.sjbaker.org> <83F856FA-1A58-4481-BAFE-99A04C82489F@apple.com> <00b95b32c31b014ec921bc7a1c17dad9.squirrel@webmail.sjbaker.org> Message-ID:

> On Mon, Aug 30, 2010 at 12:20 PM, wrote:
>> So is going via the DOM/canvas mechanism likely to hurt performance with
>> present implementations? Would the image be copied from place to place?
>> (I confess my knowledge of what's going on here is a little vague -
>> please educate me!) Is there no way to render directly into a texture
>> without involving a canvas?
>
> Yes, there will currently be a significant performance penalty
> involved with rendering WebGL to one canvas and then uploading it as a
> texture to another one (or even within the same context -- although
> you can get the same effect with copyTexSubImage2D). This is true at
> least in the WebKit WebGL implementation.

I suppose there is at least some hope that copyTexSubImage2D would happen entirely within the GPU...that might not be too terrible.
> The recommended way to perform render-to-texture of the color buffers
> in WebGL is to allocate an FBO and attach a texture as the color
> attachment. This functionality is supposed to be guaranteed on any
> platform claiming WebGL support. Note though the OpenGL ES 2.0 NPOT
> restrictions; see
> http://khronos.org/webgl/wiki/WebGL_and_OpenGL_Differences .
>
> -Ken

Is there an example someplace that uses the 'recommended' FBO approach?

From kbr...@ Mon Aug 30 13:31:58 2010 From: kbr...@ (Kenneth Russell) Date: Mon, 30 Aug 2010 13:31:58 -0700 Subject: [Public WebGL] Render to texture In-Reply-To: References: <019958f0ade0171e9a34380ac6dd2ed1.squirrel@webmail.sjbaker.org> <83F856FA-1A58-4481-BAFE-99A04C82489F@apple.com> <00b95b32c31b014ec921bc7a1c17dad9.squirrel@webmail.sjbaker.org> Message-ID:

On Mon, Aug 30, 2010 at 1:25 PM, wrote:
>> On Mon, Aug 30, 2010 at 12:20 PM, wrote:
>>> So is going via the DOM/canvas mechanism likely to hurt performance with
>>> present implementations? Would the image be copied from place to place?
>>> (I confess my knowledge of what's going on here is a little vague -
>>> please educate me!) Is there no way to render directly into a texture
>>> without involving a canvas?
>>
>> Yes, there will currently be a significant performance penalty
>> involved with rendering WebGL to one canvas and then uploading it as a
>> texture to another one (or even within the same context -- although
>> you can get the same effect with copyTexSubImage2D). This is true at
>> least in the WebKit WebGL implementation.
>
> I suppose there is at least some hope that copyTexSubImage2D would happen
> entirely within the GPU...that might not be too terrible.

It does.
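The GPU-side copy in question looks something like this - grabbing the lower-left region of the current framebuffer into an already-allocated texture. The function names are illustrative; the clamp helper just keeps the copy rectangle inside the drawing buffer, since reading outside it generates INVALID_VALUE:

```javascript
// Clamp a requested copy extent to what the source buffer can supply.
function clampCopySize(requested, available) {
  return Math.max(0, Math.min(requested, available));
}

// Copy the lower-left (width x height) region of the currently bound
// framebuffer into level 0 of 'texture'. 'gl' is a WebGL rendering
// context; the texture must already be allocated at least this large.
function copyFramebufferToTexture(gl, texture, width, height) {
  const w = clampCopySize(width, gl.drawingBufferWidth);
  const h = clampCopySize(height, gl.drawingBufferHeight);
  gl.bindTexture(gl.TEXTURE_2D, texture);
  // Arguments: target, level, xoffset, yoffset, x, y, width, height.
  gl.copyTexSubImage2D(gl.TEXTURE_2D, 0, 0, 0, 0, 0, w, h);
}
```

Since the pixels never leave the GPU, this avoids the canvas-to-canvas upload penalty discussed above, though the FBO route is still the cleaner way to render to a texture directly.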
>> The recommended way to perform render-to-texture of the color buffers
>> in WebGL is to allocate an FBO and attach a texture as the color
>> attachment. This functionality is supposed to be guaranteed on any
>> platform claiming WebGL support. Note though the OpenGL ES 2.0 NPOT
>> restrictions; see
>> http://khronos.org/webgl/wiki/WebGL_and_OpenGL_Differences .
>>
>> -Ken
>
> Is there an example someplace that uses the 'recommended' FBO approach?

The only one that comes to mind is the render target implementation in O3D/WebGL. See http://code.google.com/p/o3d/ and http://code.google.com/p/o3d/downloads/list . I'm sure others on this list will have additional recommendations.

-Ken