From cma...@ Thu Jul 1 04:11:44 2010 From: cma...@ (Chris Marrin) Date: Thu, 01 Jul 2010 07:11:44 -0400 Subject: [Public WebGL] Typed Arrays & Strings In-Reply-To: References: Message-ID: <78221710-CD05-4778-AE16-B8CCA0D22EA9@apple.com> It does seem reasonable to have toDataURL() and fromDataURL() in the TypedArray base class, like Canvas 2D has. I'm not sure why we haven't discussed it before. ~Chris chris...@ On Jun 29, 2010, at 7:35 PM, ry...@ wrote: > Hello, > > I like the Typed Array proposal -- it seems like one of the more sane > binary javascript proposals. One important thing it's missing is the > ability to decode a chunk of UTF8 data. Is there a reason that is not > part of the spec? > > > Ryan > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Thu Jul 1 10:18:20 2010 From: vla...@ (Vladimir Vukicevic) Date: Thu, 1 Jul 2010 10:18:20 -0700 (PDT) Subject: [Public WebGL] Typed Arrays & Strings In-Reply-To: <372119523.597770.1278004260881.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <1973926285.597859.1278004700757.JavaMail.root@cm-mail03.mozilla.org> ----- "Joshua Bell" wrote: > To leap back in - I'd love to see browser vendors include a JS API > that allows for string encoding/decoding from Uint8Arrays. At a > minimum for my scenarios: ASCII, "binary", UTF-8, and Base64 (the > latter with inverted semantics if the words "encode" and "decode" are > used). Many libraries fail to properly deal with UTF-16 surrogate > pairs, so getting it into the browser with a spec and conformance > tests would be lovely. 
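(For concreteness, the kind of string-decoding helper being requested might look like the sketch below. The name decodeUTF8 is hypothetical -- nothing like it is specified anywhere yet -- and the sketch omits the validation of ill-formed sequences that a real spec would have to pin down. It does handle the surrogate-pair case Joshua mentions, emitting a UTF-16 pair for code points above U+FFFF.)

```javascript
// Hypothetical helper: decode a UTF-8 byte sequence held in a
// Uint8Array into a JS (UTF-16) string. Sketch only -- a real spec
// would also define error handling for ill-formed input.
function decodeUTF8(bytes) {
  var out = [];
  var i = 0;
  while (i < bytes.length) {
    var b = bytes[i++], cp;
    if (b < 0x80) {
      cp = b;                                  // 1-byte sequence (ASCII)
    } else if (b < 0xE0) {
      cp = ((b & 0x1F) << 6) |
           (bytes[i++] & 0x3F);                // 2-byte sequence
    } else if (b < 0xF0) {
      cp = ((b & 0x0F) << 12) |
           ((bytes[i++] & 0x3F) << 6) |
           (bytes[i++] & 0x3F);                // 3-byte sequence
    } else {
      cp = ((b & 0x07) << 18) |
           ((bytes[i++] & 0x3F) << 12) |
           ((bytes[i++] & 0x3F) << 6) |
           (bytes[i++] & 0x3F);                // 4-byte sequence
    }
    if (cp > 0xFFFF) {                         // above the BMP:
      cp -= 0x10000;                           // emit a surrogate pair
      out.push(String.fromCharCode(0xD800 | (cp >> 10),
                                   0xDC00 | (cp & 0x3FF)));
    } else {
      out.push(String.fromCharCode(cp));
    }
  }
  return out.join("");
}
```

The surrogate-pair branch is exactly the part that, per Joshua's point, many ad-hoc libraries get wrong -- which is the argument for a single specified, conformance-tested implementation in the browser.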
> > That said, IMHO string encodings don't belong in the TypedArray spec > or Uint8Array type itself. I agree here -- I think it would be possible to create a separate spec that depends on Typed Arrays, but provides the specific functionality needed. It sounds like you and Ryan (and others) have a good idea of what would be needed -- would you guys be willing to take a stab at a strawman spec for the functionality? I'd actually urge you to propose it to W3C public-html directly (though don't take that as a "get your dirty strings out of here" request -- you're welcome to bring it up here as well, though even typed arrays are only discussed here out of necessity; would be better if they lived in a general web stack spot :-). One thing that's not quite clear to me, though... I understand that being able to do string charset conversions would be useful. What I don't understand is the desire to do arbitrary binary string to UTF8-in-UInt8Array conversion and vice versa. Is it possible to just ship raw byte arrays (ArrayBuffer) down across the wire, through XHR/WebSockets instead? Seems much simpler/faster and less error prone than doing charset conversions on binary data. > One approach would be a Binary/Buffer type with API like Ryan's in > Node.js that "has-a" (or "is-a") Uint8Array and is what is yielded by > proposed binary support in XHR, File, and WebSockets. It's certainly a > clean and easy to understand API. > > But... that would delay being able to lock down any of those other > specs, though... so perhaps the common currency for binary data should > really be simply Uint8Array (Vlad, is that what you were showing at > WebGL camp?), and string<->binary encoding should exist in a separate > API (e.g. like the CommonJS Encodings proposals) > > (Hopefully I'm being pessimistic about the speed at which a > Binary/Buffer type can be settled on.) > > Vlad: does there need to be any further discussion on this list, or do > you have all the requirements? 
Should continued discussion stay here, > or be redirected to... bugzilla? Well, see above -- I think that neither Ken nor I really understand the requirements, but that's probably ok, since this is a fairly independent chunk from typed arrays themselves. However, I'd definitely like to understand the potential use cases through a new proposed spec for the charset conversions. - Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From jos...@ Thu Jul 1 12:12:29 2010 From: jos...@ (Joshua Bell) Date: Thu, 1 Jul 2010 12:12:29 -0700 Subject: [Public WebGL] Typed Arrays & Strings In-Reply-To: <1973926285.597859.1278004700757.JavaMail.root@cm-mail03.mozilla.org> References: <372119523.597770.1278004260881.JavaMail.root@cm-mail03.mozilla.org> <1973926285.597859.1278004700757.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Thu, Jul 1, 2010 at 10:18 AM, Vladimir Vukicevic wrote: > One thing that's not quite clear to me, though... I understand that being > able to do string charset conversions would be useful. What I don't > understand is the desire to do arbitrary binary string to UTF8-in-UInt8Array > conversion and vice versa. Is it possible to just ship raw byte arrays > (ArrayBuffer) down across the wire, through XHR/WebSockets instead? Seems > much simpler/faster and less error prone than doing charset conversions on > binary data. > The specific scenarios I am interested in both consuming and producing (from JS): - Textual data formats (XML, JSON) with embedded islands of base64-encoded binary data - Binary data formats with embedded UTF-8 strings It sounds like Ryan is interested in the latter. On Thu, Jul 1, 2010 at 4:11 AM, Chris Marrin wrote: > It does seem reasonable to have toDataURL() and fromDataURL() in the > TypedArray base class, like Canvas 2D has. 
I'm not sure why we haven't > discussed it before. > I like this suggestion: // binary -> base64 JS string: mybuf.toDataURL() // "data:application/octet-stream;base64,AwEEAQUJAgY=" // base64 JS string --> arraybuf: mybuf = ArrayBuffer.fromDataURL("data:application/octet-stream;base64,AwEEAQUJAgY=") ... but it's unclear to me how you might use this approach for getting UTF-8 data embedded in an ArrayBuffer back out to a JS string. This inspired me to look again at the File API draft http://dev.w3.org/2006/webapi/FileAPI since the Blob type therein is a more direct match for something which should consume/produce data URIs. It does temptingly offer similar services, but using them for these scenarios seems limited and convoluted: e.g. pull the content down as a Blob via XHR, read it (asynchronously) via a BlobReader as an ArrayBuffer via readAsArrayBuffer(); on receipt, parse it, determine the subrange which is e.g. a UTF-8 string, construct another BlobReader on it and read it (async again!) using readAsText(subblob, "UTF-8"). From vla...@ Thu Jul 1 13:04:10 2010 From: vla...@ (Vladimir Vukicevic) Date: Thu, 1 Jul 2010 13:04:10 -0700 (PDT) Subject: [Public WebGL] SIGGRAPH WebGL session/BoF Message-ID: <960202927.599239.1278014650619.JavaMail.root@cm-mail03.mozilla.org> Hi all, At SIGGRAPH 2010 coming up at the end of July, there will be a 2-hour WebGL session/BoF -- Thursday, the 29th of July, from 4pm to 6pm. If any of the framework authors or those of you who are currently developing apps are interested in presenting your experience with working with WebGL so far, please let me know via private email; I'm working on putting together the schedule, and it'd be great to get some presentations from the broader WebGL community. Talk/presentation length is flexible -- could be anything from 10-30 minutes. Thanks!
- Vlad From kbr...@ Wed Jul 7 18:20:10 2010 From: kbr...@ (Kenneth Russell) Date: Wed, 7 Jul 2010 18:20:10 -0700 Subject: [Public WebGL] Documenting GL_FIXED as unsupported Message-ID: Now that we have a section in the WebGL spec on the differences between WebGL and OpenGL ES 2.0, I've added a brief note based on feedback from Zhenyao Mo about the fact that the WebGL API does not support the GL_FIXED data type. Please send any comments to the list. -Ken From kbr...@ Thu Jul 8 15:26:31 2010 From: kbr...@ (Kenneth Russell) Date: Thu, 8 Jul 2010 15:26:31 -0700 Subject: [Public WebGL] Re: WebGL spec modifications for D3D In-Reply-To: <4C26988F.6000506@sjbaker.org> References: <4C26988F.6000506@sjbaker.org> Message-ID: On Sat, Jun 26, 2010 at 5:17 PM, Steve Baker wrote: > YIKES!!! > > Lack of shader invariance can be a major headache in many common > multipass algorithms. Without the 'invariant' keyword, we're going to > need something like the 'ftransform' function (which I think was > obsoleted in GLSL 1.4 and GLES 1.0). > > Without EITHER 'invariant' OR 'ftransform', some rather important > algorithms become impossible - and that would be really bad news! Sorry for the long delay in replying. The removal of the invariant enforcement was recommended by TransGaming based on its not being implementable on D3D9. Perhaps someone from TG could comment more on the exact issue and what is possible to implement. I agree that its removal seems to preclude multi-pass rendering algorithms in WebGL 1.0.
-Ken > -- Steve > >>> On Thu, Apr 22, 2010 at 12:18 PM, Vangelis Kokkevis >>> wrote: >>> >>>> 5. Shader invariance as described in GLSL ES spec >>>> (http://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf) >>>> section 4.6.1 and enforced by the "invariant" qualifier and #pragma STDGL >>>> invariant(all) cannot be guaranteed in D3D9. Spec change: No guarantees are >>>> made about invariance in shader outputs; the invariant qualifier and #pragma >>>> are ignored. >>>> From cal...@ Thu Jul 8 19:05:05 2010 From: cal...@ (Mark Callow) Date: Fri, 09 Jul 2010 11:05:05 +0900 Subject: [Public WebGL] Re: WebGL spec modifications for D3D In-Reply-To: References: <4C26988F.6000506@sjbaker.org> Message-ID: <4C3683D1.7060605@hicorp.co.jp> Removal of ftransform (because it was there for matching fixed-function transformation) yet still wanting to support multi-pass algorithms was one of the drivers for the addition of the invariant keyword. Regards -Mark On 09/07/2010 07:26, Kenneth Russell wrote: > On Sat, Jun 26, 2010 at 5:17 PM, Steve Baker wrote: > >> YIKES!!! >> >> Lack of shader invariance can be a major headache in many common >> multipass algorithms. Without the 'invariant' keyword, we're going to >> need something like the 'ftransform' function (which I think was >> obsoleted in GLSL 1.4 and GLES 1.0). >> >> Without EITHER 'invariant' OR 'ftransform', some rather important >> algorithms become impossible - and that would be really bad news! >> > Sorry for the long delay in replying.
> > The removal of the invariant enforcement was recommended by > TransGaming based on its not being implementable on D3D9. Perhaps > someone from TG could comment more on the exact issue and what is > possible to implement. I agree that its removal seems to preclude > multi-pass rendering algorithms in WebGL 1.0. > > -Ken > > From ste...@ Fri Jul 9 18:58:15 2010 From: ste...@ (Steve Baker) Date: Fri, 09 Jul 2010 20:58:15 -0500 Subject: [Public WebGL] Re: WebGL spec modifications for D3D In-Reply-To: <4C3683D1.7060605@hicorp.co.jp> References: <4C26988F.6000506@sjbaker.org> <4C3683D1.7060605@hicorp.co.jp> Message-ID: <4C37D3B7.2070106@sjbaker.org> Yep. I'm sure that D3D renderers can take advantage of the depth-only+beauty approach - most modern game engines use this kind of technique. So there must be a way that Z invariance can be guaranteed in D3D. It is possible that D3D somehow magically guarantees this without 'invariant' or 'ftransform' - in which case it would be safe for a WebGL implementation based on D3D to simply ignore those directives. But that is most certainly NOT the case for true OpenGL drivers (trust me...I've seen what happened before ftransform was added into Cg!) - so it is utterly, 100% essential that we have either 'invariant' or 'ftransform' or some other cast-iron guarantee that shader optimization won't change even the least significant bit of (at a minimum) the matrix operations and clipping that drive gl_Position in the vertex shader. That's the minimum acceptable capability - effectively the same guarantee as ftransform. Invariance in XYZ is really a barest-minimum requirement. There are other nice algorithms that rely on invariance in things like texture coordinates.
This is so critical that I'd MUCH prefer that we simply not support hardware that doesn't have a viable GLSL implementation than to screw up the most vital algorithm in all modern 3D engines for the sake of (let's be honest) crappy Intel graphics that quite honestly don't have the horsepower to do 'interesting' stuff anyway. They could support OpenGL and shader invariance if they were motivated to do so - failure to do it is just laziness...well, maybe it's time to punish that. If we can do no better, let us at least preserve some sort of invariance directive and provide a testable flag to tell us when it's not actually guaranteed, so that developers who care can fall back on something else or simply punt on useless hardware. Just in case anyone out there doubts the importance of this - let me provide a brief primer on the subject (feel free to skip reading at this point if you already grok why I'm so upset about this): One of the most important modern techniques is to render a super-simple depth-only pass first - then to render a "beauty pass" second. The depth-only pass uses the simplest possible shaders, position-only vertex attributes and writes only to the Z buffer to save video RAM bandwidth. The beauty pass has everything turned up to the max - and relies on the fact that most graphics hardware can skip over Z-fail fragments without even running the shader. It means that no matter how complex your fragment shader, you pay the price to render each screen pixel only once. That's a massively important thing! If you have occlusion culling (I guess we don't in WebGL?), then you can make even more savings by spotting objects that didn't hit any pixels during the depth-only pass - and not rendering those at all during beauty...also render simple 'proxy' geometry for geometrically complex objects to see if they can be skipped during beauty. The net result is a massive speedup for sophisticated renderers with complex models.
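(As an illustration of the technique being described, the two vertex shaders of such a renderer might look like the following under GLSL ES 1.00. Names are illustrative, and the two shaders are separate compilation units shown in one block for brevity. The invariant declaration on gl_Position is precisely the guarantee the proposed spec change removes.)

```glsl
// Depth-only pass: cheapest possible vertex shader, position only.
invariant gl_Position;            // demand bit-exact position output
attribute vec4 aPosition;
uniform mat4 uMvp;
void main() {
  gl_Position = uMvp * aPosition;
}

// Beauty pass: identical position math, but extra attributes and
// varyings. Without "invariant", the extra code can change how the
// compiler optimizes the position computation, shifting gl_Position
// by an LSB and breaking the Z-equality between the two passes.
invariant gl_Position;
attribute vec4 aPosition;
attribute vec2 aTexCoord;
attribute vec3 aNormal;
uniform mat4 uMvp;
varying vec2 vTexCoord;
varying vec3 vNormal;
void main() {
  vTexCoord = aTexCoord;
  vNormal = aNormal;
  gl_Position = uMvp * aPosition;
}
```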
But if there is a difference in even the least significant bit of Z between those two passes, the image will break up and be unusable. To get the benefits of this approach, you need different vertex shaders between depth-only and beauty passes because you don't pass texture coordinates, colors, normals, etc to the depth-only shader - and this change in the source code of the shader will result in different optimisations happening in the GLSL compiler - which in turn will result in roundoff errors...which screws up the entire thing. Multipass lighting also requires perfect-to-the-least-significant-bit alignment between passes - and that's the way to get sophisticated multiple light source rendering to happen cheaply. But increasingly sophisticated algorithms that I'm using these days rely on invariance in other parts of the vertex shader...the way I apply bullet holes and blood splatter to geometry in first person shooters - for example - relies on invariance in texture calculations too. I'm sure other developers are finding equally sneaky ways to exploit shaders that also rely on LSB-perfect multipass. -- Steve Mark Callow wrote: > Removal of ftransform (because it was there for matching fixed function > transformation) yet still wanting o support multi-pass algorithms was > one of the drivers for the addition of the invariant keyword. > > Regards > > -Mark > > > On 09/07/2010 07:26, Kenneth Russell wrote: > >> On Sat, Jun 26, 2010 at 5:17 PM, Steve Baker wrote: >> >> >>> YIKES!!! >>> >>> Lack of shader invarience can be a major headache in many common >>> multipass algorithms. Without the 'invariant' keyword, we're going to >>> need something like the 'ftransform' function (which I think was >>> obsoleted in GLSL 1.4 and GLES 1.0). >>> >>> Without EITHER 'invariant' OR 'ftransform', some rather important >>> algorithms become impossible - and that would be really bad news! >>> >>> >> Sorry for the long delay in replying. 
>> >> The removal of the invariant enforcement was recommended by >> TransGaming based on its not being implementable on D3D9. Perhaps >> someone from TG could comment more on the exact issue and what is >> possible to implement. I agree that its removal seems to preclude >> multi-pass rendering algorithms in WebGL 1.0. >> >> -Ken >> >> >> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Sat Jul 10 03:35:38 2010 From: ste...@ (stephen white) Date: Sat, 10 Jul 2010 20:05:38 +0930 Subject: [Public WebGL] Re: WebGL spec modifications for D3D In-Reply-To: <4C37D3B7.2070106@sjbaker.org> References: <4C26988F.6000506@sjbaker.org> <4C3683D1.7060605@hicorp.co.jp> <4C37D3B7.2070106@sjbaker.org> Message-ID: On 10/07/2010, at 11:28 AM, Steve Baker wrote: > coordinates. This is so critical that I'd MUCH prefer that we simply > not support hardware that doesn't have a viable GLSL implementation than > to screw up the most vital algorithm in all modern 3D engines for the I agree with this statement, very much so. The ANGLE approach is interesting, however this may be the fly in the ointment that kills it. It would be better for ANGLE to have its problems with invariants (and the customer has an "install OpenGL driver" option), than for everything to have problems. 
-- steve...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ala...@ Sat Jul 10 05:55:15 2010 From: ala...@ (ala...@) Date: Sat, 10 Jul 2010 05:55:15 -0700 Subject: [Public WebGL] Re: WebGL spec modifications for D3D In-Reply-To: References: <4C26988F.6000506@sjbaker.org> <4C3683D1.7060605@hicorp.co.jp> <4C37D3B7.2070106@sjbaker.org> Message-ID: <4C386DB3.7090405@mechnicality.com> Fully agree with both Steves On 07/10/2010 03:35 AM, stephen white wrote: > On 10/07/2010, at 11:28 AM, Steve Baker wrote: > >> coordinates. This is so critical that I'd MUCH prefer that we simply >> not support hardware that doesn't have a viable GLSL implementation than >> to screw up the most vital algorithm in all modern 3D engines for the >> > > I agree with this statement, very much so. The ANGLE approach is interesting, however this may be the fly in the ointment that kills it. > > It would be better for ANGLE to have its problems with invariants (and the customer has an "install OpenGL driver" option), than for everything to have problems. > > +1 on this. I also strongly agree with Steve Baker in that if some implementations do support invariance and some don't it should be testable within at least WebGL and ideally in a shader. 
Regards Alan > -- > steve...@ > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Sat Jul 10 10:35:15 2010 From: ste...@ (Steve Baker) Date: Sat, 10 Jul 2010 12:35:15 -0500 Subject: [Public WebGL] Re: WebGL spec modifications for D3D In-Reply-To: References: <4C26988F.6000506@sjbaker.org> <4C3683D1.7060605@hicorp.co.jp> <4C37D3B7.2070106@sjbaker.org> Message-ID: <4C38AF53.6080809@sjbaker.org> Having slept on it, I wonder if there is another way. I believe that the following three statements are all true: * The nVidia Cg compiler is open-sourced: http://developer.nvidia.com/object/cg_compiler_code.html * The Cg compiler can also compile GLSL. * D3D accepts 'machine code' shader programs as an alternative to HLSL and Cg. If so, could we not take the OpenSourced nVidia compiler, turn on the GLSL option and produce a back-end to allow it to generate D3D shader machine code? It's probably a stretch - and one or more of my assumptions might be incorrect - but wouldn't that allow us to run fully compliant GLSL shaders under D3D with all the wonders of invariance? ANGLE must be doing something of the sort to convert GLSL for D3D already. -- Steve. stephen white wrote: > On 10/07/2010, at 11:28 AM, Steve Baker wrote: > >> coordinates. This is so critical that I'd MUCH prefer that we simply >> not support hardware that doesn't have a viable GLSL implementation than >> to screw up the most vital algorithm in all modern 3D engines for the >> > > > I agree with this statement, very much so. The ANGLE approach is interesting, however this may be the fly in the ointment that kills it. 
> > It would be better for ANGLE to have its problems with invariants (and the customer has an "install OpenGL driver" option), than for everything to have problems. > > -- > steve...@ From dan...@ Tue Jul 13 12:29:49 2010 From: dan...@ (Daniel Koch) Date: Tue, 13 Jul 2010 15:29:49 -0400 Subject: [Public WebGL] Re: WebGL spec modifications for D3D In-Reply-To: <4C37D3B7.2070106@sjbaker.org> References: <4C26988F.6000506@sjbaker.org> <4C3683D1.7060605@hicorp.co.jp> <4C37D3B7.2070106@sjbaker.org> Message-ID: <32619927-DD61-442F-9B82-E6D053412F00@transgaming.com> Hi folks, The problem is that there is nothing like the "invariant" keyword in D3D9 HLSL (or even D3D10 HLSL for that matter) that I am aware of. In practice, there must be some form of invariance guaranteed by D3D9 (especially for position), since I know of many games which use multi-pass rendering algorithms which work just fine. The difficulty lies in figuring out exactly what is guaranteed by D3D9, since we've been unable to find any sort of public documentation or discussion of these issues. However, even if there is position invariance, this does not provide a mechanism to toggle invariance on and off on a per-variable basis as is required in GLSL. The closest thing we've been able to find is a SIGGRAPH article on D3D10 from Microsoft.
(http://download.microsoft.com/download/f/2/d/f2d5ee2c-b7ba-4cd0-9686-b6508b5479a1/Direct3D10_web.pdf) They briefly allude to this problem in section 5.4: "We considered several solutions for how to specify invariance requirements in the source code itself, for example, requiring that subroutines be compiled in an invariant fashion even if they are inlined. However, our search ultimately led us to the more traditional route of providing selectable, well-defined optimization levels that must also be respected by the driver compiler." My assumption is that D3D9 must have had similar requirements. The version of Cg that was open-sourced is quite archaic at this point. The ANGLE compiler is open-sourced (https://code.google.com/p/angleproject/) and is based off the 3DLabs GLSL compiler. It compiles GLSL ES to HLSL9 which is then compiled to D3D9 byte-code using D3DXCompileShader. A future extension to ANGLE could be to generate D3D9 byte-code directly. However, even D3D9 bytecode is still an IL and there is no guarantee that the hardware executes those instructions exactly (and I know there are implementations which do compile/optimize this further). The issue raised about ftransform in CG and GLSL and position invariance was primarily an issue when using fixed function and shaders together, and it was indeed a very common problem. This is also why the "position_invariant" option was added to the ARB_vertex_program assembly extension. Examples of this occurring in shader-only applications are much more difficult to come by. Ideally, an example program which does exhibit invariance issues in webgl (or GLSL or GLSL ES) would be available to demonstrate that this is actually a real problem for webgl. If that existed, we could verify whether or not ANGLE on D3D9 has such problems, or if it just works. Steve Baker: do you have any such examples, or would you be able to put one together which demonstrates this problem? 
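(A sketch of the shape such a test case might take -- illustrative only, not a verified repro. drawPassA/drawPassB stand in for app-supplied draw calls using two differently-optimized vertex shaders over the same geometry; with true position invariance, every covered pixel survives the EQUAL depth test, and any LSB drift shows up as dropouts.)

```javascript
// Hypothetical invariance probe: draw the mesh once to lay down depth,
// then again with gl.EQUAL so only bit-identical Z values pass, and
// count how many pixels the second pass actually touched.
function probeInvariance(gl, drawPassA, drawPassB, width, height) {
  gl.enable(gl.DEPTH_TEST);
  gl.clearColor(0, 0, 0, 0);        // alpha 0 so touched pixels stand out
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

  gl.depthFunc(gl.LESS);
  drawPassA();                      // depth-only shader, writes Z

  gl.depthFunc(gl.EQUAL);           // only bit-identical Z passes
  drawPassB();                      // beauty shader, writes color

  var pixels = new Uint8Array(width * height * 4);
  gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
  return countLitPixels(pixels);    // compare against pass-A coverage
}

// Pure helper: count pixels whose alpha is non-zero, i.e. pixels the
// second pass actually wrote.
function countLitPixels(rgba) {
  var n = 0;
  for (var i = 3; i < rgba.length; i += 4) {
    if (rgba[i] !== 0) n++;
  }
  return n;
}
```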
We are continuing to investigate the guarantees provided by D3D in this area and a concrete test case showing such issues would be invaluable for this. Thanks, Daniel On 2010-07-10, at 1:35 PM, Steve Baker wrote: > > Having slept on it, I wonder if there is another way. I believe that > the following three statements are all true: > > * The nVidia Cg compiler is open-sourced: > http://developer.nvidia.com/object/cg_compiler_code.html > * The Cg compiler can also compile GLSL. > * D3D accepts 'machine code' shader programs as an alternative to HLSL > and Cg. > > If so, could we not take the OpenSourced nVidia compiler, turn on the > GLSL option and produce a back-end to allow it to generate D3D shader > machine code? > > It's probably a stretch - and one or more of my assumptions might be > incorrect - but wouldn't that allow us to run fully compliant GLSL > shaders under D3D with all the wonders of invariance? ANGLE must be > doing something of the sort to convert GLSL for D3D already. > > -- Steve. On 2010-07-09, at 9:58 PM, Steve Baker wrote: > Yep. > > I'm sure that D3D renderers can take advantage of the depth-only+beauty > approach - most modern game engines use this kind of technique. So > there must be a way that Z invariance can be guaranteed in D3D. > > It is possible that D3D somehow magically guarantees this without > 'invariant' or 'ftransform' - in which case it would be safe for a WebGL > implementation based on D3D to simply ignore those directives. > > But that is most certainly NOT the case for true OpenGL drivers (trust > me...I've seen what happened before ftransform was added into Cg!) - so > it is utterly, 100% essential that we have either 'invarient' or > 'ftransform' or some other cast-iron guarantee that shader optimization > won't change even the least significant bit of (at a minimum) the matrix > operations and clipping that drive glPosition in the Vertex shader. 
--- Daniel Koch -+- daniel...@ -+- 1 613.244.1111 x352 Senior Graphics Architect -+- TransGaming Inc. -+- www.transgaming.com 311 O'Connor St., Suite 300, Ottawa, Ontario, Canada, K2P 2G9 This message is a private communication. It also contains information that is privileged or confidential. If you are not the intended recipient, please do not read, copy or use it, and do not disclose it to others. Please notify the sender of the delivery error by replying to this message, and then delete it and any attachments from your system. Thank you. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pke...@ Tue Jul 13 12:48:14 2010 From: pke...@ (Phil Keslin) Date: Tue, 13 Jul 2010 12:48:14 -0700 Subject: [Public WebGL] Re: WebGL spec modifications for D3D In-Reply-To: <32619927-DD61-442F-9B82-E6D053412F00@transgaming.com> References: <4C26988F.6000506@sjbaker.org> <4C3683D1.7060605@hicorp.co.jp> <4C37D3B7.2070106@sjbaker.org> <32619927-DD61-442F-9B82-E6D053412F00@transgaming.com> Message-ID: I've been following this thread for a bit and just remembered an interesting thing about D3D (which you may already know). The spec is the refrast (the D3D reference rasterizer). Maybe that would hold some insight here. I haven't looked at the refrast for any D3D variant in a while so can't say what the behavior is. - Phil On Tue, Jul 13, 2010 at 12:29 PM, Daniel Koch wrote: > Hi folks, > > The problem is that there is nothing like the "invariant" keyword in D3D9 > HLSL (or even D3D10 HLSL for that matter) that I am aware of. > In practice, there must be some form of invariance guaranteed by D3D9 > (especially for position), since I know of many games which use multi-pass > rendering algorithms which work just fine. The difficulty lies in figuring > out exactly what is guaranteed by D3D9, since we've been unable to find any > sort of public documentation or discussion of these issues. However, even > if there is position invariance, this does not provide a mechanism to toggle > invariance on and off on a per-variable basis as is required in GLSL. > > The closest thing we've been able to find is a SIGGRAPH article on D3D10 > from Microsoft. ( > http://download.microsoft.com/download/f/2/d/f2d5ee2c-b7ba-4cd0-9686-b6508b5479a1/Direct3D10_web.pdf) > They briefly allude to this problem in section 5.4: > > "We considered several solutions for how to specify invariance requirements > in the source code itself, for example, requiring that subroutines be > compiled in an invariant fashion even if they are inlined. 
However, our > search ultimately led us to the more traditional route of providing > selectable, well-defined optimization levels that must also be respected by > the driver compiler." > > My assumption is that D3D9 must have had similar requirements. > > The version of Cg that was open-sourced is quite archaic at this point. > The ANGLE compiler is open-sourced ( > https://code.google.com/p/angleproject/) and is based off the 3DLabs GLSL > compiler. It compiles GLSL ES to HLSL9 which is then compiled to D3D9 > byte-code using D3DXCompileShader. A future extension to ANGLE could be to > generate D3D9 byte-code directly. However, even D3D9 bytecode is still an > IL and there is no guarantee that the hardware executes those instructions > exactly (and I know there are implementations which do compile/optimize this > further). > > The issue raised about ftransform in CG and GLSL and position invariance > was primarily an issue when using fixed function and shaders together, and > it was indeed a very common problem. This is also why the > "position_invariant" option was added to the ARB_vertex_program assembly > extension. Examples of this occurring in shader-only applications are much > more difficult to come by. > > Ideally, an example program which does exhibit invariance issues in webgl > (or GLSL or GLSL ES) would be available to demonstrate that this is actually > a real problem for webgl. If that existed, we could verify whether or not > ANGLE on D3D9 has such problems, or if it just works. > > Steve Baker: do you have any such examples, or would you be able to put one > together which demonstrates this problem? > > We are continuing to investigate the guarantees provided by D3D in this > area and a concrete test case showing such issues would be invaluable for > this. > > Thanks, > Daniel > > On 2010-07-10, at 1:35 PM, Steve Baker wrote: > > > Having slept on it, I wonder if there is another way. 
I believe that > the following three statements are all true: > > * The nVidia Cg compiler is open-sourced: > http://developer.nvidia.com/object/cg_compiler_code.html > * The Cg compiler can also compile GLSL. > * D3D accepts 'machine code' shader programs as an alternative to HLSL > and Cg. > > If so, could we not take the open-sourced nVidia compiler, turn on the > GLSL option and produce a back-end to allow it to generate D3D shader > machine code? > > It's probably a stretch - and one or more of my assumptions might be > incorrect - but wouldn't that allow us to run fully compliant GLSL > shaders under D3D with all the wonders of invariance? ANGLE must be > doing something of the sort to convert GLSL for D3D already. > > -- Steve. > > > On 2010-07-09, at 9:58 PM, Steve Baker wrote: > > Yep. > > I'm sure that D3D renderers can take advantage of the depth-only+beauty > approach - most modern game engines use this kind of technique. So > there must be a way that Z invariance can be guaranteed in D3D. > > It is possible that D3D somehow magically guarantees this without > 'invariant' or 'ftransform' - in which case it would be safe for a WebGL > implementation based on D3D to simply ignore those directives. > > But that is most certainly NOT the case for true OpenGL drivers (trust > me...I've seen what happened before ftransform was added into Cg!) - so > it is utterly, 100% essential that we have either 'invariant' or > 'ftransform' or some other cast-iron guarantee that shader optimization > won't change even the least significant bit of (at a minimum) the matrix > operations and clipping that drive gl_Position in the vertex shader. > That's the minimum acceptable capability - effectively the same > guarantee as ftransform. > > Invariance in XYZ is really a barest-minimum requirement. There are > other nice algorithms that rely on invariance in things like texture > coordinates. 
This is so critical that I'd MUCH prefer that we simply > not support hardware that doesn't have a viable GLSL implementation than > to screw up the most vital algorithm in all modern 3D engines for the > sake of (let's be honest) crappy Intel graphics that quite honestly > don't have the horsepower to do 'interesting' stuff anyway. They could > support OpenGL and shader invariance if they were motivated to do so - > failure to do it is just laziness...well, maybe it's time to punish that. > > If we can do no better, let us at least preserve some sort of invariance > directive and provide a testable flag to tell us when it's not actually > guaranteed so that developers who care can fall back on something else > or simply punt on useless hardware. > > Just in case anyone out there doubts the importance of this - let me > provide a brief primer on the subject (feel free to skip reading at this > point if you already grok why I'm so upset about this): > > One of the most important modern techniques is to render a super-simple > depth-only pass first - then to render a "beauty pass" second. The > depth-only pass uses the simplest possible shaders, position-only vertex > attributes and writes only to the Z buffer to save video RAM bandwidth. > The beauty pass has everything turned up to the max - and relies on the > fact that most graphics hardware can skip over Z-fail fragments without > even running the shader. It means that no matter how complex your > fragment shader, you pay the price to render each screen pixel once. > That's a massively important thing! If you have occlusion culling (I > guess we don't in WebGL?), then you can make even more savings by > spotting objects that didn't hit any pixels during the depth-only pass - > and not rendering those at all during beauty...also render simple > 'proxy' geometry for geometrically complex objects to see if they can be > skipped during beauty. 
The net result is a massive speedup for > sophisticated renderers with complex models. > > But if there is a difference in even the least significant bit of Z > between those two passes, the image will break up and be unusable. To > get the benefits of this approach, you need different vertex shaders > between depth-only and beauty passes because you don't pass texture > coordinates, colors, normals, etc to the depth-only shader - and this > change in the source code of the shader will result in different > optimisations happening in the GLSL compiler - which in turn will result > in roundoff errors...which screws up the entire thing. > > Multipass lighting also requires perfect-to-the-least-significant-bit > alignment between passes - and that's the way to get sophisticated > multiple light source rendering to happen cheaply. > > But increasingly sophisticated algorithms that I'm using these days rely > on invariance in other parts of the vertex shader...the way I apply > bullet holes and blood splatter to geometry in first person shooters - > for example - relies on invariance in texture calculations too. I'm > sure other developers are finding equally sneaky ways to exploit shaders > that also rely on LSB-perfect multipass. > > -- Steve > > > Mark Callow wrote: > > Removal of ftransform (because it was there for matching fixed function > > transformation) yet still wanting o support multi-pass algorithms was > > one of the drivers for the addition of the invariant keyword. > > > Regards > > > -Mark > > > > On 09/07/2010 07:26, Kenneth Russell wrote: > > > On Sat, Jun 26, 2010 at 5:17 PM, Steve Baker wrote: > > > > YIKES!!! > > > Lack of shader invarience can be a major headache in many common > > multipass algorithms. Without the 'invariant' keyword, we're going to > > need something like the 'ftransform' function (which I think was > > obsoleted in GLSL 1.4 and GLES 1.0). 
> > > Without EITHER 'invariant' OR 'ftransform', some rather important > > algorithms become impossible - and that would be really bad news! > > > > Sorry for the long delay in replying. > > > The removal of the invariant enforcement was recommended by > > TransGaming based on its not being implementable on D3D9. Perhaps > > someone from TG could comment more on the exact issue and what is > > possible to implement. I agree that its removal seems to preclude > > multi-pass rendering algorithms in WebGL 1.0. > > > -Ken > > > > --- > Daniel Koch -+- daniel...@ -+- 1 613.244.1111 x352 > Senior Graphics Architect -+- TransGaming Inc. -+- www.transgaming.com > 311 O'Connor St., Suite 300, Ottawa, Ontario, Canada, K2P 2G9 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ala...@ Tue Jul 13 14:09:45 2010 From: ala...@ (ala...@) Date: Tue, 13 Jul 2010 14:09:45 -0700 Subject: [Public WebGL] Re: WebGL spec modifications for D3D In-Reply-To: <32619927-DD61-442F-9B82-E6D053412F00@transgaming.com> References: <4C26988F.6000506@sjbaker.org> <4C3683D1.7060605@hicorp.co.jp> <4C37D3B7.2070106@sjbaker.org> <32619927-DD61-442F-9B82-E6D053412F00@transgaming.com> Message-ID: <4C3CD619.8050407@mechnicality.com> Hi Daniel, Thanks for your, as usual, detailed and informative response. If what you say is correct, it seems to me that one possible solution is for ANGLE to simply ignore any invariance directives in WebGL shaders. Then those implementations running with "real" OpenGL implementations will work as expected - and I think it's that which has us application developers the most worried. 
Regards Alan On 07/13/2010 12:29 PM, Daniel Koch wrote: > Hi folks, > > The problem is that there is nothing like the "invariant" keyword in > D3D9 HLSL (or even D3D10 HLSL for that matter) that I am aware of. > In practise, there must be some form of invariance guaranteed by d3d9 > (especially for position), since I know of many games which use > multi-pass rendering algorithms which work just fine. The difficulty > lies in figuring out exactly what is guaranteed by D3D9, since we've > been unable to find any sort of public documentation or discussion of > these issues. However, even if there is position invariance, this > does not provide a mechanism to toggle invariance on and off on a > per-variable basis as is required in GLSL. > > The closest thing we've been able to find is a SIGGRAPH article on > D3D10 from Microsoft. > (http://download.microsoft.com/download/f/2/d/f2d5ee2c-b7ba-4cd0-9686-b6508b5479a1/Direct3D10_web.pdf) > They briefly allude to this problem in section 5.4: > > "We considered several solutions for how to specify invariance > requirements in the source code itself, for example, requiring > that subroutines be compiled in an invariant fashion even if they > are inlined. However, our search ultimately led us to the more > traditional route of providing selectable, well-defined > optimization levels that must also be respected by the driver > compiler." > > My assumption is that D3D9 must have had similar requirements. > > The version of Cg that was open-sourced is quite archaic at this > point. The ANGLE compiler is open-sourced > (https://code.google.com/p/angleproject/) and is based off the 3DLabs > GLSL compiler. It compiles GLSL ES to HLSL9 which is then compiled > to D3D9 byte-code using D3DXCompileShader. A future extension to > ANGLE could be to generate D3D9 byte-code directly. 
However, even > D3D9 bytecode is still an IL and there is no guarantee that the > hardware executes those instructions exactly (and I know there are > implementations which do compile/optimize this further). > > The issue raised about ftransform in CG and GLSL and position > invariance was primarily an issue when using fixed function and > shaders together, and it was indeed a very common problem. This is > also why the "position_invariant" option was added to the > ARB_vertex_program assembly extension. Examples of this occurring in > shader-only applications are much more difficult to come by. > > Ideally, an example program which does exhibit invariance issues in > webgl (or GLSL or GLSL ES) would be available to demonstrate that this > is actually a real problem for webgl. If that existed, we could > verify whether or not ANGLE on D3D9 has such problems, or if it just > works. > > Steve Baker: do you have any such examples, or would you be able to > put one together which demonstrates this problem? > > We are continuing to investigate the guarantees provided by D3D in > this area and a concrete test case showing such issues would be > invaluable for this. > > Thanks, > Daniel > > On 2010-07-10, at 1:35 PM, Steve Baker wrote: > >> >> Having slept on it, I wonder if there is another way. I believe that >> the following three statements are all true: >> >> * The nVidia Cg compiler is open-sourced: >> http://developer.nvidia.com/object/cg_compiler_code.html >> * The Cg compiler can also compile GLSL. >> * D3D accepts 'machine code' shader programs as an alternative to HLSL >> and Cg. >> >> If so, could we not take the OpenSourced nVidia compiler, turn on the >> GLSL option and produce a back-end to allow it to generate D3D shader >> machine code? >> >> It's probably a stretch - and one or more of my assumptions might be >> incorrect - but wouldn't that allow us to run fully compliant GLSL >> shaders under D3D with all the wonders of invariance? 
ANGLE must be >> doing something of the sort to convert GLSL for D3D already. >> >> -- Steve. > > On 2010-07-09, at 9:58 PM, Steve Baker wrote: > >> Yep. >> >> I'm sure that D3D renderers can take advantage of the depth-only+beauty >> approach - most modern game engines use this kind of technique. So >> there must be a way that Z invariance can be guaranteed in D3D. >> >> It is possible that D3D somehow magically guarantees this without >> 'invariant' or 'ftransform' - in which case it would be safe for a WebGL >> implementation based on D3D to simply ignore those directives. >> >> But that is most certainly NOT the case for true OpenGL drivers (trust >> me...I've seen what happened before ftransform was added into Cg!) - so >> it is utterly, 100% essential that we have either 'invarient' or >> 'ftransform' or some other cast-iron guarantee that shader optimization >> won't change even the least significant bit of (at a minimum) the matrix >> operations and clipping that drive glPosition in the Vertex shader. >> That's the minimum acceptable capability - effectively the same >> guarantee as ftransform. >> >> Invarience in XYZ is really a barest-minimum requirement. There are >> other nice algorithms that rely on invariance in things like texture >> coordinates. This is so critical that I'd MUCH prefer that we simply >> not support hardware that doesn't have a viable GLSL implementation than >> to screw up the most vital algorithm in all modern 3D engines for the >> sake of (let's be honest) crappy Intel graphics that quite honestly >> don't have the horsepower to do 'interesting' stuff anyway. They could >> support OpenGL and shader invariance if they were motivated to do so - >> failure to do it is just laziness...well, maybe it's time to punish that. 
>> >> If we can do no better, let us at least preserve some sort of invariance >> directive and provide a testable flag to tell us when it's not actually >> guaranteed so that developers who care can fall back on something else >> or simply punt on useless hardware. >> >> Just in case anyone out there doubts the importance of this - let me >> provide a brief primer on the subject (feel free to skip reading at this >> point if you already grok why I'm so upset about this) : >> >> One of the most important modern techniques is to render a super-simple >> depth-only pass first - then to render a "beauty pass" second. The >> depth-only pass uses the simplest possible shaders, position-only vertex >> attributes and writes only to the Z buffer to save video RAM bandwidth. >> The beauty pass has everything turned up to the max - and relies on the >> fact that most graphics hardware can skip over Z-fail fragments without >> even running the shader. It means that no matter how complex your >> fragment shader, you pays the price to render each screen pixel once. >> That's a massively important thing! If you have occlusion culling (I >> guess we don't in WebGL?), then you can make even more savings by >> spotting objects that didn't hit any pixels during the depth-only pass - >> and not rendering those at all during beauty...also render simple >> 'proxy' geometry for geometrically complex objects to see if they can be >> skipped during beauty. The net result is a massive speedup for >> sophisticated renderers with complex models. >> >> But if there is a difference in even the least significant bit of Z >> between those two passes, the image will break up and be unusable. 
To >> get the benefits of this approach, you need different vertex shaders >> between depth-only and beauty passes because you don't pass texture >> coordinates, colors, normals, etc to the depth-only shader - and this >> change in the source code of the shader will result in different >> optimisations happening in the GLSL compiler - which in turn will result >> in roundoff errors...which screws up the entire thing. >> >> Multipass lighting also requires perfect-to-the-least-significant-bit >> alignment between passes - and that's the way to get sophisticated >> multiple light source rendering to happen cheaply. >> >> But increasingly sophisticated algorithms that I'm using these days rely >> on invariance in other parts of the vertex shader...the way I apply >> bullet holes and blood splatter to geometry in first person shooters - >> for example - relies on invariance in texture calculations too. I'm >> sure other developers are finding equally sneaky ways to exploit shaders >> that also rely on LSB-perfect multipass. >> >> -- Steve >> >> >> Mark Callow wrote: >>> Removal of ftransform (because it was there for matching fixed function >>> transformation) yet still wanting o support multi-pass algorithms was >>> one of the drivers for the addition of the invariant keyword. >>> >>> Regards >>> >>> -Mark >>> >>> >>> On 09/07/2010 07:26, Kenneth Russell wrote: >>> >>>> On Sat, Jun 26, 2010 at 5:17 PM, Steve Baker >>> > wrote: >>>> >>>> >>>>> YIKES!!! >>>>> >>>>> Lack of shader invarience can be a major headache in many common >>>>> multipass algorithms. Without the 'invariant' keyword, we're going to >>>>> need something like the 'ftransform' function (which I think was >>>>> obsoleted in GLSL 1.4 and GLES 1.0). >>>>> >>>>> Without EITHER 'invariant' OR 'ftransform', some rather important >>>>> algorithms become impossible - and that would be really bad news! >>>>> >>>>> >>>> Sorry for the long delay in replying. 
>>>> >>>> The removal of the invariant enforcement was recommended by >>>> TransGaming based on its not being implementable on D3D9. Perhaps >>>> someone from TG could comment more on the exact issue and what is >>>> possible to implement. I agree that its removal seems to preclude >>>> multi-pass rendering algorithms in WebGL 1.0. >>>> >>>> -Ken >>>> > > --- > Daniel Koch -+- daniel...@ > -+- 1 613.244.1111 x352 > Senior Graphics Architect -+- TransGaming Inc. -+- > www.transgaming.com > 311 O'Connor St., Suite 300, Ottawa, Ontario, Canada, K2P 2G9 > > This message is a private communication. It also contains information > that is privileged or confidential. If you are not the intended > recipient, please do not read, copy or use it, and do not disclose it > to others. Please notify the sender of the delivery error by replying > to this message, and then delete it and any attachments from your > system. Thank you. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From asa...@ Tue Jul 13 19:27:08 2010 From: asa...@ (Andor Salga) Date: Tue, 13 Jul 2010 22:27:08 -0400 Subject: [Public WebGL] remove transpose param from uniformMatrix* ? Message-ID: Excuse me if this has already been discussed, The OpenGL ES spec states the transpose parameter must always be false when calling uniformMatrix*. Is this something we want in WebGL? It seems to just cause confusion. Or are there plans to support this parameter? Wouldn't it make sense to just have the users transpose matrices themselves? Thanks, Andor -------------- next part -------------- An HTML attachment was scrubbed... URL: From asa...@ Tue Jul 13 19:34:52 2010 From: asa...@ (Andor Salga) Date: Tue, 13 Jul 2010 22:34:52 -0400 Subject: [Public WebGL] enabling POINT_SMOOTH symbol? Message-ID: Hi, I'm working on Processing.js, a JavaScript port of Processing. One feature Processing has which we'd like to support is point smoothing. 
Is there any chance we could get the POINT_SMOOTH symbol into the specification? Is there any leeway for adding non-OpenGL ES 2.0 symbols? Thank you, Andor -------------- next part -------------- An HTML attachment was scrubbed... URL: From dan...@ Tue Jul 13 19:47:52 2010 From: dan...@ (Dan Lecocq) Date: Tue, 13 Jul 2010 22:47:52 -0400 Subject: [Public WebGL] enabling POINT_SMOOTH symbol? In-Reply-To: References: Message-ID: Hi Andor, As far as I know, most/all WebGL functions are just wrappers giving you access to the hardware, and some features may or may not be advertised (as in, constants not defined). If you're sure that they are available on a particular graphics card, you can probably just plug in the constant in place of the symbol. So, instead of trying to get gl.POINT_SMOOTH, you could supply 0x0B10, which seems to be the constant GL_POINT_SMOOTH is set to. This sort of approach worked for me with floating-point textures in WebGL, but with points, I'm not sure that points are supported in the OpenGL ES 2.0 spec. Again, though, you might have luck by using the constant for GL_POINTS when calling gl.drawElements. Cheers, Dan On Tue, Jul 13, 2010 at 10:34 PM, Andor Salga wrote: > Hi, > I'm working on Processing.js, a JavaScript port of Processing. One > feature Processing has which we'd like to support is point smoothing. Is > there any chance we could get the POINT_SMOOTH symbol into the > specification? Is there any leeway for adding non-OpenGL ES 2.0 symbols? > > Thank you, > Andor -------------- next part -------------- An HTML attachment was scrubbed... URL: From cal...@ Tue Jul 13 20:41:59 2010 From: cal...@ (Mark Callow) Date: Wed, 14 Jul 2010 12:41:59 +0900 Subject: [Public WebGL] remove transpose param from uniformMatrix* ? In-Reply-To: References: Message-ID: <4C3D3207.8040809@hicorp.co.jp> I'm not quite sure what your point is. 
Users do have to transpose matrices themselves. The parameter is there in OpenGL ES because (a) it is there in OpenGL, and (b) the functionality is very likely to be added in a future version. There are many other examples where a parameter was kept even though it must always be set to a particular value because the underlying functionality was removed. We decided that is less confusing than OpenGL ES & OpenGL having different function signatures. Regards -Mark On 14/07/2010 11:27, Andor Salga wrote: > Excuse me if this has already been discussed, > > The OpenGL ES spec states the transpose parameter must always be > false when calling uniformMatrix*. Is this something we want in WebGL? > It seems to just cause confusion. Or are there plans to support this > parameter? Wouldn't it make sense to just have the users transpose > matrices themselves? > > Thanks, > Andor -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 398 bytes Desc: not available URL: From phi...@ Tue Jul 13 20:49:30 2010 From: phi...@ (Philip Rideout) Date: Tue, 13 Jul 2010 21:49:30 -0600 Subject: [Public WebGL] enabling POINT_SMOOTH symbol? In-Reply-To: References: Message-ID: Or, use point sprites. Smooth points are a bit antiquated; they're no longer in the core profile for desktop OpenGL 3.2+. Here's how you use point sprites in an ES 2.0 fragment shader:

    uniform sampler2D tex;

    void main() {
        gl_FragColor = texture2D(tex, gl_PointCoord);
    }

You can also set the gl_PointSize variable from your vertex shader. On Tue, Jul 13, 2010 at 8:47 PM, Dan Lecocq wrote: > Hi Andor, > > As far as I know, most/all WebGL functions are just wrappers giving you > access to the hardware, and some features may or may not be advertised (as > in, constants not defined). 
If you're sure that they are available on a > particular graphics card, you can probably just plug in the constant in > place of the symbol. > > So, instead of trying to get gl.POINT_SMOOTH, you could supply 0x0B10, > which seems to be the constant GL_POINT_SMOOTH is set to. This sort of > approach worked for me with floating-point textures in WebGL, but with > points, I'm not sure that points are supported in the OpenGL ES 2.0 spec. > Again, though, you might have luck by using the constant for GL_POINTS when > calling gl.drawElements. > > Cheers, > Dan > > On Tue, Jul 13, 2010 at 10:34 PM, Andor Salga wrote: > >> Hi, >> I'm working on Processing.js, a JavaScript port of Processing. One >> feature Processing has which we'd like to support is point smoothing. Is >> there any chance we could get the POINT_SMOOTH symbol into the >> specification? Is there any leeway for adding non-OpenGL ES 2.0 symbols? >> >> Thank you, >> Andor > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhe...@ Tue Jul 13 22:28:50 2010 From: zhe...@ (Mo, Zhenyao) Date: Tue, 13 Jul 2010 22:28:50 -0700 Subject: [Public WebGL] enabling POINT_SMOOTH symbol? In-Reply-To: References: Message-ID: Once we bring the implementation to full WebGL spec conformance, any illegal enums (i.e., those desktop GL enums that are not supported by GLES2) will generate an error instead of passing them down to the underlying desktop GL drivers. - mo On Tue, Jul 13, 2010 at 7:47 PM, Dan Lecocq wrote: > Hi Andor, > > As far as I know, most/all WebGL functions are just wrappers giving you > access to the hardware, and some features may or may not be advertised (as > in, constants not defined). If you're sure that they are available on a > particular graphics card, you can probably just plug in the constant in > place of the symbol. > > So, instead of trying to get gl.POINT_SMOOTH, you could supply 0x0B10, > which seems to be the constant GL_POINT_SMOOTH is set to. 
This sort of > approach worked for me with floating-point textures in WebGL, but with > points, I'm not sure that points are supported in the OpenGL ES 2.0 spec. > Again, though, you might have luck by using the constant for GL_POINTS when > calling gl.drawElements. > > Cheers, > Dan > > On Tue, Jul 13, 2010 at 10:34 PM, Andor Salga wrote: > >> Hi, >> I'm working on Processing.js, a JavaScript port of Processing. One >> feature Processing has which we'd like to support is point smoothing. Is >> there any chance we could get the POINT_SMOOTH symbol into the >> specification? Is there any leeway for adding non-OpenGL ES 2.0 symbols? >> >> Thank you, >> Andor > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ste...@ Wed Jul 14 00:22:51 2010 From: ste...@ (Steve Baker) Date: Wed, 14 Jul 2010 02:22:51 -0500 Subject: [Public WebGL] Re: WebGL spec modifications for D3D In-Reply-To: <4C3CD619.8050407@mechnicality.com> References: <4C26988F.6000506@sjbaker.org> <4C3683D1.7060605@hicorp.co.jp> <4C37D3B7.2070106@sjbaker.org> <32619927-DD61-442F-9B82-E6D053412F00@transgaming.com> <4C3CD619.8050407@mechnicality.com> Message-ID: <4C3D65CB.6080100@sjbaker.org> Sorry, no. I don't have an example of the problem easily to hand - I've changed jobs twice since the last time this happened to me - and it presumably hasn't happened since then because I've been using ftransform. The general form of a worst-case 'invariance problem' shader would likely be something that does the vertex position calculation in several steps (maybe something like skeletal mesh animation, where there are four weighted bone transforms, then a separate modelview transform, and finally a separate perspective transform) - and something that might cause the compiler to do some 'clever' optimization, such as using the intermediate result of the bone transforms to transform the Normal/Binormal/Tangent. 
Something like:

    gl_Position = A * B * C;
    #ifdef BEAUTY_PASS
    D = B * C;
    #endif

...with the conditional code turned on, the compiler might rearrange the beauty pass to:

    D = B * C;
    gl_Position = A * D;

...and calculate the depth pass as:

    temp = A * B;
    gl_Position = temp * C;

...reversing the grouping of A*B*C from A*(B*C) to (A*B)*C, which could obviously produce answers that are different in the least significant bit of gl_Position between the two passes. But getting that to happen without a lot of internal knowledge of how the shader compiler optimizes is essentially a matter of blind luck. It's really tough to come up with examples (unless, of course, your game is going gold next week and suddenly everything breaks because someone changed a shader constant someplace). But sadly, having an example that breaks under OpenGL (if you don't use ftransform or invariant) - and then showing that this same exact example DOESN'T break under D3D - doesn't in any way demonstrate that some other example wouldn't break. The way these compilers optimize varies from platform to platform even for the same hardware vendor...so unless you have a cast-iron guarantee of invariance, no amount of "Look! This example doesn't break!" would be convincing proof of a non-problem. I've gotta say though - Microsoft's "solution" (as quoted below) is just lame! They are essentially saying "Well, you can solve the problem by turning off shader optimization"...but that means that you have to have the entire shader pessimally compiled just to stop a one or two line problem! Whoever thought THAT was an adequate response totally didn't understand the scope of the problem! When this problem strikes, it affects every single shader the project uses, since they all need to manage multi-pass correctly. 
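[The reordering described above bites because floating-point arithmetic is not associative. A scalar stand-in in plain JavaScript - ordinary doubles instead of the mat4 products a real shader would use, so only an analogy - makes the effect concrete:

```javascript
// Floating-point math is not associative: regrouping A * B * C as
// (A * B) * C versus A * (B * C) can change the least significant
// bit of the result.  Matrix products are sums of products, so the
// same applies to gl_Position math; shown here with scalar addition.
const leftToRight = (0.1 + 0.2) + 0.3; // 0.6000000000000001
const rightToLeft = 0.1 + (0.2 + 0.3); // 0.6
console.log(leftToRight === rightToLeft); // false
```

A compiler that regroups the expression between two compilations of "the same" math produces exactly this kind of last-bit disagreement.]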
-- Steve > On 07/13/2010 12:29 PM, Daniel Koch wrote: >> Hi folks, >> >> The problem is that there is nothing like the "invariant" keyword in >> D3D9 HLSL (or even D3D10 HLSL for that matter) that I am aware of. >> In practise, there must be some form of invariance guaranteed by d3d9 >> (especially for position), since I know of many games which use >> multi-pass rendering algorithms which work just fine. The difficulty >> lies in figuring out exactly what is guaranteed by D3D9, since we've >> been unable to find any sort of public documentation or discussion of >> these issues. However, even if there is position invariance, this >> does not provide a mechanism to toggle invariance on and off on a >> per-variable basis as is required in GLSL. >> >> The closest thing we've been able to find is a SIGGRAPH article on >> D3D10 from Microsoft. >> (http://download.microsoft.com/download/f/2/d/f2d5ee2c-b7ba-4cd0-9686-b6508b5479a1/Direct3D10_web.pdf) >> They briefly allude to this problem in section 5.4: >> >> "We considered several solutions for how to specify invariance >> requirements in the source code itself, for example, requiring >> that subroutines be compiled in an invariant fashion even if they >> are inlined. However, our search ultimately led us to the more >> traditional route of providing selectable, well-defined >> optimization levels that must also be respected by the driver >> compiler." >> >> My assumption is that D3D9 must have had similar requirements. >> >> The version of Cg that was open-sourced is quite archaic at this >> point. The ANGLE compiler is open-sourced >> (https://code.google.com/p/angleproject/) and is based off the 3DLabs >> GLSL compiler. It compiles GLSL ES to HLSL9 which is then compiled >> to D3D9 byte-code using D3DXCompileShader. A future extension to >> ANGLE could be to generate D3D9 byte-code directly. 
However, even >> D3D9 bytecode is still an IL and there is no guarantee that the >> hardware executes those instructions exactly (and I know there are >> implementations which do compile/optimize this further). >> >> The issue raised about ftransform in CG and GLSL and position >> invariance was primarily an issue when using fixed function and >> shaders together, and it was indeed a very common problem. This is >> also why the "position_invariant" option was added to the >> ARB_vertex_program assembly extension. Examples of this occurring in >> shader-only applications are much more difficult to come by. >> >> Ideally, an example program which does exhibit invariance issues in >> webgl (or GLSL or GLSL ES) would be available to demonstrate that >> this is actually a real problem for webgl. If that existed, we could >> verify whether or not ANGLE on D3D9 has such problems, or if it just >> works. >> >> Steve Baker: do you have any such examples, or would you be able to >> put one together which demonstrates this problem? >> >> We are continuing to investigate the guarantees provided by D3D in >> this area and a concrete test case showing such issues would be >> invaluable for this. >> >> Thanks, >> Daniel ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Wed Jul 14 00:02:24 2010 From: vla...@ (Vladimir Vukicevic) Date: Wed, 14 Jul 2010 00:02:24 -0700 (PDT) Subject: [Public WebGL] enabling POINT_SMOOTH symbol? In-Reply-To: Message-ID: <1021614083.274.1279090944567.JavaMail.root@cm-mail03.mozilla.org> Indeed, Firefox/Minefield trunk should already be rejecting such enums -- so switching to point sprites is the best thing to do here. Won't help with floating point textures, though. (Sorry Dan!) 
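[Editor's note: the point-sprite replacement Vlad suggests amounts to computing a circular coverage value per fragment from gl_PointCoord instead of relying on GL_POINT_SMOOTH. The fragment-shader math reduces to the little function below, sketched in plain JavaScript so it can be checked; the function name and feather parameter are illustrative, not from the thread.]

```javascript
// Smooth-point coverage: gl_PointCoord runs from (0,0) to (1,1)
// across a point sprite. Fade alpha to 0 approaching the inscribed
// circle's edge; a GLSL ES fragment shader would use smoothstep()
// on the same distance and multiply gl_FragColor.a by the result.
function pointCoverage(s, t, featherWidth) {
  const dx = s - 0.5, dy = t - 0.5;
  const dist = Math.sqrt(dx * dx + dy * dy); // distance from sprite centre
  const edge = 0.5;                          // circle radius in sprite space
  // Linear fade: 1 well inside the circle, 0 at (and beyond) the edge.
  const a = (edge - dist) / featherWidth;
  return Math.min(1, Math.max(0, a));
}

console.log(pointCoverage(0.5, 0.5, 0.1)); // 1 (sprite centre: opaque)
console.log(pointCoverage(0.5, 1.0, 0.1)); // 0 (sprite edge: transparent)
```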
- Vlad ----- Original Message ----- > Once we bring the implementation to full WebGL spec conformance, any > illegal enums (i.e., those desktop GL enums that are not supported by > GLES2) will generate an error instead of passing them down to the > underlying desktop GL drivers. > > - mo > > > On Tue, Jul 13, 2010 at 7:47 PM, Dan Lecocq < dan.lecocq...@ > > wrote: > > > Hi Andor, > > > As far as I know, most/all WebGL functions are just wrappers giving > you access to the hardware, and some features may or may not be > advertised (as in, constants not defined). If you're sure that they > are available on a particular graphics card, you can probably just > plug in the constant in place of the symbol. > > > So, instead of trying to get gl.POINT_SMOOTH, you could supply 0x0B10, > which seems to be the constant GL_POINT_SMOOTH is set to. This sort of > approach worked for me with floating-point textures in WebGL, but with > points, I'm not sure that points are supported in the OpenGL ES 2.0 > spec. Again, though, you might have luck by using the constant for > GL_POINTS when calling gl.drawElements. > > > Cheers, > Dan > > > > > > On Tue, Jul 13, 2010 at 10:34 PM, Andor Salga < > asalga...@ > wrote: > > > Hi, > I'm working on Processing.js, a JavaScript port of Processing. One > feature Processing has which we'd like to support is point smoothing. > Is there any chance we could get the POINT_SMOOTH symbol into the > specification? Is there any leeway for adding non-OpenGL ES 2.0 > symbols?
> > Thank you, > Andor ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Wed Jul 14 09:06:04 2010 From: cma...@ (Chris Marrin) Date: Wed, 14 Jul 2010 09:06:04 -0700 Subject: [Public WebGL] Re: WebGL spec modifications for D3D In-Reply-To: <32619927-DD61-442F-9B82-E6D053412F00@transgaming.com> References: <4C26988F.6000506@sjbaker.org> <4C3683D1.7060605@hicorp.co.jp> <4C37D3B7.2070106@sjbaker.org> <32619927-DD61-442F-9B82-E6D053412F00@transgaming.com> Message-ID: On Jul 13, 2010, at 12:29 PM, Daniel Koch wrote: > Hi folks, > > The problem is that there is nothing like the "invariant" keyword in D3D9 HLSL (or even D3D10 HLSL for that matter) that I am aware of. > In practise, there must be some form of invariance guaranteed by d3d9 (especially for position), since I know of many games which use multi-pass rendering algorithms which work just fine. The difficulty lies in figuring out exactly what is guaranteed by D3D9, since we've been unable to find any sort of public documentation or discussion of these issues. However, even if there is position invariance, this does not provide a mechanism to toggle invariance on and off on a per-variable basis as is required in GLSL. I think we certainly need the feature of invariance. Perhaps D3D always creates invariant shaders (it never does optimizations that would break invariance)? If so, then I suggest we keep the keyword, use it normally in OpenGL implementations (where supported) and ignore it in HLSL. Of course, I haven't investigated the issue, so I'm not sure if this is possible. But I believe we need to: a) Make it possible to guarantee an invariant shader b) Make it possible to enable and disable invariance in implementations that support it. 
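[Editor's note: GLSL ES 1.00 already exposes both of the controls listed in (a) and (b), via a pragma and a per-variable qualifier. A minimal vertex-shader sketch; identifiers are illustrative.]

```glsl
// (a) Blanket form - make ALL outputs invariant:
//       #pragma STDGL invariant(all)
// (b) Per-variable form - only gl_Position must match bit-for-bit
//     between, say, the depth pass and the beauty pass:
invariant gl_Position;

attribute vec3 aPosition;
uniform mat4 uModelView;
uniform mat4 uProjection;

void main() {
    // Parenthesized to make the intended association explicit.
    gl_Position = uProjection * (uModelView * vec4(aPosition, 1.0));
}
```

The open question in the thread is only what ANGLE can map these onto in D3D9, not whether the source-level syntax exists.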
----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From gma...@ Wed Jul 14 11:00:12 2010 From: gma...@ (Gregg Tavares (wrk)) Date: Wed, 14 Jul 2010 11:00:12 -0700 Subject: [Public WebGL] enabling POINT_SMOOTH symbol? In-Reply-To: References: Message-ID: On Tue, Jul 13, 2010 at 7:47 PM, Dan Lecocq wrote: > Hi Andor, > > As far as I know, most/all WebGL functions are just wrappers giving you > access to the hardware, and some features may or may not be advertised (as > in, constants not defined). If you're sure that they are available on a > particular graphics card, you can probably just plug in the constant in > place of the symbol. > > So, instead of trying to get gl.POINT_SMOOTH, you could supply 0x0B10, > which seems to be the constant GL_POINT_SMOOTH is set to. This sort of > approach worked for me with floating-point textures in WebGL, but with > points, I'm not sure that points are supported in the OpenGL ES 2.0 spec. > Again, though, you might have luck by using the constant for GL_POINTS when > calling gl.drawElements. > Sorry Dan, this won't work on a WebGL-compliant implementation, which all browsers will soon be. On a positive note, for the particular case of floating-point textures, I would guess that an official WebGL extension to support them will be near the top of the list once WebGL 1.0 ships. > > Cheers, > Dan > > On Tue, Jul 13, 2010 at 10:34 PM, Andor Salga wrote: > >> Hi, >> I'm working on Processing.js, a JavaScript port of Processing. One >> feature Processing has which we'd like to support is point smoothing. Is >> there any chance we could get the POINT_SMOOTH symbol into the >> specification? Is there any leeway for adding non-OpenGL ES 2.0 symbols? >> >> Thank you, >> Andor > > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dan...@ Wed Jul 14 11:28:33 2010 From: dan...@ (Daniel Koch) Date: Wed, 14 Jul 2010 14:28:33 -0400 Subject: [Public WebGL] Re: WebGL spec modifications for D3D In-Reply-To: <4C3D65CB.6080100@sjbaker.org> References: <4C26988F.6000506@sjbaker.org> <4C3683D1.7060605@hicorp.co.jp> <4C37D3B7.2070106@sjbaker.org> <32619927-DD61-442F-9B82-E6D053412F00@transgaming.com> <4C3CD619.8050407@mechnicality.com> <4C3D65CB.6080100@sjbaker.org> Message-ID: <5A0CDD72-BA72-4B99-94A0-B9671E5C3364@transgaming.com> On 2010-07-14, at 3:22 AM, Steve Baker wrote: > Sorry, no. I don't have an example of the problem easily to hand - I've > changed jobs twice since the last time this happened to me - and it > presumably hasn't happened since then because I've been using ftransform. > > It's really tough to come up with examples (unless, of course your game > is going gold next week and suddenly everything breaks because someone > changed a shader constant someplace). Yes, I know. That's why I was hoping you had something on hand which had caused you problems... > But sadly, having an example that breaks under OpenGL (if you don't use > ftransform or invariant) - and then showing that this same exact example > DOESN'T break under D3D doesn't in any way demonstrate that some other > example wouldn't break. The way these compilers optimize varies from > platform to platform even for the same hardware vendor...so unless you > have a cast-iron guarantee of invariance - no amount of "Look! This > example doesn't break!" would be convincing proof of a non-problem. That is a valid point as well. > I've gotta say though - Microsoft's "solution" (as quoted below) is just > lame! They are essentially saying "Well, you can solve the problem by > turning off shader optimization"...but that means that you have to have > the entire shader pessimally compiled just to stop a one or two line > problem! Whoever thought THAT was an adequate response totally didn't > understand the scope of the problem!
When this problem strikes - it > affects every single shader the project uses - since they all need to > manage multi-pass correctly. No that's not what they are saying. They said they are "providing selectable, well-defined optimization levels that must also be respected by the driver compiler." I believe this to mean that they have strictly defined the optimizations that the drivers are allowed to do, and the intent is to only allow optimizations which wouldn't cause invariance problems. What exactly those rules are, I don't know, but I would certainly like to! However I am quite certain that they would have examined the problem in great detail, even if the detail didn't manage to make it into that particular article. Daniel > > -- Steve > >> On 07/13/2010 12:29 PM, Daniel Koch wrote: >>> Hi folks, >>> >>> The problem is that there is nothing like the "invariant" keyword in >>> D3D9 HLSL (or even D3D10 HLSL for that matter) that I am aware of. >>> In practise, there must be some form of invariance guaranteed by d3d9 >>> (especially for position), since I know of many games which use >>> multi-pass rendering algorithms which work just fine. The difficulty >>> lies in figuring out exactly what is guaranteed by D3D9, since we've >>> been unable to find any sort of public documentation or discussion of >>> these issues. However, even if there is position invariance, this >>> does not provide a mechanism to toggle invariance on and off on a >>> per-variable basis as is required in GLSL. >>> >>> The closest thing we've been able to find is a SIGGRAPH article on >>> D3D10 from Microsoft. >>> (http://download.microsoft.com/download/f/2/d/f2d5ee2c-b7ba-4cd0-9686-b6508b5479a1/Direct3D10_web.pdf) >>> They briefly allude to this problem in section 5.4: >>> >>> "We considered several solutions for how to specify invariance >>> requirements in the source code itself, for example, requiring >>> that subroutines be compiled in an invariant fashion even if they >>> are inlined. 
However, our search ultimately led us to the more >>> traditional route of providing selectable, well-defined >>> optimization levels that must also be respected by the driver >>> compiler." >>> >>> My assumption is that D3D9 must have had similar requirements. >>> >>> The version of Cg that was open-sourced is quite archaic at this >>> point. The ANGLE compiler is open-sourced >>> (https://code.google.com/p/angleproject/) and is based off the 3DLabs >>> GLSL compiler. It compiles GLSL ES to HLSL9 which is then compiled >>> to D3D9 byte-code using D3DXCompileShader. A future extension to >>> ANGLE could be to generate D3D9 byte-code directly. However, even >>> D3D9 bytecode is still an IL and there is no guarantee that the >>> hardware executes those instructions exactly (and I know there are >>> implementations which do compile/optimize this further). >>> >>> The issue raised about ftransform in CG and GLSL and position >>> invariance was primarily an issue when using fixed function and >>> shaders together, and it was indeed a very common problem. This is >>> also why the "position_invariant" option was added to the >>> ARB_vertex_program assembly extension. Examples of this occurring in >>> shader-only applications are much more difficult to come by. >>> >>> Ideally, an example program which does exhibit invariance issues in >>> webgl (or GLSL or GLSL ES) would be available to demonstrate that >>> this is actually a real problem for webgl. If that existed, we could >>> verify whether or not ANGLE on D3D9 has such problems, or if it just >>> works. >>> >>> Steve Baker: do you have any such examples, or would you be able to >>> put one together which demonstrates this problem? >>> >>> We are continuing to investigate the guarantees provided by D3D in >>> this area and a concrete test case showing such issues would be >>> invaluable for this. >>> >>> Thanks, >>> Daniel > --- Daniel Koch -+- daniel...@ Senior Graphics Architect -+- TransGaming Inc. 
-+- www.transgaming.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From dan...@ Wed Jul 14 11:40:14 2010 From: dan...@ (Daniel Koch) Date: Wed, 14 Jul 2010 14:40:14 -0400 Subject: [Public WebGL] Re: WebGL spec modifications for D3D In-Reply-To: References: <4C26988F.6000506@sjbaker.org> <4C3683D1.7060605@hicorp.co.jp> <4C37D3B7.2070106@sjbaker.org> <32619927-DD61-442F-9B82-E6D053412F00@transgaming.com> Message-ID: Hi folks, We've discussed this further with Google off-list and believe that the best way forward is to leave the invariance language from GLSL ES in WebGL as suggested. OpenGL implementations will be able to use it as normal and ANGLE will rely on the invariance guarantees that are provided by D3D9 and HLSL. We'll continue to investigate what invariance guarantees are available in D3D9 and what we may be able to do should any applications which demonstrate invariance issues on ANGLE arise. Hope this helps, Daniel On 2010-07-14, at 12:06 PM, Chris Marrin wrote: > > On Jul 13, 2010, at 12:29 PM, Daniel Koch wrote: > >> Hi folks, >> >> The problem is that there is nothing like the "invariant" keyword in D3D9 HLSL (or even D3D10 HLSL for that matter) that I am aware of. >> In practise, there must be some form of invariance guaranteed by d3d9 (especially for position), since I know of many games which use multi-pass rendering algorithms which work just fine. The difficulty lies in figuring out exactly what is guaranteed by D3D9, since we've been unable to find any sort of public documentation or discussion of these issues. However, even if there is position invariance, this does not provide a mechanism to toggle invariance on and off on a per-variable basis as is required in GLSL. > > I think we certainly need the feature of invariance. Perhaps D3D always creates invariant shaders (it never does optimizations that would break invariance)?
If so, then I suggest we keep the keyword, use it normally in OpenGL implementations (where supported) and ignore it in HLSL. Of course, I haven't investigated the issue, so I'm not sure if this is possible. But I believe we need to: > > a) Make it possible to guarantee an invariant shader > b) Make it possible to enable and disable invariance in implementations that support it. > > ----- > ~Chris > cmarrin...@ > > > > --- Daniel Koch -+- daniel...@ -+- 1 613.244.1111 x352 Senior Graphics Architect -+- TransGaming Inc. -+- www.transgaming.com 311 O'Connor St., Suite 300, Ottawa, Ontario, Canada, K2P 2G9 This message is a private communication. It also contains information that is privileged or confidential. If you are not the intended recipient, please do not read, copy or use it, and do not disclose it to others. Please notify the sender of the delivery error by replying to this message, and then delete it and any attachments from your system. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bja...@ Wed Jul 14 12:40:29 2010 From: bja...@ (Benoit Jacob) Date: Wed, 14 Jul 2010 12:40:29 -0700 (PDT) Subject: [Public WebGL] Error codes in drawArrays / drawElements In-Reply-To: Message-ID: <1333126035.4973.1279136429567.JavaMail.root@cm-mail03.mozilla.org> ----- Original Message ----- > On Wed, Jun 2, 2010 at 3:22 PM, Benoit Jacob > wrote: > > Hi, > > > > These Khronos tests suggests that in certain circumstances drawArray > > / drawElements give INVALID_OPERATION, as opposed to INVALID_VALUE: > > > > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/fast/draw-arrays-out-of-bounds.html > > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/fast/draw-elements-out-of-bounds.html > > > > But neither the WebGL spec, nor the OpenGL ES documentation, say > > that these functions can give INVALID_OPERATION. 
> > Section 4.1 of the WebGL indicates, but does not currently specify, > this behavior. > > We agreed at the F2F that generating an INVALID_OPERATION error will > be the specified behavior. The spec still needs to catch up but the > tests verify the intended behavior. The spec still needs to be updated: section 5.13.11 still only mentions a few of the various cases where we should specify GL errors. Would you accept a patch doing these changes in the spec? Benoit > > -Ken > > > What's happening? > > > > Benoit > > ----------------------------------------------------------- > > You are currently subscribed to public_webgl...@ > > To unsubscribe, send an email to majordomo...@ with > > the following command in the body of your email: > > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From asa...@ Wed Jul 14 12:52:16 2010 From: asa...@ (Andor Salga) Date: Wed, 14 Jul 2010 15:52:16 -0400 Subject: [Public WebGL] enabling POINT_SMOOTH symbol? In-Reply-To: References: Message-ID: Okay, thanks guys. ----- Original Message ----- From: "Gregg Tavares (wrk)" Date: Wednesday, July 14, 2010 2:03 pm Subject: Re: [Public WebGL] enabling POINT_SMOOTH symbol? To: Dan Lecocq Cc: Andor Salga , public_webgl...@ > > > On Tue, Jul 13, 2010 at 7:47 PM, Dan Lecocq wrote: > Hi Andor, > As far as I know, most/all WebGL functions are just wrappers giving you access to the hardware, and some features may or may not be advertised (as in, constants not defined). If you're sure that they are available on a particular graphics card, you can probably just plug in the constant in place of the symbol. > > So, instead of trying to get gl.POINT_SMOOTH, you could supply 0x0B10, which seems to be the constant GL_POINT_SMOOTH is set to. 
This sort of approach worked for me with floating-point textures in WebGL, but with points, I'm not sure that points are supported in the OpenGL ES 2.0 spec. Again, though, you might have luck by using the constant for GL_POINTS when calling gl.drawElements. > > Sorry Dan, this won't work on a WebGL-compliant implementation, which all browsers will soon be. > > On a positive note, for the particular case of floating-point textures, I would guess that an official WebGL extension to support them will be near the top of the list once WebGL 1.0 ships. > > > Cheers, > Dan > > On Tue, Jul 13, 2010 at 10:34 PM, Andor Salga wrote: > Hi, > I'm working on Processing.js, a JavaScript port of Processing. One feature Processing has which we'd like to support is point smoothing. Is there any chance we could get the POINT_SMOOTH symbol into the specification? Is there any leeway for adding non-OpenGL ES 2.0 symbols? > > Thank you, > Andor > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From kbr...@ Thu Jul 15 13:26:20 2010 From: kbr...@ (Kenneth Russell) Date: Thu, 15 Jul 2010 13:26:20 -0700 Subject: [Public WebGL] Error codes in drawArrays / drawElements In-Reply-To: <1333126035.4973.1279136429567.JavaMail.root@cm-mail03.mozilla.org> References: <1333126035.4973.1279136429567.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Wed, Jul 14, 2010 at 12:40 PM, Benoit Jacob wrote: > ----- Original Message ----- >> On Wed, Jun 2, 2010 at 3:22 PM, Benoit Jacob >> wrote: >> > Hi, >> > >> > These Khronos tests suggests that in certain circumstances drawArray >> > / drawElements give INVALID_OPERATION, as opposed to INVALID_VALUE: >> > >> > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/fast/draw-arrays-out-of-bounds.html >> > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/fast/draw-elements-out-of-bounds.html >> > >> > But neither the WebGL spec, nor the OpenGL ES documentation, say >> > that these functions can give INVALID_OPERATION. >> >> Section 4.1 of the WebGL indicates, but does not currently specify, >> this behavior. >> >> We agreed at the F2F that generating an INVALID_OPERATION error will >> be the specified behavior. The spec still needs to catch up but the >> tests verify the intended behavior. > > The spec still needs to be updated: section 5.13.11 still only mentions a few of the various cases where we should specify GL errors. > > Would you accept a patch doing these changes in the spec? I am sure we would -- could you propose one? Thanks, -Ken > Benoit > >> >> -Ken >> >> > What's happening? 
>> > >> > Benoit >> > ----------------------------------------------------------- >> > You are currently subscribed to public_webgl...@ >> > To unsubscribe, send an email to majordomo...@ with >> > the following command in the body of your email: >> > >> > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Thu Jul 15 13:38:48 2010 From: kbr...@ (Kenneth Russell) Date: Thu, 15 Jul 2010 13:38:48 -0700 Subject: [Public WebGL] Re: WebGL spec modifications for D3D In-Reply-To: References: <4C26988F.6000506@sjbaker.org> <4C3683D1.7060605@hicorp.co.jp> <4C37D3B7.2070106@sjbaker.org> <32619927-DD61-442F-9B82-E6D053412F00@transgaming.com> Message-ID: On Wed, Jul 14, 2010 at 11:40 AM, Daniel Koch wrote: > Hi folks, > We've discussed this further with Google off-list and believe that the best way > forward is to leave the invariance language from GLSL ES in WebGL as > suggested. OpenGL implementations will be able to use it as normal and > ANGLE will rely on the invariance guarantees that are provided by D3D9 and > HLSL. We'll continue to investigate what invariance guarantees are > available in D3D9 and what we may be able to do should any applications > which demonstrate invariance issues on ANGLE arise. Based on these discussions, I've removed the section from the spec which stated that the invariant keyword is ignored in WebGL. If this should have been done differently (for example relaxing the language in the spec rather than removing it entirely) please post. -Ken > Hope this helps, > Daniel > On 2010-07-14, at 12:06 PM, Chris Marrin wrote: > > On Jul 13, 2010, at 12:29 PM, Daniel Koch wrote: > > Hi folks, > > The problem is that there is nothing like the "invariant" keyword in D3D9 > HLSL (or even D3D10 HLSL for that matter) that I am aware of. > > In practise, there must be some form of invariance guaranteed by d3d9 > (especially for position), since I know of many games which use multi-pass > rendering algorithms which work just fine. The difficulty lies in figuring > out exactly what is guaranteed by D3D9, since we've been unable to find any > sort of public documentation or discussion of these issues. However, even > if there is position invariance, this does not provide a mechanism to toggle > invariance on and off on a per-variable basis as is required in GLSL. > > I think we certainly need the feature of invariance. Perhaps D3D always > creates invariant shaders (it never does optimizations that would break > invariance)? If so, then I suggest we keep the keyword, use it normally in > OpenGL implementations (where supported) and ignore it in HLSL. Of course, I > haven't investigated the issue, so I'm not sure if this is possible. But I > believe we need to: > > a) Make it possible to guarantee an invariant shader > b) Make it possible to enable and disable invariance in implementations that > support it. > > ----- > ~Chris > cmarrin...@ > > > > --- > Daniel Koch -+- daniel...@ -+- 1 613.244.1111 x352 > Senior Graphics Architect -+- TransGaming Inc. -+- www.transgaming.com > 311 O'Connor St., Suite 300, Ottawa, Ontario, Canada, K2P 2G9 > This message is a private communication. It also contains information that > is privileged or confidential. If you are not the intended recipient, please > do not read, copy or use it, and do not disclose it to others. Please > notify the sender of the delivery error by replying to this message, and > then delete it and any attachments from your system. Thank you.
> ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From bja...@ Fri Jul 16 09:02:00 2010 From: bja...@ (Benoit Jacob) Date: Fri, 16 Jul 2010 09:02:00 -0700 (PDT) Subject: [Public WebGL] Error codes in drawArrays / drawElements In-Reply-To: Message-ID: <286455733.21937.1279296120699.JavaMail.root@cm-mail03.mozilla.org> ----- Original Message ----- > On Wed, Jul 14, 2010 at 12:40 PM, Benoit Jacob > wrote: > > ----- Original Message ----- > >> On Wed, Jun 2, 2010 at 3:22 PM, Benoit Jacob > >> wrote: > >> > Hi, > >> > > >> > These Khronos tests suggests that in certain circumstances > >> > drawArray > >> > / drawElements give INVALID_OPERATION, as opposed to > >> > INVALID_VALUE: > >> > > >> > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/fast/draw-arrays-out-of-bounds.html > >> > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/tests/fast/draw-elements-out-of-bounds.html > >> > > >> > But neither the WebGL spec, nor the OpenGL ES documentation, say > >> > that these functions can give INVALID_OPERATION. > >> > >> Section 4.1 of the WebGL indicates, but does not currently specify, > >> this behavior. > >> > >> We agreed at the F2F that generating an INVALID_OPERATION error > >> will > >> be the specified behavior. The spec still needs to catch up but the > >> tests verify the intended behavior. > > > > The spec still needs to be updated: section 5.13.11 still only > > mentions a few of the various cases where we should specify GL > > errors. > > > > Would you accept a patch doing these changes in the spec? > > I am sure we would -- could you propose one? Actually, I sent this too fast and missed section 6.2 where this stuff is explained (and is easy to find). 
I can only see one thing that seems to be documented neither in the GL ES docs, nor in the WebGL spec: glDrawElements should generate INVALID_OPERATION if no element array buffer is bound. Do we want to document that in the WebGL spec? Also, do we want to say somewhere (perhaps once and for all) in the WebGL spec that WebGL functions should generate INVALID_OPERATION if their normal execution would lead to an integer overflow? Thanks, Benoit > > Thanks, > > -Ken > > > Benoit > > > >> > >> -Ken > >> > >> > What's happening? > >> > > >> > Benoit > >> > ----------------------------------------------------------- > >> > You are currently subscribed to public_webgl...@ > >> > To unsubscribe, send an email to majordomo...@ with > >> > the following command in the body of your email: > >> > > >> > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Mon Jul 26 18:21:17 2010 From: ste...@ (Steve Baker) Date: Mon, 26 Jul 2010 20:21:17 -0500 Subject: [Public WebGL] Horrible Z buffer precision with latest FireFox. Message-ID: <4C4E348D.7070906@sjbaker.org> My son just upgraded his Linux box to Firefox/Minefield 4.0.b3pre 20100723 - and suddenly his Z precision turned to crap. He definitely has a Z buffer - but the precision looks no better than 16 bit... possibly worse! This machine only has a 6800 card - but it's been working perfectly well for months. Any clues as to what happened?
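[Editor's note: a quick way to sanity-check a "looks like 16-bit Z" report is to compare the depth resolution a 16-bit vs. 24-bit buffer gives at a particular eye distance. With a standard perspective projection, window depth is z_w = (f/(f-n)) * (1 - n/z), so one depth-buffer step maps back to roughly lsb / (dz_w/dz) of eye space. The near/far values below are illustrative only.]

```javascript
// Eye-space size of one depth-buffer step at eye distance z, for a
// perspective projection with near plane n and far plane f.
// z_w = (f/(f-n)) * (1 - n/z)  =>  dz_w/dz = (f*n/(f-n)) / z^2.
function depthStep(z, n, f, bits) {
  const lsb = 1 / (2 ** bits - 1);           // one buffer step in [0,1]
  const slope = (f * n / (f - n)) / (z * z); // d(z_w)/d(z) at this depth
  return lsb / slope;                        // eye-space size of that step
}

const n = 0.1, f = 1000;
console.log(depthStep(100, n, f, 16)); // ~1.5 eye-space units at z=100
console.log(depthStep(100, n, f, 24)); // ~0.006 units: about 256x finer
```

So at these (illustrative) near/far settings, dropping from 24 to 16 bits turns sub-centimetre depth resolution into metre-scale Z-fighting, which matches the "no better than 16 bit" symptom.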
-- Steve ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Tue Jul 27 08:24:04 2010 From: ste...@ (stephen white) Date: Wed, 28 Jul 2010 00:54:04 +0930 Subject: [Public WebGL] Earlier versions of OpenGL ES Message-ID: <6362E900-ED78-45E2-B537-C4249837B895@adam.com.au> Is there any prospect of WebGL having access to earlier versions of OpenGL ES, for the range of equipment out there like the iPhone 3G that does not have 2.0? If it were possible to make the API calls, then it could be up to the author to detect and use either shaders or hardware functionality, while still using buffers and vertex lists? Are there any subtle incompatibilities that would need more than just a few extra calls exposed, or is this really a set of APIs that would be separately maintained? -- steve...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Tue Jul 27 12:18:00 2010 From: cma...@ (Chris Marrin) Date: Tue, 27 Jul 2010 12:18:00 -0700 Subject: [Public WebGL] Earlier versions of OpenGL ES In-Reply-To: <6362E900-ED78-45E2-B537-C4249837B895@adam.com.au> References: <6362E900-ED78-45E2-B537-C4249837B895@adam.com.au> Message-ID: <0FB141B2-827F-4158-AE1E-FD64C4A85BD6@apple.com> On Jul 27, 2010, at 8:24 AM, stephen white wrote: > Is there any prospect of WebGL having access to earlier versions of OpenGL ES, for the range of equipment out there like the iPhone 3G that does not have 2.0? This is an insidious plan by Apple to force people to upgrade their iPhones :-) > > If it were possible to make the API calls, then it could be up to the author to detect and use either shaders or hardware functionality, while still using buffers and vertex lists?
We discussed this and decided against it. The argument is that OpenGL ES 2.0 is the future, so you're likely to see fewer and fewer ES 1.1 devices as time goes on. By the time we get the spec finished, implemented and in the hands of a significant number of people, the number of devices that support ES 1.1 will be a small minority.

This is borne out using the iPhone as an example. We've sold somewhere around 14 million iPhone and iPhone 3G devices. It's likely that some of these are no longer in use because their owners have upgraded to one of the newer models. We've already sold at least 25 million iPhone 3GS and iPhone 4 devices (please don't use these as any sort of official figures, they are all gross estimates derived from Wikipedia). In another year (which is the minimum time needed for any kind of significant deployment) I doubt if more than 10% of the iPhones in the field will not be ES 2.0 capable.

-----
~Chris
cmarrin...@

From vla...@ Tue Jul 27 12:28:00 2010
From: vla...@ (Vladimir Vukicevic)
Date: Tue, 27 Jul 2010 12:28:00 -0700 (PDT)
Subject: [Public WebGL] Heads Up: WebGL API changes coming soon
In-Reply-To: <1570646726.103730.1280174291068.JavaMail.root@cm-mail03.mozilla.org>
Message-ID: <1974218888.112941.1280258880553.JavaMail.root@cm-mail03.mozilla.org>

Hi all,

There are two large changes coming to the public WebGL implementations fairly soon. They involve enabling mandatory GLSL ES shader validation, and a change to the texImage2D API that takes a DOM object for data. The bad news is that these changes will likely break all existing WebGL content. The good news is that you can test code with these changes today, and that it's possible to write code such that it will work both before and after the changes.
== GLSL ES Shader Validation ==

The first change is mandatory GLSL ES shader validation. Thanks to Google & TransGaming's efforts, the ANGLE project now implements a full GLSL ES shader parser, and performs the necessary translation to desktop GLSL. The three public implementations are all using the same shader validator, and it's already been integrated in such a way that it can be enabled for testing. To do so:

In Minefield/Firefox:
- open about:config
- change "webgl.shader_validator" from false to true

In Chromium:
- pass --enable-glsl-translator on the command line

In WebKit:
- instructions coming soon :)

The biggest impact of this change is that fragment shaders will have a mandatory precision specifier, as per the GLSL ES spec. However, the precision specifier is invalid in desktop GLSL without a #version pragma. So, a fix that will enable fragment shaders to work both with and without validation is to place:

#ifdef GL_ES
precision highp float;
#endif

at the start of every fragment shader. The GL_ES define is not present on desktop, so that block is ignored; however it is present when running under validation, because the translator implements GLSL ES. Note that the precision qualifiers will have no effect on the desktop (I believe they're just ignored by the translator), but may have an impact on mobile. Note that the #ifdefs should be considered temporary here, as the correct/valid shader must include a precision qualifier. If you wish you can also explicitly specify qualifiers on every float variable.

Another common problem is implicit type casting, as no such feature exists in GLSL ES. Most commonly, floating point constants must be specified as floating point. For example, the following will fail:

float a = 1.0 * 2;
vec3 v2 = v * 2;

The 2 must be written as "2.0" or float(2). The shader validator will also enforce additional spec requirements that are sometimes ignored or relaxed by drivers.
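A small JavaScript sketch of the workaround above; the helper name portableFragmentShader is hypothetical, not part of any WebGL API:

```javascript
// Prepend the GL_ES-guarded precision header so a fragment shader
// compiles both with and without the new validator. GL_ES is not
// defined by desktop GLSL compilers, so the block is skipped there;
// the GLSL ES translator defines it, so the precision line takes effect.
const PRECISION_HEADER = "#ifdef GL_ES\nprecision highp float;\n#endif\n";

function portableFragmentShader(source) {
  // Skip shaders that already declare their own default precision.
  if (/^\s*precision\s/m.test(source)) return source;
  return PRECISION_HEADER + source;
}

// Note the float-typed constants ("2.0", not "2") -- GLSL ES has no
// implicit int-to-float conversion.
const fs = portableFragmentShader(
  "void main() { gl_FragColor = vec4(0.5, 0.5, 0.5, 1.0) * 2.0; }"
);
```

The string would then be passed to shaderSource/compileShader as usual.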
The shader compilation log should provide the necessary information to fix these. Should you run into GLSL ES programs that you believe are correct but that the shader validator is either rejecting or translating incorrectly, please report these!

== Obsolete TexImage2D API Removal ==

The second change is the removal of the now-obsolete form of texImage2D calls. The WebGL spec originally had a form of texImage2D that looked like this (for HTMLImageElement and other DOM elements):

texImage2D(target, level, HTMLImageElement, [optional] flipY, [optional] premultiplyAlpha)

This caused a number of problems once we looked at specifying what should happen when going from HTML image data to a texture given various source formats. The new form of this call looks like the standard buffer-taking texImage2D call:

texImage2D(target, level, internalformat, format, type, HTMLImageElement)

As described in the spec, the opaque image data provided by the DOM object will first be internally converted into the given format and type before being uploaded to GL. Additionally, two new PixelStore parameters replace the previous optional flipY and premultiplyAlpha parameters:

UNPACK_FLIP_Y_WEBGL
UNPACK_PREMULTIPLY_ALPHA_WEBGL

More details on these can be found in the spec at https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/doc/spec/WebGL-spec.html#PIXEL_STORAGE_PARAMETERS .

Currently, all implementations support both the new form of this API call and the old legacy form. You should be able to change to the new form of the calls and have content continue to work on recent nightly builds of any implementation, with the obsolete form slated for removal shortly. We expect that enabling shader validation and removing the obsolete texImage2D calls will happen simultaneously, and within the next two weeks.
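As a sketch of what the migration looks like in application code (gl and image are assumed to be a live WebGL context and a loaded HTMLImageElement; the function name is hypothetical):

```javascript
// Sketch of a texture upload migrated to the new API. `gl` and `image`
// are assumed to be a live WebGL context and a loaded HTMLImageElement.
function uploadTexture(gl, image) {
  // Obsolete form, shown for comparison only:
  //   gl.texImage2D(gl.TEXTURE_2D, 0, image, true /* flipY */);

  // New form: flipY/premultiplyAlpha move to pixelStorei, and
  // internalformat/format/type are spelled out explicitly.
  gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
}
```

Since the old form was still accepted at the time of writing, code could switch to this shape first and keep working across the removal.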
- Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From bja...@ Tue Jul 27 12:48:01 2010 From: bja...@ (Benoit Jacob) Date: Tue, 27 Jul 2010 12:48:01 -0700 (PDT) Subject: [Public WebGL] Heads Up: WebGL API changes coming soon In-Reply-To: <1974218888.112941.1280258880553.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <319880615.113068.1280260081555.JavaMail.root@cm-mail03.mozilla.org> ----- Original Message ----- > == Obsolete TexImage2D API Removal == > > The second change is the removing of the now-obsolete form of > texImage2D calls. The WebGL spec originally had a form of texImage2D > that looked like this (for HTMLImageElement and other DOM elements): > > texImage2D(target, level, HTMLImageElement, [optional] flipY, > [optional] premultiplyAlpha) > > This caused a number of problems once we looked at specifying what > should happen when going from HTML image data to a texture given > various source formats. The new form of this call looks like the > standard buffer-taking texImage2D call: > > texImage2D(target, level, internalformat, format, type, > HTMLImageElement) Also note that a similar API change applies to readPixels(). In the old API, readPixels() returns the buffer. In the new API, readPixels() takes the buffer as argument. Benoit > > As described in the spec, the opaque image data provided by the DOM > object will first be internally converted into the given format and > type before being uploaded to GL. Additionally, two new PixelStore > parameters replace the previous optional flipY and premultiplyAlpha > parameters: > > UNPACK_FLIP_Y_WEBGL > UNPACK_PREMULTIPLY_ALPHA_WEBGL > > More details on these can be found in the spec at > https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/doc/spec/WebGL-spec.html#PIXEL_STORAGE_PARAMETERS > . 
> > Currently, all implementations support both the new form of this API > call, as well as the old legacy form. You should be able to change to > the new form of the calls and have content continue to work on recent > nightly builds of any implementation, with the obsolete form slated > for removal shortly. > > We expect that enabling shader validation and removing the obsolete > texImage2D calls will happen simultaneously, and within the next two > weeks. > > - Vlad > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From bja...@ Tue Jul 27 12:58:59 2010 From: bja...@ (Benoit Jacob) Date: Tue, 27 Jul 2010 12:58:59 -0700 (PDT) Subject: [Public WebGL] Horrible Z buffer precision with latest FireFox. In-Reply-To: <4C4E348D.7070906@sjbaker.org> Message-ID: <1185980715.113197.1280260739805.JavaMail.root@cm-mail03.mozilla.org> Could you file a bug report about this, under Core / Canvas:WebGL ? It's something we should fix (or work around) :-) If you want to try stuff (since you have access to the machine in question; I have no problem here on my linux/nvidia system) the relevant code is probably CreateOffscreenPixmapContext() in gfx/thebes/GLContextProviderGLX.cpp. You could try setting the requested z-buffer width to 24 bits to see if it makes a difference... It's worth pasting the output of glxinfo too. Here it gives only 24 bit depth buffers, perhaps on your system certain visuals have a 16 bit depth buffer. Benoit ----- Original Message ----- > My son just upgraded his Linux box to Firefox/Minefield to 4.0.b3pre > 20100723 - and suddenly his Z precision turned to crap. 
He definitely > has a Z buffer - but the precision looks no better than 16 > bit...possibly worse! This machine only has a 6800 card - but it's > been working perfectly well for months. > > Any clues as to what happened? > > -- Steve > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From bja...@ Tue Jul 27 13:01:51 2010 From: bja...@ (Benoit Jacob) Date: Tue, 27 Jul 2010 13:01:51 -0700 (PDT) Subject: [Public WebGL] Horrible Z buffer precision with latest FireFox. In-Reply-To: <1185980715.113197.1280260739805.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <1082436324.113207.1280260911762.JavaMail.root@cm-mail03.mozilla.org> ----- Original Message ----- > Could you file a bug report about this, under Core / Canvas:WebGL ? > > It's something we should fix (or work around) :-) > > If you want to try stuff (since you have access to the machine in > question; I have no problem here on my linux/nvidia system) the > relevant code is probably CreateOffscreenPixmapContext() ... (or one of the other similar functions in that file... Vlad would know). Benoit > in gfx/thebes/GLContextProviderGLX.cpp. > > You could try setting the requested z-buffer width to 24 bits to see > if it makes a difference... > > It's worth pasting the output of glxinfo too. Here it gives only 24 > bit depth buffers, perhaps on your system certain visuals have a 16 > bit depth buffer. > > Benoit > > ----- Original Message ----- > > My son just upgraded his Linux box to Firefox/Minefield to 4.0.b3pre > > 20100723 - and suddenly his Z precision turned to crap. 
He > > definitely > > has a Z buffer - but the precision looks no better than 16 > > bit...possibly worse! This machine only has a 6800 card - but it's > > been working perfectly well for months. > > > > Any clues as to what happened? > > > > -- Steve > > > > ----------------------------------------------------------- > > You are currently subscribed to public_webgl...@ > > To unsubscribe, send an email to majordomo...@ with > > the following command in the body of your email: ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Tue Jul 27 13:20:19 2010 From: vla...@ (Vladimir Vukicevic) Date: Tue, 27 Jul 2010 13:20:19 -0700 (PDT) Subject: [Public WebGL] Horrible Z buffer precision with latest FireFox. In-Reply-To: <1082436324.113207.1280260911762.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <1672927188.113369.1280262019104.JavaMail.root@cm-mail03.mozilla.org> Actually, on linux, we always use a FBO -- and I realize that I accidentally checked in always creating the depth renderbuffer with DEPTH_COMPONENT16. Oops, will fix. - Vlad ----- Original Message ----- > ----- Original Message ----- > > Could you file a bug report about this, under Core / Canvas:WebGL ? > > > > It's something we should fix (or work around) :-) > > > > If you want to try stuff (since you have access to the machine in > > question; I have no problem here on my linux/nvidia system) the > > relevant code is probably CreateOffscreenPixmapContext() > > ... (or one of the other similar functions in that file... Vlad would > know). > > Benoit > > > in gfx/thebes/GLContextProviderGLX.cpp. > > > > You could try setting the requested z-buffer width to 24 bits to see > > if it makes a difference... > > > > It's worth pasting the output of glxinfo too. 
Here it gives only 24 > > bit depth buffers, perhaps on your system certain visuals have a 16 > > bit depth buffer. > > > > Benoit > > > > ----- Original Message ----- > > > My son just upgraded his Linux box to Firefox/Minefield to > > > 4.0.b3pre > > > 20100723 - and suddenly his Z precision turned to crap. He > > > definitely > > > has a Z buffer - but the precision looks no better than 16 > > > bit...possibly worse! This machine only has a 6800 card - but it's > > > been working perfectly well for months. > > > > > > Any clues as to what happened? > > > > > > -- Steve > > > > > > ----------------------------------------------------------- > > > You are currently subscribed to public_webgl...@ > > > To unsubscribe, send an email to majordomo...@ with > > > the following command in the body of your email: > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Tue Jul 27 18:15:37 2010 From: ste...@ (Steve Baker) Date: Tue, 27 Jul 2010 20:15:37 -0500 Subject: [Public WebGL] Horrible Z buffer precision with latest FireFox. In-Reply-To: <1672927188.113369.1280262019104.JavaMail.root@cm-mail03.mozilla.org> References: <1672927188.113369.1280262019104.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <4C4F84B9.7060009@sjbaker.org> Aha! That would do it! Many thanks for such a quick turnaround. -- Steve Vladimir Vukicevic wrote: > Actually, on linux, we always use a FBO -- and I realize that I accidentally checked in always creating the depth renderbuffer with DEPTH_COMPONENT16. Oops, will fix. 
> > - Vlad > > ----- Original Message ----- > >> ----- Original Message ----- >> >>> Could you file a bug report about this, under Core / Canvas:WebGL ? >>> >>> It's something we should fix (or work around) :-) >>> >>> If you want to try stuff (since you have access to the machine in >>> question; I have no problem here on my linux/nvidia system) the >>> relevant code is probably CreateOffscreenPixmapContext() >>> >> ... (or one of the other similar functions in that file... Vlad would >> know). >> >> Benoit >> >> >>> in gfx/thebes/GLContextProviderGLX.cpp. >>> >>> You could try setting the requested z-buffer width to 24 bits to see >>> if it makes a difference... >>> >>> It's worth pasting the output of glxinfo too. Here it gives only 24 >>> bit depth buffers, perhaps on your system certain visuals have a 16 >>> bit depth buffer. >>> >>> Benoit >>> >>> ----- Original Message ----- >>> >>>> My son just upgraded his Linux box to Firefox/Minefield to >>>> 4.0.b3pre >>>> 20100723 - and suddenly his Z precision turned to crap. He >>>> definitely >>>> has a Z buffer - but the precision looks no better than 16 >>>> bit...possibly worse! This machine only has a 6800 card - but it's >>>> been working perfectly well for months. >>>> >>>> Any clues as to what happened? 
>>>> >>>> -- Steve >>>> >>>> ----------------------------------------------------------- >>>> You are currently subscribed to public_webgl...@ >>>> To unsubscribe, send an email to majordomo...@ with >>>> the following command in the body of your email: >>>> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From wan...@ Wed Jul 28 04:30:31 2010 From: wan...@ (=?ISO-8859-1?Q?Cau=EA_Waneck?=) Date: Wed, 28 Jul 2010 08:30:31 -0300 Subject: [Public WebGL] Earlier versions of OpenGL ES In-Reply-To: <0FB141B2-827F-4158-AE1E-FD64C4A85BD6@apple.com> References: <6362E900-ED78-45E2-B537-C4249837B895@adam.com.au> <0FB141B2-827F-4158-AE1E-FD64C4A85BD6@apple.com> Message-ID: We have also to remind that iphones aren't the only mobile devices out there. And there are lots of them that still are OpenGL ES powered. I think in order to be more universal, exposing the opengl es pipeline for those devices would be a great move. 2010/7/27 Chris Marrin > > On Jul 27, 2010, at 8:24 AM, stephen white wrote: > > > Is there any prospect of WebGL having access to earlier versions of > OpenGL ES, for the range of equipment out there like the iPhone 3G that does > not have 2.0? 
> > This is in insidious plan by Apple to force people to upgrade their iPhones > :-) > > > > > If it were possible to make the API calls, then it could be up to the > author to detect and use either shaders or hardware functionality, while > still using buffers and vertex lists? > > We discussed this and decided against it. The argument is that OpenGL ES > 2.0 is the future, so you're likely to see fewer and fewer ES 1.1 devices as > time goes on. By the time we get the spec finished, implemented and in the > hands of a significant number of people, the number of devices that support > ES 1.1 will be a small minority. > > This is borne out using the iPhone as an example. We've sold somewhere > around 14 million iPhone and iPhone 3G devices. It's likely that some of > these are no longer in use because their owners have upgraded to one of the > newer models. We've already sold at least 25 million iPhone 3Gs and iPhone 4 > devices (please don't use these as any sort of official figures, they are > all gross estimates derived from Wikipedia). In another year (which is the > minimum time needed for any kind of significant deployment) I doubt if more > than 10% of the iPhones in the field will not be ES 2.0 capable. > > ----- > ~Chris > cmarrin...@ > > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cma...@ Wed Jul 28 06:17:20 2010 From: cma...@ (Chris Marrin) Date: Wed, 28 Jul 2010 06:17:20 -0700 Subject: [Public WebGL] Earlier versions of OpenGL ES In-Reply-To: References: <6362E900-ED78-45E2-B537-C4249837B895@adam.com.au> <0FB141B2-827F-4158-AE1E-FD64C4A85BD6@apple.com> Message-ID: On Jul 28, 2010, at 4:30 AM, Cau? 
Waneck wrote: > We have also to remind that iphones aren't the only mobile devices out there. And there are lots of them that still are OpenGL ES powered. I think in order to be more universal, exposing the opengl es pipeline for those devices would be a great move. But I think the same rules apply there. I don't believe in the near future we will see many mobile devices that will support OpenGL ES 1.1 and not OpenGL ES 2.0. This is based on information from chip vendors. I'm sure there will be cases of such devices, but like the example of iPhone and iPhone 3g, I believe these will be old designs and so their numbers are unlikely to grow and very likely to shrink. There is some desktop hardware in the same boat. Older hardware only supports fixed function pipelines and some more recent hardware doesn't support things like FBO's, which are required by WebGL. We leave this hardware in the dust as well. In some cases WebKit will support some of this hardware with a software renderer, which will work but will not perform very well. But like with the mobile devices, you can expect such hardware to become more and more rare as time goes on. With all that said, I don't think there is anything stopping a group from writing a spec based on OpenGL ES 1.1. But I think such a spec would be separate, just like the 2D canvas is separate from WebGL. Another motivation for not supporting ES 1.1 in WebGL was to avoid making the spec too cluttered with non ES 2.0 features. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Wed Jul 28 06:18:49 2010 From: cma...@ (Chris Marrin) Date: Wed, 28 Jul 2010 06:18:49 -0700 Subject: [Public WebGL] Horrible Z buffer precision with latest FireFox. 
In-Reply-To: <1185980715.113197.1280260739805.JavaMail.root@cm-mail03.mozilla.org> References: <1185980715.113197.1280260739805.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <5E0130F2-F20B-4C0A-8B58-240E00345255@apple.com> On Jul 27, 2010, at 12:58 PM, Benoit Jacob wrote: > Could you file a bug report about this, under Core / Canvas:WebGL ? > > It's something we should fix (or work around) :-) > > If you want to try stuff (since you have access to the machine in question; I have no problem here on my linux/nvidia system) the relevant code is probably CreateOffscreenPixmapContext() in gfx/thebes/GLContextProviderGLX.cpp. > > You could try setting the requested z-buffer width to 24 bits to see if it makes a difference... > > It's worth pasting the output of glxinfo too. Here it gives only 24 bit depth buffers, perhaps on your system certain visuals have a 16 bit depth buffer. But also realize that WebGL only mandates support for a 16 bit depth buffer, so you'll ultimately have to deal with these rendering issues. ----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Wed Jul 28 11:21:46 2010 From: vla...@ (Vladimir Vukicevic) Date: Wed, 28 Jul 2010 11:21:46 -0700 (PDT) Subject: [Public WebGL] Earlier versions of OpenGL ES In-Reply-To: Message-ID: <527171403.121012.1280341306264.JavaMail.root@cm-mail03.mozilla.org> ----- Original Message ----- > On Jul 28, 2010, at 4:30 AM, Cau? Waneck wrote: > > > We have also to remind that iphones aren't the only mobile devices > > out there. And there are lots of them that still are OpenGL ES > > powered. I think in order to be more universal, exposing the opengl > > es pipeline for those devices would be a great move. > > But I think the same rules apply there. 
I don't believe in the near > future we will see many mobile devices that will support OpenGL ES 1.1 > and not OpenGL ES 2.0. This is based on information from chip vendors. > I'm sure there will be cases of such devices, but like the example of > iPhone and iPhone 3g, I believe these will be old designs and so their > numbers are unlikely to grow and very likely to shrink. It's also not enough for a mobile device to just have an OpenGL accelerator; the CPU portion really has to be up to speed as well, in order to both support JavaScript as well as the "full web" in the underlying web browser. Most devices that only support OpenGL ES 1.1 tend to have older and slower ARM cores. - Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From ste...@ Thu Jul 29 06:31:31 2010 From: ste...@ (Steve Baker) Date: Thu, 29 Jul 2010 08:31:31 -0500 Subject: [Public WebGL] Earlier versions of OpenGL ES In-Reply-To: <527171403.121012.1280341306264.JavaMail.root@cm-mail03.mozilla.org> References: <527171403.121012.1280341306264.JavaMail.root@cm-mail03.mozilla.org> Message-ID: <4C5182B3.7070401@sjbaker.org> The problem is this: * If you make the specification too hard to implement (and supporting all of that old fixed function stuff certainly does that) - then fewer modern devices will support it and WebGL will end up in fewer people's hands - not more. 
* If you try to avoid that and make it so that the standard allows the drivers to support EITHER fixed function OR shaders - and not require both at the same time - then the application authors have to support clunky old fixed function rendering AND sexxy shader-based stuff - even for the simplest possible applications of the technology...then the cost of writing WebGL applications will go WAY up - and the compromises to make applications work both ways will make the quality go down (way down, actually). Fewer application writers will choose to put their software out using it - versus simply shipping regular applications, and WebGL will fail. * Sure, WebGL doesn't work on low-end and older cellphones - but it doesn't run on older desktops - or Babbage difference engines or abacusses either. You have to draw a line somewhere. * If we're engineering a standard for the future, one that'll still be around for as long as (say) Flash - then looking back at this from two years into the future when all of those old phones fall off of the lock-in contracts from the telco's and someone can pick up an ES2.0 phone for $20 (that's what mine cost me) - we're going to be laughing at the idea that OpenGL ES 1.x was ever supported. -- Steve Vladimir Vukicevic wrote: > ----- Original Message ----- > >> On Jul 28, 2010, at 4:30 AM, Cau? Waneck wrote: >> >> >>> We have also to remind that iphones aren't the only mobile devices >>> out there. And there are lots of them that still are OpenGL ES >>> powered. I think in order to be more universal, exposing the opengl >>> es pipeline for those devices would be a great move. >>> >> But I think the same rules apply there. I don't believe in the near >> future we will see many mobile devices that will support OpenGL ES 1.1 >> and not OpenGL ES 2.0. This is based on information from chip vendors. 
>> I'm sure there will be cases of such devices, but like the example of >> iPhone and iPhone 3g, I believe these will be old designs and so their >> numbers are unlikely to grow and very likely to shrink. >> > > It's also not enough for a mobile device to just have an OpenGL accelerator; the CPU portion really has to be up to speed as well, in order to both support JavaScript as well as the "full web" in the underlying web browser. Most devices that only support OpenGL ES 1.1 tend to have older and slower ARM cores. > > - Vlad > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From wan...@ Thu Jul 29 05:16:26 2010 From: wan...@ (=?ISO-8859-1?Q?Cau=EA_Waneck?=) Date: Thu, 29 Jul 2010 09:16:26 -0300 Subject: [Public WebGL] Earlier versions of OpenGL ES In-Reply-To: <4C5182B3.7070401@sjbaker.org> References: <527171403.121012.1280341306264.JavaMail.root@cm-mail03.mozilla.org> <4C5182B3.7070401@sjbaker.org> Message-ID: oh well, I have to say I've been convinced : ) 2010/7/29 Steve Baker > The problem is this: > > * If you make the specification too hard to implement (and supporting > all of that old fixed function stuff certainly does that) - then fewer > modern devices will support it and WebGL will end up in fewer people's > hands - not more. 
> > * If you try to avoid that and make it so that the standard allows the > drivers to support EITHER fixed function OR shaders - and not require > both at the same time - then the application authors have to support > clunky old fixed function rendering AND sexxy shader-based stuff - even > for the simplest possible applications of the technology...then the cost > of writing WebGL applications will go WAY up - and the compromises to > make applications work both ways will make the quality go down (way > down, actually). Fewer application writers will choose to put their > software out using it - versus simply shipping regular applications, and > WebGL will fail. > > * Sure, WebGL doesn't work on low-end and older cellphones - but it > doesn't run on older desktops - or Babbage difference engines or > abacusses either. You have to draw a line somewhere. > > * If we're engineering a standard for the future, one that'll still be > around for as long as (say) Flash - then looking back at this from two > years into the future when all of those old phones fall off of the > lock-in contracts from the telco's and someone can pick up an ES2.0 > phone for $20 (that's what mine cost me) - we're going to be laughing at > the idea that OpenGL ES 1.x was ever supported. > > -- Steve > > > Vladimir Vukicevic wrote: > > ----- Original Message ----- > > > >> On Jul 28, 2010, at 4:30 AM, Cau? Waneck wrote: > >> > >> > >>> We have also to remind that iphones aren't the only mobile devices > >>> out there. And there are lots of them that still are OpenGL ES > >>> powered. I think in order to be more universal, exposing the opengl > >>> es pipeline for those devices would be a great move. > >>> > >> But I think the same rules apply there. I don't believe in the near > >> future we will see many mobile devices that will support OpenGL ES 1.1 > >> and not OpenGL ES 2.0. This is based on information from chip vendors. 
> >> I'm sure there will be cases of such devices, but like the example of
> >> iPhone and iPhone 3g, I believe these will be old designs and so their
> >> numbers are unlikely to grow and very likely to shrink.
> >
> > It's also not enough for a mobile device to just have an OpenGL
> > accelerator; the CPU portion really has to be up to speed as well, in order
> > to both support JavaScript as well as the "full web" in the underlying web
> > browser. Most devices that only support OpenGL ES 1.1 tend to have older
> > and slower ARM cores.
> >
> > - Vlad

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From gma...@ Thu Jul 29 15:11:12 2010
From: gma...@ (Gregg Tavares (wrk))
Date: Thu, 29 Jul 2010 15:11:12 -0700
Subject: [Public WebGL] maximum length of identifiers in WebGL GLSL
Message-ID:

Do we want to specify a maximum identifier length for WebGL GLSL?

I didn't see one in the GLSL spec. I was going to write a test with really long identifiers (4meg) to see if I could find some drivers that had problems with them, but it might be better to just require WebGL to enforce some maximum length. 64 chars? 128 chars? 256 chars, which would make shaders less likely to fail on some drivers?

Thoughts?

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From pke...@ Thu Jul 29 15:25:34 2010
From: pke...@ (Phil Keslin)
Date: Thu, 29 Jul 2010 15:25:34 -0700
Subject: [Public WebGL] maximum length of identifiers in WebGL GLSL
In-Reply-To:
References:
Message-ID:

It's not in the GLSL spec because it's an implementation constant. See ACTIVE_UNIFORM_MAX_LENGTH and ACTIVE_ATTRIBUTE_MAX_LENGTH in the OpenGL/WebGL specs.
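Since WebGL's JS binding returns uniform names as strings, the per-program flavor of this query can be sketched by simply walking the active uniforms (gl and program are assumed to be a live context and a successfully linked WebGLProgram; the function name is hypothetical):

```javascript
// Per-program analogue of the C-side ACTIVE_UNIFORM_MAX_LENGTH query:
// WebGL's getActiveUniform returns the name as a JS string, so the
// longest name can be measured directly.
function longestActiveUniformName(gl, program) {
  const count = gl.getProgramParameter(program, gl.ACTIVE_UNIFORMS);
  let longest = 0;
  for (let i = 0; i < count; i++) {
    const info = gl.getActiveUniform(program, i);
    if (info) longest = Math.max(longest, info.name.length);
  }
  return longest; // longest name in *this* program, not a driver-wide limit
}
```

This illustrates the point below: the value depends on what was linked into the program, so it cannot serve as an implementation-wide identifier limit.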
- Phil On Thu, Jul 29, 2010 at 3:11 PM, Gregg Tavares (wrk) wrote: > Do we want to specify a maximum identifier length for WebGL GLSL? > > I didn't see one in the GLSL spec. I was going to write a test with really > long identifiers (4meg) to see if I could find some drivers that had > problems with them but it might be better to just require WebGL to enforce > some maximum length. 64 chars? 128 chars? 256 chars which will make shaders > less likely to fail on some drivers. > > Thoughts? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Thu Jul 29 15:48:25 2010 From: gma...@ (Gregg Tavares (wrk)) Date: Thu, 29 Jul 2010 15:48:25 -0700 Subject: [Public WebGL] maximum length of identifiers in WebGL GLSL In-Reply-To: References: Message-ID: On Thu, Jul 29, 2010 at 3:25 PM, Phil Keslin wrote: > Its not in the GLSL spec because its an implementation constant. See > ACTIVE_UNIFORM_MAX_LENGTH and ACTIVE_ATTRIBUTE_MAX_LENGTH in the > OpenGL/WebGL specs. It's probably staring me in the face but I don't see maximum lengths defined in the OpenGL ES spec either. ACTIVE_UNIFORM_MAX_LENGTH and ACTIVE_ATTRIBUTE_MAX_LENGTH return the max length in the current program, not any implementation max length. > > - Phil > > > On Thu, Jul 29, 2010 at 3:11 PM, Gregg Tavares (wrk) wrote: > >> Do we want to specify a maximum identifier length for WebGL GLSL? >> >> I didn't see one in the GLSL spec. I was going to write a test with >> really long identifiers (4meg) to see if I could find some drivers that had >> problems with them but it might be better to just require WebGL to enforce >> some maximum length. 64 chars? 128 chars? 256 chars which will make shaders >> less likely to fail on some drivers. >> >> Thoughts? >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From ste...@ Thu Jul 29 23:55:24 2010
From: ste...@ (stephen white)
Date: Fri, 30 Jul 2010 16:25:24 +0930
Subject: [Public WebGL] Earlier versions of OpenGL ES
In-Reply-To:
References: <527171403.121012.1280341306264.JavaMail.root@cm-mail03.mozilla.org> <4C5182B3.7070401@sjbaker.org>
Message-ID: <4D01672B-37F4-4563-88D1-D1E1BBDF34EC@adam.com.au>

On 29/07/2010, at 9:46 PM, Cauê Waneck wrote:
> oh well, I have to say I've been convinced : )

Yes, Steve Baker's rather good at this. :)

--
steve...@

-----------------------------------------------------------
You are currently subscribed to public_webgl...@
To unsubscribe, send an email to majordomo...@ with
the following command in the body of your email:

From ced...@ Fri Jul 30 01:51:45 2010
From: ced...@ (Cedric Vivier)
Date: Fri, 30 Jul 2010 16:51:45 +0800
Subject: [Public WebGL] maximum length of identifiers in WebGL GLSL
In-Reply-To:
References:
Message-ID:

On Fri, Jul 30, 2010 at 06:11, Gregg Tavares (wrk) wrote:
> I didn't see one in the GLSL spec. I was going to write a test with really
> long identifiers (4meg) to see if I could find some drivers that had
> problems with them but it might be better to just require WebGL to enforce
> some maximum length. 64 chars? 128 chars? 256 chars which will make shaders
> less likely to fail on some drivers.
> Thoughts?

The results of the test you were going to write would be great for deciding
if, and how big, the limit should be, I guess ;-)

Coincidentally, I was going to post a similar question with regards
to gl_MaxVertexUniformVectors, gl_MaxFragmentUniformVectors and
gl_MaxVaryingVectors; they do not exist on desktop GL, so I guess a GLSL ES
to GLSL validator/translator needs to do something here.

Should the translator replace references to those with a literal value as
in the ES 2.0 spec (ie. 256, 256 and 15 respectively) or use a calculated
value using the desktop's gl_Max**Components* with some formula to find the
**Vector* equivalent?
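A sketch of what the second option could look like (illustrative only: the constant name webgl_MaxVertexUniformVectors is hypothetical, and 1024 stands in for whatever GL_MAX_VERTEX_UNIFORM_COMPONENTS the desktop driver actually reports; a vector is four floats, hence the division by 4):

```glsl
// Hypothetical translator output: the GLSL ES built-in constant
// gl_MaxVertexUniformVectors is replaced by a literal derived from the
// desktop limit queried at translation time, e.g.
// GL_MAX_VERTEX_UNIFORM_COMPONENTS = 1024  ->  1024 / 4 = 256 vectors.
const int webgl_MaxVertexUniformVectors = 256;

void main() {
    // References to gl_MaxVertexUniformVectors in the original shader
    // would be rewritten to use webgl_MaxVertexUniformVectors.
    gl_Position = vec4(0.0);
}
```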
Regards,

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ste...@ Fri Jul 30 06:16:53 2010
From: ste...@ (ste...@)
Date: Fri, 30 Jul 2010 06:16:53 -0700
Subject: [Public WebGL] Two shader validator issues.
Message-ID: <07946c224142d23cb5c5739aa0f3d399.squirrel@webmail.sjbaker.org>

I've been putting some of my more complex GLSL shaders through the new
checking gizmo in yesterday's Firefox nightly build and I'm seeing some
inconsistencies between the Linux 64 bit version and the Windows XP 32 bit
version (those are the only two I've tried).

Two shader constructs are accepted by Linux but not by Windows:

#ifdef GL_ES
precision highp float;
#endif
...
vec3 A ;
vec3 B ;
float C ;
float D ;
...
A = ( C > D ) ? 0.0 : B ;

...the compiler under Windows complains about that last line:

ERROR: 0:50: ':' : wrong operand types no operation ':' exists
that takes a left-hand operand of type 'const mediump float'
and a right operand of type '3-component vector of float'
(or there is no acceptable conversion)
ERROR: 1 compilation errors. No code generated.

I could understand if maybe the new 'no automatic conversions' thing were
refusing to widen '0.0' to 'vec3(0.0,0.0,0.0)' - but the problem doesn't
seem to be that - it looks like it's widening it into a 'mediump' instead
of to the 'highp' that my 'precision' statement demands...which would be a
bug.

The second problem is that (in a fragment shader), I say:

#ifdef GL_ES
precision highp float;
#endif
...
varying vec2 texCoord ;
...
texCoord.x += 0.1 ;

...and under Windows, I get:

ERROR: 0:77: 'assign' : l-value required "texCoord" (can't modify
a varying)
ERROR: 1 compilation errors. No code generated.

...this machine has OpenGL installed on it - but I wonder if the 'no
assignment to varying' restriction comes from some HLSL/Cg restriction in
the underlying implementation?
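For reference, a sketch of the two constructs rewritten in the form a strict GLSL ES validator should accept (fixes inferred from the error messages above, not code from the original mail; the dummy initializers are mine):

```glsl
#ifdef GL_ES
precision highp float;
#endif

varying vec2 texCoord;

void main() {
    vec3 B = vec3(1.0);
    float C = 0.0;
    float D = 1.0;

    // GLSL ES does not implicitly convert a scalar to a vector, so both
    // arms of the ?: operator must already have the same type:
    vec3 A = (C > D) ? vec3(0.0) : B;

    // A varying is read-only in a fragment shader; copy it to a local
    // variable before modifying it:
    vec2 tc = texCoord;
    tc.x += 0.1;

    gl_FragColor = vec4(A, 1.0) * vec4(tc, 1.0, 1.0);
}
```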
In both cases, the shader code compiles and runs just fine under regular C++/OpenGL 3.x and under the Linux FireFox (with or without the validator) - and it also works OK under Windows if the webgl.shader_validator variable is set to false. The Windows XP machine has a fancy high end nVidia GPU - and the various Linux boxes I've run it on have everything from a 6800 up to a really recent nVidia GPU. I know the validator is working under Linux because it caught a couple of: float X ; X = 6 ; ...kinds of thing, and it stopped complaining when I changed '6' to '6.0'. Obviously, neither of these things are serious problems for me - it's mostly just annoying that I can't develop under Linux and expect my shader code to "just work" under Windows. -- Steve ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From hmw...@ Fri Jul 30 08:39:58 2010 From: hmw...@ (Hans-Martin Will) Date: Fri, 30 Jul 2010 08:39:58 -0700 Subject: Fwd: [Public WebGL] maximum length of identifiers in WebGL GLSL References: <4C52984B.8020208@gmail.com> Message-ID: <1DAABCDB-6588-4E6F-B61B-A4536DB079E3@me.com> Forwarding back to list. - HM > From: "Robert J. Simpson" > Date: July 30, 2010 2:15:55 AM PDT > To: Hans-Martin Will > Cc: gman...@, pkeslin...@ > Subject: Re: [Public WebGL] maximum length of identifiers in WebGL GLSL > > There's no limit on the length of identifiers. For ES2 there is a rather poorly defined 'complexity' limit of shaders. I think for WebGL it probably ought to be better defined, along with resource usage. > > Rob. > > >> >> On Jul 29, 2010, at 3:48 PM, Gregg Tavares (wrk) wrote: >> >>> >>> >>> On Thu, Jul 29, 2010 at 3:25 PM, Phil Keslin wrote: >>> Its not in the GLSL spec because its an implementation constant. See ACTIVE_UNIFORM_MAX_LENGTH and ACTIVE_ATTRIBUTE_MAX_LENGTH in the OpenGL/WebGL specs. 
>>> >>> It's probably staring me in the face but I don't see maximum lengths defined in the OpenGL ES spec either. ACTIVE_UNIFORM_MAX_LENGTH and ACTIVE_ATTRIBUTE_MAX_LENGTH return the max length in the current program, not any implementation max length. >>> >>> >>> >>> - Phil >>> >>> >>> On Thu, Jul 29, 2010 at 3:11 PM, Gregg Tavares (wrk) wrote: >>> Do we want to specify a maximum identifier length for WebGL GLSL? >>> >>> I didn't see one in the GLSL spec. I was going to write a test with really long identifiers (4meg) to see if I could find some drivers that had problems with them but it might be better to just require WebGL to enforce some maximum length. 64 chars? 128 chars? 256 chars which will make shaders less likely to fail on some drivers. >>> >>> Thoughts? >>> >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Fri Jul 30 09:25:04 2010 From: gma...@ (Gregg Tavares (wrk)) Date: Fri, 30 Jul 2010 09:25:04 -0700 Subject: [Public WebGL] maximum length of identifiers in WebGL GLSL In-Reply-To: References: Message-ID: On Fri, Jul 30, 2010 at 1:51 AM, Cedric Vivier wrote: > On Fri, Jul 30, 2010 at 06:11, Gregg Tavares (wrk) wrote: > >> I didn't see one in the GLSL spec. I was going to write a test with >> really long identifiers (4meg) to see if I could find some drivers that had >> problems with them but it might be better to just require WebGL to enforce >> some maximum length. 64 chars? 128 chars? 256 chars which will make shaders >> less likely to fail on some drivers. >> Thoughts? >> > > The results of the test you were going to write would be great to decide if > and how big the limit should be I guess ;-) > > Coincidently I was going to post a similar question with regards > to gl_MaxVertexUniformVectors, gl_MaxFragmentUniformVectors and > gl_MaxVaryingVectors ; they do not exist on GL desktop so I guess a GLSL ES > to GLSL validator/translator needs to do something here. 
>
> Should the translator replace references to those with a literal value as
> in ES 2.0 spec (ie. 256, 256 and 15 respectively) or use a calculated value
> using the desktop's gl_Max**Components* with some formula to find the
> **Vector* equivalent ?

The translator already does this and uses values queried from GL:

glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_COMPONENTS,
              &max_fragment_uniform_vectors_);
max_fragment_uniform_vectors_ /= 4;
glGetIntegerv(GL_MAX_VARYING_FLOATS, &max_varying_vectors_);
max_varying_vectors_ /= 4;
glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS,
              &max_vertex_uniform_vectors_);
max_vertex_uniform_vectors_ /= 4;

> Regards,

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From oli...@ Fri Jul 30 10:05:42 2010
From: oli...@ (Oliver Hunt)
Date: Fri, 30 Jul 2010 10:05:42 -0700
Subject: [Public WebGL] Two shader validator issues.
In-Reply-To: <07946c224142d23cb5c5739aa0f3d399.squirrel@webmail.sjbaker.org>
References: <07946c224142d23cb5c5739aa0f3d399.squirrel@webmail.sjbaker.org>
Message-ID: <7608BD02-8841-40AF-96D0-71B949E9C3B1@apple.com>

I believe the majority of current WebGL implementations just throw shaders,
etc. directly at the underlying graphics driver, which exposes all sorts of
inconsistencies between different driver versions, GPUs, and platforms.
Dealing with these problems is ongoing, but I believe that all
implementations will eventually have their own much stricter shader
validators. The hope is that stricter validation (and subsequent shader
rewriting by the WebGL impl) will make it difficult to write shaders that
work on some, but not all, platforms.

--Oliver

On Jul 30, 2010, at 6:16 AM, steve...@ wrote:
> I've been putting some of my more complex GLSL shaders through the new
> checking gizmo in yesterday's Firefox nightly build and I'm seeing some
> inconsistencies between the Linux 64 bit version and the Windows XP 32 bit
> version (those are the only two I've tried).
> > Two shader constructs are accepted by Linux but not by Windows: > > #ifdef GL_ES > precision highp float; > #endif > ... > vec3 A ; > vec3 B ; > float C ; > float D ; > ... > A = ( C > D ) ? 0.0 : B ; > > ...the compiler under Windows complains about that last line: > > ERROR: 0:50: ':' : wrong operand types no operation ':' exists > that takes a left-hand operand of type 'const mediump float' > and a right operand of type '3-component vector of float' > (or there is no acceptable conversion) > ERROR: 1 compilation errors. No code generated. > > I could understand if maybe the new 'no automatic conversions' thing were > refusing to widen '0.0' to 'vec3(0.0,0.0,0.0)' - but the problem doesn't > seem to be that - it looks like it's widening it into a 'mediump' instead > of to the 'highp' that my 'precision' statement demands...which would be a > bug. > > The second problem is that (in a fragment shader), I say: > > #ifdef GL_ES > precision highp float; > #endif > ... > varying vec2 texCoord ; > ... > texCoord.x += 0.1 ; > > ...and under Windows, I get: > > ERROR: 0:77: 'assign' : l-value required "texCoord" (can't modify > a varying) > ERROR: 1 compilation errors. No code generated. > > ...this machine has OpenGL installed on it - but I wonder if the 'no > assignment to varying' restriction comes from some HLSL/Cg restriction in > the underlying implementation? > > In both cases, the shader code compiles and runs just fine under regular > C++/OpenGL 3.x and under the Linux FireFox (with or without the validator) > - and it also works OK under Windows if the webgl.shader_validator > variable is set to false. > > The Windows XP machine has a fancy high end nVidia GPU - and the various > Linux boxes I've run it on have everything from a 6800 up to a really > recent nVidia GPU. > > I know the validator is working under Linux because it caught a couple of: > > float X ; > X = 6 ; > > ...kinds of thing, and it stopped complaining when I changed '6' to '6.0'. 
> > Obviously, neither of these things are serious problems for me - it's > mostly just annoying that I can't develop under Linux and expect my shader > code to "just work" under Windows. > > -- Steve > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From vla...@ Fri Jul 30 10:29:21 2010 From: vla...@ (Vladimir Vukicevic) Date: Fri, 30 Jul 2010 10:29:21 -0700 (PDT) Subject: [Public WebGL] Two shader validator issues. In-Reply-To: <07946c224142d23cb5c5739aa0f3d399.squirrel@webmail.sjbaker.org> Message-ID: <1733343855.139760.1280510960964.JavaMail.root@cm-mail03.mozilla.org> Ah crap, sorry Steve -- I forgot to mention that it's not available on linux 64 at the moment due to a build issue. https://bugzilla.mozilla.org/show_bug.cgi?id=578877 is the tracking bug to get it reenabled once the build config stuff is fixed. The linux 32-bit build should be fine though if you want a way to test on linux for now. The errors you have below are all coming from the ANGLE shader validator, so are likely real issues. - Vlad ----- Original Message ----- > I've been putting some of my more complex GLSL shaders through the new > checking gizmo in yesterday's Firefox nightly build and I'm seeing some > inconsistancies between the Linux 64 bit version and the Windows XP 32 bit > version (those are the only two I've tried). > > Two shader constructs are accepted by Linux but not by Windows: > > #ifdef GL_ES > precision highp float; > #endif > ... > vec3 A ; > vec3 B ; > float C ; > float D ; > ... > A = ( C > D ) ? 
0.0 : B ; > > ...the compiler under Windows complains about that last line: > > ERROR: 0:50: ':' : wrong operand types no operation ':' exists > that takes a left-hand operand of type 'const mediump float' > and a right operand of type '3-component vector of float' > (or there is no acceptable conversion) > ERROR: 1 compilation errors. No code generated. > > I could understand if maybe the new 'no automatic conversions' thing > were > refusing to widen '0.0' to 'vec3(0.0,0.0,0.0)' - but the problem > doesn't > seem to be that - it looks like it's widening it into a 'mediump' > instead > of to the 'highp' that my 'precision' statement demands...which would > be a > bug. > > The second problem is that (in a fragment shader), I say: > > #ifdef GL_ES > precision highp float; > #endif > ... > varying vec2 texCoord ; > ... > texCoord.x += 0.1 ; > > ...and under Windows, I get: > > ERROR: 0:77: 'assign' : l-value required "texCoord" (can't modify > a varying) > ERROR: 1 compilation errors. No code generated. > > ...this machine has OpenGL installed on it - but I wonder if the 'no > assignment to varying' restriction comes from some HLSL/Cg restriction > in > the underlying implementation? > > In both cases, the shader code compiles and runs just fine under > regular > C++/OpenGL 3.x and under the Linux FireFox (with or without the > validator) > - and it also works OK under Windows if the webgl.shader_validator > variable is set to false. > > The Windows XP machine has a fancy high end nVidia GPU - and the > various > Linux boxes I've run it on have everything from a 6800 up to a really > recent nVidia GPU. > > I know the validator is working under Linux because it caught a couple > of: > > float X ; > X = 6 ; > > ...kinds of thing, and it stopped complaining when I changed '6' to > '6.0'. 
> > Obviously, neither of these things are serious problems for me - it's > mostly just annoying that I can't develop under Linux and expect my > shader > code to "just work" under Windows. > > -- Steve > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From alo...@ Fri Jul 30 11:28:41 2010 From: alo...@ (Alok Priyadarshi) Date: Fri, 30 Jul 2010 12:28:41 -0600 Subject: [Public WebGL] Two shader validator issues. In-Reply-To: <1733343855.139760.1280510960964.JavaMail.root@cm-mail03.mozilla.org> References: <07946c224142d23cb5c5739aa0f3d399.squirrel@webmail.sjbaker.org> <1733343855.139760.1280510960964.JavaMail.root@cm-mail03.mozilla.org> Message-ID: FYI I recently fixed a couple of translator issues regarding ternary operators. You may want to update ANGLE version if you are running into those. On Fri, Jul 30, 2010 at 11:29 AM, Vladimir Vukicevic wrote: > Ah crap, sorry Steve -- I forgot to mention that it's not available on linux 64 at the moment due to a build issue. ?https://bugzilla.mozilla.org/show_bug.cgi?id=578877 is the tracking bug to get it reenabled once the build config stuff is fixed. > > The linux 32-bit build should be fine though if you want a way to test on linux for now. ?The errors you have below are all coming from the ANGLE shader validator, so are likely real issues. > > ? 
?- Vlad > > ----- Original Message ----- >> I've been putting some of my more complex GLSL shaders through the new >> checking gizmo in yesterday's Firefox nightly build and I'm seeing some >> inconsistancies between the Linux 64 bit version and the Windows XP 32 bit >> version (those are the only two I've tried). >> >> Two shader constructs are accepted by Linux but not by Windows: >> >> #ifdef GL_ES >> precision highp float; >> #endif >> ... >> vec3 A ; >> vec3 B ; >> float C ; >> float D ; >> ... >> A = ( C > D ) ? 0.0 : B ; >> >> ...the compiler under Windows complains about that last line: >> >> ERROR: 0:50: ':' : wrong operand types no operation ':' exists >> that takes a left-hand operand of type 'const mediump float' >> and a right operand of type '3-component vector of float' >> (or there is no acceptable conversion) >> ERROR: 1 compilation errors. No code generated. >> >> I could understand if maybe the new 'no automatic conversions' thing >> were >> refusing to widen '0.0' to 'vec3(0.0,0.0,0.0)' - but the problem >> doesn't >> seem to be that - it looks like it's widening it into a 'mediump' >> instead >> of to the 'highp' that my 'precision' statement demands...which would >> be a >> bug. >> >> The second problem is that (in a fragment shader), I say: >> >> #ifdef GL_ES >> precision highp float; >> #endif >> ... >> varying vec2 texCoord ; >> ... >> texCoord.x += 0.1 ; >> >> ...and under Windows, I get: >> >> ERROR: 0:77: 'assign' : l-value required "texCoord" (can't modify >> a varying) >> ERROR: 1 compilation errors. No code generated. >> >> ...this machine has OpenGL installed on it - but I wonder if the 'no >> assignment to varying' restriction comes from some HLSL/Cg restriction >> in >> the underlying implementation? 
>> >> In both cases, the shader code compiles and runs just fine under >> regular >> C++/OpenGL 3.x and under the Linux FireFox (with or without the >> validator) >> - and it also works OK under Windows if the webgl.shader_validator >> variable is set to false. >> >> The Windows XP machine has a fancy high end nVidia GPU - and the >> various >> Linux boxes I've run it on have everything from a 6800 up to a really >> recent nVidia GPU. >> >> I know the validator is working under Linux because it caught a couple >> of: >> >> float X ; >> X = 6 ; >> >> ...kinds of thing, and it stopped complaining when I changed '6' to >> '6.0'. >> >> Obviously, neither of these things are serious problems for me - it's >> mostly just annoying that I can't develop under Linux and expect my >> shader >> code to "just work" under Windows. >> >> -- Steve >> >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: