From tim...@ Wed Dec 1 01:30:41 2010
From: tim...@ (Tim Johansson)
Date: Wed, 01 Dec 2010 10:30:41 +0100
Subject: [Public WebGL] about the VENDOR, RENDERER, and VERSION strings
In-Reply-To:
References: <4CF3CC6A.9090908@sjbaker.org> <1472832008.435493.1291047599809.JavaMail.root@cm-mail03.mozilla.org>
Message-ID: <4CF615C1.5080307@opera.com>

On 2010-11-30 05:24, David Sheets wrote:
> >>> For application authors, there is immense value to be had from being
> >>> able to determine which card and drivers the user has - both at run time
> >>> (so the application can work around bugs)
> >> It seems to me that we can realistically aim for good enough WebGL implementations that this shouldn't be needed. I would be very interested in the list of graphics-card-specific things that you need to do on Minefield; I would try to handle these as bugs in our implementation.
> With at least 4 parties (webdev, browser vendor, card driver vendor,
> operating system) involved in providing a seamless 3-D web experience,
> there will always be bugs and incompatibilities. Yes, the WebGL
> implementation is the critical multi-platform abstraction that should
> hide these issues, but limiting the information available to
> application developers is the wrong place to enforce the abstraction,
> for pragmatic reasons. Well-written applications won't use card
> sniffing unless absolutely necessary. Card sniffing has a clear
> portability downside that most web devs are aware of because of user
> agent sniffing.

I really doubt most web developers are aware of the portability issues. With user agent strings, there are many sites that simply do not work in Opera if the user agent says "Opera". The "solution" to this is to fake the user agent string and masquerade as Firefox or IE. Masking as one of those quite often makes the site work perfectly.
I can see why it is useful to many devs to know the renderer strings, but there is a high risk that we'll get into the same problem as with user agent strings and have to add some kind of renderer string spoofing to make content run on less common graphics cards. Then we have gained nothing at all, since you cannot trust the strings.

//Tim

-----------------------------------------------------------
You are currently subscribed to public_webgl...@
To unsubscribe, send an email to majordomo...@ with
the following command in the body of your email:

From gma...@ Wed Dec 1 03:14:30 2010
From: gma...@ (Gregg Tavares (wrk))
Date: Wed, 1 Dec 2010 03:14:30 -0800
Subject: [Public WebGL] about the VENDOR, RENDERER, and VERSION strings
In-Reply-To: <4CF615C1.5080307@opera.com>
References: <4CF3CC6A.9090908@sjbaker.org> <1472832008.435493.1291047599809.JavaMail.root@cm-mail03.mozilla.org> <4CF615C1.5080307@opera.com>
Message-ID:

Why does this have to be in the spec? Why can't it just be an analog of OpenGL's spec, which says the string must start with "WebGL 1.0" and "WebGL GLSL ES 1.0", and otherwise it's up to the browser to decide whether or not to add more info after the required prefix?

From bja...@ Wed Dec 1 04:49:34 2010
From: bja...@ (Benoit Jacob)
Date: Wed, 1 Dec 2010 04:49:34 -0800 (PST)
Subject: [Public WebGL] about the VENDOR, RENDERER, and VERSION strings
In-Reply-To:
Message-ID: <367248566.453872.1291207774515.JavaMail.root@cm-mail03.mozilla.org>

----- Original Message -----
> On Tue, Nov 30, 2010 at 6:51 PM, Benoit Jacob
> wrote:
> >
> > ----- Mail original -----
> >> On 11/30/2010 05:28 PM, Benoit Jacob wrote:
> >> > Fair enough. Do you have a better suggestion? I need a solution that
> >> > preserves the user's privacy. Remember the amount of noise that
> >> > Evercookie made when it was released.
> >> > http://en.wikipedia.org/wiki/Evercookie
> >> > The privacy issue that we're discussing here is potentially worse
> >> > than that, as, given enough bits of unique user info leaked, it
> >> > makes it possible to build a server-side "Evercookie" associated
> >> > with a unique browser/computer id. I am inexperienced in the Web
> >> > world, so would be very happy if someone explained to me how my
> >> > fears are unfounded; otherwise I am very interested in doing what
> >> > I can to limit the possibilities of this happening, and consider
> >> > this a priority over allowing web pages to adjust their rendering
> >> > to the graphics card model faster than we can update browsers.
> >> >
> >> I already gave you a better suggestion!
> >>
> >> Put a checkbox in that gives the user the choice:
> >
> > I'll consider this, but will default to privacy mode.
>
> This is going to significantly reduce the usefulness of WebGL.
> Graphics chips, even on mobile devices, are far more capable than the
> OpenGL ES minimums.

I'm not going to restrict to the OpenGL ES minimums. Rather, I'm going to look for a compromise. There are two different approaches I'm considering:

* try to leak mostly only a one-dimensional parameter. I need to look at which of those MAX_... parameters can be grouped together as a one-dimensional parameter; I understand that not all of them can.
* try to leak mostly information that's strongly correlated with information that can be obtained anyway.

> In the Chromium WebGL implementation I am going to
> continue to push for the maximum amount of functionality, whether it
> be the number of available uniforms and varyings, or the available
> WebGL extensions.

Fine, but this will leak roughly 10 bits of identification information, for the most part not correlated with already-leaked information, so almost entirely adding to the number of bits already leaked.
This can easily make the difference that allows implementing server-side evercookies.

Cheers,
Benoit

> > -Ken
>
> >> 3) Close access to useful VENDOR/RENDERER/VERSION and provide utterly
> >> minimal settings via glGet
> >
> > We'll see about that, I need to examine each of the relevant MAX_...
> > pnames here, see what the possible values are and whether a good
> > compromise can be found.
> >
> >> (and never offer extensions
> >
> > We'll see when there are extensions.
> >
> >> note that this would require dumbing every desktop
> >> computer down to 5/6/5 RGB).
> >
> > The color depth is already leaked to web content anyway. See
> > Panopticlick.eff.org.
> >
> > We just have to agree to disagree on this topic.
> >
> > Cheers,
> > Benoit
> >
> >> Taking the second option may salve your personal conscience - but
> >> realistically it allows 90% as much data leakage as option (1) and does
> >> nothing but make life harder for the application developers. That's a
> >> big win for Adobe and a significant loss for everyone else.
> >>
> >> So it's a choice between (1) and (3)... that's a tough choice - do you
> >> want or care or even understand the need for utter anonymity (half a
> >> billion Facebook users evidently don't)? If so - then you check the
> >> box. Do you want cutting-edge games right there on any computer on the
> >> web? If so - uncheck the box. You absolutely can't have both.
> >>
> >> I don't think you have the right to make that decision for Firefox users
> >> - and I'm pretty sure that if you do, you'll find that the problem will
> >> go away because all of the gamers (and people who want 3D pictures of
> >> stuff they buy online and people who want interactive 3D maps and so
> >> forth) will be using Chrome.
> >> > >> Of course you could go with option (2) - which might make you feel > >> good > >> - but is a laughably poor decision because it inconveniences the > >> good > >> guys and does nothing to slow down the bad guys. > >> > >> So - let's agree we need the checkbox to choose between (1) and (3) > >> - > >> and we can arm-wrestle over how it's set by default - and whether > >> it > >> can > >> be overridden per-website on a white-list or black-list basis. > >> > >> Personally - I think that anyone who REALLY wants to know who you > >> are > >> and who can coerce you into visiting their website can use timing > >> tricks, roundoff-error detection and bug sniffing to gain more bits > >> of > >> information about you than VENDOR/RENDERER/VERSION could ever > >> provide > >> because they can also figure out the clock speed and memory > >> capacity > >> of > >> your GPU. > >> > >> There are so many other information leaks in so many other > >> subsystems > >> that I'm 100% sure that this is a lost cause. eg: > >> > >> * CSS3: Does your computer have such-and-such font installed > >> locally - > >> or does it fetch it from the server? > >> * Basic browser: How many bytes of data can we download - and then > >> redownload without a server hit to sniff out your cache size > >> settings? > >> * Security settings: By probing your security settings I can find > >> out > >> what cookies you accept and deny - so the very act of turning on > >> more > >> privacy settings gives away more of your privacy. > >> > >> I can come up with these kinds of data-gathering tricks about as > >> fast > >> as > >> I can type them in. How many do you think a black hat web expert > >> could > >> find in a month of concentrated effort? > >> > >> It's a lost cause - but as a salve to people's consciences - let's > >> put > >> a > >> checkbox on the security page so that people can get a warm, fuzzy, > >> false sense of security from it. 
> >> -- Steve

From bja...@ Wed Dec 1 05:03:59 2010
From: bja...@ (Benoit Jacob)
Date: Wed, 1 Dec 2010 05:03:59 -0800 (PST)
Subject: [Public WebGL] about the VENDOR, RENDERER, and VERSION strings
In-Reply-To:
Message-ID: <826838580.453903.1291208639374.JavaMail.root@cm-mail03.mozilla.org>

----- Original Message -----
> On Wed, Dec 1, 2010 at 00:52, Benoit Jacob wrote:
> >> Maybe a way to make RENDERER useful while not giving away too many bits
> >> would be to return the hardware maker and model but strip out driver
> >> information?
> >
> > That would be a step in the right direction, but these days GPU
> > manufacturers make many different models.
> > For NVIDIA alone, there are at least 200 device IDs relevant to
> > WebGL (OpenGL 2 hardware).
> >
> > So I expect the RENDERER string to give roughly 9 bits of
> > information, with an uneven distribution --- some models are less
> > common and so their owners would be more exposed.
> Yes, but then that's also the case for any kind of less common
> setups... e.g. people using, say, Opera on Linux are already much more
> exposed to browser-tracking than people using Internet Explorer on
> Windows ;-)

Sure! But this is neither an argument for nor against caring about leaking more info through RENDERER :-) It's orthogonal. People with rare setups were already more exposed than the average, and this will make it worse for them.
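[Editor's note: the point about uneven distributions can be made concrete. The number of identifying bits a RENDERER string leaks about a given user is its self-information, -log2(p), where p is that string's share among users. A minimal sketch; the share numbers below are invented purely for illustration, not real market data:]

```javascript
// Self-information leaked by a RENDERER string: -log2(p), where p is the
// share of users whose card reports that string. The shares below are
// invented purely for illustration.
const shares = {
  "Intel HD Graphics": 0.25,      // very common: leaks few bits
  "GeForce GTX 285": 0.01,        // uncommon gaming card
  "Rare workstation GPU": 0.0001, // rare: identifies almost uniquely
};

function bitsLeaked(renderer) {
  return -Math.log2(shares[renderer]);
}

console.log(bitsLeaked("Intel HD Graphics"));   // 2 bits
console.log(bitsLeaked("GeForce GTX 285"));     // ~6.6 bits
console.log(bitsLeaked("Rare workstation GPU")); // ~13.3 bits
```

This is why the unevenness cuts both ways: common hardware hides its owners, while a rare card by itself can nearly single out a user.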
>
> The way privacy-conscious people work around this is usually to change
> their user-agent string through configuration; this should be possible
> as well for the WebGL RENDERER string, IMHO.

If the RENDERER string is really important to get a good gaming experience on a given video card, then spoofing it will be more painful than it was to spoof the user-agent.

> In general, the RENDERER string without driver version would give very
> minimal bits, considering that the distribution is indeed very uneven
> with a strong bias toward more popular hardware...

The more uneven the distribution, the less info is leaked for users with common hardware, but the more info is leaked for users with rare hardware. So I don't know that the unevenness affects how serious this issue is at all, in either direction. Also, there seems to be a long tail of relatively rare hardware.

> and on mobile devices
> the number of bits is even smaller, considering the smaller number of
> designs and the fact that the GPU can be inferred in other ways (e.g.
> iOS 4 means PowerVR SGX).

OK for mobile devices.

Cheers,
Benoit

From cal...@ Wed Dec 1 05:25:12 2010
From: cal...@ (Mark Callow)
Date: Wed, 01 Dec 2010 22:25:12 +0900
Subject: [Public WebGL] about the VENDOR, RENDERER, and VERSION strings
In-Reply-To: <367248566.453872.1291207774515.JavaMail.root@cm-mail03.mozilla.org>
References: <367248566.453872.1291207774515.JavaMail.root@cm-mail03.mozilla.org>
Message-ID: <4CF64CB8.1050602@hicorp.co.jp>

Regards
-Mark

On 2010/12/01 21:49, Benoit Jacob wrote:
> I'm not going to restrict to the OpenGL ES minimums. Rather, I'm going to look for a compromise. There are two different approaches I'm considering:
> * try to leak mostly only a one-dimensional parameter.
> I need to look at which of those MAX_... parameters can be grouped together as a one-dimensional parameter; I understand that not all of them can.
> * try to leak mostly information that's strongly correlated with information that can be obtained anyway.
>
> >> In the Chromium WebGL implementation I am going to
> >> continue to push for the maximum amount of functionality, whether it
> >> be the number of available uniforms and varyings, or the available
> >> WebGL extensions.
> Fine, but this will leak roughly 10 bits of identification information, for the most part not correlated with already-leaked information, so almost entirely adding to the number of bits already leaked. This can easily make the difference that allows implementing server-side evercookies.

What will leak roughly 10 bits of information? How do you come up with this value?

I suspect that the range of discrete values returned by the MAX queries is actually quite small across implementations. For example, I expect MAX_VIEWPORT_DIMS will be one of 1024, 2048 or 4096 across the many, many different models of graphics cards. Therefore the amount of information leaked will not be so great.

Regards
-Mark

From bja...@ Wed Dec 1 05:40:47 2010
From: bja...@ (Benoit Jacob)
Date: Wed, 1 Dec 2010 05:40:47 -0800 (PST)
Subject: [Public WebGL] about the VENDOR, RENDERER, and VERSION strings
In-Reply-To: <4CF64CB8.1050602@hicorp.co.jp>
Message-ID: <1649208287.454034.1291210847800.JavaMail.root@cm-mail03.mozilla.org>

----- Original Message -----
> What will leak roughly
> 10 bits of information? How do you come up with this value?
See the earlier discussion with Cedric: for NVIDIA desktop/laptop video cards alone, there are at least 200 devices, counting only OpenGL 2 devices that can be used for WebGL. I know this from handling graphics driver crashes in Mozilla; I assembled as complete a list as I could for NVIDIA: https://bug605749.bugzilla.mozilla.org/attachment.cgi?id=484688 That's where I get my number 200 from, and I know that this list is still very incomplete.

There are many ATI cards too; here's a list with 120+ cards, and again I know it's very incomplete: http://developer.amd.com/drivers/pc_vendor_id/Pages/default.aspx

For Intel there are roughly 25 relevant card types (there are more PCI IDs, but many duplicates).

So the total number of RENDERER values is on the order of magnitude of 2^9, and RENDERER alone gives 9 bits of info on platforms where any graphics card is possible (Windows and Linux desktops and laptops). Now if you add the VERSION string, which contains the precise driver version, you get a few more bits: say roughly 4 bits in my experience (many people can never upgrade their OEM drivers). So the total is more like 13 bits, and my 10 bits estimate was very conservative.

Cheers,
Benoit

> I suspect that the range of discrete values returned by the MAX
> queries is actually quite small across implementations. For example I
> expect MAX_VIEWPORT_DIMS will be one of 1024, 2048 or 4096
> across the many, many different models of graphics cards. Therefore
> the amount of information leaked will not be so great.
> Regards
> -Mark

From ste...@ Wed Dec 1 07:19:57 2010
From: ste...@ (ste...@)
Date: Wed, 1 Dec 2010 07:19:57 -0800
Subject: [Public WebGL] about the VENDOR, RENDERER, and VERSION strings
Message-ID: <97b6232525d3774586206a1b95c25b43.squirrel@webmail.sjbaker.org>

So if we think that there are ~10 bits of information leaked through the VENDOR/RENDERER/VERSION strings - the important question is how much of that is also leaked through gl.GetParameter and other easily testable means?

If that number is anywhere close to 10 bits, then simply disabling the VENDOR/RENDERER/VERSION strings just makes life inconvenient for the good guys without buying us anything in terms of privacy from the bad guys. The only way to fix THAT would be to dumb everything down to the lowest common denominator... and IMHO, that's suicide for WebGL - or at least for whichever browser does it if others do not.

There certainly COULD be 10 bits of information here - but it's hard to tell how much variability there is out there without doing some kind of large-scale survey. Also, we don't know how well that correlates with the other bits of data that can already be obtained. We'd also have to worry that every time we introduce a WebGL extension (some of which are too important to miss out on) or provide any other kind of optional feature, we're leaking more data. Adding extensions to WebGL would probably be the strongest 'signal' we could be sending.
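[Editor's note: the remark about extensions being the strongest signal is easy to see concretely: the supported-extension list is a bit vector a page can read directly via getSupportedExtensions(). A sketch; the extension names are real WebGL extension names, but the stub context and its support pattern are invented:]

```javascript
// Each extension whose presence varies across implementations contributes
// up to one bit of fingerprint. A page can read the whole list at once;
// here a stub object stands in for a real WebGL context.
function extensionBits(gl, knownExtensions) {
  const supported = new Set(gl.getSupportedExtensions());
  // Bit vector over a fixed, agreed-upon extension ordering:
  return knownExtensions.map((e) => (supported.has(e) ? "1" : "0")).join("");
}

// Real WebGL extension names; the support pattern below is invented.
const KNOWN = [
  "OES_texture_float",
  "OES_standard_derivatives",
  "WEBGL_compressed_texture_s3tc",
];
const fakeGL = { getSupportedExtensions: () => ["OES_texture_float"] };
console.log(extensionBits(fakeGL, KNOWN)); // "100"
```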
The values that seem most useful are:

ALIASED_LINE_WIDTH_RANGE, ALIASED_POINT_SIZE_RANGE, ALPHA_BITS,
COMPRESSED_TEXTURE_FORMATS, DEPTH_BITS, MAX_COMBINED_TEXTURE_IMAGE_UNITS,
MAX_CUBE_MAP_TEXTURE_SIZE, MAX_FRAGMENT_UNIFORM_VECTORS,
MAX_RENDERBUFFER_SIZE, MAX_TEXTURE_IMAGE_UNITS, MAX_TEXTURE_SIZE,
MAX_VARYING_VECTORS, MAX_VERTEX_ATTRIBS, MAX_VERTEX_TEXTURE_IMAGE_UNITS,
MAX_VERTEX_UNIFORM_VECTORS, MAX_VIEWPORT_DIMS,
NUM_COMPRESSED_TEXTURE_FORMATS, NUM_SHADER_BINARY_FORMATS, SAMPLE_BUFFERS,
STENCIL_BITS, SUBPIXEL_BITS

I still think this is too tough. Dumbing everything down to the lowest common denominator is too painful when that lowest common denominator is a cellphone.

From bja...@ Wed Dec 1 07:37:17 2010
From: bja...@ (Benoit Jacob)
Date: Wed, 1 Dec 2010 07:37:17 -0800 (PST)
Subject: [Public WebGL] about the VENDOR, RENDERER, and VERSION strings
In-Reply-To: <97b6232525d3774586206a1b95c25b43.squirrel@webmail.sjbaker.org>
Message-ID: <1941739864.454669.1291217837184.JavaMail.root@cm-mail03.mozilla.org>

----- Original Message -----
> So if we think that there are ~10 bits of information leaked through the
> VENDOR/RENDERER/VERSION strings - the important question is how much of
> that is also leaked through gl.GetParameter and other easily testable
> means?

I agree that's the next very important question. I don't expect to be able to bring the leakage down to 0 bits, but I don't have to, as a certain amount of information can be obtained by simple benchmarking anyway. I want to find a useful compromise, as I explained in earlier emails.
But there is still quite a large immediate benefit in disabling RENDERER alone, as RENDERER gives two important pieces of information that can't be obtained in any other way (AFAIK):

1) RENDERER tells whether the machine is a desktop or laptop.
2) RENDERER tells whether the machine is a hardcore gamer's machine (or was when it was purchased). This was noted earlier by Oliver. Even if a GeForce 8800 Ultra is not powerful by today's standards, it indicates a gaming machine.

The VERSION string still gives away its ~4 bits of information, in a way that's completely orthogonal to what other getParameter calls give. So disabling VENDOR/RENDERER/VERSION readily solves half of the problem. It's not a useless thing to do. Then I agree with the need to look further into the other getParameter calls to further reduce the problem.

Cheers,
Benoit

> If that number is anywhere close to 10 bits then simply disabling the
> VENDOR/RENDERER/VERSION strings just makes life inconvenient for the good
> guys without buying us anything in terms of privacy from the bad guys.
> The only way to fix THAT would be to dumb everything down to the lowest
> common denominator...and IMHO, that's suicide for WebGL...or at least for
> whichever browser does it if others do not.
>
> There certainly COULD be 10 bits of information here - but it's hard to
> tell how much variability there is out there without doing some kind of
> large-scale survey. Also, we don't know how well that correlates with the
> other bits of data that can already be obtained. We'd also have to worry
> that every time we introduce a WebGL extension (some of which are too
> important to miss out on) or provide any other kind of optional feature,
> we're leaking more data. Adding extensions to WebGL would probably
> be the strongest 'signal' we could be sending.
>
> The values that seem most useful are:
>
> ALIASED_LINE_WIDTH_RANGE, ALIASED_POINT_SIZE_RANGE, ALPHA_BITS,
> COMPRESSED_TEXTURE_FORMATS, DEPTH_BITS, MAX_COMBINED_TEXTURE_IMAGE_UNITS,
> MAX_CUBE_MAP_TEXTURE_SIZE, MAX_FRAGMENT_UNIFORM_VECTORS,
> MAX_RENDERBUFFER_SIZE, MAX_TEXTURE_IMAGE_UNITS, MAX_TEXTURE_SIZE,
> MAX_VARYING_VECTORS, MAX_VERTEX_ATTRIBS, MAX_VERTEX_TEXTURE_IMAGE_UNITS,
> MAX_VERTEX_UNIFORM_VECTORS, MAX_VIEWPORT_DIMS,
> NUM_COMPRESSED_TEXTURE_FORMATS, NUM_SHADER_BINARY_FORMATS, SAMPLE_BUFFERS,
> STENCIL_BITS, SUBPIXEL_BITS
>
> I still think this is too tough. Dumbing everything down to the lowest
> common denominator is too painful when that lowest common denominator is
> a cellphone.

From gma...@ Wed Dec 1 10:14:52 2010
From: gma...@ (Gregg Tavares (wrk))
Date: Wed, 1 Dec 2010 10:14:52 -0800
Subject: [Public WebGL] about the VENDOR, RENDERER, and VERSION strings
In-Reply-To: <1941739864.454669.1291217837184.JavaMail.root@cm-mail03.mozilla.org>
References: <97b6232525d3774586206a1b95c25b43.squirrel@webmail.sjbaker.org> <1941739864.454669.1291217837184.JavaMail.root@cm-mail03.mozilla.org>
Message-ID:

These are the ones that are likely to vary on the desktop:

MAX_COMBINED_TEXTURE_IMAGE_UNITS
MAX_CUBE_MAP_TEXTURE_SIZE
MAX_FRAGMENT_UNIFORM_VECTORS
MAX_RENDERBUFFER_SIZE
MAX_TEXTURE_IMAGE_UNITS
MAX_TEXTURE_SIZE
MAX_VARYING_VECTORS
MAX_VERTEX_ATTRIBS
MAX_VERTEX_TEXTURE_IMAGE_UNITS
MAX_VERTEX_UNIFORM_VECTORS
MAX_VIEWPORT_DIMS

I suspect there are an average of 3 bits of information for each of those (meaning 5-8 variations), which would be 11*3 or 33 bits of info. It's possible that many combinations don't exist, though.

Add in mobile, and some of the other values will start changing as well.
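[Editor's note: for a sense of what this estimate is counting, here is a sketch of how a page could fold those getParameter values into one fingerprint string. The `gl` argument only needs a getParameter method and the enum constants; a stub with invented (but plausible) values stands in for a real WebGL context so the sketch runs outside a browser:]

```javascript
// Fold the MAX_* getParameter values into one fingerprint string. In a
// browser, `gl` would come from canvas.getContext("webgl").
const PARAMS = [
  "MAX_COMBINED_TEXTURE_IMAGE_UNITS", "MAX_CUBE_MAP_TEXTURE_SIZE",
  "MAX_FRAGMENT_UNIFORM_VECTORS", "MAX_RENDERBUFFER_SIZE",
  "MAX_TEXTURE_IMAGE_UNITS", "MAX_TEXTURE_SIZE", "MAX_VARYING_VECTORS",
  "MAX_VERTEX_ATTRIBS", "MAX_VERTEX_TEXTURE_IMAGE_UNITS",
  "MAX_VERTEX_UNIFORM_VECTORS", "MAX_VIEWPORT_DIMS",
];

function capsFingerprint(gl) {
  // Each parameter contributes bits only to the extent its value varies
  // independently across cards -- the crux of the 33-bit debate.
  return PARAMS.map((p) => `${p}=${gl.getParameter(gl[p])}`).join(";");
}

// Stub context: the enum "constants" are just the parameter names here,
// and the two non-zero values are typical, not taken from real hardware.
const fakeGL = {
  ...Object.fromEntries(PARAMS.map((p) => [p, p])),
  getParameter(p) {
    const typical = { MAX_TEXTURE_SIZE: 8192, MAX_VERTEX_ATTRIBS: 16 };
    return p in typical ? typical[p] : 0;
  },
};
console.log(capsFingerprint(fakeGL));
```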
There's a GL caps db here: http://www.myogl.org/?target=database

From bja...@ Wed Dec 1 10:25:38 2010
From: bja...@ (Benoit Jacob)
Date: Wed, 1 Dec 2010 10:25:38 -0800 (PST)
Subject: [Public WebGL] about the VENDOR, RENDERER, and VERSION strings
In-Reply-To:
Message-ID: <787307469.456500.1291227938108.JavaMail.root@cm-mail03.mozilla.org>

----- Original Message -----
> These are the ones that are likely to vary on the desktop:
>
> MAX_COMBINED_TEXTURE_IMAGE_UNITS
> MAX_CUBE_MAP_TEXTURE_SIZE
> MAX_FRAGMENT_UNIFORM_VECTORS
> MAX_RENDERBUFFER_SIZE
> MAX_TEXTURE_IMAGE_UNITS
> MAX_TEXTURE_SIZE
> MAX_VARYING_VECTORS
> MAX_VERTEX_ATTRIBS
> MAX_VERTEX_TEXTURE_IMAGE_UNITS
> MAX_VERTEX_UNIFORM_VECTORS
> MAX_VIEWPORT_DIMS
>
> I suspect there are an average of 3 bits of information for each of
> those (meaning 5-8 variations), which would be 11*3 or 33 bits of info.
>
> It's possible that many combinations don't exist though

The combination of these parameters is a function of the graphics card, and there are only about 2^9 graphics cards, so really there are at most 9 bits of info there (99.999999% of these 2^33 combinations don't exist). Furthermore, many different graphics cards (different RENDERERs) will give the same combination.

> There's a GL caps db here
> http://www.myogl.org/?target=database

Thanks for the link! I didn't know about it.

Benoit

From vla...@ Wed Dec 1 11:00:24 2010
From: vla...@ (Vladimir Vukicevic)
Date: Wed, 1 Dec 2010 11:00:24 -0800 (PST)
Subject: [Public WebGL] about the VENDOR, RENDERER, and VERSION strings
In-Reply-To:
Message-ID: <1774658765.457028.1291230024917.JavaMail.root@cm-mail03.mozilla.org>

----- Original Message -----
> Why does this have to be in the spec?
Why can't it just be an analog > of OpenGL's spec which says the string must start with "WebGL 1.0" and > "WebGL GLSL ES 1.0" and otherwise it's up to the browser to decide if > they do or don't want to add more info after the required prefix? This is the current state of things, and I don't think it's likely to change. The browsers can add whatever info they want after the various strings. - Vlad ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From cma...@ Wed Dec 1 11:49:52 2010 From: cma...@ (Chris Marrin) Date: Wed, 01 Dec 2010 11:49:52 -0800 Subject: [Public WebGL] Quake compatible with recent WebGL implementations In-Reply-To: References: <4CEC954A.2000307@hicorp.co.jp> Message-ID: <72C90851-AF3C-4A91-872C-4E4C68F9E2D9@apple.com> On Nov 24, 2010, at 2:04 PM, Kenneth Russell wrote: > On Tue, Nov 23, 2010 at 8:32 PM, Mark Callow wrote: >> Is there a hosted version of WebGL Quake available that works with current >> browsers? The one at http://tatari.se:8080/GwtQuake.html gives a page saying >> "WebGL Support Required" when tried with FF4b7 and Chrome 9. >> >> Is the source on Google Code up-to-date with the latest implementations? > > I've verified it runs in top of tree Chromium (tested on Mac OS X). Fails on TOT WebKit. Hmmmmm... 
----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Wed Dec 1 12:09:08 2010 From: kbr...@ (Kenneth Russell) Date: Wed, 1 Dec 2010 12:09:08 -0800 Subject: [Public WebGL] Quake compatible with recent WebGL implementations In-Reply-To: <72C90851-AF3C-4A91-872C-4E4C68F9E2D9@apple.com> References: <4CEC954A.2000307@hicorp.co.jp> <72C90851-AF3C-4A91-872C-4E4C68F9E2D9@apple.com> Message-ID: On Wed, Dec 1, 2010 at 11:49 AM, Chris Marrin wrote: > > On Nov 24, 2010, at 2:04 PM, Kenneth Russell wrote: > >> On Tue, Nov 23, 2010 at 8:32 PM, Mark Callow wrote: >>> Is there a hosted version of WebGL Quake available that works with current >>> browsers? The one at http://tatari.se:8080/GwtQuake.html gives a page saying >>> "WebGL Support Required" when tried with FF4b7 and Chrome 9. >>> >>> Is the source on Google Code up-to-date with the latest implementations? >> >> I've verified it runs in top of tree Chromium (tested on Mac OS X). > > Fails on TOT WebKit. Hmmmmm... I just tested my quake2-gwt-port build (about a week old now) with TOT WebKit and it works fine on 10.6. 
-Ken > ----- > ~Chris > cmarrin...@ > > > > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From tu...@ Wed Dec 1 14:19:48 2010 From: tu...@ (Thatcher Ulrich) Date: Wed, 1 Dec 2010 23:19:48 +0100 Subject: [Public WebGL] about the VENDOR, RENDERER, and VERSION strings In-Reply-To: <787307469.456500.1291227938108.JavaMail.root@cm-mail03.mozilla.org> References: <787307469.456500.1291227938108.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Wed, Dec 1, 2010 at 7:25 PM, Benoit Jacob wrote: > ----- Original Message ----- >> These are the ones that are likely to vary on the desktop >> >> MAX_COMBINED_TEXTURE_IMAGE_UNITS >> MAX_CUBE_MAP_TEXTURE_SIZE >> MAX_FRAGMENT_UNIFORM_VECTORS >> MAX_RENDERBUFFER_SIZE, >> MAX_TEXTURE_IMAGE_UNITS >> MAX_TEXTURE_SIZE >> MAX_VARYING_VECTORS >> MAX_VERTEX_ATTRIBS, >> MAX_VERTEX_TEXTURE_IMAGE_UNITS >> MAX_VERTEX_UNIFORM_VECTORS >> MAX_VIEWPORT_DIMS, >> >> >> I suspect there are an average of 3 bits of information for each of >> those (meaning 5-8 variations) which would be`11*3 or 33 bits of info. >> >> >> It's possible that many combinations don't exist though > > The combination of these parameters is a function of the graphics card, and there are only 2^9 graphics cards, so really there are at most 9 bits of info there (99.999999% these 2^33 combinations don't exist). The caps could be affected by driver and OS as well. Also Steve makes a good point about extensions. Coming back to the RENDERER string question for Firefox -- for the purposes of invading privacy, bits obtained via gl.get and gl extensions are just as good as bits obtained from getString(RENDERER), and in practice will be highly redundant with the bits from RENDERER. 
On the other hand, the bits from RENDERER have extra value to WebGL authors and users, because they will help characterize performance, and allow workarounds for bugs in certain implementations.

Unless FF plans on crippling gl.get and gl extensions, hiding the RENDERER info is just punishing the WebGL community, for no privacy benefit.

-T

From vla...@ Wed Dec 1 15:53:33 2010
From: vla...@ (Vladimir Vukicevic)
Date: Wed, 1 Dec 2010 15:53:33 -0800 (PST)
Subject: [Public WebGL] about the VENDOR, RENDERER, and VERSION strings
In-Reply-To:
Message-ID: <1387067668.459793.1291247613101.JavaMail.root@cm-mail03.mozilla.org>

----- Original Message -----
> The caps could be affected by driver and OS as well. Also Steve makes
> a good point about extensions.
>
> Coming back to the RENDERER string question for Firefox -- for the
> purposes of invading privacy, bits obtained via gl.get and gl
> extensions are just as good as bits obtained from getString(RENDERER),
> and in practice will be highly redundant with the bits from RENDERER.
> On the other hand, the bits from RENDERER have extra value to WebGL
> authors and users, because they will help characterize performance,
> and allow workarounds for bugs in certain implementations.
>
> Unless FF plans on crippling gl.get and gl extensions, hiding the
> RENDERER info is just punishing the WebGL community, for no privacy
> benefit.

That's pretty harsh -- there's certainly no desire to punish the WebGL community. Instead, the goal is to try to remove roadblocks to WebGL adoption and future compatibility. Privacy is a big consideration; these are definitely more fingerprint inputs. But, as you point out, unless we block/sanitize/whatever gl.get results, blocking the renderer string doesn't help much.
I don't know that this is necessarily true -- for example, there is a huge range of video cards whose max texture size is 2048 (or 4096, or 8192). The same holds for many of the other get params. That's not an argument that these aren't exposing something that can be used as input for fingerprinting, but it is potentially less than the aggregate get params.

The security exploit targeting argument also exists, but it's nothing more than a speedbump. As has been said, the exploit can be attempted against all systems -- unless, perhaps, on systems without the flaw the exploit results in a visible effect (hang, crash, etc.), but is silent on systems where it does succeed. In that case, it would be worth considerably more if it were able to remain undetected for longer.

However, Mozilla's current decision is mainly based on painful experience with the browser user agent string, which we are now only very slowly able to claw back under control. If every app developer on the web were a good actor and implemented things correctly, there'd be no problem; we could expose lots of details that would let people fine-tune their apps. Unfortunately, that's not the case. For example -- given the string "Firefox/3.5" in the UA, we've had web sites break when we bumped the version, because they checked for Firefox by matching on "Firefox/3.5" explicitly. We've had sites break because they were sniffing for "Firefox" (and not the Gecko renderer) when accessed via nightly/beta builds, purely due to the UA string.

Given the extremely wide range of renderers out there, it seems to me that this issue will be horribly compounded. You'd have webgl apps that sniff for nvidia, and ignore a qualcomm or powervr gpu that might benefit from the same optimization. Someone might sniff for a specific AMD card to work around a bug, even if that bug has been fixed in a newer version of the drivers -- and the fix happens to break the app's workaround. And on and on. That, IMO, is a much worse situation.
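The failure mode described here translates directly from UA strings to RENDERER strings. A sketch contrasting the two patterns -- the GPU name and the capability check below are hypothetical examples, not code from any real app:

```javascript
// Brittle: exact-substring sniffing on a renderer string. It matches the
// one card the author tested, and silently misses any other GPU (or a
// renamed driver string) with the same behavior.
function needsWorkaroundBySniffing(renderer) {
  return renderer.indexOf("ExampleGPU 9000") !== -1;
}

// More robust: probe the capability (or bug) itself. `gl` is anything
// exposing getParameter -- a real WebGL context in a page.
function needsWorkaroundByCapability(gl) {
  // e.g. the vertex texture fetch issue mentioned in this thread:
  return gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS) === 0;
}
```

The capability check keeps working when the driver string changes or when a different vendor's part has the same limitation; the sniff does not.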
For performance considerations (e.g. the vertex shader texture fetch issue), apps can do some perf testing on first run. That data can be cached in the user's browser, so it should have minimal impact. If these issues become extremely common, we can look at adding a webgl extension that tries to expose some of this information (in conjunction with a GL extension that would be needed to get the info in the first place).

For bug workaround issues, there isn't necessarily an easy answer, unless you can check for the bug at runtime. If you can, great; problem solved. If you can't for whatever reason, then perhaps that driver can't really support WebGL, and it should be blacklisted until the underlying bugs are fixed.

For support issues, Firefox has a wealth of data in about:support that could be used to figure out hardware details. This can be accessed by the user via the menu -> Help -> Troubleshooting Information, from where they can click a button to copy all the info to the clipboard for pasting into a support request.

Having more detailed RENDERER etc. info to work around these problems starts going down a long road where the short-term fix might feel good, but it will be extremely painful to undo in the long run. I'd rather not recreate another UA string, especially given the emphasis that has been placed on cross-browser WebGL compat.
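The first-run benchmarking idea sketches out simply. Names here are hypothetical, and `storage` stands in for window.localStorage (anything with getItem/setItem works, which also keeps the logic testable outside a browser):

```javascript
// Run an expensive benchmark at most once per browser profile,
// caching the score so later visits skip it entirely.
function cachedPerfScore(storage, runBenchmark, key) {
  key = key || "webgl-perf-score";
  const cached = storage.getItem(key);
  if (cached !== null) return Number(cached);
  const score = runBenchmark(); // e.g. time N frames of a stress scene
  storage.setItem(key, String(score));
  return score;
}
```

In a page this becomes `cachedPerfScore(window.localStorage, measureStressScene)`; clearing site data naturally re-runs the test, which is the desired behavior after a driver update anyway.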
- Vlad

From cal...@ Wed Dec 1 18:06:24 2010
From: cal...@ (Mark Callow)
Date: Thu, 02 Dec 2010 11:06:24 +0900
Subject: [Public WebGL] about the VENDOR, RENDERER, and VERSION strings
In-Reply-To: <1649208287.454034.1291210847800.JavaMail.root@cm-mail03.mozilla.org>
References: <1649208287.454034.1291210847800.JavaMail.root@cm-mail03.mozilla.org>
Message-ID: <4CF6FF20.3050107@hicorp.co.jp>

> ----- Original Message -----
>> What will leak roughly 10 bits of information? How do you come up with this value?
> See earlier discussion with Cedric:

Benoit, I had seen that. I was confused by the juxtaposition of your comment with Ken's statement that we would push for implementations to have more than minimum values for things such as max uniform vectors. I thought you were claiming that increasing such values would leak 10 bits of info via the get commands. As I pointed out, and as you also discussed in a later message, the number of combinations of different values for the minimum maxima found across devices and implementations is probably nowhere near the number of different devices and implementations.

I still think your enable flag idea is the way to go. As I've already said a couple of times, applications that really care can ask users to enable it, if they discover it is disabled.

Regards

-Mark

From emo...@ Thu Dec 2 02:15:36 2010
From: emo...@ (Erik Möller)
Date: Thu, 02 Dec 2010 11:15:36 +0100
Subject: [Public WebGL] ReadPixels Conformance test
Message-ID:

Hi, I noted that the ReadPixels conformance test uses three texture units.
We failed because of a bug making us incorrectly report a max of 2 units, but I suppose we need to make sure that all conformance tests work with just 2 units. The test in question can be fixed by adding this line

  colors.length = Math.min(colors.length, gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS));

right after where the colors are defined, and replacing the two "% 3" with "% colors.length".

--
Erik Möller
Core Developer
Opera Software

From ced...@ Thu Dec 2 02:30:51 2010
From: ced...@ (Cedric Vivier)
Date: Thu, 2 Dec 2010 18:30:51 +0800
Subject: [Public WebGL] ReadPixels Conformance test
In-Reply-To:
References:
Message-ID:

On Thu, Dec 2, 2010 at 18:15, Erik Möller wrote:
> Hi, I noted that the ReadPixels conformance test uses three texture units.
> We failed because of a bug making us incorrectly report a max of 2 units, but I suppose we need to make sure that all conformance tests work with just 2 units

Actually we should rather make sure all tests work with 8 units, since it is the minimum mandated by ES 2.0 afaik.

Regards,

From emo...@ Thu Dec 2 02:54:48 2010
From: emo...@ (Erik Möller)
Date: Thu, 02 Dec 2010 11:54:48 +0100
Subject: [Public WebGL] ReadPixels Conformance test
In-Reply-To:
References:
Message-ID:

On Thu, 02 Dec 2010 11:30:51 +0100, Cedric Vivier wrote:
> On Thu, Dec 2, 2010 at 18:15, Erik Möller wrote:
>> Hi, I noted that the ReadPixels conformance test uses three texture units.
>> We failed because of a bug making us incorrectly report a max of 2 units, but I suppose we need to make sure that all conformance tests work with just 2 units
>
> Actually we should rather make sure all tests work with 8 units since it is the minimum mandated by ES 2.0 afaik.
>
> Regards,

Right, 2 was just from memory and I might've mixed it up with something else. I had a look now but I can't seem to find where that limit is defined. Anyone know where to find it?

--
Erik Möller
Core Developer
Opera Software

From jda...@ Thu Dec 2 03:27:25 2010
From: jda...@ (John Davis)
Date: Thu, 2 Dec 2010 05:27:25 -0600
Subject: [Public WebGL] Re: Webp
In-Reply-To:
References: <33475911.99316.1287410887932.JavaMail.root@cm-mail03.mozilla.org>
Message-ID:

I see WebP is also supported in the Chrome 9 build, can it be used with WebGL for texturing?

On Wed, Oct 20, 2010 at 11:59 AM, Kenneth Russell wrote:
> On Wed, Oct 20, 2010 at 7:43 AM, John Davis wrote:
> > What started this whole thing is the claim that Webp is much better than jpeg.
> >
> > Btw, is webgl on by default in chrome7 or do we have to wait for chrome8?
>
> Actually, WebGL will not be on by default yet even in Chrome 8. We expect it to be on by default in Chrome 9.
>
> -Ken
>
> > On Wednesday, October 20, 2010, Patrick Baggett wrote:
> >>
> >> On Wed, Oct 20, 2010 at 6:33 AM, John Davis wrote:
> >>
> >> I guess I'm asking the browser makers. Anything that cuts down on the bandwidth bill for texture maps being downloaded to end users is going to be a welcome feature. If people build MMOG's on this technology, compression of textures is going to be key.
> >>
> >> Hate to ask the obvious question, but is there something wrong with PNGs or JPEGs?
They both perform compression, and in the case of JPEG, offer > space/quality trade-offs. Since you've moved outside the realm of real time > compression/uncompression, why not settle on those -- the browser support is > great as is the tool support to create them. You can even use nifty > utilities like pngcrush. > >> > >> I guess what I am trying to say is, unless there is compelling reason > and amazing algorithm that really puts PNG and JPEG to shame, I don't see > any reason that yet another image format would be desirable, and would > likely just make the artists' lives more difficult (i.e. texture mapping a > model using KTX file? Not likely to be supported) > >> > >> > > > > ----------------------------------------------------------- > > You are currently subscribed to public_webgl...@ > > To unsubscribe, send an email to majordomo...@ with > > the following command in the body of your email: > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dan...@ Thu Dec 2 05:54:04 2010 From: dan...@ (Daniel Koch) Date: Thu, 2 Dec 2010 08:54:04 -0500 Subject: [Public WebGL] ReadPixels Conformance test In-Reply-To: References: Message-ID: <6732B531-9C38-4B0F-97B2-CCFFB7246161@transgaming.com> Cedric is right -- it's 8. Table 6.19, p 151 of the ES2.0 specification. Daniel On 2010-12-02, at 5:54 AM, Erik M?ller wrote: > On Thu, 02 Dec 2010 11:30:51 +0100, Cedric Vivier wrote: > >> On Thu, Dec 2, 2010 at 18:15, Erik M?ller wrote: >>> Hi, I noted that the ReadPixels conformance test uses three texture units. >>> We failed because of a bug making us incorrectly report a max of 2 units, >>> but I suppose we need to make sure that all conformance tests work with just >>> 2 units >> >> Actually we should rather make sure all tests works with 8 units since >> it is the minimum mandated by ES 2.0 afaik. >> >> Regards, > > Right, 2 was just from memory and I might've mixed it up with something else. 
I had a look now but I can't seem to find where that limit is defined. Anyone know where to find it? > > -- > Erik M?ller > Core Developer > Opera Software --- Daniel Koch -+- daniel...@ Senior Graphics Architect -+- TransGaming Inc. -+- www.transgaming.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From cma...@ Thu Dec 2 10:35:03 2010 From: cma...@ (Chris Marrin) Date: Thu, 02 Dec 2010 10:35:03 -0800 Subject: [Public WebGL] Quake compatible with recent WebGL implementations In-Reply-To: References: <4CEC954A.2000307@hicorp.co.jp> <72C90851-AF3C-4A91-872C-4E4C68F9E2D9@apple.com> Message-ID: <447162CB-E853-4E7C-A88C-AD58DE8D3CAD@apple.com> On Dec 1, 2010, at 12:09 PM, Kenneth Russell wrote: > On Wed, Dec 1, 2010 at 11:49 AM, Chris Marrin wrote: >> >> On Nov 24, 2010, at 2:04 PM, Kenneth Russell wrote: >> >>> On Tue, Nov 23, 2010 at 8:32 PM, Mark Callow wrote: >>>> Is there a hosted version of WebGL Quake available that works with current >>>> browsers? The one at http://tatari.se:8080/GwtQuake.html gives a page saying >>>> "WebGL Support Required" when tried with FF4b7 and Chrome 9. >>>> >>>> Is the source on Google Code up-to-date with the latest implementations? >>> >>> I've verified it runs in top of tree Chromium (tested on Mac OS X). >> >> Fails on TOT WebKit. Hmmmmm... > > I just tested my quake2-gwt-port build (about a week old now) with TOT > WebKit and it works fine on 10.6. I was talking about the link above which gives me "WebGL support required" and the extremely descriptive error log of "Failed to load resource: cancelled" in gwtquake/145205A6B37E69830A60D5EB91291985.cache.html. Maybe that site is out of date. But that brings up the real question of why isn't anyone hosting the game? Are there legal restrictions? 
----- ~Chris cmarrin...@ ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Thu Dec 2 10:44:11 2010 From: kbr...@ (Kenneth Russell) Date: Thu, 2 Dec 2010 10:44:11 -0800 Subject: [Public WebGL] Quake compatible with recent WebGL implementations In-Reply-To: <447162CB-E853-4E7C-A88C-AD58DE8D3CAD@apple.com> References: <4CEC954A.2000307@hicorp.co.jp> <72C90851-AF3C-4A91-872C-4E4C68F9E2D9@apple.com> <447162CB-E853-4E7C-A88C-AD58DE8D3CAD@apple.com> Message-ID: On Thu, Dec 2, 2010 at 10:35 AM, Chris Marrin wrote: > > On Dec 1, 2010, at 12:09 PM, Kenneth Russell wrote: > >> On Wed, Dec 1, 2010 at 11:49 AM, Chris Marrin wrote: >>> >>> On Nov 24, 2010, at 2:04 PM, Kenneth Russell wrote: >>> >>>> On Tue, Nov 23, 2010 at 8:32 PM, Mark Callow wrote: >>>>> Is there a hosted version of WebGL Quake available that works with current >>>>> browsers? The one at http://tatari.se:8080/GwtQuake.html gives a page saying >>>>> "WebGL Support Required" when tried with FF4b7 and Chrome 9. >>>>> >>>>> Is the source on Google Code up-to-date with the latest implementations? >>>> >>>> I've verified it runs in top of tree Chromium (tested on Mac OS X). >>> >>> Fails on TOT WebKit. Hmmmmm... >> >> I just tested my quake2-gwt-port build (about a week old now) with TOT >> WebKit and it works fine on 10.6. > > I was talking about the link above which gives me "WebGL support required" and the extremely descriptive error log of "Failed to load resource: cancelled" in gwtquake/145205A6B37E69830A60D5EB91291985.cache.html. > > Maybe that site is out of date. But that brings up the real question of why isn't anyone hosting the game? Are there legal restrictions? Yes, there are. The media (textures, models, sounds) are copyrighted and can't be redistributed. 
If someone hosted the engine with a user-created level, that would plausibly be OK. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From bag...@ Thu Dec 2 10:58:33 2010 From: bag...@ (Patrick Baggett) Date: Thu, 2 Dec 2010 12:58:33 -0600 Subject: [Public WebGL] Quake compatible with recent WebGL implementations In-Reply-To: References: <4CEC954A.2000307@hicorp.co.jp> <72C90851-AF3C-4A91-872C-4E4C68F9E2D9@apple.com> <447162CB-E853-4E7C-A88C-AD58DE8D3CAD@apple.com> Message-ID: > > > > Maybe that site is out of date. But that brings up the real question of > why isn't anyone hosting the game? Are there legal restrictions? > > Yes, there are. The media (textures, models, sounds) are copyrighted > and can't be redistributed. If someone hosted the engine with a > user-created level, that would plausibly be OK. > > -Ken > > Even the shareware version? -------------- next part -------------- An HTML attachment was scrubbed... URL: From kbr...@ Thu Dec 2 11:00:38 2010 From: kbr...@ (Kenneth Russell) Date: Thu, 2 Dec 2010 11:00:38 -0800 Subject: [Public WebGL] Re: Webp In-Reply-To: References: <33475911.99316.1287410887932.JavaMail.root@cm-mail03.mozilla.org> Message-ID: I haven't tried it. If WebKit's image decoders support it then it should theoretically work. Try it and let us know. -Ken On Thu, Dec 2, 2010 at 3:27 AM, John Davis wrote: > I see WebP is also supported in the Chrome 9 build, can it be used with > WebGL for texturing? > > On Wed, Oct 20, 2010 at 11:59 AM, Kenneth Russell wrote: >> >> On Wed, Oct 20, 2010 at 7:43 AM, John Davis >> wrote: >> > What started this whole thing is the claim that Webp is much better than >> > jpeg. >> > >> > Btw, is webgl on by default in chrome7 or do we have to wait for >> > chrome8? >> >> Actually, WebGL will not be on by default yet even in Chrome 8. 
We >> expect it to be on by default in Chrome 9. >> >> -Ken >> >> > On Wednesday, October 20, 2010, Patrick Baggett >> > wrote: >> >> >> >> >> >> On Wed, Oct 20, 2010 at 6:33 AM, John Davis >> >> wrote: >> >> >> >> I guess I'm asking the browser makers.? Anything that cuts down on the >> >> bandwidth bill for texture maps being downloaded to end users?is going to be >> >> a welcome feature.? If people build MMOG's on this technology, compression >> >> of textures is going to be key. >> >> >> >> >> >> >> >> Hate to ask the obvious question, but is there something wrong with >> >> PNGs or JPEGs? They both perform compression, and in the case of JPEG, >> >> offer space/quality trade-offs. Since you've moved outside the realm of real >> >> time compression/uncompression, why not settle on those -- the browser >> >> support is great as is the tool support to create them. You can even use >> >> nifty utilities like pngcrush. >> >> >> >> I guess what I am trying to say is, unless there is compelling reason >> >> and amazing algorithm that really puts PNG and JPEG to shame, I don't see >> >> any reason that yet another image format would be desirable, and would >> >> likely just make the artists' lives more difficult (i.e. texture mapping a >> >> model using KTX file? 
Not likely to be supported) >> >> >> >> >> > >> > ----------------------------------------------------------- >> > You are currently subscribed to public_webgl...@ >> > To unsubscribe, send an email to majordomo...@ with >> > the following command in the body of your email: >> > >> > >> > > ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From kbr...@ Thu Dec 2 11:08:38 2010 From: kbr...@ (Kenneth Russell) Date: Thu, 2 Dec 2010 11:08:38 -0800 Subject: [Public WebGL] Quake compatible with recent WebGL implementations In-Reply-To: References: <4CEC954A.2000307@hicorp.co.jp> <72C90851-AF3C-4A91-872C-4E4C68F9E2D9@apple.com> <447162CB-E853-4E7C-A88C-AD58DE8D3CAD@apple.com> Message-ID: On Thu, Dec 2, 2010 at 10:58 AM, Patrick Baggett wrote: >> >> > Maybe that site is out of date. But that brings up the real question of >> > why isn't anyone hosting the game? Are there legal restrictions? >> >> Yes, there are. The media (textures, models, sounds) are copyrighted >> and can't be redistributed. If someone hosted the engine with a >> user-created level, that would plausibly be OK. >> >> -Ken >> > > Even the shareware version? Even the shareware version. Check the license. -Ken ----------------------------------------------------------- You are currently subscribed to public_webgl...@ To unsubscribe, send an email to majordomo...@ with the following command in the body of your email: From gma...@ Thu Dec 2 12:24:04 2010 From: gma...@ (Gregg Tavares (wrk)) Date: Thu, 2 Dec 2010 12:24:04 -0800 Subject: [Public WebGL] Re: Webp In-Reply-To: References: <33475911.99316.1287410887932.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Thu, Dec 2, 2010 at 11:00 AM, Kenneth Russell wrote: > I haven't tried it. If WebKit's image decoders support it then it > should theoretically work. Try it and let us know. 
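"Try it" in application code looks roughly like the sketch below: attempt the WebP URL, and fall back to a PNG when the browser's image decoder rejects it. The URL names are hypothetical, and `createImage` is injected only so the control flow can be exercised without a DOM (in a page, pass `function () { return new Image(); }`):

```javascript
// Load the first decodable image from `urls` and upload it as a WebGL
// texture via the standard HTMLImageElement -> texImage2D path.
// Calls onReady(texture) on success, onReady(null) if every URL fails.
function loadTextureWithFallback(gl, urls, createImage, onReady) {
  if (urls.length === 0) { onReady(null); return; }
  const img = createImage();
  img.onload = function () {
    const tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
    onReady(tex);
  };
  img.onerror = function () {
    // Decoder missing (e.g. no WebP support): try the next format.
    loadTextureWithFallback(gl, urls.slice(1), createImage, onReady);
  };
  img.src = urls[0];
}

// e.g. loadTextureWithFallback(gl, ["brick.webp", "brick.png"],
//                              function () { return new Image(); }, useTexture);
```

Whether the upload works at all depends, as Ken says, on the browser's decoders; the fallback makes trying WebP safe either way.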
> The conformance tests use webp in the Chromium dev build and they pass so I'm guessing it should work. > > -Ken > > On Thu, Dec 2, 2010 at 3:27 AM, John Davis > wrote: > > I see WebP is also supported in the Chrome 9 build, can it be used with > > WebGL for texturing? > > > > On Wed, Oct 20, 2010 at 11:59 AM, Kenneth Russell > wrote: > >> > >> On Wed, Oct 20, 2010 at 7:43 AM, John Davis > >> wrote: > >> > What started this whole thing is the claim that Webp is much better > than > >> > jpeg. > >> > > >> > Btw, is webgl on by default in chrome7 or do we have to wait for > >> > chrome8? > >> > >> Actually, WebGL will not be on by default yet even in Chrome 8. We > >> expect it to be on by default in Chrome 9. > >> > >> -Ken > >> > >> > On Wednesday, October 20, 2010, Patrick Baggett > >> > wrote: > >> >> > >> >> > >> >> On Wed, Oct 20, 2010 at 6:33 AM, John Davis < > jdavis...@> > >> >> wrote: > >> >> > >> >> I guess I'm asking the browser makers. Anything that cuts down on > the > >> >> bandwidth bill for texture maps being downloaded to end users is > going to be > >> >> a welcome feature. If people build MMOG's on this technology, > compression > >> >> of textures is going to be key. > >> >> > >> >> > >> >> > >> >> Hate to ask the obvious question, but is there something wrong with > >> >> PNGs or JPEGs? They both perform compression, and in the case of > JPEG, > >> >> offer space/quality trade-offs. Since you've moved outside the realm > of real > >> >> time compression/uncompression, why not settle on those -- the > browser > >> >> support is great as is the tool support to create them. You can even > use > >> >> nifty utilities like pngcrush. > >> >> > >> >> I guess what I am trying to say is, unless there is compelling reason > >> >> and amazing algorithm that really puts PNG and JPEG to shame, I don't > see > >> >> any reason that yet another image format would be desirable, and > would > >> >> likely just make the artists' lives more difficult (i.e. 
texture > mapping a > >> >> model using KTX file? Not likely to be supported) > >> >> > >> >> > >> > > >> > ----------------------------------------------------------- > >> > You are currently subscribed to public_webgl...@ > >> > To unsubscribe, send an email to majordomo...@ with > >> > the following command in the body of your email: > >> > > >> > > >> > > > > > > ----------------------------------------------------------- > You are currently subscribed to public_webgl...@ > To unsubscribe, send an email to majordomo...@ with > the following command in the body of your email: > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gma...@ Thu Dec 2 12:25:14 2010 From: gma...@ (Gregg Tavares (wrk)) Date: Thu, 2 Dec 2010 12:25:14 -0800 Subject: [Public WebGL] Re: Webp In-Reply-To: References: <33475911.99316.1287410887932.JavaMail.root@cm-mail03.mozilla.org> Message-ID: On Thu, Dec 2, 2010 at 12:24 PM, Gregg Tavares (wrk) wrote: > > > On Thu, Dec 2, 2010 at 11:00 AM, Kenneth Russell wrote: > >> I haven't tried it. If WebKit's image decoders support it then it >> should theoretically work. Try it and let us know. >> > > The conformance tests use webp in the Chromium dev build and they pass so > I'm guessing it should work. > or am I getting confused. Sorry, I read that was webm, not webp. I don't know about webp. Tell us if it works. > > >> >> -Ken >> >> On Thu, Dec 2, 2010 at 3:27 AM, John Davis >> wrote: >> > I see WebP is also supported in the Chrome 9 build, can it be used with >> > WebGL for texturing? >> > >> > On Wed, Oct 20, 2010 at 11:59 AM, Kenneth Russell >> wrote: >> >> >> >> On Wed, Oct 20, 2010 at 7:43 AM, John Davis >> >> wrote: >> >> > What started this whole thing is the claim that Webp is much better >> than >> >> > jpeg. >> >> > >> >> > Btw, is webgl on by default in chrome7 or do we have to wait for >> >> > chrome8? >> >> >> >> Actually, WebGL will not be on by default yet even in Chrome 8. 
We >> >> expect it to be on by default in Chrome 9. >> >> >> >> -Ken >> >> >> >> > On Wednesday, October 20, 2010, Patrick Baggett >> >> > wrote: >> >> >> >> >> >> >> >> >> On Wed, Oct 20, 2010 at 6:33 AM, John Davis < >> jdavis...@> >> >> >> wrote: >> >> >> >> >> >> I guess I'm asking the browser makers. Anything that cuts down on >> the >> >> >> bandwidth bill for texture maps being downloaded to end users is >> going to be >> >> >> a welcome feature. If people build MMOG's on this technology, >> compression >> >> >> of textures is going to be key. >> >> >> >> >> >> >> >> >> >> >> >> Hate to ask the obvious question, but is there something wrong with >> >> >> PNGs or JPEGs? They both perform compression, and in the case of >> JPEG, >> >> >> offer space/quality trade-offs. Since you've moved outside the realm >> of real >> >> >> time compression/uncompression, why not settle on those -- the >> browser >> >> >> support is great as is the tool support to create them. You can even >> use >> >> >> nifty utilities like pngcrush. >> >> >> >> >> >> I guess what I am trying to say is, unless there is compelling >> reason >> >> >> and amazing algorithm that really puts PNG and JPEG to shame, I >> don't see >> >> >> any reason that yet another image format would be desirable, and >> would >> >> >> likely just make the artists' lives more difficult (i.e. texture >> mapping a >> >> >> model using KTX file? 
Not likely to be supported) >> >> >> >> >> >> >> >> > >> >> > ----------------------------------------------------------- >> >> > You are currently subscribed to public_webgl...@ >> >> > To unsubscribe, send an email to majordomo...@ with >> >> > the following command in the body of your email: >> >> > >> >> > >> >> >> > >> > >> >> ----------------------------------------------------------- >> You are currently subscribed to public_webgl...@ >> To unsubscribe, send an email to majordomo...@ with >> the following command in the body of your email: >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cal...@ Thu Dec 2 18:40:07 2010 From: cal...@ (Mark Callow) Date: Fri, 03 Dec 2010 11:40:07 +0900 Subject: [Public WebGL] ReadPixels Conformance test In-Reply-To: <6732B531-9C38-4B0F-97B2-CCFFB7246161@transgaming.com> References: <6732B531-9C38-4B0F-97B2-CCFFB7246161@transgaming.com> Message-ID: <4CF85887.1060103@hicorp.co.jp> Yes. But it's irrelevant to the fix Erik proposed since his change correctly queries gl for the value. Regards -Mark > Cedric is right -- it's 8. Table 6.19, p 151 of the ES2.0 specification. > > Daniel > > On 2010-12-02, at 5:54 AM, Erik M?ller wrote: > >> On Thu, 02 Dec 2010 11:30:51 +0100, Cedric Vivier > > wrote: >> >>> On Thu, Dec 2, 2010 at 18:15, Erik M?ller >> > wrote: >>>> Hi, I noted that the ReadPixels conformance test uses three texture >>>> units. >>>> We failed because of a bug making us incorrectly report a max of 2 >>>> units, >>>> but I suppose we need to make sure that all conformance tests work >>>> with just >>>> 2 units >>> >>> Actually we should rather make sure all tests works with 8 units since >>> it is the minimum mandated by ES 2.0 afaik. >>> >>> Regards, >> >> Right, 2 was just from memory and I might've mixed it up with >> something else. I had a look now but I can't seem to find where that >> limit is defined. Anyone know where to find it? 
>> >> -- >> Erik M?ller >> Core Developer >> Opera Software > > --- > Daniel Koch -+- daniel...@ > > Senior Graphics Architect -+- TransGaming Inc. > -+- www.transgaming.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 398 bytes Desc: not available URL: From cal...@ Thu Dec 2 18:56:44 2010 From: cal...@ (Mark Callow) Date: Fri, 03 Dec 2010 11:56:44 +0900 Subject: [Public WebGL] libKTX ETC software decompressor license revised Message-ID: <4CF85C6C.6050509@hicorp.co.jp> Hi, It has taken a quite some time but Ericsson has finally released etcdec.cxx with a revised license that makes it clear it can be used for implementing Khronos related stuff. The amendment is highlighted in the following excerpt: Under the terms and conditions of the License Agreement, Licensee hereby, receives a non-exclusive, non transferable, limited, free of charge, perpetual and worldwide license, to copy, use, distribute and modify the Software, but only for the purpose of developing, manufacturing, selling, using and distributing products including the Software, which products are used for (i) compression and/or decompression to create content creation tools for usage with a Khronos API, and/or (ii) compression and/or decompression for the purpose of usage with a middleware API that is built on top of a Khronos API, such as JCPs based on a Khronos API (in particular "Mobile 3D Graphics API for J2ME" and its future versions and "Java Bindings for OpenGL ES" and its future versions), and/or (iii) compression and/or decompression to implement a Khronos specification. It is now clear that the code can be used in WebGL implementations, even those built on top of D3D, and for a KTX viewer. So there should be no problem for browser vendors to use the code from libktx. 
Hopefully this removes the last impediment to including ETC & KTX support in WebGL, most likely in the next version. See the libktx documentation for the whole license and the KTX home page for downloads etc. Regards -Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: callow_mark.vcf Type: text/x-vcard Size: 412 bytes Desc: not available URL: From gma...@ Thu Dec 2 19:17:50 2010 From: gma...@ (Gregg Tavares (wrk)) Date: Thu, 2 Dec 2010 19:17:50 -0800 Subject: [Public WebGL] ReadPixels Conformance test In-Reply-To: <4CF85887.1060103@hicorp.co.jp> References: <6732B531-9C38-4B0F-97B2-CCFFB7246161@transgaming.com> <4CF85887.1060103@hicorp.co.jp> Message-ID: On Thu, Dec 2, 2010 at 6:40 PM, Mark Callow wrote: > Yes. But it's irrelevant to the fix Erik proposed since his change > correctly queries gl for the value. > I'm confused. If the minimum is 8 then no fix is needed. The test uses 3. There is no need to query Regards > > -Mark > > > Cedric is right -- it's 8. Table 6.19, p 151 of the ES2.0 specification. > > Daniel > > On 2010-12-02, at 5:54 AM, Erik M?ller wrote: > > On Thu, 02 Dec 2010 11:30:51 +0100, Cedric Vivier > wrote: > > On Thu, Dec 2, 2010 at 18:15, Erik M?ller wrote: > > Hi, I noted that the ReadPixels conformance test uses three texture > units. > > We failed because of a bug making us incorrectly report a max of 2 units, > > but I suppose we need to make sure that all conformance tests work with > just > > 2 units > > > Actually we should rather make sure all tests works with 8 units since > > it is the minimum mandated by ES 2.0 afaik. > > > Regards, > > > Right, 2 was just from memory and I might've mixed it up with something > else. I had a look now but I can't seem to find where that limit is defined. > Anyone know where to find it? 
> > -- > Erik M?ller > Core Developer > Opera Software > > > --- > Daniel Koch -+- daniel...@ > Senior Graphics Architect -+- TransGaming Inc. -+- www.transgaming.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jda...@ Thu Dec 2 19:31:23 2010 From: jda...@ (John Davis) Date: Thu, 2 Dec 2010 21:31:23 -0600 Subject: [Public WebGL] Re: Webp In-Reply-To: References: <33475911.99316.1287410887932.JavaMail.root@cm-mail03.mozilla.org> Message-ID: Isn't there already a unit test or something for this? Why do I have to write a test? On Thu, Dec 2, 2010 at 2:25 PM, Gregg Tavares (wrk) wrote: > > > On Thu, Dec 2, 2010 at 12:24 PM, Gregg Tavares (wrk) wrote: > >> >> >> On Thu, Dec 2, 2010 at 11:00 AM, Kenneth Russell wrote: >> >>> I haven't tried it. If WebKit's image decoders support it then it >>> should theoretically work. Try it and let us know. >>> >> >> The conformance tests use webp in the Chromium dev build and they pass so >> I'm guessing it should work. >> > > or am I getting confused. Sorry, I read that was webm, not webp. I don't > know about webp. Tell us if it works. > > >> >> >>> >>> -Ken >>> >>> On Thu, Dec 2, 2010 at 3:27 AM, John Davis >>> wrote: >>> > I see WebP is also supported in the Chrome 9 build, can it be used with >>> > WebGL for texturing? >>> > >>> > On Wed, Oct 20, 2010 at 11:59 AM, Kenneth Russell >>> wrote: >>> >> >>> >> On Wed, Oct 20, 2010 at 7:43 AM, John Davis >> > >>> >> wrote: >>> >> > What started this whole thing is the claim that Webp is much better >>> than >>> >> > jpeg. >>> >> > >>> >> > Btw, is webgl on by default in chrome7 or do we have to wait for >>> >> > chrome8? >>> >> >>> >> Actually, WebGL will not be on by default yet even in Chrome 8. We >>> >> expect it to be on by default in Chrome 9. 
>>> >> >>> >> -Ken >>> >> >>> >> > On Wednesday, October 20, 2010, Patrick Baggett >>> >> > wrote: >>> >> >> >>> >> >> >>> >> >> On Wed, Oct 20, 2010 at 6:33 AM, John Davis < >>> jdavis...@> >>> >> >> wrote: >>> >> >> >>> >> >> I guess I'm asking the browser makers. Anything that cuts down on >>> the >>> >> >> bandwidth bill for texture maps being downloaded to end users is >>> going to be >>> >> >> a welcome feature. If people build MMOG's on this technology, >>> compression >>> >> >> of textures is going to be key. >>> >> >> >>> >> >> >>> >> >> >>> >> >> Hate to ask the obvious question, but is there something wrong with >>> >> >> PNGs or JPEGs? They both perform compression, and in the case of >>> JPEG, >>> >> >> offer space/quality trade-offs. Since you've moved outside the >>> realm of real >>> >> >> time compression/uncompression, why not settle on those -- the >>> browser >>> >> >> support is great as is the tool support to create them. You can >>> even use >>> >> >> nifty utilities like pngcrush. >>> >> >> >>> >> >> I guess what I am trying to say is, unless there is compelling >>> reason >>> >> >> and amazing algorithm that really puts PNG and JPEG to shame, I >>> don't see >>> >> >> any reason that yet another image format would be desirable, and >>> would >>> >> >> likely just make the artists' lives more difficult (i.e. texture >>> mapping a >>> >> >> model using KTX file? 
Not likely to be supported) >>> >> >> >>> >> >> >>> >> > >>> >> > ----------------------------------------------------------- >>> >> > You are currently subscribed to public_webgl...@ >>> >> > To unsubscribe, send an email to majordomo...@ with >>> >> > the following command in the body of your email: >>> >> > >>> >> > >>> >> >>> > >>> > >>> >>> ----------------------------------------------------------- >>> You are currently subscribed to public_webgl...@ >>> To unsubscribe, send an email to majordomo...@ with >>> the following command in the body of your email: >>> >>> >> > From ste...@ Thu Dec 2 20:24:02 2010 From: ste...@ (Steve Baker) Date: Thu, 02 Dec 2010 22:24:02 -0600 Subject: [Public WebGL] libKTX ETC software decompressor license revised In-Reply-To: <4CF85C6C.6050509@hicorp.co.jp> References: <4CF85C6C.6050509@hicorp.co.jp> Message-ID: <4CF870E2.3090808@sjbaker.org> Have you heard any more about ETC2? It would be nice to be able to release with both formats - not being able to compress alpha-maps is a rather serious restriction. On 12/02/2010 08:56 PM, Mark Callow wrote: > > Hi, > > It has taken quite some time but Ericsson has finally released > etcdec.cxx with a revised license that makes it clear it can be used > for implementing Khronos related stuff.
The amendment is highlighted > in the following excerpt: > > Under the terms and conditions of the License Agreement, Licensee > hereby, receives a non-exclusive, non transferable, limited, free > of charge, perpetual and worldwide license, to copy, use, > distribute and modify the Software, but only for the purpose of > developing, manufacturing, selling, using and distributing > products including the Software, which products are used for (i) > compression and/or decompression to create content creation tools > for usage with a Khronos API, and/or (ii) compression and/or > decompression for the purpose of usage with a middleware API that > is built on top of a Khronos API, such as JCPs based on a Khronos > API (in particular "Mobile 3D Graphics API for J2ME" and its > future versions and "Java Bindings for OpenGL ES" and its future > versions), and/or (iii) compression and/or decompression to > implement a Khronos specification. > > It is now clear that the code can be used in WebGL implementations, > even those built on top of D3D, and for a KTX viewer. So there should > be no problem for browser vendors to use the code from libktx. > > Hopefully this removes the last impediment to including ETC & KTX > support in WebGL, most likely in the next version. > > See the libktx documentation > for the > whole license and the KTX home page > for downloads etc. 
> > Regards > > -Mark > > > > From cal...@ Thu Dec 2 20:54:39 2010 From: cal...@ (Mark Callow) Date: Fri, 03 Dec 2010 13:54:39 +0900 Subject: [Public WebGL] libKTX ETC software decompressor license revised In-Reply-To: <4CF870E2.3090808@sjbaker.org> References: <4CF85C6C.6050509@hicorp.co.jp> <4CF870E2.3090808@sjbaker.org> Message-ID: <4CF8780F.3030009@hicorp.co.jp> Yes I have but the information is still Khronos confidential so I am not allowed to say anything. I expect that when the information is made public, you will be pleased. Regards -Mark > Have you heard any more about ETC2? It would be nice to be able to > release with both formats - not being able to compress alpha-maps is a > rather serious restriction. From ste...@ Thu Dec 2 22:41:18 2010 From: ste...@ (Steve Baker) Date: Fri, 03 Dec 2010 00:41:18 -0600 Subject: [Public WebGL] Alternative compression scheme. Message-ID: <4CF8910E.7000807@sjbaker.org> I don't know whether there is any interest in this - but I guess I could suggest another lossy compression scheme for WebGL textures based on simple dictionary compression. The reason I propose it in the teeth of so many other lossy image compression schemes such as JPEG, WebP, DXT and ETC is that this scheme can be decoded in the shader - either as a one-time step to avoid decompressing it in JavaScript - or used directly in its compressed form as the image is rendered. The idea is to chop your image into (say) 4x4 pixel chunks - and make a list of them.
The file itself consists of the list of 4x4 chunks (the dictionary) - plus a 2D array of indices that point into that dictionary. The size of the resulting file is (16*DictionarySize*Sizeof(Pixel)) + (ImageSize*log2(DictionarySize)/128) bytes, with ImageSize in pixels - each index covers a 16-pixel chunk and costs log2(DictionarySize) bits, hence the 16*8=128 - ...which is (in practice) dominated by the first term - the space consumed by the dictionary. To avoid having to invent new file formats - you can store this as two separate maps - one containing the dictionary and the other containing the index array. Both of those could be stored as .PNG or whatever. The format is able (depending on the encoder) to: a) Losslessly compress images with areas of solid color, repeating pattern or zero alpha by simply recognizing identical chunks in the dictionary and merging them...or... b) Lossily compress arbitrary images by eliminating groups of chunks that are sufficiently similar that they can be replaced by the average of that group. This scheme is very costly to encode (finding the smallest set of sufficiently similar chunks is painful) - but it's super-cheap to decode. It has an arbitrary quality versus size trade-off that you can set either to produce constant compression ratios (by fixing the size of the dictionary so it contains the N least similar chunks) or constant quality metrics (by allowing the dictionary to be of any size and limiting the degree of dissimilarity you allow when merging different chunks). It can generally achieve lossy compression rates of around 8:1 to maybe 20:1 with reasonable image quality...which is much better than either ETC or DXT. But the most important thing is that you can decode it inside the shader - on-the-fly if necessary. That means that you can save texture map memory as well as download bandwidth and cache space.
To do that, you write your dictionary into one texture and the 2D index array into another - and do a texture fetch to the appropriate chunk index and a dependent texture read to fetch the actual texel(s) out of the appropriate chunk in the dictionary texture. The decoding is a lot slower than hardware texture compression (especially if you want filtering) - but the savings on texture memory make it useful - and it doesn't depend on having hardware support. With 4x4 chunks, you can fairly easily adapt the shader-based decoder to do a couple of levels of MIPmapping by MIPmapping the dictionary image to make 2x2 and 1x1 chunks. If you need lower levels of MIPmap than that - you can either make a new dictionary/index image for every 3 levels of MIP - or by just storing the lower MIP levels uncompressed because they contribute so little to the overall file size. I've used it in a couple of projects in the past - for compressing multi-spectral satellite photography, for example - and it works surprisingly well...provided that you aren't too fill-rate limited and can stand the extra shader complexity. The big advantage over ANY of the other schemes is that it does an excellent job of compressing images with alpha planes, HDR images, normal maps, floating point maps and other weird kinds of texture...neither DXT, ETC, JPEG or WebP can do even a half-assed job of any of those things. When you have a lot of more or less similar images - you can even do the trick of compressing all of the images into a common dictionary and just having separate index maps for each of the original images. 
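[Editor's note: the chunk-dictionary scheme described in the two paragraphs above can be sketched on the CPU in a few lines. This is an illustrative re-implementation of the description, not Steve's code; only the lossless path (case a, merging byte-identical chunks) is shown, and all names here are invented for the example.]

```python
# Dictionary compression into 4x4 pixel chunks, per the scheme above.
# encode() builds the two maps the post describes: a dictionary of
# unique chunks plus a 2D array of indices into that dictionary.
# Similar-chunk merging (the lossy path) is deliberately omitted.

CHUNK = 4  # chunk edge length in pixels


def encode(image):
    """image: list of rows, each a list of pixel values; width and
    height must be multiples of CHUNK. Returns (dictionary, indices)."""
    h, w = len(image), len(image[0])
    dictionary, lookup, indices = [], {}, []
    for cy in range(0, h, CHUNK):
        row = []
        for cx in range(0, w, CHUNK):
            chunk = tuple(tuple(image[cy + y][cx + x] for x in range(CHUNK))
                          for y in range(CHUNK))
            if chunk not in lookup:        # identical chunks share one entry
                lookup[chunk] = len(dictionary)
                dictionary.append(chunk)
            row.append(lookup[chunk])
        indices.append(row)
    return dictionary, indices


def decode(dictionary, indices):
    """Rebuild the full image from the dictionary and the index array."""
    out = []
    for row in indices:
        for y in range(CHUNK):
            # one scanline: row y of each referenced chunk, left to right
            out.append([dictionary[i][y][x] for i in row for x in range(CHUNK)])
    return out
```

In the WebGL setting the decode step would instead live in the fragment shader, as the post says: the index array and the dictionary become two textures, and a dependent texture read into the dictionary texture fetches the actual texel. The size trade-off falls out directly: the dictionary costs 16*len(dictionary)*Sizeof(Pixel) bytes, and each 4x4 block of the image costs log2(len(dictionary)) bits of index.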
-- Steve From emo...@ Fri Dec 3 01:05:44 2010 From: emo...@ (Erik Möller) Date: Fri, 03 Dec 2010 10:05:44 +0100 Subject: [Public WebGL] ReadPixels Conformance test In-Reply-To: References: <6732B531-9C38-4B0F-97B2-CCFFB7246161@transgaming.com> <4CF85887.1060103@hicorp.co.jp> Message-ID: Agreed, no need to query. Just me underestimating the awesomeness of ES 2. -Erik On Fri, 03 Dec 2010 04:17:50 +0100, Gregg Tavares (wrk) wrote: > On Thu, Dec 2, 2010 at 6:40 PM, Mark Callow > wrote: > >> Yes. But it's irrelevant to the fix Erik proposed since his change >> correctly queries gl for the value. >> > I'm confused. If the minimum is 8 then no fix is needed. The test uses 3. > > There is no need to query > > > Regards >> >> -Mark >> >> >> Cedric is right -- it's 8. Table 6.19, p 151 of the ES2.0 >> specification. >> >> Daniel >> >> On 2010-12-02, at 5:54 AM, Erik Möller wrote: >> >> On Thu, 02 Dec 2010 11:30:51 +0100, Cedric Vivier >> wrote: >> >> On Thu, Dec 2, 2010 at 18:15, Erik Möller wrote: >> >> Hi, I noted that the ReadPixels conformance test uses three texture >> units. >> >> We failed because of a bug making us incorrectly report a max of 2 >> units, >> >> but I suppose we need to make sure that all conformance tests work with >> just >> >> 2 units >> >> >> Actually we should rather make sure all tests work with 8 units since >> >> it is the minimum mandated by ES 2.0 afaik. >> >> >> Regards, >> >> >> Right, 2 was just from memory and I might've mixed it up with something >> else. I had a look now but I can't seem to find where that limit is >> defined. >> Anyone know where to find it? >> >> -- >> Erik Möller >> Core Developer >> Opera Software >> >> >> --- >> Daniel Koch -+- daniel...@ >> Senior Graphics Architect -+- TransGaming Inc.
-+- www.transgaming.com >> >> -- Erik Möller Core Developer Opera Software From tu...@ Fri Dec 3 07:23:12 2010 From: tu...@ (Thatcher Ulrich) Date: Fri, 3 Dec 2010 16:23:12 +0100 Subject: [Public WebGL] webgl-bench now has uploads, show aggregate results Message-ID: I've been hacking away at webgl-bench and now it shows aggregate performance results: http://webgl-bench.appspot.com You can upload your benchmark results (or run the benchmarks without uploading). It's still very rough and likely full of bugs, but fun to look at. Next TODO is to show breakdowns by browser, OS, GPU vendor, etc. Then, more benchmarks covering more interesting stuff. -T From zhe...@ Fri Dec 3 08:10:59 2010 From: zhe...@ (Mo, Zhenyao) Date: Fri, 3 Dec 2010 08:10:59 -0800 Subject: [Public WebGL] Re: Webp In-Reply-To: References: <33475911.99316.1287410887932.JavaMail.root@cm-mail03.mozilla.org> Message-ID: WebP is not yet supported by all browsers that run WebGL. That's why we can't add an entry in the conformance tests. Mo On Thu, Dec 2, 2010 at 7:31 PM, John Davis wrote: > Isn't there already a unit test or something for this? Why do I have to > write a test? > > On Thu, Dec 2, 2010 at 2:25 PM, Gregg Tavares (wrk) wrote: >> >> >> On Thu, Dec 2, 2010 at 12:24 PM, Gregg Tavares (wrk) >> wrote: >>> >>> >>> On Thu, Dec 2, 2010 at 11:00 AM, Kenneth Russell wrote: >>>> >>>> I haven't tried it. If WebKit's image decoders support it then it >>>> should theoretically work. Try it and let us know.
>>> The conformance tests use webp in the Chromium dev build and they pass so >>> I'm guessing it should work. >> >> or am I getting confused. Sorry, I read that was webm, not webp. I don't >> know about webp. Tell us if it works. >> >>> >>> >>>> >>>> -Ken >>>> >>>> On Thu, Dec 2, 2010 at 3:27 AM, John Davis >>>> wrote: >>>> > I see WebP is also supported in the Chrome 9 build, can it be used >>>> > with >>>> > WebGL for texturing? >>>> > >>>> > On Wed, Oct 20, 2010 at 11:59 AM, Kenneth Russell >>>> > wrote: >>>> >> >>>> >> On Wed, Oct 20, 2010 at 7:43 AM, John Davis >>>> >> >>>> >> wrote: >>>> >> > What started this whole thing is the claim that Webp is much better >>>> >> > than >>>> >> > jpeg. >>>> >> > >>>> >> > Btw, is webgl on by default in chrome7 or do we have to wait for >>>> >> > chrome8? >>>> >> >>>> >> Actually, WebGL will not be on by default yet even in Chrome 8. We >>>> >> expect it to be on by default in Chrome 9. >>>> >> >>>> >> -Ken >>>> >> >>>> >> > On Wednesday, October 20, 2010, Patrick Baggett >>>> >> > wrote: >>>> >> >> >>>> >> >> >>>> >> >> On Wed, Oct 20, 2010 at 6:33 AM, John Davis >>>> >> >> >>>> >> >> wrote: >>>> >> >> >>>> >> >> I guess I'm asking the browser makers. Anything that cuts down on >>>> >> >> the >>>> >> >> bandwidth bill for texture maps being downloaded to end users is >>>> >> >> going to be >>>> >> >> a welcome feature. If people build MMOG's on this technology, >>>> >> >> compression >>>> >> >> of textures is going to be key. >>>> >> >> >>>> >> >> >>>> >> >> >>>> >> >> Hate to ask the obvious question, but is there something wrong >>>> >> >> with >>>> >> >> PNGs or JPEGs? They both perform compression, and in the case of >>>> >> >> JPEG, >>>> >> >> offer space/quality trade-offs. Since you've moved outside the >>>> >> >> realm of real >>>> >> >> time compression/uncompression, why not settle on those -- the >>>> >> >> browser >>>> >> >> support is great as is the tool support to create them.
You can >>>> >> >> even use >>>> >> >> nifty utilities like pngcrush. >>>> >> >> >>>> >> >> I guess what I am trying to say is, unless there is compelling >>>> >> >> reason >>>> >> >> and amazing algorithm that really puts PNG and JPEG to shame, I >>>> >> >> don't see >>>> >> >> any reason that yet another image format would be desirable, and >>>> >> >> would >>>> >> >> likely just make the artists' lives more difficult (i.e. texture >>>> >> >> mapping a >>>> >> >> model using KTX file? Not likely to be supported) >>>> >> >> >>>> >> >> >>>> >> > >>>> >> > ----------------------------------------------------------- >>>> >> > You are currently subscribed to public_webgl...@ >>>> >> > To unsubscribe, send an email to majordomo...@ with >>>> >> > the following command in the body of your email: >>>> >> > >>>> >> > >>>> >> >>>> > >>>> > >>>> >>>> ----------------------------------------------------------- >>>> You are currently subscribed to public_webgl...@ >>>> To unsubscribe, send an email to majordomo...@ with >>>> the following command in the body of your email: >>>> >>> >> > > From oli...@ Fri Dec 3 09:21:23 2010 From: oli...@ (Oliver Hunt) Date: Fri, 3 Dec 2010 09:21:23 -0800 Subject: [Public WebGL] Re: Webp In-Reply-To: References: <33475911.99316.1287410887932.JavaMail.root@cm-mail03.mozilla.org> Message-ID: I don't believe that it is the job of the WebGL conformance test suite to test non-webgl features: namely what image formats are supported by a UA. Otherwise why don't we (for example) test tiff, ico, jpeg2000, pdfs, SVG, the many and varied raw image types, etc. The WebGL spec talks about and