From: Will Eastcott (wil...@)
Date: Thu, 15 Sep 2016 15:16:29 +0100
Subject: [Public WebGL] ETC1 support in desktop WebGL

Hi all,

Chrome supports ETC1 on desktop (at least on Windows). How is this possible? I didn't think desktop drivers/GPUs had native support. Are these textures being decompressed in software before use?

Thanks,

Will

--
Will Eastcott (@willeastcott)
CEO, PlayCanvas Ltd
http://playcanvas.com

From: Corentin Wallez (cwa...@)
Date: Thu, 15 Sep 2016 10:34:28 -0400
Subject: [Public WebGL] ETC1 support in desktop WebGL

Hey Will,

As you guessed, it is being decompressed in software. I think the rationale is that it makes it easier to port mobile games to the Web, and that the increased memory usage (and texture cache misses) is tiny compared to the VRAM difference between mobile and desktop platforms.

Corentin

From: Maksims Mihejevs (max...@)
Date: Thu, 15 Sep 2016 15:44:22 +0100
Subject: [Public WebGL] ETC1 support in desktop WebGL

Hi Corentin,

Is there a way for a developer to choose this behaviour?

It creates a problem for us. We are working on Asset Variants, where we generate a list of compressed textures from an original texture. For example, from a PNG we generate DXT, PVR, ETC and others (more in future), ordered by priority based on quality. The browser then checks which formats are supported and uses the highest-priority compressed texture available.

The only reason we use them is to reduce VRAM usage, to allow larger-scale applications on mobile and desktop. There is also a download-size benefit against PNG, though JPEG is usually smaller anyway.

If ETC1 reports "supported" but is actually software-decompressed, that means roughly six times the VRAM usage compared to a similar compressed format such as DXT (ETC1 is 4 bits per texel; decompressed to RGB8 it becomes 24).

This is a real problem for us. So the question is: how can we detect that a format is truly supported by the GPU?

Kind Regards,
Max
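For concreteness, the Asset Variants selection Max describes might look something like the sketch below. The variant-list shape and file names are assumptions; the extension names are the registered WebGL ones. Note that getExtension() returning non-null does not reveal whether support is native or software-emulated, which is exactly the problem raised here.

    // Minimal sketch of priority-ordered variant selection (assumptions noted above).
    function pickTextureVariant(gl, variants) {
      // Map each variant format to the WebGL extension that advertises it.
      const extensionFor = {
        dxt:  'WEBGL_compressed_texture_s3tc',
        pvr:  'WEBGL_compressed_texture_pvrtc',
        etc1: 'WEBGL_compressed_texture_etc1',
      };
      // Walk the variants in quality-priority order; take the first one
      // this context claims to support.
      for (const v of variants) {
        const name = extensionFor[v.format];
        if (name && gl.getExtension(name)) return v;
      }
      // Fall back to the universally supported PNG/JPEG original.
      return variants.find(v => v.format === 'png' || v.format === 'jpeg');
    }

    // Example call, highest quality first (URLs are illustrative):
    // pickTextureVariant(gl, [
    //   { format: 'dxt',  url: 'diffuse.dds' },
    //   { format: 'etc1', url: 'diffuse.pkm' },
    //   { format: 'png',  url: 'diffuse.png' },
    // ]);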
From: Maksims Mihejevs (max...@)
Date: Thu, 15 Sep 2016 15:53:17 +0100
Subject: [Public WebGL] ETC1 support in desktop WebGL

One more point. This is the web, and on the web there are formats that are universally supported, namely PNG and JPEG. They cover most cases, and can even carry HDR in the form of RGBM packed into a PNG.

I cannot imagine anyone developing a web project against a single compressed format without also shipping PNG/JPEGs, especially since no single compressed format is universally supported. ETC1 is not a candidate for that role: it cannot even do alpha, and its quality is just enough for mobile but not for HD desktop experiences.

In WebGL our goal is to support everyone on the web, and as an engine and tools developer we have to make sensible decisions to optimise and deliver the best experience. The reason we use compressed textures is to reduce VRAM dramatically; there are already too many cases where apps crash mobile and desktop browsers because they don't fit in VRAM. If providing compressed textures doesn't actually solve that problem, then I'm afraid the web is failing to deliver the one thing compressed texture formats were made for: reducing VRAM footprint.

Kind Regards,
Max
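The RGBM trick mentioned above packs HDR data into an ordinary 8-bit PNG and expands it in the fragment shader. A minimal sketch of the decode side follows; the range constant (6.0 here) is chosen by the encoder, so treat the value as an assumption.

    // GLSL helper for RGBM decode, carried as a JS string for shader assembly.
    const rgbmDecodeGLSL = `
      const float RGBM_RANGE = 6.0;
      vec3 decodeRGBM(vec4 rgbm) {
        // rgb holds the colour; a holds a shared brightness multiplier
        // scaled into 0..1 by the encoder.
        return rgbm.rgb * rgbm.a * RGBM_RANGE;
      }
    `;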
From: Geoff Lang (geo...@)
Date: Thu, 15 Sep 2016 15:05:52 +0000
Subject: [Public WebGL] ETC1 support in desktop WebGL

If support for ETC1 on Windows is enabled without hardware support, it is likely a mistake. ANGLE has to support the ETC2 compressed formats because ES3 and WebGL 2 require us to, regardless of hardware support.

I think that compressing to DXT should always be preferred. It is only available by extension, so hardware support is implied, and it should be available on all desktop platforms.
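A minimal sketch of the DXT-first path Geoff suggests, where the extension's presence implies hardware support. The DXT1 payload is assumed to come from the application's asset pipeline, with width/height multiples of 4 as DXT block compression requires.

    // Upload a DXT1 texture via the real WEBGL_compressed_texture_s3tc extension.
    function uploadDXT1(gl, width, height, dxt1Data /* Uint8Array */) {
      const s3tc = gl.getExtension('WEBGL_compressed_texture_s3tc');
      if (!s3tc) return null; // No hardware DXT; caller picks another variant.
      const tex = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, tex);
      gl.compressedTexImage2D(gl.TEXTURE_2D, 0,
          s3tc.COMPRESSED_RGB_S3TC_DXT1_EXT, width, height, 0, dxt1Data);
      // Only one mip level is uploaded, so disable mipmapped filtering.
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
      return tex;
    }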
From: Maksims Mihejevs (max...@)
Date: Thu, 15 Sep 2016 16:09:07 +0100
Subject: [Public WebGL] ETC1 support in desktop WebGL

Thanks Geoff. Still wondering: will there be a way to know whether a format is truly supported by the GPU or is software-decompressed?
From: Geoff Lang (geo...@)
Date: Thu, 15 Sep 2016 15:22:53 +0000
Subject: [Public WebGL] ETC1 support in desktop WebGL

It's unlikely that there will be a way to know. ANGLE only decompresses when the spec requires it to, and it wouldn't be possible to know if the driver is doing the same.
From: Maksims Mihejevs (max...@)
Date: Thu, 15 Sep 2016 16:30:53 +0100
Subject: [Public WebGL] ETC1 support in desktop WebGL

Perhaps building a GPU vendor blacklist could help to work around this problem.
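One way such a blacklist could be probed is via the WEBGL_debug_renderer_info extension, which is real. The pattern list below is purely illustrative; a production blacklist would have to be maintained out-of-band.

    // Heuristic sketch: guess whether the GPU lacks native ETC1 support.
    function probablyLacksNativeETC1(gl) {
      const info = gl.getExtension('WEBGL_debug_renderer_info');
      if (!info) return undefined; // Extension unavailable; can't tell.
      const renderer = gl.getParameter(info.UNMASKED_RENDERER_WEBGL);
      // Hypothetical heuristic: common desktop GPU names imply no native ETC1.
      return /NVIDIA|AMD|Radeon|Intel/i.test(renderer);
    }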
From: Steve Baker (ste...@)
Date: Thu, 15 Sep 2016 10:34:10 -0500
Subject: [Public WebGL] ETC1 support in desktop WebGL

ETC1 is a subset of ETC2: you can hand an ETC1 texture to ETC2 and it "just works". So if there is ETC2 support, then ETC1 support is implied.

-- Steve
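A minimal sketch of Steve's point on a WebGL 2.0 context: ETC1-encoded blocks are valid COMPRESSED_RGB8_ETC2 data and can be uploaded under the ETC2 enum. Per the rest of this thread, the upload may still be software-decompressed on desktop hardware.

    // Upload ETC1 blocks using the WebGL 2.0 ETC2 RGB internal format.
    function uploadETC1AsETC2(gl2, width, height, etc1Blocks /* Uint8Array */) {
      const tex = gl2.createTexture();
      gl2.bindTexture(gl2.TEXTURE_2D, tex);
      gl2.compressedTexImage2D(gl2.TEXTURE_2D, 0, gl2.COMPRESSED_RGB8_ETC2,
          width, height, 0, etc1Blocks);
      return tex;
    }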
From: Kenneth Russell (kbr...@)
Date: Thu, 15 Sep 2016 14:41:17 -0700
Subject: [Public WebGL] ETC1 support in desktop WebGL

Max, Will, all,

First, apologies for the confusing behavior here.

The goal of WebGL 2.0 has been to expose the ES 3.0 spec in as unmodified a form as possible. The fact that the compressed texture formats in the ES 3.0 spec aren't supported natively on all desktop platforms is a problem.

Here are a few possible options:

1) Remove the EAC/ETC2 compressed texture format enums from the WebGL 2.0 core spec, and state that the https://www.khronos.org/registry/webgl/extensions/WEBGL_compressed_texture_es3/ extension applies to WebGL 2.0 as well. This would require WebGL 2.0 applications to explicitly enable this extension before they gain access to these compressed texture formats. Provide an ETC decompression library to applications, perhaps via a WebAssembly module.

2) Decompress these texture formats on platforms which don't support them natively. (This is what's currently being done.) Add a getParameter query enum like ETC_TEXTURES_DECOMPRESSED_WEBGL to tell the application that this is happening.

3) Transcode these textures into DXT when that compressed texture format is available. This is a lossy operation, so another getParameter query like ETC_TEXTURES_TRANSCODED_WEBGL would also be necessary.

Do you have a preference for which direction to go? For simplicity and clarity, I would prefer (1) if a change is going to be made.

-Ken
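From the application side, option (2)'s proposed query might be used as sketched below. ETC_TEXTURES_DECOMPRESSED_WEBGL exists only as a proposal in the message above, not in any shipped WebGL implementation.

    // Sketch only: the enum below is proposed, not real.
    function etcIsEmulated(gl) {
      // A true result would mean the implementation decompresses ETC uploads
      // in software, so the expected VRAM savings are illusory.
      return gl.getParameter(gl.ETC_TEXTURES_DECOMPRESSED_WEBGL) === true;
    }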
From: Maksims Mihejevs (max...@)
Date: Fri, 16 Sep 2016 00:06:33 +0100
Subject: [Public WebGL] ETC1 support in desktop WebGL

Hi Ken,
The motivation behind ETC2 is a good one. We actually do want a universally available compressed format across platforms that supports single-channel textures, RGB/RGBA, floating-point and HDR textures, a way to compress normal maps while preserving normalized vectors (usually done by storing the X and Y of the normal in channels that do not bleed into each other), and many other things. ETC2 does look promising on that front. We haven't run our quality checks yet, but we're looking forward to doing so soon.

As mentioned before, though, all of this only matters if it actually delivers on the point of compressed formats. Most of this is achievable with PNG and JPEG using various tricks, and it is very hard to beat a good JPEG on file size anyway. The point of compressed formats is to get VRAM usage down to a minimum, so that textures can be streamed to mobile devices without crashing the browser, which is what we see today when PNG and JPEG are used. Almost all mobile platforms today support one compressed format or another. It is indeed not easy to support all of them, but that is why we are working hard on a unified solution where we do the heavy lifting by compressing on our servers and offering a download in whichever format the platform prefers. If none is supported, PNG/JPEG are the saviour! :)

The third option would be the worst, for many reasons, and isn't really possible: many ETC2/EAC formats don't map onto DXT formats, and the quality loss from two rounds of lossy compression would simply be too much.

The second option, which is the current behaviour, defeats the purpose of compressed textures in exchange for relieving the developer of providing a fallback to a universal format (JPEG/PNG). But in reality developers will have to support WebGL 1.0 anyway, as it will be years until WebGL 2.0 reaches 95%+ coverage. So even if ETC1/ETC2/EAC becomes easy, PNG/JPEG or other compressed textures remain a must-have.

The first option is my preference too. If a platform does not support the compressed format the developer's textures are in, either the developer provides alternatives prepared offline, or ships a decompressor for the format they have. This is true of any compressed format: for an easy port from iOS, for example, the textures will be in PVR, and the developer can ship their own decompressor. If I'm not mistaken, this is what Unity already does by delivering DXTs to the web in bundles and decompressing them when unsupported, which is why they struggle with mobile WebGL support.

Either way, there should be sensible means for a developer to choose how VRAM gets filled: with compressed textures in one format or another, or by shipping a decompressor to guarantee that all browsers and platforms are covered.

This is all great stuff you're doing anyway, and we're just getting our first iterations of asset variants (compressed textures) out; it is already very promising. We've been able to load things on an iPhone 5S that I never imagined could load at all. :)

Kind Regards,
Max
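A minimal sketch of the option (1) flow Max prefers: upload natively when the extension is present, otherwise fall back to a developer-shipped decoder. decodeETC1ToRGBA is a hypothetical stand-in for such a decoder (e.g. a WebAssembly module), not a real library call.

    // Compressed upload with a CPU-decode fallback (assumptions noted above).
    function uploadWithFallback(gl, width, height, etc1Blocks) {
      const tex = gl.createTexture();
      gl.bindTexture(gl.TEXTURE_2D, tex);
      const etc1 = gl.getExtension('WEBGL_compressed_texture_etc1');
      if (etc1) {
        // Compressed upload; stays compressed if the driver supports it natively.
        gl.compressedTexImage2D(gl.TEXTURE_2D, 0,
            etc1.COMPRESSED_RGB_ETC1_WEBGL, width, height, 0, etc1Blocks);
      } else {
        // Decode on the CPU and upload plain RGBA instead.
        const rgba = decodeETC1ToRGBA(etc1Blocks, width, height); // hypothetical
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
            gl.RGBA, gl.UNSIGNED_BYTE, rgba);
      }
      return tex;
    }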
From: Kenneth Russell (kbr...@)
Date: Thu, 15 Sep 2016 16:33:37 -0700
Subject: [Public WebGL] ETC1 support in desktop WebGL

Hi Max,

Thanks for your feedback and for raising this issue. https://github.com/KhronosGroup/WebGL/issues/2030 has been filed and we'll address it before enabling WebGL 2.0 by default in browsers.

-Ken
From: Kenneth Russell (kbr...@)
Date: Thu, 15 Sep 2016 16:51:12 -0700
Subject: [Public WebGL] ETC2Comp texture compressor

WebGL community,

Relevant to today's earlier discussion, and of interest: Google today open-sourced a super-fast compressor for ETC2 format textures. This will be useful for asset pipelines targeting WebGL 2.0. The blog post is here:

https://medium.com/@duhroach/building-a-blazing-fast-etc2-compressor-307f3e9aad99#.vz91iw2ft

and the Github repository:

https://github.com/google/etc2comp

-Ken
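For pipelines adopting ETC2, the output size is easy to sanity-check before upload: both ETC2 block layouts cover 4x4 texels, at 8 bytes per block for COMPRESSED_RGB8_ETC2 and 16 bytes per block for COMPRESSED_RGBA8_ETC2. A small helper, with the arithmetic shown:

    // Expected byte size of one ETC2 mip level.
    function etc2ByteSize(width, height, hasAlpha) {
      const blocksX = Math.ceil(width / 4);
      const blocksY = Math.ceil(height / 4);
      return blocksX * blocksY * (hasAlpha ? 16 : 8);
    }
    // etc2ByteSize(256, 256, false) === 32768  // 4 bits per texel
    // etc2ByteSize(256, 256, true)  === 65536  // 8 bits per texel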
From: Maksims Mihejevs (max...@)
Date: Fri, 16 Sep 2016 01:54:57 +0100
Subject: [Public WebGL] ETC2Comp texture compressor

This is just great! At PlayCanvas we've been hunting for the best-quality, fastest compression tools for every compressed texture format WebGL can use (DXT, ETC1, PVR, ATC). Because we provide services in the cloud, we have to run compression for developers' textures on our servers, and if it takes too long across many users it simply stops being viable. So a fast compressor is exactly what we need! We will look at using it to provide ETC2 support in the PlayCanvas tools and engine for the WebGL 2.0 release.

Amazing work, guys!

-Max

From: Steve Baker (ste...@)
Date: Thu, 15 Sep 2016 21:41:05 -0500
Subject: [Public WebGL] ETC1 support in desktop WebGL

I would strongly plead for "NOT (3)". ETC is a pretty nasty format: lossy as all hell, really poor quality. Now decompress into another, differently-lossy format and throw it up on a (typically) much larger, higher-resolution screen, and the results will be decidedly un-pretty!

(2) gets my vote. It lets mobile-friendly pages work on the desktop without any messing around, which is fine for those who aren't too concerned about the horrors of ETC. Those who do care, and can stream their images in a wider range of formats, can then query the flag to decide whether ETC is what they want or not.

-- Steve
> On Thu, Sep 15, 2016 at 8:30 AM, Maksims Mihejevs wrote:
>> Perhaps building a GPU vendor blacklist could help to work around this
>> problem.
>>
>> On 15 September 2016 at 16:22, Geoff Lang wrote:
>>> It's unlikely that there will be a way to know. ANGLE only
>>> decompresses when it is required to by spec, and it wouldn't be
>>> possible to know if the driver is doing the same.
>>>
>>> On Thu, Sep 15, 2016 at 11:09 AM Maksims Mihejevs wrote:
>>>> Thanks Geoff. Still wondering - will there be a way to know if it is
>>>> truly supported by the GPU or is software-decompressed?
>>>>
>>>> On 15 September 2016 at 16:05, Geoff Lang wrote:
>>>>> If support for ETC1 on Windows is enabled without hardware support,
>>>>> it is likely a mistake. ANGLE has to support ETC2 compressed formats
>>>>> because ES3 and WebGL2 require us to, regardless of hardware
>>>>> support.
>>>>>
>>>>> I think that compressing to DXT should always be preferred. It is
>>>>> only available by extension, so hardware support is implied, and it
>>>>> should be available on all desktop platforms.
>>>>>
>>>>> [earlier messages in the thread snipped]
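For reference, option (2) as proposed would be consumed from application
code roughly like this - a hypothetical sketch only, since
ETC_TEXTURES_DECOMPRESSED_WEBGL is an enum proposed in this thread, not a
shipped API:

    // Hypothetical: ask whether ETC textures would be silently
    // decompressed, and prefer a DXT asset variant if so.
    // ETC_TEXTURES_DECOMPRESSED_WEBGL does not exist in any real WebGL.
    var etcIsEmulated = gl.getParameter(gl.ETC_TEXTURES_DECOMPRESSED_WEBGL);
    var s3tc = gl.getExtension('WEBGL_compressed_texture_s3tc');
    var variant = !etcIsEmulated ? 'etc' : (s3tc ? 'dxt' : 'png');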
From ste...@ Thu Sep 15 19:52:09 2016
From: ste...@ (Steve Baker)
Date: Thu, 15 Sep 2016 21:52:09 -0500
Subject: [Public WebGL] ETC1 support in desktop WebGL
In-Reply-To:
References:
Message-ID: <1f6d8cbd4ab8092ccc671a21b7cef21f.squirrel@webmail.sjbaker.org>

Hmmm - thinking about this a bit more: perhaps, for option (2), instead
of the super-specific "ETC_TEXTURES_DECOMPRESSED_WEBGL" flag, we should
have more general queries of the form "If I did this, would it be lossy?"
and "If I did this, would the memory consumption be larger than I'd
predict from the request I made?" - these flags could be queried after
doing some sort of proxy texture request.

So PNG-in, DXT-out would trigger the "IS_LOSSY" flag. JPG-in, RGB888-out
would only trigger the "IS_LARGER" result: the losses are already there
in the JPG and decompression adds no new ones, but the image does get
bigger in memory.

This would be a nicer "forever feature" that would provide the needed
information for any additional file-format hacks that may turn out to be
needed in the future.

-- Steve

Kenneth Russell wrote:
> [options (1)-(3) quoted again; snipped - see above]
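A sketch of how Steve's generalized query might look from application
code - entirely hypothetical; neither the proxy-request call nor the
IS_LOSSY/IS_LARGER flags exist in any WebGL, and all names here are
invented for illustration:

    // Hypothetical API: ask the implementation what would happen if this
    // texture were actually created, without allocating it.
    gl.proxyCompressedTexImage2D(gl.TEXTURE_2D, 0, format, w, h, 0, data);
    if (gl.getParameter(gl.PROXY_IS_LOSSY)) {
      // a transcode (e.g. ETC -> DXT) would degrade quality
    }
    if (gl.getParameter(gl.PROXY_IS_LARGER)) {
      // storage would exceed the compressed size (e.g. decompression)
    }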
From max...@ Fri Sep 16 01:12:53 2016
From: max...@ (Maksims Mihejevs)
Date: Fri, 16 Sep 2016 09:12:53 +0100
Subject: [Public WebGL] ETC1 support in desktop WebGL
In-Reply-To: <540d371b75dc882bd3bb9b2627fb83c6.squirrel@webmail.sjbaker.org>
References: <540d371b75dc882bd3bb9b2627fb83c6.squirrel@webmail.sjbaker.org>
Message-ID:

> (2) gets my vote. It lets mobile-friendly pages work on the desktop
> without any messing around - which is fine for those who aren't too
> concerned about the horrors of ETC.
>
> Those who do care and can stream in their images in a wider range of
> formats can then query the flag to decide whether ETC is what they want
> or not.

But that simply still won't work, because there will never be 100% ETC
coverage on desktops. So developers will in any case have to provide
universal or other specific formats to cover the whole WebGL platform. As
much as this utopian idea is great, it won't be the reality.

On 16 Sep 2016 3:41 a.m., "Steve Baker" wrote:
> I would strongly plead for "NOT (3)".
>
> ETC is a pretty nasty format - lossy as all hell, really poor quality.
> [rest of quoted message snipped - see above]
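The "universal or other specific formats" cascade Max describes reduces
to probing what the platform actually advertises - a minimal sketch,
where the priority order and variant names are illustrative but the
extension strings are the real registered ones:

    // Pick the best available asset variant, falling back to PNG, which
    // the browser can always decode.
    function pickTextureVariant(gl) {
      if (gl.getExtension('WEBGL_compressed_texture_s3tc'))  return 'dxt';
      if (gl.getExtension('WEBGL_compressed_texture_pvrtc')) return 'pvr';
      if (gl.getExtension('WEBGL_compressed_texture_etc1'))  return 'etc1';
      return 'png';
    }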
From pya...@ Fri Sep 16 01:58:27 2016
From: pya...@ (Florian Bösch)
Date: Fri, 16 Sep 2016 10:58:27 +0200
Subject: [Public WebGL] ETC1 support in desktop WebGL
In-Reply-To:
References:
Message-ID:

On Thu, Sep 15, 2016 at 11:41 PM, Kenneth Russell wrote:
> 1) Remove the EAC/ETC2 compressed texture format enums from the WebGL
> 2.0 core spec [...]

I don't think compression functionality that isn't actually supported
should be advertised, so this seems to me to be a good option. The whole
purpose of on-GPU compression is to save on VRAM and bandwidth, for the
obvious reasons. If it all ends up blown up to plain texels on the GPU,
there's hardly any point other than ritual adherence to the spec. I think
practicality should take precedence.

> 2) Decompress these texture formats on platforms which don't support
> them natively [...]

Silently blowing up VRAM unless checked (as is currently the case) sounds
like a pretty bad idea.

> 3) Transcode these textures into DXT when that compressed texture format
> is available [...]

It'd be good to have the ability to *explicitly* transcode, but it should
perhaps not be silent.

On the larger topic of on-client conversion: the lossy transfer formats
(JPEG, WebP, etc.) are vastly friendlier to network bandwidth than any
GPU-compressed format. So I don't think that assembling all the assets in
a GPU-friendly way before transmitting them over the network is
completely satisfactory. While in theory many formats could be supported
client-side by writing some JS (or WebAssembly, or whatever), and that
would be somewhat practical, it would certainly be convenient if UAs
offered built-in routines to encode/decode the various GPU compressed
formats (just as it would be helpful if they exposed the zlib
inflate/deflate they already ship anyway).
From chr...@ Fri Sep 16 02:58:50 2016
From: chr...@ (Christophe Riccio)
Date: Fri, 16 Sep 2016 11:58:50 +0200
Subject: [Public WebGL] ETC1 support in desktop WebGL
In-Reply-To:
References:
Message-ID:

> 1) Remove the EAC/ETC2 compressed texture format enums from the WebGL
> 2.0 core spec, and state that the
> https://www.khronos.org/registry/webgl/extensions/WEBGL_compressed_texture_es3/
> extension applies to WebGL 2.0 as well. [...]

I think this is the better option. We would not use ETC2 on platforms
that don't actually support it.

From nic...@ Fri Sep 16 08:32:35 2016
From: nic...@ (Nicolas Capens)
Date: Fri, 16 Sep 2016 15:32:35 +0000
Subject: [Public WebGL] ETC1 support in desktop WebGL
In-Reply-To:
References:
Message-ID:

I don't think removing these formats from WebGL 2.0 is a good idea. It
would put a dent into the strong relationship with OpenGL ES 3.0, and
it's a slippery slope when we start to remove things for reasons other
than security or not being able to emulate support at all. ANGLE already
works around a lot of hardware limitations.

Note that the ES3 spec makes no promise that compressed formats actually
stay compressed in graphics memory. There are probably GPUs out there
that don't have native support either but do claim ES3 conformance. They
could transcode to another, lesser compressed format if that conserves
accuracy, or they might do on-demand decompression with an upper limit on
how much RAM it consumes, or they might have a dedicated cache for it
that can't sustain the same bandwidth as other formats when there's low
locality, or they can page out to system RAM, etc. What I'm trying to say
is, compressed formats are "supported" to various degrees, and I don't
think it's up to us to decide where to draw the line.

Another case in point: on ~10% of systems we fall back to using
SwiftShader for WebGL 1.0. It will initially be an even greater
percentage for 2.0. Even if we decide to move ETC formats to an
extension, SwiftShader will expose support for that extension regardless.
Note that there, DXT or any other compressed format is also always
decompressed. It would be very restrictive to remove support for all of
them. And again, there are many alternatives to full decompression for
adequately supporting them on the CPU.

Likewise, nothing would prevent other WebGL implementers from exposing
such extensions with GPU rendering even when they do decompress the
formats, just to claim to support more features, or to work around a bug
in an app making wrong assumptions about the availability of these
formats. So there's no guarantee that it's actually going to solve the
issue for people who care about how it's implemented.

I believe there are two kinds of WebGL users: developers who just create
something simple and don't test it on many implementations but still
expect it to run everywhere, and developers who want to push the
envelope. I think the latter are the ones who wouldn't mind too much if
they had to query the implementation to get more information about
potential caveats. We don't want to make things harder for the simple
case.

So I'm rather strongly in favor of option (2), but it would probably have
to be something more granular than a single boolean.
On Fri, Sep 16, 2016 at 6:00 AM, Christophe Riccio wrote:
> [quoted message snipped - see above]

From ste...@ Fri Sep 16 09:02:07 2016
From: ste...@ (Steve Baker)
Date: Fri, 16 Sep 2016 11:02:07 -0500
Subject: [Public WebGL] ETC1 support in desktop WebGL
In-Reply-To:
References:
Message-ID:

Florian Bösch wrote:
> On a larger topic of on-client conversion. The lossy transfer formats
> (jpeg, webp etc.) are vastly more friendly on the network bandwidth than
> any on GPU compressed format. So I don't think that assembling all the
> assets in a GPU friendly way before transmitting them through the
> network is completely satisfactory.

It's not *completely* satisfactory - but in some applications it is the
best compromise.

The trouble is that we have lossy compression on the network side *and*
lossy compression in the GPU - and when you convert one lossy format into
another (especially if you're trying to do it rapidly) you get some
pretty severe image artifacts - much more than either of the individual
losses.

Since you can't simultaneously have optimal network bandwidth *and*
optimal GPU capacity *and* decent image quality, it follows that
applications have to juggle these trade-offs. The needs of every
application are different... and even within a single application,
different trade-offs apply to icons, RGB textures, and stuff like normal
maps and height fields.

In the WebGL API, we have yet another trade-off - making life easy for
simple applications and relatively unsophisticated developers. That
produces an understandable desire to hide ugliness and incompatibilities
by silently "doing the right thing"... which is quite the opposite of
"letting the app decide how to balance the trade-offs".

So either we have to break everything down into separate steps that can
be controlled in application code (bad for simple applications) - or we
try to guess "the right thing" (bad for sophisticated applications) - or
we lay out the conversion options, let the application decide, and if it
doesn't, default to the most "reasonable" choice.

> While in theory many formats could be supported client side by writing
> some JS (or webassembly or whatever), and that would be somewhat
> practical, it would certainly be convenient if UAs offered routines to
> encode/decode various GPU compressed formats built-in (just as it would
> be helpful if they exposed zlib inflate/deflate that they already ship
> anyway).

Yeah - that's true. Providing the more common conversions in the API
would be a huge win - both on complexity grounds and reliability.

The thing that has to be avoided is the assumption that format
conversions can be performed silently without the application's
knowledge.
-- Steve

From kbr...@ Fri Sep 16 10:20:40 2016
From: kbr...@ (Kenneth Russell)
Date: Fri, 16 Sep 2016 10:20:40 -0700
Subject: [Public WebGL] ETC1 support in desktop WebGL
In-Reply-To:
References:
Message-ID:

Thanks everyone for your feedback. It sounds like the consensus is toward
not advertising these compressed texture formats when the WebGL
implementation knows that they're going to be decompressed. (In some
situations the browser won't be able to detect this, for example on
native ES3 implementations that actually decompress under the hood.)
We're going to proceed with extracting these enums into an extension for
WebGL 1.0 and 2.0 per https://github.com/KhronosGroup/WebGL/issues/2030 .

-Ken

On Fri, Sep 16, 2016 at 9:02 AM, Steve Baker wrote:
> [quoted message snipped - see above]
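Under that plan, ETC/EAC support becomes an explicit opt-in. A sketch of
the check an engine would perform, using the extension name as it appears
in this thread (the registry's naming was later revised, so treat the
string as illustrative):

    // Gate ETC2/EAC usage on the extension rather than assuming WebGL
    // 2.0 core support; the format enums hang off the extension object.
    var etc = gl.getExtension('WEBGL_compressed_texture_es3');
    if (etc) {
      gl.compressedTexImage2D(gl.TEXTURE_2D, 0, etc.COMPRESSED_RGB8_ETC2,
                              width, height, 0, data);
    } else {
      // fall back to another variant (DXT, PNG, ...)
    }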
From max...@ Fri Sep 16 11:10:50 2016
From: max...@ (Maksims Mihejevs)
Date: Fri, 16 Sep 2016 19:10:50 +0100
Subject: [Public WebGL] ETC1 support in desktop WebGL
In-Reply-To:
References:
Message-ID:

> ANGLE already works around a lot of hardware limitations.

It might be great in most cases, but sometimes there are issues that we
as engine developers have to identify through loads of debugging and deep
investigation, only to realise that those fallbacks lead to problems.
We've found, for example, that in some cases float32 textures are not
actually 32 bits - a few bits are missing somewhere - and while that
might be acceptable in some cases, in others it leads to artefacts in
shaders that rely heavily on packing and on exact assumptions about
texture float precision. We had to find ways to test for this through
hacks, to identify those not-fully-32-bit textures and treat the platform
as unsuitable for 32-bit textures. There is a line not to cross, and that
line is defeating the purpose of a feature.

> Another case in point: on ~10% of systems we fall back to using
> SwiftShader for WebGL 1.0. It will initially be an even greater
> percentage for 2.0.

Please, please, let's not do this. We have found that in practice
SwiftShader simply does not work. Performance drops so low, without even
letting users know why, that developers come to you with a simple app
that gives 30fps on an iPhone 4S and ask why it runs at 3fps on their
desktop. It would be better if it didn't run at all.

If WebGL is built on hack upon hack compared to native platforms, just in
favour of convenience, it will grow into a very frustrating, poorly
performing platform that is a challenge to develop for. Technical
limitations can be overcome by investing more time in a solution, but not
being able to deal with internal hacks will simply ruin it.

> I believe there are two kinds of WebGL users: developers who just create
> something simple and don't test it on many implementations but still
> expect it to run everywhere, and developers who want to push the
> envelope. [...] We don't want to make things harder for the simple case.

Agreed - and we don't want the simple cases questioning why their simple
app performs poorly and uses all the VRAM. In the early 00s, developers
would first build an engine and then make a game. Today you do one or the
other, and most developers choose existing frameworks and engines - which
very much applies to the "simple case". Tools and engine providers need
control over the platform, and those hacks actually create more work than
they save: when they misbehave, developers have to find ways to work
around them, and that is very hard work, to be honest.

Moving ETC into a separate extension has benefits for the whole WebGL
platform, including WebGL 1.0, as some hardware with only WebGL 1.0 may
support ETC2/EAC - and that is good for the whole WebGL platform.

Kind Regards,
Max

On 16 September 2016 at 16:32, Nicolas Capens wrote:
> [quoted message snipped - see above]
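There is at least a coarse way for an engine to detect the SwiftShader
fallback - a sketch using the real WEBGL_debug_renderer_info extension;
note the extension may be unavailable, and string matching is a fragile
heuristic:

    // If the unmasked renderer string names a software rasterizer, scale
    // down texture budgets and effects (or warn the user).
    var dbg = gl.getExtension('WEBGL_debug_renderer_info');
    if (dbg) {
      var renderer = gl.getParameter(dbg.UNMASKED_RENDERER_WEBGL);
      if (/SwiftShader|llvmpipe|Software/i.test(renderer)) {
        // treat as software rendering
      }
    }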
From khr...@ Fri Sep 16 12:53:50 2016
From: khr...@ (Mark Callow)
Date: Fri, 16 Sep 2016 12:53:50 -0700
Subject: [Public WebGL] ETC1 support in desktop WebGL
In-Reply-To:
References:
Message-ID: <34A28B08-D771-4CDE-8770-32BA95F29D0C@callow.im>

> On Sep 16, 2016, at 10:20 AM, Kenneth Russell wrote:
>
> Thanks everyone for your feedback. It sounds like the consensus is
> toward not advertising these compressed texture formats when the WebGL
> implementation knows that they're going to be decompressed. [...]

I don't see a consensus. In this thread I see 2 people expressing support
for #1, and 1 person definitely plus 1 person implying support for #2.
Almost balanced. I support #2 with the modified queries proposed by Steve
Baker, so that makes 2.5 people. No doubt Ken supports #1, so we're still
pretty much balanced.

> > (2) gets my vote. It lets mobile-friendly pages work on the desktop
> > without any messing around [...]
>
> But that simply still won't work, because there will never be 100% ETC
> coverage on desktops. [...]

I don't see why it "won't work". Those who care about ultimate efficiency
or have a lot of VRAM pressure can use the queries to determine what
format to use. I don't see how this is worse than determining the same
information by way of whether an extension exists. This way, those who
aren't stressing the devices so much can just use ETC2, letting the
browser decompress when necessary, and have it work anywhere.

Regards

-Mark

From aap...@ Fri Sep 16 13:22:20 2016
From: aap...@ (Austin Appleby)
Date: Fri, 16 Sep 2016 13:22:20 -0700
Subject: [Public WebGL] ETC1 support in desktop WebGL
In-Reply-To: <34A28B08-D771-4CDE-8770-32BA95F29D0C@callow.im>
References: <34A28B08-D771-4CDE-8770-32BA95F29D0C@callow.im>
Message-ID:

#1. Secretly doing expensive operations behind the scenes is always a bad
idea.

On Fri, Sep 16, 2016 at 12:53 PM, Mark Callow wrote:
> [quoted message snipped - see above]
From aap...@ Fri Sep 16 13:25:26 2016
From: aap...@ (Austin Appleby)
Date: Fri, 16 Sep 2016 13:25:26 -0700
Subject: [Public WebGL] ETC1 support in desktop WebGL
In-Reply-To:
References: <34A28B08-D771-4CDE-8770-32BA95F29D0C@callow.im>
Message-ID:

If you want WebGL to automatically decompress ETC to RGBA32 for you, then
you should have to opt into that expensive operation via an extension.

On Fri, Sep 16, 2016 at 1:22 PM, Austin Appleby wrote:
> [quoted message snipped - see above]
From pya...@ Fri Sep 16 13:26:41 2016
From: pya...@ (Florian Bösch)
Date: Fri, 16 Sep 2016 22:26:41 +0200
Subject: [Public WebGL] ETC1 support in desktop WebGL
In-Reply-To:
References:
Message-ID:

Doing implicit/opaque transmogrification and relying on users to check
state is a bad idea. It's a bad idea when drivers do it, and it's a bad
idea when UAs do it.

There is one simple way to solve this whole, entire problem that does not
have any drawback whatsoever, and that way is:

1. Do not claim native support for something that doesn't exist.
2. Offer a set of explicit APIs to convert various GPU compressed
formats, other compressed formats and raw data on the client.

That way everybody gets to choose what they want to happen, and nobody is
forced into any behavior they don't want.

From pya...@ Fri Sep 16 13:32:38 2016
From: pya...@ (Florian Bösch)
Date: Fri, 16 Sep 2016 22:32:38 +0200
Subject: [Public WebGL] ETC1 support in desktop WebGL
In-Reply-To:
References:
Message-ID:

If, for whatever reason, ETC support is claimed when it isn't actually
there, this is the worst that can, and will, happen: the program fetches
ETC assets in good faith, in order to avoid converting from one
compressed format into another. ETC consumes more network bandwidth than
JPEG, but since ETC support is claimed, it's the "best" choice, right?
No. Because not only does it consume more bandwidth, but since it's not
actually supported, it also consumes exactly as much VRAM as if you had
just uploaded plain texels from a JPEG. So you get no benefit whatsoever.
You have no clue that you just hit a worst-case scenario. And the cherry
on top is that you will experience a higher rate of context failures
and/or GPU process crashes AND you used more network bandwidth than
necessary to achieve that outcome.

How bad do you really want it, hm?

On Fri, Sep 16, 2016 at 10:26 PM, Florian Bösch wrote:
> [quoted message snipped - see above]
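The explicit conversion routines proposed in point 2 above never existed;
a purely hypothetical sketch of the shape such an API could take, where
decodeEtc1ToRgba is an invented helper, not a real browser function:

    // Hypothetical: decode a compressed payload to raw RGBA8 on the
    // client, then upload uncompressed - making the VRAM cost explicit
    // instead of hidden behind a falsely "supported" format.
    var rgba = decodeEtc1ToRgba(etcData, width, height); // invented
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, rgba);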
From pya...@ Fri Sep 16 13:50:26 2016
From: pya...@ (Florian Bösch)
Date: Fri, 16 Sep 2016 22:50:26 +0200
Subject: [Public WebGL] ETC1 support in desktop WebGL
In-Reply-To:
References: <34A28B08-D771-4CDE-8770-32BA95F29D0C@callow.im>
Message-ID:

On Fri, Sep 16, 2016 at 10:47 PM, Nicolas Capens wrote:
> On Fri, Sep 16, 2016 at 4:22 PM, Austin Appleby wrote:
>> #1. Secretly doing expensive operations behind the scenes is always a
>> bad idea.
>
> It's highly subjective whether something is expensive. Something that
> seems expensive to one person might be a blessing to be taken care of
> automatically for another.

If taking care of it means completely defeating its purpose, then nobody
will see that as a blessing.

> ANGLE has been doing several kinds of API translations and data
> conversions for the longest time. They're documented, so it might not be
> considered "secret", but neither would option (2) be hiding anything
> from people looking for this information.

Which aren't comparable at all.

> In contrast, option (1) provides no guarantees about whether or not
> something undesirable might be going on behind the scenes.

Nothing offers any guarantee that a driver hasn't decided to shoot its
users in the foot, in none of the scenarios. But one at least reduces the
incidence of foot-shooting and is opt-in to the party.

From max...@ Fri Sep 16 14:28:47 2016
From: max...@ (Maksims Mihejevs)
Date: Fri, 16 Sep 2016 22:28:47 +0100
Subject: [Public WebGL] ETC1 support in desktop WebGL
In-Reply-To:
References: <34A28B08-D771-4CDE-8770-32BA95F29D0C@callow.im>
Message-ID:

I'm very much with Florian on this.

On 16 Sep 2016 9:50 p.m., "Florian Bösch" wrote:
> [quoted message snipped - see above]

From max...@ Fri Sep 16 14:37:56 2016
From: max...@ (Maksims Mihejevs)
Date: Fri, 16 Sep 2016 22:37:56 +0100
Subject: [Public WebGL] ETC1 support in desktop WebGL
In-Reply-To:
References:
Message-ID:

> Really, what's so bad about option (2)?

It will use 6 times more VRAM and 10-40% more bandwidth in cases where
clearly a JPEG or another truly GPU-supported compressed format should
have been loaded. Extra effort needs to be made to make sure this won't
happen, and that might be harder than it seems.
Doing things silently will confuse and annoy developers, especially
newbies, leading to extra effort to find out why it behaves like that
when the extension is advertised as OK. Estimating VRAM usage becomes
harder as well.

On 16 Sep 2016 10:29 p.m., "Nicolas Capens" wrote:
> As I said, "actual" support for ETC is hard to define. There are many
> viable implementations, each with some compromises. What works for one
> person may not be desirable for another. So in practice you'll always
> have to test for that.
>
> Also, your worst case scenario doesn't sound so bad to me. People hit
> worst case paths in ANGLE and in drivers all the time. It doesn't mean
> they're doomed. Most applications will run fine. And when you're really
> pushing the envelope, then you should test things extensively and take
> advantage of all the information you can query. Note that
> implementations can also provide run-time warnings about caveats. So
> it's not like you would have no clue.
>
> Last but not least, your worst case scenario can very much still happen
> with option (1). Granted, it's not eliminated with option (2) even in
> the case where the support is checked, but at least you're not taking
> things away from people who don't care if it gets decompressed or not.
> They would now have to provide a solution themselves, which will likely
> be less optimal. Moreover, a wrong assumption will result in failures,
> for developers who may not have the means to test and maintain things on
> many platforms. With the support being unconditional but having a
> potential performance caveat, the burden is on those who care about
> these kinds of details.
>
> Really, what's so bad about option (2)?
>
> [remainder of quoted thread snipped]
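The arithmetic behind Max's "6 times" figure is straightforward - a
back-of-envelope sketch for a single 1024x1024 texture, ignoring mipmaps
(which scale everything by roughly 4/3):

    // VRAM cost of truly supported vs. silently decompressed ETC1.
    var texels = 1024 * 1024;
    var etc1Bytes  = texels * 4 / 8;  //   512 KiB: 4 bits/texel on GPU
    var rgb8Bytes  = texels * 3;      //  3072 KiB: decompressed RGB8, 6x
    var rgba8Bytes = texels * 4;      //  4096 KiB: decompressed RGBA8, 8x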
From pya...@ Fri Sep 16 14:40:26 2016 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Fri, 16 Sep 2016 23:40:26 +0200 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: References: Message-ID:

On Fri, Sep 16, 2016 at 11:29 PM, Nicolas Capens wrote:
> As I said, "actual" support for ETC is hard to define. There are many viable implementations, each with some compromises. What works for one person may not be desirable for another. So in practice you'll always have to test for that.

There is no test for it. There's just not making it worse than it already is.

> Also, your worst case scenario doesn't sound so bad to me.

So it doesn't sound bad to you to:

- Consume more network bandwidth
- Get worse picture quality
- Get unexpectedly high incidences of context loss/GPU process crashes/tab reloads
- Do extra work to achieve that outcome
- Have no clue why you got screwed

It sounds pretty bad to me...

> People hit worst case paths in ANGLE and in drivers all the time. It doesn't mean they're doomed. Most applications will run fine. And when you're really pushing the envelope then you should test things extensively and take advantage of all the information you can query. Note that implementations can also provide run-time warnings about caveats. So it's not like you would have no clue.

You do have no clue, and the caveats are a bad idea. There is a standard way to discover support of optional capabilities, and if the driver decides to lie, there's nothing the browser can hint at either, so you're as screwed as everybody else. It's just that you have a choice between not being part of the problem and exacerbating it further.

> Last but not least, your worst case scenario can very much still happen with option (1).

Yes it can, but at least the browser didn't help create it.

> Granted, it's not eliminated with option (2)

No, it isn't.

> even in the case where the support is checked, but at least you're not taking things away from people who don't care if it gets decompressed or not. They now have to provide a solution themselves, which will likely be less optimal.

That's why I suggested that UAs offer a set of APIs to convert, in software, between the various formats, so that everybody can make their own choices, and nobody is entrapped.

> Moreover, a wrong assumption will result in failures, for developers who may not have the means to test and maintain things on many platforms. With the support being unconditional but having a potential performance caveat the burden is on those who care about these kinds of details.
>
> Really, what's so bad about option (2)?

That it lures people into a worst case scenario by claiming support that doesn't exist. The worst case scenario is:

- Consuming more network bandwidth than without
- Getting worse picture quality than without
- Getting unexpectedly high incidences of context loss/GPU process crashes/tab reloads
- Having no clue that it is going horribly wrong, or why.

-------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Fri Sep 16 14:43:06 2016 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Fri, 16 Sep 2016 23:43:06 +0200 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: References: Message-ID:

Is it possible in any way to detect when a driver is lying about ETC support?

-------------- next part -------------- An HTML attachment was scrubbed... URL:
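(To put numbers on the "6 times more VRAM" figure used throughout this thread; back-of-the-envelope arithmetic, not from the thread itself. ETC1 stores each 4x4 texel block in 8 bytes, i.e. 4 bits per texel, versus 24 bits per texel for the uncompressed RGB8 it gets decompressed to.)

    // Base level of a 2048x2048 texture, mipmaps ignored:
    var texels     = 2048 * 2048;  // 4,194,304 texels
    var etc1Bytes  = texels / 2;   //  2 MiB at  4 bits/texel
    var rgb8Bytes  = texels * 3;   // 12 MiB at 24 bits/texel -> 6x ETC1
    var rgba8Bytes = texels * 4;   // 16 MiB at 32 bits/texel -> 8x, if padded to RGBA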
From kbr...@ Fri Sep 16 14:59:30 2016 From: kbr...@ (Kenneth Russell) Date: Fri, 16 Sep 2016 14:59:30 -0700 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: <34A28B08-D771-4CDE-8770-32BA95F29D0C@callow.im> References: <34A28B08-D771-4CDE-8770-32BA95F29D0C@callow.im> Message-ID:

On Fri, Sep 16, 2016 at 12:53 PM, Mark Callow wrote:
> On Sep 16, 2016, at 10:20 AM, Kenneth Russell wrote:
>> Thanks everyone for your feedback. It sounds like the consensus is toward not advertising these compressed texture formats when the WebGL implementation knows that they're going to be decompressed. (In some situations the browser won't be able to detect this, for example on native ES3 implementations that actually decompress under the hood.) We're going to proceed with extracting these enums into an extension for WebGL 1.0 and 2.0 per https://github.com/KhronosGroup/WebGL/issues/2030 .
>
> I don't see a consensus. In this thread I see 2 people expressing support for #1, 1 person definitely and 1 person implying support for #2. Almost balanced. I support #2 with the modified queries proposed by Steve Baker, so that makes 2.5 people. No doubt Ken supports #1, so we're still pretty much balanced.

Members of the WebGL working group who haven't spoken up on this public thread have also expressed the desire to keep things simple and not expose new queries unless absolutely necessary -- i.e., just splitting off the ETC compressed texture formats into their own extension, since there's already a well established mechanism for querying the supported compressed texture formats.

-Ken

> (2) gets my vote. It lets mobile-friendly pages work on the desktop without any messing around - which is fine for those who aren't too concerned about the horrors of ETC. Those who do care and can stream in their images in a wider range of formats can then query the flag to decide whether ETC is what they want or not.
>
>> But that simply still won't work, because there will never be 100% ETC coverage on desktops. So developers will in any case have to provide universal or other specific formats to cover the whole WebGL platform. So as much as this utopian idea is great, it actually won't be the reality.
>
> I don't see why it "won't work." Those who care about ultimate efficiency or have a lot of VRAM pressure can use the queries to determine what format to use. I don't see how this is worse than determining the same information by way of whether an extension exists. This way, those who aren't stressing the devices so much can just use ETC2, letting the browser decompress when necessary, and have it work anywhere.
>
> Regards
>
> -Mark

-------------- next part -------------- An HTML attachment was scrubbed... URL: From max...@ Fri Sep 16 15:02:30 2016 From: max...@ (Maksims Mihejevs) Date: Fri, 16 Sep 2016 23:02:30 +0100 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: References: Message-ID:

We've tried to find a way, and unfortunately, as WebGL does not provide ways to measure VRAM usage (for security reasons), we can't simply "measure > upload > measure" to identify whether the VRAM usage is as expected or not.

An extra API to encode/decode by choice has more benefits than just tackling the current issue. But I personally have doubts about that approach too, as encoding is a very, very slow process in most cases, and fast encoders usually suffer from very low quality. Decoding is faster, of course, but it has to be the developer's choice whether to do so or not, as there are still performance concerns associated with it.

On 16 September 2016 at 22:43, Florian Bösch wrote:
> Is it possible in any way to detect when a driver is lying about ETC support?

-------------- next part -------------- An HTML attachment was scrubbed... URL: From jgi...@ Fri Sep 16 15:41:46 2016 From: jgi...@ (Jeff Gilbert) Date: Fri, 16 Sep 2016 15:41:46 -0700 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: References: Message-ID:

I support option (1), since I think it's the most immediately obvious way to implement it. Ideally we can publish a JS library that can decompress ES3 formats into uncompressed data for upload, allowing devs who want it the ability to pretend to have ES3 format support everywhere.

On Fri, Sep 16, 2016 at 3:02 PM, Maksims Mihejevs wrote:
> [snip]

From pya...@ Sat Sep 17 05:27:18 2016 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Sat, 17 Sep 2016 14:27:18 +0200 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: References: Message-ID:

On Sat, Sep 17, 2016 at 5:53 AM, Nicolas Capens wrote:
> Newbies wouldn't run into this that easily. WebGL 1.0 has been fine with hardly any use of compressed textures. People who run into graphics memory limitations are by definition pushing the limits and should be ok with doing some queries to find out which format is most likely to suit their needs the best.

Those queries are the already established queries for supported compression formats/extensions, not a second mechanism. And silent automatisms are never a blessing: if you don't understand how silly it is to deliver a worse picture for more bandwidth, gaining zero benefit but consuming 6x the VRAM, you should test your simple apps on more devices yourself.

Some mobiles will crap out at anything between 30-60MB of VRAM use, and they don't implement context loss, so you'll get an auto-page-reload even if you wanted to handle it. Not making it even easier to get even more auto page reloads would be of kind of... paramount importance, because especially on those platforms that exhibit that behavior, WebGL is getting an extremely bad name among users and developers.

-------------- next part -------------- An HTML attachment was scrubbed... URL:
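(For reference, the context-loss handling Florian alludes to. The webglcontextlost/webglcontextrestored events are the standard API; canvas, stopRenderLoop, startRenderLoop and rebuildGLResources are hypothetical application hooks. His point is that on the problem platforms the lost event never fires; the tab simply reloads, so none of this code gets a chance to run.)

    canvas.addEventListener('webglcontextlost', function (e) {
        e.preventDefault();  // announce that we will handle restoration ourselves
        stopRenderLoop();    // hypothetical: cancel the requestAnimationFrame loop
    }, false);

    canvas.addEventListener('webglcontextrestored', function () {
        rebuildGLResources(); // hypothetical: every texture/buffer/shader is gone
        startRenderLoop();
    }, false);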
From max...@ Sat Sep 17 05:52:05 2016 From: max...@ (Maksims Mihejevs) Date: Sat, 17 Sep 2016 13:52:05 +0100 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: References: Message-ID:

> Like I said before, decompression does not "completely" defeat the purpose of compressed formats. Plenty of people would find it a blessing that there's an automatic fallback that just works

Providing browser/asm.js tools to decompress is convenient, and better, as it gives a choice. For real content developers, compressed textures are the VRAM saviour. For the remaining minority of "simple users", this is a wish for a "one button makes an app" thing.

> As an example of a reverse situation, ANGLE has received a lot of criticism for not supporting wide lines, while it's supported by many desktop OpenGL drivers. It would be inefficient to emulate, and since there's no strict specification for it our implementation could have varied from what's actually desired. So we preferred not to put effort into it. Also, in this case the spec actually allows to not support it.

And that is good. If you supported it and it floored the performance, it would be of no use. And in addition to still needing to implement custom wide lines, developers would have the extra frustration of feeling they must "reinvent something that WebGL does badly".

> In contrast ETC is required to be supported by OpenGL ES 3.0, and so there will undoubtedly be people expecting it to be supported unconditionally by WebGL 2.0

I actually disagree. WebGL developers do know what API it is based on, but they also know that this is only semantics: hardware- and software-wise it is not OpenGL ES, so they know it might, and will, behave somewhat differently. So no, they will not expect WebGL 2.0 to behave 1:1 like OpenGL ES 3.0. Especially taking into account that a lot of WebGL developers do not come from the mobile world and are not familiar with OpenGL ES in the first place.

> Also it's easy to support, so we have far less justification not to support it than for the wide lines case.

I'm a bit sad that you ignore so many reasons provided by multiple people, and simply keep pushing your beliefs on this.

> There are cases where single-channel textures are actually implemented using four-channel storage.

And we actually had problems with this! And not just problems: we had to decide not to use single-channel 8-bit at all, as it is not what it claims to be. This is an example where, by adding "convenience", you actually ruin the purpose of the feature. And even worse: by pretending to support it, you lead developers to choose to store things in separate single-channel textures, and then wonder why 4 x R8 textures used up all the VRAM. Instead, with reasonable limitations and clear communication that on those platforms R8 is not the way, developers would fall back to packing channels into a single RGBA8 texture. We have the same problem in many places, and discovering these things is a huge frustration.

*I am talking here as an actual tools, engine and content developer working with small and big users of WebGL, trying to communicate not just my opinion, but the aggregated opinion of many who have stumbled on these silent issues and then had to wander around for quite some time to figure out "what the hell?".*

> I don't see what would be hard about it. Please elaborate.

I've described multiple times how developers hit hidden issues and misbehaviours, and how hard it is to identify the problem, and even harder to come up with sensible detection and avoidance approaches. Already said more than enough on this.

> Newbies wouldn't run into this that easily. WebGL 1.0 has been fine with hardly any use of compressed textures. People who run into graphics memory limitations are by definition pushing the limits and should be ok with doing some queries to find out which format is most likely to suit their needs the best.

I'm afraid you have a wrong assumption about newbie users. They tend to upload 4K textures without worry, and million-triangle models. That is the newbie WebGL user. They then wonder why it does not perform, so they go and optimise their content a bit to be web/mobile friendly, scaling down textures, reducing polygons. But hey, it still won't work, because someone decided to multiply VRAM by 6. Then they get annoyed that their simple content simply crashes the browser, and they go to some other platform that actually works.

*Just because drivers already "lie" about support of compressed textures shall not be a justification to "lie" on the software level as well, making the whole situation even worse than it already is.*
*Nor shall already-bad behaviour be reused somewhere else with the logic that if it is already used, then it is "ok".*

We do want one big web, where everything "just works", where people throw things at it and it handles whatever it needs to. But reality is different, and that is an OK reality: give us tools, not "promises". Let's not get ahead of ourselves, and stay reasonable.

Kind Regards,
Max

On 17 September 2016 at 13:27, Florian Bösch wrote:
> [snip]

-------------- next part -------------- An HTML attachment was scrubbed... URL: From max...@ Sat Sep 17 06:06:17 2016 From: max...@ (Maksims Mihejevs) Date: Sat, 17 Sep 2016 14:06:17 +0100 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: References: Message-ID:

Adding Arthur, engine developer for PlayCanvas, to the topic.

On 17 September 2016 at 13:52, Maksims Mihejevs wrote:
> [snip]

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From art...@ Sat Sep 17 06:41:51 2016 From: art...@ (Mr F) Date: Sat, 17 Sep 2016 16:41:51 +0300 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: References: Message-ID:

Oh, hello. Option #1 is the most straightforward way to me. Option #2 is OK as long as we can query the parameter and realize that decompression will happen (similar to checking for extension availability in option #1). Option #3 is surely a bad idea, as encoding most compressed formats either takes seconds, or produces terrible quality. Accumulating artifacts from both formats will produce something truly horrible.

On 17 September 2016 at 16:06, Maksims Mihejevs wrote:
> [snip]
-------------- next part -------------- An HTML attachment was scrubbed... URL: From art...@ Sat Sep 17 06:48:02 2016 From: art...@ (Mr F) Date: Sat, 17 Sep 2016 16:48:02 +0300 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: References: Message-ID:

> There are cases where single-channel textures are actually implemented using four-channel storage.

Also, why is that? Isn't ANGLE d3d9/11? And they also had the R8 format.

On 17 September 2016 at 16:41, Mr F wrote:
> [snip]

-------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Sat Sep 17 10:14:46 2016 From: khr...@ (Mark Callow) Date: Sat, 17 Sep 2016 10:14:46 -0700 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: References: Message-ID:

> On Sep 17, 2016, at 5:27 AM, Florian Bösch wrote:
> Some mobiles will crap out at anything between 30-60MB of VRAM use, and they don't implement context loss, so you'll get an auto-page-reload, even if you wanted to handle it. Not making it even easier to get even more auto page reloads would be of kind of... paramount importance, because especially on those platforms that exhibit that behavior, WebGL is getting an extremely bad name among users and developers.

As far as I know, all OpenGL ES 3.0-capable and later h/w supports ETC2 (& so ETC1), therefore the above is irrelevant to the discussion at hand. The only platforms affected by pre-expansion are older desktop OpenGL drivers and presumably any implementations running on d3d.

Regards

-Mark

-------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Sat Sep 17 10:23:09 2016 From: khr...@ (Mark Callow) Date: Sat, 17 Sep 2016 10:23:09 -0700 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: References: Message-ID:

> On Sep 17, 2016, at 5:52 AM, Maksims Mihejevs wrote:
> Providing browser/asm.js tools to decompress is convenient, and better, as it gives a choice.

Option #2 gives a choice as well. Perhaps I am missing something, but I can't see any difference in the choices offered by option 1 and option 2. The only differences I see are the way you make the decision and the implementation of the software decompression.

Regards

-Mark

-------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Sat Sep 17 10:24:33 2016 From: khr...@ (Mark Callow) Date: Sat, 17 Sep 2016 10:24:33 -0700 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: References: Message-ID:

> On Sep 17, 2016, at 10:14 AM, Mark Callow wrote:
> As far as I know, all OpenGL ES 3.0-capable and later h/w supports

That should be OpenGL ES 3.0-capable *mobile* hardware.

Regards

-Mark

-------------- next part -------------- An HTML attachment was scrubbed... URL: From juj...@ Sat Sep 17 11:38:54 2016 From: juj...@ (=?UTF-8?Q?Jukka_Jyl=C3=A4nki?=) Date: Sat, 17 Sep 2016 21:38:54 +0300 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: References: Message-ID:

I would also like to voice my support for option 1. That is by far the best course of action; 2 or 3 would be detrimental to being able to develop applications that behave optimally in all situations.
GL implementations should not advertise compression formats they can't actually satisfy with the expected memory or quality footprint. Emulating ETCx or any other compression under the hood would just have the effect of poisoning the feature, since developers could no longer trust what any implementation reports. This would lead to sniffing the unmasked GL device fields to try to get the correct information.

If WebGL would like to offer a convenience mechanism for developers, I'd recommend creating explicit machinery along the lines of webglcontext.decompress(src, dst), which doesn't conflict with existing specced API behavior. That would allow tutorial sites to show examples of both types of behavior, and developers could choose how to utilize compression. Making speed-memory-compatibility tradeoffs on behalf of the developer is not the privilege of the browser. Let's empower web developers to be able to make the choices themselves instead of building APIs that tie developers' hands with "convenience" as the excuse.

I am very happy to see Kenneth writing up the GitHub entry to work towards option 1. That is no doubt the correct action to take here.

On Sep 17, 2016 8:26 PM, "Mark Callow" wrote:
> [snip]

-------------- next part -------------- An HTML attachment was scrubbed... URL: From khr...@ Sat Sep 17 17:39:37 2016 From: khr...@ (Mark Callow) Date: Sat, 17 Sep 2016 17:39:37 -0700 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: References: Message-ID:

> On Sep 17, 2016, at 11:38 AM, Jukka Jylänki wrote:
> I would also like to voice my support for option 1. That is by far the best course of action; 2 or 3 would be detrimental to being able to develop applications that behave optimally in all situations. GL implementations should not advertise compression formats they can't actually satisfy with the expected memory or quality footprint.

I'm afraid that horse has already bolted w.r.t. older desktop OpenGL hardware. For NVIDIA that means pre-Logan family h/w. Anything more recent has ETC and ASTC support. Logan-family hardware first started appearing a little over 3 years ago.

I'm curious how Emscripten will support OpenGL ES 3.0 if this standard feature is missing from WebGL.

Regards

-Mark

-------------- next part -------------- An HTML attachment was scrubbed... URL:
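(A sketch of how Jukka's proposed decompress(src, dst) convenience machinery might look from the calling side. Nothing here exists in WebGL; the decompress call, its exact signature, and the w/h/etc1Data variables are all hypothetical.)

    var ext = gl.getExtension('WEBGL_compressed_texture_etc1');
    if (ext) {
        // Extension only present with real hardware support (option 1):
        gl.compressedTexImage2D(gl.TEXTURE_2D, 0, ext.COMPRESSED_RGB_ETC1_WEBGL,
                                w, h, 0, etc1Data);
    } else {
        // No support: the developer explicitly opts in to software decoding.
        var rgb = new Uint8Array(w * h * 3);
        gl.decompress(etc1Data, rgb); // hypothetical, per Jukka's suggestion
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, w, h, 0,
                      gl.RGB, gl.UNSIGNED_BYTE, rgb);
    }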
From wil...@ Sun Sep 18 02:18:32 2016 From: wil...@ (Will Eastcott) Date: Sun, 18 Sep 2016 10:18:32 +0100 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: References: Message-ID:

On Sun, Sep 18, 2016 at 4:53 AM, Nicolas Capens wrote:
> That said, I'm still sympathetic to the issue, but removing texture formats and then allowing them to be added back in as extensions doesn't solve anything. There's still just as little guarantee that it won't be supported by decompressing it as before. To get that kind of information, we need a new query.

We seem to be going round in circles here. I've seen Florian, Max and others be very clear about what this solves, so I won't restate their opinions. However, let me look at this from another angle. Why shouldn't Chrome implement decompression for all of these:

WEBGL_compressed_texture_s3tc
WEBGL_compressed_texture_s3tc_srgb
WEBGL_compressed_texture_atc
WEBGL_compressed_texture_pvr
WEBGL_compressed_texture_etc1
WEBGL_compressed_texture_es3_0
WEBGL_compressed_texture_astc

And any formats that are introduced in the future, too. Maybe none of these are supported by the underlying GPU. Maybe 1 of them is. Maybe 2. But which ones? Yes, maybe you can add some query API to try to actually determine which extensions will not decompress in browser code. But this is adding API for a feature which has very little, if any, value (again, for the reasons explained by Max and Florian). We already have an extension mechanism that works well. That's all we need for this.

Maybe this is a philosophical point about WebGL extensions. I personally feel that a WebGL implementation offering an extension is saying 'Hey, I can do this thing!'. Not 'Hey, I can pretend I can do this thing, and do it pretty badly while I'm at it!'.

Cheers,

Will

-- Will Eastcott (@willeastcott ) CEO, PlayCanvas Ltd http://playcanvas.com

-------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Sun Sep 18 02:24:07 2016 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Sun, 18 Sep 2016 11:24:07 +0200 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: References: Message-ID:

On Sun, Sep 18, 2016 at 2:39 AM, Mark Callow wrote:
> I'm curious how Emscripten will support OpenGL ES 3.0 if this standard feature is missing from WebGL.

    var etcBlowUpDefault = function (gl) {
        if (!gl.getExtension('WEBGL_compressed_texture_etc1')) {
            gl.compressedTexImage2DOrig = gl.compressedTexImage2D;
            gl.compressedTexImage2D = function (target, level, internalformat,
                                                width, height, border, data) {
                if (internalformat == 0x8D64) { // COMPRESSED_RGB_ETC1_WEBGL
                    // No native support: decode in software, upload plain RGB.
                    // (gl.compression.decode is a proposed, not an existing, API.)
                    gl.texImage2D(target, level, gl.RGB, width, height, border,
                                  gl.RGB, gl.UNSIGNED_BYTE,
                                  gl.compression.decode('etc1', data));
                } else {
                    gl.compressedTexImage2DOrig(target, level, internalformat,
                                                width, height, border, data);
                }
            };
        }
    };

-------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Sun Sep 18 02:26:40 2016 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Sun, 18 Sep 2016 11:26:40 +0200 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: References: Message-ID:

On Sun, Sep 18, 2016 at 5:53 AM, Nicolas Capens wrote:
> I'm still sympathetic to the issue, but removing texture formats and then allowing them to be added back in as extensions doesn't solve anything.

It's how optional device capabilities such as compressed texture formats have always been handled traditionally. Inventing some new mechanism for something that has established semantics and use is bad.

> There's still just as little guarantee that it won't be supported by decompressing it as before. To get that kind of information, we need a new query.

That query exists, it's called "gl.getExtension". You just would need to make it work reliably, not foobar it even further.

-------------- next part -------------- An HTML attachment was scrubbed... URL:
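(Usage of the shim above, for illustration; texture, w, h and etc1Data are assumed to exist. The point of the pattern is that the fallback is installed explicitly by the application, rather than silently by the browser.)

    etcBlowUpDefault(gl); // install the fallback once, right after context creation

    // Loader code is then identical whether ETC1 is native or emulated:
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.compressedTexImage2D(gl.TEXTURE_2D, 0, 0x8D64 /* COMPRESSED_RGB_ETC1_WEBGL */,
                            w, h, 0, etc1Data);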
From pya...@ Sun Sep 18 02:35:33 2016 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Sun, 18 Sep 2016 11:35:33 +0200 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: References: Message-ID:

The bottom line is this:

If somebody uses ETC they expect to make a network bandwidth and quality tradeoff in order to gain a 6x smaller texture memory footprint, a somewhat higher texel lookup rate, somewhat better fillrate (if rendering to it) and fewer incidences of context loss/GPU process crashes/tab reloads/paging-to-RAM slowdowns.

If the ETC implementation (whoever implemented it) isn't providing those benefits reliably, then there is no point in using ETC. In fact, it's harmful, because it becomes a source of bugs you may not be able to reproduce with the devices you have, and even if you can reproduce it, you cannot fix it.

On Sun, Sep 18, 2016 at 11:26 AM, Florian Bösch wrote:
> [snip]

-------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Sun Sep 18 03:25:32 2016 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Sun, 18 Sep 2016 12:25:32 +0200 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: References: Message-ID:

One further thought: if we don't fix the ETC mess while we can, and if there is no other mechanism to discover true support, this is what engine developers will likely do if they really want to support ETC:

1. Get the unmasked GPU and vendor string
2. Look up true ETC support
3. If no information is found, run an up-front upload/samplerate/fillrate performance test comparing claimed ETC vs. plain
4. Send the information back to the server for storage and later retrieval
5. Make the decision based on the discovered support

I believe we'd all like to avoid a situation where professional developers are forced to put up-front loading time in front of any actual loading, and to fingerprint browsers/backends/drivers/GPUs, in order to deliver a good experience.

On Sun, Sep 18, 2016 at 11:35 AM, Florian Bösch wrote:
> [snip]

-------------- next part -------------- An HTML attachment was scrubbed... URL: From tsh...@ Sun Sep 18 05:48:18 2016 From: tsh...@ (Tarek Sherif) Date: Sun, 18 Sep 2016 08:48:18 -0400 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: References: Message-ID:

Definitely for #1 out of the options Ken suggested. If I'm using compressed textures, it's explicitly to save on memory. There's no win for me in the browser indicating I'm getting those savings when I'm not.

Tarek Sherif
http://tareksherif.net/
https://www.biodigital.com/

On Sun, Sep 18, 2016 at 6:25 AM, Florian Bösch wrote:
> [snip]
-------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Sun Sep 18 23:52:56 2016 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Mon, 19 Sep 2016 08:52:56 +0200 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: References: Message-ID:

On Mon, Sep 19, 2016 at 7:09 AM, Nicolas Capens wrote:
> Then you want option (2), because option (1) provides no indication that the extension isn't supported by means of decompression.

Whether you ask gl.whatIfBad or gl.getExtension('is_it_good') makes no difference to the underlying behavior, so your argument is invalid. Neither option provides certainty about the underlying behavior; don't pretend that they do.

-------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Sun Sep 18 23:54:29 2016 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Mon, 19 Sep 2016 08:54:29 +0200 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: References: Message-ID:

On Mon, Sep 19, 2016 at 7:05 AM, Nicolas Capens wrote:
> Option (1) leaves things wide open for the browser to make a speed-memory-compatibility tradeoff. Option (2) empowers web developers to make the choices themselves by indicating with certainty that decompression is taking place.

Neither option allows you to determine the underlying behavior with certainty, and neither option forces a UA to conform to the "consensus". The argument is false.

-------------- next part -------------- An HTML attachment was scrubbed... URL: From pya...@ Mon Sep 19 00:03:57 2016 From: pya...@ (=?UTF-8?Q?Florian_B=C3=B6sch?=) Date: Mon, 19 Sep 2016 09:03:57 +0200 Subject: [Public WebGL] ETC1 support in desktop WebGL In-Reply-To: References: Message-ID:

On Mon, Sep 19, 2016 at 6:52 AM, Nicolas Capens wrote:
> That's the idea. And no, it's not similar to option (1) for obtaining that information.

Both options are identical in what information is queried; the argument is false.

Option #1 uses an established, tried-and-true querying mechanism and adheres to the spirit of what compression is for: supporting something only if the hardware supports it, and reflecting that spirit as cleanly as possible.

Option #2 uses a new way to query actual hardware support (different from how hardware support for everything else is queried), and inverts the meaning of the extension, assuming by default that it doesn't mean anything at all and substituting "here be willy-nilly behavior that completely defeats the entire purpose".

Cargo-culting ES3 is a bad idea. Those who need exact compatibility with ES3 (such as Unity, Epic, etc.) have the necessary know-how to consult the list of differences to ES3.

You also never address in any way the point I make about including an explicit conversion API, which takes the guesswork about what the user will want out of the UA's code. Funny, why is that?

There were many instances in the past where we did not include functionality in WebGL because it was poorly supported by the minority platform (mobile). Since software emulation of ETC is actively harmful, I don't see how this should be any different from when the majority platform can't support a feature.

-------------- next part -------------- An HTML attachment was scrubbed... URL:
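(The two querying styles being argued over, side by side. Option 1 uses the real, existing getExtension mechanism; the option 2 constant ETC1_DECOMPRESSED_IN_SOFTWARE is made up here for illustration, as no such query was ever specced.)

    // Option 1: the established mechanism; presence implies real support.
    var useEtc1 = gl.getExtension('WEBGL_compressed_texture_etc1') !== null;

    // Option 2: the format is always advertised; a new, separate query
    // (hypothetical constant) reveals the software-decompression caveat.
    var emulated = gl.getParameter(gl.ETC1_DECOMPRESSED_IN_SOFTWARE);
    if (emulated) { /* fetch the JPEG/PNG variants instead */ }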
From pya...@ Mon Sep 19 00:06:06 2016 From: pya...@ (Florian Bösch) Date: Mon, 19 Sep 2016 09:06:06 +0200 Subject: [Public WebGL] ETC1 support in desktop WebGL

Also, purely as an observation, nearly nobody (other than Unity and Epic) takes their native code and emscriptens it to JS. But if you do, and you're smart enough to operate emscripten AND adapt your code to work when compiled to JS, you too can consult the list of differences.

From wil...@ Mon Sep 19 00:56:49 2016 From: wil...@ (Will Eastcott) Date: Mon, 19 Sep 2016 08:56:49 +0100 Subject: [Public WebGL] ETC1 support in desktop WebGL

On Mon, Sep 19, 2016 at 6:09 AM, Nicolas Capens wrote:
> Then you want option (2), because option (1) provides no indication that the extension isn't supported by means of decompression.

Can you tell me a (mobile) device that decompresses ETC1 at a driver level?

From tsh...@ Mon Sep 19 04:30:06 2016 From: tsh...@ (Tarek Sherif) Date: Mon, 19 Sep 2016 07:30:06 -0400 Subject: [Public WebGL] ETC1 support in desktop WebGL

> Then you want option (2), because option (1) provides no indication that the extension isn't supported by means of decompression.

What I want is for that information to be available as transparently as possible. If gl.getExtension were to provide more accurate information than the driver about whether hardware support is available, that would be even better.
I don't see any value in having a primary layer of the API report support for something and then having to dig further, with something like gl.getParameter("ETC1_FOR_REAL_THO?"), to find out whether it's real. All I see happening there is that any devs who haven't seen this thread will waste a lot of time trying to figure out why their ETC1 textures aren't giving them the savings they should be getting.

And maybe I'm missing something, but I honestly don't see who is supposed to benefit from software decompression. If someone wants textures that will work anywhere and doesn't care about the memory footprint, they'll just use JPEGs or PNGs. Who are the devs out there who are bothering to use ETC1 textures but are fine with them silently being blown up to 6x their size behind the scenes?

Tarek Sherif
http://tareksherif.net/
https://www.biodigital.com/

From pya...@ Mon Sep 19 07:38:58 2016 From: pya...@ (Florian Bösch) Date: Mon, 19 Sep 2016 16:38:58 +0200 Subject: [Public WebGL] ETC1 support in desktop WebGL

On Mon, Sep 19, 2016 at 4:34 PM, Nicolas Capens wrote:
> It provides certainty that when a caveat is indicated, there really is a known caveat and the application may want to use an alternative. As an extension instead, we wouldn't have any certainty of whether it comes with caveats.

How is asking gl.disisguud any different from asking gl.getExtension('usedis') != null? What imbues the first with the magical property of being right every time there is any kind of UA or driver caveat, while completely denying that knowledge to the latter? If you have a way to detect that a driver is lying about its ETC support, then gl.getExtension('ETC') should return null. End of story, problem fixed, definitively.

From pya...@ Mon Sep 19 07:40:11 2016 From: pya...@ (Florian Bösch) Date: Mon, 19 Sep 2016 16:40:11 +0200 Subject: [Public WebGL] ETC1 support in desktop WebGL

On Mon, Sep 19, 2016 at 4:34 PM, Nicolas Capens wrote:
> there's no trade-off to be made between compatibility and performance. You can get both.

Luring people into spending more effort to produce a worse result isn't a feature, it's a bug.
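To make the two shapes under debate concrete, here is a side-by-side sketch. The getExtension guarantee in the first branch is exactly what option (1) proposes; ETC1_SUPPORT_CAVEAT in the second is an invented placeholder, and no such constant exists in WebGL.

// Option (1), as proposed: presence of the extension is itself the signal,
// because getExtension would return null for emulated support.
var native1 = gl.getExtension('WEBGL_compressed_texture_etc1') !== null;

// Option (2), as proposed: the extension may be present even when emulated,
// and a second, caveat-style query reports the truth. The enum value here
// is invented purely for illustration.
var ETC1_SUPPORT_CAVEAT = 0x0000; // hypothetical, not a real constant
var ext = gl.getExtension('WEBGL_compressed_texture_etc1');
var native2 = ext !== null && !gl.getParameter(ETC1_SUPPORT_CAVEAT);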
From tsh...@ Mon Sep 19 08:07:02 2016 From: tsh...@ (Tarek Sherif) Date: Mon, 19 Sep 2016 11:07:02 -0400 Subject: [Public WebGL] ETC1 support in desktop WebGL

> there's no trade-off to be made between compatibility and performance. You can get both.

Maybe I lack imagination, but what's the use case for compatibility without hardware support here? What is the situation where I'd want to use ETC1 textures when they're not compressed?

Tarek Sherif
http://tareksherif.net/
https://www.biodigital.com/

From max...@ Mon Sep 19 09:22:28 2016 From: max...@ (Maksims Mihejevs) Date: Mon, 19 Sep 2016 17:22:28 +0100 Subject: [Public WebGL] ETC1 support in desktop WebGL

So far option #2 has only one motivation: make it very easy to port games from native mobile to desktop. But porting from native to WebGL is a whole different story, and there will never be an easy way, simply because those platforms are totally different at the core.

Second, there will never be full coverage for ETC on the WebGL platform. Anyone developing for the platform should first choose a universal format, and then consider compressed options to optimise VRAM usage if it becomes a problem. It is not the other way around. So I'm sorry, but I still don't see the convenience in option #2, because developers will still have to provide multiple formats or a single universal one.

All we need to do here is use the common approach, tested over years: make it an extension. Without inventing new ways, APIs or concepts, which would only make the learning experience worse. If we are concerned about the simplicity of the platform, then we should ask whether the existing mechanics fit well. And hey, they do.

Utopia (one format for all) is a warm idea, but let's stay real here.

From pya...@ Mon Sep 19 09:52:51 2016 From: pya...@ (Florian Bösch) Date: Mon, 19 Sep 2016 18:52:51 +0200 Subject: [Public WebGL] ETC1 support in desktop WebGL

That's complete fiction. You're postulating that somehow, magically, GPU vendors would be inclined to introduce an extension (let's call it OES_ETC_just_kidding_its_not_real), and that somehow that decision would be based on whether WebGL preferred to communicate the information as gl.ETC_is_bad rather than gl.getExtension('ETC_for_realz').

If you have the ability to detect that ETC is bad, it shouldn't be exposed. Every time somebody isn't using ETC because it's bad, you get less work, less network bandwidth use, and better picture quality.
On Mon, Sep 19, 2016 at 6:41 PM, Nicolas Capens wrote:
> You're correct that we can never force them. But we can define the API in a way that they don't have to make a trade-off. It's in their best interest to implement the caveat query truthfully. You don't get that incentive with the format extension. They may value compatibility more than performance. They only need to be right 51% of the time to make that choice. But that sucks when you're part of the 49% who's affected by that compromise and you were perfectly willing to query for caveats.
>
> It's not perfect to have to make an additional query. I get that. But it's a tiny price to pay for being in full control of the actual problem. We can direct anyone running into memory issues to this query, and we can request GPU vendors to provide this information without them risking a loss of compatibility.

From max...@ Mon Sep 19 10:00:26 2016 From: max...@ (Maksims Mihejevs) Date: Mon, 19 Sep 2016 18:00:26 +0100 Subject: [Public WebGL] ETC1 support in desktop WebGL

Let's not create extra work for those who care by doing magic and then maintaining some channel of communication to explain the magic.

Those who don't care to provide universal/alternative formats with their apps will be fine supporting whatever platforms they support. Again: even with hacks, it will never be 100% of the WebGL platform.

Less magic, less extra work to maintain, less new stuff to learn. There are other formats; let's make this one an equal extension alongside them. It is no better and no worse, so it shall not get special treatment with magic.
From max...@ Mon Sep 19 10:03:20 2016 From: max...@ (Maksims Mihejevs) Date: Mon, 19 Sep 2016 18:03:20 +0100 Subject: [Public WebGL] ETC1 support in desktop WebGL

I think I have exhausted the numerous reasons why this should be as simple as option #1, and further conversation that doesn't bring factual reasoning becomes repetitive.

Let's put our views and values aside for a bit and, being techy, use techy reasoning: don't reinvent the wheel, and don't do magic behind the scenes with side effects. This is engineering, not sorcery.
From pya...@ Mon Sep 19 10:05:42 2016 From: pya...@ (Florian Bösch) Date: Mon, 19 Sep 2016 19:05:42 +0200 Subject: [Public WebGL] ETC1 support in desktop WebGL

Also, following Nicolas' logic, everything is fair game to be emulated:

- "We don't support 8k textures; here, we give you a 4k one, it's a reasonable approximation. If you don't agree, just query gl.texture_size_caveat."
- "Nope, no floating point texture support; here, have a byte texture. If you don't agree, just query gl.texture_float_caveat."
- "No standard derivatives here. It happens rarely, but tell you what, we'll just give you 0, it's a reasonable approximation. If you don't agree, just query gl.standard_derivatives_just_kidding."

It's pretty silly to even start going down the route of "here, query this or that caveat flag for any functionality we offer".
From khr...@ Mon Sep 19 12:24:02 2016 From: khr...@ (Mark Callow) Date: Mon, 19 Sep 2016 12:24:02 -0700 Subject: [Public WebGL] ETC1 support in desktop WebGL

> On Sep 19, 2016, at 12:56 AM, Will Eastcott wrote:
> Can you tell me a (mobile) device that decompresses ETC1 at a driver level?

None, as far as I know. The h/w decompresses as necessary during sampling. There is no reason for driver-level decompression. All the OpenGL ES 3 capable h/w & IP was designed after ETC2* was included in the specification. As for pre-OpenGL ES 3 h/w, I don't know why a driver would advertise the ETC1 extension if it did not have h/w support. Also, the specifications do not permit it.

The whole issue of driver-level decompression only arose because some companies wanted to support the GL_ARB_es3_compatibility extension on existing hardware to help people developing for mobile. GL_ARB_es3_compatibility explicitly says "It is not necessary for the GL to implement these compression formats in hardware".

When ETC2 became part of core OpenGL in 4.3, released 4 years ago, the above-quoted permission was removed. Therefore all implementations of OpenGL 4.3 and later should have h/w support for ETC2. Driver-level decompression is only an issue when using older versions of OpenGL with es3_compatibility, D3D, or ANGLE on D3D.

* ETC2 is a superset of ETC1.

Regards
-Mark

From pya...@ Mon Sep 19 12:29:45 2016 From: pya...@ (Florian Bösch) Date: Mon, 19 Sep 2016 21:29:45 +0200 Subject: [Public WebGL] ETC1 support in desktop WebGL

Let's say you don't support floating point textures. But, for <insert random compatibility wish fulfilment>, you'll absolutely want to say you support them. Do you go and just give people byte textures and claim they're float, because people have no clue, they're all newbs anyway, and they'll thank you?

No, you don't. You don't because that would defeat the entire purpose of getting a floating point texture.

It's the same way with ETC. It's a shitty compression scheme. It's got bad quality. And the transfer size is huge compared to good-quality lossy compression schemes like JPEG. There's one reason, and only one reason, why you'd want to use it: it's supported in hardware and gives you 6x less VRAM use.
All the hassle that you go through to support ETC (all the quality loss, all the silly many bytes you transfer over the network, all the pre-encoding) might, just might, be worth it for 6x less VRAM at somewhat usable quality.

Might.

But it sure ain't worth it if you silently emulate it and give somebody all of the drawbacks of ETC without any of the advantages.

On Mon, Sep 19, 2016 at 8:53 PM, Nicolas Capens wrote:
>> Option #1 uses an established, tried-and-true querying mechanism and adheres to the spirit of what compression is for.
>
> That's a nice theory, but it's not how extensions work in practice. Implementers can, have, and will offer features even when not supported natively by the hardware.
>
>> Option #2 uses a new way to query actual hardware support and inverts the meaning of the extension.
>
> It doesn't invert anything. It adds information that can only be communicated by using an extra query. Call it an oversight in the original spec, if you will, but we shouldn't use that as an argument to keep not giving app developers the actual control they want.
>
>> Cargo-culting ES3 is a bad idea. Those who need exact compatibility with ES3 have the necessary know-how to consult the list of differences to ES3.
>
> This argument also works in reverse. Those who care about whether a format is supported by decompression will quickly learn about the caveat query and should have no issues writing a couple of lines of code to deal with it.
>
>> You also never address in any way the point I make about including an explicit conversion API.
>
> I didn't realize there was anything for me to address there. It in no way helps anyone to have that conversion API available when the implementation decides to expose the extension despite using decompression. Also, you're against adding anything to the WebGL spec to be able to query about caveats, i.e. the actual information we want, but you're totally fine adding something else instead that isn't going to be helpful in determining that?
>
>> There were many instances in the past where we did not include functionality in WebGL because it was poorly supported by the minority platform (mobile).
>
> WebGL wasn't based on OpenGL *Embedded Systems* because mobile was considered a minority platform. The ANGLE team had to make a lot of effort to ensure it would run reliably on desktops as well, requiring quite a bit of emulation. We would have had to move tons of things to extensions if it were the norm to do that for any feature not supported 1:1. They might have looked justifiable in isolation, but it would have left us with a skeleton of an API where doing anything useful required checking many extensions and providing your own workarounds, even when you wouldn't have cared whether it was emulated or not. Likewise, this ETC thing isn't so bad in isolation, but it's a very slippery slope to have expectations about the hardware implementation when not even the native drivers give you those guarantees.
From khr...@ Mon Sep 19 12:29:47 2016 From: khr...@ (Mark Callow) Date: Mon, 19 Sep 2016 12:29:47 -0700 Subject: [Public WebGL] ETC1 support in desktop WebGL

> On Sep 19, 2016, at 12:24 PM, Mark Callow wrote:
> Driver-level decompression is only an issue when using older versions of OpenGL with es3_compatibility, D3D, or ANGLE on D3D.

So WebGL implementations don't need any help from driver vendors to correctly report caveats.

Regards
-Mark

From pya...@ Mon Sep 19 12:38:12 2016 From: pya...@ (Florian Bösch) Date: Mon, 19 Sep 2016 21:38:12 +0200 Subject: [Public WebGL] ETC1 support in desktop WebGL

Also, let's consider the worst case scenario for option #1:

1. Somebody has an ES3-compatible codebase that doesn't run without modification on WebGL because of ETC.
2. They work over their code to deliver JPEGs instead of ETC for the cases where ETC isn't available.
3. The application loads faster.
4. The textures look nicer.
5. It's no worse on VRAM than ETC emulation.

... Yeah, that's pretty bad, right? God forbid content loads faster and looks nicer. Can't have programmers go through a few lines of code and make it happen. What's the world come to?
From khr...@ Mon Sep 19 21:26:48 2016 From: khr...@ (Mark Callow) Date: Mon, 19 Sep 2016 21:26:48 -0700 Subject: [Public WebGL] ETC1 support in desktop WebGL

> On Sep 19, 2016, at 8:08 PM, Nicolas Capens wrote:
> That is incorrect. That sentence was removed because the 4.3 core spec already explicitly allows for any feature to be implemented by CPU or GPU processing or any combination thereof.

Then why was the language put in the extension spec in the first place? It was unnecessary. I agree the OpenGL specs say they do not require any particular implementation.

I'll answer my own question, because I was there. The language was put in because there was an expectation that h/w implementations of GL do texture decompression in hardware, and the ARB wanted to acknowledge that software decompression was considered acceptable for ARB_es3_compatibility. The language was removed because there is no intention to make a special exception for ETC2 in core OpenGL. Support is expected to be at the same level as BPTC or RGTC.

Regards
-Mark

From dko...@ Tue Sep 20 05:50:04 2016 From: dko...@ (Daniel Koch) Date: Tue, 20 Sep 2016 12:50:04 +0000 Subject: [Public WebGL] ETC1 support in desktop WebGL

> The language was removed because there is no intention to make a special exception for ETC2 in core OpenGL.

Disagree (and I was there too!). I expect the sentence was removed because there is never any guarantee that you'll get any specific requested format in OpenGL (there is more of a guarantee in Vulkan, however).

> Support is expected to be at the same level as BPTC or RGTC.

Support is not necessarily expected to be at the same level as BPTC or RGTC, and it's not "in hardware" on any current NVIDIA desktop GPUs.

If WebGL does go the route of reporting caveats on texture formats, I'd strongly recommend going the route of the GL_ARB_internalformat_query2 extension, which was designed exactly for this type of query.

-Daniel
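For readers who don't know it, GL_ARB_internalformat_query2 is a desktop GL mechanism that reports a per-format support level (full support vs. caveat vs. none). Nothing like it is exposed to WebGL; the sketch below uses invented names purely to illustrate the model Daniel is pointing at. Note that WebGL 2's real getInternalformatParameter only accepts SAMPLES as its pname today; the pname and support-level values here are placeholders.

// Hypothetical WebGL mirror of glGetInternalformativ-style support queries.
var NO_SUPPORT = 0, CAVEAT_SUPPORT = 1, FULL_SUPPORT = 2; // invented values
var INTERNALFORMAT_SUPPORTED = 0x826F; // desktop GL enum, not exposed in WebGL

function etc2SupportLevel(gl, ext) {
  // Invented usage: the real WebGL 2 entry point rejects this pname.
  return gl.getInternalformatParameter(
      gl.TEXTURE_2D, ext.COMPRESSED_RGB8_ETC2, INTERNALFORMAT_SUPPORTED);
}

var etcExt = gl.getExtension('WEBGL_compressed_texture_es3_0');
if (etcExt && etc2SupportLevel(gl, etcExt) === FULL_SUPPORT) {
  // Safe to upload ETC2 data: the implementation claims native support.
}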
From khr...@ Tue Sep 20 07:49:34 2016 From: khr...@ (Mark Callow) Date: Tue, 20 Sep 2016 07:49:34 -0700 Subject: [Public WebGL] ETC1 support in desktop WebGL

> On Sep 20, 2016, at 5:50 AM, Daniel Koch wrote:
> Support is not necessarily expected to be at the same level as BPTC or RGTC, and it's not "in hardware" on any current NVIDIA desktop GPUs.

Hmm? This conflicts with what I was told by another NVIDIA engineer 3 years ago. I was told Logan family parts have ETC and ASTC support. Perhaps there aren't any Logan family desktop GPUs. Or perhaps my source was wrong.

Regards
-Mark

From dko...@ Tue Sep 20 08:59:01 2016 From: dko...@ (Daniel Koch) Date: Tue, 20 Sep 2016 15:59:01 +0000 Subject: [Public WebGL] ETC1 support in desktop WebGL

That information is correct, but Logan is a Tegra SoC. "Logan" is the codename for the Tegra K1 (https://en.wikipedia.org/wiki/Tegra#Tegra_K1), which does natively support ETC2 and ASTC. The Tegra X1 parts also support these natively. The current discrete "desktop" GPUs do not.

-Daniel

From jgi...@ Tue Sep 20 13:19:20 2016 From: jgi...@ (Jeff Gilbert) Date: Tue, 20 Sep 2016 13:19:20 -0700 Subject: [Public WebGL] ETC1 support in desktop WebGL

Update: The WG is planning on only exposing the extension where there is 'native' support. (Not D3D, maybe not desktop NV?) We feel this best matches what devs expect when they see support for a compressed texture extension. Compressed image formats are a better delivery mechanism than compressed texture formats if the data is going to be decompressed anyway.

We would also like to release a JS polyfill implementation of ETC2/EAC decompression. This would make it easy for apps to choose to ship only ETC2 and decompress when it is not natively supported.

Practically speaking, it sounds like best practice for WebGL 2 compressed textures will be to supply ETC2 and S3TC, which together should handle all supporting platforms.
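Jeff's suggested best practice amounts to a small piece of asset-selection logic on the client. A minimal sketch, assuming the ETC extension ships under the draft name WEBGL_compressed_texture_es3_0 (the name is still under discussion later in this thread) and with decodeEtc2ToRgba standing in for the proposed JS polyfill, which does not exist yet:

// Pick which texture variant to download for a WebGL 2 context.
function chooseVariant(gl) {
  if (gl.getExtension('WEBGL_compressed_texture_es3_0')) return 'etc2';
  if (gl.getExtension('WEBGL_compressed_texture_s3tc')) return 'dxt5';
  return 'etc2-polyfilled'; // fetch ETC2 anyway, then decode on the CPU
}

// For the polyfilled case, upload the decoded pixels as plain RGBA8.
function uploadPolyfilled(gl, etc2Data, width, height) {
  var rgba = decodeEtc2ToRgba(etc2Data, width, height); // hypothetical polyfill
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, rgba);
}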
From wil...@ Tue Sep 20 14:07:21 2016 From: wil...@ (Will Eastcott) Date: Tue, 20 Sep 2016 22:07:21 +0100 Subject: [Public WebGL] ETC1 support in desktop WebGL

On Tue, Sep 20, 2016 at 9:19 PM, Jeff Gilbert wrote:
> Update: The WG is planning on only exposing the extension where there is 'native' support.

Thanks for the update, Jeff. The PlayCanvas team gives this a big thumbs up. :)

Will

--
Will Eastcott (@willeastcott)
CEO, PlayCanvas Ltd
http://playcanvas.com
From kbr...@ Thu Sep 22 10:13:31 2016 From: kbr...@ (Kenneth Russell) Date: Thu, 22 Sep 2016 10:13:31 -0700 Subject: [Public WebGL] Moving WEBGL_compressed_texture_es3_0 to community approved

I would like to propose that the extension https://www.khronos.org/registry/webgl/extensions/WEBGL_compressed_texture_es3_0/ be moved to community approved in https://github.com/KhronosGroup/WebGL/pull/2047 .

Mozilla proposed the extension originally; Chrome now implements it on top-of-tree. The conformance tests have been updated to verify and use the extension if it's available. Moving it out of draft status is a blocker for making the ETC compressed texture formats optional in WebGL 2.0.

Any objections?

Thanks,
-Ken

From chr...@ Fri Sep 23 01:49:52 2016 From: chr...@ (Christophe Riccio) Date: Fri, 23 Sep 2016 10:49:52 +0200 Subject: [Public WebGL] Moving WEBGL_compressed_texture_es3_0 to community approved

Ship it! :)

From pya...@ Fri Sep 23 04:25:02 2016 From: pya...@ (Florian Bösch) Date: Fri, 23 Sep 2016 13:25:02 +0200 Subject: [Public WebGL] Moving WEBGL_compressed_texture_es3_0 to community approved

There's a section "Issues" in the specification with 3 questions:

> Q: The ES 3.0.4 specification allows for compressedTexImage3D and compressedTexSubImage3D, which are missing from this specification; should they be added?
> Q: The ES 3.0.4 specification defines the error INVALID_OPERATION; should it be added?
> Q: The ES 3.0.4 specification defines the error INVALID_VALUE for cases other than a compressed size mismatch; should these be added?

Are these addressed? Also, what other differences does the extension have relative to ES 3?
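For context, using the proposed extension looks like any other compressed-texture extension. A sketch, assuming the draft exposes the ES 3.0 enums such as COMPRESSED_RGBA8_ETC2_EAC (check the registry entry above for the actual constant list):

var ext = gl.getExtension('WEBGL_compressed_texture_es3_0');
if (ext) {
  // RGBA8 ETC2+EAC stores one 16-byte block per 4x4 texels (8 bits/pixel).
  var w = 256, h = 256;
  var byteLength = Math.ceil(w / 4) * Math.ceil(h / 4) * 16;
  var data = new Uint8Array(byteLength); // compressed payload from the network
  gl.bindTexture(gl.TEXTURE_2D, gl.createTexture());
  gl.compressedTexImage2D(gl.TEXTURE_2D, 0, ext.COMPRESSED_RGBA8_ETC2_EAC,
                          w, h, 0, data);
}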
From art...@ Fri Sep 23 05:06:03 2016 From: art...@ (Mr F) Date: Fri, 23 Sep 2016 15:06:03 +0300 Subject: [Public WebGL] MSAA + MRT?

I just wanted to clarify the intended behaviour of WebGL 2 (and ES3?) on this subject: can you render to multiple anti-aliased targets at once? Everywhere I've looked, the two features are listed separately, without any info on their interoperability. It's a big difference actually; e.g. it was one of the advantages of DX10 over DX9.

From juj...@ Fri Sep 23 09:28:27 2016 From: juj...@ (Jukka Jylänki) Date: Fri, 23 Sep 2016 19:28:27 +0300 Subject: [Public WebGL] Interaction of WebGL 2 query objects and WebGL timer queries extension EXT_disjoint_timer_query

There are currently two APIs to do queries:

1) In WebGL 2.0 core, there exists an API for query functions: https://www.khronos.org/registry/webgl/specs/latest/2.0/#3.7.12

That specification enables query objects that can be used for rasterization queries:

var queryType = GL_ANY_SAMPLES_PASSED, GL_ANY_SAMPLES_PASSED_CONSERVATIVE or GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN;
var queryObject = webGL2Context.createQuery();
webGL2Context.beginQuery(queryType, queryObject);

See https://www.khronos.org/opengles/sdk/docs/man3/html/glBeginQuery.xhtml.

2) In the WebGL extension EXT_disjoint_timer_query, there exists an API for query functions: https://www.khronos.org/registry/webgl/extensions/EXT_disjoint_timer_query/

That specification enables query objects that can be used for time-elapsed queries:

var ext = webGL1Or2Context.getExtension("EXT_disjoint_timer_query");
var queryExtObject = ext.createQueryEXT();
ext.beginQueryEXT(GL_TIME_ELAPSED_EXT, queryExtObject);

Should these two types of query objects be considered strictly separate? Say, if a WebGL 2 codebase would like to do time-elapsed queries, should it maintain a separation of "query EXT" objects vs. a pool of core WebGL 2 query objects for this purpose?

Or can the following type of code be considered valid:

if (webGL2Context.getExtension("EXT_disjoint_timer_query")) {
  var queryObject = webGL2Context.createQuery();
  webGL2Context.beginQuery(GL_TIME_ELAPSED_EXT, queryObject);
}

If the above is not valid, it would be good to reflect that by changing the language of EXT_disjoint_timer_query to not talk about "query objects" but e.g. "queryExt objects", or to otherwise explicitly note the required separation.

Thanks in advance,
Jukka
From pya...@ Fri Sep 23 10:14:00 2016 From: pya...@ (Florian Bösch) Date: Fri, 23 Sep 2016 19:14:00 +0200 Subject: [Public WebGL] Interaction of WebGL 2 query objects and WebGL timer queries extension EXT_disjoint_timer_query

On Fri, Sep 23, 2016 at 6:28 PM, Jukka Jylänki wrote:
> Should these two types of query objects be considered strictly separate?

The question isn't just valid for WebGL; it is a valid question for OpenGL ES in general. Are the ES functions glGenQueries and glGenQueriesEXT considered interchangeable? Can you, for instance, call glGenQueries, then glBeginQuery(GL_TIME_ELAPSED_EXT)?

I'd propose that if the answer is yes, then those functions are overlapping and the object that either glGenQueries or glGenQueriesEXT outputs is identical. If the answer is no, then they are not identical.

It might be kind of a unique case, where a core specification introduces some functionality that an extension introduced, but the symbol was originally introduced to satisfy another extension (EXT_occlusion_query_boolean), which also defines glGenQueriesEXT. I would have to assume that if you load up both the occlusion query and disjoint timer query extensions, there would not be 2 different functions for glGenQueriesEXT (the disjoint flavor) and glGenQueriesEXT (the occlusion flavor). This tentatively suggests to me that glGenQueries is interchangeable with glGenQueriesEXT (of any flavor).

If the above is a correct conclusion, then no change to the specification would be required (as there wouldn't be queryEXT objects).

From geo...@ Fri Sep 23 10:20:12 2016 From: geo...@ (Geoff Lang) Date: Fri, 23 Sep 2016 17:20:12 +0000 Subject: [Public WebGL] Interaction of WebGL 2 query objects and WebGL timer queries extension EXT_disjoint_timer_query

For OpenGL in general, glGenQueries and glGenQueriesEXT are allowed to be the same function pointer, and in some ARB extensions the function name isn't even suffixed (GL_ARB_get_program_binary, for example).
From geo...@ Fri Sep 23 10:21:02 2016 From: geo...@ (Geoff Lang) Date: Fri, 23 Sep 2016 17:21:02 +0000 Subject: [Public WebGL] Interaction of WebGL 2 query objects and WebGL timer queries extension EXT_disjoint_timer_query

I'll also note that it's undefined what happens when you call an EXT function when the extension is not present.

From jgi...@ Fri Sep 23 13:13:04 2016 From: jgi...@ (Jeff Gilbert) Date: Fri, 23 Sep 2016 13:13:04 -0700 Subject: [Public WebGL] MSAA + MRT?

On Fri, Sep 23, 2016 at 5:06 AM, Mr F wrote:
> Can you render to multiple anti-aliased targets at once?

Yes, you can. All MRT draw targets must have matching sample counts, but it's otherwise fine.

From art...@ Sat Sep 24 03:11:27 2016 From: art...@ (Mr F) Date: Sat, 24 Sep 2016 13:11:27 +0300 Subject: [Public WebGL] MSAA + MRT?

Thanks. That's really great.
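A minimal WebGL 2 sketch of what Jeff describes: two multisampled renderbuffers with matching sample counts attached to one framebuffer, resolved afterwards with blitFramebuffer. The draw itself, error handling, and resolveFbo (an assumed single-sampled framebuffer) are omitted.

// Two 4x multisampled color targets on one framebuffer.
var w = 1024, h = 768, samples = 4;

function makeMsaaRenderbuffer(gl) {
  var rb = gl.createRenderbuffer();
  gl.bindRenderbuffer(gl.RENDERBUFFER, rb);
  gl.renderbufferStorageMultisample(gl.RENDERBUFFER, samples, gl.RGBA8, w, h);
  return rb;
}

var fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, fbo);
gl.framebufferRenderbuffer(gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                           gl.RENDERBUFFER, makeMsaaRenderbuffer(gl));
gl.framebufferRenderbuffer(gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT1,
                           gl.RENDERBUFFER, makeMsaaRenderbuffer(gl));
gl.drawBuffers([gl.COLOR_ATTACHMENT0, gl.COLOR_ATTACHMENT1]);

// ... draw the scene, with the fragment shader writing both outputs ...

// Resolve attachment 0 into a single-sampled FBO; repeat per attachment.
gl.bindFramebuffer(gl.READ_FRAMEBUFFER, fbo);
gl.readBuffer(gl.COLOR_ATTACHMENT0);
gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, resolveFbo); // assumed to exist
gl.blitFramebuffer(0, 0, w, h, 0, 0, w, h, gl.COLOR_BUFFER_BIT, gl.NEAREST);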
From kbr...@ Sat Sep 24 20:37:35 2016 From: kbr...@ (Kenneth Russell) Date: Sat, 24 Sep 2016 20:37:35 -0700 Subject: [Public WebGL] Interaction of WebGL 2 query objects and WebGL timer queries extension EXT_disjoint_timer_query In-Reply-To: References: Message-ID:

On Fri, Sep 23, 2016 at 9:28 AM, Jukka Jylänki wrote:
> There are currently two APIs to do queries:
>
> 1) In WebGL 2.0 core, there exists an API for query functions:
> https://www.khronos.org/registry/webgl/specs/latest/2.0/#3.7.12
>
> That specification enables query objects that can be used for
> rasterization queries:
>
> // queryType is one of ANY_SAMPLES_PASSED, ANY_SAMPLES_PASSED_CONSERVATIVE
> // or TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN
> var queryType = webGL2Context.ANY_SAMPLES_PASSED;
> var queryObject = webGL2Context.createQuery();
> webGL2Context.beginQuery(queryType, queryObject);
>
> See https://www.khronos.org/opengles/sdk/docs/man3/html/glBeginQuery.xhtml .
>
> 2) In the WebGL extension EXT_disjoint_timer_query, there exists an API for
> query functions:
> https://www.khronos.org/registry/webgl/extensions/EXT_disjoint_timer_query/
>
> That specification enables query objects that can be used for time-elapsed
> queries:
>
> var ext = webGL1Or2Context.getExtension("EXT_disjoint_timer_query");
> var queryType = ext.TIME_ELAPSED_EXT;
> var queryExtObject = ext.createQueryEXT();
> ext.beginQueryEXT(queryType, queryExtObject);
>
> Should these two types of query objects be considered to be strictly
> separate?

Good question. Yes, they should be considered separate. It would be technically feasible to update WebGL 2.0 implementations so that enabling the EXT_disjoint_timer_query extension would allow them to be used by the core context's functions. However, this definitely won't work in current implementations.

> Say, if a WebGL 2 codebase would like to do time-elapsed queries, should
> it maintain a separation of "query ext" objects vs a pool of "query core
> WebGL 2" objects for this purpose?
>
> Or can the following type of code be considered valid:
>
> var ext = webGL2Context.getExtension("EXT_disjoint_timer_query");
> if (ext) {
>   var queryObject = webGL2Context.createQuery();
>   webGL2Context.beginQuery(ext.TIME_ELAPSED_EXT, queryObject);
> }
>
> If the above is not valid, it would be good to reflect that by changing
> the language of EXT_disjoint_timer_query to not talk about "query
> objects" but e.g. "queryExt objects", or to otherwise explicitly note the
> required separation.

The extension's IDL already defines the main timer query object type as WebGLTimerQueryEXT, which is not type compatible with WebGLQuery defined by the WebGL 2.0 spec. I think that's pretty clear, but if you'd like to propose updates to the extension spec, please put up a pull request or file an enhancement. Thanks, -Ken

> Thanks in advance,
> Jukka

-------------- next part -------------- An HTML attachment was scrubbed... URL:
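For reference, a complete round trip in the extension-object style that Ken describes; this is a minimal sketch, not code from the thread, using only entry points and enums defined by the EXT_disjoint_timer_query spec, with "gl" assumed to be a WebGL 1 context:

    var ext = gl.getExtension("EXT_disjoint_timer_query");
    if (ext) {
      var query = ext.createQueryEXT();
      ext.beginQueryEXT(ext.TIME_ELAPSED_EXT, query);
      // ... draw calls to be timed ...
      ext.endQueryEXT(ext.TIME_ELAPSED_EXT);

      // Results arrive asynchronously; poll on a later frame.
      var available = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_AVAILABLE_EXT);
      var disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);
      if (available && !disjoint) {
        var elapsedNs = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_EXT); // nanoseconds
      }
    }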
From pya...@ Sun Sep 25 01:34:27 2016 From: pya...@ (Florian Bösch) Date: Sun, 25 Sep 2016 10:34:27 +0200 Subject: [Public WebGL] Interaction of WebGL 2 query objects and WebGL timer queries extension EXT_disjoint_timer_query In-Reply-To: References: Message-ID:

On Sun, Sep 25, 2016 at 5:37 AM, Kenneth Russell wrote:
>
> Good question. Yes, they should be considered separate.
>
I don't agree with that interpretation.

On Fri, Sep 23, 2016 at 7:20 PM, Geoff Lang wrote:
> For OpenGL in general, glGenQueries and glGenQueriesEXT are allowed to be
> the same function pointer
>
glBeginQuery, glBeginQueryEXT (disjoint timer) and glBeginQueryEXT (occlusion) clearly operate on the same kind of object, and they're identical.

It would be quite unfortunate if you had to track which extension object a query came from in order to know which interface to use with that particular query object. If we were to interpret it that way, then the API interface to these functions should not reside on the extension object, but on the query object. As in:

  query = disjoint_ext.create();
  query.begin();
  query.end();

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From khr...@ Mon Sep 26 10:13:49 2016 From: khr...@ (Mark Callow) Date: Mon, 26 Sep 2016 10:13:49 -0700 Subject: [Public WebGL] Moving WEBGL_compressed_texture_es3_0 to community approved In-Reply-To: References: Message-ID: <601C63CE-AA2E-4201-BC33-B2839DAF8949@callow.im>

> On Sep 22, 2016, at 10:13 AM, Kenneth Russell wrote:
>
> I would like to propose that the extension
> https://www.khronos.org/registry/webgl/extensions/WEBGL_compressed_texture_es3_0/
> be moved to community approved in https://github.com/KhronosGroup/WebGL/pull/2047 .
>
> Mozilla proposed the extension originally; Chrome now implements it on
> top-of-tree. The conformance tests have been updated to verify and use the
> extension if it's available. Moving it out of draft status is a blocker for
> making the ETC compressed texture formats optional in WebGL 2.0.
>
> Any objections?
>
I dislike the name, as I have stated elsewhere. The stated reason for choosing WEBGL_compressed_texture_es3_0 vs WEBGL_compressed_texture_etc is that OpenGL ES 3 has several standard compressed texture formats: ETC2, EAC, punchthrough alpha, et cetera. However, the part of the spec where these are described is entitled "ETC Compressed Texture Image Formats", and its opening sentence says "The ETC formats form a family of related compressed texture image formats". This is very clear, so I see no reason for deviating from the established pattern for naming compressed texture extensions. There is even precedent: WEBGL_compressed_texture_s3tc covers more than a single format too.

The name WEBGL_compressed_texture_etc doesn't even conflict with the original WEBGL_compressed_texture_etc1. The latter supports only ETC1; the former, the whole family.

Regards -Mark

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail URL:

From kbr...@ Mon Sep 26 15:41:33 2016 From: kbr...@ (Kenneth Russell) Date: Mon, 26 Sep 2016 15:41:33 -0700 Subject: [Public WebGL] Interaction of WebGL 2 query objects and WebGL timer queries extension EXT_disjoint_timer_query In-Reply-To: References: Message-ID:

On Sun, Sep 25, 2016 at 1:34 AM, Florian Bösch wrote:
> On Sun, Sep 25, 2016 at 5:37 AM, Kenneth Russell wrote:
>>
>> Good question. Yes, they should be considered separate.
>>
> I don't agree with that interpretation.
>
The IDL is clear. As I mentioned, it could be modified, but right now WebGLTimerQueryEXT and WebGLQuery are distinct types.
> On Fri, Sep 23, 2016 at 7:20 PM, Geoff Lang wrote:
>> For OpenGL in general, glGenQueries and glGenQueriesEXT are allowed to
>> be the same function pointer
>
> glBeginQuery, glBeginQueryEXT (disjoint timer) and glBeginQueryEXT
> (occlusion) clearly operate on the same kind of object, and they're
> identical. [...]
>
> query = disjoint_ext.create();
> query.begin();
> query.end();
>
With occlusion queries being folded into the core WebGL 2.0 spec, the only outlier is EXT_disjoint_timer_query. We can revisit this after the first iteration of WebGL 2.0 ships. -Ken

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From pya...@ Mon Sep 26 16:03:08 2016 From: pya...@ (Florian Bösch) Date: Tue, 27 Sep 2016 01:03:08 +0200 Subject: [Public WebGL] Interaction of WebGL 2 query objects and WebGL timer queries extension EXT_disjoint_timer_query In-Reply-To: References: Message-ID:

On Tue, Sep 27, 2016 at 12:41 AM, Kenneth Russell wrote:
> The IDL is clear. As I mentioned, it could be modified, but right now
> WebGLTimerQueryEXT and WebGLQuery are distinct types.
>
I mean the interpretation that they should be considered separate, not what the IDL says they should be.

> With occlusion queries being folded into the core WebGL 2.0 spec, the
> only outlier is EXT_disjoint_timer_query.
>
> We can revisit this after the first iteration of WebGL 2.0 ships.
>
I think that's too late. It's my opinion that, following the OpenGL semantics on symbols, these two objects should be considered the same. It wouldn't be good if code emerged that treated them as separate if we intend to make them the same, and vice versa.

-------------- next part -------------- An HTML attachment was scrubbed... URL:
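The disagreement is easiest to see in code. A hypothetical sketch (not from the thread) of mixing the two APIs on a WebGL 2 context; whether this is valid is exactly the question under discussion:

    var ext = gl.getExtension("EXT_disjoint_timer_query"); // gl: a WebGL 2 context
    var q = gl.createQuery(); // core WebGLQuery
    ext.beginQueryEXT(ext.TIME_ELAPSED_EXT, q);
    // Under the current IDL (the "separate objects" reading), q is a WebGLQuery,
    // not a WebGLTimerQueryEXT, so this call must be rejected.
    // Under the desktop-GL convention (the "same object" reading), it would work.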
From kbr...@ Wed Sep 28 16:04:36 2016 From: kbr...@ (Kenneth Russell) Date: Wed, 28 Sep 2016 16:04:36 -0700 Subject: [Public WebGL] Interaction of WebGL 2 query objects and WebGL timer queries extension EXT_disjoint_timer_query In-Reply-To: References: Message-ID:

On Mon, Sep 26, 2016 at 4:03 PM, Florian Bösch wrote:
> I think that's too late. It's my opinion that, following the OpenGL
> semantics on symbols, these two objects should be considered the same.
> It wouldn't be good if code emerged that treated them as separate if we
> intend to make them the same, and vice versa. [...]
>
This is a good point. The working group will discuss this on tomorrow's conference call, and I'll file an issue on the KhronosGroup/WebGL issue tracker assuming we decide to make a change to the EXT_disjoint_timer_query spec, IDL, and conformance test. -Ken

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From juj...@ Thu Sep 29 02:18:13 2016 From: juj...@ (Jukka Jylänki) Date: Thu, 29 Sep 2016 12:18:13 +0300 Subject: [Public WebGL] Interaction of WebGL 2 query objects and WebGL timer queries extension EXT_disjoint_timer_query In-Reply-To: References: Message-ID:

For reference, here is the feature added to Emscripten: https://github.com/kripken/emscripten/pull/4575

2016-09-29 2:04 GMT+03:00 Kenneth Russell :
> This is a good point. The working group will discuss this on tomorrow's
> conference call, and I'll file an issue on the KhronosGroup/WebGL issue
> tracker assuming we decide to make a change to the
> EXT_disjoint_timer_query spec, IDL, and conformance test. [...]

-------------- next part -------------- An HTML attachment was scrubbed... URL:
From pya...@ Thu Sep 29 03:39:03 2016 From: pya...@ (Florian Bösch) Date: Thu, 29 Sep 2016 12:39:03 +0200 Subject: [Public WebGL] Interaction of WebGL 2 query objects and WebGL timer queries extension EXT_disjoint_timer_query In-Reply-To: References: Message-ID:

Yeah, that's the thing I mean. It needs to get resolved, and unambiguously (and preferably following GL semantics), or toolchains will spring into existence doing the wrong things.

On Thu, Sep 29, 2016 at 11:18 AM, Jukka Jylänki wrote:
> For reference, here is the feature added to Emscripten:
> https://github.com/kripken/emscripten/pull/4575 [...]

-------------- next part -------------- An HTML attachment was scrubbed... URL:
From kbr...@ Fri Sep 30 18:44:07 2016 From: kbr...@ (Kenneth Russell) Date: Fri, 30 Sep 2016 18:44:07 -0700 Subject: [Public WebGL] Moving WEBGL_compressed_texture_es3_0 to community approved In-Reply-To: References: Message-ID:

On Fri, Sep 23, 2016 at 4:25 AM, Florian Bösch wrote:
> There's a section "Issues" in the specification with 3 questions:
>
>> Q: The ES 3.0.4 specification allows for compressedTexImage3D and
>> compressedTexSubImage3D, which are missing from this specification;
>> should they be added?
>> Q: The ES 3.0.4 specification defines the error INVALID_OPERATION;
>> should it be added?
>> Q: The ES 3.0.4 specification defines the error INVALID_VALUE for cases
>> other than a compressed size mismatch; should these be added?
>
> Are these addressed?
>
Sorry for the long delay replying, and thanks for pointing this out. I hadn't addressed these issues, but have now. https://github.com/KhronosGroup/WebGL/pull/2047 contains the revised text; it's been substantially simplified to eliminate duplicate text.

> Also, what other differences does the extension have compared to ES 3?
>
None; it's simply that it can be applied to WebGL 1.0 as well as to WebGL 2.0. It contains the restrictions common to the other WebGL compressed texture extensions (i.e., the ArrayBufferView must be exactly the right size). -Ken

> On Thu, Sep 22, 2016 at 7:13 PM, Kenneth Russell wrote:
>> I would like to propose that the extension
>> https://www.khronos.org/registry/webgl/extensions/WEBGL_compressed_texture_es3_0/
>> be moved to community approved in https://github.com/KhronosGroup/WebGL/pull/2047 .
>>
>> Mozilla proposed the extension originally; Chrome now implements it on
>> top-of-tree. The conformance tests have been updated to verify and use
>> the extension if it's available. Moving it out of draft status is a
>> blocker for making the ETC compressed texture formats optional in
>> WebGL 2.0.
>>
>> Any objections?
>>
>> Thanks,
>>
>> -Ken

-------------- next part -------------- An HTML attachment was scrubbed... URL:
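To illustrate the "exactly the right size" restriction, a minimal sketch (an illustration, not from the thread; the extension name assumes the renaming agreed later in this thread): ETC2 RGB8 stores each 4x4 texel block in 8 bytes, so the upload must be sized to exactly that.

    var ext = gl.getExtension("WEBGL_compressed_texture_etc");
    if (ext) {
      var width = 256, height = 256;
      // One 8-byte block per 4x4 texel block for COMPRESSED_RGB8_ETC2.
      var byteLength = Math.ceil(width / 4) * Math.ceil(height / 4) * 8;
      var data = new Uint8Array(byteLength); // must match exactly, else INVALID_VALUE
      gl.bindTexture(gl.TEXTURE_2D, gl.createTexture());
      gl.compressedTexImage2D(gl.TEXTURE_2D, 0, ext.COMPRESSED_RGB8_ETC2,
                              width, height, 0, data);
    }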
From kbr...@ Fri Sep 30 18:44:31 2016 From: kbr...@ (Kenneth Russell) Date: Fri, 30 Sep 2016 18:44:31 -0700 Subject: [Public WebGL] Moving WEBGL_compressed_texture_es3_0 to community approved In-Reply-To: References: Message-ID:

On Mon, Sep 26, 2016 at 9:23 AM, Nicolas Capens wrote:
> The current draft specification provides no indication that browsers
> should not advertise this extension when the implementation (or driver)
> decompresses these formats.
>
Thanks for pointing this out. I've addressed this in the current version of the pull request at https://github.com/KhronosGroup/WebGL/pull/2047 . -Ken

> On Thu, Sep 22, 2016 at 1:13 PM, Kenneth Russell wrote:
>> I would like to propose that the extension
>> https://www.khronos.org/registry/webgl/extensions/WEBGL_compressed_texture_es3_0/
>> be moved to community approved in https://github.com/KhronosGroup/WebGL/pull/2047 . [...]

-------------- next part -------------- An HTML attachment was scrubbed... URL:
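With that change, a non-null getExtension result can be read as a signal of native GPU support, so applications can gate their choice of compressed format on it. A hypothetical sketch (the helper and priority order are illustrative, not from the thread):

    function pickCompressedFormat(gl) {
      // Highest-priority natively supported family wins.
      if (gl.getExtension("WEBGL_compressed_texture_s3tc")) return "s3tc";
      if (gl.getExtension("WEBGL_compressed_texture_etc")) return "etc";
      return null; // fall back to uncompressed (PNG/JPEG) assets
    }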
From kbr...@ Fri Sep 30 18:47:01 2016 From: kbr...@ (Kenneth Russell) Date: Fri, 30 Sep 2016 18:47:01 -0700 Subject: [Public WebGL] Moving WEBGL_compressed_texture_es3_0 to community approved In-Reply-To: <601C63CE-AA2E-4201-BC33-B2839DAF8949@callow.im> References: <601C63CE-AA2E-4201-BC33-B2839DAF8949@callow.im> Message-ID:

On Mon, Sep 26, 2016 at 10:13 AM, Mark Callow wrote:
> I dislike the name, as I have stated elsewhere. The stated reason for
> choosing WEBGL_compressed_texture_es3_0 vs WEBGL_compressed_texture_etc
> is that OpenGL ES 3 has several standard compressed texture formats:
> ETC2, EAC, punchthrough alpha, et cetera. However, the part of the spec
> where these are described is entitled "ETC Compressed Texture Image
> Formats", and its opening sentence says "The ETC formats form a family
> of related compressed texture image formats".
>
> This is very clear, so I see no reason for deviating from the established
> pattern for naming compressed texture extensions. There is even precedent:
> WEBGL_compressed_texture_s3tc covers more than a single format too.
>
> The name WEBGL_compressed_texture_etc doesn't even conflict with the
> original WEBGL_compressed_texture_etc1. The latter supports only ETC1;
> the former, the whole family.
>
Thanks for your feedback, Mark. I don't have a strong feeling about the name, but Mozilla did, which is why _es3_0 was chosen. _etc would be OK in my opinion, but could be problematic in the future if an ETC3 is introduced into the ETC family. _es3_0 should be unambiguous. -Ken

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From kbr...@ Fri Sep 30 19:00:53 2016 From: kbr...@ (Kenneth Russell) Date: Fri, 30 Sep 2016 19:00:53 -0700 Subject: [Public WebGL] Moving WEBGL_compressed_texture_es3_0 to community approved In-Reply-To: References: <601C63CE-AA2E-4201-BC33-B2839DAF8949@callow.im> Message-ID:

On Fri, Sep 30, 2016 at 6:47 PM, Kenneth Russell wrote:
> Thanks for your feedback, Mark. I don't have a strong feeling about the
> name, but Mozilla did, which is why _es3_0 was chosen. _etc would be OK
> in my opinion, but could be problematic in the future if an ETC3 is
> introduced into the ETC family. _es3_0 should be unambiguous. [...]
>
Just spoke with Mozilla and they're fine with naming it _etc. Renamed it in https://github.com/KhronosGroup/WebGL/pull/2047/ . Thanks again for your feedback. -Ken

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From khr...@ Fri Sep 30 19:09:21 2016 From: khr...@ (Mark Callow) Date: Fri, 30 Sep 2016 19:09:21 -0700 Subject: [Public WebGL] Moving WEBGL_compressed_texture_es3_0 to community approved In-Reply-To: References: <601C63CE-AA2E-4201-BC33-B2839DAF8949@callow.im> Message-ID:

> On Sep 30, 2016, at 7:00 PM, Kenneth Russell wrote:
>
> On Fri, Sep 30, 2016 at 6:47 PM, Kenneth Russell wrote:
> ...
> Thanks for your feedback, Mark. I don't have a strong feeling about the
> name, but Mozilla did, which is why _es3_0 was chosen. _etc would be OK
> in my opinion, but could be problematic in the future if an ETC3 is
> introduced into the ETC family. _es3_0 should be unambiguous.
>
The possibility of an ETC3 is approximately 0.

> Just spoke with Mozilla and they're fine with naming it _etc. Renamed it
> in https://github.com/KhronosGroup/WebGL/pull/2047/ . Thanks again for
> your feedback.
>
Thanks for pursuing this.
Regards -Mark

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail URL:

From kbr...@ Fri Sep 30 20:21:27 2016 From: kbr...@ (Kenneth Russell) Date: Fri, 30 Sep 2016 20:21:27 -0700 Subject: [Public WebGL] Interaction of WebGL 2 query objects and WebGL timer queries extension EXT_disjoint_timer_query In-Reply-To: References: Message-ID:

Thanks for your feedback. The working group discussed this and proposes to expose a slimmed-down IDL for WebGL 2.0, under the same extension name, that reuses all of the core context's query functionality. Please review and comment: https://github.com/KhronosGroup/WebGL/pull/2076

Thanks, -Ken

On Thu, Sep 29, 2016 at 3:39 AM, Florian Bösch wrote:
> Yeah, that's the thing I mean. It needs to get resolved, and unambiguously
> (and preferably following GL semantics), or toolchains will spring into
> existence doing the wrong things. [...]
-------------- next part -------------- An HTML attachment was scrubbed... URL:
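For readers following along, the shape Ken describes for pull request 2076 ("reuses all of the core context's query functionality") would presumably let WebGL 2.0 code time a draw with the core query entry points. A sketch based on that description as an assumption, not on the merged spec:

    var ext = gl.getExtension("EXT_disjoint_timer_query"); // gl: a WebGL 2 context
    if (ext) {
      var query = gl.createQuery(); // core WebGLQuery; no separate EXT object type
      gl.beginQuery(ext.TIME_ELAPSED_EXT, query);
      // ... draw calls to be timed ...
      gl.endQuery(ext.TIME_ELAPSED_EXT);

      // Poll later with the core accessors.
      if (gl.getQueryParameter(query, gl.QUERY_RESULT_AVAILABLE) &&
          !gl.getParameter(ext.GPU_DISJOINT_EXT)) {
        var elapsedNs = gl.getQueryParameter(query, gl.QUERY_RESULT); // nanoseconds
      }
    }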