[Public WebGL] dual GPU setups default to integrated GPU

Kenneth Russell [email protected]
Mon Jan 30 18:06:05 PST 2017


Thanks Rafael for the feedback.

Is there any way (with a public Windows API) to compose D3D textures
rendered on the integrated GPU with textures rendered on the discrete GPU?
Does DirectComposition support that?
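
Concretely, the sort of thing I have in mind is sketched below: two swap
chains, each created on a different adapter's D3D11 device, attached to
visuals in a single DirectComposition tree. All names are illustrative,
swap-chain creation is omitted, and whether SetContent accepts content
rendered on a different adapter than the composition device's is exactly
the part I'm unsure about:

    // Link: d3d11.lib dxgi.lib dcomp.lib (Windows 8+)
    #include <d3d11.h>
    #include <dxgi1_2.h>
    #include <dcomp.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // scIntegrated/scDiscrete are composition swap chains created via
    // IDXGIFactory2::CreateSwapChainForComposition on D3D11 devices that
    // live on the integrated and discrete adapters, respectively.
    void ComposeBothGpus(HWND hwnd, ID3D11Device* integratedDevice,
                         IDXGISwapChain1* scIntegrated,
                         IDXGISwapChain1* scDiscrete) {
        ComPtr<IDXGIDevice> dxgiDevice;
        integratedDevice->QueryInterface(IID_PPV_ARGS(&dxgiDevice));

        // The composition device is backed by the integrated GPU here.
        ComPtr<IDCompositionDevice> dcomp;
        DCompositionCreateDevice(dxgiDevice.Get(), IID_PPV_ARGS(&dcomp));

        ComPtr<IDCompositionTarget> target;
        dcomp->CreateTargetForHwnd(hwnd, TRUE, &target);

        ComPtr<IDCompositionVisual> root, vi, vd;
        dcomp->CreateVisual(&root);
        dcomp->CreateVisual(&vi);
        dcomp->CreateVisual(&vd);

        vi->SetContent(scIntegrated);  // rendered on the iGPU
        vd->SetContent(scDiscrete);    // rendered on the dGPU -- the
                                       // cross-adapter case in question
        root->AddVisual(vi.Get(), TRUE, nullptr);
        root->AddVisual(vd.Get(), TRUE, vi.Get());
        target->SetRoot(root.Get());
        dcomp->Commit();
    }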

Thanks,

-Ken



On Mon, Jan 30, 2017 at 5:58 PM, Rafael Cintron <
[email protected]> wrote:

> Currently, when Edge enumerates adapters with D3D, it always picks adapter
> #0 for all content, including WebGL content.  By default, adapter #0 is the
> iGPU on hybrid machines.  Users can customize this behavior by either using
> the custom IHV control panel or checking the “Use software rendering
> instead of GPU rendering” checkbox.  The latter, of course, will use WARP
> for rendering.
>
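> To make that concrete, here is a minimal DXGI enumeration sketch
> (illustrative and untested; adapter #0 is also what you get when you pass
> a null adapter to D3D11CreateDevice):
>
>     // Build: cl /EHsc enum_adapters.cpp dxgi.lib
>     #include <cstdio>
>     #include <dxgi.h>
>     #include <wrl/client.h>
>     using Microsoft::WRL::ComPtr;
>
>     int main() {
>         ComPtr<IDXGIFactory1> factory;
>         if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
>             return 1;
>         // Adapter #0 comes first; on hybrid machines it is usually the iGPU.
>         for (UINT i = 0; ; ++i) {
>             ComPtr<IDXGIAdapter1> adapter;
>             if (factory->EnumAdapters1(i, &adapter) == DXGI_ERROR_NOT_FOUND)
>                 break;
>             DXGI_ADAPTER_DESC1 desc = {};
>             adapter->GetDesc1(&desc);
>             wprintf(L"Adapter %u: %s (%u MB dedicated VRAM)\n",
>                     i, desc.Description,
>                     (unsigned)(desc.DedicatedVideoMemory >> 20));
>         }
>         return 0;
>     }
>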
> D3D11 allows you to share textures between D3D11 devices running on the
> same adapter.  However, resource sharing does not work between adapters.
> Developers that need to migrate resources between adapters can do so by
> manually copying the data to the CPU from the source adapter and uploading
> it to a resource of the same type on the destination adapter.  Not all
> resources can be easily read back and restored in this manner.
>
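> A sketch of that manual round trip (my illustration only; it assumes a
> single-subresource, CPU-readable format and omits error handling):
>
>     #include <d3d11.h>
>     #include <wrl/client.h>
>     using Microsoft::WRL::ComPtr;
>
>     // Copy 'src' from deviceA (e.g. the dGPU) to a new texture on deviceB
>     // (e.g. the iGPU) by staging through the CPU.
>     ComPtr<ID3D11Texture2D> CopyAcrossAdapters(ID3D11Device* deviceA,
>                                                ID3D11DeviceContext* ctxA,
>                                                ID3D11Device* deviceB,
>                                                ID3D11Texture2D* src) {
>         D3D11_TEXTURE2D_DESC desc;
>         src->GetDesc(&desc);
>
>         // 1) GPU -> CPU: copy into a staging texture on the source device.
>         D3D11_TEXTURE2D_DESC stagingDesc = desc;
>         stagingDesc.Usage = D3D11_USAGE_STAGING;
>         stagingDesc.BindFlags = 0;
>         stagingDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
>         stagingDesc.MiscFlags = 0;
>         ComPtr<ID3D11Texture2D> staging;
>         deviceA->CreateTexture2D(&stagingDesc, nullptr, &staging);
>         ctxA->CopyResource(staging.Get(), src);
>
>         // 2) Map (stalls until the copy finishes) and use the bits as
>         //    initial data for a texture on the destination device.
>         D3D11_MAPPED_SUBRESOURCE mapped;
>         ctxA->Map(staging.Get(), 0, D3D11_MAP_READ, 0, &mapped);
>         D3D11_SUBRESOURCE_DATA init = { mapped.pData, mapped.RowPitch, 0 };
>         ComPtr<ID3D11Texture2D> dst;
>         deviceB->CreateTexture2D(&desc, &init, &dst);
>         ctxA->Unmap(staging.Get(), 0);
>         return dst;
>     }
>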
> --Rafael
>
> *From:* [email protected] [mailto:[email protected]] *On Behalf Of
> *Kenneth Russell
> *Sent:* Monday, January 30, 2017 5:45 PM
> *To:* Maksims Mihejevs <[email protected]>
> *Cc:* Ben Adams <[email protected]>; Florian Bösch <[email protected]>;
> public <[email protected]>
> *Subject:* Re: [Public WebGL] dual GPU setups default to integrated GPU
>
> preferLowPowerToHighPerformance applies equally to WebGL 2.0 and WebGL
> 1.0. It wasn't dropped from the spec.
>
> https://www.khronos.org/registry/webgl/specs/latest/2.0/#2.2 requires
> that a few of the context creation attributes (such as antialias) be
> honored, but that's because multisampled renderbuffers are a mandatory
> part of the OpenGL ES 3.0 spec.
>
> I'm not a D3D expert, so I don't know how feasible it is to render to a
> texture on one D3D device and display it on another. To the best of my
> knowledge, Edge doesn't dynamically activate the discrete GPU when WebGL's
> active and switch back to the integrated GPU when it isn't. It uses the
> "0th" GPU, whatever that is according to the control panel settings.
>
> -Ken
>
> On Tue, Jan 24, 2017 at 6:30 AM, Maksims Mihejevs <[email protected]>
> wrote:
>
> Even my home laptop uses the integrated GPU for Chrome by default,
> regardless of battery/plug. The NVIDIA Control Panel has per-program
> presets, and I've seen it set to Integrated by default.
>
> On 24 January 2017 at 14:02, Ben Adams <[email protected]> wrote:
>
> Shouldn't the dGPU always be used when on power, so that this is only a
> decision that affects the choice when on battery?
>
> On 24 January 2017 at 13:00, Maksims Mihejevs <[email protected]> wrote:
>
> We've seen the same problem: a lot of people with Windows laptops and
> dual GPUs get the integrated GPU by default.
>
> This is not the case for Edge, for example, which gives it an advantage
> over other browsers on Windows laptops.
>
> On 24 Jan 2017 9:26 a.m., "Florian Bösch" <[email protected]> wrote:
>
> On dual-GPU laptops, such as some brands of Windows and macOS laptops,
> the OS has a GPU-switching function that switches to the discrete GPU for
> graphics-intensive applications (such as games, CAD software, etc.).
>
> However, when a browser is running, it is often the case that the
> integrated GPU is used, regardless of whether a tab is doing something
> graphics-intensive with WebGL or not
> (https://twitter.com/grorgwork/status/823719997616701440).
>
> On Windows, a user can influence this through a series of complicated
> steps to designate the preferred GPU, but it is a machine-wide setting,
> and sometimes it was ignored (I don't know if that's still the case).
>
> Obviously this presents a problem for WebGL developers. We would neither
> want to drain a user's battery unnecessarily, nor force a user with a
> discrete GPU to get worse performance should they wish to use a
> graphics-intensive WebGL application.
>
> In WebGL1 there was a context creation flag called
> "preferLowPowerToHighPerformance", but I'm not sure how widely it was
> implemented, and apparently it's also ignored on macOS (it defaults to
> false, yet the discrete GPU is still not used).
>
> WebGL2 has no equivalent context creation flag.
>
> Questions:
>
>    1. It would seem we have an insufficient mechanism to express a GPU
>    preference. Is this a correct assessment?
>    2. Why was preferLowPowerToHighPerformance dropped from WebGL2?
>    3. Why is preferLowPowerToHighPerformance ignored for WebGL1 on some
>    configurations where it would be most useful?
>    4. Should an additional mechanism be introduced so a user can switch
>    a tab between GPUs at their choice?