[Public WebGL] Re: webgl/swiftshader
Wed Feb 22 05:59:32 PST 2012
The trouble with the software/hardware flag is that some implementations
straddle the line. For example, most (all?) Intel graphics chips don't
have vertex shader hardware - so they do that by running the shader in
CPU-side software. Should they fess up to that and flag themselves as software?
Maybe...but whichever they did would be wrong for someone.
If your application is vertex-shader-heavy then you really would like
Intel GPUs to identify themselves as software implementations - but if
you only draw 100 full-screen quads a frame but do a huge amount of
high-resolution, MSAA, pixel-shader stuff then you'd like it to tell you
that it's a hardware implementation. Once you start breaking the
implementation down into pieces, you end up in a world of hurt over which
flags are set by whom.
This debate was out there at the very beginning of OpenGL - when SGI were
in the transition from IrisGL. The decision then was to expose the
GL_VENDOR/GL_RENDERER/GL_VERSION strings so that the application could
blacklist/whitelist or employ different algorithms where needed.
In theory that's doable in WebGL - except that Firefox doesn't give you
access to the real data because of the "server-side-cookie" issue.
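For comparison, here is a minimal sketch (TypeScript) of how an application
can read those strings in WebGL, assuming the browser exposes the
WEBGL_debug_renderer_info extension - which, as noted above, some browsers
decline to do:

  // Read the vendor/renderer/version strings a WebGL implementation reports.
  // The UNMASKED_* values need the WEBGL_debug_renderer_info extension, which
  // some browsers (e.g. Firefox, for the reasons mentioned above) may withhold.
  const canvas = document.createElement("canvas");
  const gl = canvas.getContext("webgl");
  if (gl) {
    const info = gl.getExtension("WEBGL_debug_renderer_info");
    const vendor = info ? gl.getParameter(info.UNMASKED_VENDOR_WEBGL)
                        : gl.getParameter(gl.VENDOR);     // masked fallback
    const renderer = info ? gl.getParameter(info.UNMASKED_RENDERER_WEBGL)
                          : gl.getParameter(gl.RENDERER);
    const version = gl.getParameter(gl.VERSION);
    // A blacklist/whitelist or an algorithm switch would key off these strings,
    // e.g. /SwiftShader|llvmpipe|Software/i.test(renderer).
    console.log(vendor, renderer, version);
  }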
Ashley Gullen wrote:
> Why not just introduce some flag which indicates whether the implementation
> is rendering with specialised hardware (GPU) or not (software)? Then each
> application developer can test their individual apps both ways and see if
> for practical purposes both work, or if only one works. Then each
> developer can either support both, require GPU support, or have some
> notification that says something like "Your computer can run this content,
> but it may be slow. Are you sure you want to continue?" By default it
> would use whatever's available so the developer would have to intervene to
> either require GPU support or introduce a notification.
> Sure, some beefy CPUs may be able to just out-do low-class GPU hardware,
> but I think the vast majority of software-rendering cases will be ordinary
> low-mid power computers with perfectly capable GPUs that are blacklisted
> due to old drivers. (Judging by the mozilla link earlier, 50% of people
> don't get hardware accelerated WebGL.) As noted previously, most of the
> time any GPU will far outdo software rendering. So for practical purposes
> a flag indicating software rendering is a good heuristic that rendering
> will basically be slow.
> IMO running performance tests on startup to judge performance is out of the
> question because every single different application needs its own
> performance test. This likely will require special coding and the
> developer may not even create a fair representation, resulting in bad data
> and the wrong renderer chosen. Personally I have no idea how I would
> create a single fair test for any particular game that always chooses the
> best renderer for the vast array of system configurations out there. What
> about a game with loads of different levels, each with different rendering
> characteristics and heavily dependent on user input and skill level - how
> do you make a worthwhile five second test for that? I don't think it can
> be solved by frameworks either, because if there was some generalised test
> that could always pick the best renderer, we wouldn't have this problem in
> the first place.
> So I think there really ought to be a software/hardware rendering flag,
> that's a good heuristic. For eye candy on a web page, just don't show it
> if it's going to be software rendered, and fall back to an image or video.
> For a simple game, you can probably get by with software rendering. For
> Crysis 3: Browser Wars, require hardware acceleration. That's still better
> than the "no WebGL, no content" situation that we face without software
> rendering.
> My 2c.
> Ashley Gullen
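For illustration, a sketch of how an application might consume the kind of
flag proposed above; the softwareRendering attribute is purely hypothetical
(no such flag exists in WebGL), and the helper functions are placeholders:

  // Hypothetical flag: nothing like "softwareRendering" exists in WebGL today.
  // The attribute name and the two helpers below are placeholders used only
  // to illustrate the proposed workflow.
  declare const canvas: HTMLCanvasElement;           // assumed to be in the page
  function showFallbackImage(): void { /* swap in an <img> or <video> */ }
  function startGame(opts: { lowDetail: boolean }): void { /* launch the app */ }

  const gl = canvas.getContext("webgl") as
    (WebGLRenderingContext & { softwareRendering?: boolean }) | null;

  if (!gl) {
    showFallbackImage();                              // no WebGL at all
  } else if (gl.softwareRendering) {
    // Developer's choice: refuse, warn, or quietly scale back.
    if (confirm("Your computer can run this content, but it may be slow. Continue?")) {
      startGame({ lowDetail: true });
    } else {
      showFallbackImage();
    }
  } else {
    startGame({ lowDetail: false });
  }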
> On 21 February 2012 15:36, John Tamplin <[email protected]> wrote:
>> On Mon, Feb 20, 2012 at 2:14 PM, Steve Baker <[email protected]> wrote:
>>> If you're doing something like selling widgets on the Internet - then
>>> users with GPUs should get a 3D rendering of your new SuperWidget-3000
>>> that they can spin around and interact with - and low end users should get
>>> a movie clip or a photo gallery of still images or something. If you care
>>> enough about getting your message to all of the users out there then this
>>> is a small price to pay.
>>> But - if you're a game designer - there is nothing worse than to find a
>>> valid WebGL environment and then discover that it takes 30 seconds to
>>> render a single frame and to have to resort to disgusting timing tricks to
>>> try to deduce whether this is a GPU or not. I'd *much* rather a system
>>> report "Sorry, you need a machine that can run WebGL" than to have it try
>>> to run my game at 0.03Hz.
>> Maybe the answer is to have some way to ask the question "I have x
>> triangles and need y fps and features z, what is the likelihood this
>> implementation can deliver that". Given that, the app can make a
>> reasonable decision of telling the user "sorry, you need to upgrade to run
>> this app", running as-is, or scaling back.
>> As I understand it, there isn't any way to do this now other than to try
>> it and time the result, which gives a poor user experience anyway. The
>> alternative is to detect and whitelist configurations, or leave it up to
>> the user -- "here is the known-good version, you can try the WebGL version
>> if you are brave".
>> John A. Tamplin
>> Software Engineer (GWT), Google
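A crude sketch of the "try it and time the result" probe described above,
assuming the application supplies its own representative draw routine; the
thresholds and the names in the usage comment are placeholders:

  // Startup timing probe: run the caller's representative draw routine for
  // ~60 frames and report the median frame time in milliseconds.
  function probeFrameTime(draw: () => void,
                          onDone: (msPerFrame: number) => void): void {
    const samples: number[] = [];
    let last = performance.now();
    function tick(now: number): void {
      draw();
      samples.push(now - last);
      last = now;
      if (samples.length < 60) {
        requestAnimationFrame(tick);
      } else {
        samples.sort((a, b) => a - b);
        onDone(samples[Math.floor(samples.length / 2)]);  // median frame time
      }
    }
    requestAnimationFrame(tick);
  }

  // Usage (names below stand in for the application's own code):
  // probeFrameTime(renderRepresentativeScene, ms => {
  //   if (ms > 33) warnThatRenderingMayBeSlow();          // under ~30 fps
  // });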