[Public WebGL] WebVulkan and multithreaded command queue assembly
Wed Mar 4 04:07:42 PST 2015
Do we already know how exactly multithreaded command buffer building
works in Vulkan? My guess is that Vulkan itself doesn't care/know
about threads but instead the application maps a pointer to resource
memory (e.g. a command buffer) on the main thread, and a thread
directly writes to the mapped memory area, signals back to the main
thread when it's done, and the main thread enqueues the command buffer
(as far as I know, the application is completely responsible for
resource management in those new low-level 3D APIs, like making sure
that the CPU doesn't write to memory that the GPU is currently using).
This is the most efficient way to set up a resource since it doesn't
involve extra data copying, and is also recommended for other types of
resources in modern GL (e.g., have a single GL context on the main
thread, create a few, big, persistently mapped buffers there, and
write to them from other threads, while making sure that no areas in
the buffer are written that are pending for rendering).
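As a sketch of that "persistently mapped buffer partitioned into regions" pattern (all names and sizes here are illustrative and hypothetical; a plain Float32Array stands in for mapped GPU memory, and a completed-frame counter stands in for a fence):

```javascript
// Simulates the pattern described above: one large mapped buffer is split
// into regions, and a region may only be rewritten once the "GPU" has
// finished the frame that last used it (which a real fence would report).
const REGIONS = 3;                       // triple buffering
const REGION_SIZE = 4;                   // floats per region (tiny for the demo)
const mapped = new Float32Array(REGIONS * REGION_SIZE); // stands in for mapped memory

let frame = 0;            // frames submitted so far
let gpuCompleted = -1;    // last frame the simulated GPU has finished

function writeFrameData(values) {
  const region = frame % REGIONS;
  // Safe only if the GPU finished the frame that last used this region.
  if (frame - REGIONS >= 0 && gpuCompleted < frame - REGIONS) {
    throw new Error("region still pending for rendering; must wait on fence");
  }
  mapped.set(values, region * REGION_SIZE);
  return region;
}

// Drive a few frames; pretend the GPU completes each frame right after submit.
for (; frame < 5; frame++) {
  writeFrameData([frame, frame, frame, frame]);
  // a real submit of this region would go here; the fence signal is simulated:
  gpuCompleted = frame;
}
console.log(Array.from(mapped)); // the last three frames' data, one per region
```

With a slower simulated GPU the throw would fire, which is exactly the "don't write areas pending for rendering" constraint.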
While this is the most efficient way to set up resources, I don't see
how this would work in the browser (e.g. with renderers living in
another process, etc.). Maybe SharedArrayBuffers would at least help to
eliminate some memory copies (I think this will become the preferred
solution for emscripten's pthreads-like threading implementation).
On Wed, Mar 4, 2015 at 7:38 AM, Florian Bösch <[email protected]> wrote:
> Modify JS engines to be thread safe (at least when run in a WebWorker) and
> add multithreading to WebWorkers. Give WebWorkers direct Vulkan access.
> Problem solved.
> With Vulkan, Khronos is introducing its first API that supports command
> queue assembly and sensibly made it fit for multi-threaded command queue
> assembly (note that it says multi-threaded and not multi-processed, this
> will prove important in just a few paragraphs).
> To understand why this matters it's important to realize two things:
> 1) Today's CPUs usually have multiple cores (anything from 2 to 8 usually)
> and they can allocate execution time on several cores for several threads in
> the same process. So modern threads truly can run in parallel.
> 2) Games and graphics-heavy applications often have CPU-intensive tasks that
> can run in parallel (such as, say, running a physics engine and a LOD engine).
> Today on the web we cannot do threads. On a personal note, I don't like
> threads much, I prefer multiprocessing. The reason we cannot use threads
> mainly has to do with the fact that neither the DOM nor the JS
> interpreter/compiler are thread safe. In the case of JS, it's quite a common
> theme among scripting languages to suffer the GIL effect (global interpreter
> lock), where the interpreter uses some resources and logic that aren't
> thread safe and hence imposes a lock on operations by the scripting language
> (this leads to synchronization among would-be threads and hence renders them
> largely inefficient).
> The alternative for the Web today to deal with this problem is web-workers.
> But this isn't a good way to go about it, because it would require a fairly
> complex (and also globally locked) mechanism to transport commands/data to
> and from workers so that they can be emitted against a Vulkan context.
> Additionally such mechanisms (usually implemented over shared memory of some
> kind) also tend to be somewhat less efficient (because they rely on IPC)
> than threads, which can share a memory space (although threads have their own
> pitfalls, of course).
> As long as there are APIs/drivers that assume threads as their primary way to
> facilitate parallelization, the best fit to serve these is to actually have
> OS-level threads. You can make other forms of parallelization work, but
> those will always be suboptimal solutions.
> Of course this leaves the web in somewhat of a pickle, because you cannot
> introduce threads on the JS main thread. It's largely futile to try to make
> the DOM and various JS APIs thread safe, and even if you could, you'd mostly
> just end up synchronizing the threads again.
> So a relatively "simple" (in quotes here because it still requires a major
> overhaul to the JS engine) solution to this would be to introduce a JS
> realtime context of a sort. In this context, JS is executed in its own
> process, and it only has access to a limited set of APIs (XHRs, WebSockets,
> Vulkan Contexts, WebRTC and some other minor APIs) but crucially not the DOM
> and its ilk.
> This is largely what WebWorkers are today. The proposed change would be to
> overhaul the JS engine to be able to effectively multithread (this might
> be more or less difficult depending on which one it is), and add the
> capability to spawn threads from JS to JS if it runs in such a performance
> context (or a WebWorker).
> I believe this to be a far more attainable goal than the various complex
> workarounds attempted to make multi-processed rendering work (canvas
> proxies, what have you). It's more attainable because it can ignore most of
> the complexities of dealing with the rest of the browser and concentrate on
> providing a clean context to run non DOM related code against an API
> (Vulkan) that's designed to work well in that environment.
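The "mechanism to transport commands/data to and from workers" that the quoted mail alludes to could be sketched as a single-producer/single-consumer ring buffer in shared memory (a hypothetical illustration with made-up names; shown single-threaded so the mechanics stay visible, but with exactly one producer and one consumer the Atomics keep it lock-free):

```javascript
// Commands recorded by a worker would be pushed into a shared ring buffer;
// the thread that owns the graphics context drains and emits them.
const CAPACITY = 8; // must be a power of two for the demo's modulo indexing
const ring = new Int32Array(new SharedArrayBuffer((CAPACITY + 2) * 4));
const HEAD = CAPACITY, TAIL = CAPACITY + 1; // slots holding the two cursors

function push(cmd) {            // producer side (would run in the worker)
  const head = Atomics.load(ring, HEAD);
  const tail = Atomics.load(ring, TAIL);
  if (head - tail === CAPACITY) return false;   // full
  ring[head % CAPACITY] = cmd;
  Atomics.store(ring, HEAD, head + 1);          // publish only after the write
  return true;
}

function pop() {                // consumer side (context-owning thread)
  const head = Atomics.load(ring, HEAD);
  const tail = Atomics.load(ring, TAIL);
  if (head === tail) return null;               // empty
  const cmd = ring[tail % CAPACITY];
  Atomics.store(ring, TAIL, tail + 1);
  return cmd;
}

[101, 102, 103].forEach(push);  // "record" three opaque command IDs
const drained = [];
let c;
while ((c = pop()) !== null) drained.push(c);
console.log(drained); // [101, 102, 103]
```

Even in this toy form the bookkeeping (cursor publication order, full/empty checks) hints at the complexity the mail is arguing against relative to plain same-address-space threads.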
You are currently subscribed to [email protected]