[Public WebGL] WebGL2 and no mapBuffer/mapBufferRange

Mark Callow [email protected]
Fri Mar 6 01:48:09 PST 2015

> On Mar 6, 2015, at 9:59 AM, Mark Callow <[email protected]> wrote:
> If implementation is a problem, how about requiring that MAP_READ_BIT | MAP_WRITE_BIT must always be specified?

Shortly after writing this I realized it was a bad idea, but I was not in a position to correct myself until now. Why? Because it forces any implementation (WebGL or OpenGL) that implements map via copy to always copy in both map and unmap.

I think the reason these access bits exist is to allow such implementations to skip the copy in map (when only MAP_WRITE_BIT is set) or in unmap (when only MAP_READ_BIT is set). For implementations that actually map the buffers there is probably no performance difference between single access and RW access.
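To make the cost concrete, here is a minimal sketch of a copy-based map/unmap, not taken from the thread: the Buffer struct and function names are invented, and the bit values mirror the GL tokens purely for illustration. The download happens only when the caller asked to read, and the upload only when the caller could have written; forcing MAP_READ_BIT | MAP_WRITE_BIT on every call would force both memcpys on every map/unmap cycle.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical access bits; values match the OpenGL ES 3.0 tokens
 * for illustration only. */
#define MAP_READ_BIT  0x0001
#define MAP_WRITE_BIT 0x0002

typedef struct {
    unsigned char *store;   /* the "GPU side" copy of the buffer   */
    size_t size;
    unsigned char *scratch; /* client copy handed out by map       */
    int access;
} Buffer;

/* Copy-based map: download only if the caller asked to read. */
void *mapBufferRange(Buffer *b, int access) {
    b->access = access;
    b->scratch = malloc(b->size);
    if (access & MAP_READ_BIT)
        memcpy(b->scratch, b->store, b->size); /* copy #1: read path only  */
    return b->scratch;
}

/* Copy-based unmap: upload only if the caller could have written. */
void unmapBuffer(Buffer *b) {
    if (b->access & MAP_WRITE_BIT)
        memcpy(b->store, b->scratch, b->size); /* copy #2: write path only */
    free(b->scratch);
    b->scratch = NULL;
}
```

With single-access bits, a streaming upload (write-only map) costs one copy and a readback (read-only map) costs one copy; requiring both bits doubles that in every case.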

WebGL implementations that would copy during map/unmap could very easily enforce errors for bad applications, without any changes to ArrayBuffers, by not copying as described above and, for the write-only case, providing a buffer initialized to 0 to prevent data leakage.
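The write-only case above can be sketched in one line; the helper name is invented, not anything from the thread or the GL API. Because no download happens, the scratch memory must be zero-initialized so that an erroneous read observes only zeros rather than leaked contents of previously freed memory.

```c
#include <stdlib.h>

/* Hypothetical copy-avoiding write-only map: returns zero-initialized
 * client memory (calloc, not malloc), so a buggy application that reads
 * from a write-only mapping cannot observe stale data. */
void *mapWriteOnlyScratch(size_t size) {
    return calloc(1, size); /* zeroed; no download from the buffer */
}
```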

Unfortunately the only way I see for WebGL implementations that would actually map buffers to work is to specify RW access when calling the underlying OpenGL. This would be fine for correct applications (see above regarding performance), but it would mean that bad applications run without error. Thus bad apps would work on some implementations but not on others, which is unacceptable for WebGL.

So I think supporting MapBuffer does require read-only and write-only ArrayBuffers.


