The intention of parallelism, Herman stressed, is "unlocking the power lurking inside our devices: GPUs, SIMD instructions, and multiple processor cores." He added, "With the emerging WebGL 2.0 and SIMD standards, the Web is making significant progress on the first two. And Web Workers go some part of the way toward enabling multicore parallelism." Web Workers provide a way for Web content to run scripts in background threads, but they are isolated, Herman said.
To achieve the goal of parallelism, Mozilla is experimenting with a SharedArrayBuffer API in SpiderMonkey. The company is drafting a specification for the API, with a prototype implementation available in Firefox Nightly builds, said Herman, who also noted that Mozilla is looking for users to provide feedback.
A SharedArrayBuffer type with built-ins for locking introduces new forms of blocking to workers, plus the possibility that some objects could be subject to data races, Herman explained. "But unlike [the Nashorn project in Java], this is only true for objects that opt in to using shared memory as a backing store -- if you create an object without using a shared buffer, you know for sure that it can never race. And workers do not automatically share memory; they have to coordinate up front to share an array buffer," said Herman.
Mozilla and Intel Labs, meanwhile, have done work with deterministic parallelism APIs. "The goal of these experiments was to find high-level abstractions that could enable parallel speedups without any of the pitfalls of threads," said Herman. "This is a difficult approach, because it's hard to find high-level models that are general enough to suit a wide variety of parallel programs."