Should I Be Using WebAssembly?


01/09/2019

Since all major browsers began officially supporting WebAssembly in November 2017, interest in the new technology has exploded. Perhaps you’ve seen an impressive in-browser Unreal Engine 4 WebAssembly demo. Or maybe you’ve had the chance to play with a full in-page operating system port like WebAssembly Windows 2000.

While such demos are exciting, they fail to convey immediate use cases for WebAssembly in the majority of web applications today. Under the hood, many otherwise ordinary web applications are beginning to leverage WebAssembly for a host of reasons. However, WebAssembly is not a panacea. In this article, I will highlight two big use cases for WebAssembly as well as the resulting tradeoffs.
 

Performance

The most common-sense reason to use WebAssembly is to accelerate performance-critical logic. That aim is even part of WebAssembly's stated design goals. It is achieved in two major ways.

First, by requiring explicit data types, WebAssembly eliminates a large swath of the abstraction work a JavaScript runtime traditionally performs. Various other low-level optimizations are expected to have already been done by the compiler, so less optimization work is needed at runtime as well. In theory, this should mean shorter compile times in the browser and, ultimately, less time before code starts executing on the page. In practice, browser vendors are still tuning this process, and compilation times aren't always faster.

Since the WebAssembly format can be decoded and compiled over a network stream in separate threads, it can be ready to execute as soon as it finishes transferring to the client, provided compilation doesn't take longer than the transfer itself.

The folks at Mozilla have reported streaming compile performance in Firefox of 30-60MB/s on desktop and 8MB/s on an average mobile device. This particularly benefits cases where network and processor speed are both relatively slow; with streaming compilation, the end user is effectively waiting for only the longer of network transfer or compilation time.
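To make the loading model concrete, here is a minimal sketch. The module bytes are hand-encoded purely for illustration; a real application would ship a compiled `.wasm` file. The snippet compiles and instantiates a tiny module exporting an `add` function:

```javascript
// A minimal hand-encoded WebAssembly module exporting add(a, b) -> a + b.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,                   // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,             // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                                           // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,             // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b  // body: local.get 0, local.get 1, i32.add
]);

const module = new WebAssembly.Module(bytes);   // synchronous compile (fine for tiny modules)
const instance = new WebAssembly.Instance(module);
console.log(instance.exports.add(2, 3));        // 5
```

In a browser, `WebAssembly.instantiateStreaming(fetch('module.wasm'))` performs the same compile-and-instantiate step while the bytes are still arriving over the network, which is what enables the streaming numbers above.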

The second way WebAssembly enables performance acceleration is by allowing developers to author logic at a lower level of abstraction. JavaScript is laissez-faire, handling concerns like memory allocation for us and eschewing language constructs such as strong typing.

WebAssembly, as a compile target for low-level languages like C and C++, lets authors control more details of how their code operates and avoids unpredictable runtime-optimizer behavior across browsers. WebAssembly memory is an ArrayBuffer or SharedArrayBuffer acting as a surrogate heap through the Memory API. Unlike JavaScript, WebAssembly specifies no background garbage collector, so it doesn't suffer from unpredictable GC pauses at runtime.
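The linear memory model is visible directly from JavaScript. A minimal sketch, assuming no module is attached and we simply poke bytes by hand:

```javascript
// One WebAssembly page is 64 KiB; 'initial' and 'maximum' are in pages.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 10 });

// The backing store is a plain ArrayBuffer, shared with any module that
// imports this memory; JS reads and writes it through typed-array views.
const heap = new Uint8Array(memory.buffer);
heap[0] = 42;
console.log(memory.buffer.byteLength); // 65536
console.log(heap[0]);                  // 42

// Growing preserves contents but detaches the old buffer,
// so views must be recreated after grow().
memory.grow(1);
console.log(memory.buffer.byteLength); // 131072
```

There is no collector walking this heap; a compiled module (or its toolchain's allocator) manages every byte explicitly.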


Some benchmarks show impressive performance gains for WebAssembly over logically equivalent JavaScript.

In this case, we have a function containing a loop over a very large number of items, performing a calculation on each iteration. My machine completed the micro-benchmark nearly four times faster using WebAssembly than JavaScript.


Source – MultiplyInt


However, there is a performance penalty for crossing between the JavaScript VM and the WebAssembly VM. If you’re familiar with foreign function interfaces, similar constraints apply: if you’re going to access data or invoke functions in an external execution context, do it sparingly. For example, another benchmark repeatedly calls into WebAssembly to grayscale each frame of a video running continuously in the background:

In this case, calling the WebAssembly code to do a relatively small single task for each frame results in a 3x performance decrease. This penalty is shrinking as browsers improve the efficiency of WASM/JS interoperability, but it nonetheless means that a 1:1 WASM drop-in replacement is not an instant performance boost.


Source – VideoGrayScale
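The practical takeaway from the grayscale benchmark is to batch work at the boundary. The sketch below is illustrative only (the `grayscale` export is hypothetical, and the "frame" is a single RGBA pixel for brevity): copy the frame into linear memory in one operation and make one exported call per frame, rather than crossing the boundary per pixel.

```javascript
// Batching at the JS/WASM boundary: one bulk copy in, one call, one view out.
const memory = new WebAssembly.Memory({ initial: 1 });

const frame = new Uint8ClampedArray([10, 20, 30, 255]); // one RGBA pixel

// 1. Copy the whole frame into WASM linear memory in a single operation.
new Uint8Array(memory.buffer).set(frame, 0);

// 2. Make a single exported call that processes every pixel internally:
// instance.exports.grayscale(0, frame.length);   // hypothetical export

// 3. Read the processed frame back through a single view.
const result = new Uint8Array(memory.buffer, 0, frame.length);
console.log(Array.from(result)); // [10, 20, 30, 255] (unchanged; no module attached)
```

The expensive operations here, the copy and the boundary call, now happen once per frame instead of once per pixel.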


Ecosystem Access

Engineers have been able to invoke compiled native code directly from JavaScript for years now, at least in Node.js via node-gyp. It enables access to vast ecosystems outside of JavaScript for cases where NPM didn’t have a particular module or native-code performance was needed.

Client code, however, had no way to leverage native modules from the browser. This has forced architectural decisions like REST API bloat, where client code must reach out to the server to perform work it should be able to do on its own. A domino effect ensues, resulting in unnecessary and expensive additional cloud infrastructure.

WebAssembly gives back this ability to run native code in the browser. Powerful, mature native libraries in fields ranging from machine learning to cryptography become available as client-side problem-solving tools. Increasingly, native-code WebAssembly ports are showing up as NPM packages themselves. At Accusoft, we are already using hunspell-asm, a WebAssembly port of the popular spell-checking library Hunspell, in our new PrizmDoc Editor product because of its proven track record over JS-only alternatives.

More often, however, native code will need to be ported manually to compile to WebAssembly. Some WebAssembly-compatible native code is listed on the emscripten ports list. For these, the remaining work can be as simple as wrapping the native code with a JavaScript API and compiling with the appropriate emscripten flags.

Our WebAssembly Use Case

Recently, we investigated porting native code ourselves to improve the text editing experience in PrizmDoc Editor, where certain constraints make a seemingly simple task performance-intensive. A document editing application has unique text rendering requirements. Different fonts, styles, tab stops, kerning, etc. can be applied not just to blocks of text but also to individual characters within words or even blank spaces. Line wrapping matters; the printed page needs to look just like the virtual page, and the document must render as similarly as possible across operating systems and browsers. Unlike with PDF, the rendered document must remain editable at all times.

We started off using the built-in canvas.measureText method to measure text width after applying a font. It was reasonably fast in Chrome and allowed us to determine whether to render text on a given line or wrap it. But once we needed to render custom underlines, subscripts, superscripts, strikethroughs, and other metrics-sensitive font components, we found that measureText didn't provide the additional metrics we needed, like font ascenders, descenders, and italic angles.

Then we jumped to fontkit to generate this data. It worked! It was reasonably fast in Chrome. It had the APIs we needed. But there was now a noticeable pause when typing that grew larger along with the paragraph. Once a paragraph spanned multiple lines, even a powerful development machine would get bogged down… editing text. We deferred it as a performance edge case to look at later until, during a meeting, a manager happened to open the editor on a new Chromebook and had to wait several painful seconds after typing each character for rendering to complete in the middle of an ordinary three-page document.

We asked ourselves, "How has everyone else been doing it for decades?" Some brief research landed us at harfbuzz, the text layout library used internally by Firefox, ChromeOS, Chrome, LibreOffice, Android, and many other UI frameworks. Since harfbuzz was already in the emscripten ports list, I ported it to WebAssembly and benchmarked it against fontkit:

              Harfbuzz-WASM   Fontkit
Chrome
  Small-1             1.7      102.0
  Small-10            6.6      198.9
  Large-1            10.5      384.4
  Large-10           89.1     2495.3
Firefox
  Small-1             2.4      128.8
  Small-10           14.0      220.8
  Large-1             7.8      629.8
  Large-10          211.8     4566.0

(Note: The left-hand column lists the test loads for getting the horizontal width of a run of text in a particular font. All numbers are in milliseconds; lower is better.)

In cases like this, the winning route is clear-cut, but not all native libraries easily compile to a functional WebAssembly module. Another major caveat is that any linked libraries must also be recompiled, meaning that one must have access to the source code of any third-party library being used. This naturally rules out most commercial SDK products, barring explicit support for WebAssembly from the vendor.

Finally, even if it’s clear that running native code via WebAssembly would be optimal for a given use case, there’s still a problem: writing and maintaining efficient native code. It’s a different skill set from typical modern web development, with a steep learning curve if a team isn’t already up to speed.

Higher-level languages continue to be added to the WebAssembly support list, empowering developers from more backgrounds than C/C++. Some day, even standard JavaScript might be compilable to WebAssembly; work on TypeScript is promising. Until then, WebAssembly will remain a powerful but niche tool for specific use cases.


Cody Owens

About Cody Owens

Cody Owens joined Accusoft as a software engineer in 2016. He currently contributes to the PrizmDoc Viewer and PrizmDoc Editor products. Throughout his career, he has presented on topics including Continuous Delivery and WebAssembly at conferences such as API World and DeveloperWeek. In addition, Cody is a certified AWS Solutions Architect. A graduate of the University of North Carolina at Chapel Hill, Cody's degree is in New Media with a focus on game development. His background includes architectural visualization and digital news publishing. In his spare time, Cody enjoys Paradox strategy games and long-distance cycling in the Tampa Bay area.
