Friday, September 6, 2013

DICOM rendering within pre-HTML5 browsers

Summary: Retrieving, parsing, windowing and displaying DICOM images using only JavaScript, in browsers without HTML5 Canvas, is feasible.

Long Version.

Earlier this year, someone challenged me to display a DICOM image in a browser without resorting to HTML5 Canvas elements, using only JavaScript. This turned out to be rather fun and quite straightforward, largely due to the joy of Google searching to find all the various concepts and problems that other folks had already explored and solved, even if they were intended for other purposes. I just needed to add the DICOM-specific bits. As a consequence it took just a few hours on a Saturday afternoon to figure out the basics and in total about a day's work to refine it and make the whole thing work.

The crude demonstration JavaScript code, hard-wired to download, window (using the values in the DICOM header) and render a particular 16 bit MR image, can be found here and executed from this page. It is fully self-contained and has no dependencies on other JavaScript libraries. The code is ugly as sin, filled with commented out experiments and tests, and references to where bits of code and ideas came from, but hopefully it is short enough to be self-explanatory.

It seems to work in contemporary versions of Safari, Firefox, Opera, Chrome and even IE (although a little more slowly in IE, probably due to the need for some extra array conversion; it worked in IE 10 on Windows 7 but not IE 8 on XP, and I haven't figured out why yet). I was pleased to see that it also works on my Android phones and tablets.

Here is how it works ...

First task - get the DICOM binary object down to the client and accessible via JavaScript. That was an easy one, since as everyone probably knows, the infamous XMLHttpRequest can be used to pull pretty much anything from the server (even though its name implies it was designed to pull XML documents). The way to make it return a binary file is to call its overrideMimeType() method so that no character set conversion is applied to the returned binary stream. This trick is due to Marcus Granado, whose archived blog entry can be found here, and it is also discussed along with other helpful hints at the Mozilla Developer Network site here. There is a little bit of further screwing around needed to handle various Microsoft Internet Explorer peculiarities related to what is returned, not in the responseText, but instead in the responseBody, and this needs an intermediate VBArray to get the job done (discussed in a StackOverflow thread).
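For what it is worth, the essence of that retrieval step looks something like the following sketch (the non-IE path only; the URL parameter and callback names are mine for illustration, and the real code also has the responseBody/VBArray handling for IE):

function fetchDicomFile(url, onLoaded) {
    var req = new XMLHttpRequest();
    req.open("GET", url, true);
    // ask for the raw bytes back, with no character set conversion applied
    req.overrideMimeType("text/plain; charset=x-user-defined");
    req.onreadystatechange = function() {
        if (req.readyState === 4 && (req.status === 200 || req.status === 0)) {
            var text = req.responseText;
            var bytes = new Array(text.length);
            for (var i = 0; i < text.length; ++i) {
                bytes[i] = text.charCodeAt(i) & 0xff;    // keep only the low byte of each "character"
            }
            onLoaded(bytes);
        }
    };
    req.send(null);
}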

Second task - parse the DICOM binary object. Once upon a time, using the bit twiddling functions in JavaScript might have been too slow, but nowadays that does not seem to be the case. It was pretty trivial to write a modest number of lines of code to skip the 128 byte preamble, detect the DICM magic string, and then parse each data element successively, using explicit lengths to skip those that aren't needed, skipping undefined length sequences and items, keeping track of only the values of those data elements that are needed for later stages (e.g., Bits Allocated), and ignoring the rest. Having written just a few DICOM parsers in the past made this a lot easier for me than starting from scratch. I kept the line count down by restricting the input to explicit VR little endian for the time being, not trying to cope with malformed input, and just assuming that the desired data element values were those that occurred last in the data set. Obviously this could be made more robust in the future for production use (e.g., tracking the top level data set versus data sets nested within sequence items), but this was sufficient for the proof of concept.
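To give a flavor of what is involved, here is a greatly simplified sketch of that parsing loop (explicit VR little endian with defined lengths only; the function names are mine, and the real code also copes with undefined length sequences and items and tracks more attributes than the two shown):

function littleEndian16(bytes, offset) {
    return bytes[offset] | (bytes[offset + 1] << 8);
}

function littleEndian32(bytes, offset) {
    return (bytes[offset] | (bytes[offset + 1] << 8) | (bytes[offset + 2] << 16) | (bytes[offset + 3] << 24)) >>> 0;
}

function parseSelectedAttributes(bytes) {
    var wanted = {};
    var offset = 132;                                          // skip the 128 byte preamble and "DICM"
    while (offset < bytes.length) {
        var group   = littleEndian16(bytes, offset);
        var element = littleEndian16(bytes, offset + 2);
        var vr = String.fromCharCode(bytes[offset + 4], bytes[offset + 5]);
        var valueLength, valueOffset;
        if (vr === "OB" || vr === "OW" || vr === "OF" || vr === "SQ" || vr === "UN" || vr === "UT") {
            valueLength = littleEndian32(bytes, offset + 8);   // two reserved bytes then a 32 bit length
            valueOffset = offset + 12;
        } else {
            valueLength = littleEndian16(bytes, offset + 6);   // 16 bit length
            valueOffset = offset + 8;
        }
        if (group === 0x0028 && element === 0x0100) {          // Bits Allocated
            wanted.bitsAllocated = littleEndian16(bytes, valueOffset);
        } else if (group === 0x7fe0 && element === 0x0010) {   // Pixel Data
            wanted.pixelDataOffset = valueOffset;
            wanted.pixelDataLength = valueLength;
        }
        offset = valueOffset + valueLength;                    // skip to the next data element
    }
    return wanted;
}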

Third task - windowing a greater than 8 bit image. It would have been easy to just download an 8 bit DICOM image, whether grayscale or color, since then no windowing from 10, 12 or 16 bits to 8 would be needed, but that wouldn't have been a fair test. I particularly wanted to demonstrate that client-side interactivity using the full contrast and spatial resolution DICOM pixel data was possible. So I used the same approach as I have used many times before, for example in the PixelMed toolkit com.pixelmed.display.WindowCenterWidth class, to build a lookup table, indexed by every possible input value for the DICOM bit depth, containing the values to use for an 8 bit display. I did handle signed and unsigned input, as well as Rescale Slope and Intercept, but for the first cut at this, I have ignored special handling of pixel padding values, and other subtleties.
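A minimal sketch of such a lookup table, assuming unsigned stored pixel values and the usual DICOM linear window mapping (the function name and parameters are mine; signed input and pixel padding are ignored here, as noted above):

function buildWindowingLookupTable(bitsStored, windowCenter, windowWidth, rescaleSlope, rescaleIntercept) {
    var numberOfEntries = 1 << bitsStored;
    var lut = new Array(numberOfEntries);
    var c = windowCenter - 0.5;
    var w = windowWidth - 1;
    for (var storedValue = 0; storedValue < numberOfEntries; ++storedValue) {
        var rescaled = storedValue * rescaleSlope + rescaleIntercept;
        var fraction = (rescaled - c) / w + 0.5;               // linear ramp between the window edges
        if (fraction < 0) fraction = 0;
        else if (fraction > 1) fraction = 1;
        lut[storedValue] = Math.round(fraction * 255);         // 8 bit output for the later GIF stage
    }
    return lut;
}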

These first three tasks are essentially independent of the rendering approach, and are necessary regardless of whether Canvas is going to be used or not.

The fourth and fifth tasks are related - making something the browser will display, and then making the browser actually display it. I found the clues for how to do this in the work of Jeff Epler, who described a tool for creating single bit image files in the browser (client side) to use as glyphs.

Fourth task - making something the browser will display. Since without Canvas one cannot write directly to a window, the older browsers need to be fed something they know about already. An image file format that is sufficient for the task, and which contributes no "loss" in that it can directly represent 8 bit RGB pixels, is GIF. But you say, GIF involves a lossless compression step, with entropy coding using LZW (the compression scheme that was at the heart of the now obsolete patent-related issues with using GIF). Sure it does, but many years ago, Tom Lane (of IJG fame) observed that because of the way LZW works, with an initial default code table in which the code (index) is the same as the value it represents, as long as one emits each value as a code one bit wider than the value itself, and resets the code table periodically, one can just send the original values as if they were entropy coded values. Add a bit of blocking and a few header values, and one is good to go with a completely valid uncompressed (albeit slightly expanded) bitstream that any GIF decoder should be able to handle. This concept is now immortalized in the libungif library, which was developed to be able to create "uncompressed GIF" files to avoid infringing on the Unisys LZW patent. Some of the details are described under the heading of "Is there an uncompressed GIF format?" in the old Graphic File Formats FAQ, which references Tom Lane's original post. In my implementation, I just made 9 bit codes from 8 bit values, added a clear code every 128 values, and made sure to stuff the bits into appropriate length blocks preceded by a length value, and it worked fine. And since I have 8 bit gray scale values as indices, I needed to populate the global color table that maps each gray scale index to an RGB triplet with the same intensity value (since GIF is an indexed color file format, which is why GIF is lossless for 8 bit single channel data, but lossy (needs quantization and dithering) for true color data with more than 256 different RGB values).
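To make that concrete, here is a sketch of the "uncompressed" LZW packing step for 8 bit indexed pixel values (the function name is mine, the GIF headers and global color table are assumed to be written elsewhere, and the real demonstration code differs in detail):

function makeUncompressedGifImageData(pixels) {                // pixels are 8 bit values, one per sample
    var CLEAR = 256, END_OF_INFORMATION = 257;
    var packed = [];
    var bitBuffer = 0, bitCount = 0;

    function emitCode(code) {                                  // append one 9 bit code, least significant bit first
        bitBuffer |= code << bitCount;
        bitCount += 9;
        while (bitCount >= 8) {
            packed.push(bitBuffer & 0xff);
            bitBuffer >>= 8;
            bitCount -= 8;
        }
    }

    emitCode(CLEAR);
    for (var i = 0; i < pixels.length; ++i) {
        if (i > 0 && (i % 128) === 0) {
            emitCode(CLEAR);                                   // reset before the code size would ever grow to 10 bits
        }
        emitCode(pixels[i] & 0xff);                            // each literal value is its own code
    }
    emitCode(END_OF_INFORMATION);
    if (bitCount > 0) {
        packed.push(bitBuffer & 0xff);                         // flush any remaining bits
    }

    var out = [0x08];                                          // LZW minimum code size for 8 bit data
    for (var blockStart = 0; blockStart < packed.length; blockStart += 255) {
        var blockLength = Math.min(255, packed.length - blockStart);
        out.push(blockLength);                                 // each data sub-block is preceded by its length
        for (var j = 0; j < blockLength; ++j) {
            out.push(packed[blockStart + j]);
        }
    }
    out.push(0x00);                                            // zero length block terminates the image data
    return out;
}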

Fifth task - make the browser display the GIF. Since JavaScript in the browser runs in a sort of "sand box" to prevent insecure access to the local file system, etc., it is not so easy to feed the GIF file we just made to the browser, say as an updated IMG reference on an HTML page. It is routine to update an image reference with an "http:" URL that comes over the network, but how does one achieve that with locally generated content? The answer lies in the "data:" URI that was introduced for this purpose. There is a whole web site, http://dataurl.net/, devoted to this subject. Here, for example, is a description of using it for inline images. It turns out that what is needed to display the locally generated GIF is to create a (big) string that is a "data:" URI with the actual binary content Base64 encoded and embedded in the string itself. This seems to be supported by all recent and contemporary browsers. I don't know what the ultimate size limits are for the "data:" URI, but it worked for the purpose of this demonstration. There are actually various online "image to data: URI converters" available, for generating static content (e.g., at webSemantics, the URI Kitchen) but for the purpose of rendering DICOM images this needs to be done dynamically by the client-side JavaScript. Base64 encoding is trivial and I just copied a function from an answer on StackOverflow, and then tacked the Base64 encoded GIF file on the end of a "data:image/gif;base64," string, et voilà!
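Put together, the last step amounts to something like this (the function name is mine, and btoa() is used here only for brevity; the actual demonstration uses a hand-rolled Base64 function, since btoa() is not available in older IE):

function displayGifBytes(gifBytes, imgElement) {
    var binary = "";
    for (var i = 0; i < gifBytes.length; ++i) {
        binary += String.fromCharCode(gifBytes[i] & 0xff);
    }
    imgElement.src = "data:image/gif;base64," + btoa(binary);  // the browser decodes and draws it like any other IMG
}

For example, displayGifBytes(gifBytes, document.getElementById("dicomImage")) is enough to replace whatever the named IMG element was showing before.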

Anyway, not rocket science, but hopefully useful to someone. I dare say that in the long run the HTML Canvas element will make most of this moot, and there are certainly already a small but growing number of "pure JavaScript" DICOM viewers out there. I have to admit it is tempting to spend a little time experimenting more with this, and perhaps even write an entire IHE Basic Image Review Profile viewer this way, using either Canvas or the "GIF/data: URI" trick for rendering. Don't hold your breath though.

It would also be fun to go back through previous generations of browsers to see just how far back the necessary concepts are supported. I suspect that size limits on the "data:" URI may be the most significant issue in that respect, but one could conceivably break the image into small tiles, each of which was represented by a separate small GIF in its own small enough "data:" URI string. I also haven't looked at client-side caching issues. These tend to be significant when one is displaying (or switching between) a lot of images or frames. I don't know whether browsers handle caching of "data:" URI objects differently from those fetched via http, or indeed how they handle caching of files pulled via XMLHttpRequest.

Extending the DICOM parsing and payload extraction stuff to handle other uncompressed DICOM transfer syntaxes would be trivial detail work. For the compressed transfer syntaxes, for single and three channel 8 bit baseline JPEG, one can just strip out the JPEG bit stream from its DICOM encapsulated fragments, concatenate and Base64 encode the result, and stuff each frame in a "data:" URI with a media type of image/jpeg instead of image/gif. The same goes for DICOM encapsulated MPEG, I suppose, though that might really stretch the size limits of the "data:" URI.
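A rough sketch of what that fragment extraction might look like, reusing the little endian helpers from the parsing sketch above (single frame only, Basic Offset Table skipped, and the function name is mine):

function extractEncapsulatedJpeg(bytes, pixelDataValueOffset) {
    var offset = pixelDataValueOffset;
    var jpegBytes = [];
    var firstItem = true;
    while (offset < bytes.length) {
        var group   = littleEndian16(bytes, offset);
        var element = littleEndian16(bytes, offset + 2);
        var length  = littleEndian32(bytes, offset + 4);
        if (group !== 0xfffe || element === 0xe0dd) {          // stop at the sequence delimitation item
            break;
        }
        if (!firstItem) {                                      // the first item is the Basic Offset Table
            for (var i = 0; i < length; ++i) {
                jpegBytes.push(bytes[offset + 8 + i]);
            }
        }
        firstItem = false;
        offset += 8 + length;                                  // item header is 8 bytes, then the fragment itself
    }
    return jpegBytes;                                          // concatenated JPEG bit stream, ready to Base64 encode
}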

Since bit-twiddling is not so bad in JavaScript after all, one could even write a JPEG lossless or JPEG-LS decoder in JavaScript that might not perform too horribly. After all, JPEG-LS was based on LOCO and that was simple enough to fly to Mars, so it should be a cakewalk in a modern browser; it is conceptually simple enough that even I managed to write a C++ JPEG-LS codec for it, some time back. That said, modest compression without requiring an image-specific lossless compression scheme can be achieved using gzip or deflate (zip) with HTTP compression, and may obviate the need to use a DICOM lossless compression transfer syntax, unless the server happens to already have files in such transfer syntaxes.

Doing the JPEG 12 bit DCT process might be a bit of a performance dog, but you never know until someone tries. Don't hold your breath for these from me any time soon though, but if I get another spare Saturday, you never know ...

Oops, spoke too soon, someone has already done a pure JavaScript JPEG decoder ...
 
nanos gigantum humeris insidentes

David
 

4 comments:

Ivan said...

It would be interesting to see window leveling speed. Canvas rendering is really fast. In my own experience we can get 100 fps on MR and CT images, and 20-60 on CR-DR-MG images.

Martin Peacock said...

I recall a JPEG encoder too ( http://ajaxian.com/archives/javascript-jpeg-encoding )

Would BMP not be easier than GIF? Seems to be supported in pretty much all browsers and might make IE a bit easier.

But then.. fun as I'm sure it was - with more and more 'mainstream' sites getting strict about browser versions (IE 7 starting to fall off the edge in some cases), would it not be simpler to assume CANVAS? Just like the transition to JS-on itself is pretty much complete.

Must admit, I hadn't come across data: before - useful one in the armoury, although according to the DataURL.net site, the size limitations are quite severe.

Unknown said...

The data: URI goes back a long way, but has always been subject to significant length restrictions - when I FIRST tried using it for roughly this purpose (rendering DICOM from a server as a monolithic page without image fetching) back in 1998, I failed, as all the browsers I could find then had (undocumented!) limits of about 32k.

Anonymous said...

There is also Emscripten, which can compile C/C++ libraries to JavaScript, at least in theory. So, reuse of the existing libraries may be possible once all dependencies are accounted for.