Sunday, October 26, 2014

Keeping up with Mac Java - Bundling into Executable Apps

Summary: Packaging a Java application into an executable Mac bundle is not difficult, but has changed over time; JavaApplicationStub is replaced by JavaAppLauncher; manually building the package content files and hand-editing the Info.plist is straightforward, but the organization and properties have changed. It is still irritating that JWS/JNLP does not work properly in Safari.

Long Version.

I have long been a fan of Macs and of Java, and I have a pathological aversion to writing single-platform code, if for no other reason than my favorite platforms tend to vanish without much notice. Since I am a command-line weenie, use Xcode only for text editing and never bother much with "integrated development environments" (since they tend to vanish too), I am also a fan of "make", and tend to use it in preference to "ant" for big projects. I am sure "ant" is really cool but editing all those build.xml files just doesn't appeal to me. This probably drives the users of my source code crazy, but c'est la vie.

The relevance of the foregoing is that my Neanderthal approach makes keeping up with Apple's and Oracle's changes to the way in which Java is developed and deployed on the Mac a bit of a challenge. I do need to keep up, because my primary development platform is my Mac laptop, since it has the best of all three "worlds" running on it, the Mac stuff, the Unix stuff and the Windows stuff (under Parallels), and I want my tools to be as useful to as many folks as possible, irrespective of their platform of choice (or that which is inflicted upon them).

Most of the tools in my PixelMed DICOM toolkit, for example, are intended to be run from the command line, but occasionally I try to make something vaguely useful with a user interface (not my forte), like the DoseUtility or DicomCleaner. I deploy these as Java Web Start, which fortunately continues to work fine for Windows, as well as for Firefox users on any platform, but since an unfortunate "security fix" from Apple, is not so great in Safari anymore (it downloads the JNLP file, which you have to go find and open manually, rather than starting automatically; blech!). I haven't been able to find a way to restore JNLP files to the "CoreTypes safe list", since the "XProtect.meta.plist" and "XProtect.plist" files in "/System/Library/CoreServices/CoreTypes.bundle/Contents/Resources/" don't seem to be responsible for this undesirable change in behavior, and I haven't yet found an editable file that is.

Since not everyone likes JWS, and in some deployment environments it is disabled, I have for a while now also been creating selected downloadable executable bundles, both for Windows and the Mac.

Once upon a time, the way to build Mac applications was with a tool that Apple supplied called "jarbundler". This did the work of populating the tree of files that constitute a Mac application "bundle"; every Mac application is really a folder called "something.app", and it contains various property files and resources, etc., including a binary executable file. In the pre-Oracle days, when Apple supplied its own flavor of Java, the necessary binary file was "JavaApplicationStub", and jarbundler would stuff that into the necessary place when it ran. There is obsolete documentation of this still available from Apple.

Having used jarbundler once, to see what folder structure it made, I stopped using it and just manually cut and pasted stuff into the right places for each new application, mirrored what jarbundler did to the Info.plist file when JVM options needed to be added (such as to control the heap size), populated the resources with the appropriate jar files, updated the classpaths in Info.plist, etc. Automating updates to such predefined structures in the Makefiles was trivial. Since I was using very little if anything that was Apple-JRE specific in my work, when Apple stopped doing the JRE and Oracle took over, it had very little impact on my process. So now I am in the habit of using various bleeding-edge OpenJDK versions depending on the phase of the moon, and everything still seems to work just fine (putting aside changes in the appearance and performance of graphics, a story for another day).

Even though I have been compiling to target the 1.5 JVM for a long time, just in case anybody was still on such an old unsupported JRE, I finally decided to bite the bullet and switch to 1.7. This seemed sensible when I noticed that Java 9 (with which I was experimenting) would no longer compile to such an old target. After monkeying around with the relevant javac options (-target, -source, and -bootclasspath) to silence various (important) warnings, everything seemed good to go.

Until I copied one of these 1.7 targeted jar files into a Mac application bundle, and thought hey, why not rev up the JVMVersion property from "1.5+" to "1.7+"? Then it didn't work anymore and gave me a warning about "unsupported versions".

Up to this point, for years I had been smugly ignoring all sorts of anguished messages on the Mac Java mailing list about some new tool called "appbundler" described by Oracle, and the Apple policy that executable apps could no longer depend on the installed JRE, but instead had to be bundled with their own complete copy of the appropriate JRE (see this link). I was content being a fat dumb and happy ostrich, since things were working fine for me, at least as soon as I disabled all that Gatekeeper nonsense by allowing apps from "anywhere" to run (i.e., not just from the App Store, and without signatures), which I do routinely.

So, when my exposed ostrich butt got bitten by my 1.7 target changes (or whatever other incidental change was responsible), I finally realized that I had to either deal with this properly, or give up on using and sharing Mac executables. Since I have no idea how many, if any, users of my tools are dependent on these executables (I suspect not many), giving up wouldn't have been so bad except that (a) I don't like to give up so easily, and (b) occasionally the bundled applications are useful to me, since they support such things as putting them in the Dock, dragging and dropping onto their icons, etc.

How hard can this be, I thought? Just run appbundler, right? Well, it turns out that appbundler depends on ant, which I don't normally use, and its out-of-the-box configuration doesn't seem to handle the JVM options I wanted to specify. One can download it from java.net, and here is its documentation. I noticed it seemed to be a little old (two years) and doesn't seem to be actively maintained by Oracle, which is a bit worrying. It turns out there is a fork of it maintained by others (infinitekind) that has more configuration options, but this all seemed to be getting a little more complicated than I wanted to have to deal with. I found a post from Michael Hall on the Mac Java developers mailing list that mentioned a tool he had written, AppConverter, which would supposedly convert the old to the new. It sounded like just what I needed. Unfortunately, it did nothing when I tried it (it did not respond to a drag and drop of an app bundle as promised).

I was a bit bummed at this point, since it looked like I was going to have to trawl through the source of one of the appbundler variants or AppConverter, but then I decided I would first try and just cheat, and see if I could find an example of an already bundled Java app, and copy it.

AppConverter turned out to be useful after all, if only to provide a template for me to copy: when I opened it up to show the Package Contents, sure enough, it was itself a Java application, contained a copy of the binary executable JavaAppLauncher (which is what is now used instead of JavaApplicationStub), and had an Info.plist that showed what was necessary. In addition, it was apparent that the folder where the jar files go has moved, from "Contents/Resources/Java" to "Contents/Java" (various posts on the Mac Java developers mailing list mentioned that too).

So, with a bit of manual editing of the file structure and the Info.plist, and copying the JavaAppLauncher out of AppConverter, I got it to work just fine, without the need to figure out how to run and configure appbundler.

By way of example, here is the Package Contents of DicomCleaner the old way:



and here it is the new way:
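In outline, the difference between the two layouts is as follows (the jar name aside, any file names beyond those described in the text are illustrative, not an exact listing):

```
Old (Apple JRE, JavaApplicationStub):

DicomCleaner.app/
    Contents/
        Info.plist
        MacOS/
            JavaApplicationStub
        Resources/
            Java/
                DicomCleaner.jar
                ...

New (Oracle JRE, JavaAppLauncher):

DicomCleaner.app/
    Contents/
        Info.plist
        MacOS/
            JavaAppLauncher
        Java/
            DicomCleaner.jar
            ...
```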


And here is the old Info.plist:


and here is the new Info.plist:

Note that it is no longer necessary to specify the classpath (not even sure how to); apparently the JavaAppLauncher adds everything in Contents/Java to the classpath automatically.

Rather than having all the Java properties under a single Java key, JavaAppLauncher uses a JVMMainClassName key rather than Java/MainClass, and JVMOptions rather than Java/VMOptions. Also, I found that in the absence of a specific Java/Properties/apple.laf.useScreenMenuBar key, passing the equivalent -D system property as another item in JVMOptions would work.

Why whoever wrote appbundler thought that they had to introduce these gratuitous inconsistencies, when they could have perpetuated the old Package Content structure and Java/Properties easily enough, I have no idea, but at least the structure is sufficiently "obvious" so as to permit morphing one to the other.

Though I had propagated various properties that jarbundler had originally included, and added one that AppConverter had used (Bundle display name), I was interested to know just what the minimal set was, so I started removing stuff to see if it would keep working, and sure enough it would. Here is the bare minimum that "works" (assuming you don't need any JVM options, don't care what name is displayed in the top line and despite the Apple documentation's list of "required" properties):
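As a sketch of the sort of thing that results (not necessarily the exact minimal key set referred to above, and the main class name is a placeholder), a stripped-down Info.plist for JavaAppLauncher might look like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>CFBundleExecutable</key>
	<string>JavaAppLauncher</string>
	<key>JVMMainClassName</key>
	<string>com.example.HypotheticalMainClass</string>
</dict>
</plist>
```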


To reiterate, I used the JavaAppLauncher copied out of AppConverter, because it worked, and it wasn't obvious where to get it "officially".

I did try copying the JavaAppLauncher binary that is present at "com/oracle/appbundler/JavaAppLauncher" in appbundler-1.0.jar, but for some reason that didn't work. I also poked around inside javapackager (vide infra), and extracted "com/oracle/tools/packager/mac/JavaAppLauncher" from the JDK's "lib/ant-javafx.jar", but that didn't work either (it reported "com.apple.launchd.peruser ... Job failed to exec(3) for weird reason: 13"), so I will give up for now and stick with what works.

It would be nice to have an "official" source for JavaAppLauncher though.

In case it has any impact, I was using OS X 10.8.5 and JDK 1.8.0_40-ea whilst doing these experiments.

David

PS. What I have not done is figure out how to include a bundled JRE, since I haven't had a need to do this myself yet (and am not motivated to bother with the AppStore), but I dare say it should be easy enough to find another example and copy it. I did find what looks like a fairly thorough description in this blog entry by Danno Ferrin about getting stuff ready for the AppStore.

PPS. I will refrain from (much) editorial comment about the pros and cons of requiring an embedded JRE in every tiny app; suffice it to say I haven't found many reasons to do it, except for turnkey applications (such as on a CD), where I do this on Windows a bit, just because one can. I am happy Apple/Oracle have enabled it, but surprised that Apple mandated it (for the AppStore).

PPPS. There is apparently also something from Oracle called "javafxpackager", which is pretty well documented, and which is supposed to be able to package non-FX apps as well, but I haven't tried it. Learning it looked more complicated than just doing it by hand. Digging deeper, it seems that this has been renamed to just "javapackager" and is distributed with current JDKs.

PPPPS. There is apparently an effort to develop a binary app that works with either the Apple or Oracle Package Contents and Info.plist properties, called "universalJavaApplicationStub", but I haven't tried that either.


Saturday, October 19, 2013

How Thick am I? The Sad Story of a Lonely Slice.

Summary: Single slice regions of interest with no multi-slice context or interval/thickness information may need to be reported as area only, not volume. Explicit interval/thickness information can and should be encoded. Thickness should be distinguished from interval.

Long Version.

Given a Region of Interest (ROI), no matter how it is encoded (as contours or segmented pixels or whatever), one can compute its area, using the pixel spacing (size) information. If a single planar ROI (on one slice) is grouped with a bunch of siblings on contiguous slices, then one can produce a sum of the areas. And if one knows the (regular) spacing between the slices (reconstruction interval in CT/MR/PET parlance), one can compute a volume from the sum of the areas multiplied by the slice spacing. Often one does not treat the top and bottom slices specially, i.e., the ROI is regarded as occupying the entire slice interval on every slice. Alternatively, one could consider the top and bottom slices as only partially occupied, and perhaps halve their contribution.
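The arithmetic just described can be sketched in a few lines of Java (an illustration of the reasoning above, not PixelMed code; the names and units are mine):

```java
// Illustrative sketch: volume from a stack of per-slice ROI areas and a
// regular slice interval; names and units (mm) are my own choices.
public class RoiVolume {

    // Treats every slice, including the first and last, as fully occupied.
    public static double volumeFromAreas(double[] areasMm2, double intervalMm) {
        double sumMm2 = 0;
        for (double a : areasMm2) sumMm2 += a;
        return sumMm2 * intervalMm;    // mm^3
    }

    // Variant that counts the top and bottom slices as only half occupied.
    public static double volumeHalvingEnds(double[] areasMm2, double intervalMm) {
        double sumMm2 = 0;
        for (int i = 0; i < areasMm2.length; i++) {
            boolean end = (i == 0 || i == areasMm2.length - 1);
            sumMm2 += (end ? 0.5 : 1.0) * areasMm2[i];
        }
        return sumMm2 * intervalMm;    // mm^3
    }
}
```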

The slice interval is distinct from the slice "thickness" (Slice Thickness (0018,0050)), since data may be acquired and reconstructed such that there is either a gap between slices, or slices overlap, and in such cases, using the thickness rather than the interval would not return a volume representative of the object represented by the ROI(s). The slice interval is rarely encoded explicitly, and even if it is, may be unreliable, so one should compute the interval from the distance along the normal to the common orientation (parallel slices) using the Image Position (Patient) origin offset and the Image Orientation (Patient) row and column vectors. The Spacing Between Slices (0018,0088) is only officially defined for the MR and NM objects, though one does see it in CT images occasionally. In the past, some vendors erroneously encoded the gap between slices rather than the distance between their centers in Spacing Between Slices (0018,0088), so be wary of it.
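Computing the interval along the normal, as just described, amounts to a cross product and two dot products (a minimal sketch, assuming parallel slices; not PixelMed code):

```java
// Illustrative sketch: slice interval as the difference of the distances of
// two adjacent Image Position (Patient) origins along the normal to the
// common plane orientation.
public class SliceInterval {

    static double[] cross(double[] a, double[] b) {
        return new double[] {
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]
        };
    }

    static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    // row and col are the direction cosines from Image Orientation (Patient);
    // pos1 and pos2 are the Image Position (Patient) values of adjacent slices.
    public static double intervalAlongNormal(double[] row, double[] col,
                                             double[] pos1, double[] pos2) {
        double[] normal = cross(row, col);    // unit length if row and col are
                                              // orthonormal, as they should be
        return Math.abs(dot(pos2, normal) - dot(pos1, normal));
    }
}
```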

This all presupposes that one does indeed have sufficient spatial information about the ROI available, encoded in the appropriate attributes, which is the case for 2D contours defined relative to 3D slices (e.g., SR SCOORDS with referenced cross-sectional images), 3D contours (e.g., SR SCOORD3D or RT Structure Sets), and Segmentation objects encoded as image objects with plane orientation, position and spacing.

And it works nicely down to just two slices.

But what if one only has one lonely slice? Then there is no "interval" per se.

For 2D contours defined relative to 3D image slices one could consult the adjacent (unreferenced) image slices and deduce the slice interval and assume that was applicable to the contour too. But for 3D contours and segmentation objects that stand alone in 3D space, and may have no explicit reference to the images from which they were derived, if indeed there were any images and if indeed those images were not re-sampled during segmentation, then there may be no "interval" information available at all.

The RT Structure Set does handle this in the ROI Contour Module, by the provision of an (optional) Contour Slab Thickness (3006,0044) value, though it may interact with the associated Contour Offset Vector (3006,0045) such that the plane of the coordinates is not the center of the slab. See PS 3.3 Section C.8.8.6.2.

The Segmentation object, by virtue of inclusion of the Pixel Measures Sequence (functional group macro), which defines the Pixel Spacing, also requires the presence of the Slice Thickness attribute, but only if Volumetric Properties (0008,9206) is VOLUME or SAMPLED. And wouldn't you know it, the Segmentation IOD does not require the presence of Volumetric Properties :( That said, it is possible to encode it, so ideally one should; the question arises as to what the "thickness" of a segmentation is, and whether one should slavishly copy the slice thickness from the source images that were segmented, or whether one should use the interval (computed if necessary), since arguably one is segmenting the volume, regardless of how it was sampled. We should probably consider whether or not to include Spacing Between Slices (0018,0088) in the Pixel Measures Sequence as well, and to refine their definitions to make this clear.

The SR SCOORD3D content item attributes do not include interval or thickness. That does not prevent one from encoding a numeric content item to associate with it, though no standard templates currently do. Either way, it would be desirable to standardize the convention. Codes are already defined in PS 3.16 for (112225, DCM, “Slice Thickness”) and (112226, DCM, “Spacing between slices”) (these are used in the Image Library entries for cross-sectional images in the CAD templates).

Anyhow, from a recipient's perspective, given no explicit information and no referenced images there is no other choice than to report only area. If an image is referenced, and its interval or thickness are available, then one may be tempted to use it, but if they are different, which should one use? Probably the interval, to be consistent with the general case of multiple slices.

From a sender's perspective, should one explicitly encode interval or thickness information in the RT Structure Set, SR SCOORD3D, and Segmentation objects, even though it is not required? This is probably a good move, especially for single-slice ROIs, and should probably be considered for inclusion in the standard as a CP.

David

Monday, October 14, 2013

Binge and Purge ... Archive Forever, Re-compress or Discard ... PACS Lifecycle Management

Summary: Technical solutions and standards exist for implementing a hodge-podge of varied retention policies; teaching and research facilities should hesitate before purging or re-compressing though; separating the decision-making engine from the archive is desirable.

Long version:

As we continue to "binge" on imaging modalities that produce ever larger quantities of data, such as MDCT, breast tomosynthesis and maybe one day whole slide imaging, the question of duration of storage becomes more pressing.

An Australian colleague recently circulated a link to a piece entitled "What should we do with old PACS images?", in which Kim Thomas from eHealth Insider magazine discusses whether or not to discard old images, and how. The article nicely summarizes the UK situation, and concludes with the usual VNA hyperbole, but fails to distinguish the differences in practice settings in which such questions arise.

In an operational environment that is focused only on immediate patient care, risk and cost minimization, and compliance with regulatory requirements, the primary questions are whether or not it is cheaper to retain, re-compress or delete studies that are no longer necessary, and whether or not the technology in use is capable of implementing it. In such environments, there is little if any consideration given to "secondary re-use" of such images, such as for research or teaching. Typically a freestanding ambulatory setting might be in such a category, the priorities being quality, cost and competitiveness.

An extreme case of "early discarding" arises in Australia where, as I understand it, the policy of some private practices (in the absence of any statutory requirement to the contrary) is to hand the images to the patient and discard the local digital copy promptly. Indeed, this no doubt made sense when the medium was radiographic (as opposed to printed) film.

In many jurisdictions though, there is some (non-zero) duration required by a local regulation specific to medical imaging, or a general regulation for retention of medical records that includes images. Such regulations define a length of time during which the record must be stored and made available. There may be a statutory requirement for each facility to have a written policy in place.

In the US, the HIPAA Privacy Rule does not include medical record retention requirements, and the rules are defined by the states, and vary (see for instance, the ONC summary of State Medical Record Laws). Though not regulatory in nature, the ACR–AAPM–SIIM Technical Standard For Electronic Practice of Medical Imaging requires a written policy and that digital imaging data management systems must provide storage capacity capable of complying with all facility, state, and federal regulations regarding medical record retention. The current policy of the ACR Council is described in Appendix E Ownership, Retention and Patient Access to Medical Records of the 2012-2012 Digest of Council Actions. This seems a bit outdated (and still refers to "magnetic tapes" !). Google did reveal a draft of an attempt to revise this, but I am not sure of the status of that, and I will investigate whether or not our Standards and Interoperability group can help with the technical details. I was interested though, to read that:

"The scope of the “discovery rules” in other states mean that records should conceivably be held indefinitely. Evidence of “fraud” could extend the statute of limitations indefinitely."

Beyond the minimum required, whatever that might be, in many settings there are good reasons to archive images for longer.

In an academic enterprise, the needs of teaching and research must be considered seriously, and the (relatively modest) cost of archiving everything forever must be weighed against the benefit of maintaining a durable longitudinal record in anticipation of secondary re-use.

I recall as a radiology registrar (resident in US-speak) spending many long hours in film archives digging out ancient films of exotic conditions, using lists of record numbers generated by queries for particular codes (which had been diligently recorded in the limited administrative information system of the day), for the purpose of preparing teaching content for various meetings and forums. These searches went back not just years but decades, if I remember correctly. This would not have been possible if older material had been discarded. Nowadays in a teaching hospital it is highly desirable that "good cases" be identified, flagged, de-identified and stored prospectively (e.g., using the IHE Teaching File and Clinical Trial Export (TCE) profile). But not everyone is that diligent, or has the necessary technology deployed, and there will remain many situations in which the value of a case is not recognized except in retrospect.

Retrospective research investigations have a place too. Despite the need to perform prospective randomized controlled trials there will always be a place for observational studies in radiology. Quite apart from clinical questions, there are technical questions to be answered too. For example, suppose one wanted to compare the performance of irreversible compression algorithms for a specific interpretation task (or to demonstrate non-inferiority compared to uncompressed images). To attain sufficient statistical power to detect the absence of a small but clinically significant difference in observer performance, a relatively large number of cases would be required. Obtaining these prospectively, or from multiple institutions, might be cost prohibitive, yet a sufficiently large local historical archive might render the problem tractable. The further the question strays from those that might be answered using existing public or sequestered large image collections (such as those available through the NBIA or TCIA or ADNI or CardiacAtlas), the more often this is true.

Such questions also highlight the potential danger of using irreversible compression as a means of reducing storage costs for older images. Whilst such a strategy may or may not impinge upon the utility of the images for prior comparison or evidential purposes, it may render them useless for certain types of image processing research, such as CAD, and certainly so for research into compression itself.

Technologically speaking, as the eHI article reminds us, not all of the installed base of PACS have the ability to perform what is colloquially referred to as "life cycle management", especially if it is automated in some manner, based on some set of rules that implement configurable local policy. So, even if one decides that it is desirable to purge, one may need some technology refreshment to implement even a simple retention policy.

This might be as "easy" as upgrading one's PACS to a more recent version, or it might be one factor motivating a PACS replacement, or it might require some third party component, such as a VNA. One might even go so far as to separate the execution of the purging from the decision making about what to purge, using a separate "rules engine", coupled with a standard like IHE Image Object Change Management (IOCM) to communicate the purge decision (as I discussed in an old thread on Life Cycle Management in the UK Imaging Informatics Group). We added "Data Retention Policy Expired" as a KOS document title in DICOM CP 1152 specifically for this purpose.

One also needs a reliable source of data to drive the purging decision. Some parameters like the patient's age, visit dates, condition and types of procedure should be readily available locally; others may not, such as whether or not the patient has died. As I mentioned in that same UK thread, and has also been discussed in lifecycle, purging and deletion threads in the pacsadmin group, in the US we have the Social Security Administration's Death Master File available for this.

Since the necessary information to make the decision may not reside in the PACS or archive, but perhaps the HIS or EHR, separating the decision maker from the decision executor makes a lot of sense. Indeed, when you think about it, the entire medical record, not just the images, may need to be purged according to the same policy. So, it seems sensible to make the decision in one place and communicate it to all the places where information may be stored within an enterprise. This includes not only the EHR and radiology, but also the lab, histopathology, cardiology, and the visual 'ologies like ophthalmology, dermatology, etc. Whilst one day all databases, archives and caches may be centralized and consolidated throughout an enterprise (VNA panacea scenario), in the interim, a more loosely coupled solution is possible.

That said, my natural inclination as a researcher and a hoarder (with a 9 track tape drive and an 8" floppy drive in the attic, just in case) is to keep everything forever. Fortunately for the likes of me, disk is cheap, and even the power and HVAC required to maintain it are not really outrageously priced in the scheme of things. However, if you feel you really must purge, then there are solutions available, and a move towards using standards to implement them.

David

Sunday, September 29, 2013

You're gonna need a bigger field (not) ... Radix 64 Revisited

Summary: It is easy to fit a long number in a short string field by transcoding it to use more (printable) characters; the question is what encoding to use; there are more alternatives than you might think, but Base64 is the pragmatic choice.

Long Version.

Every now and then the subject of how to fit numeric SNOMED Concept IDs (defined by the SNOMED SCTID Data Type) into a DICOM representation comes up. These can be up to 18 decimal digits (and fit into a signed or unsigned 64 bit binary integer), whereas in DICOM, the Code Value has an SH (Short String) Value Representation (VR), hence is limited to 16 characters.

Harry Solomon suggested "Base64" encoding it, either always, or on those few occasions when the Concept ID really was too long (and then using a "prefix" to the value to recognize it).
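One way to realize that suggestion (my interpretation, sketched under the assumption that the SCTID fits in an unsigned 64 bit integer; the class and method names are invented) is to pack the value into 8 big-endian bytes and Base64 them, which always yields 11 characters without padding, comfortably inside the 16 character SH limit:

```java
import java.util.Base64;

// Illustrative sketch (not Harry's exact scheme): Base64 the 8 big-endian
// bytes of a 64 bit Concept ID, yielding an 11 character string.
public class ConceptIdBase64 {
    public static String encode(long conceptId) {
        byte[] b = new byte[8];
        for (int i = 7; i >= 0; i--) {
            b[i] = (byte) conceptId;    // low byte goes at the end (big-endian)
            conceptId >>>= 8;
        }
        return Base64.getEncoder().withoutPadding().encodeToString(b);
    }
}
```

A prefix character, as suggested, could then distinguish such transcoded values from ordinary codes.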

The need arises because DICOM has always used the "old fashioned" SNOMED-RT style SnomedID values (like "T-A0100" for "Brain") rather than the SNOMED-CT style SNOMED Concept ID values (like "12738006"). DICOM was a relatively "early adopter" of SNOMED, and the numeric form did not exist in the early days (prior to the incorporation of the UK Read Codes that resulted in SNOMED-CT). Fortunately, SNOMED continues to issue the older style codes; unfortunately, folks outside the DICOM realm may need to use the newer style, and so converting at the boundary is irritating (and needs a dictionary, unless we transmit both). The negative impact on the installed base that depends on recognizing the old-style codes, were we to "change", is a subject for another day; herein I want to address only how it could be done.

Stuffing long numbers into short strings is a generic problem, not confined to using SNOMED ConceptIDs in DICOM. Indeed, this post was triggered as a result of pondering another use case, stuffing long numbers into Accession Number (also SH VR). So I thought I would implement this to see how well it worked. It turns out that there are a few choices to be made.

My first pass at this was to see if there was something already in the standard Java class library that supported conversion of arbitrary length base10 encoded integers into some other radix; I did not want to be constrained to only handling 64 bit integers.

It seemed logical to look at the arbitrary length numeric java.math.BigInteger class, and indeed it has a radix argument to its String constructor and toString() methods. It also has constructors based on two's-complement binary representations in byte[] arrays. Sounded like a no brainer.

Aargh! It turns out that BigInteger has an implementation limit on the size of the radix that it will handle: the maximum radix is 36 (the 10 digits plus the 26 lowercase alphabetic characters), which is the limit imposed by java.lang.Character.MAX_RADIX. Bummer.

OK, I thought, I will hand write it, by doing successive divisions by the radix in BigInteger, and character encoding the modulus, accumulating the resulting characters in the correct order. Turned out to be pretty trivial.
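The successive-division approach looks something like this (a minimal sketch; the digit alphabet up to 36 follows the hexadecimal convention, and the choice of ':' and '_' for 62 and 63 is my own arbitrary extension):

```java
import java.math.BigInteger;

// Illustrative sketch: encode an arbitrary-length non-negative integer in any
// radix up to 64, by successive division, accumulating remainders as digits.
public class RadixEncode {

    // 0-9, then a-z, then A-Z, then two arbitrary extra characters for 62, 63
    private static final String DIGITS =
        "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ:_";

    public static String encode(BigInteger value, int radix) {
        if (radix < 2 || radix > DIGITS.length()) {
            throw new IllegalArgumentException("unsupported radix " + radix);
        }
        if (value.signum() == 0) {
            return "0";
        }
        BigInteger r = BigInteger.valueOf(radix);
        StringBuilder sb = new StringBuilder();
        while (value.signum() > 0) {
            BigInteger[] quotientAndRemainder = value.divideAndRemainder(r);
            sb.append(DIGITS.charAt(quotientAndRemainder[1].intValue()));
            value = quotientAndRemainder[0];
        }
        return sb.reverse().toString();    // remainders emerge least significant first
    }
}
```

With this alphabet, the maximum unsigned 64 bit value encodes to 11 characters, and any 18 digit SNOMED Concept ID to at most 10 (since 10^18 < 64^10).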

Then I realized that I now had to choose which characters to select beyond the 36 that Java uses. At which point I noticed that BigInteger uses completely different characters than the traditional "Base64" encoding. "Base64" is the encoding used by folks who do anything that depends on MIME content encoding (email attachments or XML files with embedded binary payloads), as is defined in RFC 2045. Indeed, there are variants on "Base64" that handle situations where the two characters for 62 and 63 (normally '+' and '/' respectively) are problematic, e.g., in URLs (RFC 4648). Indeed RFC 4648 seems to be the most current definition of not only "Base64" and variants, but also "Base32" and "Base16" and so-called "extended hex" variants of them.

If you think about it, based on the long-standing hexadecimal representation convention that uses characters '0' to '9' for numeric values [0,9], then characters 'a' to 'f' for numeric values [10,15], it is pretty peculiar that "Base64" uses capital letters 'A' to 'J' for numeric values [0,9], and uses the characters '0' to '9' to represent numeric values [52,61]. Positively unnatural, one might say.

This is what triggered my dilemma with the built-in methods of the Java BigInteger. BigInteger returns strings that are a natural progression from the traditional hexadecimal representation, and indeed for a radix of 16 or a radix of 32, the values match those from the RFC 4648 "base16" and "base32hex" (as distinct from "base32") representations. Notably, RFC 4648 does NOT define a "base64hex" alternative to "base64", which is a bit disappointing.

It turns out that a long time ago (1992) in a galaxy far, far away, this was the subject of a discussion between Phil Zimmermann (of PGP fame), and Marshall Rose and Ned Freed on the MIME working group mailing list, in which Phil noticed this discrepancy and proposed it be changed. His suggestion was rejected on the grounds that it would not improve functionality and would threaten the installed base, and was made at a relatively late stage in development of the "standard". The choice of the encoding apparently traces back to the Privacy Enhanced Mail (PEM) RFC 989 from 1987. I dare say there was no love lost between Phil and the PEM/S-MIME folks, given that they were developers of competing methods for secure email, but you can read the exchange yourself and make up your own mind.

So I dug a little deeper, and it turns out that The Open Group Base Specification (IEEE Std 1003.1) (POSIX, Single Unix Specification) also defines how to encode radix 64 numbers as ASCII characters, in the specification of the a64l() and l64a() functions, which use '.' (dot) for 0, '/' for 1, '0' through '9' for [2,11], 'A' through 'Z' for [12,37], and 'a' through 'z' for [38,63]. Note that this is not part of the C standard library.

An early attempt at stuffing binary stuff into printable characters was the "uuencode" utility used in Unix-to-Unix copy (UUCP) implementations, such as were once used for mail transfer. It used the expedient of adding 32 (the code for the US-ASCII space character) to each 6 bit (base 64) numeric value, which yields a range of printable characters.

Of course, from the perspective of stuffing a long decimal value into a short string and making it fit, it doesn't matter which character representation is chosen, as long as it is valid. E.g., a 64 bit unsigned integer, which has a maximum value of 18,446,744,073,709,551,615 (20 decimal digits), is only 11 characters long when encoded with a radix of 64, regardless of the character choices.

For your interest, here is what each of the choices described above looks like, for single numeric values [0,63], and for the maximum unsigned 64 bit integer value:

Extension of Java and base16hex to hypothetical "base64hex":
0 1 2 3 4 5 6 7 8 9 a b c d e f g h i j k l m n o p q r s t u v w x y z A B C D E F G H I J K L M N O P Q R S T U V W X Y Z : _
f__________


Unix a64l:
 . / 0 1 2 3 4 5 6 7 8 9 A B C D E F G H I J K L M N O P Q R S T U V W X Y Z a b c d e f g h i j k l m n o p q r s t u v w x y z
Dzzzzzzzzzz

Base64 (RFC 2045):
 A B C D E F G H I J K L M N O P Q R S T U V W X Y Z a b c d e f g h i j k l m n o p q r s t u v w x y z 0 1 2 3 4 5 6 7 8 9 + /
P//////////


uuencode (note that space is the first character):
   ! " # $ % & ' ( ) * + , - . / 0 1 2 3 4 5 6 7 8 9 : ; < = > ? @ A B C D E F G H I J K L M N O P Q R S T U V W X Y Z [ \ ] ^ _
/__________


Returning to DICOM then, the choice of what to use for a Short String (SH) VR is constrained to be any US-ASCII (ISO IR 6) character that is not a backslash (used as a value delimiter in DICOM) and not a control character. This would exclude the uuencode representation, since it contains a backslash, but any of the other choices would produce valid strings. The SH VR is case-preserving, which is a prerequisite for all of the choices other than uuencode. Were that not to be the case, we would need to define yet another encoding that was both case-insensitive and did not contain the backslash character. I can't think of a use for packing numeric values into the Code String (CS) VR, the only DICOM VR that is restricted to upper case.

The more elegant choice in my opinion would be the hypothetical "base64hex", for the reasons Phil Z eloquently expressed, but ...

Pragmatically speaking, since RFC 989/1113/2045/4648-style "Base64" coding is so ubiquitous these days for bulk binary payloads, it would make no sense at all to buck that trend.

Just to push the limits though, if one uses all 94 printable US-ASCII characters except backslash, one can squeeze the largest unsigned 64 bit integer into 10 rather than 11 characters. However, for the longest (18 decimal digit) SNOMED Concept ID, the length of the result is the same whether one uses a radix of 64 or 94: still 10 characters.

David


Thursday, September 12, 2013

What Template is that?

Summary: Determining what top-level template, if any, has been used to create a DICOM Structured Report can be non-trivial. Some SOP Classes require a single template, and an explicit Template ID is supposed to always be present, but if it isn't, the coded Document Title is a starting point, though not always an unambiguous one.

Long Version.

When Structured Reports were introduced into DICOM (Supplement 23), the concept of a "template" was somewhat nebulous, and was refined over time. Accordingly, the requirement to specify which template was used, if any, to author and format the content, was, and has remained, fairly weak.

The original intent, which remains the current intent, is that if a template was used, its identity should be explicitly encoded. The means for doing so is the Content Template Sequence. Originally this was potentially encoded at each content item, but this was later clarified by CP 452. In short, the identification applies only to CONTAINER content items, and in particular to the root content item, and consists of a mapping resource (DCMR, in the case of templates defined in PS 3.16) and a string identifier.

The requirement on its presence is:

"if a template was used to define the content of this Item, and the template consists of a single CONTAINER with nested content, and it is the outermost invocation of a set of nested templates that start with the same CONTAINER"

Since the document root is always a container, whenever one of the templates that defines the entire content tree of the SR is used, then by definition, an explicit Template ID is required to be present.

That said, though most SR producers seem to get this right, sometimes the Template ID is not present, which presents a problem. I don't think this can be excused by lack of awareness of the requirement, or of failure to notice CP 452 (from 2005), since the original requirement in Sup 23 (2000) read:

"Required if a template was used to define the content of this Item".

Certainly CP 452 made things clearer though, in that it amended the definition to not only apply to the content item, but also "its subsidiary" content items.

Some SR SOP Classes define a single template that shall be used, the KOS being one example, and the CAD family (Mammo, Chest and Colon CAD) being others. So, even if an explicit Template ID is not present, the expected template can be deduced from the SOP Class. Sometimes, though, such instances are encoded as generic (e.g., Comprehensive) SR, perhaps because an intermediate system did not support the more specific SOP Class, and so one still needs to check for the template identifier.

In the absence of a specific SOP Class or an explicit template identifier, what is a poor recipient to do? One clue can be the concept name of the top level container content item, which is always coded, and always present, and which is referred to as the "document title". In many cases, within the scope of PS 3.16, the same coded concept is used only for a single root template. For example, (122292, DCM, "Quantitative Ventriculography Report") is used only for TID 3202. That's helpful, at least as long as nobody other than DICOM (like a vendor) has re-used the same code to head a different template.

Other situations are more challenging. The basic diagnostic reporting templates, e.g., TID 2000, 2005 or 2006, are encoded in generic SOP Classes, and furthermore don't have a single, unique code for the document title; rather, any code can be used, and a defined set of them is drawn from LOINC, corresponding to common radiological procedures. It is not at all unlikely that some other, completely different, template might be used with the same code as (18747-6, LN, "CT Report") or (18748-4, LN, "Diagnostic Imaging Report"), for instance.

One case of interest demonstrates that in the absence of an explicit Template ID, even a specific SOP Class and a relatively specific Document Title is insufficient. For Radiation Dose SRs, the same SOP Class is used for both CT and Projection X-Ray. Both TID 10001 Projection X-Ray Radiation Dose and TID 10011 CT Radiation Dose have the same Document Title, (113701, DCM, "X-Ray Radiation Dose Report").

One can go deeper into the tree though. One of the children of the Document Title content item is required to be (121058, DCM, "Procedure reported"). For a CT report, it is required to have an enumerated value of (P5-08000, SRT, "Computed Tomography X-Ray"), whereas for a Projection X-Ray report, it may have a value of (113704, DCM, "Projection X-Ray") or (P5-40010, SRT, "Mammography"), or something else, because these are defined terms.

So, in short, at the root level, the absence of a Template ID is not the end of the world, and a few heuristics might be able to allow a recipient to proceed.

Indeed, if one is expecting a particular pattern based on a particular template, and that pattern "matches" the content of the tree that one has received, does it really matter? It certainly makes life easier, though, to match a top level identifier than to have to write a matching rule for the entire tree.

Related to the matter of the identification of the "root" or "top level" template is that of recognizing subordinate or "mini" templates. As you know, most of PS 3.16 is taken up not by monstrously long single templates but rather by invocation of sub-templates. So there are sub-templates for identifying things, measuring things, etc. These are re-used inside lots of application-specific templates.

Certainly "top-down" parsing from a known root template takes one to content items that are expected to be present based on the "inclusion" of one of these sub-templates. These are rarely, if ever, explicitly identified during creation by a Template ID, even though one could interpret that as being a requirement if the language introduced in CP 452 is taken literally. Not all "included" sub-templates start with a container, but many do. I have to admit that most of the SRs that I create do not contain Template IDs below the Document Title either, and I should probably revisit that.

Why might one want to be able to recognize such a sub-template?

One example is being able to locate and extract measurements or image coordinate references, regardless of where they occur in some unrecognized root template. An explicit Template ID might be of some assistance in such cases, but pattern matching of sub-trees can generally find these pretty easily too. When annotating images based on SRs, for example, I will often just search for all SCOORDs, and explore around the neighborhood content items to find labels and measurements to display. Having converted an SR to an XML representation also allows one to use XSL-T match() clauses and an XPath expression to select even complex patterns, without requiring an explicit ID.

David


Saturday, September 7, 2013

Share and share alike - CSIDQ

Summary: Image sharing requires the availability (download and transmission) of a complete set of images of diagnostic quality (CSIDQ), even if for a particular task, viewing of a lesser quality subset may be sufficient. The user then needs to be able to decide what they need to view on a case-by-case basis.

Long Version.

The title of this post comes from the legal use of the term "share and share alike", the equal division of a benefit from an estate, trust, or gift.

In the context of image sharing, I mean to say that all potential recipients of images, radiologists, specialists, GPs, patients, family, and yes, even lawyers, need to have the means to access the same thing: a complete set of images of diagnostic quality (CSIDQ). Note the emphasis on "have the means". CSIDQ seems to be a less unwieldy acronym than CSoIoDQ, so that's what I will use for notational convenience.

There are certainly situations in which images of lesser quality (or less than a complete set) might be sufficient, might be expedient, or indeed might even be necessary to enable the use case. A case in point being the need to make an urgent or rapid decision remotely when there is only a slow link available.

For folks defining architectures and standards, and deploying systems to make this happen, it is essential to assure that the CSIDQ is available throughout. In practice, this translates to requiring that
  • the acquisition modality produce a CSIDQ,
  • the means of distribution (typically a departmental or enterprise PACS) in the local environment store and make available a CSIDQ,
  • the system of record where the acquired images are stored for archival and evidential purposes contain a CSIDQ,
  • any exported CD or DVD contain a CSIDQ,
  • any point-to-point transfer mechanism be capable of supporting transfer of a CSIDQ,
  • any "edge server" or "portal" that permits authorized access to the locally stored images be capable of sharing a CSIDQ on request,
  • any "central" archive to which images are stored retain and be capable of distributing a CSIDQ,
  • any "clearinghouse" that acts as an intermediary be capable of transferring a CSIDQ.
These requirements apply particularly to the "Download" and "Transmit" parts of the Meaningful Use "View, Download and Transmit" (VDT) approach to defining sharing, as it applies to images and imaging results.

In other words, it is essential that whatever technologies, architectures and standards are used to implement Download and Transmit, that they be capable of supporting a CSIDQ. Otherwise, anything that is lost early in the "chain of custody", if you will, is not recoverable later when it is needed.

From a payload perspective, the appropriate standard for a CSIDQ is obviously DICOM, since that is the only widely (universally) implemented standard that permits the recipient to make full use of the acquired images, including importation, post-processing, measurement, planning, templating, etc. DICOM is the only format whose pixel data and meta data all medical imaging systems can import.

That said, it may be desirable to also provide Download of a subset, or a subset of lesser quality, or in a different format, for one reason or another. In doing so it is vital not to compromise the CSIDQ principle, e.g., by misleading a recipient (such as a patient or a referring physician) into thinking that anything less than a CSIDQ that has been downloaded is sufficient for future use (e.g., subsequent referrals). And it is vital not to discard the DICOM format meta data. EHR and PHR vendors need to be particularly careful about not making expedient implementation decisions in this regard that compromise the CSIDQ principle (and hence may be below the standard of practice, may be misleadingly labelled, may introduce the risk of a bad outcome, and may expose them to product liability or regulatory action).

Viewing is an entirely different matter, however.

Certainly, one can download a CSIDQ and then view it, and in a sense that is what the CD/DVD distribution mechanism is ... a "thick client" viewer is either already installed or executed from the media to display the DICOM (IHE PDI) content. This approach is typically appropriate when one wants to import what has been downloaded (e.g., into the local PACS) so that it can be viewed along with all the other studies for the patient. This is certainly the approach that most referral centers will want to adopt, in order to provide continuity of patient care coupled with familiarity of users with the local viewing tools. It is also equally reasonable to use for an "in office" imaging system, as I have discussed before. It is a natural extension of the current widespread CD importation that takes place, and the only difference is the mode of transport, not the payload.

For sporadic users though, who may have no need to import or retain a local copy of the CSIDQ, many other standard (WADO and XDS-I) and proprietary alternatives exist for viewing. Nowadays web-based image viewing mechanisms, including so-called "zero footprint" viewers, can provide convenient access to an interactively rendered version of that subset of the CSIDQ that the user needs access to, with the appropriate quality, whether using client or server-side rendering, and irrespective of how and in what format the pixel data moves from server to client. Indeed, these same mechanisms may suffice even for the radiologist's viewing interface, as long as the necessary image quality is assured, there is access to the complete set, and the necessary tools are provided.

The moral being that the choice needs to be made by the user, and perhaps on the basis of whatever specific task they need to perform or question they want to answer. For any particular user (or type of user), there may be no single best answer that is generally applicable. For one patient, at one visit, the user might be satisfied with the report. On another occasion they might just want to illustrate something to the patient that requires only modest quality, and on yet another they might have a need to examine the study with the diligence that a radiologist would apply.

In other words, the user needs to be able to make the viewing quality choice dynamically. So, to enable the full spectrum of quality needs, the server needs to have the CSIDQ in the first place.

David

PS. By the way, do not take any of the foregoing to imply that irreversibly (lossy) compressed images are not of diagnostic quality. It is easy to make the erroneous assumptions that uncompressed images are diagnostic and compressed ones are not, or that DICOM images are uncompressed (when they may be encoded with lossy compression, including JPEG, even right off the modality in some cases), or that JPEG lossy compressed images supplied to a browser are not diagnostic. Sometimes they are and sometimes they are not, depending on the modality, task or question, method and amount of compression, and certainly last but not least, the display and viewing environment.

What "diagnostic quality" means and what constitutes sufficient quality and when, in general, and in the context of "Diagnostically Acceptable Irreversible Compression" (DAIC), are questions for another day. The point of this post is that the safest general solution is to preserve whatever came off the modality. Doing anything less than that might be safe and sufficient, but you need to prove it. Further, regardless of the quality of the pixel data, losing the DICOM "meta data" precludes many downstream use cases, including even simple size measurements.

PPS. This blog post elaborates on a principle that I attempted to convey during my recent testimony to the ONC HIT Standards Committee Clinical Operations Workgroup about standards for image sharing, which you can see, read or listen to if you have the stomach for it. If you are interested in the entire series of meetings at which other folks have testified or the subject has been discussed, here is a short summary, with links (or you can go to the group's homepage and follow the calendar link to future meetings, if you are interested in joining them, or to past meetings):

2013-04-19 (initial discussion)
2013-06-14 (RSNA: Chris Carr, David Avrin, Brad Erickson)
2013-06-28 (RSNA: David Mendelson, Keith Dreyer)
2013-07-19 (lifeIMAGE: Hamid Tabatabaie, Mike Baglio)
2013-07-26 (general discussion)
2013-08-09 (general discussion)
2013-08-29 (standards: David Clunie)

Also of interest is the parent HIT Standards Committee:

2013-04-17 (establish goal of image exchange)

And the HIT Policy Committee:

2013-03-14 (prioritize image exchange)

PPPS. The concept of "complete set of images of diagnostic quality" was first espoused by an AMA Safety Panel that met with a group of industry folks (2008/08/27) to try to address the historical "CD problem". The problem was not the existence of the CD transport mechanism, which everyone is now eager to decry in favor of a network-based image sharing solution, but rather one of inconsistent formats, content and viewer behavior. The effort was triggered by a group of unhappy neurosurgeons in 2006 (AMA House of Delegates Resolution 539 A-06). They were concerned about potential safety issues caused by inadequate or delayed access or incomplete or inadequately displayed MR images. To cut a long story short, a meeting with industry was proposed (Board of Trustees Report 30 A-07 and House of Delegates Resolution 523 A-08), and that meeting resulted in two outcomes.

One was the statement that we hammered out together in that clinical-industry meeting, which was attended not just by the AMA and MITA (NEMA) folks, but also representatives of multiple professional societies, including the American Association of Neurological Surgeons, Congress of Neurological Surgeons, American Academy of Neurology, American College of Radiology, American Academy of Orthopedic Surgeons, American College of Cardiology, American Academy of Otolaryngology-Head and Neck Surgery, as well as vendors, including Cerner, Toshiba, Philips, General Electric and Accuray, and DICOM/IHE folks like me. You can read a summary of the meeting, but the most important part is the recommendation for a standard of practice, which states in part:

"The American Medical Association Expert Panel on Medical Imaging (Panel) is concerned whether medical imaging data recorded on CD’s/DVD’s is meeting standards of practice relevant to patient care.  

The Panel puts forward the following statement, which embodies the standard the medical imaging community must achieve. 

  • All medical imaging data distributed should be a complete set of images of diagnostic quality in compliance with IHE-PDI.
This standard will engender safe, timely, appropriate, effective, and efficient care; mitigate delayed care and confusion; enhance care coordination and communication across settings of care; decrease waste and costs; and, importantly, improve patient and physician satisfaction with the medical imaging process."

More recently, the recommendation of the panel is incorporated in the AMA's discussion of the implementation of EHRs, in the Board of Trustees Report 24 A-13, which recognizes the need to "disseminate this statement widely".

The other outcome of the AMA-industry meeting was the development of the IHE Basic Image Review (BIR) Profile, intended to standardize the user experience when using any viewer. The original neurosurgeon protagonists contributed actively to the development of this profile, even to the extent of sacrificing entire days of their time to travel to Chicago to sit with us in IHE Radiology Technical Committee meetings. Sadly, adoption of that profile has been much less successful than the now almost universal use of IHE PDI DICOM CDs. Interestingly enough, with a resurgence of interest in web-based viewers, and with many new vendors entering the field, the BIR profile, which is equally applicable to both network and media viewers, could perhaps see renewed uptake, particularly amongst those who have no entrenched "look and feel" user interface conventions to protect.

Friday, September 6, 2013

DICOM rendering within pre-HTML5 browsers

Summary: Retrieval of DICOM images, parsing, windowing and display using only JavaScript within browsers without using HTML5 Canvas is feasible.

Long Version.

Earlier this year, someone challenged me to display a DICOM image in a browser without resorting to HTML5 Canvas elements, using only JavaScript. This turned out to be rather fun and quite straightforward, largely due to the joy of Google searching to find all the various concepts and problems that other folks had already explored and solved, even if they were intended for other purposes. I just needed to add the DICOM-specific bits. As a consequence it took just a few hours on a Saturday afternoon to figure out the basics and in total about a day's work to refine it and make the whole thing work.

The crude demonstration JavaScript code, hard-wired to download, window (using the values in the DICOM header) and render a particular 16 bit MR image, can be found here and executed from this page. It is fully self-contained and has no dependencies on other JavaScript libraries. The code is ugly as sin, filled with commented out experiments and tests, and references to where bits of code and ideas came from, but hopefully it is short enough to be self-explanatory.

It seems to work in contemporary versions of Safari, Firefox, Opera, Chrome and even IE (although a little more slowly in IE, probably due to the need to convert some extra array stuff; it seemed to work in IE 10 on Windows 7 but not IE 8 on XP, and I haven't figured out why yet). I was pleased to see that it also works on my Android phones and tablets.

Here is how it works ...

First task - get the DICOM binary object down to the client and accessible via JavaScript. That was an easy one, since as everyone probably knows, the infamous XMLHttpRequest can be used to pull pretty much anything from the server (despite its name implying it was designed to pull XML documents). The way to make it return a binary file is to use the XMLHttpRequest.overrideMimeType() method, and to make sure that no character set conversion is applied to the returned binary stream. This trick is due to Marcus Granado, whose archived blog entry can be found here, and which is also discussed along with other helpful hints at the Mozilla Developer Network site here. There is a little bit of further screwing around needed to handle various Microsoft Internet Explorer peculiarities related to what is returned, not in the responseText, but instead in the responseBody, and this needs an intermediate VBArray to get the job done (discussed in a StackOverflow thread).

Second task - parse the DICOM binary object. Once upon a time, using the bit twiddling functions in JavaScript might have been too slow, but nowadays that does not seem to be the case. It was pretty trivial to write a modest number of lines of code to skip the 128 byte preamble, detect the DICM magic string, then parse each data element successively, using explicit lengths to skip those that aren't needed and skipping undefined length sequences and items, and keeping track of only the values of those data elements that are needed for later stages (e.g., Bits Allocated) and ignore the rest. Having written just a few DICOM parsers in the past made this a lot easier for me than starting from scratch. I kept the line count down by restricting the input to explicit VR little endian for the time being, not trying to cope with malformed input, and just assuming that the desired data element values were those that occurred last in the data set. Obviously this could be made more robust in the future for production use (e.g., tracking the top level data set versus data sets nested within sequence items), but this was sufficient for the proof of concept.

Third task - windowing a greater than 8 bit image. It would have been easy to just download an 8 bit DICOM image, whether grayscale or color, since then no windowing from 10, 12 or 16 bits to 8 would be needed, but that wouldn't have been a fair test. I particularly wanted to demonstrate that client-side interactivity using the full contrast and spatial resolution DICOM pixel data was possible. So I used the same approach as I have used many times before, for example in the PixelMed toolkit com.pixelmed.display.WindowCenterWidth class, to build a lookup table indexed by all possible input values for the DICOM bit depth containing values to use for an 8 bit display. I did handle signed and unsigned input, as well as Rescale Slope and Intercept, but for the first cut at this, I have ignored special handling of pixel padding values, and other subtleties.

These first three tasks are essentially independent of the rendering approach, and are necessary regardless of whether Canvas is going to be used or not.

The fourth and fifth tasks are related - making something the browser will display, and then making the browser actually display it. I found the clues for how to do this in the work of Jeff Epler, who described a tool for creating single bit image files in the browser (client side) to use as glyphs.

Fourth task - making something the browser will display. Since without Canvas one cannot write directly to a window, the older browsers need to be fed something they know about already. An image file format that is sufficient for the task, and which contributes no "loss" in that it can directly represent 8 bit RGB pixels, is GIF.  But you say, GIF involves a lossless compression step, with entropy coding using LZW (the compression scheme that was at the heart of the now obsolete patent-related issues with using GIF). Sure it does, but many years ago, Tom Lane (of IJG fame) observed that because of the way LZW works, with an initial default code table in which the code (index) is the same as the value it represents, as long as one sends each original value as a code that is one bit wider than the data (a 9 bit code for 8 bit values), and resets the code table periodically, one can just send the original values as if they were entropy coded values. Add a bit of blocking and a few header values, and one is good to go with a completely valid uncompressed (albeit slightly expanded) bitstream that any GIF decoder should be able to handle. This concept is now immortalized in the libungif library, which was developed to be able to create "uncompressed GIF" files to avoid infringing on the Unisys LZW patent. Some of the details are described under the heading of "Is there an uncompressed GIF format?" in the old Graphic File Formats FAQ, which references Tom Lane's original post. In my implementation, I just make 9 bit codes from 8 bit values, and added a clear code every 128 values, and made sure to stuff the bits into appropriate length blocks preceded by a length value, and it worked fine.
And since I have 8 bit gray scale values as indices, I needed to populate the global color table that maps each gray scale index to an RGB triplet with the same intensity value (since GIF is an indexed color file format, which is why GIF is lossless for 8 bit single channel data, but lossy (needs quantization and dithering) for true color data with more than 256 different RGB values).

Fifth task - make the browser display the GIF. Since JavaScript in the browser runs in a sort of "sand box" to prevent insecure access to the local file system, etc., it is not so easy to feed the GIF file we just made to the browser, say as an updated IMG reference on an HTML page. It is routine to update an image reference with an "http:" URL that comes over the network, but how does one achieve that with locally generated content? The answer lies in the "data:" URI that was introduced for this purpose. There is a whole web site, http://dataurl.net/, devoted to this subject. Here, for example, is a description of using it for inline images. It turns out that what is needed to display the locally generated GIF is to create a (big) string that is a "data:" URI with the actual binary content Base64 encoded and embedded in the string itself. This seems to be supported by all recent and contemporary browsers. I don't know what the ultimate size limits are for the "data:" URI, but it worked for the purpose of this demonstration. There are actually various online "image to data: URI converters" available, for generating static content (e.g., at webSemantics, the URI Kitchen) but for the purpose of rendering DICOM images this needs to be done dynamically by the client-side JavaScript. Base64 encoding is trivial and I just copied a function from an answer on StackOverflow, and then tacked the Base64 encoded GIF file on the end of a "data:image/gif;base64," string, et voilà!

Anyway, not rocket science, but hopefully useful to someone. I dare say that in the long run the HTML Canvas element will make most of this moot, and there are certainly already a small but growing number of "pure JavaScript" DICOM viewers out there. I have to admit it is tempting to spend a little time experimenting more with this, and perhaps even write an entire IHE Basic Image Review Profile viewer this way, using either Canvas or the "GIF/data: URI" trick for rendering. Don't hold your breath though.

It would also be fun to go back through previous generations of browsers to see just how far back the necessary concepts are supported. I suspect that size limits on the "data:" URI may be the most significant issue in that respect, but one could conceivably break the image into small tiles, each of which was represented by a separate small GIF in its own small enough "data:" URI string. I also haven't looked at client-side caching issues. These tend to be significant when one is displaying (or switching between) a lot of images or frames. I don't know whether browsers handle caching of "data:" URI objects differently from those fetched via http, or indeed how they handle caching of files pulled via XMLHttpRequest.

Extending the DICOM parsing and payload extraction stuff to handle other uncompressed DICOM transfer syntaxes would be trivial detail work. For the compressed transfer syntaxes, for single and three channel 8 bit baseline JPEG, one can just strip the JPEG bit stream out of its DICOM encapsulated fragments, concatenate and Base64 encode the result, and stuff each frame in a "data:" URI with a media type of image/jpeg instead of image/gif. Same goes for DICOM encapsulated MPEG, I suppose, though that might really stretch the size limits of the "data:" URI.
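That baseline JPEG case might be sketched like this; the fragment extraction itself is assumed to have already been done by the DICOM parsing step, and the function names, the injected encoder parameter and the placeholder fragments are all hypothetical:

```javascript
// Concatenate the JPEG bit stream from the DICOM encapsulated
// Pixel Data fragments (each fragment already parsed out of the
// encapsulated element by an earlier step).
function concatenateFragments(fragments) {
    var total = 0;
    fragments.forEach(function (f) { total += f.length; });
    var out = new Uint8Array(total);
    var offset = 0;
    fragments.forEach(function (f) { out.set(f, offset); offset += f.length; });
    return out;
}

// Wrap the concatenated bit stream in a "data:" URI with an
// image/jpeg media type; base64Encode is whatever Base64 encoder
// one is already using for the GIF case.
function makeJpegDataUri(fragments, base64Encode) {
    return "data:image/jpeg;base64," + base64Encode(concatenateFragments(fragments));
}

// e.g., with two fragments of a (fictitious) JPEG bit stream:
//   makeJpegDataUri([new Uint8Array([0xFF, 0xD8]), new Uint8Array([0xFF, 0xD9])], encoder)
```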

Since bit-twiddling is not so bad in JavaScript after all, one could even write a JPEG lossless or JPEG-LS decoder in JavaScript that might not perform too horribly. After all, JPEG-LS was based on LOCO and that was simple enough to fly to Mars, so it should be a cakewalk in a modern browser; it is conceptually simple enough that even I managed to write a C++ JPEG-LS codec for it, some time back. That said, modest compression without requiring an image-specific lossless compression scheme can be achieved using gzip or deflate (zip) with HTTP compression, and may obviate the need to use a DICOM lossless compression transfer syntax, unless the server happens to already have files in such transfer syntaxes.

Doing the JPEG 12 bit DCT process might be a bit of a performance dog, but you never know until someone tries. Don't hold your breath for these from me any time soon, but if I get another spare Saturday, you never know ...

Oops, spoke too soon, someone has already done a pure JavaScript JPEG decoder ...
 
nanos gigantum humeris insidentes

David
 





Friday, August 2, 2013

So Many EHR Vendors, So Little Time

Summary: In the US, >=442 vendors need to interface to imaging systems. Good thing we have IID.

Long Version.

A report on US Physician Office EHR adoption from SK&A that Brian Ahier described in a recent post contains some interesting numbers of relevance to imaging folks. Overall adoption hovers around 50%, but what really intrigued me was the list of vendors with market share ... Allscripts, eClinicalWorks and Epic had about 10% each, another 17 vendors split about half the market (45%), then there were 422 more (!!!) splitting the remaining 25% or so.

That seemed like an awful lot, and I was wondering if perhaps the "other" category confused different versions or something, or was just an error. So I went to the ONC's Certified Health IT Products List, and in the combination of 2011 and 2014 editions, elected to browse all Ambulatory Products, and was rewarded with 3470 products found! That list includes lots of different bits and pieces and versions, but it does confirm the presence of a large number of choices.

That is a very large number of EHR vendors and systems for PACS and VNA and Imaging Sharing system producers to interface with, in order to View (or Download or Transmit) images.

It certainly is a very large "n" for the "n:m" combinations of individually customized interfaces, if one goes that route. It is a good thing perhaps that we just finished the IID profile in IHE, to potentially make it "n+m" instead.

It is hard to believe that there won't be some very dramatic consolidation some time soon, but no matter how rapidly that occurs, being able to satisfy the menu option for stage 2 that includes image viewing would seem to be a potential discriminator and a competitive advantage.

This may be particularly true for the smaller players, who clearly seem to be satisfying some customers (judging by the stratification in the SK&A report by practice size). Perhaps the big players are too expensive or too complicated, or too busy to bother with small accounts, as MU obsession consumes all their available resources.

Imaging vendors that make it easy for small EHR players to access images by implementing the Image Display actor of IID might help imaging facilities that purchase their products to remain competitive in this age of reimbursement reduction. If their referring providers insist on integration with whichever of the 442 to 3470 EHR products they happen to have, and are not satisfied, they can easily switch to another imaging provider.

Small EHR players that don't take advantage of standards like IID, and succumb to pressure from even a modest number of PACS vendors to customize to an existing proprietary interface, may run out of resources pretty quickly. Our other imaging standards like WADO and XDS-I are good as far as they go, and very important for imaging vendors to support. But they require a level of sophistication on the client side that may be beyond most small EHR vendors, particularly if interactive viewing is required by the referring providers. WADO and XDS-I might be the means used to support IID on the imaging side, but the EHR doesn't need to sweat the details.

David

Thursday, August 1, 2013

Lumpers vs Splitters - Anatomy and Procedures, Prefetching and Browsing

Summary: For remote access and pre-fetching, should one lump anatomic regions into a small number of categories, or retain the finer granularity inherent in the procedure codes and explicitly encoded in the images?

Long Version.

Of late you may have noticed a spate of posts from me about anatomy in the images, procedure codes, and pre-fetching. Needless to say, these topics are related, and there is a reason for my recently renewed interest in researching these subjects.

You may or may not have noticed that in IHE XDS-I.b, there is a bunch of information included in the registry metadata that is specifically described for imaging use (see IHE RAD TF:3, section 4.68.4.1.2.3.2 XDSDocumentEntry Metadata).

The typeCode is supposed to contain the (single) Procedure Code. Unfortunately, since almost nobody currently uses standard sets of codes, these will usually contain local institution codes. So whilst their display name may be rendered and browsable by a human, they will not easily be recognized by a machine, e.g., for pre-fetching or hanging. The specification currently says typeCode should contain the Requested Procedure Code rather than the Performed Procedure Code, which is an interesting choice, since what was requested is not always what was done.

There is also an eventCodeList that is currently defined to contain a code for the Modality (drawn from DICOM PS 3.16 CID 29), and one or more Anatomic Region codes (from DICOM CID 4).

Now, no matter where the anatomic codes come from (be they derived from the local or standard procedure codes, extracted from the images, from some mysterious out of band source, or entered by a human), there is a fairly long list of theoretical values and practical values that are actually encountered, depending on the scenario, whether it be radiology, cardiology, or some other specialty that is a source of images, like ophthalmology.

There are different potential human users of this information, whether it be radiologists viewing radiology images, those physicians who requested the imaging viewing radiology images (like an ophthalmologist requesting an MR of the orbits), or other specialists viewing their own images (like ophthalmologists, endoscopists, dermatologists, etc.). Even confining oneself to the radiology domain, the reasons for retrieving a set of images may vary.

One might think that there is no problem, since XDS-I.b requires that the anatomical information be present, and requires that it be drawn from a rich set of choices.

However, some folks seem to think that the set of choices of anatomical concepts is too rich and too long, and want to cut it down to just a short list, "lumping" a whole bunch of stuff together, rather than leaving it "split" into its fine grained descriptions.

Why, one might ask, would one ever want to discard potentially useful information by such "coarsening" of the anatomical concepts in advance, when if there was a need to do so, one could easily do it on the querying end, when necessary?

So I did ask, and the result was a fairly vigorous and prolonged email "debate" back and forth between the "lumpers" and the "splitters". The net result is that neither side is convinced of the merits of the other's argument, and neither is interested in talking to the other anymore. So the process has stalled, and in the interim individual XDS "affinity domains" will do whatever they see fit, with their choices no doubt modulated by what their vendors are able or willing to deliver in this respect.

An obvious compromise would be to always send both coarse and fine codes. Unfortunately, since the eventCodeList is a flat list of codes, there is no easy way to communicate name-value pairs, and since coarse and fine grained anatomy come from the same coding scheme (SNOMED), there is no easy way to send both and distinguish them, which turns out to be important. At least not without a change to the underlying ITI requirements for XDS, and they are loath to make changes, apparently for fear of invalidating the installed base of XDS systems (modest sized though that might be at this early stage). Getting a slot added to send Accession Number was like pulling teeth from ITI, and nobody has the stomach for a repeat of that tedious exercise.

The context in which this arose initially was pre-fetching. One reasonable approach is pre-fetching all those studies in the same coarse group as the current study, and the expectation is that this would be better than pre-fetching everything, or nothing, or relying on workflow related reasons, such as pre-fetching the most recent studies or the study that one actually ordered in the first place, or studies of the same modality, or intended for the same recipient, etc.

However, one can potentially do a better job of pre-fetching if one applies more granular rules, and this is particularly the case when one has a specific clinical question or task to perform.

An example may help. Suppose one is interested in, say, a patient's screening virtual CT colonoscopy, whether one is a radiologist reporting it, or the ordering physician. And one wants to compare it with a previous virtual CT colonoscopy. Should one pre-fetch all CTs of the abdomen for comparison (and there may be quite a few given that they are handed out in the emergency room like candies), not to mention whole body CT-PET scans that include the abdomen, etc.? Or should one pre-fetch only CTs of the colon? Now, if one could match procedure codes, and there was only one or a limited number of procedure codes for CT colonoscopy, one could match on that and ignore all the extraneous studies. But we have already established that procedure codes are currently largely non-standardized and in any reasonably sized enterprise that has grown through acquisition or changed its EHR or RIS lately (can you say MU?), there may be a multitude of different coding schemes used in the archives.

So, the lumpers would say, send abdomen for the anatomy, and pre-fetch them all. The splitters would say send colon for the anatomy, and pre-fetch whatever comes out of rules you want to apply at the requesting end (lump with other abdomens if you want to, or not, depending on your preference, or the sophistication of your rules, and your knowledge of the question).

The clinical question really is important. If you are a vascular surgeon wondering about change in size of an aortic aneurysm, you might really want any imaging that included the abdominal aorta, for whatever reason, and not just cardiovascular images, and CT colonoscopy would include useful images in the axial set.

One can come up with all sorts of similar examples, perfusion brain CT or petrous temporal bone CT versus any head CT, coronary or pulmonary CT angiogram versus any chest CT, etc. Beyond radiology, does an ophthalmologist want all head and neck, or just eyes, or just retinas?

The "lumping" strategy required also depends on the use, since there may be potential ambiguities. Is a cervical spine lumped into "spine" or does it go with "head and neck", for example, and with multiple contribution sources, will they implement the same lumping decisions?

The point being that it is impossible to anticipate the requirements on the receiving end until the question is asked, not when the studies are registered in the first place. Accordingly, in my opinion the richness should be recorded in the registry and available in the query, and the pre-fetching decision-making, including any "lumping" if appropriate, should be performed at the receiving end.

Retaining the more granular information is particularly important when one considers the possibility of using more sophisticated artificial intelligence approaches to pre-fetching, rather than simple heuristics or manually authored rules; you will find some references to those techniques in my recent pre-fetching post. Adaptive systems can learn what individual users (or sets of users in the same role) need based on what they are observed to actually view. But even simple rule-based pre-fetchers can be more sophisticated than just using a coarse list (e.g., the RadMapps approach based on matching study description strings).

Besides, if one believes in "lumping", it is not as if the task is very burdensome, no matter where it is performed, given the modest numbers of codes to deal with. Though I described the list of fine grained codes as "fairly long" earlier, it isn't really that long. Even were one to need to select from the list in a user interface, just like for a user interface for procedures (a much longer list than anatomic locations), there are tactics for presenting long lists in an easily navigable manner.

It is interesting to consider the history of the DICOM list in this respect. Over time the list has grown, from the original 19 that were CR-specific in DICOM 1993, to contain now 112 string values for Body Part Examined, most of which have been added to reflect experience in the field (e.g., what CR vendors started to send when they couldn't find a good match, or what other modalities needed). DICOM defines the SNOMED coded equivalents of all of those, plus various others that are used in specific objects (especially cardiology objects, and those for echocardiography in particular); the total is 340 coded concepts at the moment, many of which are not applicable to describing the anatomic region of a procedure in a registry, and some of which share the same code but have different meanings in different contexts (e.g., X and endo-X with the same code). This is all summarized in DICOM PS 3.16 Annex L, which is related to CID 4. There are probably a few too many highly specific cardiovascular locations that got pulled in this way. There are a few specialties that have separate lists, e.g., ophthalmology, which have not been folded into Annex L yet, and do not have string equivalents for coded values. These lists may not be perfect, but they are a line in the sand and do reflect what people have asked for, over 20 years of experience with the standard.

So, in short, no more than a few hundred codes probably need to be mapped from the procedure codes (or acquired by some other means) at the sending end. And at the receiving end, no more than that few hundred codes need to be "lumped" to apply coarse pre-fetching rules, if that floats your boat. And since all the anatomy codes defined in DICOM CID 4 are SNOMED codes, the mapping is already right there for the implementer to extract in the relationships present in the SNOMED files.
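The receiving-end "lumping" described above amounts to nothing more than a small lookup table applied when a coarse pre-fetching rule needs it. A sketch, with the caveat that the codes here are illustrative placeholders rather than real SNOMED codes, and that a real table would be derived from the SNOMED relationships rather than typed in by hand:

```javascript
// Hypothetical fine-to-coarse anatomy mapping, applied at the
// receiving end only when a coarse pre-fetching rule calls for it;
// placeholder strings stand in for real SNOMED codes.
var coarseGroupForFineCode = {
    "COLON":     "ABDOMEN",
    "PANCREAS":  "ABDOMEN",
    "WRIST":     "UPPER EXTREMITY",
    "PITUITARY": "HEAD"
};

// Lump a fine grained code into its coarse group; a code with no
// entry falls through unchanged, so no information is discarded.
function lump(fineCode) {
    return coarseGroupForFineCode[fineCode] || fineCode;
}
```

The point of the design is that the fine grained code is what travels in the registry metadata; the lumping, if any, is a local policy decision of the querying system.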

One concern that has been expressed is that there are too many anatomical codes to map to from one's local procedure code, and it is easier to map to a short list. I would argue the opposite, in that it is easier to map "XR wrist" to "wrist" than "lower extremity", or "MR Pituitary" to "pituitary" rather than "head and neck". That is, a literal mapping doesn't require a knowledge of anatomy. Not to mention the fact that the better approach is to map one's local procedure codes to standard procedure codes (like SNOMED or LOINC or RadLex Playbook) in the first place, then extract the anatomy automatically from the ontologies that back those standards.

I asked a bunch of radiologists in the US and Australia what their preference was, fine or coarse grained anatomy, and they all expressed a preference for retaining the fine grained concept.

A similar sentiment was expressed by several UK radiologists in the UK Imaging Informatics Group when a short list was suggested. The interest in "lumping" in the UK is particularly surprising, when one considers that they all have to use the NICIP codes, which are not only already mapped to SNOMED, but are also already mapped to OPCS-4, which already contains fine-grained anatomy codes (their Z codes and O codes). If you read the UK forum posts carefully though, you will see a distinction suggested between using their standard procedure (rather than anatomy) code for plain radiography pre-fetching, versus "lumping" anatomy for cross-sectional modalities.

Anyhow, I am not certain that I have convinced anyone who already has their mind made up (that coarse codes are sufficient), nor anyone who is for some reason intimidated by the more comprehensive fine grained list in DICOM CID 4, as opposed to a short and arbitrary list.

Personally though, given the limitations inherent in the XDS metadata model, I remain convinced that the more precise information is valuable, and the coarse information not only limits what a recipient can find but contaminates the information with noise (claiming more territory was imaged than actually was). Not only does this undermine the utility of XDS, but it creates an artificial distinction between what is possible using local PACS protocols like DICOM queries as opposed to cross-enterprise protocols, when instead we should be working to make such artificial distinctions transparent to the user. In my opinion, the remote user deserves the same level of pre-fetching and manual browsing performance that is achievable locally.

What do you think?

David

It is interesting to consider what concepts might be included in a lumped list.

The original IHE CP, which triggered this debate, proposed a list that consisted of:

Abdomen
Cardiovascular
Cervical Spine
Chest
Entire Body
Head
Lower Extremity
Lumbar Spine
Neck
Pelvis
Thoracic Spine
Upper Extremity

Not much use if you are a mammographer looking for last year's priors, for example, so at the very least it would make sense to add Breast.

The proposed UK forum list was initially:

Abdo
Body (esp for overlapping CT body areas)
Chest
Head
Heart
Lower Limb
Misc
Neck
Pelvis
Spine
Upper Limb
Vessels

to which there were later suggestions in the forum to add Breast and Bowel.

When the ACR ITIC was discussing appropriateness criteria work, it had found it helpful to group procedures for that specific purpose, and the list was:

Abdomen
Breast
Cardiac
Chest
Head
Lower extremity
Maxface-dental
Neck
Pelvis
Spine
Unspecified
Upper extremity
Whole body

Another source of interest is the RadLex PlayBook, which categorizes procedures by Body Region (e.g., abdomen), a very short list by comparison with the more fine-grained Anatomic Focus (e.g., pancreas) that is also used. That list is:

Abdomen
Abdomen and Pelvis
Bone
Breast
Cervical Spine
Chest
Face
Head
Lower Extremity
Lumbar Spine
Lumbosacral Spine
Neck
Pelvis
Spine
Thoracic Spine
Thoracolumbar Spine
Upper Extremity
Whole Body

The Canadian DI Standards collaborative working group (SCWG 10) short list for XDS (after they were not convinced by my argument that no short list is necessary) is currently proposed to be:

Abdomen
Breast
Cardiovascular
Cervical Spine
Chest
Entire Body
Head
Lower Extremity
Lumbar Spine
Neck
Pelvis
Thoracic Spine
Upper Extremity

When I asked various radiologists what they would prefer if they were forced to live with a coarse list only, one proposal was:

Abdomen
Breast
Cardiac
Cardiovascular (not heart)
Cervical Spine
Chest
Entire/Whole Body
Facial/dental
Head
Lower Extremity
Lumbar Spine
Neck
Pelvis
Thoracic Spine
Unspecified
Upper Extremity

There was then a discussion about whether Face should be separated from Brain within Head, and then what one should do about Base of Skull and Inner Ear, which serves to emphasize my point that it is difficult to come up with a list that satisfies every constituent.

To be fair, putting aside the fact that "unspecified" is undesirable, and that combined body parts may not be needed since one can send multiple codes (IHE XDS-I permits this), there is a lot of similarity between the proposals.

One might wonder about the apparent obsession with lumping regions within an upper or lower extremity category, and why one would want shoulders with wrists, etc. I suppose it might reflect the continuum of radiographic views that extend along the limbs (e.g. does humerus include shoulder and elbow). Then again, if one were doing a skeletal survey for metastases one might want a category of Bone instead I suppose, in which would be included Skull, and all Spines, and Pelvis, and Chest (for ribs). Or for a skeletal survey for arthritis, just Joints and Spine perhaps.

What would your list be, if you needed one?