A group of folks at Johns Hopkins, Harris Corp, and Vital Images have been working on the "large study" problem and have produced a largely "DICOM free" implementation (apart from modality image ingestion) called Medical Imaging Network Transport (MINT). They are now proposing that this become a new "standard" and be blessed by and incorporated in DICOM as a "replacement". Since the MINT implementation is based on HTTP transport, DICOM WG 27 Web Technology has become the home for these discussions. Not surprisingly, the "replace everything" work item proposal was rejected by the DICOM Standards Committee at our last meeting by a large majority - you can read the summary in the minutes of the committee and see the slides presented by the MINT folks.
The rejection by the committee of the proposal should not be interpreted as a rejection of the validity of the use-case, however.
It is accepted that large studies potentially pose a problem for many existing implementations, both for efficient transfer from the central store to the user's desktop for viewing or analysis, and for bulk transfer between two stores (e.g., between a PACS and a "vendor neutral archive" or a regional image repository).
So, to move forward with solving the problem, WG 6 and WG 27 met together earlier this week to try to achieve consensus on what the existing DICOM standard has to offer in this respect, and to identify any gaps that may exist that could be filled by incremental extensions to the standard.
If one puts aside the assumption that it is necessary to completely replace DICOM (and hence re-solve every problem that DICOM and PACS vendors have spent the last quarter of a century solving), and instead focuses narrowly on the key aspects of concern, two essential issues emerge:
- transporting large numbers of slices as separate single instances (files) is potentially extremely inefficient
- replicating the "meta-data" for the entire patient/study/series/acquisition in every separate single instance is also potentially extremely inefficient, and though the size of the meta-data is trivial by comparison with the bulk data, the effort to repeatedly parse it and sort out what it means as a whole on the receiving end is definitely not trivial
Yet this approach ignores the significant effort that has already been put into "normalizing" each acquisition at the modality end, specifically, the "enhanced multi-frame" family of DICOM objects defined for CT, MR and PET as well as XA/XRF, and new applications like 3D X-ray, breast tomosynthesis, ophthalmic optical coherence tomography (OCT), intra-vascular OCT, pathology whole slide imaging (WSI), etc. The following slide (which I simplified and redrew from an early one produced by either Bob Haworth or Kees Verduin for WG 16) illustrates how the enhanced multi-frame family of objects uses the shared and per-frame "functional groups" (as well as the top level DICOM dataset) to factor out the commonality compared to encoding single slices each with its own complete "header":
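The same factoring can be sketched in code, with plain Python dictionaries standing in for DICOM datasets (the attribute selection here is illustrative, not a normative list of what belongs in which functional group):

```python
# Rough sketch of how the enhanced multi-frame objects factor out
# commonality, using plain dictionaries in place of DICOM datasets.
# The attribute choices are illustrative only.

# Legacy encoding: every single-frame instance repeats the common "header".
legacy_slices = [
    {
        "PatientName": "Doe^Jane",                      # repeated in every instance
        "StudyInstanceUID": "1.2.3.4",                  # repeated in every instance
        "ImageOrientationPatient": [1, 0, 0, 0, 1, 0],  # same for all slices
        "ImagePositionPatient": [0.0, 0.0, float(z)],   # varies per slice
        "PixelData": b"...",
    }
    for z in range(200)
]

def to_enhanced(slices):
    """Factor attributes common to all slices into a shared group,
    leaving only the per-slice values in per-frame groups."""
    keys = slices[0].keys() - {"PixelData"}
    shared = {
        k: slices[0][k]
        for k in keys
        if all(s[k] == slices[0][k] for s in slices)
    }
    per_frame = [{k: s[k] for k in keys - shared.keys()} for s in slices]
    return {
        "SharedFunctionalGroups": shared,
        "PerFrameFunctionalGroups": per_frame,
        "PixelData": b"".join(s["PixelData"] for s in slices),
    }

enhanced = to_enhanced(legacy_slices)
# Only the position varies, so it is all that remains per-frame;
# the receiving end parses the common "header" once, not 200 times.
```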
Now, it is no secret that adoption of the enhanced family of objects has been very slow, especially by the modalities that already have single frame "legacy" DICOM objects, particularly CT and MR. Currently only Philips offers a commercial MR implementation and Toshiba offers a commercial CT implementation. Many PACS are capable of storing and regurgitating these over a DICOM connection, but may not be capable of viewing them or sorting or annotating them correctly, or performing more sophisticated functions on them like 3D and MPR rendering; nor, for that matter, are the enhanced objects well supported in many CD-based viewers, etc.
But it is important to distinguish between gaps in implementations, as opposed to gaps in the DICOM standard. If the standard already specifies a means to solve a problem it should be used by implementers; inventing a new "standard" like MINT to solve the problem is not going to encourage implementation (unless it solves other pressing problems as well). The bottom line here seems to be that PACS vendors in particular are not well motivated to solve in an interoperable (standard) way, any problem beyond ingestion of images; many PACS vendors may be quite happy with proprietary implementations between the archive/manager component of their PACS and their image display devices or software. But the last thing we need is multiple competing standard approaches to solving the same problem (or entire competing standards), since that only compromises interoperability.
So, to cut a long story short, the argument was put forth this week that use of the enhanced multi-frame family of objects for encoding a single "acquisition" as a single object should suffice to achieve the vast majority of the benefits of the "study normalization" suggested by MINT.
We explored some of what could be achieved by using enhanced multi-frame objects and observed that:
- though not all modalities can create enhanced multi-frame objects, it is possible to "convert" the original legacy single frame objects into such multi-frame objects
- the modality-specific enhanced multi-frame objects have many mandatory and coded attributes that are not present in the legacy single frame object, which it is challenging if not impossible to populate during such a conversion
- there are "secondary capture" enhanced multi-frame objects that do permit the optional inclusion of position, orientation, temporal and dimension information extracted from legacy single frame objects, and conversion to these might suffice for the vast majority of bulk transfer and viewing and analysis use-cases
- it may be desirable to either a) document in the standard how to perform such a conversion, or b) define new IODs and SOP Classes that are somewhere in between the "everything optional" enhanced secondary capture objects and the modality-specific objects in terms of requirements, in order to assure interoperability of archives and viewers using such an approach
- it may also be desirable to specify the requirements for full round-trip fidelity conversion from the legacy single frame objects to the converted enhanced multi-frame object and back again, to allow intermediate devices to take advantage of the multi-frame objects but still serve extracted single frame objects to legacy receiving devices, of which there will remain many in the installed base
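The conversion and round-trip de-conversion discussed above can be sketched roughly as follows, again with plain dictionaries standing in for DICOM datasets (this is not a normative encoding; a real conversion would also have to deal with UID assignment for the new composite object, which this omits):

```python
# Sketch of round-trip conversion between legacy single-frame objects
# and a converted multi-frame object (dicts in place of DICOM datasets).

def convert_to_multiframe(slices):
    """Fold a set of legacy slices into one multi-frame object, retaining
    per-frame everything needed to reconstitute the originals exactly."""
    common = {
        k: v for k, v in slices[0].items()
        if all(s.get(k) == v for s in slices)
    }
    per_frame = [
        {k: v for k, v in s.items() if k not in common} for s in slices
    ]
    return {"Shared": common, "PerFrame": per_frame}

def extract_single_frames(mf):
    """Round-trip de-conversion, to serve legacy receiving devices."""
    return [dict(mf["Shared"], **pf) for pf in mf["PerFrame"]]

slices = [
    {"SOPInstanceUID": f"1.2.3.{i}",          # differs per slice, so kept per-frame
     "StudyInstanceUID": "1.2.3",             # common, so factored out
     "ImagePositionPatient": [0.0, 0.0, i * 1.5],
     "PixelData": bytes([i])}
    for i in range(3)
]

mf = convert_to_multiframe(slices)
assert extract_single_frames(mf) == slices    # full round-trip fidelity
```

Note that keeping the original SOPInstanceUID of each slice per-frame is what makes the round trip lossless; the intermediate device can hand a legacy workstation back exactly the instances it would have received directly from the modality.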
So, the new action item for WG 6 (and more specifically for me, since I volunteered to write it) is to produce a work item proposal for the committee to define a new IOD and SOP Class (or perhaps a modality-specific family of them) for "transitional multi-frame converted legacy" images. The deficiency in the existing standard is the lack of a set of multi-frame images that can be fully populated with only the limited information in the legacy images, yet with sufficient mandatory position, orientation, temporal and dimension information to satisfy the 3D and 4D viewing and rendering and bulk transfer use cases.
In the interim, now that the MINT guys have been encouraged to look at the potential use of the secondary capture multi-frame objects, they have the opportunity to experiment with them to see if they can achieve the necessary performance in their implementation.
The following four slides illustrate graphically the principle of migration from:
- a completely proprietary optimized PACS to workstation interface (where the "viewer" is essentially "part of the PACS"), to
- a DICOM standard PACS to workstation boundary (possible with current single frame DICOM Query/Retrieve/Store interfaces, but likely not "optimized" for performance by the vendor), to
- converting to multi-frame objects (or passing through those from modalities), combined with round-trip de-conversion to support legacy workstations, to
- supporting PACS to PACS (or Image Manager/Image Archive or Vendor Neutral Archive) transfers also using legacy objects converted to multi-frame if supported by both sides:
In what ways does this proposal differ from what the MINT implementation has done to date?
- the aggregation of meta-data would occur at the "acquisition" level, and not the entire "study" level; this would seem to be sufficient to capture the vast majority of the performance benefit in that when viewing or performing 3D/4D analysis, the bulk of the pixel data and meta-data for each "set" will be within one object
- the enhanced multi-frame objects require that every frame have the same number of rows and columns and mostly the same pixel data characteristics (bit depth, etc.); this means that funky image shapes like localizers will end up in separate objects
- the opportunity exists to pre-populate the "dimension" information that is a feature of the enhanced family of objects, e.g., this dimension is space, this is time, etc., rather than have to "figure it out" retrospectively from each vendor's pattern of use of the individual descriptive attributes
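The second point above implies that any legacy-to-multi-frame conversion must first partition the instances by shape and pixel characteristics. A minimal sketch of such a partition (the key attributes shown are illustrative, not an exhaustive or normative list):

```python
from itertools import groupby

# Sketch: partition legacy instances so that only frames with identical
# shape and pixel data characteristics end up in the same multi-frame
# object; a localizer with different Rows/Columns falls into its own group.

def partition_key(inst):
    return (inst["Rows"], inst["Columns"],
            inst["BitsAllocated"], inst["PhotometricInterpretation"])

def partition_for_conversion(instances):
    ordered = sorted(instances, key=partition_key)
    return [list(group) for _, group in groupby(ordered, key=partition_key)]

instances = (
    [{"Rows": 512, "Columns": 512, "BitsAllocated": 16,
      "PhotometricInterpretation": "MONOCHROME2"}] * 100   # axial slices
  + [{"Rows": 512, "Columns": 250, "BitsAllocated": 16,
      "PhotometricInterpretation": "MONOCHROME2"}]         # localizer
)
groups = partition_for_conversion(instances)
# the 100 axial slices group together; the localizer ends up on its own
```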
Also discussed at our recent meeting was the availability of mechanisms in DICOM for gaining access to selected frames and to meta-data (the "header") without transferring everything. Those two features are defined in Supplement 119, Instance and Frame Level Retrieve SOP Classes, which was specifically written to address the consequences of putting "everything" in single large objects. For example, if a report references one or two key frames in a very large object, one needs the ability to retrieve just those frames efficiently. Supplement 119 defines a mechanism for doing so, by extracting those frames, and building a small but still valid DICOM object to retrieve and display. The existing WADO HTTP-based DICOM service also supports the retrieval of a selected frame, as do the equivalent SOAP-based Web Services transactions defined in IHE XDS-I, back ported into the DICOM standard in Supplement 148 WADO via Web Services, currently out for ballot. Though Supplement 119 does define a SOP Class for gaining access to the meta-data without transferring the bulk data, if one uses the JPEG Interactive Protocol (JPIP) to access frames or selected regions of a frame in JPEG 2000, one can also gain access to the meta-data using a specific Transfer Syntax (see Supplement 106).
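The frame extraction idea can be caricatured as follows, once more with plain dictionaries in place of DICOM datasets (the real Supplement 119 SOP Classes also assign new UIDs and reference the source instance, which this sketch omits):

```python
# Sketch of extracting a frame subset from a large multi-frame object
# and building a small but self-consistent object to retrieve and
# display, e.g., the one or two key frames referenced by a report.

def extract_frames(mf, frame_numbers):
    """frame_numbers are 1-based, as DICOM frame numbers are."""
    idx = [n - 1 for n in frame_numbers]
    size = mf["BytesPerFrame"]
    return {
        "Shared": dict(mf["Shared"]),
        "PerFrame": [mf["PerFrame"][i] for i in idx],
        "BytesPerFrame": size,
        "PixelData": b"".join(
            mf["PixelData"][i * size:(i + 1) * size] for i in idx),
        "NumberOfFrames": len(idx),
    }

mf = {
    "Shared": {"StudyInstanceUID": "1.2.3"},
    "PerFrame": [{"Position": i} for i in range(100)],
    "BytesPerFrame": 4,
    "PixelData": bytes(i % 256 for i in range(400)),
    "NumberOfFrames": 100,
}

key_object = extract_frames(mf, [7, 42])
# the result carries just two frames' worth of meta-data and pixel data
```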
Unlike IHE (particularly IHE XDS and XDS-I), the MINT guys are also RESTful at heart, and this is reflected in their current implementation. We tried to keep out of the REST versus SOAP religious wars during our most recent discussion, and focus on what DICOM already has to solve the use case. Yet to be resolved is the matter of whether DICOM already has sufficient pure DICOM network protocol support and HTTP-based support to satisfy the use-cases without having to introduce additional RESTful equivalents. On the one hand there is a Committee and WG 6 level desire to not have multiple gratuitously different ways to do the same thing; on the other hand there may be significant advantages to alternative mechanisms if they can take effective advantage of off-the-shelf HTTP infrastructure components. A case in point is the use of HTTP caching that requires some statelessness in the transactions to be effective. The MINT guys were advised to present evidence that such caching is sufficiently beneficial in order to justify the introduction of yet another transport mechanism, and this is something that WG 27 intends to follow up on.
Related religious wars about whether or not DICOM or HTTP should be used "within" an enterprise (i.e., over the LAN), the extent to which DICOM can be used between LANs that are nominally part of the same "enterprise" but are separated by firewalls (i.e., if the Canadians can do DICOM between two places, why can't Johns Hopkins), why XDS-I is not sufficient, etc., were mentioned but essentially deferred for another day. One key aspect mentioned, but not discussed in much detail, was the matter of user authentication and access control, and the IHE direction that uses Kerberos (EUA) within an enterprise and SAML assertions (XUA) across enterprises; this is easy for DICOM and SOAP-based WS like XDS-I, but potentially problematic for RESTful solutions (like WADO). Whether or not Vendor Neutral Archives (VNA), whatever they are, are a good idea was also not debated; we simply agreed that the efficient bulk data transfer from one archive to another using a standard protocol is a genuine use case. That said, the IHE radiology guys (myself included) are contemplating considering (again) the question of whether to separate the Image Manager from the Image Archive Actors in the IHE Radiology Technical Framework, so we do have the opportunity to start a whole new war in a whole new forum.
There are many other practical issues that the MINT folks have encountered, such as the lack of uniqueness of UIDs, or inconsistency in some of the patient or study information between individual slices, but many of these can be characterized as "implementation" problems faced by any PACS or archive when populating their databases, and not something that the DICOM standard (or any standard) can really resolve. I.e., if an implementation fails to comply with the standard it is just plain "bad" (or the model of the real-world in the standard does not actually match the real-world). At some point (usually ingestion from the modality, and/or administrative study merges and other "corrections"), any implementation has to deal with this, and will have to regardless of whether the originally proposed MINT approach or the conversion to enhanced multi-frame DICOM approach is used. With respect to the need for change management, the MINT proponents were made aware of the Image Object Change Management (IOCM) profile defined by IHE, which addresses the use-cases and implementation of change in a loosely-coupled multi-archive environment, as well as the IHE Multiple Image Manager/Archive (MIMA) profile, which addresses archives with different patient identity domains and what to do with DICOM identifying attributes when transferring across domains. With respect to modalities or other implementations that create non-unique UIDs, the need to a) detect and correct for this on ingestion, and b) report the defects to the offending vendor, was emphasized.
Finally, the foregoing should not be taken to mean that switching to the use of multi-frame objects is a panacea, nor indeed a prerequisite for the efficient transport of large multi-slice studies. As we were careful to emphasize during the initial roll out of the enhanced multi-frame DICOM CT and MR objects, the primary goal was improved interoperability for advanced applications, not transfer performance improvement, since it was well known at the time that optimized applications transporting single frame objects can achieve very good performance (e.g., through the negotiation and use of DICOM asynchronous operations, or multiple simultaneous associations if asynchronous operations cannot be negotiated, both of which eliminate the impact of delayed acknowledgment of individual C-STORE operations, whether it be due to network latency or application level delays such as waiting for successful database insertion before acknowledgment). Rather, poor observed performance in the real world is often a consequence of applications simply not being optimized or well designed in this respect, and many vendors' engineers are far too quick to switch to a proprietary optimized protocol and ignore opportunities for optimizing standards-based solutions. A case in point is this old white paper from Oracle, A Performance Evaluation of Storage and Retrieval of DICOM Image Content, which shows quite impressive numbers for single frame DICOM images using JDBC (not DICOM network) based server to client retrieval over five 1 Gigabit Ethernet connections between server and client (over 400 MB/s, 852 images/s, 1497 Cardiac CT studies per hour). MINT performance figures over a single connection as published so far are also impressive (Harris's results and Vital's results), though the hardware is different.
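To make the acknowledgment-delay argument concrete, here is a deliberately simplistic back-of-envelope model; it ignores TCP dynamics and any overlap of transmission with acknowledgment, and the numbers plugged in are hypothetical, not measurements:

```python
# Back-of-envelope model of why per-C-STORE acknowledgment delay
# dominates transfer time, and how asynchronous operations (or parallel
# associations) hide it by allowing multiple stores to be outstanding.

def transfer_time(n_images, image_bytes, bandwidth_bps,
                  rtt_s, ack_delay_s, window=1):
    """Seconds to store n_images when up to `window` C-STOREs may be
    outstanding at once (window=1 models fully synchronous operation)."""
    send = image_bytes * 8 / bandwidth_bps   # wire time per image
    wait = rtt_s + ack_delay_s               # per-image acknowledgment stall
    return n_images * send + (n_images / window) * wait

# Hypothetical: 1000 slices of 512 KB over 100 Mb/s, with 50 ms RTT and
# a 100 ms database-insert delay before each acknowledgment.
sync = transfer_time(1000, 512 * 1024, 100e6, 0.05, 0.10, window=1)
fast = transfer_time(1000, 512 * 1024, 100e6, 0.05, 0.10, window=32)
# synchronous: ~192 s, mostly stalled waiting for acknowledgments;
# 32 outstanding operations: ~47 s, approaching the ~42 s wire time
```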
The bottom line is probably that the protocol used is less important than the architecture and implementation details on both ends, and in comparing performance claims for specific commercial implementations, one needs to be sure one is comparing apples with apples rather than oranges. The lack of a published industry standard benchmark for these use-cases is probably a significant gap that we should try to close.
The following slide is one that I produced for the early enhanced multi-frame demonstration and educational lectures, about where the DICOM protocol critical delay lies:
The C-STORE acknowledgment discussion is separable from, but related to, the discussion of TCP/IP performance in the presence of significant latency (or also significant packet loss). As was emphasized at the recent meeting by the purveyor of a potential proprietary TCP/IP replacement (Aspera), unmodified TCP/IP over wide area networks is not ideal for taking full advantage of the theoretical bandwidth limits, and both DICOM and HTTP (and hence MINT, which is HTTP-based) are potentially at a disadvantage in this respect. The conventional answer to this is to use multiple connections and associations and to swap out the TCP stack at both ends of a slow connection (e.g., a satellite link), and/or to use a "WAN accelerator" box at both ends (such as something from Circadence). I am not recommending or promoting any of these technologies or companies, since I have no experience with them. I will say that the idea of changing DICOM applications and tool kits to use something other than TCP/IP, or to add the ability to negotiate something proprietary (as Aspera was suggesting), is superficially way less attractive to me than putting in a box in between that takes care of the problem, transparent to the applications, if it achieves anything close to the maximum possible "goodput". Anyway, if you are interested in thinking about TCP/IP performance issues, I have found Hassan and Jain's book High Performance TCP/IP Networking to be a good introduction. Not for the first time, trying to take advantage of UDP or of features of various peer-to-peer network protocols was also discussed, and a quick Google search on DICOM and P2P or UDP file transfer will reveal some interesting articles and experiments.
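For the record, the TCP limit being alluded to is simply window size divided by round-trip time: a single connection cannot go faster than that no matter how fat the pipe, which is why high-latency links need large scaled windows, multiple connections, or a protocol swap. The arithmetic is standard, nothing vendor-specific:

```python
# Standard TCP throughput bound: a single connection is limited to
# (window size / round-trip time), regardless of link bandwidth.

def max_tcp_throughput_bps(window_bytes, rtt_s):
    return window_bytes * 8 / rtt_s

def bandwidth_delay_product_bytes(bandwidth_bps, rtt_s):
    """Window needed to keep a link of this bandwidth and RTT full."""
    return bandwidth_bps * rtt_s / 8

# Classic unscaled 64 KiB window over a 500 ms satellite round trip:
ceiling = max_tcp_throughput_bps(64 * 1024, 0.5)    # about 1 Mb/s
# Window needed to fill a 100 Mb/s link at that latency:
needed = bandwidth_delay_product_bytes(100e6, 0.5)  # 6.25 MB
```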
PS. Note that in the foregoing I reference and provide links to numerous DICOM Supplements that introduced various features; since most of these supplements have long since been folded into the body of the DICOM standard, and may have had subsequent corrections applied, implementers need to reference the latest DICOM standard text and not the old supplement text, as appropriate. I reference the supplements only to provide a historical time line and to provide the context for interpreting their scope and use.