Summary: The use of the term "modality" in radiology is old; its definitions in the various standards are somewhat vague, but relatively consistent in terms of both meaning and granularity.
Long version:
I got an email inquiry from Scott N who wrote:
My colleague and I have been discussing the usage of the term "imaging modality" and how it appears (to us) that it is commonly applied ambiguously in the literature. I would roughly define the term to mean "a method involving the employment of a physical agent to produce an image." As MR utilizes electromagnetic waves and PET utilizes positron-emitting isotopes, I would then characterize the two as separate image modalities. However, a quick Google search yields countless examples of statements such as the "DTI and fMRI modalities." By my definition, I would classify DTI and fMRI as the same imaging modality, and I believe the DICOM modality 0008,0060 always gives "MR" for these two types of scans.
I was hoping you might have a little time to comment on this and on how the DICOM standard defines it?
This prompted me to get a little carried away in researching the matter (this being a quiet Sunday morning, too foggy to fly yet, the dog emptied and my wife absorbed in vacation planning), and I came up with the following.
DICOM defines the Attribute of the Series, Modality, in PS 3.3 C.7.3.1, to mean:
"Type of equipment that originally acquired the data used to create the images in this Series"
and then proceeds to provide a list of defined terms (the value set) for the attribute in C.7.3.1.1.1.
Also, in PS 3.3 C.4.15:
"Type of equipment that originally acquired the data used to create the images associated with this Modality Performed Procedure Step."
In HL7 V2.5 Section 4.5.6.5 the Modality field is defined (in a post-DICOM modality worklist addition to HL7) as:
"The type of equipment requested to acquire data during performance of a Procedure Step. The acquired data will be used to create the images for the Imaging Study corresponding to the Requested Procedure."
In IHE, the Acquisition Modality Actor is defined in RAD TF Vol 1 Section 2.3 as:
"A system that acquires and creates medical images while a patient is present, e.g. a Computed Tomography scanner or Nuclear Medicine camera. A modality may also create other evidence objects such as Grayscale Softcopy Presentation States for the consistent viewing of images or Evidence Documents containing measurements."
Surprisingly, RadLex doesn't provide a text definition for its Imaging Modality concept (http://www.radlex.org/RID/RID10311), only relationships to parent and child concepts; this is probably an easily rectified oversight.
SNOMED CT has a concept "Imaging Method" (360037004) for which it has a synonym "Imaging Modality", but again, no textual definition; the UMLS maps this to its "Imaging Modality" concept C1275506.
The UMLS Metathesaurus (https://uts.nlm.nih.gov//metathesaurus.html) also contains a more general concept "Modality" (C0695347).
These map to the NCI Thesaurus (http://ncit.nci.nih.gov/ncitbrowser/) concept of "Modality" (C41147) defined variously as:
"NCIt Definition: A specific manner, characteristic, pattern of application or the employment of, any therapeutic agent or method of treatment, especially involving the physical treatment of a condition.
DICOM Definition: Type of data acquisition device.
NCI-GLOSS Definition: A method of treatment. For example, surgery and chemotherapy are treatment modalities."
Wikipedia says ("http://en.wikipedia.org/wiki/Modality"):
"In medical imaging, any of the various types of equipment or probes used to acquire images of the body, such as radiography, ultrasound and magnetic resonance imaging."
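All of these definitions converge on the same simple data element, and, as Scott notes, fMRI and DTI series both carry the same value in it. A toy sketch (plain Python standing in for a DICOM dataset; the subset of defined terms shown is my selection):

```python
# Toy sketch (mine, not from the standard text): the Modality attribute is
# data element (0008,0060) at the Series level; fMRI, DTI, MRA, ... all
# carry the same value "MR".
MODALITY_TAG = (0x0008, 0x0060)

series_header = {MODALITY_TAG: "MR"}   # toy stand-in for a DICOM Series dataset

# A small subset of the defined terms from PS 3.3 C.7.3.1.1.1:
DEFINED_TERMS = {"CT", "MR", "PT", "NM", "US", "XA", "ES", "SM", "ECG"}

assert series_header[MODALITY_TAG] in DEFINED_TERMS
print(series_header[MODALITY_TAG])   # MR
```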
I don't know what the history of the use of the word "modality" in a medical imaging context is, but it would be fascinating to research it. A quick look at the earliest reference in Radiology shows a mention in 1923 of a "therapeutic quartz lamp", which "enables all patients, because of its low cost, to be benefited by the use of this modality" (http://radiology.rsna.org/content/1/4/245.full.pdf). A therapeutic rather than diagnostic radiology use perhaps, but an early similar use all the same (and I found other mention of the term in a similar vein from the same year).
Given the DICOM precedent, I think it is more practical to consider fMRI and DTI as being "sub types" of a single modality MR.
Likewise MR Angiography is also a "sub type" of MR in DICOM, as are other borderline cases, such as MR Spectroscopy, even though there is not always imaging involved (it was once separate, but not any more, in recognition that one can produce spectroscopy-based (chemical shift, metabolite map) images, for example).
We had a big fight years ago in DICOM about introducing an explicit "modality sub-type" attribute, when we were doing visible light "modalities" and ended up with a generic VL from which we carved out specific modalities like ES (endoscopy) and slide microscopy (SM) as specific needs for different information or encoding for those application areas arose. In other words, though the technology may remain the same (a digital camera), the different "accessory" if you will (e.g., the endoscope or microscope) and the application area (e.g., gut, slide, retina, cornea) lead to refinements of what a single attribute Modality would refer to, weighed against what was generally done "together" by the equipment. The compounding of these different dimensions leads to some confusion, but, for example, one generally performs MRI, MRA, DWI, DTI, fMRI, DCE-MRI, MRS and CSI all in the same study on the same device (an MR scanner), whereas one would not perform endoscopy (ES) with a slide microscope (SM) even though they are both visible light (VL). Nor for that matter would one generally mix upper and lower GI endoscopy in practice, but it has not proven necessary to separate these by a different DICOM Modality value. Similarly, for time-based waveforms, DICOM has relatively specific modality values like ECG, HD, RESP, etc. (even though those waveforms may emerge from the same "device" as that producing accompanying images and certainly be part of the same procedure).
The reason for the fight, by the way, was that some of us wanted the single Modality attribute that is widely implemented in browsers and databases to remain a single, useful, value, and not to require implementation changes to support multiple values for the same attribute or multiple attributes. As always in DICOM, purity is sacrificed in favor of pragmatic and incremental extension of installed base functionality rather than disruptive revolutionary change.
Hybrid modalities like PET-CT (and now MR-PET) lead to some concern about conflating the "equipment" with the "modality", as well as some cardinality issues in models and queries (hence attributes like "Modalities in Study"). For DICOM aficionados, one needs to remember that Modality and SOP Class are not synonymous (hence the addition of "SOP Classes in Study", etc.). Theoretically we could have introduced a new single combined modality value for these cases, but in practice the equipment was two distinct modalities "bolted together" (and still is for many vendors) and the downstream applications (e.g., to perform fusion) were novel anyway, and they evolved to cope with the "two" modalities involved in the one procedure.
However, in general conversation (or the scientific literature) where the context is more "granular", I can see why folks use the term "modality" to distinguish such things as fMRI and DTI, but we should probably try to come up with a better term for that "lower level". RadLex, for example, just makes fMRI and MRA "Is a" children of MRI, which is in turn an "Is a" child of Imaging Modality, without providing a definition of what entity those children are. This actually raises an interesting question about whether traditional hierarchical entity-relationship models such as DICOM uses are sufficient for descriptive purposes, or whether "tagging" anything with concepts that depend on an ontology of relationships, semantic web style, adds more value. E.g., should one tag (semantically annotate) a series of images as being an fMRI, without explicitly saying that it is an MR too, such that if (and only if) the recipient had access to the ontology, it would "know" that relationship? Or should one just anticipate that use case and explicitly define that it is an MR and an fMRI in a model that is explicitly encoded? Of course the current state of the art is that one gets an MR modality value and figuring out that it is an fMRI (or any other highly specific "flavor" of MR) is a challenge, but the enhanced family of DICOM objects (with much more detailed attributes and value sets, but still traditional ER models) are intended to help with that.
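To make the ontology point concrete, here is a toy sketch (entirely my illustration, not RadLex or SNOMED themselves) of how a recipient holding an "Is a" hierarchy could infer that a series tagged only as fMRI is also an MR:

```python
# Toy sketch (my illustration, not RadLex itself): a recipient that holds an
# "is a" ontology can infer that a series tagged only as "fMRI" is also MR.
is_a = {                       # hypothetical fragment of a modality ontology
    "fMRI": "MRI",
    "DTI": "MRI",
    "MRA": "MRI",
    "MRI": "Imaging Modality",
}

def ancestors(concept):
    """Every concept a tag implies, walking the "is a" chain to the root."""
    implied = []
    while concept in is_a:
        concept = is_a[concept]
        implied.append(concept)
    return implied

print(ancestors("fMRI"))   # ['MRI', 'Imaging Modality']
```

The alternative discussed above, explicitly encoding both "MR" and "fMRI" in the object, trades this inference away for recipients that have no ontology at hand.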
Anyway, thanks Scott, for the interesting conversation starter.
David
Sunday, September 25, 2011
Sunday, September 18, 2011
IHE Radiology Planning Committee rejects Image Manager/Archive Content Migration Profile
Summary: Other proposals were judged to have higher priority than the Image Manager/Archive Content Migration Profile.
Long version:
Last week I wrote about a proposal that I had drafted for an Image Manager/Archive Content Migration Profile, to use an offline archive of standard DICOM objects on an external transportable disk array as an approach to the PACS migration problem.
Unfortunately, as you can see from the minutes of the IHE Radiology Planning Committee t/con, other proposals received higher priority for evaluation by the technical committee, which is the next step before shortening the list further.
Irritatingly, I had let my voting rights on the planning committee lapse, since I hadn't participated in the last few t/cons, and the t/con ran over time and I had to leave before the vote, so I am not sure if anyone else voted for it anyway.
So, not this year, at least in IHE anyway. That said, the lack of an IHE profile does not mean that one cannot implement the idea (though it does mean that one does not have the IHE Connectathon as a venue for testing it).
David
Sunday, September 11, 2011
PACS Migration Using A Standard-format Intermediate Offline Archive
Summary: Creation of an offline disposable archive of standard DICOM objects on an external transportable disk array is proposed as an approach to the PACS migration problem; an IHE profile has been proposed.
Long version:
This isn't actually going to be very long, since most of the content is elsewhere.
On the RCR Imaging Informatics Group there was a recent discussion of PACS migration that arose in the context of the existing contracts expiring some time soon (see "http://www.pacsgroup.org.uk/cgi-bin/forum/show.cgi?2/58490"). During the course of that there was discussion of DICOM "Part 10" format objects, which ultimately led to a proposal for all PACS to be able to create a new archive copy on an externally supplied filesystem (directly or network-attached) of standard DICOM files with a standard (lossless) compressed transfer syntax and all the "headers" up to date.
So, since it is that time of year once again, I put together an IHE brief proposal to define an Image Manager/Archive Content Migration Profile for this. Just swap "PACS" for "IM/IA" if you are not familiar with IHE-speak.
This is not a particularly new idea. I recall that many years ago Peter Kuzmak of the VA put forth some suggestions related to an interchangeable archive format for migration with some file system organization and naming as well as potential DICOMDIR-related changes in that context, but using DVD-R; I dug out his old presentation that was referenced from the minutes of the WG 5 meeting in Feb 2000.
Nowadays studies are so large and multitudinous that migrating them on thousands of CDs or DVDs would seem to be infeasible, but the low cost of consumer price-point hard drives and RAID boxes seem to suggest that making what is essentially a "throw away" copy of an entire PACS archive is nowadays realistic. It only needs to last long enough to be populated, reattached to the new PACS and its content imported. The DICOM standard already supports the notion of USB-attached media (physical media unspecified) to allow for flash drives and hard drives, although there may be details that need to be worked through for the sheer size of an entire PACS archive.
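As a sketch of the sort of file system organization and naming such a migration archive might use, one could lay out the copy as a Study/Series/Instance hierarchy keyed by UIDs. The function name, layout and UID values here are my assumption for illustration, not anything the profile proposal or Peter Kuzmak's presentation specifies:

```python
# Hypothetical sketch (mine, not the proposal's): organize the "throw away"
# migration copy as Part 10 files in a Study/Series/Instance hierarchy so the
# receiving PACS can import it without consulting any external index.
from pathlib import PurePosixPath

def archive_path(root, study_uid, series_uid, sop_instance_uid):
    """Destination path for one instance, keyed entirely by its UIDs."""
    return PurePosixPath(root) / study_uid / series_uid / (sop_instance_uid + ".dcm")

p = archive_path("/mnt/migration", "1.2.3", "1.2.3.4", "1.2.3.4.5")
print(p)   # /mnt/migration/1.2.3/1.2.3.4/1.2.3.4.5.dcm
```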
Anyway, it remains to be seen how the IHE Radiology Planning Committee receives this, whether they (well, we, since I am a member) reject it out of hand, and how they prioritize it relative to other profile proposals (since the Technical Committee has limited bandwidth and each year only works on a small number of proposals).
So if any of you out there have any thoughts about whether this is a good idea or a terrible one, or suggestions for improvement of the proposal, please let me know.
Saturday, June 11, 2011
Framing the Big Study Problem
Summary: Large studies such as thin slice CT create a performance problem in unoptimized implementations; DICOM provides several means of addressing these problems without throwing out DICOM entirely and reimplementing, as the MINT folks originally proposed; retrospective use of the enhanced multi-frame family of objects may be able to alleviate this problem, even without support in the modalities, by converting legacy single-frame DICOM objects to enhanced multi-frame objects in the PACS for distribution to workstations or other PACS or archives.
Long version:
A group of folks at Johns Hopkins, Harris Corp, and Vital Images have been working on the "large study" problem and have produced a largely "DICOM free" implementation (apart from modality image ingestion) called Medical Imaging Network Transport (MINT). They are now proposing that this become a new "standard" and be blessed by and incorporated in DICOM as a "replacement". Since the MINT implementation is based on HTTP transport, DICOM WG 27 Web Technology has become the home for these discussions. Not surprisingly, the "replace everything" work item proposal was rejected by the DICOM Standards Committee at our last meeting by a large majority - you can read the summary in the minutes of the committee and see the slides presented by the MINT folks.
The rejection by the committee of the proposal should not be interpreted as a rejection of the validity of the use-case, however.
It is accepted that large studies potentially pose a problem for many existing implementations, both for efficient transfer from the central store to the user's desktop for viewing or analysis, and for bulk transfer between two stores (e.g., between a PACS and a "vendor neutral archive" or a regional image repository).
So, to move forward with solving the problem, WG 6 and WG 27 met together earlier this week to try to achieve consensus on what the existing DICOM standard has to offer in this respect, and to identify any gaps that may exist that could be filled by incremental extensions to the standard.
If one puts aside the assumption that it is necessary to completely replace DICOM (and hence re-solve every problem that DICOM and PACS vendors have spent the last quarter of a century solving), and instead focuses narrowly on the key aspects of concern, two essential issues emerge:
Yet this approach ignores the significant effort that has already been put into "normalizing" each acquisition at the modality end, specifically, the "enhanced multi-frame" family of DICOM objects defined for CT, MR and PET as well as XA/XRF, and new applications like 3D X-ray, breast tomosynthesis, ophthalmic optical coherence tomography (OCT), intra-vascular OCT, pathology whole slide imaging (WSI), etc. The following slide (which I simplified and redrew from an early one produced by either Bob Haworth or Kees Verduin for WG 16) illustrates how the enhanced multi-frame family of objects uses the shared and per-frame "functional groups" (as well as the top level DICOM dataset) to factor out the commonality compared to encoding single slices each with its own complete "header". [Slide not reproduced here.]
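The factoring described above can be illustrated with back-of-envelope arithmetic; the element counts below are assumed round numbers of my choosing, not measurements of any real object:

```python
# Back-of-envelope sketch (my assumed figures, not measurements): how factoring
# shared metadata into functional groups reduces header redundancy for an
# acquisition of many thin slices.
n_slices = 600            # e.g., a thin-slice CT acquisition
shared_elements = 200     # elements identical across all slices (assumed)
per_frame_elements = 10   # elements that genuinely vary per slice (assumed)

# Legacy encoding: every slice repeats the complete "header".
legacy_total = n_slices * (shared_elements + per_frame_elements)

# Enhanced multi-frame: shared elements encoded once, per-frame elements per slice.
enhanced_total = shared_elements + n_slices * per_frame_elements

print(legacy_total)    # 126000 elements across 600 single-frame headers
print(enhanced_total)  # 6200 elements in one multi-frame header
```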
Now, it is no secret that adoption of the enhanced family of objects has been very slow, especially by the modalities that already have single frame "legacy" DICOM objects, particularly CT and MR. Currently only Philips offers a commercial MR implementation and Toshiba offers a commercial CT implementation. Many PACS are capable of storing and regurgitating these over a DICOM connection, but may not be capable of viewing, sorting or annotating them correctly, or performing more sophisticated functions on them like 3D and MPR rendering; nor, for that matter, are they well supported in many CD-based viewers, etc.
But it is important to distinguish between gaps in implementations, as opposed to gaps in the DICOM standard. If the standard already specifies a means to solve a problem it should be used by implementers; inventing a new "standard" like MINT to solve the problem is not going to encourage implementation (unless it solves other pressing problems as well). The bottom line here seems to be that PACS vendors in particular are not well motivated to solve in an interoperable (standard) way, any problem beyond ingestion of images; many PACS vendors may be quite happy with proprietary implementations between the archive/manager component of their PACS and their image display devices or software. But the last thing we need are multiple competing standard approaches to solving the same problem (or entire competing standards), since that only compromises interoperability.
So, to cut a long story short, the argument was put forth this week that use of the enhanced multi-frame family of objects for encoding a single "acquisition" as a single object should suffice to achieve the vast majority of the benefits of the "study normalization" suggested by MINT.
We explored some of what could be achieved by using enhanced multi-frame objects and observed that:
So, the new action item for WG 6 (and more specifically for me, since I volunteered to write it) is to produce a work item proposal for the committee to define a new IOD and SOP Class (or perhaps a modality-specific family of them) for "transitional multi-frame converted legacy" images. The deficiency in the existing standard is the lack of a set of multi-frame images that can be fully populated with only the limited information in the legacy images, yet with sufficient mandatory position, orientation, temporal and dimension information to satisfy the 3D and 4D viewing and rendering and bulk transfer use cases.
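The core of such a conversion can be sketched in a few lines: attributes constant across the legacy slices are candidates for the shared functional groups, and those that vary go per-frame. This is my simplification for illustration (it assumes every slice header carries the same set of attributes, and ignores the actual functional group macro structure):

```python
def partition_attributes(slice_headers):
    """Split the attributes of legacy single-frame headers into those constant
    across the acquisition (candidates for the shared functional groups) and
    those that vary per slice (candidates for the per-frame functional groups).
    Simplification: assumes every header carries the same set of attributes."""
    first = slice_headers[0]
    varying = {k for k in first
               if any(h[k] != first[k] for h in slice_headers)}
    shared = {k: v for k, v in first.items() if k not in varying}
    per_frame = [{k: h[k] for k in varying} for h in slice_headers]
    return shared, per_frame

# Three toy CT slices: geometry varies, everything else is common.
slices = [
    {"Rows": 512, "Columns": 512, "SliceThickness": "0.625",
     "ImagePositionPatient": ["0", "0", str(z)]}
    for z in range(3)
]
shared, per_frame = partition_attributes(slices)
print(sorted(shared))                        # ['Columns', 'Rows', 'SliceThickness']
print(per_frame[2]["ImagePositionPatient"])  # ['0', '0', '2']
```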
In the interim, now that MINT guys have been encouraged to look at the potential use of the secondary capture multi-frame objects, they have the opportunity to experiment with them to see if they can achieve the necessary performance in their implementation.
The following four slides illustrate graphically the principle of this migration. [Slides not reproduced here.]
In what ways does this proposal differ from what the MINT implementation has done to date?
Also discussed at our recent meeting was the availability of mechanisms in DICOM for gaining access to selected frames and to meta-data (the "header") without transferring everything. Those two features are defined in Supplement 119, Instance and Frame Level Retrieve SOP Classes, which was specifically written to address the consequences of putting "everything" in single large objects. For example, if a report references one or two key frames in a very large object, one needs the ability to retrieve just those frames efficiently. Supplement 119 defines a mechanism for doing so, by extracting those frames, and building a small but still valid DICOM object to retrieve and display. The existing WADO HTTP-based DICOM service also supports the retrieval of a selected frame, as do the equivalent SOAP-based Web Services transactions defined in IHE XDS-I, back-ported into the DICOM standard in Supplement 148 WADO via Web Services, currently out for ballot. Though Supplement 119 does define a SOP Class for gaining access to the meta-data without transferring the bulk data, if one uses the JPEG Interactive Protocol (JPIP) to access frames or selected regions of a frame in JPEG 2000, one can also gain access to the meta-data using a specific Transfer Syntax (see Supplement 106).
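For illustration, a WADO (URI) request for a single frame of a large multi-frame object might be composed as follows; the parameter names follow PS 3.18, but the endpoint URL and UID values are hypothetical:

```python
# Sketch of a WADO (URI) request for one frame of a large multi-frame object.
# Parameter names follow PS 3.18; the server URL and UIDs are hypothetical.
from urllib.parse import urlencode

params = {
    "requestType": "WADO",
    "studyUID": "1.2.3",
    "seriesUID": "1.2.3.4",
    "objectUID": "1.2.3.4.5",
    "contentType": "image/jpeg",
    "frameNumber": "7",        # retrieve just this frame, not the whole object
}
url = "https://pacs.example.org/wado?" + urlencode(params)
print(url)
```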
Unlike IHE (particularly IHE XDS and XDS-I), the MINT guys are RESTful at heart, and this is reflected in their current implementation. We tried to keep out of the REST versus SOAP religious wars during our most recent discussion, and focus on what DICOM already has to solve the use case. Yet to be resolved is the matter of whether DICOM already has sufficient pure DICOM network protocol support and HTTP-based support to satisfy the use-cases without having to introduce additional RESTful equivalents. On the one hand there is a Committee and WG 6 level desire to not have multiple gratuitously different ways to do the same thing; on the other hand there may be significant advantages to alternative mechanisms if they can take effective advantage of off-the-shelf HTTP infrastructure components. A case in point is the use of HTTP caching, which requires some statelessness in the transactions to be effective. The MINT guys were advised to present evidence that such caching is sufficiently beneficial in order to justify the introduction of yet another transport mechanism, and this is something that WG 27 intends to follow up on.
Related religious wars about whether or not DICOM or HTTP should be used "within" an enterprise (i.e., over the LAN), the extent to which DICOM can be used between LANs that are nominally part of the same "enterprise" but are separated by firewalls (i.e., if the Canadians can do DICOM between two places, why can't Johns Hopkins), why XDS-I is not sufficient, etc., were mentioned but essentially deferred for another day. One key aspect mentioned, but not discussed in much detail, was the matter of user authentication and access control, and the IHE direction that uses Kerberos (EUA) within an enterprise and SAML assertions (XUA) across enterprises; this is easy for DICOM and SOAP-based WS like XDS-I, but potentially problematic for RESTful solutions (like WADO). Whether or not Vendor Neutral Archives (VNA), whatever they are, are a good idea was also not debated; we simply agreed that efficient bulk data transfer from one archive to another using a standard protocol is a genuine use case. That said, the IHE radiology guys (myself included) are contemplating (again) the question of whether to separate the Image Manager from the Image Archive Actors in the IHE Radiology Technical Framework, so we do have the opportunity to start a whole new war in a whole new forum.
Another interesting use case that we discussed is the so-called "zero-footprint" viewer that can run in a "standard" browser that makes use of no additional technology, whether it be a medical application or generic plug-in like Adobe Flash or whatever, since not everybody has that available (especially on mobile devices like tablets). This essentially requires that the server be able to provide a source of meta-data and bulk pixel data that is amenable to efficient rendering and sufficient interaction within something as simple as JavaScript. The extent to which DICOM, WADO and IHE XDS-I based web services are lacking with respect to this zero footprint use case, and to what extent aspects of MINT offer advantages, remains to be determined. There has already been a lot of work in this area; see for example the dcm4che XERO approach discussed briefly in this article about the Benefits of Using the DCM4CHE DICOM Archive. I am not sure exactly what state the open source XERO project is in, given that Agfa uses it in a commercial implementation now, but obviously many of the principles are generally applicable. The question of whether a JSON or a GPB representation of the DICOM header is required (as opposed to MINT's dislike of WG 23's Supplement 118 XML representation of DICOM attributes) has not yet been explored; certainly JSON would seem like a more natural fit if JavaScript is the primary mechanism of implementation, but I dare say that could become the subject of yet another religious war.
There are many other practical issues that the MINT folks have encountered, such as the lack of uniqueness of UIDs, or inconsistency in some of the patient or study information between individual slices, but many of these can be characterized as "implementation" problems faced by any PACS or archive when populating their databases, and not something that the DICOM standard (or any standard) can really resolve. I.e., if an implementation fails to comply with the standard it is just plain "bad" (or the model of the real-world in the standard does not actually match the real-world). At some point (usually ingestion from the modality, and/or administrative study merges and other "corrections"), any implementation has to deal with this, and will have to regardless of whether the originally proposed MINT approach or the conversion to enhanced multi-frame DICOM approach is used. With respect to the need for change management, the MINT proponents were made aware of the Image Object Change Management (IOCM) profile defined by IHE, which addresses the use-cases and implementation of change in a loosely-coupled multi-archive environment, as well as the IHE Multiple Image Manager/Archive (MIMA) profile, which addresses archives with different patient identity domains and what to do with DICOM identifying attributes when transferring across domains. With respect to modalities or other implementations that create non-unique UIDs, the need to a) detect and correct for this on ingestion, and b) report the defects to the offending vendor, was emphasized.
Finally, the foregoing should not be taken to mean that switching to the use of multi-frame objects is a panacea, nor indeed a prerequisite for the efficient transport of large multi-slice studies. As we were careful to emphasize during the initial roll out of the enhanced multi-frame DICOM CT and MR objects, the primary goal was improved interoperability for advanced applications, not transfer performance improvement, since it was well known at the time that optimized applications transporting single frame objects can achieve very good performance (e.g., through the negotiation and use of DICOM asynchronous operations, or multiple simultaneous associations if asynchronous operations cannot be negotiated, both of which eliminate the impact of delayed acknowledgment of individual C-STORE operations, whether it be due to network latency or application level delays such as waiting for successful database insertion before acknowledgment). Rather, poor observed performance in the real world is often a consequence of applications simply not being optimized or well designed in this respect, and many vendors' engineers are far too quick to switch to a proprietary optimized protocol and ignore opportunities for optimizing standards-based solutions. As a case in point, consider this old white paper from Oracle, A Performance Evaluation of Storage and Retrieval of DICOM Image Content, which shows quite impressive numbers for single frame DICOM images using JDBC (not DICOM network) based server-to-client retrieval over five 1 Gigabit Ethernet connections between server and client (over 400 MB/s, 852 images/s, 1497 Cardiac CT studies per hour). MINT performance figures over a single connection as published so far are also impressive (Harris's results and Vital's results), though the hardware is different.
The bottom line is probably that the protocol used is less important than the architecture and implementation details on both ends, and in comparing performance claims for specific commercial implementations, one needs to be sure one is comparing apples with apples rather than oranges. The lack of a published industry standard benchmark for these use-cases is probably a significant gap that we should try to close.
The following slide is one that I produced for the early enhanced multi-frame demonstration and educational lectures, about where the critical delay in the DICOM protocol lies. [Slide not reproduced here.]
The C-STORE acknowledgment discussion is separable from, but related to, the discussion of TCP/IP performance in the presence of significant latency (or significant packet loss). As was emphasized at the recent meeting by the purveyor of a potential proprietary TCP/IP replacement (Aspera), unmodified TCP/IP over wide area networks is not ideal for taking full advantage of the theoretical bandwidth limits, and both DICOM and HTTP (and hence MINT, which is HTTP-based) are potentially at a disadvantage in this respect. The conventional answer to this is to use multiple connections and associations and to swap out the TCP stack at both ends of a slow connection (e.g., a satellite link), and/or to use a "WAN accelerator" box at both ends (such as something from Circadence). I am not recommending or promoting any of these technologies or companies, since I have no experience with them. I will say that the idea of changing DICOM applications and tool kits to use something other than TCP/IP, or to add the ability to negotiate something proprietary (as Aspera was suggesting), is superficially far less attractive to me than putting in a box in between that takes care of the problem, transparent to the applications, if it achieves anything close to the maximum possible "goodput". Anyway, if you are interested in thinking about TCP/IP performance issues, I have found Hassan and Jain's book High Performance TCP/IP Networking to be a good introduction. Not for the first time, taking advantage of UDP or of features of various peer-to-peer network protocols was also discussed, and a quick Google search on DICOM and P2P or UDP file transfer will reveal some interesting articles and experiments.
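The arithmetic underlying the WAN problem is the bandwidth-delay product. A worked example with assumed (not measured) numbers shows why an unscaled TCP window throttles a fat, long pipe:

```python
# Worked example (my assumed numbers, not from any measurement): the
# bandwidth-delay product shows why unmodified TCP under-uses a high-bandwidth,
# high-latency link unless the window is scaled up to "fill the pipe".
bandwidth_bps = 100e6      # 100 Mb/s WAN link (assumed)
rtt_s = 0.2                # 200 ms round trip, e.g., a satellite hop (assumed)

# Bytes that must be "in flight" to keep the link busy:
bdp_bytes = bandwidth_bps * rtt_s / 8
print(bdp_bytes)           # 2500000.0 -> need a ~2.5 MB TCP window

# With the classic 64 KB window (no window scaling), goodput is capped:
default_window = 64 * 1024
max_goodput_bps = default_window * 8 / rtt_s
print(max_goodput_bps)     # 2621440.0 -> only ~2.6 Mb/s of the 100 Mb/s link
```

The same arithmetic explains the appeal of multiple simultaneous associations: N connections each capped at ~2.6 Mb/s approach the link rate together without replacing the TCP stack.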
David
PS. Note that in the foregoing I reference and provide links to numerous DICOM Supplements that introduced various features; since most of these supplements have long since been folded into the body of the DICOM standard, and may have had subsequent corrections applied, implementers need to reference the latest DICOM standard text and not the old supplement text, as appropriate. I reference the supplements only to provide a historical time line and to provide the context for interpreting their scope and use.
Long version:
A group of folks at Johns Hopkins, Harris Corp, and Vital Images have been working on the "large study" problem and have produced a largely "DICOM free" implementation (apart from modality image ingestion) called Medical Imaging Network Transport (MINT). They are now proposing that this become a new "standard" and be blessed by and incorporated in DICOM as a "replacement". Since the MINT implementation is based on HTTP transport, DICOM WG 27 Web Technology has become the home for these discussions. Not surprisingly, the "replace everything" work item proposal was rejected by the DICOM Standards Committee at our last meeting by a large majority - you can read the summary in the minutes of the committee and see the slides presented by the MINT folks.
The rejection by the committee of the proposal should not be interpreted as a rejection of the validity of the use-case, however.
It is accepted that large studies potentially pose a problem for many existing implementations, both for efficient transfer from the central store to the user's desktop for viewing or analysis, and for bulk transfer between two stores (e.g., between a PACS and a "vendor neutral archive" or a regional image repository).
So, to move forward with solving the problem, WG 6 and WG 27 met together earlier this week to try to achieve consensus on what the existing DICOM standard has to offer in this respect, and to identify any gaps that may exist that could be filled by incremental extensions to the standard.
If one puts aside the assumption that it is necessary to completely replace DICOM (and hence re-solve every problem that DICOM and PACS vendors have spent the last quarter of a century solving), and instead focuses narrowly on the key aspects of concern, two essential issues emerge:
- transporting large numbers of slices as separate single instances (files) is potentially extremely inefficient
- replicating the "meta-data" for the entire patient/study/series/acquisition in every separate single instance is also potentially extremely inefficient, and though the size of the meta-data is trivial by comparison with the bulk data, the effort to repeatedly parse it and sort out what it means as a whole on the receiving end is definitely not trivial
Yet this approach ignores the significant effort that has already been put into "normalizing" each acquisition at the modality end, specifically, the "enhanced multi-frame" family of DICOM objects defined for CT, MR and PET as well as XA/XRF, and new applications like 3D X-ray, breast tomosynthesis, ophthalmic optical coherence tomography (OCT), intra-vascular OCT, pathology whole slide imaging (WSI), etc. The following slide (which I simplified and redrew from an early one produced by either Bob Haworth or Kees Verduin for WG 16) illustrates how the enhanced multi-frame family of objects uses the shared and per-frame "functional groups" (as well as the top level DICOM dataset) to factor out the commonality compared to encoding single slices each with its own complete "header":

But it is important to distinguish between gaps in implementations, as opposed to gaps in the DICOM standard. If the standard already specifies a means to solve a problem it should be used by implementers; inventing a new "standard" like MINT to solve the problem is not going to encourage implementation (unless it solves other pressing problems as well). The bottom line here seems to be that PACS vendors in particular are not well motivated to solve in an interoperable (standard) way, any problem beyond ingestion of images; many PACS vendors may be quite happy with proprietary implementations between the archive/manager component of their PACS and their image display devices or software. But the last thing we need is multiple competing standard approaches to solving the same problem (or entire competing standards), since that only compromises interoperability.
So, to cut a long story short, the argument was put forth this week that use of the enhanced multi-frame family of objects for encoding a single "acquisition" as a single object should suffice to achieve the vast majority of the benefits of the "study normalization" suggested by MINT.
We explored some of what could be achieved by using enhanced multi-frame objects and observed that:
- though not all modalities can create enhanced multi-frame objects, it is possible to "convert" the original legacy single frame objects into such multi-frame objects
- the modality-specific enhanced multi-frame objects have many mandatory and coded attributes that are not present in the legacy single frame object, which it is challenging if not impossible to populate during such a conversion
- there are "secondary capture" enhanced multi-frame objects that do permit the optional inclusion of position, orientation, temporal and dimension information extracted from legacy single frame objects, and conversion to these might suffice for the vast majority of bulk transfer and viewing and analysis use-cases
- it may be desirable to either a) document in the standard how to perform such a conversion, or b) define new IODs and SOP Classes that are somewhere in between the "everything optional" enhanced secondary capture objects and the modality-specific objects in terms of requirements, in order to assure interoperability of archives and viewers using such an approach
- it may also be desirable to specify the requirements for full round-trip fidelity conversion from the legacy single frame objects to the converted enhanced multi-frame object and back again, to allow intermediate devices to take advantage of the multi-frame objects but still serve extracted single frame objects to legacy receiving devices, of which there will remain many in the installed base
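The factoring and round-trip fidelity described in the last two points can be sketched abstractly (plain dicts standing in for DICOM datasets; attribute names illustrative only): a converter separates what is common to all legacy slices from what varies, and the inverse regenerates a legacy-style header per frame.

```python
# Minimal sketch of legacy single-frame to multi-frame conversion,
# and the round trip back to single frames for legacy receivers.

def to_multiframe(slices):
    """Factor attributes common to every slice into a shared group;
    keep what varies in per-frame groups."""
    keys = slices[0].keys()
    shared = {k: slices[0][k] for k in keys
              if all(s[k] == slices[0][k] for s in slices)}
    per_frame = [{k: s[k] for k in keys if k not in shared}
                 for s in slices]
    return {"shared": shared, "per_frame": per_frame}

def to_single_frames(mf):
    """Round trip: regenerate a complete header for each frame."""
    return [dict(mf["shared"], **fg) for fg in mf["per_frame"]]

legacy = [
    {"PixelSpacing": (0.5, 0.5), "SliceThickness": 1.0,
     "ImagePositionPatient": (0.0, 0.0, float(z))}
    for z in range(100)
]
mf = to_multiframe(legacy)
assert to_single_frames(mf) == legacy  # full round-trip fidelity
```

A real converter would of course also have to handle UIDs, references, and the mandatory position, orientation, temporal and dimension attributes discussed above, but the essential shape of the problem is this factoring.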
So, the new action item for WG 6 (and more specifically for me, since I volunteered to write it), is to produce a work item proposal for the committee to define a new IOD and SOP Class (or perhaps modality-specific family of them), for "transitional multi-frame converted legacy" images, with the deficiency in the existing standard being the lack of a set of multi-frame images that can be fully populated with only the limited information in the legacy images but with sufficient mandatory position, orientation, temporal and dimension information to satisfy the 3D and 4D viewing and rendering and bulk transfer use cases.
In the interim, now that the MINT guys have been encouraged to look at the potential use of the secondary capture multi-frame objects, they have the opportunity to experiment with them to see if they can achieve the necessary performance in their implementation.
The following four slides illustrate graphically the principle of migration from:
- a completely proprietary optimized PACS to workstation interface (where the "viewer" is essentially "part of the PACS"), to
- a DICOM standard PACS to workstation boundary (possible with current single frame DICOM Query/Retrieve/Store interfaces, but likely not "optimized" for performance by the vendor), to
- converting to multi-frame objects (or passing through those from modalities), combined with round-trip de-conversion to support legacy workstations, to
- supporting PACS to PACS (or Image Manager/Image Archive or Vendor Neutral Archive) transfers also using legacy objects converted to multi-frame if supported by both sides:




In what ways does this proposal differ from what the MINT implementation has done to date?
- the aggregation of meta-data would occur at the "acquisition" level, and not the entire "study" level; this would seem to be sufficient to capture the vast majority of the performance benefit in that when viewing or performing 3D/4D analysis, the bulk of the pixel data and meta-data for each "set" will be within one object
- the enhanced multi-frame objects require that every frame have the same number of rows and columns and mostly the same pixel data characteristics (bit depth, etc.); this means that funky image shapes like localizers will end up in separate objects
- the opportunity exists to pre-populate the "dimension" information that is a feature of the enhanced family of objects, e.g., this dimension is space, this is time, etc., rather than have to "figure it out" retrospectively from each vendor's pattern of use of the individual descriptive attributes
Also discussed at our recent meeting was the availability of mechanisms in DICOM for gaining access to selected frames and to meta-data (the "header") without transferring everything. Those two features are defined in Supplement 119, Instance and Frame Level Retrieve SOP Classes, which was specifically written to address the consequences of putting "everything" in single large objects. For example, if a report references one or two key frames in a very large object, one needs the ability to retrieve just those frames efficiently. Supplement 119 defines a mechanism for doing so, by extracting those frames, and building a small but still valid DICOM object to retrieve and display. The existing WADO HTTP-based DICOM service also supports the retrieval of a selected frame, as do the equivalent SOAP-based Web Services transactions defined in IHE XDS-I, back ported into the DICOM standard in Supplement 148 WADO via Web Services, currently out for ballot. Though Supplement 119 does define a SOP Class for gaining access to the meta-data without transferring the bulk data, if one uses the JPEG Interactive Protocol (JPIP) to access frames or selected regions of a frame in JPEG 2000, one can also gain access to the meta-data using a specific Transfer Syntax (see Supplement 106).
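For the WADO case, frame-level retrieval amounts to nothing more than an HTTP GET with the standard frameNumber parameter (PS 3.18); a minimal sketch follows, with placeholder host and UIDs:

```python
# Sketch of a WADO (DICOM PS 3.18) request for one frame of a large
# multi-frame object; requestType, studyUID, seriesUID, objectUID and
# frameNumber are standard WADO parameters, the values are placeholders.
from urllib.parse import urlencode

def wado_frame_url(base, study_uid, series_uid, object_uid, frame):
    params = {
        "requestType": "WADO",
        "studyUID": study_uid,
        "seriesUID": series_uid,
        "objectUID": object_uid,
        "frameNumber": frame,            # retrieve just this one frame
        "contentType": "application/dicom",
    }
    return base + "?" + urlencode(params)

url = wado_frame_url("http://pacs.example.com/wado",
                     "1.2.3.1", "1.2.3.2", "1.2.3.3", 42)
```

The point is that a receiver wanting one key frame referenced by a report need never pull the other several hundred megabytes of the object.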
Unlike IHE (particularly IHE XDS and XDS-I), the MINT guys are RESTful at heart, and this is reflected in their current implementation. We tried to keep out of the REST versus SOAP religious wars during our most recent discussion, and focus on what DICOM already has to solve the use case. Yet to be resolved is the matter of whether DICOM already has sufficient pure DICOM network protocol support and HTTP-based support to satisfy the use-cases without having to introduce additional RESTful equivalents. On the one hand there is a Committee and WG 6 level desire to not have multiple gratuitously different ways to do the same thing; on the other hand there may be significant advantages to alternative mechanisms if they can take effective advantage of off-the-shelf HTTP infrastructure components. A case in point is the use of HTTP caching that requires some statelessness in the transactions to be effective. The MINT guys were advised to present evidence that such caching is sufficiently beneficial in order to justify the introduction of yet another transport mechanism, and this is something that WG 27 intends to follow up on.
Related religious wars about whether or not DICOM or HTTP should be used "within" an enterprise (i.e., over the LAN), the extent to which DICOM can be used between LANs that are nominally part of the same "enterprise" but are separated by firewalls (i.e., if the Canadians can do DICOM between two places, why can't Johns Hopkins), why XDS-I is not sufficient, etc., were mentioned but essentially deferred for another day. One key aspect mentioned, but not discussed in much detail, was the matter of user authentication and access control, and the IHE direction that uses Kerberos (EUA) within an enterprise and SAML assertions (XUA) across enterprises; this is easy for DICOM and SOAP-based WS like XDS-I, but potentially problematic for RESTful solutions (like WADO). Whether Vendor Neutral Archives (VNA), whatever they are, are a good idea was also not debated; we simply agreed that the efficient bulk data transfer from one archive to another using a standard protocol is a genuine use case. That said, the IHE radiology guys (myself included) are contemplating considering (again) the question of whether to separate the Image Manager from the Image Archive Actors in the IHE Radiology Technical Framework, so we do have the opportunity to start a whole new war in a whole new forum.
Another interesting use case that we discussed is the so-called "zero-footprint" viewer that can run in a "standard" browser that makes use of no additional technology, whether it be a medical application or generic plug-in like Adobe Flash or whatever, since not everybody has that available (especially on mobile devices like tablets). This essentially requires that the server be able to provide a source of meta-data and bulk pixel data that is amenable to efficient rendering and sufficient interaction within something as simple as JavaScript. The extent to which DICOM, WADO and IHE XDS-I based web services are lacking with respect to this zero footprint use case, and to what extent aspects of MINT offer advantages, remains to be determined. There has already been a lot of work in this area; see for example the dcm4che XERO approach discussed briefly in this article about the Benefits of Using the DCM4CHE DICOM Archive. I am not sure exactly what state the open source XERO project is in, given that Agfa uses it in a commercial implementation now, but obviously many of the principles are generally applicable. The question of whether a JSON or a GPB representation of the DICOM header is required (as opposed to MINT's dislike of WG 23's Supplement 118 XML representation of DICOM attributes) has not yet been explored; certainly JSON would seem like a more natural fit if JavaScript is the primary mechanism of implementation, but I dare say that could become the subject of yet another religious war.
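To make the JSON suggestion concrete, here is a purely hypothetical sketch of how a few DICOM attributes might be rendered as JSON for a JavaScript viewer; this is not any standard encoding, and the tag/vr/value layout is just one of several plausible designs:

```python
# Hypothetical JSON rendering of a handful of DICOM attributes,
# keyed by group/element tag; not a standard, just an illustration
# of why JSON is a natural fit for in-browser consumption.
import json

header = {
    "00080060": {"vr": "CS", "value": ["MR"]},           # Modality
    "00280010": {"vr": "US", "value": [256]},            # Rows
    "00280011": {"vr": "US", "value": [256]},            # Columns
    "00200032": {"vr": "DS", "value": [0.0, 0.0, 1.0]},  # Image Position (Patient)
}
encoded = json.dumps(header)
decoded = json.loads(encoded)  # in a browser this is just JSON.parse
```

A browser-side script gets native objects for free via JSON.parse, whereas an XML representation needs DOM traversal or an extra parsing layer, which is the crux of the "natural fit" argument.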
There are many other practical issues that the MINT folks have encountered, such as the lack of uniqueness of UIDs, or inconsistency in some of the patient or study information between individual slices, but many of these can be characterized as "implementation" problems faced by any PACS or archive when populating their databases, and not something that the DICOM standard (or any standard) can really resolve. I.e., if an implementation fails to comply with the standard it is just plain "bad" (or the model of the real-world in the standard does not actually match the real-world). At some point (usually ingestion from the modality, and/or administrative study merges and other "corrections"), any implementation has to deal with this, and will have to regardless of whether the originally proposed MINT approach or the conversion to enhanced multi-frame DICOM approach is used. With respect to the need for change management, the MINT proponents were made aware of the Image Object Change Management (IOCM) profile defined by IHE, which addresses the use-cases and implementation of change in a loosely-coupled multi-archive environment, as well as the IHE Multiple Image Manager/Archive (MIMA) profile, which addresses archives with different patient identity domains and what to do with DICOM identifying attributes when transferring across domains. With respect to modalities or other implementations that create non-unique UIDs, the need to a) detect and correct for this on ingestion, and b) report the defects to the offending vendor, was emphasized.
Finally, the foregoing should not be taken to mean that switching to the use of multi-frame objects is a panacea, nor indeed a prerequisite for the efficient transport of large multi-slice studies. As we were careful to emphasize during the initial roll out of the enhanced multi-frame DICOM CT and MR objects, the primary goal was improved interoperability for advanced applications, not transfer performance improvement, since it was well known at the time that optimized applications transporting single frame objects can achieve very good performance (e.g., through the negotiation and use of DICOM asynchronous operations, or multiple simultaneous associations if asynchronous operations cannot be negotiated, both of which eliminate the impact of delayed acknowledgment of individual C-STORE operations, whether it be due to network latency or application level delays such as waiting for successful database insertion before acknowledgment). Rather, poor observed performance in the real world is often a consequence of applications simply not being optimized or well designed in this respect, and many vendors' engineers are far too quick to switch to a proprietary optimized protocol and ignore opportunities for optimizing standards-based solutions. As a case in point, this old white paper from Oracle on A Performance Evaluation of Storage and Retrieval of DICOM Image Content shows quite impressive numbers for single frame DICOM images using JDBC (not DICOM network) based server to client retrieval over five 1 Gigabit Ethernet connections between server and client (over 400 MB/s, 852 images/s, 1497 Cardiac CT studies per hour). MINT performance figures over a single connection as published so far are also impressive (Harris's results and Vital's results) though the hardware is different.
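The arithmetic behind the C-STORE acknowledgment point is worth making explicit. The numbers below are assumed for illustration, but the model is general: with only one operation outstanding, the sender idles for the full acknowledgment delay after every image, whereas with enough operations in flight (asynchronous operations, or multiple associations) the delay overlaps the transfers and the wire becomes the limit.

```python
# Back-of-the-envelope model (assumed numbers) of synchronous versus
# pipelined C-STORE throughput in the presence of delayed acknowledgments.

def images_per_second(transfer_s, ack_s, outstanding):
    """Steady-state rate with 'outstanding' unacknowledged operations
    in flight; enough pipelining makes the link wire-limited."""
    if outstanding * transfer_s >= transfer_s + ack_s:
        return 1.0 / transfer_s                 # wire-limited
    return outstanding / (transfer_s + ack_s)   # ack-limited

transfer = 0.005  # 512 KB image at roughly 100 MB/s on the wire
ack = 0.050       # ack delayed by latency plus database insertion

sync_rate = images_per_second(transfer, ack, 1)    # roughly 18 images/s
async_rate = images_per_second(transfer, ack, 16)  # wire-limited, ~200 images/s
```

The order-of-magnitude gap between the two rates is why a synchronous implementation looks "slow" even though neither the protocol nor the network is at fault.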
The bottom line is probably that the protocol used is less important than the architecture and implementation details on both ends, and in comparing performance claims for specific commercial implementations, one needs to be sure one is comparing apples with apples rather than oranges. The lack of a published industry standard benchmark for these use-cases is probably a significant gap that we should try to close.
The following slide is one that I produced for the early enhanced multi-frame demonstration and educational lectures, about where the DICOM protocol critical delay lies:

The C-STORE acknowledgment discussion is separable but related to the discussion of TCP/IP performance in the presence of significant latency (or also significant packet loss). As was emphasized at the recent meeting by the purveyor of a potential proprietary TCP/IP replacement (Aspera), unmodified TCP/IP over wide area networks is not ideal for taking full advantage of the theoretical bandwidth limits, and both DICOM and HTTP (and hence MINT, which is HTTP-based) are potentially at a disadvantage in this respect. The conventional answer to this is to use multiple connections and associations and to swap out the TCP stack at both ends of a slow connection (e.g., a satellite link), and/or to use a "WAN accelerator" box at both ends (such as something from Circadence). I am not recommending or promoting any of these technologies or companies, since I have no experience with them. I will say that the idea of changing DICOM applications and tool kits to use something other than TCP/IP or to add the ability to negotiate something proprietary (as Aspera was suggesting), is superficially way less attractive to me than putting in a box in between that takes care of the problem, transparent to the applications, if it achieves anything close to the maximum possible "goodput". Anyway, if you are interested in thinking about TCP/IP performance issues, I have found Hassan and Jain's book High Performance TCP/IP Networking to be a good introduction. In this discussion, not for the first time, trying to take advantage of UDP or features of various peer-to-peer network protocols was also discussed, and a quick Google search on DICOM and P2P or UDP file transfer will reveal some interesting articles and experiments.
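The underlying constraint here is the bandwidth-delay product: a single TCP connection cannot exceed its window size divided by the round-trip time, so on a long fat pipe the window, not the link, sets the ceiling. A quick sketch with assumed satellite-link numbers:

```python
# Bandwidth-delay product arithmetic behind the WAN discussion;
# the link speed and RTT below are illustrative assumptions.

def bdp_bytes(bandwidth_bps, rtt_s):
    """Bytes that must be in flight to keep the pipe full."""
    return bandwidth_bps / 8 * rtt_s

def max_goodput_bps(window_bytes, rtt_s):
    """Throughput ceiling of one TCP connection with a fixed window."""
    return window_bytes * 8 / rtt_s

# A 100 Mb/s link with a 600 ms satellite round trip:
bdp = bdp_bytes(100e6, 0.6)            # 7.5 MB must be in flight
ceiling = max_goodput_bps(65535, 0.6)  # ~874 kb/s with a classic 64 KB window
```

This is why multiple parallel connections, window scaling, or a WAN accelerator at each end can each recover most of the "goodput" without touching the application protocol at all.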
David
PS. Note that in the foregoing I reference and provide links to numerous DICOM Supplements that introduced various features; since most of these supplements have long since been folded into the body of the DICOM standard, and may have had subsequent corrections applied, implementers need to reference the latest DICOM standard text and not the old supplement text, as appropriate. I reference the supplements only to provide a historical time line and to provide the context for interpreting their scope and use.
Sunday, December 12, 2010
Imaging and the PCAST (President's Council of Advisors on Science and Technology) Report
Summary: Is the national health care IT standards agenda going to ignore the lessons of the past and the progress made so far? If so, what impact will it have on imaging?
Long Version:
This morning I was sent a link to a report on Realizing the Full Potential of Health Information Technology to Improve Healthcare for Americans: The Path Forward released on December 8th, 2010 from the President's Council of Advisors on Science and Technology (PCAST). There is a video available of the press conference at which it was released, as well as some older video from July 16th, 2010 of a panel discussion about some of its content. The noble goals that the speakers in the videos espouse seem to be somewhat at odds with the actual content.
Already various healthcare IT bloggers involved in standards development and deployment have commented in some detail about the technical content of the report, including Keith Boone, John Halamka, and Joyce Sensmeier.
These bloggers all seem slightly dismayed that the report seems to dismiss, or at least not give adequate recognition to, many existing efforts that are well underway. In my own review I see that HL7 is barely acknowledged, CDA is attributed to the ONC rather than HL7, IHE gets a passing mention only, and there is no mention at all of XDS, XUA, XCA, BPPC or any of the many other IHE efforts to move information back and forth across communities.
Instead, amongst the other content of the report that seems pretty reasonable, there is the unsubstantiated assertion made that "document" or "record" oriented interchange is insufficient and that "tagged data elements" with accompanying meta data are necessary, and a new standard for that needs to be written.
If this were a broad non-technical review, I would have no issue with any of that, but it isn't. It is mostly a broad review, but dives into extreme detail about certain technical issues, including security and privacy and access control solutions, implying that somehow the narrowly focused technical solutions proposed are the only solutions applicable to the broad aims of the overall report. This I find distinctly surprising.
But back to imaging, and what prompted me to write this blog entry, given that the HIT professionals and their organizations are more than capable of speaking for themselves. A surprising aspect of the report is the mammogram use case, starting on page 41.
In this use case, a patient has had mammograms performed at multiple locations, and her current physician needs to retrieve them, given, and I am selectively quoting here, "enough identifying information about the patient to allow the data to be located", "privacy protection information—who may access the mammograms, either identified or deidentified, and for what purposes", and "the provenance of the data—the date, time, type of equipment used, personnel (physician, nurse, or technician), and so forth".
OK, sounds like your standard cross-community access to DICOM images use case, something that IHE RadTech is specifically addressing as a profile this year actually (XCA-I), which involves a relatively simple extension to the existing cross community access (XCA) profile used in the XDS world. Now, I don't mean to pretend that cross-community access control (e.g., via XUA) is easy, nor that reconciliation of patient identity across communities (PIX) is easy either. Merely to point out that the problems in this area are lack of deployment and shared infrastructure or the incentives to build such (as the PCAST report rightly emphasizes elsewhere), and NOT lack of standards. We already have standard mechanisms to provide images with the level of completeness and quality required for the specific use case, ranging from pre-windowed downsampled lossy compressed images for undemanding review applications, through to the full set of diagnostic quality images required for more sophisticated uses.
Since we already have standards to do exactly this, it is perhaps not the ideal use case for PCAST to pick. The idea that a mammogram image (or set of four standard view images, all 40 to 120 MB of them) can somehow be treated as a single "data element", like say, a single blood glucose value, flies in the face of decades of experience of an entire industry.
That said, if the example had been a "tagged data element" such as the BI-RADS category from the body of a series of mammography reports from different locations, the example would have been more plausible. Indeed, the notion of being able to query and retrieve that part of the content of what would traditionally be handled as entire documents is very attractive on the face of it, and undoubtedly a desirable goal, though one that does not require rejection of the traditional structured document oriented paradigm to achieve. Nor does the report address the key barrier to adoption of structured as opposed to unstructured content to facilitate data element extraction and query, which seems to be a combination of a lack of tools and incentives to author structured content in the first place.
Regardless, a new "tagged data element" approach is not a pre-requisite for progress, and we do not need to wait for new standards to be promulgated for this before realizing most, if not all of the benefits of connectivity and interoperability.
Nor frankly, do we need to be able to query across enterprises at a particularly granular level, say for which operator performed a study, or what kVp they used. The metadata envisioned by IHE is very narrowly focused for this reason, and the notion of exposing "all available meta data", whilst theoretically possible, has enormous performance implications. This is certainly not the biggest fish to fry in the short term.
The entire continuum from images through documents to "atomic" data elements shares common barriers ... lack of any connection at all between enterprises and communities, lack of deployment of existing standard mechanisms for patient identity reconciliation, and lack of deployment of existing standard mechanisms for provisioning and controlling access.
Even on the access control front the report seems to ignore existing standards and infrastructure. After a basic tutorial on security principles, tied to the "tagged data element" concept, the mammography use case is revisited from this perspective, on page 51.
To paraphrase, an access control service authenticates the clinician, assures the patient has granted them access, and provides the locations of the encrypted mammography images, and supplies the necessary decryption keys. Then, and I quote, "in this and all scenarios, data and key are brought together only in the clinician’s computer, and only for the purposes of immediate display; decrypted data are not replicated or permanently stored locally".
Really? It is hard to imagine how to enforce that. And how practical is it, given that you generally need a pretty clever viewer if not an entire mammography workstation or plugin to your PACS viewer to effectively utilize a mammogram? Given that the standard of care currently seems to be to import outside studies into the PACS for use as priors and for distribution throughout the enterprise to allow them to be used with the (expensive and advanced) tools that expert physicians are used to using, the report's proposal seems unrealistic.
Though the mammography imaging use case almost reduces to the absurd the notion that this form of restricted access without local importation is workable, even in the case of a local doctor's EHR, I would imagine that they would want to record even a simple data element, such as the blood glucose, imported from outside systems, rather than be restricted to accessing them on demand only. Even to manifest the prior values in a graphical form or as a flow sheet, which would seem highly desirable for enhanced decision making, would be very challenging if "outside" data points could not be persisted locally, permanently. Indeed I dare say there might be a record keeping requirement to maintain such information.
Contrast the PCAST report's proposal with what is already well standardized; as described before with XDS-I used in conjunction with XCA and XUA and PIX mechanisms, the XDS-I Imaging Document Source would provide the DICOM images to the clinician once authorized, encrypted in transit but not otherwise, via the protocol and in the format requested by the recipient, and importable into a "real" mammography display system in the normal manner. All without having to have every DICOM-compatible viewer or display system be updated to support some mythical new "universal language" based on "tagged data elements" and without being prevented from importing the images for transient or persistent use as is necessary to provide quality care. See John Moehrke's blog for a primer on what security features IHE has to offer.
Anyway, without getting too strident about it, at best I find some of the technical content of the PCAST report a disappointment. At worst, I see it as a distraction from the most important items of the national agenda, well espoused in other parts of the report, which include finding a way to provide the proper incentives to get connectivity adopted in a manner that improves the quality and efficiency of care, preferably in a manner that gives patients granular control over access to their information.
Also surprising is the choice of the mammography imaging use case as the poster child for PCAST, given that the ONC has essentially ignored imaging in its initial stages of "meaningful use" probably quite reasonably in terms of return on investment, but much to the chagrin of the various professional societies, judging by the ACR/ABR/SIIM/RSNA joint comments and MITA comments. When later stages of "meaningful use" get around to imaging, they will probably emphasize reporting, decision support and avoidance of unnecessary imaging (probably much more important goals), by which time, even in the absence of specific incentives, distributed cross-community image exchange via the "cloud" will probably be commonplace. One could certainly leave the RSNA show a week or so ago with the impression that this is a solved problem, and high time too.
David
Since we already have standards to do exactly this, it is perhaps not the ideal use case for PCAST to pick. The idea that a mammogram image (or set of four standard view images, all 40 to 120 MB of them) can somehow be treated as a single "data element", like say, a single blood glucose value, flies in the face of decades of experience of an entire industry.
That said, if the example had been a "tagged data element" such as the BI-RADS category from the body of a series of mammography reports from different locations, the example would have been more plausible. Indeed, the notion of being able to query and retrieve that part of the content of what would traditionally be handled as entire documents is very attractive on the face of it, and undoubtedly a desirable goal, though one that does not require rejection of the traditional structured document oriented paradigm to achieve. Nor does the report address the key barrier to adoption of structured as opposed to unstructured content to facilitate data element extraction and query, which seems to be a combination of a lack of tools and incentives to author structured content in the first place.
Regardless, a new "tagged data element" approach is not a pre-requisite for progress, and we do not need to wait for new standards to be promulgated for this before realizing most, if not all of the benefits of connectivity and interoperability.
Nor, frankly, do we need to be able to query across enterprises at a particularly granular level, say for which operator performed a study, or what kVp they used. The metadata envisioned by IHE is very narrowly focused for this reason, and the notion of exposing "all available meta data", whilst theoretically possible, has enormous performance implications. This is certainly not the biggest fish to fry in the short term.
The entire continuum from images through documents to "atomic" data elements share common barriers ... lack of any connection at all between enterprises and communities, lack of deployment of existing standard mechanisms for patient identity reconciliation, and lack of deployment of existing standard mechanisms for provisioning and controlling access.
Even on the access control front the report seems to ignore existing standards and infrastructure. After a basic tutorial on security principles, tied to the "tagged data element" concept, the mammography use case is revisited from this perspective, on page 51.
To paraphrase, an access control service authenticates the clinician, assures the patient has granted them access, and provides the locations of the encrypted mammography images, and supplies the necessary decryption keys. Then, and I quote, "in this and all scenarios, data and key are brought together only in the clinician’s computer, and only for the purposes of immediate display; decrypted data are not replicated or permanently stored locally".
Really? It is hard to imagine how to enforce that. And how practical is it, given that you generally need a pretty clever viewer, if not an entire mammography workstation or a plugin to your PACS viewer, to effectively utilize a mammogram? Given that the standard of care currently seems to be to import outside studies into the PACS for use as priors, and for distribution throughout the enterprise so that they can be used with the (expensive and advanced) tools that expert physicians are used to, the report's proposal seems unrealistic.
Though the mammography imaging use case almost reduces to absurdity the notion that this form of restricted access without local importation is workable, even in the case of a local doctor's EHR, I would imagine that they would want to record even a simple data element, such as a blood glucose, imported from outside systems, rather than be restricted to accessing it on demand only. Even manifesting the prior values in graphical form or as a flow sheet, which would seem highly desirable for enhanced decision making, would be very challenging if "outside" data points could not be persisted locally and permanently. Indeed, I dare say there might be a record keeping requirement to maintain such information.
Contrast the PCAST report's proposal with what is already well standardized; as described before, with XDS-I used in conjunction with XCA and XUA and PIX mechanisms, the XDS-I Imaging Document Source would provide the DICOM images to the clinician once authorized, encrypted in transit but not otherwise, via the protocol and in the format requested by the recipient, and importable into a "real" mammography display system in the normal manner. All without having to update every DICOM-compatible viewer or display system to support some mythical new "universal language" based on "tagged data elements", and without being prevented from importing the images for transient or persistent use, as is necessary to provide quality care. See John Moehrke's blog for a primer on the security features IHE has to offer.
Anyway, without getting too strident about it, at best I find some of the technical content of the PCAST report a disappointment. At worst, I see it as a distraction from the most important items of the national agenda, well espoused in other parts of the report, which include finding a way to provide the proper incentives to get connectivity adopted in a manner that improves the quality and efficiency of care, preferably in a manner that gives patients granular control over access to their information.
Also surprising is the choice of the mammography imaging use case as the poster child for PCAST, given that the ONC has essentially ignored imaging in its initial stages of "meaningful use" probably quite reasonably in terms of return on investment, but much to the chagrin of the various professional societies, judging by the ACR/ABR/SIIM/RSNA joint comments and MITA comments. When later stages of "meaningful use" get around to imaging, they will probably emphasize reporting, decision support and avoidance of unnecessary imaging (probably much more important goals), by which time, even in the absence of specific incentives, distributed cross-community image exchange via the "cloud" will probably be commonplace. One could certainly leave the RSNA show a week or so ago with the impression that this is a solved problem, and high time too.
David
Wednesday, November 24, 2010
RSNA 2010 RFID Update
By the way, RSNA is still using RFID tags, as described in a previous blog entry ... I checked my badge that arrived in the mail recently and indeed it contains such a tag.
Since last year I microwaved it for too long and it caught fire, and I wandered around the whole week with a black hole on my chest, this year I will use the knife or the chisel.
Just to confirm that indeed the vendors do have access to the tracking information, here is the link to the RFID Exhibitor Package Order Form, a press release from the provider, and the usual spiel in the RSNA materials about what the RFID tag will be used for and how to opt out.
David
Dose Matters at RSNA 2010
Summary: Look for vendors offering the NEMA X-25 Dose Check feature and DICOM Radiation Dose SR (RDSR) (IHE REM profile) output from their CT modalities, and products able to store and process RDSRs for dose monitoring, alerting and registry submission. Bring along a list of your installed base of CT, PACS and RIS model and version numbers, and ask your vendors when Dose Check and RDSR capability will be supported. Don't forget to ask your PACS, CD burning and importing, cloud/Internet storage and distribution, Modality Worklist (MWL), reporting system, ordering and decision support vendors about this too. Visit the RSNA dose demo at Booth 2852, Exhibit Hall A, South Building.
Long Version:
In my last blog entry, I discussed the need for tools for monitoring and controlling radiation dose from CT, and with RSNA's Annual Scientific Pilgrimage to Chicago coming up next week, I thought I would consider the progress in the last six months, and what attendees might want to focus on. Undoubtedly the CT vendors will be heavily focused on what new dose-reduction technology they can deliver in new products, but do not lose sight of the importance of evaluating the monitoring and management technology as well.
One notable event was the release in November of a public letter from the FDA to MITA (NEMA), the vendors' trade organization, summarizing their investigation of the brain perfusion incidents.
In October, NEMA released the X-25 Computed Tomography Dose Check standard, which you can download from here. This feature, which the vendors had already committed at the FDA Public Meeting to develop and implement, is intended to "notify and alert the operating personnel ... that prepare and set the scan parameters — prior to starting a scan — whether the estimated dose index is above the value defined and set by the ... institution ... to warrant notification to the operator". Clearly this requires two things, 1) the implementation of the feature in the scanner, and 2) suitable values to be configured by the institution. No doubt the vendors will promulgate default levels, and organizations like AAPM or ACR might provide them, or the local medical physicists may decide for themselves. Eventually the X-25 feature will get folded into the CT manufacturer's safety bible, IEC 60601-2-44.
The RSNA meeting will be an opportunity for you to ask the CT sales people and application specialists about the Dose Check feature, and particularly how and when they plan to retrofit the scanners you already have installed to support it and how much it will cost (if anything). A commitment to the FDA is one thing, but there is nothing like evidence of demand from the customers to motivate product managers to deliver.
X-25 distinguishes between "notifications" for "protocol elements" prior to scanning, and alerts for the "examination" that accumulate what has been done so far. There is also an alert prior to saving (not just attempting to perform) a protocol that exceeds limits, which specifically helps to address a concern that arose in the Cedars-Sinai perfusion incident. Proceeding despite a notification or alert requires the recording of who, what, when and why in an audit trail. DICOM is working on additional information to be included in the Radiation Dose Structured Report (RDSR) to record the X-25 parameters and audit trail information (see CP 1047). You might also want to ask the modality vendors at RSNA when they plan to implement CP 1047, which should be made final text at the Jan 2011 WG 6 meeting. If you are looking for dose monitoring systems that can process RDSRs, you might also want to ask them about when they plan to be able to provide you with a human-readable report of CP 1047 X-25 events.
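The X-25 distinction between per-protocol-element notifications and accumulated examination alerts amounts to a pair of threshold comparisons against institution-configured values. Here is a minimal sketch in Python; the function and parameter names are hypothetical (the standard defines behavior, not an API), and the thresholds would come from local configuration:

```python
from dataclasses import dataclass

@dataclass
class DoseCheckResult:
    notify: bool  # protocol-element estimate exceeds the notification value
    alert: bool   # accumulated examination estimate exceeds the alert value

def dose_check(estimated_ctdivol: float, accumulated_ctdivol: float,
               notification_value: float, alert_value: float) -> DoseCheckResult:
    """Compare estimated dose indices against institution-configured limits,
    in the spirit of the X-25 notification/alert distinction. Values in mGy."""
    return DoseCheckResult(
        notify=estimated_ctdivol > notification_value,
        alert=(accumulated_ctdivol + estimated_ctdivol) > alert_value,
    )
```

In an actual scanner implementation, proceeding despite either condition would additionally require capturing the who, what, when and why for the audit trail.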
On the subject of RDSR, one vendor, GE, has already provided a list of which models and versions of scanner support RDSR, and which earlier models produce dose screen secondary capture images; you can find the list here. Hopefully other vendors, perhaps at RSNA, will provide similar lists, and I will tabulate them on the Radiation Dose Informatics web site on the Software and Devices page. In lieu of information supplied by the vendors, I will also tabulate information based on the scanners and models from which I encounter RDSR objects, so feel free to submit samples to me if you encounter them.
When shopping for new CT scanners or upgrades next week, asking the vendors for RDSR support is something obvious that you should do, but even if you are not buying new equipment, it is reasonable to ask about upgrading your installed base. If I were you, I would bring along a complete list of all the equipment that I was responsible for, including models and versions, and ask the sales people very specifically which of those on my list can be upgraded, and when, and which will never be upgraded. Not only will this serve to alert the product managers to your concern about this issue, but the answers will help you plan your own dose monitoring strategy. If you don't get the answer that you want to hear (all your scanners will soon support RDSR), then you are going to need to develop a strategy that perhaps involves a third-party solution that can either OCR the dose screens, if the scanners produce them, or provide a means for operator data entry and transcription of the information displayed on the console.
As for "dose monitoring systems", or whatever name the industry converges on for systems that monitor and report CT scanner dose output, the upcoming RSNA is an opportunity to look around for vendors of those too. It remains to be seen whether this feature becomes routinely embedded in the PACS or the RIS, or whether, for the time being or indefinitely, it will be the province of dedicated third-party systems (I will maintain a list of the latter on the Software and Devices page, which is, so far, depressingly short).
In the IHE REM profile, the modality can send RDSRs either to the Image Manager/Image Archive (IM/IA) (usually the PACS) or directly to a Dose Information Reporter (DIR), which might be the RIS or a third-party system, or such a system may query the PACS. The REM design assumes that since RDSRs are DICOM objects, the PACS is the logical actor to persist and distribute them.
However, RDSR output from the modality is not going to be of much immediate use to you if a) your PACS won't accept and store RDSRs, and b) you don't have something that will display their content and, more importantly, produce management reports of dose output, if not alerts and notifications when limits are exceeded. At the very least, you can start storing these RDSRs in the PACS now, so that when you do settle on a dose management solution you will be able to use your historical data, both as a benchmark for your local historical practice and for individual patient dose management decisions (recognizing the limitations of using dose output as a surrogate for effective dose to the patient).
Accordingly, not only do you need to be asking your CT vendor for RDSR output, but you need to be asking your PACS vendor if they will accept, store and faithfully regurgitate RDSRs, even if they do not yet have plans to render and collate the contents.
This also includes recording RDSRs on CDs, since referring physicians want to know about dose too, as does the next facility in the chain that is going to import these CDs. So your third-party CD burning and import and viewer vendors are also candidates for interrogation next week about RDSR support. You also need to ask any Internet distribution and storage vendors offering "CD substitutes" in the "cloud" about this too.
Your RIS vendor doesn't escape either. Though they may not be planning on offering RDSR management, they will still be providing Modality Worklists (MWL) to the CT scanners. It turns out that it is really important to convey the age, sex, height and weight information, as well as anatomic and procedure codes, if downstream one is to make size-appropriate use of the dose output information (which after all is based on standard sized phantoms and needs adjustment for kids and for particularly small or large people). The CT scanner vendors are well aware of these issues, and hopefully can reliably copy the information from the worklist into the RDSR (another question to ask your scanner vendor if you want to get into that much detail with them).
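To see why conveying patient size matters downstream, consider the size-specific dose estimate (SSDE) approach of AAPM Report 204, in which the phantom-referenced CTDIvol is scaled by a conversion factor derived from the patient's effective diameter. A rough sketch follows; the coefficients are those published in that report for the 32 cm body phantom, but verify them against the report itself before relying on any of this:

```python
import math

# Conversion-factor coefficients from AAPM Report 204 for the
# 32 cm body phantom (f = A * exp(-B * effective_diameter_cm)).
A_32, B_32 = 3.704369, 0.03671937

def effective_diameter_cm(ap_cm: float, lat_cm: float) -> float:
    """Geometric mean of the AP and lateral patient dimensions."""
    return math.sqrt(ap_cm * lat_cm)

def ssde_mgy(ctdivol_32_mgy: float, ap_cm: float, lat_cm: float) -> float:
    """Size-specific dose estimate: scale the 32 cm phantom CTDIvol
    by a size-dependent conversion factor."""
    d = effective_diameter_cm(ap_cm, lat_cm)
    f = A_32 * math.exp(-B_32 * d)
    return f * ctdivol_32_mgy
```

The point of the example is the direction of the correction: for a small (e.g., pediatric) patient the conversion factor exceeds 1 and the phantom-based CTDIvol understates the dose, while for a large patient it overstates it; hence the need for the demographics and measurements that only the worklist can supply.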
Finally, when generating the radiology report, it is good practice if not required by regulation (such as in Germany or now California with SB 1237), to include information about the radiation dose, and a creative reporting system vendor could automatically copy information directly from the RDSR for the study into the report template being populated by the radiologist. Now is the time to get the reporting system vendors thinking about this, particularly since some of them already offer features for doing the same sort of thing from other types of structured report "input", notably for ultrasound and echocardiography. Even the ordering and decision support system vendors should not be immune to your questions, since they too can take advantage of patient-specific historical information acquired from RDSRs.
In conclusion, next week you have the opportunity to put penetrating questions about radiation dose to everyone you meet with a product that is involved in any part of the imaging chain, from ordering all the way through to reporting.
If you want to get a more detailed briefing, perhaps prior to visiting the vendors' booths, and see some of the components of the IHE REM profile in action, feel free to come and visit the RSNA Image Sharing and Radiation Dose Monitoring demonstration. A group of CT modality, PACS and dose reporting vendors, together with some academic groups, the ACR and myself will be participating. Strangely enough this will actually be held in the Technical Exhibits area, rather than in the Lakeside Learning Center, specifically Booth 2852, Exhibit Hall A, South Building. Email me if you can't find the demo or have any questions about it.
David
PS. By the way, while I am thinking of it, if you use lossy compression in your archive, make sure it is turned off for series that contain dose screens (or indeed any secondary captures containing text and graphics, like perfusion curves), since not only will it make them look like crap, it will also reduce the performance of, if not entirely cripple, any OCR that you might later apply.
Monday, May 31, 2010
Dose Matters
Summary: Reducing the radiation exposure from diagnostic imaging is an increasing priority; standards exist for encoding dose information but are not yet widely adopted, though they soon will be, given regulatory pressure and industry commitments; few tools, commercial or open source, yet exist for monitoring and reporting radiation dose.
Long Version:
You would have to have been living on a desert island or under a rock to not be aware that there is a heightened sensitivity amongst the general populace and the regulatory authorities to the matter of radiation dose exposure from diagnostic imaging and the risk of cancer. Whether it be well publicized disasters like the Jacoby Roth or Cedars-Sinai incidents, or general concern related to dose from procedures like virtual colonoscopy, or articles evaluating the contribution of diagnostic imaging as a source of exposure, the need to deal with the matter is inescapable. This is true regardless of whether you are a "believer" in the linear no-threshold model, which says that no amount of radiation is safe, or not. The FDA is going to require that efforts be made to reduce the dose delivered by both CT and fluoroscopy, as discussed in their initiative white paper and reviewed at the recent public meeting, though they have been working on this for some time. Vendors are already delivering equipment incorporating dose saving technology. Attention is being drawn to the radiation dose caused by the ordering of repeat or low-yield procedures, as well as optimal strategies for pediatric imaging (image gently).
Yet so much remains in the hands of the user in terms of ordering as well as performance of the examination. If you cannot measure it, you cannot improve it (Lord Kelvin), so the question arises as to how one can track the amount of radiation being delivered, either to the population, or at a site, or to an individual, and hence benchmark one's own performance then make improvements to the process. Surprisingly, though devices have long been required to provide visual feedback to the operator at the console, it has proven remarkably difficult to get this information out of the scanners and into some sort of database or registry that can be searched or monitored.
DICOM has a number of ways that dose information can be encoded, but for the last few years has been focusing on the Radiation Dose Structured Report (SR), with the goal of having the modalities produce this directly. Many people expect that dose information would be in the image headers, but the image is the wrong place to encode this; images may be transmitted before the study is complete and hence not contain the cumulative information, and more than one image may be reconstructed from the same irradiation event, creating the risk that the dose may be counted more than once. Further, not all originally acquired images are necessarily retained (e.g., thin slices from CT), and a large volume of images is a poor means of communicating what is essentially a small amount of information. Once upon a time, it was thought that the modality performed procedure step (MPPS) might be a suitable mechanism to communicate this information, but it was soon realized that there is no easy way to persist what is essentially a transient message, not to mention that MPPS is relatively poorly adopted.
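By way of illustration only, the per-irradiation-event accumulation that makes a report-level SR a better home for dose than scattered image headers can be sketched as a toy structure; the field names below are hypothetical and are not the actual DICOM template, but the shape is the point: each event appears exactly once, so nothing is double counted.

```python
# Toy sketch of a dose-report-like structure (names hypothetical, not the
# actual DICOM Radiation Dose SR template): one entry per irradiation event,
# with the accumulated total computed once at the report level.
def build_dose_report(events):
    """Accumulate per-irradiation-event dose into a report-level summary."""
    total_dlp = sum(e["dlp_mGycm"] for e in events)
    return {
        "concept": "X-Ray Radiation Dose Report",
        "accumulated": {"total_dlp_mGycm": total_dlp},
        "events": [
            {"concept": "CT Acquisition",
             "ctdi_vol_mGy": e["ctdi_vol_mGy"],
             "dlp_mGycm": e["dlp_mGycm"]}
            for e in events
        ],
    }

# Two hypothetical CT acquisitions from the same study.
report = build_dose_report([
    {"ctdi_vol_mGy": 12.3, "dlp_mGycm": 434.0},
    {"ctdi_vol_mGy": 4.1, "dlp_mGycm": 102.5},
])
```

Contrast this with per-image encoding, where the same event's dose would be repeated in every reconstructed slice and any consumer would have to deduplicate.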
To meet the users' immediate needs, some vendors have gone so far as to provide images that are saved screens containing the text of the delivered dose information. Both GE and Philips do this, and there is a large installed base of such scanners as well as archives full of such information. Though Philips had the foresight to also encode the same information in the header attributes of these images (albeit in a non-standard way), both as plain text and as individual elements, unfortunately GE did not, so many folks who want to perform a retrospective review of their dose information need to manually examine these images, or develop some optical character recognition (OCR) software.
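For those stuck with the screen-capture legacy, even after OCR the recovered text still has to be parsed back into structured values. A minimal sketch of that post-processing step, assuming a tabular dose-screen layout (the sample text below is invented for illustration and is not GE's actual screen format):

```python
import re

# Hypothetical OCR output from a CT dose screen; the column layout here is
# illustrative only, not any vendor's actual format.
SAMPLE_OCR_TEXT = """
Series Type     Scan Range      CTDIvol   DLP
1      Helical  S10.0-I350.0    12.35     434.02
2      Axial    S5.0-S10.0      4.10      10.25
"""

# One row per series: series number, two non-numeric columns, then the two
# dose values; the header line fails the leading-digit match and is skipped.
ROW = re.compile(r"^\s*(\d+)\s+\S+\s+\S+\s+([\d.]+)\s+([\d.]+)\s*$",
                 re.MULTILINE)

def parse_dose_rows(text):
    """Extract per-series CTDIvol and DLP values from OCR'd dose-screen text."""
    return [{"series": int(m.group(1)),
             "ctdi_vol": float(m.group(2)),
             "dlp": float(m.group(3))}
            for m in ROW.finditer(text)]
```

In practice OCR errors (confusing 1/l, 0/O, dropped decimal points) mean the parser also needs sanity checks and human review of anything implausible, which a regex alone does not give you.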
For the time being, there is a relative paucity of tools available both to handle information from legacy devices and to use more standard approaches, including that espoused in the IHE Radiation Exposure Monitoring (REM) profile, which is based on modalities producing DICOM Radiation Dose SR objects, and provides specific actors for consuming and reporting information, including transmission to registries, such as the ACR's Dose Index Registry. The good news is that there has been significant activity at recent IHE Connectathons with respect to implementing REM; you can review these yourself at the connectathon results page, where you can see which vendors have specific offerings in this field. MITA, the modality vendors' industry trade group, has made a strong commitment not only to dose reduction in general, and the CT Radiation Dose Check feature, but also to retrofitting at least the current platform in the installed base to produce DICOM SR objects.
At a recent teleconference of the newly convened Quality and Safety Subcommittee of the RSNA's Radiology Informatics Committee (RIC), it was apparent that several academic groups have been working in this field, and the need to make available open source tools was highlighted, if for no other reason than to serve until the industry catches up and provides a robust infrastructure.
To this end I thought I would externalize some of my own primitive efforts, as extensions to my Pixelmed Java DICOM toolkit. Specifically, I put together a little application called DoseUtility, which brings together a number of components that I have been working on, including the construction and validation of Radiation Dose SR objects, as well as the ability to perform OCR on GE dose screen saved images. I have already used the validator to good effect during the last few connectathons, and the experience constructing and testing it has led to a number of proposed changes to the standard and the IHE profile.
Eventually I hope to extend this tool and its components to provide a complete infrastructure for dose management, at least from the DICOM and IHE side of the problem. Currently it focuses on CT, but it will be extended to fluoroscopy and projection X-ray soon, as well as injected dose from NM and PET, as those standards evolve.
I dare say that the various academic groups who have been working on the same types of problems may well have much more sophisticated tools, likely more easily integrated with their own PACS and RIS, perhaps taking advantage of proprietary APIs. As yet, I am unfamiliar with the specifics of most of them, but I will make a catalog of whatever becomes available.
David
Wednesday, April 28, 2010
This blog has moved
This blog is now located at http://dclunie.blogspot.com/.
You will be automatically redirected in 30 seconds, or you may click here.
For feed subscribers, please update your feed subscriptions to
http://dclunie.blogspot.com/feeds/posts/default.
Monday, April 13, 2009
To push or to pull: that is the question
"Whether 'tis nobler in the mind to suffer
The slings and arrows of outrageous delay and inconvenience,
Or to take arms against a lack of bandwidth,
And by anticipating avoid it?"
Summary: Sharing of images across the network is a potentially attractive alternative to CDs, but image sets are large, bandwidth is limited, lossy compression is controversial, security infrastructure is non-existent, and recipients are busy and impatient; why have to pull images on demand slowly or with poor quality, when one can anticipate where they are needed and push or pre-fetch ? The standards and technology exist now, not tomorrow.
Long version:
There is a renewed enthusiasm for image sharing using the network.
This idea is not new, and for many years now the momentum has been growing to establish standards and build infrastructure to support images as part of the distributed electronic record. In the current US political climate, the huge cost and disparate availability of health care (and imaging utilization in particular) has IT proponents, who are as eager to jump on the stimulus package gravy train as anyone else, seeking to find a "meaningful use" of IT to address the image sharing problem.
The current generation of computer literate doctors is used to the convenience of the Internet, and on-demand access to arbitrary information à la Google. It is reasonable for them to demand that they have access with a similar level of convenience to patient information, including images.
However, this requirement is easy to demand but not so easy to satisfy. Practical realities intrude: radiological images are more complex than consumer grade images, need more manipulation for adequate interactive visualization, tend to be very large individually (e.g., digital mammograms) and occur in very large sets (e.g., thin slice CT or CT/PET). Yet bandwidth, particularly in the "last mile" from the providers to the Internet, is limited.
Some tout the use of lossy image compression as a panacea, yet this remains controversial and adequately powered studies to "prove" that such compression does not lower the quality of care are few in number. Others say the bandwidth problem will go away over time, yet in underserved rural areas, and particularly in medical offices, high-speed DSL or cable access is limited; for large institutions with very large volumes, very high bandwidth "pipes" may add significantly to operational cost. Even with high bandwidth, high latency can degrade the transfer rates achieved and impact any interactive protocol perceptibly. Like healthcare in general, not everyone has equal access at equal cost.
Leaving the DICOM images "on the server" and interacting remotely with an application, either using a proprietary approach like Terarecon's, or a generic application sharing approach like Citrix, or a web browser approach that serves up consumer format images on demand, is certainly possible. These approaches introduce new classes of problems such as access control and familiarity with the user interface. One frequently hears from radiologists who serve a number of hospitals, about how irritating it is to have to learn the remote interface of each of the different installed PACS, for example. This is exactly the same problem that the AMA has raised about the different viewers on different vendors' CDs. Provisioning every possible user with the appropriate identity and authentication information, and then assuring they have access to what they should have and nothing else, is also obviously a major administrative task. In the absence of a national or regional infrastructure for centralizing such provisioning, or a framework of "trust" between providers, this will remain a difficult problem. Providing patients with access to their own information and images adds another dimension to the scale and complexity problem.
For years now IHE has been promoting its cross-enterprise document sharing (XDS) architecture as a potential solution. The idea is to have each source register what it has available with a centrally accessible registry, and then consumers use the location information in the registry to go back to the source repository to pull what they need. The underlying technology is appropriately buzzword compliant (XML and SOAP and all that), and there is an additional layer to deal with the number and size of images (XDS-I, currently undergoing revision to become XDS-I.b using MTOM/XOP to efficiently handle the binary image data). However, this architecture still presupposes an unparalleled (and as yet largely unimplemented) degree of cooperation between everyone involved in the sharing problem.
Healthcare providers do not normally cooperate, at least in the US; indeed the very essence of the healthcare system encourages them to compete, and cooperation is anathema to them. Does it make sense to rely on the future deployment of an infrastructure that involves cooperation and yet likely with additional cost associated with it and little incentive to participate ? Who are the providers already interested in providing information to ? Their "customers" obviously, the referring doctors who order (or in civilized countries "request") the imaging services in the first place.
These referring doctors span the gamut in terms of technologic sophistication and requirements. Some may be satisfied with just the report. Many though, and often it depends on the specific patient and their condition, will need some access to the images. A significant proportion will need access to the original DICOM images in order to perform their own interpretation or to use their own visualization or planning tools. Yet these are busy people who have neither the patience, nor the time to waste, nor are reimbursed for, screwing around with artificial technological barriers to using the images, such as network delays or unfamiliar user interfaces.
Should it not be a simple matter in this day and age to send the images to where they are needed, just as one sends (faxes or emails) the report, a well established practice ?
Obviously this is possible. No imaging facility is going to perform an examination without knowing who ordered (requested) it, so the information about where to send it exists. If the potential recipients had a system capable of receiving it, this process could be automated.
Just as I have advocated in the past that referring doctors set up a system in their office and have their staff handle CD importing, so that such images are ready to view in their system when they need them, one could envisage the same or a similar in-office system with a port listening to the outside world ready to receive incoming images. Just like the fax machine that is sitting there waiting to receive phone calls.
Do the standards and technology exist to do this safely and securely right now ? Of course they do. All one would need is to perform an ordinary DICOM network transfer of the images from the sending site (imaging center) to the receiving site (referring doctor). Should it be a secure transfer to protect confidentiality ? Of course it should, but one does not need to set up a VPN to every possible referring doctor, nor from every possible sending site, since DICOM already defines transport over TLS (SSL), the same encryption protocol that one uses for ecommerce with sites with whom one has no pre-established relationship.
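To make the transport-security point concrete, here is a minimal Python sketch of the client-side TLS configuration that an ordinary TCP connection carrying a DICOM association could be wrapped in; the host name is hypothetical, and the DICOM protocol itself (association negotiation, C-STORE) runs unchanged over the encrypted socket and is not shown:

```python
import ssl

def make_dicom_tls_context():
    """Build a client-side TLS context suitable for wrapping the TCP
    connection over which a DICOM association is then negotiated."""
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    context.check_hostname = True             # verify the receiving site's identity
    context.verify_mode = ssl.CERT_REQUIRED   # certificate must chain to a known CA
    return context

# Usage (hypothetical address): wrap an ordinary TCP socket before any
# DICOM PDUs are exchanged over it.
# sock = socket.create_connection(("pacs.example.com", 2762))
# tls = make_dicom_tls_context().wrap_socket(
#     sock, server_hostname="pacs.example.com")
```

The point of the analogy to ecommerce holds here: certificate verification against well-known authorities means no pair-wise VPN or shared secret needs to be pre-arranged between sender and receiver.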
Does one need any identification or authentication infrastructure to achieve this ? Beyond perhaps checking that the receiving site has a valid TLS certificate (signed by a well-known certificate authority, just like for web browsing), the answer is no. The fact that the recipient ordered (requested) the examination should be sufficient to establish that they are entitled to access the images, for example. By analogy, one does not require any special authentication to receive the faxed report.
Would recipients potentially be vulnerable to "DICOM image spam" ? Well, theoretically, if an attacker were that determined, but this would easily be solved by "filtering" on a list of known and approved source sites.
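The filtering just described amounts to a trivial allow-list check at association time; a sketch, with made-up AE titles and addresses:

```python
# "Filtering" sketch: accept inbound associations only from known, approved
# sources, keyed here by calling AE title and peer host (values hypothetical).
APPROVED_SOURCES = {
    ("IMAGING_CTR_1", "203.0.113.10"),
    ("IMAGING_CTR_2", "203.0.113.22"),
}

def accept_association(calling_ae_title, peer_host):
    """Decide whether to accept an inbound association from this source."""
    return (calling_ae_title, peer_host) in APPROVED_SOURCES
```

A real listener would check this before accepting the association request, rejecting unknown callers outright, much as a mail server consults a sender allow-list.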
Is there any risk to the integrity of the sending site ? Well no, because this is an outbound transfer (push), and there is no need for the sending site to respond to queries (unless for some reason, it wants to).
This is pretty easy stuff to set up, and apart from the encryption layer, involves nothing that imaging vendors are not already intimately familiar with. No fancy web services stuff, no XML or SOAP messages. Just plain old boring store-and-forward point-to-point DICOM. And there are certainly already software tool kits that provide support for the secure transfer of DICOM images over TLS. Some of these tool kits also support the use of the various standard lossless and lossy compression "transfer syntaxes" that DICOM defines, including JPEG 2000, which can be used as appropriate and negotiated automatically depending on the receiving system's capabilities. Is DICOM the fastest possible network transfer protocol ? Well arguably not, depending on the latency of the network and the quality of the implementation, but in a store-and-forward paradigm this is much less of a factor, and there are many ways to optimize DICOM transfers if required, without throwing away the interoperability of a well known protocol.
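The transfer syntax selection mentioned above can be illustrated with a trivial sketch: the sender proposes what it supports in preference order and uses the first one the receiver also accepts. The UIDs below are the standard DICOM transfer syntax UIDs; the selection logic is a deliberate simplification of real association negotiation, which proposes these per presentation context:

```python
# Standard DICOM transfer syntax UIDs.
JPEG2000_LOSSLESS = "1.2.840.10008.1.2.4.90"
EXPLICIT_VR_LE = "1.2.840.10008.1.2.1"
IMPLICIT_VR_LE = "1.2.840.10008.1.2"

def choose_transfer_syntax(proposed_in_preference_order, accepted_by_receiver):
    """Pick the first transfer syntax, in the sender's preference order,
    that the receiving system has indicated it can accept."""
    accepted = set(accepted_by_receiver)
    for ts in proposed_in_preference_order:
        if ts in accepted:
            return ts
    return None  # no common ground; the transfer cannot proceed as proposed
```

This is why the negotiation is automatic from the user's point of view: a bandwidth-constrained sender can prefer compressed syntaxes, yet still fall back to uncompressed ones for a receiver that supports nothing else.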
What about confirming the success of the transfer ? One could use the existing DICOM Storage Commitment in the same way IHE uses it between modalities and the PACS, and/or one could include a "manifest" of what should have been sent, e.g., as a DICOM SR the way the IHE Teaching File and Clinical Trial Export (TCE) profile does.
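The manifest idea reduces, at its core, to a completeness check: the sender enumerates the SOP Instance UIDs it intends to transfer, and the receiver reports what never arrived. A sketch with made-up UIDs:

```python
# Sketch of completeness checking against a sender-supplied manifest
# (the UIDs below are invented for illustration).
def missing_instances(manifest_uids, received_uids):
    """Return the manifest entries that never arrived, in manifest order."""
    received = set(received_uids)
    return [uid for uid in manifest_uids if uid not in received]

manifest = ["1.2.3.1", "1.2.3.2", "1.2.3.3"]
received = ["1.2.3.1", "1.2.3.3"]
```

In the Storage Commitment or TCE-style variants, the same comparison is what drives the receiver either to acknowledge completeness or to request (or await) retransmission of the stragglers.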
What about the matter of inconsistent patient identifiers ? How is the receiving site going to know how to match the incoming images that use the imaging center's patient identifier against their own internal patient identifier. This is certainly a non-trivial problem, but just as when paying an invoice a business normally tracks the orderer's purchase order number in addition to its own numbering system, there is no reason why an imaging system cannot do the same. There are certainly HL7 and DICOM attributes related to dealing with this class of problem, but in the short term and in the absence of a consistent convention for handling this, it may be necessary to have a heuristic matching algorithm and/or human oversight of this "import reconciliation" problem. Perhaps one day there will be a national patient identifier to reduce the complexity of this problem, but there will always be errors that need reconciliation. The same class of problem exists with CDs, and the IHE Import Reconciliation Workflow (IRWF) profile provides ways to deal with this, either in an unscheduled manner by using patient identity queries, or in a scheduled manner, whereby the system that placed the order in the first place could be expecting the result in the form of images and perform the matching against a reduced set of potential alternatives.
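A heuristic matcher of the kind alluded to can be very simple; the sketch below (not the IRWF profile itself, just an illustration) normalizes the demographics and requires both name and birth date to agree before an incoming study is linked to a local record without human review:

```python
import unicodedata

def normalize_name(name):
    """Case-fold, strip accents, and drop punctuation/whitespace, keeping
    the DICOM '^' component separator so name components still align."""
    name = unicodedata.normalize("NFKD", name)
    return "".join(c for c in name.upper() if c.isalpha() or c == "^")

def candidate_match(incoming, local):
    """Conservative heuristic: link only when normalized name AND birth
    date both agree; anything weaker goes to human reconciliation."""
    return (normalize_name(incoming["name"]) == normalize_name(local["name"])
            and incoming["birth_date"] == local["birth_date"])
```

Real systems would add further discriminators (sex, accession number from the order) and a "possible match" tier routed to a reconciliation work queue rather than a bare yes/no.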
Note that this entire solution avoids the need for any type of centralized infrastructure. It just needs the sending site to know the "DICOM address" (host, port and AET) of the ordering (requesting) doctor's site to which to send the images. This could be configured in the system in advance, just like the fax number for the report, and it could be included in every order (printed or electronic) to allow manual or automatic addition of new sites.
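The "DICOM address book" just described is nothing more exotic than a lookup table, one pre-configured entry per ordering doctor, analogous to the fax number on file; a sketch with hypothetical entries:

```python
# Pre-configured routing table mapping each referring doctor to the
# "DICOM address" (host, port, AE title) of their office system.
# All values here are hypothetical.
DICOM_ADDRESS_BOOK = {
    "Dr. Smith": {"host": "office-smith.example.com", "port": 11112,
                  "aet": "SMITH_OFFICE"},
    "Dr. Jones": {"host": "office-jones.example.com", "port": 11112,
                  "aet": "JONES_OFFICE"},
}

def destination_for_order(referring_physician):
    """Look up where to push the images for this order; None if the
    doctor has no configured DICOM address (fall back to CD or fax)."""
    return DICOM_ADDRESS_BOOK.get(referring_physician)
```

New sites get added the same way a new fax number does: once, at configuration time, or automatically from the address carried on the order itself.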
Ideally the sending capability would be built in to imaging centers' information systems and PACS. Could one retrofit an existing RIS/PACS with this capability with a third-party device or piece of software ? Certainly; one could envisage a system in which the modality worklist provider was polled on a regular basis to extract information about what examinations had been requested, and within the worklist entries there should be identification of the referring doctor. Such a system would then query the PACS to see what images were available for these requests, retrieve them, and forward them on to the pre-configured recipient's site. Other DICOM services, such as Modality Performed Procedure Step (MPPS) and Instance Availability Notification (IAN) might be of additional assistance in making this process more reliable or timely, and in particular help assure that a complete set of images was transferred. Alternatively, rather than polling the MWL provider, one might listen to an HL7 ADT and Order Entry feed to extract the order information or gather additional details.
The bottom line though is that the images could be in the hands of the remote referring doctor before the radiologist has even had a chance to look at them, a state that has become well established as appropriate within a typical enterprise's PACS and hence should be available to outsiders as well.
What if a mistake is made, and the images need to be corrected later ? This is the same class of problem that one faces with film, or faxed reports or CDs, and in the short term there likely needs to be a human process involved to be sure that everyone is notified. That said, the more immediate and automated transfers become the more this is potentially an issue; it is shared by all distributed infrastructures whether point-to-point or centralized or federated. IHE has started to define transactions for flagging images as rejected (using a DICOM Key Object Selection Document with a defined title), with the intent that the corrected images then be resent. This work has been started in the Image Rejection Note Stored transaction of the IHE Mammography Acquisition Workflow supplement.
What if there are multiple potential recipients, i.e., a "cc list" on the order, such as is often the case when a specialist orders (requests) the examination with the intent of referring the patient onwards, as well as sending a copy to the primary care doctor ? Simple, forward the images to everyone on the cc list. From a consent and HIPAA Privacy Rule authorization perspective, it would be the responsibility of the person writing the order (request) to be sure that everyone on the cc list was appropriately authorized.
What if the patient wants a copy ? Well, it is unlikely that they would have their own personal receiving setup, and unreasonable to expect the imaging provider to support every such recipient (at least until this became as ubiquitous as email). There is always CD of course, but if the patient had a personal electronic health record provider (whoever that might be), they would be able to designate that provider's address as a target, and the imaging provider could send the images there as well. Likely there would be a few such providers configured in advance and it would merely be a matter of recording which one with the patient's registration information.
Are there other use-cases beyond the simple "order imaging, perform imaging, send to orderer" example ? Certainly there are. The typical emergency case referral, in which a patient is imaged at the first site then transferred for further care, is an example of where the same point-to-point store-and-forward paradigm can be used. Though in this case, one needs an infrastructure with sufficient bandwidth to cope with the disaster scenarios where a lot of images on multiple patients need to be transferred very quickly; as a consequence, a more formal arrangement between the two sites is probably necessary than the more ad hoc "email like" pattern for an arbitrary and extensible set of referring doctors.
Teleradiology use-cases, either for a specialist radiologist consultation, or primary interpretation "at home", or even a preliminary interpretation off-shore, are other examples in which exactly the same store-and-forward paradigm is applicable. This is nothing new, and people have been doing exactly this for many years, using DICOM C-STORE transactions with or without compression in some cases and proprietary protocols in others. Some such teleradiology scenarios could be better supported by removing the patient's true identity first and replacing it with a reversible pseudonym (e.g., for specialty or off-shore teleradiology), but that is a subtlety and not a pre-requisite.
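The reversible pseudonym idea is just a lookup table kept at the originating site: the true identity is replaced before sending, and only the originator holds the mapping needed to re-identify the returned report. A minimal sketch (identifier formats invented for illustration):

```python
import secrets

class PseudonymTable:
    """Reversible pseudonymization kept at the originating site only:
    outbound objects carry the pseudonym, and the table maps it back
    when results return."""

    def __init__(self):
        self._forward = {}   # true patient id -> pseudonym
        self._reverse = {}   # pseudonym -> true patient id

    def pseudonymize(self, patient_id):
        """Return a stable random pseudonym for this patient, minting
        one on first use so repeat studies map consistently."""
        if patient_id not in self._forward:
            pseudo = "ANON" + secrets.token_hex(4).upper()
            self._forward[patient_id] = pseudo
            self._reverse[pseudo] = patient_id
        return self._forward[patient_id]

    def reidentify(self, pseudonym):
        """Map a pseudonym back to the true identity (originator only)."""
        return self._reverse[pseudonym]
```

A full de-identification pass would of course also have to scrub the many other identifying attributes (and burned-in annotations) in the objects themselves; the table above only addresses the identifier mapping.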
All that is new here is essentially recognition that every potential recipient needs a secure DICOM "address", just like an email address, and that sending sites need to be configured to support a multitude of them, and that recipients need to have an Internet connected "DICOM listener" ready to receive images into their own preferred viewing system. I.e., it is a matter of taking well-established existing technology and making it routine rather than occasional.
Does this undermine the need for centralized and regional archives and repositories and registries, and web services orientated infrastructures that are more easily integrated with other sources of information than images ? No, certainly it does not, since there are many other use-cases in which the doctor needs to search for information whose need cannot be so easily anticipated. Still though, many of those use-cases can make use of a certain amount of prior knowledge to optimize the doctor's experience, for example by pre-fetching relevant prior or current images to a local system, again to prevent interactive delays or the need to use unfamiliar user interfaces. After all, it is a rare patient that is seen without an appointment.
However, in the interim, there is no need to wait for these archives and repositories and registries to be built, administered or paid for by someone (else).
In the longer term there will no doubt be competing protocols to DICOM network services for the store-and-forward transaction (which might be zip file encapsulated, and secure or grid ftp based) and for retrieval transactions (which might be web services based). I am sure that both sending and receiving systems will grow to support multiple different transactions as this shakes itself out. The store-and-forward payload will always remain pure DICOM of course, since there is no competition for the "file format" itself (as opposed to the interactive on demand display use case, for which protocols like JPIP and its ilk show promise).
But you don't need to wait for a new infrastructure, or new standards, or a new incentive (reimbursement or regulatory) model to deal with some of the easy use-cases. Just go ahead and do it with DICOM.
David
The slings and arrows of outrageous delay and inconvenience,
Or to take arms against a lack of bandwidth,
And by anticipating avoid it?"
Summary: Sharing of images across the network is a potentially attractive alternative to CDs, but image sets are large, bandwidth is limited, lossy compression is controversial, security infrastructure is non-existent, and recipients are busy and impatient; why have to pull images on demand slowly or with poor quality, when one can anticipate where they are needed and push or pre-fetch ? The standards and technology exists now, not tomorrow.
Long version:
There is a renewed enthusiasm for image sharing using the network.
This idea is not new, and for many years now the momentum has been growing to establish standards and build infrastructure to support image as part of the distributed electronic record. In the current US political climate, the huge cost and disparate availability of health care (and imaging utilization in particular) has IT proponents, who are as eager to jump on the stimulus package gravy train as anyone else, seeking to find a "meaningful use" of IT to address the image sharing problem.
The current generation of computer literate doctors is used to the convenience of the Internet, and on-demand access to arbitrary information à la Google. It is reasonable for them to demand that they have access with a similar level of convenience to patient information, including images.
However, this requirement is easy to demand but not so easy to satisfy. Practical realities intrude: radiological images are more complex than consumer grade images, need more manipulation for adequate interactive visualization, tend to be very large individually (e.g., digital mammograms) and occur in very large sets (e.g., thin slice CT or CT/PET). Yet bandwidth, particularly in the "last mile" from the providers to the Internet, is limited.
Some tout the use of lossy image compression as a panacea, yet this remains controversial and adequately powered studies to "prove" that such compression does not lower the quality of care are few in number. Others say the bandwidth problem will go away over time, yet in underserved rural areas, and particularly in medical offices, high-speed DSL or cable access is limited; for large institutions with very large volumes, very high bandwidth "pipes" may add significantly to operational cost. Even with high bandwidth, high latency can degrade the transfer rates achieved and impact any interactive protocol perceptibly. Like healthcare in general, not everyone has equal access at equal cost.
Leaving the DICOM images "on the server" and interacting remotely with an application, either using a proprietary approach like Terarecon's, or a generic application sharing approach like Citrix, or a web browser approach that serves up consumer format images on demand, is certainly possible. These approaches introduce new classes of problems such as access control and familiarity with the user interface. One frequently hears from radiologists who serve a number of hospitals about how irritating it is to have to learn the remote interface of each of the different installed PACS, for example. This is exactly the same problem that the AMA has raised about the different viewers on different vendors' CDs. Provisioning every possible user with the appropriate identity and authentication information, and then assuring they have access to what they should have and nothing else, is also obviously a major administrative task. In the absence of a national or regional infrastructure for centralizing such provisioning, or a framework of "trust" between providers, this will remain a difficult problem. Providing patients with access to their own information and images adds another dimension to the scale and complexity problem.
For years now IHE has been promoting its cross-enterprise document sharing (XDS) architecture as a potential solution. The idea is to have each source register what it has available with a centrally accessible registry, and then consumers use the location information in the registry to go back to the source repository to pull what they need. The underlying technology is appropriately buzzword compliant (XML and SOAP and all that), and there is an additional layer to deal with the number and size of images (XDS-I, currently undergoing revision to become XDS-I.b using MTOM/XOP to efficiently handle the binary image data). However, this architecture still presupposes an unparalleled (and as yet largely unimplemented) degree of cooperation between everyone involved in the sharing problem.
Healthcare providers do not normally cooperate, at least in the US; indeed the very essence of the healthcare system encourages them to compete, and cooperation is anathema to them. Does it make sense to rely on the future deployment of an infrastructure that requires cooperation, likely carries additional cost, and offers little incentive to participate ? Who are the providers already interested in providing information to ? Their "customers" obviously, the referring doctors who order (or in civilized countries "request") the imaging services in the first place.
These referring doctors span the gamut in terms of technologic sophistication and requirements. Some may be satisfied with just the report. Many though, and often it depends on the specific patient and their condition, will need some access to the images. A significant proportion will need access to the original DICOM images in order to perform their own interpretation or to use their own visualization or planning tools. Yet these are busy people who have neither the patience nor the time to waste, nor are they reimbursed for, screwing around with artificial technological barriers to using the images, such as network delays or unfamiliar user interfaces.
Should it not be a simple matter in this day and age to send the images to where they are needed, just as one sends (faxes or emails) the report, a well established practice ?
Obviously this is possible. No imaging facility is going to perform an examination without knowing who ordered (requested) it, so the information about where to send it exists. If the potential recipients had a system capable of receiving it, this process could be automated.
Just as I have advocated in the past that referring doctors set up a system in their office and have their staff handle CD importing, so that such images are ready to view in their system when they need them, one could envisage the same or a similar in-office system with a port listening to the outside world, ready to receive incoming images. Just like the fax machine that is sitting there waiting to receive phone calls.
Do the standards and technology exist to do this safely and securely right now ? Of course they do. All one would need is to perform an ordinary DICOM network transfer of the images from the sending site (imaging center) to the receiving site (referring doctor). Should it be a secure transfer to protect confidentiality ? Of course it should, but one does not need to set up a VPN to every possible referring doctor, nor from every possible sending site, since DICOM already defines transport over TLS (SSL), the same encryption protocol that one uses for ecommerce with sites with whom one has no pre-established relationship.
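As a concrete illustration, the TLS layer involved is nothing exotic. A minimal sender-side sketch using Python's standard `ssl` module follows; the hostname is made up, and the integration with an actual DICOM toolkit's socket is only indicated in a comment:

```python
import ssl


def make_dicom_tls_context() -> ssl.SSLContext:
    """Build a client-side TLS context suitable for wrapping the TCP
    socket that a DICOM association would otherwise use directly.
    Certificate verification against well-known CAs is the default,
    mirroring ordinary HTTPS/e-commerce practice, so no VPN or
    pre-established relationship is needed."""
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    # Be explicit: require a valid certificate and a matching hostname.
    context.verify_mode = ssl.CERT_REQUIRED
    context.check_hostname = True
    return context


# A DICOM toolkit would then wrap its raw socket before association, e.g.:
# secure_sock = make_dicom_tls_context().wrap_socket(
#     raw_sock, server_hostname="receiver.drjones-office.example.com")
```

The point of the sketch is simply that the sending side needs nothing beyond what every web browser already does to verify the party it is talking to.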
Does one need any identification or authentication infrastructure to achieve this ? Beyond perhaps checking that the receiving site has a valid TLS certificate (signed by a well-known certificate authority, just like for web browsing), the answer is no. The fact that the recipient ordered (requested) the examination should be sufficient to establish that they are entitled to access the images, for example. By analogy, one does not require any special authentication to receive the faxed report.
Would recipients potentially be vulnerable to "DICOM image spam" ? Well, theoretically, if some attacker were that determined, but this would easily be solved by "filtering" on a list of known and approved source sites.
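Such source filtering amounts to a few lines of logic on the receiving side; a minimal sketch (the AE title and hostname values are made up for illustration):

```python
from typing import Set, Tuple


def accept_association(calling_aet: str, peer_host: str,
                       allowed: Set[Tuple[str, str]]) -> bool:
    """Reject 'DICOM image spam' by accepting inbound associations only
    from (Calling AE Title, host) pairs that the receiving site has
    explicitly configured as approved sources."""
    return (calling_aet, peer_host) in allowed


# Hypothetical allowlist maintained by the receiving office.
ALLOWED_SOURCES = {("IMG_CENTER_AE", "images.center.example.com")}
```

The same check could equally be applied at the firewall by source address; the AE title check just adds a DICOM-level layer.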
Is there any risk to the integrity of the sending site ? Well no, because this is an outbound transfer (push), and there is no need for the sending site to respond to queries (unless for some reason, it wants to).
This is pretty easy stuff to set up, and apart from the encryption layer, involves nothing that imaging vendors are not already intimately familiar with. No fancy web services stuff, no XML or SOAP messages. Just plain old boring store-and-forward point-to-point DICOM. And there are certainly already software tool kits that provide support for the secure transfer of DICOM images over TLS. Some of these tool kits also support the use of the various standard lossless and lossy compression "transfer syntaxes" that DICOM defines, including JPEG 2000, which can be used as appropriate and negotiated automatically depending on the receiving system's capabilities. Is DICOM the fastest possible network transfer protocol ? Well arguably not, depending on the latency of the network and the quality of the implementation, but in a store-and-forward paradigm this is much less of a factor, and there are many ways to optimize DICOM transfers if required, without throwing away the interoperability of a well known protocol.
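To illustrate the negotiation aspect mentioned above, here is a sketch of how a sender might pick the best transfer syntax from among those the receiver accepted during association negotiation. The UID values are the standard ones from DICOM PS 3.5/PS 3.6; the preference ordering itself is just one plausible policy, not anything mandated:

```python
from typing import List

# Standard DICOM transfer syntax UIDs (PS 3.6).
JPEG2000_LOSSLESS = "1.2.840.10008.1.2.4.90"   # JPEG 2000, lossless only
JPEG_LOSSLESS_SV1 = "1.2.840.10008.1.2.4.70"   # JPEG Lossless, Process 14 SV1
EXPLICIT_VR_LE    = "1.2.840.10008.1.2.1"      # Explicit VR Little Endian
IMPLICIT_VR_LE    = "1.2.840.10008.1.2"        # Implicit VR LE (default)

# Hypothetical sender policy: prefer lossless compression, fall back
# to uncompressed encodings every implementation must support.
SENDER_PREFERENCE = [JPEG2000_LOSSLESS, JPEG_LOSSLESS_SV1,
                     EXPLICIT_VR_LE, IMPLICIT_VR_LE]


def choose_transfer_syntax(accepted_by_receiver: List[str]) -> str:
    """Return the most-preferred transfer syntax the receiver accepted;
    Implicit VR Little Endian is the guaranteed fallback."""
    for ts in SENDER_PREFERENCE:
        if ts in accepted_by_receiver:
            return ts
    return IMPLICIT_VR_LE
```

In a real association this selection happens per presentation context, but the principle of automatic adaptation to the receiver's capabilities is the same.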
What about confirming the success of the transfer ? One could use the existing DICOM Storage Commitment in the same way IHE uses it between modalities and the PACS, and/or one could include a "manifest" of what should have been sent, e.g., as a DICOM SR the way the IHE Teaching File and Clinical Trial Export (TCE) profile does.
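Checking receipt against a manifest reduces to simple set arithmetic; a sketch, with SOP Instance UIDs represented as plain strings:

```python
from typing import Set


def missing_from_manifest(manifest_uids: Set[str],
                          received_uids: Set[str]) -> Set[str]:
    """Return the SOP Instance UIDs listed in the manifest that have
    not (yet) arrived; an empty result means the transfer is complete
    and could trigger a positive acknowledgement back to the sender."""
    return manifest_uids - received_uids
```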
What about the matter of inconsistent patient identifiers ? How is the receiving site going to know how to match the incoming images that use the imaging center's patient identifier against their own internal patient identifier ? This is certainly a non-trivial problem, but just as when paying an invoice a business normally tracks the orderer's purchase order number in addition to its own numbering system, there is no reason why an imaging system cannot do the same. There are certainly HL7 and DICOM attributes related to dealing with this class of problem, but in the short term and in the absence of a consistent convention for handling this, it may be necessary to have a heuristic matching algorithm and/or human oversight of this "import reconciliation" problem. Perhaps one day there will be a national patient identifier to reduce the complexity of this problem, but there will always be errors that need reconciliation. The same class of problem exists with CDs, and the IHE Import Reconciliation Workflow (IRWF) profile provides ways to deal with this, either in an unscheduled manner by using patient identity queries, or in a scheduled manner, whereby the system that placed the order in the first place could be expecting the result in the form of images and perform the matching against a reduced set of potential alternatives.
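A minimal sketch of such a heuristic matching step follows; the normalization and matching rules here are purely illustrative, not any standard algorithm, and anything other than a single exact hit would be routed to human reconciliation:

```python
from typing import Dict, List, Tuple


def normalize_name(pn: str) -> str:
    """Reduce a DICOM Person Name (family^given^middle^prefix^suffix)
    to a crudely comparable form: uppercased family and given
    components only."""
    parts = [p.strip().upper() for p in pn.split("^")]
    return "^".join(parts[:2])


def candidate_matches(name: str, birth_date: str,
                      local_patients: Dict[str, Tuple[str, str]]) -> List[str]:
    """Return local patient IDs whose registered demographics
    (name, birth date) agree with the incoming study's demographics.
    Zero or multiple candidates would trigger 'import reconciliation'
    by a human operator."""
    return [pid for pid, (n, d) in local_patients.items()
            if normalize_name(n) == normalize_name(name) and d == birth_date]
```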
Note that this entire solution avoids the need for any type of centralized infrastructure. It just needs the sending site to know the "DICOM address" (host, port and AET) of the ordering (requesting) doctor's site to which to send the images. This could be configured in the system in advance, just like the fax number for the report, and it could be included in every order (printed or electronic) to allow manual or automatic addition of new sites.
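Configuration-wise, the "DICOM address book" amounts to very little; a sketch in which all the names and addresses are hypothetical:

```python
from typing import NamedTuple, Optional


class DicomAddress(NamedTuple):
    """Host, port and Application Entity Title: the 'fax number'
    equivalent for image delivery."""
    host: str
    port: int
    aet: str


# Hypothetical pre-configured destinations, keyed by referring doctor;
# new entries could be added manually, or automatically from a DICOM
# address included on each order.
ADDRESS_BOOK = {
    "Dr. Jones": DicomAddress("drjones-office.example.com", 11112, "JONES_RCV"),
}


def destination_for(referrer: str) -> Optional[DicomAddress]:
    """Look up where to push a completed study, analogous to looking
    up the fax number for the report."""
    return ADDRESS_BOOK.get(referrer)
```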
Ideally the sending capability would be built into imaging centers' information systems and PACS. Could one retrofit an existing RIS/PACS with this capability with a third-party device or piece of software ? Certainly; one could envisage a system in which the modality worklist provider was polled on a regular basis to extract information about what examinations had been requested, and within the worklist entries there should be identification of the referring doctor. Such a system would then query the PACS to see what images were available for these requests, retrieve them, and forward them on to the pre-configured recipients' sites. Other DICOM services, such as Modality Performed Procedure Step (MPPS) and Instance Availability Notification (IAN) might be of additional assistance in making this process more reliable or timely, and in particular help assure that a complete set of images was transferred. Alternatively, rather than polling the MWL provider, one might listen to an HL7 ADT and Order Entry feed to extract the order information or gather additional details.
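The polling loop itself is structurally trivial. In the sketch below the three DICOM interactions (the MWL query, the PACS query/retrieve, and the store-and-forward send) are stood in for by hypothetical callables, since the details depend entirely on the toolkit used:

```python
from typing import Callable, Iterable, List, Tuple


def forward_completed_studies(
        fetch_worklist: Callable[[], Iterable[Tuple[str, str]]],
        query_pacs: Callable[[str], Iterable[str]],
        send_study: Callable[[str, str], None]) -> List[Tuple[str, str]]:
    """One polling pass of a hypothetical retrofit gateway: ask the
    worklist provider what was requested (yielding accession number
    and referring doctor), see which studies the PACS now holds for
    each request, and push each study to the referrer's configured
    address. Returns what was sent, for logging/audit."""
    sent = []
    for accession, referrer in fetch_worklist():
        for study_uid in query_pacs(accession):
            send_study(study_uid, referrer)
            sent.append((study_uid, referrer))
    return sent
```

A real gateway would of course have to track what had already been sent between passes, and could use MPPS or IAN events instead of blind polling to know when a study is complete.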
The bottom line though is that the images could be in the hands of the remote referring doctor before the radiologist has even had a chance to look at them, a state that has become well established as appropriate within a typical enterprise's PACS and hence should be available to outsiders as well.
What if a mistake is made, and the images need to be corrected later ? This is the same class of problem that one faces with film, or faxed reports or CDs, and in the short term there likely needs to be a human process involved to be sure that everyone is notified. That said, the more immediate and automated transfers become, the more this is potentially an issue; it is shared by all distributed infrastructures, whether point-to-point, centralized or federated. IHE has started to define transactions for flagging images as rejected (using a DICOM Key Object Selection Document with a defined title), with the intent that the corrected images then be resent. This work has been started in the Image Rejection Note Stored transaction of the IHE Mammography Acquisition Workflow supplement.
What if there are multiple potential recipients, i.e., a "cc list" on the order, such as is often the case when a specialist orders (requests) the examination with the intent of referring the patient onwards, as well as sending a copy to the primary care doctor ? Simple, forward the images to everyone on the cc list. From a consent and HIPAA Privacy Rule authorization perspective, it would be the responsibility of the person writing the order (request) to be sure that everyone on the cc list was appropriately authorized.
What if the patient wants a copy ? Well, it is unlikely that they would have their own personal receiving setup, and unreasonable to expect the imaging provider to support every such recipient (at least until this became as ubiquitous as email). There is always CD of course, but if the patient had a personal electronic health record provider (whoever that might be), they would be able to designate that provider's address as a target, and the imaging provider could send the images there as well. Likely there would be a few such providers configured in advance and it would merely be a matter of recording which one with the patient's registration information.
Are there other use-cases beyond the simple "order imaging, perform imaging, send to orderer" example ? Certainly there are. The typical emergency case referral, in which a patient is imaged at the first site then transferred for further care, is an example of where the same point-to-point store-and-forward paradigm can be used. In this case, though, one needs an infrastructure with sufficient bandwidth to cope with the disaster scenarios where a lot of images on multiple patients need to be transferred very quickly; as a consequence, a more formal arrangement between the two sites is probably necessary, rather than the more ad hoc "email like" pattern that suffices for an arbitrary and extensible set of referring doctors.
Teleradiology use-cases, either for a specialist radiologist consultation, or primary interpretation "at home", or even a preliminary interpretation off-shore, are other examples in which exactly the same store-and-forward paradigm is applicable. This is nothing new, and people have been doing exactly this for many years, using DICOM C-STORE transactions with or without compression in some cases and proprietary protocols in others. Some such teleradiology scenarios could be better supported by removing the patient's true identity first and replacing it with a reversible pseudonym (e.g., for specialty or off-shore teleradiology), but that is a subtlety and not a pre-requisite.
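One plausible way to generate stable pseudonyms is sketched below, assuming the originating site keeps its own table mapping pseudonyms back to true identities, which is what makes them reversible by the originator alone. The scheme is purely illustrative, not any standard:

```python
import base64
import hashlib
import hmac


def pseudonymize(patient_id: str, site_secret: bytes) -> str:
    """Derive a stable, unguessable pseudonym for a patient identifier.
    The HMAC keyed with a site-held secret guarantees that the same
    patient always maps to the same pseudonym without revealing the
    true identifier; the originating site's own lookup table (not
    shown) provides the reverse mapping when results come back."""
    digest = hmac.new(site_secret, patient_id.encode(), hashlib.sha256).digest()
    # Truncate and Base32-encode for a compact, character-safe value.
    return "PSN" + base64.b32encode(digest[:10]).decode()
```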
All that is new here is essentially recognition that every potential recipient needs a secure DICOM "address", just like an email address, that sending sites be configured to support a multitude of them, and that recipients have an Internet-connected "DICOM listener" ready to receive images into their own preferred viewing system. I.e., it is a matter of taking well-established existing technology and making it routine rather than occasional.
Does this undermine the need for centralized and regional archives and repositories and registries, and web services orientated infrastructures that are more easily integrated with other sources of information than images ? No, certainly it does not, since there are many other use-cases in which the doctor needs to search for information whose need cannot be so easily anticipated. Still though, many of those use-cases can make use of a certain amount of prior knowledge to optimize the doctor's experience, for example by pre-fetching relevant prior or current images to the local system, again to prevent interactive delays or the need to use unfamiliar user interfaces. After all, it is a rare patient that is seen without an appointment.
However, in the interim, there is no need to wait for these archives and repositories and registries to be built, administered or paid for by someone (else).
In the longer term there will no doubt be competing protocols to DICOM network services for the store-and-forward transaction (which might be zip file encapsulated, and secure or grid ftp based) and for retrieval transactions (which might be web services based). I am sure that both sending and receiving systems will grow to support multiple different transactions as this shakes itself out. The store-and-forward payload will always remain pure DICOM of course, since there is no competition for the "file format" itself (as opposed to the interactive on demand display use case, for which protocols like JPIP and its ilk show promise).
But you don't need to wait for a new infrastructure, or new standards, or a new incentive (reimbursement or regulatory) model to deal with some of the easy use-cases. Just go ahead and do it with DICOM.
David