Monday, April 13, 2009

To push or to pull: that is the question

"Whether 'tis nobler in the mind to suffer
The slings and arrows of outrageous delay and inconvenience,
Or to take arms against a lack of bandwidth,
And by anticipating avoid it?"

Summary: Sharing of images across the network is a potentially attractive alternative to CDs, but image sets are large, bandwidth is limited, lossy compression is controversial, security infrastructure is non-existent, and recipients are busy and impatient; why pull images on demand, slowly or with poor quality, when one can anticipate where they are needed and push or pre-fetch ? The standards and technology exist now, not tomorrow.

Long version:

There is a renewed enthusiasm for image sharing using the network.

This idea is not new, and for many years now the momentum has been growing to establish standards and build infrastructure to support images as part of the distributed electronic record. In the current US political climate, the huge cost and disparate availability of health care (and imaging utilization in particular) has IT proponents, who are as eager to jump on the stimulus package gravy train as anyone else, seeking to find a "meaningful use" of IT to address the image sharing problem.

The current generation of computer literate doctors is used to the convenience of the Internet, and on-demand access to arbitrary information à la Google. It is reasonable for them to demand that they have access with a similar level of convenience to patient information, including images.

However, this requirement is easy to demand but not so easy to satisfy. Practical realities intrude: radiological images are more complex than consumer grade images, need more manipulation for adequate interactive visualization, tend to be very large individually (e.g., digital mammograms) and occur in very large sets (e.g., thin slice CT or CT/PET). Yet bandwidth, particularly in the "last mile" from the providers to the Internet, is limited.

Some tout the use of lossy image compression as a panacea, yet this remains controversial and adequately powered studies to "prove" that such compression does not lower the quality of care are few in number. Others say the bandwidth problem will go away over time, yet in underserved rural areas, and particularly in medical offices, high-speed DSL or cable access is limited; for large institutions with very large volumes, very high bandwidth "pipes" may add significantly to operational cost. Even with high bandwidth, high latency can degrade the transfer rates achieved and impact any interactive protocol perceptibly. Like healthcare in general, not everyone has equal access at equal cost.

Leaving the DICOM images "on the server" and interacting remotely with an application, either using a proprietary approach like Terarecon's, or a generic application sharing approach like Citrix, or web browser approach that serves up consumer format images on demand, is certainly possible. These approaches introduce new classes of problems such as access control and familiarity with the user interface. One frequently hears from radiologists who serve a number of hospitals, about how irritating it is to have to learn the remote interface of each of the different installed PACS, for example. This is exactly the same problem that the AMA has raised about the different viewers on different vendors' CDs. Provisioning every possible user with the appropriate identity and authentication information, and then assuring they have access to what they should have and nothing else, is also obviously a major administrative task. In the absence of a national or regional infrastructure for centralizing such provisioning, or a framework of "trust" between providers, this will remain a difficult problem. Providing patients with access to their own information and images adds another dimension to the scale and complexity problem.

For years now IHE has been promoting its cross-enterprise document sharing (XDS) architecture as a potential solution. The idea is to have each source register what it has available with a centrally accessible registry, and then consumers use the location information in the registry to go back to the source repository to pull what they need. The underlying technology is appropriately buzzword compliant (XML and SOAP and all that), and there is an additional layer to deal with the number and size of images (XDS-I, currently undergoing revision to become XDS-I.b using MTOM/XOP to efficiently handle the binary image data). However, this architecture still presupposes an unparalleled (and as yet largely unimplemented) degree of cooperation between everyone involved in the sharing problem.

Healthcare providers do not normally cooperate, at least in the US; indeed the very essence of the healthcare system encourages them to compete, and cooperation is anathema to them. Does it make sense to rely on the future deployment of an infrastructure that requires cooperation, yet likely comes with additional cost and little incentive to participate ? Who are the providers already interested in providing information to ? Their "customers" obviously, the referring doctors who order (or in civilized countries "request") the imaging services in the first place.

These referring doctors span the gamut in terms of technologic sophistication and requirements. Some may be satisfied with just the report. Many though, and often it depends on the specific patient and their condition, will need some access to the images. A significant proportion will need access to the original DICOM images in order to perform their own interpretation or to use their own visualization or planning tools. Yet these are busy people who have neither the patience nor the time to waste, and are not reimbursed for, screwing around with artificial technological barriers to using the images, such as network delays or unfamiliar user interfaces.

Should it not be a simple matter in this day and age to send the images to where they are needed, just as one sends (faxes or emails) the report, a well established practice ?

Obviously this is possible. No imaging facility is going to perform an examination without knowing who ordered (requested) it, so the information about where to send it exists. If the potential recipients had a system capable of receiving it, this process could be automated.

Just as I have advocated in the past that referring doctors set up a system in their office and have their staff handle CD importing, so that such images are ready to view in their system when they need them, one could envisage the same or a similar in-office system with a port listening to the outside world ready to receive incoming images. Just like the fax machine that is sitting there waiting to receive phone calls.

Do the standards and technology exist to do this safely and securely right now ? Of course they do. All one would need is to perform an ordinary DICOM network transfer of the images from the sending site (imaging center) to the receiving site (referring doctor). Should it be a secure transfer to protect confidentiality ? Of course it should, but one does not need to set up a VPN to every possible referring doctor, nor from every possible sending site, since DICOM already defines transport over TLS (SSL), the same encryption protocol that one uses for ecommerce with sites with whom one has no pre-established relationship.
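By way of a hedged sketch of what the encryption layer amounts to, here is the kind of TLS client configuration a sending site would use, expressed with Python's standard ssl module rather than any particular DICOM toolkit; the certificate and CA bundle file paths are placeholders, and current best practice would require at least TLS 1.2:

```python
import ssl

def make_dicom_tls_context(ca_bundle=None, cert=None, key=None):
    """Build a client-side TLS context of the kind DICOM's secure
    transport profiles call for: modern TLS, the peer's certificate
    required and verified against a CA bundle, and our own certificate
    offered for mutual authentication. All file paths are placeholders."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.check_hostname = True             # match the peer's certificate to its name
    ctx.verify_mode = ssl.CERT_REQUIRED   # insist the peer proves its identity
    if ca_bundle:
        ctx.load_verify_locations(cafile=ca_bundle)
    if cert:
        ctx.load_cert_chain(certfile=cert, keyfile=key)
    return ctx
```

A DICOM association would then simply run over a socket wrapped with this context (ctx.wrap_socket(...)), exactly as an HTTPS connection would.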

Does one need any identification or authentication infrastructure to achieve these ? Beyond perhaps checking that the receiving site has a valid TLS certificate (signed by a well-known certificate authority, just like for web browsing), the answer is no. The fact that the recipient ordered (requested) the examination should be sufficient to establish that they are entitled to access the images, for example. By analogy, one does not require any special authentication to receive the faxed report.

Would recipients potentially be vulnerable to "DICOM image spam" ? Well, theoretically, if an attacker were that determined, but this could easily be solved by "filtering" on a list of known and approved source sites.
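Such "filtering" could be as simple as an allow-list keyed on the calling AE title and the source address of the association, checked before accepting any images; every name and address in this sketch is invented:

```python
def accept_association(calling_aet, peer_ip, approved):
    """Reject 'DICOM image spam' by accepting associations only from
    known and approved source sites: `approved` maps an AE title to
    the set of IP addresses it is allowed to call from."""
    return peer_ip in approved.get(calling_aet, set())

# hypothetical allow-list for a referring doctor's office system
APPROVED = {
    "IMG_CENTER_MAIN": {"203.0.113.10"},
    "HOSPITAL_A_PACS": {"198.51.100.7", "198.51.100.8"},
}
```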

Is there any risk to the integrity of the sending site ? Well no, because this is an outbound transfer (push), and there is no need for the sending site to respond to queries (unless for some reason, it wants to).

This is pretty easy stuff to set up, and apart from the encryption layer, involves nothing that imaging vendors are not already intimately familiar with. No fancy web services stuff, no XML or SOAP messages. Just plain old boring store-and-forward point-to-point DICOM. And there are certainly already software tool kits that provide support for the secure transfer of DICOM images over TLS. Some of these tool kits also support the use of the various standard lossless and lossy compression "transfer syntaxes" that DICOM defines, including JPEG 2000, which can be used as appropriate and negotiated automatically depending on the receiving system's capabilities. Is DICOM the fastest possible network transfer protocol ? Well arguably not, depending on the latency of the network and the quality of the implementation, but in a store-and-forward paradigm this is much less of a factor, and there are many ways to optimize DICOM transfers if required, without throwing away the interoperability of a well known protocol.
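The automatic negotiation of a mutually acceptable transfer syntax is conceptually trivial: the sender proposes what it prefers, and the first one the receiver supports wins. A toy sketch, using the standard DICOM UIDs for JPEG 2000 (lossless), JPEG Lossless, and Explicit VR Little Endian:

```python
def choose_transfer_syntax(offered, accepted_by_receiver):
    """Pick the first transfer syntax the receiver supports, in the
    sender's order of preference, mirroring what happens during DICOM
    association negotiation. Returns None if there is no overlap."""
    for ts in offered:
        if ts in accepted_by_receiver:
            return ts
    return None

# standard transfer syntax UIDs, in a hypothetical order of preference
PREFERRED = [
    "1.2.840.10008.1.2.4.90",  # JPEG 2000 Image Compression (Lossless Only)
    "1.2.840.10008.1.2.4.70",  # JPEG Lossless, Non-Hierarchical, SV1
    "1.2.840.10008.1.2.1",     # Explicit VR Little Endian (uncompressed)
]
```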

What about confirming the success of the transfer ? One could use the existing DICOM Storage Commitment in the same way IHE uses it between modalities and the PACS, and/or one could include a "manifest" of what should have been sent, e.g., as a DICOM SR the way the IHE Teaching File and Clinical Trial Export (TCE) profile does.
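Checking a manifest against what actually arrived reduces to a set difference over SOP Instance UIDs; a minimal sketch (the UIDs in the test are of course made up):

```python
def missing_instances(manifest_uids, received_uids):
    """Compare a sender's manifest of SOP Instance UIDs (e.g., carried
    in a DICOM SR, as in the IHE TCE profile) against what was actually
    received, returning the UIDs still outstanding."""
    return sorted(set(manifest_uids) - set(received_uids))
```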

What about the matter of inconsistent patient identifiers ? How is the receiving site going to know how to match the incoming images that use the imaging center's patient identifier against their own internal patient identifier ? This is certainly a non-trivial problem, but just as when paying an invoice a business normally tracks the orderer's purchase order number in addition to its own numbering system, there is no reason why an imaging system cannot do the same. There are certainly HL7 and DICOM attributes related to dealing with this class of problem, but in the short term and in the absence of a consistent convention for handling this, it may be necessary to have a heuristic matching algorithm and/or human oversight of this "import reconciliation" problem. Perhaps one day there will be a national patient identifier to reduce the complexity of this problem, but there will always be errors that need reconciliation. The same class of problem exists with CDs, and the IHE Import Reconciliation Workflow (IRWF) profile provides ways to deal with this, either in an unscheduled manner by using patient identity queries, or in a scheduled manner, whereby the system that placed the order in the first place could be expecting the result in the form of images and perform the matching against a reduced set of potential alternatives.
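To make the heuristic matching idea concrete, here is a deliberately naive sketch; the field names and the normalization are illustrative only, and a real system would use more demographics, stronger fuzzy matching, and would queue anything ambiguous for human review rather than guessing:

```python
def match_patient(incoming, candidates):
    """Toy 'import reconciliation' heuristic: ignore the imaging
    center's patient ID and match on normalized name plus birth date
    against the local registration list. Returns the local identifier
    only on an unambiguous single hit; None means 'needs a human'."""
    def norm(name):
        # collapse case, whitespace and DICOM '^' name delimiters
        return "".join(name.upper().split()).replace("^", "")

    hits = [c for c in candidates
            if norm(c["name"]) == norm(incoming["name"])
            and c["birth_date"] == incoming["birth_date"]]
    return hits[0]["local_id"] if len(hits) == 1 else None
```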

Note that this entire solution avoids the need for any type of centralized infrastructure. It just needs the sending site to know the "DICOM address" (host, port and AET) of the ordering (requesting) doctor's site to which to send the images. This could be configured in the system in advance, just like the fax number for the report, and it could be included in every order (printed or electronic) to allow manual or automatic addition of new sites.
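Such a pre-configured table of DICOM "addresses" amounts to very little; a sketch in which all the hosts, ports and AE titles are invented:

```python
# Hypothetical routing table: ordering doctor -> DICOM "address",
# configured in advance just like the fax number for the report.
ROUTES = {
    "DR_SMITH": {"host": "smith-office.example.net", "port": 11112, "aet": "SMITH_RCV"},
    "DR_JONES": {"host": "jones-clinic.example.net", "port": 104, "aet": "JONES_PACS"},
}

def destination_for(referring_physician):
    """Look up where to push a study; None means fall back to manual
    handling (or a CD), just as one would for an unknown fax number."""
    return ROUTES.get(referring_physician)
```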

Ideally the sending capability would be built in to imaging centers' information systems and PACS. Could one retrofit an existing RIS/PACS with this capability using a third-party device or piece of software ? Certainly; one could envisage a system in which the modality worklist provider was polled on a regular basis to extract information about what examinations had been requested, and within the worklist entries there should be identification of the referring doctor. Such a system would then query the PACS to see what images were available for these requests, retrieve them, and forward them on to the pre-configured recipient's site. Other DICOM services, such as Modality Performed Procedure Step (MPPS) and Instance Availability Notification (IAN) might be of additional assistance in making this process more reliable or timely, and in particular help assure that a complete set of images was transferred. Alternatively, rather than polling the MWL provider, one might listen to an HL7 ADT and Order Entry feed to extract the order information or gather additional details.
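The polling "router" just described can be sketched as a simple loop; here the worklist query, the PACS lookup and the store operation are stubbed out as plain callables, since a real implementation would issue MWL C-FIND, study C-FIND/C-MOVE and C-STORE requests through a DICOM toolkit:

```python
def route_completed_studies(worklist, studies_by_accession, send):
    """Sketch of the retrofit router: walk the requested procedures
    obtained from a Modality Worklist query, find which of them the
    PACS now holds images for, and push each study to the referring
    doctor's configured destination. `worklist` is a list of dicts,
    `studies_by_accession` maps accession number -> study, and `send`
    stands in for the secure DICOM store; all names are illustrative."""
    sent = []
    for item in worklist:
        study = studies_by_accession.get(item["accession"])
        if study is not None:
            send(item["referring_physician"], study)
            sent.append(item["accession"])
    return sent
```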

The bottom line, though, is that the images could be in the hands of the remote referring doctor before the radiologist has even had a chance to look at them, a state of affairs that is well established as appropriate within a typical enterprise's PACS and hence should be available to outsiders as well.

What if a mistake is made, and the images need to be corrected later ? This is the same class of problem that one faces with film, or faxed reports or CDs, and in the short term there likely needs to be a human process involved to be sure that everyone is notified. That said, the more immediate and automated transfers become, the more this is potentially an issue; it is shared by all distributed infrastructures whether point-to-point or centralized or federated. IHE has started to define transactions for flagging images as rejected (using a DICOM Key Object Selection Document with a defined title), with the intent that the corrected images then be resent. This work has been started in the Image Rejection Note Stored transaction of the IHE Mammography Acquisition Workflow supplement.

What if there are multiple potential recipients, i.e., a "cc list" on the order, such as is often the case when a specialist orders (requests) the examination with the intent of referring the patient onwards, as well as sending a copy to the primary care doctor ? Simple, forward the images to everyone on the cc list. From a consent and HIPAA Privacy Rule authorization perspective, it would be the responsibility of the person writing the order (request) to be sure that everyone on the cc list was appropriately authorized.

What if the patient wants a copy ? Well, it is unlikely that they would have their own personal receiving setup, and unreasonable to expect the imaging provider to support every such recipient (at least until this became as ubiquitous as email). There is always CD of course, but if the patient had a personal electronic health record provider (whoever that might be), they would be able to designate that provider's address as a target, and the imaging provider could send the images there as well. Likely there would be a few such providers configured in advance and it would merely be a matter of recording which one with the patient's registration information.

Are there other use-cases beyond the simple "order imaging, perform imaging, send to orderer" example ? Certainly there are. The typical emergency case referral, in which a patient is imaged at the first site then transferred for further care, is an example where the same point-to-point store-and-forward paradigm can be used. Though in this case, one needs an infrastructure with sufficient bandwidth to cope with the disaster scenarios where a lot of images on multiple patients need to be transferred very quickly; as a consequence, a more formal arrangement between the two sites is probably necessary than the more ad hoc "email like" pattern for an arbitrary and extensible set of referring doctors.

Teleradiology use-cases, either for a specialist radiologist consultation, or primary interpretation "at home", or even a preliminary interpretation off-shore, are other examples in which exactly the same store-and-forward paradigm is applicable. This is nothing new, and people have been doing exactly this for many years, using DICOM C-STORE transactions with or without compression in some cases and proprietary protocols in others. Some such teleradiology scenarios could be better supported by removing the patient's true identity first and replacing it with a reversible pseudonym (e.g., for specialist or off-shore teleradiology), but that is a subtlety and not a pre-requisite.

All that is new here is essentially recognition that every potential recipient needs a secure DICOM "address", just like an email address, that sending sites be configured to support a multitude of them, and that recipients need to have an Internet connected "DICOM listener" ready to receive images into their own preferred viewing system. I.e., it is a matter of taking well-established existing technology and making it routine rather than occasional.

Does this undermine the need for centralized and regional archives and repositories and registries, and web services orientated infrastructures that are more easily integrated with other sources of information than images ? No, certainly it does not, since there are many other use-cases in which the doctor needs to search for information whose need cannot be so easily anticipated. Still though, many of those use-cases can make use of a certain amount of prior knowledge to optimize the doctor's experience, for example by pre-fetching relevant prior or current images to the local system, again to prevent interactive delays or the need to use unfamiliar user interfaces. After all, it is a rare patient that is seen without an appointment.

However, in the interim, there is no need to wait for these archives and repositories and registries to be built, administered or paid for by someone (else).

In the longer term there will no doubt be competing protocols to DICOM network services for the store-and-forward transaction (which might be zip file encapsulated, and secure or grid ftp based) and for retrieval transactions (which might be web services based). I am sure that both sending and receiving systems will grow to support multiple different transactions as this shakes itself out. The store-and-forward payload will always remain pure DICOM of course, since there is no competition for the "file format" itself (as opposed to the interactive on demand display use case, for which protocols like JPIP and its ilk show promise).

But you don't need to wait for a new infrastructure, or new standards, or a new incentive (reimbursement or regulatory) model to deal with some of the easy use-cases. Just go ahead and do it with DICOM.



Jerad Johnson said...

I enjoyed reading your article as it has addressed precisely an issue that I am attempting to resolve now. I began work with a Telehealth network provider several months back in an IT administration role, my background being IT only, no medical. I would like to further discuss this issue if you have the time. How can I contact you?

Nick James, UK said...

It's elegant, it's simple and it will work.
Nice concise summary Dave.

Dan Banach said...


Very interesting ideas and certainly very doable, but I don't think there is enough general knowledge out there to make this happen in a ubiquitous, secure fashion. Most conversations break down when you talk public versus private IP let alone how to hang an IP out on the internet with TLS encryption and allow others to access it. The current products and knowledge set on how to exchange certs securely is not where it needs to be to make this happen. However, whoever can figure it out will do quite well for themselves.


Fizdom said...

Great post David. We also have a requirement of encrypting the DICOM image using a FIPS 140-2 algorithm and then transferring the image using TLS protocol. Are there any open source / commercial vendors that support it ?

What about Oracle Clinical support for DICOM ?


Unknown said...

David, you stated there are toolkits out there to make this happen. Can you direct me to a good one? I am afraid I am going to need something of the "cook book" genre. Thanks

Marco Crispini said...

I agree. Further, I think it's an 80/20 rule, where push-model sharing fits the most frequently encountered use-cases. Indeed, for exactly the reasons you state we started bbRad with push-model; pull-model is less frequent, although XDS delivers that nicely.