Saturday, May 14, 2016

Image Sharing: Are we there yet? It seems not.

Short version: Why are we still using CDs? It's not the lack of standards or commercial solutions; it seems to be the lack of will, a.k.a. incentives.

Long version.

In Joe Biden's Full Interview With Robin Roberts on the Cancer Moonshot he rightly bemoans (at 08:20 minutes in) the inability of two prestigious organizations, Walter Reed National Military Medical Center in Bethesda, MD, and MD Anderson Cancer Center in Houston, TX, to share his son's medical imaging data electronically, without resorting to flying discs across the country (and even that apparently required the intervention of his son-in-law, who is a surgeon). Unfortunately, he attributes this to the absence of a "common language", which for this particular case is not true, since we have DICOM, the lingua franca of images. Earlier in the interview, though, the issue of incentives is discussed.

This experience mirrors my own, dealing with family attending Memorial Sloan Kettering Cancer Center (MSKCC) in New York, NY. The only mechanism I have to obtain images from there is again via CD. Speaking to one of the radiologists at Memorial, I was told that the inbound problem is just as bad; they employ 10 (!) FTEs whose only function is to stuff received CDs into drives to import them. Apparently they do have one of the commercial network image sharing alternatives installed, but are planning on ditching it and going with another vendor, though I am not sure why; "continuing bandwidth issues" were cited as a concern. MSKCC has a limited patient portal, which does make radiology results available (plain text of course, nothing structured to download), but apparently making images available (whether to View, Download or Transmit) through the portal is not a priority. It does make paying the bills easier though (I guess that is important for them).

Now, it is great that CDs work at all, and work relatively well. And of course they are thoroughly standardized (using the DICOM PS3.10 files that are specified by IHE PDI), as long as they don't come from older Stentor/Philips crap. But surely, well into the 21st Century, we can do better than "sneaker net", especially between major medical centers.

Yesterday, on a call with the HIMSS-SIIM Enterprise Imaging Joint Workgroup Best Practice Image Exchange and Sharing (Team 3) (which I have belatedly joined), there was a discussion about reorganizing the work groups and starting a new one on Standards and Interoperability. I was keen to emphasize that I don't think the interoperability problem is one of a lack of standards or of their implementation, but rather a lack of incentives, funding, prioritization, or indeed of a clearly articulated value proposition for deploying solutions using the standards that we already have (or even using a non-standard solution, if it works).

When the UK folks were facing the problem of image sharing, and the NHS failed to deliver a suitable central solution, an ad hoc network of push-driven sharing evolved: the Image Exchange Portal (IEP), which has since been bought and expanded by Sectra. They claim that:

"100% of NHS Acute Trusts in England plus private hospitals are connected to one another via the IEP network".

As I understand it, these folks were no more incentivized to develop, join or use the IEP than their counterparts in the US are, nor were there any disincentives for not bothering to share images. Perhaps there were just no funds available to employ an army of CD-stuffers to work around the problem, so the pain was felt by the decision makers. Or perhaps the resources for repeat imaging were more tightly controlled (as opposed to being a potential source of more revenue, as in the US), so the shared images were the only images available. I am just guessing, but I doubt it was because the Brits are any more altruistic or sensible than their Cousins (I can say that, since I am nominally a Brit, even though I have lived and worked in the US for decades).

The Canadians have their much vaunted, centrally funded, regional Diagnostic Image Repositories (DI-r's), but I am told that, in some provinces at least, you are lucky if you can get out what you put in, and there is little if any useful access to images submitted by other sites. Some provinces have apparently been able to do better though.

Regardless, all of us who work in medical imaging IT know that the technology is there, that it is affordable, and that the workflow is manageable, despite having to deal with stupid things like the lack of a single national patient identifier. For the sharing use case, it doesn't really matter which standard or combination of standards you choose for the transfer, as long as the payload is DICOM. Whether you push them or pull them; use traditional DICOM protocols, DICOMweb, XDS-I RAD-69, XDR-I or some proprietary mechanism; follow IHE Import Reconciliation Workflow (IRWF) to deal with the identifiers or do it your own way; with a little configuration, the images are going to get where they need to be. It is really just a question of motivating sites to get off their collective asses.
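To make the point that the technical bar really is that low, here is a rough sketch of what pulling a whole study looks like with just one of those options, DICOMweb (WADO-RS), in Python using the requests and requests_toolbelt libraries (the base URL and Study Instance UID are made up for illustration, not any particular site's):

    # A sketch only: the base URL and Study Instance UID are hypothetical.
    import requests
    from requests_toolbelt.multipart.decoder import MultipartDecoder

    base = 'https://imaging.example.org/dicomweb'  # hypothetical
    study_uid = '1.2.3.4.5'                        # hypothetical

    r = requests.get(base + '/studies/' + study_uid,
                     headers={'Accept': 'multipart/related; type="application/dicom"'})
    r.raise_for_status()

    # Each part of the multipart response is one PS3.10-format DICOM object.
    for i, part in enumerate(MultipartDecoder.from_response(r).parts):
        with open('instance%04d.dcm' % i, 'wb') as f:
            f.write(part.content)

Any of the other transports amounts to much the same few lines; the payload that comes back is DICOM either way.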

In the "collective" probably lies part of the problem, since on a large scale, what motivates competitors to share?

For once though, the problem can hardly be laid at the door of the evil vendors who might be accused of "data blocking". For image sharing, there is an army of vendors willing to help solve your sharing problem, as well as open source components with which to assemble your own. There are no format issues, the problem is way simpler than that of general EHR interoperability, and there is no debate over documents versus APIs (all of the radiology and cardiology images, at least, are already in DICOM format, and document-like in that respect).

When I discussed this in late 2012 with Farzad Mostashari, after expressing my disappointment that MU2 didn't insist on image sharing, he wrote that:

"My hope is that the business case for this is so clear that it will happen regardless (perhaps with some help from convening, best practices, etc) and we can point to the on-the-ground reality in two years as the ultimate refutation of the concerns."
 
Now here we are three and a half years later, not two, with a plethora of commercial solutions as well as a multitude of standards for image sharing, but the "business case" is apparently not so clear after all, if the Vice President of the United States still needs to arrange to fly CDs around.

Shame on us all for failing him and his family.

David

PS. As far as I have been able to ascertain, the MACRA proposed rule doesn't provide any incentives or requirements for image sharing either. This may be as much because nobody has submitted sharing-related performance measures as because of any lack of central recognition that this is important or a priority. Maybe the VP should submit comments on it!

PPS. In the same interview, Joe Biden also takes a shot at the much reviled editor of the NEJM, Jeffrey Drazen, over his ill-considered "data parasites" comments (actually "research parasites", in the editorial co-authored with Deputy Editor Dan Longo). While Drazen may be well on his way to becoming the most hated man in America (perhaps overshadowing Martin Shkreli, the AIDS drug robber baron), the issues raised in Drazen's editorial are about a different kind of "sharing" than the subject of this post.

No doubt Drazen's comments reflect the opinion of many in the "elite healthcare research establishment", who seem to regard the right to solely exploit their taxpayer-funded research and data, in order to exclude success by the competitors for their funding (not to mention their unwillingness to have their own data and analysis scrutinized for integrity and repeatability), as something akin to the divine right of kings. Again, this all seems to be a matter of incentives, this time the perverse incentives of the research funding infrastructure, which encourage data hoarding rather than sharing because of the competitive nature of the process. NIH, perhaps crippled by the Bayh–Dole Act, doesn't seem to have any teeth in its data sharing policy when it comes to reviewing and approving grant applications or monitoring their performance, so there is no "level playing field" of mandatory and immediate sharing. Since most of what is published is probably false anyway, perhaps it doesn't matter:(

There is something for everyone in the interview, and the lack of open access to research publications comes in for its share of criticism too. Hear, hear!

I wish the VP every success in his crusade.

Sunday, May 8, 2016

To C-MOVE is human; to C-GET, divine

Summary: C-GET is superior to C-MOVE for use beyond the firewall; contrary to some misleading reports, it has NOT been retired from DICOM, and implementations do exist.

Long Version.

With apologies to Alexander Pope, I wanted to draw attention to what appears to be a common misconception: that DICOM C-GET is retired, obsolete or deprecated.

C-GET is not retired; it most definitely is alive and well, and more importantly, useful.

C-GET is especially useful for DICOM use over the public Internet, beyond the local area network.

As you know, by far the most common way to retrieve a study, series or individual instances is to use a C-MOVE request, which instructs the server (SCP) to initiate the necessary C-STORE operations on one or more different connections (associations) to transfer the data.

This necessitates:
  • the requester being able to listen for and accept inbound connections (i.e., be a C-STORE SCP),
  • that any impediments on the network (like firewalls) allow such inbound connections,
  • that the sender be configured with the host/IP address and port of the requester (since only the Destination AET is communicated in the C-MOVE request), and
  • that Network Address Translation (NAT) be correctly configured to forward the inbound connections to the requester.
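To make the asymmetry concrete, here is a minimal sketch of the C-MOVE pattern using the open source pynetdicom library (the host name, port numbers, AE titles and Study Instance UID are hypothetical, and the remote SCP must already have been configured to map the destination AET to the requester's host and port):

    # A sketch only: host, ports, AE titles and the Study Instance UID are
    # hypothetical, and the remote SCP must ALREADY be configured to map the
    # destination AET 'MY_STORE_SCP' to this machine's address and port.
    from pydicom.dataset import Dataset
    from pynetdicom import AE, evt, AllStoragePresentationContexts
    from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelMove

    # 1. The requester must itself listen for the inbound C-STORE associations.
    store_ae = AE(ae_title='MY_STORE_SCP')
    store_ae.supported_contexts = AllStoragePresentationContexts

    def handle_store(event):
        ds = event.dataset
        ds.file_meta = event.file_meta
        ds.save_as(ds.SOPInstanceUID + '.dcm', write_like_original=False)
        return 0x0000  # Success

    server = store_ae.start_server(('0.0.0.0', 11112), block=False,
                                   evt_handlers=[(evt.EVT_C_STORE, handle_store)])

    # 2. Only the destination AE Title is conveyed in the C-MOVE request itself.
    move_ae = AE(ae_title='MY_MOVE_SCU')
    move_ae.add_requested_context(StudyRootQueryRetrieveInformationModelMove)

    identifier = Dataset()
    identifier.QueryRetrieveLevel = 'STUDY'
    identifier.StudyInstanceUID = '1.2.3.4.5'  # hypothetical

    assoc = move_ae.associate('pacs.example.org', 104)
    if assoc.is_established:
        for status, _ in assoc.send_c_move(
                identifier, 'MY_STORE_SCP',
                StudyRootQueryRetrieveInformationModelMove):
            pass  # pending and final status responses
        assoc.release()
    server.shutdown()

Note that half of this code exists solely to run the listener that receives the images back, and none of it works until the SCP operator has added the requester to their configuration.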
By comparison, a C-GET request does not depend on separate associations being established, but rather "turns around" the same connection on which the request is made, and re-uses it to receive the inbound C-STORE operations. I.e., it is just like an HTTP GET in that all the data comes back on the same connection. It is similar in functionality to a Passive FTP transfer, although in FTP there are actually two separate connections (one for commands and one for data), both initiated by the requester.

With all three protocols, DICOM C-GET, HTTP GET and Passive FTP GET, there is:
  • no need for the requester to be able to respond to inbound connections
  • no need to configure firewalls to allow inbound connections or perform NAT, and
  • no need (other than for access control) to configure the sender to know anything about the requester.
Of course, firewalls may also restrict outbound connections, but that affects all protocols similarly.

All three protocols can of course communicate over secured channels, whether by using TLS or a VPN.
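For the DICOM case, at least with a toolkit like pynetdicom, wrapping the requester's end of a C-GET in TLS is only a few lines (a sketch, with hypothetical server and certificate file names):

    # A sketch only: server name and certificate file names are hypothetical;
    # 2762 is the port registered for DICOM over TLS.
    import ssl
    from pynetdicom import AE
    from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelGet

    ctx = ssl.create_default_context(cafile='ca.pem')  # verify the server
    # ctx.load_cert_chain('client.pem', 'client.key')  # if mutual TLS is required

    ae = AE(ae_title='MY_GET_SCU')
    ae.add_requested_context(StudyRootQueryRetrieveInformationModelGet)
    assoc = ae.associate('pacs.example.org', 2762, tls_args=(ctx, None))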

So, if C-GET is so useful, why is it not as commonly implemented?

Historically, when DICOM was first getting started and being used mostly for mini-PACS clusters of acquisition modalities and workstations, the thinking of the designers went something like this. First, I have to be able to send and receive images by pushing them around, so I have to implement C-STORE as an SCU and SCP. Now, the product manager says I have to allow users to pull them too, so the easiest way is to write a C-MOVE SCU and SCP to command that the transfer take place, since I can just reuse the existing C-STORE SCU and SCP code that I have already written. I only have a handful of devices to connect on the LAN, so the administrative burden of configuring them all to know about each other is not an issue. QED.

As smaller systems were scaled to enterprise level, and larger proprietary systems added DICOM Q/R capability to allow the same mini-PACS workstations to gain access to the archive, the use of C-MOVE became entrenched, without much further thought being given to the potential future benefits of C-GET for use beyond the walls of the enterprise or on a really large scale. Much later, IHE specified C-MOVE for the Retrieve Images (RAD-16) transaction (in Year 2 for 2000), which subsequently became part of the Scheduled Workflow Profile, but did not mention C-GET, presumably because the conventional wisdom at the time was that C-MOVE was much more widely implemented.

So who does support C-GET?

A Google search reveals quite a few systems that do. There are some open source or freely available SCUs and SCPs too. When I monitor at Connectathons, it is extremely convenient to be able to retrieve stuff from testers' systems (to compare what they have with what is expected) without having to go and bother them to add my configuration for C-MOVE, and offhand I would guess that about 15-25% of the systems respond to a C-GET, including, of course, the central archive, which for the last few years has been dcm4chee. Dave Harvey's publicly accessible server and PixelMed's support C-GET, as do clients like OsiriX, though I don't think either ClearCanvas or K-PACS do:(

The tricky thing with implementing C-GET as an SCU is the Association Negotiation, and particularly the (annoying, gratuitous, arbitrary) limit on the total number of Presentation Contexts caused by the "odd integers between 1 and 255" requirement on the single byte Presentation-context-ID, which allows at most 128 contexts. The naive (and inefficient) approach of listing all possible (storage) SOP Classes permuted with all possible Transfer Syntaxes reaches that limit quickly nowadays. It helps a lot to let the SCP choose the Transfer Syntax, and to narrow the proposed SOP Classes using SOP Classes in Study from an earlier STUDY level C-FIND (or plausible SOP Classes based on Modalities in Study, or, if these are not supported as return keys by the C-FIND SCP, Modality from a SERIES level C-FIND, or, worst case, the SOP Class UID from an IMAGE level C-FIND), though this does limit the re-usability of the Association if you want to keep it alive in a "connection pool" for later retrievals.
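Putting the pieces together, here is roughly what a C-GET requester looks like with pynetdicom (a sketch; host, port and UIDs are hypothetical). The storage Presentation Contexts are proposed by the requester but performed by the peer, hence the SCP/SCU Role Selection items claiming the SCP role, and everything comes back on the one association:

    # A sketch only: host, port and UIDs are hypothetical.
    from pydicom.dataset import Dataset
    from pynetdicom import AE, evt, build_role, StoragePresentationContexts
    from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelGet

    ae = AE(ae_title='MY_GET_SCU')
    ae.add_requested_context(StudyRootQueryRetrieveInformationModelGet)

    # Only 128 contexts fit (odd IDs 1 through 255), and the Q/R GET model
    # uses one, so at most 127 storage contexts can be proposed here;
    # narrowing them from an earlier C-FIND, as described above, helps.
    roles = []
    for cx in StoragePresentationContexts[:127]:
        ae.add_requested_context(cx.abstract_syntax)
        roles.append(build_role(cx.abstract_syntax, scp_role=True))

    def handle_store(event):
        ds = event.dataset
        ds.file_meta = event.file_meta
        ds.save_as(ds.SOPInstanceUID + '.dcm', write_like_original=False)
        return 0x0000  # Success

    identifier = Dataset()
    identifier.QueryRetrieveLevel = 'STUDY'
    identifier.StudyInstanceUID = '1.2.3.4.5'  # hypothetical

    assoc = ae.associate('pacs.example.org', 104, ext_neg=roles,
                         evt_handlers=[(evt.EVT_C_STORE, handle_store)])
    if assoc.is_established:
        for status, _ in assoc.send_c_get(
                identifier, StudyRootQueryRetrieveInformationModelGet):
            pass  # the instances arrive via handle_store on this association
        assoc.release()

No listener, no inbound firewall rule, and nothing for the SCP operator to configure beyond access control.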

From a performance perspective, single connection C-GET and C-MOVE are similar, which is not surprising since both are often limited by latency effects on the synchronous C-STORE responses. In the absence of Asynchronous Operations support, it is obviously easier to accelerate C-MOVE by opening multiple return Associations across which to spread the C-STORE operations. One can't do that with C-GET, unless one selectively retrieves at the IMAGE level, which is possible, but tedious to set up and requires an initial IMAGE level C-FIND to get the SOP Instance UIDs. Using large multi-frame image instances mitigates this issue.
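For illustration, here is a rough sketch of that IMAGE level splitting approach, assuming (for simplicity) a single series; find_sop_instance_uids() is a hypothetical helper performing the initial IMAGE level C-FIND, and fetch_batch() would associate (with role selection) and call send_c_get() exactly as in the earlier sketch:

    # A sketch only, assuming a single series; the UIDs are hypothetical, and
    # find_sop_instance_uids() is a hypothetical helper that performs the
    # initial IMAGE level C-FIND.
    from concurrent.futures import ThreadPoolExecutor
    from pydicom.dataset import Dataset

    STUDY_UID = '1.2.3.4.5'     # hypothetical
    SERIES_UID = '1.2.3.4.5.6'  # hypothetical

    def fetch_batch(sop_instance_uids):
        identifier = Dataset()
        identifier.QueryRetrieveLevel = 'IMAGE'
        identifier.StudyInstanceUID = STUDY_UID
        identifier.SeriesInstanceUID = SERIES_UID
        # the unique key at the IMAGE level may be a multi-valued UID list
        identifier.SOPInstanceUID = sop_instance_uids
        # ... associate (with role selection) and send_c_get(), as sketched above ...

    all_uids = find_sop_instance_uids(STUDY_UID, SERIES_UID)  # hypothetical helper
    batches = [all_uids[i::4] for i in range(4)]  # interleave into four batches
    with ThreadPoolExecutor(max_workers=4) as pool:
        for batch in batches:
            pool.submit(fetch_batch, batch)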

It would be interesting to see, for the simple pull use case, how closely C-GET with Asynchronous Operations support could approach raw socket transfer speeds, and how it would compare with an HTTP GET or Passive FTP GET.

The security considerations (including channel confidentiality, access control and audit trail) would seem to be similar for C-GET and C-MOVE, and both TLS and user identity communication are available if necessary.

David

PS. I was motivated to write this when I noticed that Sébastien Jodogne says in Note 1 of his description of "C-Move: Query/retrieve" documenting his Orthanc server:

"Even if C-Move may seem counter-intuitive, it is the only way to initiate a query/retrieve. Once upon a time, there was a conceptually simpler C-Get command, but this command is now deprecated."

I asked Sébastien where he got this impression, and he attributes the source of his confusion to this post by Roni Zaharia. Both are incorrect in this respect.

During the great DICOM purge of 2006 (Sup 98), though the Patient/Study Only Query/Retrieve Information Model was retired from the Query/Retrieve Service, C-GET was left alone, and none of the other Supplements or CPs related to retirement touched it either. On the contrary, subsequent additions to the standard to support Instance and Frame Level Retrieve and Composite Instance Retrieve Without Bulk Data (Sup 119) extended the use of C-GET significantly.

Sébastien profusely apologizes for relying on hearsay and failing to check the standard, and hopes to implement C-GET when he has a chance.

PPS. I observe in passing that Roni also recommends the use of Patient Root rather than Study Root queries, with which I strongly disagree. In the early days, many systems' databases were implemented with the study as the top level, and the patient's identifiers and characteristics were managed as attributes of the study, if for no other reason than that HIS/RIS integration was not as common as it is today, and patient level stuff was often inconsistent and/or incorrect. IHE, for example, when Q/R was added in Year Two, specified the Study Root C-FIND as required and the Patient Root as optional for the Query Images (RAD-14) and Retrieve Images (RAD-16) transactions, and that is still true in Scheduled Workflow today. I never use Patient Root if I can avoid it, and Roni's assertion that "everyone supports it" certainly didn't use to be true.
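For what it is worth, finding a patient's studies with the Study Root model is hardly any more work; a minimal STUDY level C-FIND sketch with pynetdicom follows (host, port and Patient ID hypothetical), in which the patient's identifier is simply a matching key at the STUDY level:

    # A sketch only: host, port and Patient ID are hypothetical. No PATIENT
    # level step is needed with the Study Root information model.
    from pydicom.dataset import Dataset
    from pynetdicom import AE
    from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

    ae = AE(ae_title='MY_FIND_SCU')
    ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

    identifier = Dataset()
    identifier.QueryRetrieveLevel = 'STUDY'
    identifier.PatientID = '123456'   # hypothetical matching key
    identifier.StudyInstanceUID = ''  # requested return keys
    identifier.StudyDate = ''

    assoc = ae.associate('pacs.example.org', 104)
    if assoc.is_established:
        for status, ds in assoc.send_c_find(
                identifier, StudyRootQueryRetrieveInformationModelFind):
            if status and status.Status in (0xFF00, 0xFF01):
                print(ds.StudyInstanceUID, ds.get('StudyDate', ''))
        assoc.release()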

PPPS. Some old comp.protocols.dicom posts on the subject of C-GET include the following, which show the "evolution" of my thinking:

C-MOVE vs. C-GET
Difference between C-GET and C-MOVE
DICOM retrieve (C-GET-RQ) example anyone?
C-GET vs C-MOVE (was Retrieving off-line studies from DICOM archive)
C-Get versus C-Move, was Re: C-Move