Saturday, May 14, 2016

Image Sharing: Are we there yet? It seems not.

Short version: Why are we still using CDs? It's not the lack of standards or commercial solutions; it seems to be the lack of will, a.k.a. incentives.

Long version.

In Joe Biden's Full Interview With Robin Roberts on the Cancer Moonshot he rightly bemoans (at 08:20 minutes in) the inability of two prestigious organizations, Walter Reed Hospital in Washington, D.C., and MD Anderson Cancer Center in Houston, TX, to share his son's medical imaging data electronically, without resorting to flying discs across the country (and even that apparently required the intervention of his son-in-law, who is a surgeon). Unfortunately, he attributes this to an absence of a "common language", which for this particular case is not true (since we have DICOM, which is the lingua franca of images). Earlier in the interview, the issue of incentives is discussed though.

This experience mirrors my own, dealing with family attending Memorial Sloan Kettering Cancer Center (MSKCC) in New York, NY. The only mechanism I have to obtain images from there is again via CD. Speaking to one of the radiologists at Memorial, I was told that the inbound problem is just as bad; they employ 10 (!) FTEs whose only function is to stuff CDs received into drives to import them. Apparently they do have one of the commercial network image sharing alternatives installed, but are planning on ditching it and going with another vendor, not sure why. "Continuing bandwidth issues" were cited as a concern. MSKCC has a limited patient portal, which does have radiology results available through it (plain text of course, nothing structured to download), but apparently making images available (whether to View, Download or Transmit) through the portal is not a priority. It does make paying the bills easier though (I guess that is important for them).

Now, it is great that CDs work at all, and work relatively well. And of course they are thoroughly standardized (using the DICOM PS3.10 files that are specified by IHE PDI), as long as they don't come from older Stentor/Philips crap. But surely, well into the 21st Century, we can do better than "sneaker net", especially between major medical centers.

Yesterday, on a call with the HIMSS-SIIM Enterprise Imaging Joint Workgroup Best Practice Image Exchange and Sharing (Team 3) (which I have belatedly joined), there was a discussion about reorganizing the work groups and starting a new one on Standards and Interoperability. I was keen to emphasize that I don't think the interoperability problem is one of a lack of standards or implementation of them, but rather a lack of incentives, funding, prioritization or indeed a clearly articulated value proposition for deploying solutions, using the standards that we already have (or even using a non-standard solution, if it works).

When the UK folks were facing the problem of image sharing, and the NHS failed to deliver a suitable central solution, an ad hoc network of push-driven sharing evolved, the Image Exchange Portal (IEP), which has been bought and expanded by Sectra. They claim that:

"100% of NHS Acute Trusts in England plus private hospitals are connected to one another via the IEP network".

As I understand it, these guys were no more incentivized to develop, join or use the IEP sharing than are their counterparts in the US, nor were there any disincentives for not bothering to share images. Perhaps there were just no funds available to employ an army of CD-stuffers to work around the problem, so the pain was being felt by the decision makers. Or perhaps the resources for repeat imaging were more tightly controlled (as opposed to being a potential source of more revenue in the US), so the shared images were the only images available. I am just guessing, but I doubt it was because the Brits are any more altruistic or sensible than their Cousins (I can say that, since I am nominally a Brit, even though I have lived and worked in the US for decades).

The Canadians have their much vaunted, centrally funded, regional Diagnostic Image Repositories (DI-r's), but I am told that, in some provinces at least, you are lucky if you can get out what you put in, and there is little if any useful access to images submitted by other sites. Some provinces have apparently been able to do better though.

Regardless, all of us who work in medical imaging IT know that the technology is there, and is affordable, and the workflow is manageable despite having to deal with stupid things like the lack of a single national patient identifier. It doesn't really matter for the sharing use case which standard or combination of standards you choose for the transfer, as long as the payload is DICOM. Whether you push them or pull them, use traditional DICOM protocols or DICOMweb or XDS-I RAD-69 or XDR-I or some proprietary mechanism, or follow IHE Import Reconciliation Workflow (IRWF) to deal with the identifiers or do it your own way, with a little configuration, the images are going to get where they need to be. It is really just a question of motivating sites to get off their collective asses.

In the "collective" probably lies part of the problem, since on a large scale, what motivates competitors to share?

For once though, the problem can hardly be laid at the door of the evil vendors who might be accused of "data blocking". For image sharing, there is an army of vendors willing to help solve your sharing problem, as well as open source components with which to assemble your own; there are no format issues; the problem is way simpler than that of general EHR interoperability; and there is no debate over documents versus APIs (all of the radiology and cardiology images, at least, are already in DICOM format and document-like in that respect).

When I discussed this in late 2012 with Farzad Mostashari, after expressing my disappointment that MU2 didn't insist on image sharing, he wrote that:

"My hope is that the business case for this is so clear that it will happen regardless (perhaps with some help from convening, best practices, etc) and we can point to the on-the-ground reality in two years as the ultimate refutation of the concerns."
 
Now here we are three and a half years later, not two, with a plethora of commercial solutions as well as a multitude of standards for image sharing, but the "business case" is apparently not so clear after all, if the Vice President of the United States still needs to arrange to fly CDs around.

Shame on us all for failing him and his family.

David

PS. As far as I have been able to ascertain, the MACRA proposed rule doesn't provide any incentives or requirements for image sharing either. This may be as much because nobody has submitted sharing-related performance measures as because of any lack of central recognition that this is important or a priority. Maybe the VP should submit comments on it!

PPS. In the same interview, Joe Biden also takes a shot at the much reviled editor of the NEJM, Jeffrey Drazen, over his ill-considered "data parasites" comments (actually "research parasites", in the editorial co-authored with Deputy Editor Dan Longo). While Drazen may be well on his way to becoming the most hated man in America (perhaps overshadowing Martin Shkreli, the AIDS drug robber baron), the issues raised in Drazen's editorial are about a different kind of "sharing" than the subject of this post.

No doubt Drazen's comments reflect the opinion of many in the "elite healthcare research establishment", who seem to regard the right to solely exploit their taxpayer-funded research and data, to the exclusion of success by their funding competitors (not to mention their unwillingness to have their own data and analysis scrutinized for integrity and repeatability), as something akin to the divine right of kings. Again, this all seems to be a matter of incentives, this time the perverse incentives of the research funding infrastructure that encourage data hoarding rather than sharing due to the competitive nature of the process. NIH, perhaps crippled by the Bayh–Dole Act, doesn't seem to have any teeth in its data sharing policy when it comes to reviewing and approving grant applications or monitoring their performance, so there is no "level playing field" of mandatory and immediate sharing. Since most of what is published is probably false anyway, perhaps it doesn't matter :(

There is something for everyone in the interview, and the lack of open access to research publications comes in for its share of criticism too. Hear, hear!

I wish the VP every success in his crusade.

Sunday, May 8, 2016

To C-MOVE is human; to C-GET, divine

Summary: C-GET is superior to C-MOVE for use beyond the firewall; contrary to some misleading reports, it has NOT been retired from DICOM, and implementations do exist.

Long Version.

With apologies to Alexander Pope, I wanted to draw attention to what appears to be a common misconception, that DICOM C-GET is retired or obsolete or deprecated.

C-GET is not retired; it most definitely is alive and well, and more importantly, useful.

C-GET is especially useful for DICOM use over the public Internet, beyond the local area network.

As you know, by far the most common way to retrieve a study, series or individual instances is to use a C-MOVE request, which instructs the server (SCP) to initiate the necessary C-STORE operations on one or more different connections (associations) to transfer the data.

This necessitates:
  • the requester being able to listen for and accept inbound connections (i.e., be a C-STORE SCP),
  • that any impediments on the network (like firewalls) allow such inbound connections,
  • that the sender be configured with the host/IP address and port of the requester (since only the Destination AET is communicated in the C-MOVE request), and
  • that Network Address Translation (NAT) be correctly configured to forward the inbound connections to the requester.
By comparison, a C-GET request does not depend on separate associations being established, but rather "turns around" the same connection on which the request is made, and re-uses it to receive the inbound C-STORE operations. That is, it is just like an HTTP GET, in that all the data comes back on the same connection. It is similar in functionality to a Passive FTP transfer, though in FTP there are actually two separate connections, albeit both initiated by the requester (one for commands and one for data).
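To make the "turned around" connection concrete, here is a minimal sketch of a C-GET SCU, written against the open source pynetdicom library (my choice for illustration only; any toolkit with C-GET support would do, and the host, port, AE title and study UID are all hypothetical):

```python
# Minimal C-GET SCU sketch (pynetdicom; host/port/AET/UID hypothetical).
# The C-STORE sub-operations arrive on the SAME association, so no
# inbound connection, firewall hole or NAT configuration is needed.
from pynetdicom import AE, evt, build_role
from pynetdicom.sop_class import (
    StudyRootQueryRetrieveInformationModelGet,
    CTImageStorage,
)
from pydicom.dataset import Dataset

def handle_store(event):
    """Save each dataset received over the single association."""
    ds = event.dataset
    ds.file_meta = event.file_meta
    ds.save_as(ds.SOPInstanceUID + '.dcm')
    return 0x0000  # Success

ae = AE(ae_title='GETSCU')
ae.add_requested_context(StudyRootQueryRetrieveInformationModelGet)
# The requester must also offer the storage Presentation Contexts,
# negotiating itself into the SCP role for them.
ae.add_requested_context(CTImageStorage)
role = build_role(CTImageStorage, scp_role=True)

identifier = Dataset()
identifier.QueryRetrieveLevel = 'STUDY'
identifier.StudyInstanceUID = '1.2.3.4'  # hypothetical

assoc = ae.associate('qr.example.com', 11112, ext_neg=[role],
                     evt_handlers=[(evt.EVT_C_STORE, handle_store)])
if assoc.is_established:
    for status, _ in assoc.send_c_get(
            identifier, StudyRootQueryRetrieveInformationModelGet):
        if status:
            print('C-GET status: 0x{0:04x}'.format(status.Status))
    assoc.release()
```

Note that the only thing the SCU needs to know about the SCP is its address and port; nothing about the SCU needs to be pre-configured on the SCP side (other than for access control).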

With all three protocols, DICOM C-GET, HTTP GET and Passive FTP GET, there is:
  • no need for the requester to be able to respond to inbound connections
  • no need to configure firewalls to allow inbound connections or perform NAT, and
  • no need (other than for access control) to configure the sender to know anything about the requester.
Of course, firewalls may also restrict outbound connections, but that affects all protocols similarly.

All three protocols can of course communicate over secured channels, whether by using TLS or a VPN.

So, if C-GET is so useful, why is it not as commonly implemented?

Historically, when DICOM was first getting started and being used mostly for mini-PACS clusters of acquisition modalities and workstations, the thinking of the designers went something like this. First, I have to be able to send and receive images by pushing them around, so I have to implement C-STORE as an SCU and SCP. Now, the product manager says I have to allow users to pull them too, so the easiest way is to write a C-MOVE SCU and SCP to command that the transfer takes place, but I can just reuse the existing C-STORE SCU and SCP code that I have already written. I only have a handful of devices to connect on the LAN, so the administrative burden of configuring them all to know about each other is not an issue. QED.

As smaller systems were scaled to enterprise level, and larger proprietary systems added DICOM Q/R capability to allow the same mini-PACS workstations to gain access to the archive, the use of C-MOVE became entrenched, without much further thought being given to the potential future benefits of C-GET for use beyond the walls of the enterprise or on a really large scale. Much later, IHE specified C-MOVE for the Retrieve Images (RAD-16) transaction (in Year 2 for 2000), which subsequently became part of the Scheduled Workflow Profile, but did not mention C-GET, presumably because the conventional wisdom at the time was that C-MOVE was much more widely implemented.

So who does support C-GET?

A Google search reveals quite a few systems that do. There are some open source or freely available SCUs and SCPs too. When I monitor at Connectathons, it is extremely convenient to be able to retrieve stuff from testers' systems (to compare what they have with what is expected) without having to go and bother them to add my configuration for C-MOVE, and offhand I would guess about 15-25% of the systems respond to a C-GET, including, of course, the central archive, which for the last few years has been dcm4chee. Dave Harvey's publicly accessible server and PixelMed's support C-GET, as do clients like OsiriX, though I don't think either ClearCanvas or K-PACS do :(

The tricky thing with implementing C-GET as an SCU is the Association Negotiation, and particularly the (annoying, gratuitous, arbitrary) limit on the total number of Presentation Contexts caused by the "odd integers between 1 and 255" requirement on the single byte Presentation-context-ID (i.e., a maximum of 128 contexts). The naive (and inefficient) approach of listing all possible (storage) SOP Classes permuted with all possible Transfer Syntaxes reaches that limit quickly nowadays. Allowing the SCP to choose the Transfer Syntax, and using SOP Classes in Study from an earlier STUDY level C-FIND (or using plausible SOP Classes based on Modalities in Study, or, if these are not supported as return keys by the C-FIND SCP, Modality from a SERIES level C-FIND, or, worst case, the SOP Class UID from an IMAGE level C-FIND) helps a lot with this, though it does limit the re-usability of the Association if you want to keep it alive in a "connection pool" for later retrievals.
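As an illustration of that approach, the following sketch (pynetdicom again, with hypothetical host, port and UIDs) uses SOP Classes in Study (0008,0062) returned by a STUDY level C-FIND to propose only the Presentation Contexts actually needed:

```python
# Sketch: constrain the proposed Presentation Contexts for a C-GET
# using SOP Classes in Study (0008,0062) from a prior STUDY level
# C-FIND, instead of permuting every storage SOP Class with every
# Transfer Syntax and blowing through the 128 context limit.
# Beware: not every C-FIND SCP supports (0008,0062) as a return key.
from pynetdicom import AE, build_role
from pynetdicom.sop_class import (
    StudyRootQueryRetrieveInformationModelFind,
    StudyRootQueryRetrieveInformationModelGet,
)
from pydicom.dataset import Dataset

query = Dataset()
query.QueryRetrieveLevel = 'STUDY'
query.StudyInstanceUID = '1.2.3.4'  # hypothetical
query.SOPClassesInStudy = ''        # requested return key

find_ae = AE(ae_title='GETSCU')
find_ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)
sop_classes = []
assoc = find_ae.associate('qr.example.com', 11112)  # hypothetical
if assoc.is_established:
    for status, ident in assoc.send_c_find(
            query, StudyRootQueryRetrieveInformationModelFind):
        if status and status.Status in (0xFF00, 0xFF01) and ident:
            value = getattr(ident, 'SOPClassesInStudy', None)
            if value is not None:
                # VM may be 1 (a plain string) or 1-n (list-like)
                sop_classes = ([value] if isinstance(value, str)
                               else list(value))
    assoc.release()

# Propose one context per SOP Class actually present, letting the SCP
# choose among the default Transfer Syntaxes offered for each.
get_ae = AE(ae_title='GETSCU')
get_ae.add_requested_context(StudyRootQueryRetrieveInformationModelGet)
roles = []
for uid in sop_classes:
    get_ae.add_requested_context(uid)
    roles.append(build_role(uid, scp_role=True))
# ... then associate with ext_neg=roles and send_c_get as before ...
```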

From a performance perspective, single-connection C-GET and C-MOVE are similar, which is not surprising since both are often limited by latency effects on the synchronous C-STORE responses. In the absence of Asynchronous Operations support, it is obviously easier to accelerate C-MOVE by opening multiple return Associations across which to spread the C-STORE operations, which one can't do with C-GET, unless one selectively retrieves at the IMAGE level; that is possible, but tedious to set up, and requires an initial IMAGE level C-FIND to get the SOP Instance UIDs. Using large multi-frame image instances mitigates this issue.
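For what it is worth, the tedious IMAGE level variant might look something like the following sketch (pynetdicom again; host, port and UIDs hypothetical), with the SOP Instance UIDs from the initial IMAGE level C-FIND spread across a handful of concurrent associations:

```python
# Sketch: accelerating retrieval with IMAGE level C-GETs spread over
# several concurrent associations (host/port/UIDs hypothetical; the
# SOP Instance UIDs are assumed to come from an IMAGE level C-FIND).
from concurrent.futures import ThreadPoolExecutor

from pydicom.dataset import Dataset
from pynetdicom import AE, evt, build_role
from pynetdicom.sop_class import (
    StudyRootQueryRetrieveInformationModelGet,
    CTImageStorage,
)

STUDY_UID = '1.2.3.4'     # hypothetical
SERIES_UID = '1.2.3.4.5'  # hypothetical

def handle_store(event):
    ds = event.dataset
    ds.file_meta = event.file_meta
    ds.save_as(ds.SOPInstanceUID + '.dcm')
    return 0x0000

def fetch(sop_instance_uids):
    """Retrieve a chunk of instances over one dedicated association."""
    ae = AE(ae_title='GETSCU')
    ae.add_requested_context(StudyRootQueryRetrieveInformationModelGet)
    ae.add_requested_context(CTImageStorage)
    assoc = ae.associate(
        'qr.example.com', 11112,
        ext_neg=[build_role(CTImageStorage, scp_role=True)],
        evt_handlers=[(evt.EVT_C_STORE, handle_store)])
    if assoc.is_established:
        for uid in sop_instance_uids:
            identifier = Dataset()
            identifier.QueryRetrieveLevel = 'IMAGE'
            identifier.StudyInstanceUID = STUDY_UID
            identifier.SeriesInstanceUID = SERIES_UID
            identifier.SOPInstanceUID = uid
            for _ in assoc.send_c_get(
                    identifier, StudyRootQueryRetrieveInformationModelGet):
                pass
        assoc.release()

uids = ['1.2.3.4.5.1', '1.2.3.4.5.2']    # hypothetical, from C-FIND
chunks = [uids[i::4] for i in range(4)]  # four workers
with ThreadPoolExecutor(max_workers=4) as pool:
    pool.map(fetch, chunks)
```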

It would be interesting to see, though, for the simple pull use case, how closely C-GET with Asynchronous Operations support could approach raw socket transfer speeds, and how it would compare with an HTTP GET or Passive FTP GET.

The security considerations (including channel confidentiality, access control and audit trails) would seem to be similar for C-GET and C-MOVE, and both TLS and user identity communication are available if necessary.
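For example, pynetdicom exposes a TLS hook that secures the very same association (a sketch only; the certificate and key file names are hypothetical):

```python
# Sketch: the same C-GET association secured with TLS (certificate
# and key file names are hypothetical).
import ssl
from pynetdicom import AE

context = ssl.create_default_context(
    ssl.Purpose.SERVER_AUTH, cafile='server_ca.pem')
context.load_cert_chain(certfile='client.pem', keyfile='client.key')

ae = AE(ae_title='GETSCU')
# ... add the query/retrieve and storage contexts and roles as above ...
assoc = ae.associate('qr.example.com', 2762,  # registered DICOM TLS port
                     tls_args=(context, 'qr.example.com'))
```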

David

PS. I was motivated to write this when I noticed that Sébastien Jodogne says in Note 1 of his description of "C-Move: Query/retrieve" documenting his Orthanc server:

"Even if C-Move may seem counter-intuitive, it is the only way to initiate a query/retrieve. Once upon a time, there was a conceptually simpler C-Get command, but this command is now deprecated."

I asked Sébastien where he got this impression, and he attributes the source of his confusion to this post by Roni Zaharia. Both are incorrect in this respect.

During the great DICOM purge of 2006 (Sup 98), though the Patient/Study Only Query/Retrieve Information Model was retired from the Query/Retrieve Service, C-GET was left alone, and none of the other Supplements or CPs related to retirement touched it either. On the contrary, subsequent additions to the standard to support Instance and Frame Level Retrieve and Composite Instance Retrieve Without Bulk Data (Sup 119) extended the use of C-GET significantly.

Sébastien profusely apologizes for relying on hearsay and failing to check the standard, and hopes to implement C-GET when he has a chance.

PPS. I observe in passing that Roni also recommends the use of Patient Root rather than Study Root queries, which I would strongly disagree with. In the early days, many systems' databases were implemented with the study as the top level and the patient's identifiers and characteristics were managed as attributes of the study, if for no other reason than HIS/RIS integration was not as common as it is today, and patient level stuff was often inconsistent and/or incorrect. IHE, for example, when Q/R was added in Year Two, specified the Study Root C-FIND as required and the Patient Root as optional for the Query Images (RAD-14) and Retrieve Images (RAD-16) transactions, and that is still true in Scheduled Workflow today. I never use Patient Root if I can avoid it, and Roni's assertion that "everyone supports it" certainly didn't used to be true.

PPPS. Some old comp.protocols.dicom posts on the subject of C-GET include the following, which show the "evolution" of my thinking:

C-MOVE vs. C-GET
Difference between C-GET and C-MOVE
DICOM retrieve (C-GET-RQ) example anyone?
C-GET vs C-MOVE (was Retrieving off-line studies from DICOM archive)
C-Get versus C-Move, was Re: C-Move

Wednesday, March 2, 2016

DICOM and SNOMED back in bed together

Summary: Users and commercial and open source DICOM developers can be reassured that they may continue to use the subset of SNOMED concepts in the DICOM standard in their products and software, globally and without a fee or individual license.

Long Version.

The news from IHTSDO and a summary of the relationship can be found at this IHTSDO DICOM Partnership page, including links to the text of the agreement and a press release.

DICOM has used SNOMED since the days of the SNOMED DICOM Microglossary in the mid-nineties. This was the work of Dean Bidgood, who was not only very actively involved in DICOM but also a member of the SNOMED Editorial Board. As SNOMED evolved over time, it became necessary to reach an agreement with the original producers, the College of American Pathologists. This allowed DICOM to continue to publish and use SNOMED codes in software and products without a fee, and in return DICOM continued to contribute imaging concepts to be added to SNOMED.

This has worked out really well so far, so it is reassuring that we now have a similar agreement in place with the new owners, IHTSDO.

The subset of SNOMED concepts that DICOM may use includes all concepts that are currently in the standard as of the 2016a release and that are active in the SNOMED 2016 INT release, as well as those in some upcoming Supplements and CPs. I have been going through and cleaning up any concepts that have been inactivated in SNOMED (due to errors, duplicates, ambiguities, etc.) and adding them to CP 1495 to replace them and mark them as retired. This is pretty tedious but with the XML DocBook source of the standard, a lot of the checking can be automated, so this process should converge pretty soon. Note that per both the original agreement with CAP and the new agreement with IHTSDO, there is recognition that products and software that use retired inactive codes may continue to do so if necessary.
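For the curious, the gist of the automated check is trivial; here is a crude sketch (the RF2 concept Snapshot file name is from a hypothetical release, "part16.xml" stands in for the DocBook source, and a naive regex stands in for proper XPath/XSLT extraction):

```python
# Crude sketch: flag SNOMED concept identifiers appearing in the
# DocBook source that are inactive in an RF2 Snapshot release.
# File names are hypothetical; the regex is naive (any 6-18 digit
# number is treated as a candidate SCTID).
import csv
import re

# RF2 concept Snapshot columns: id, effectiveTime, active, moduleId, ...
active = {}
with open('sct2_Concept_Snapshot_INT_20160131.txt', newline='') as f:
    reader = csv.reader(f, delimiter='\t')
    next(reader)  # skip header row
    for row in reader:
        active[row[0]] = (row[2] == '1')

sctid = re.compile(r'\b\d{6,18}\b')
with open('part16.xml') as f:
    for lineno, line in enumerate(f, 1):
        for candidate in sctid.findall(line):
            if candidate in active and not active[candidate]:
                print('line %d: inactive concept %s' % (lineno, candidate))
```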

A small subset of codes (for non-human applications) have been handed off by IHTSDO to the maintainers of the Veterinary Extension of SNOMED CT, and we have been reassured by those folks that it is OK for us to continue to use them too.

If anyone actually needs a tabulated list of all the concepts in the SNOMED DICOM subset in some more convenient form than the PDF that lists the concept identifiers, just let me know and I can send you some of my working files. I also have some XSLT style sheets that can be used to trawl the source for both coded tuples and codes in tables, so if you need to do that sort of thing, just let me know (I will add these to the source and rendering archive file in the next release of the DICOM standard).

David

Tuesday, March 1, 2016

How many (medical image exchange) standards can dance on the head of a pin?

Summary: There are too many alternative standards for sharing images. For the foreseeable future, traditional DICOM DIMSE services will remain the mainstay of modality and intra-enterprise image management, perhaps with the exception of viewers used internally. The WADO-URI and WADO-RS services are attractive in their simplicity and have sufficient features for many other uses, including submission of other 'ology images using STOW (WIC). If one has not already deployed it (and even then), one might want to give serious consideration to "skipping over" XDS-I as a dead-end digression and going straight to the more mobile and ZFP friendly WADO-RS instead (including potentially revised MHD-I). The RSNA Image Share Validation program for XDS-I is perhaps not such a cool idea, and should be refocused on validating WADO-RS-based services. How/if FHIR ImagingStudy and ImagingObjectSelection fit in remains to be determined.

Long Version.

Do standards have location in space, but not extension, so the answer is an infinite number? Or no location at all, so, perhaps none?

We certainly have no shortage of standards in general, as the sarcastic quote from Andy Tanenbaum ("The nice thing about standards is that you have so many to choose from") illustrates. This xkcd cartoon explains one among many reasons for their proliferation.

Some of the drivers that encourage excessive proliferation of multiple standards for the same thing include:
  • extension of an existing successful standard into new domains to compete with an incumbent
  • "technology refreshment" (wanting to use the latest and greatest trendy buzzword compliant mechanisms that may or may not offer real benefit)
  • simpler solutions to address real or perceived complexity of existing standards
  • "not invented here"
  • laziness (easier to write than to read)
  • pettiness (we hate your standard and the horse it rode in on)
  • low barrier to entry (anyone can use the word "standard")
  • bad standards (seemed like a good idea to someone at the time)
So what does this mean for medical image sharing, both for traditional radiology and cardiology applications, as well as the other 'ologies?

If we just consider DICOM image and related "payloads" for the moment, and focus strictly on the exchange services, currently one has a choice of several overlapping mainstream "standard" services:
  • traditional DICOM DIMSE services (C-STORE, C-MOVE, C-GET)
  • WADO-URI
  • the DICOMweb RESTful services (WADO-RS, STOW-RS, QIDO-RS)
  • XDS-I.b (with its RAD-69 retrieval)
as well as some niche services for specific purposes:
  • WADO-WS
  • the PS3.19 Application Hosting transport
Each of these can be considered from many perspectives, including:
  • installed base (for various scenarios)
  • intra-enterprise (LAN) capability
  • extra-enterprise (remote, WAN) capability
  • cross-enterprise (WAN, cross identity and security domain) capability
  • performance (bandwidth and latency)
  • functionality (to support simple and advanced use cases)
  • complexity (from developer, deployment and dependency aspect)
  • security support
  • scalability support (server load, load balancing, caching)
  • reliability support
  • ...
However, to cut a long story short, at one end of the spectrum we have the ancient DICOM services. These are used ubiquitously:
  • between traditional acquisition modalities and the PACS or VNA
  • for pushing stuff around inside an enterprise
  • for pushing (over secure connections) to central/regional/national archives (like the Canadian DI-r's)
  • for interfacing to traditional "workstations" for RT, advanced image processing, etc.
Many people hate traditional DICOM for inbound queries, whine about "performance" issues (largely due to poor/lazy implementations that are excessively latency-sensitive, given the default protocol's need for acknowledgement of each C-STORE), and rarely bother to secure it (whether over TLS or with use of any of its user identity features). Certainly traditional DICOM protocols are excessively complicated and obscurely documented in arcane OSI-reminiscent terminology, making it much harder for newbies to implement them from scratch. But it works just fine, and everybody sensible uses a robust open-source or commercial toolkit to hide the protocol details; but that creates a dependency, which in an ideal world would be avoidable.

At the other end of the spectrum, there is the closest thing to a "raw socket" (the network developers' ideal), which is an HTTP GET or POST from/to an endpoint specified by a URL. In terms of medical imaging standards this means WADO-URI or WADO-RS for fetching stuff, STOW-RS for sending stuff, and QIDO-RS for finding it. FHIR's ImagingStudy resource also happens to have a means for actually including the payload in the resource as opposed to using WADO URLs.

Nothing is ever as simple as it seems though, and many committee hours have been spent on the low level details, like parameters, accept headers, character sets, media types and transfer syntaxes. There is insufficient experience to know whether the lack of a SOP Class specific negotiation mechanism really matters or not. But certainly for the simple use cases of getting DICOM PS3.10 or rendered JPEG "files", a few examples (such as the sketch below) probably suffice to get a non-DICOM-literate developer hand-writing the code on either end without resorting to a toolkit or the need for too many dependencies. If one puts aside the growing "complexity" of HTTP itself, especially HTTP 2.0 with all of its optimizations, in its degenerate form this WADO-URI and WADO-RS stuff can be really "simple". Theoretically, WADO-RS is also supposed to be "RESTful", whatever that is, if anyone actually cares.
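By way of illustration, here is about all it takes (a sketch; the base URL, WADO-URI endpoint and UIDs are hypothetical) to find studies with QIDO-RS, fetch one as PS3.10 parts with WADO-RS, and grab a rendered JPEG of one instance with WADO-URI:

```python
# Sketch of the "degenerate form": plain HTTP GETs with the Python
# requests library (base URL, endpoints and UIDs are hypothetical).
import requests

BASE = 'https://pacs.example.com/dicomweb'
STUDY, SERIES, INSTANCE = '1.2.3.4', '1.2.3.4.5', '1.2.3.4.5.6'

# QIDO-RS: find studies for a patient, returned as DICOM JSON.
r = requests.get(BASE + '/studies',
                 params={'PatientID': '12345'},
                 headers={'Accept': 'application/dicom+json'})
studies = r.json()

# WADO-RS: the whole study as multipart/related application/dicom
# (PS3.10) parts.
r = requests.get(BASE + '/studies/' + STUDY,
                 headers={'Accept':
                          'multipart/related; type="application/dicom"'})
open('study.multipart', 'wb').write(r.content)

# WADO-URI: a server-rendered JPEG of a single instance.
r = requests.get(BASE + '/wado', params={
    'requestType': 'WADO',
    'studyUID': STUDY,
    'seriesUID': SERIES,
    'objectUID': INSTANCE,
    'contentType': 'image/jpeg',
})
open('instance.jpg', 'wb').write(r.content)
```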

But its main claim to fame is there is no SOAP involved. On the subject of which ...

Somewhere in the middle (or off to one side) we have the old-fashioned SOAP Web Services based XDS-I.b, and the retrospectively DICOM-standardized and extended version of its transfer mechanism, WADO-WS. XDS-I.b includes SOAP services to interact with a registry to find stuff (documents and image manifests), and then the image manifest can be used to fetch the DICOM images, either using another SOAP transaction (RAD-69, based on ITI-43) or various DICOM or WADO mechanisms.

Born of a well-intentioned but perhaps misguided attempt to leverage the long defunct OASIS ebXML standard, and built on the now universally-despised SOAP-based web services, the entire XDS family suffers from being both complex and not terribly developer friendly. Though the underlying XDS standards are gaining some traction (perhaps because there really weren't too many competing standards for moving documents around), there are not that many XDS-I.b implementations actually being used, though certainly some vendors have implemented it (and a few aggressively promote it).

Or to put it another way, with the benefit of 20-20 hindsight, XDS-I.b is beginning to look like the worst of all worlds - excessively complex, bloated, dependent on a moribund technology and with a negligible installed base.

What XDS-I.b does bring to the table is an architectural concept with registries and repositories and sources. So, rather than throw the baby out with the bathwater, there is ongoing IHE work to get rid of the SOAP stuff and make FHIR-based MHD the new profile on which to implement the same architecture (though it is not phrased in terms of "getting rid" of anything, of course, at least not yet). In IHE Radiology there is ongoing work to redo the first try at MHD-I to use WADO-URI and WADO-RS and the FHIR ImagingObjectSelection resource as a manifest.

Of course, it is very easy to be critical of XDS-I.b in retrospect.

Long before it became "obvious" (?) that simple HTTP+URL was sufficient for most use cases, when XDS-I, and later XDS-I.b, were the "only" non-DICOM-protocol approaches sanctioned by IHE, we all ran around promoting them as preferable to proprietary solutions, myself included. There was tacit acceptance that DICOM protocol detractors would never be satisfied with a non-port-80 solution, and so XDS-based image exchange was the only theoretical game in town.

Fortunately, hardly anybody listened.

I am oversimplifying, as well as eliding numerous subtleties (e.g., difficulties of cross-community exchange without URL rewriting, or benefits for caching, concerns about how to pass SAML assertions, benefits of leveraging same services and architecture as documents). And I am probably underestimating the size of the installed base (just as protagonists probably exaggerate it).

But the core message is important ... should we abandon XDS-I.b now, before it is too late?

I am increasingly convinced that for every objection some XDS-loving Neanderthal raises against using a light-weight HTTP non-SOAP no-action-semantics-in-the-payload URL-only pseudo-RESTful solution (LWHNSNASITPUOPRS), there is a solution somewhere out in the "real" (non-healthcare) world. Religious wars have been fought over less, but I think I have finally come around to the SOAP Sucks camp, not because XDS-I.b can't be made to work, obviously it can, but because nobody in this day and age needs to be burdened with trying to do so.

Since DICOM and HL7 embraced the RESTful way, it really seems like a waste of time to be swimming against the current, so to mitigate the issue of standards proliferation leading to barriers to interoperability, something has to be sacrificed, and the older less palatable approach may need to die.

Unfortunately, some folks are pulling in the wrong direction. One major imaging vendor (GE) is totally obsessed with XDS, and some (though not all) of its representatives jump up and down like Cartman having a tantrum whenever it is suggested that we retire the no-longer-useful and potentially harmful standards like WADO-WS (and even XDS-I.b itself perhaps). A few small vendors who have bet the farm on XDS join the chorus, to prove the point that somebody somewhere has actually used XDS-I.b for something. Right now there is a discussion in IHE Radiology about extending XDS-I.b to include more of the WADO-WS transactions like fetching rendered images, etc., which is quite the opposite of retirement.

So, as usual, the standards organizations like DICOM and IHE go back to the cycle of developing and promoting the union of alternatives, not the intersection, and almost everyone suffers. Not least of whom is the customer, who has to (a) pay for all the development and testing effort for their vendors to maintain all of these competing interfaces, (b) endure poor performance from any one of these interfaces on which insufficient effort has been devoted to optimization, and (c) be restricted in their choice of products when incompatible choices of competing standards have been implemented. Once upon a time the value proposition for IHE was navigating through the morass of standards, but now it is an equal opportunity offender.

Some folks make out like bandits amongst this chaos, of course, including the more agile newbie VNA vendors who make it their bread and butter to try and support every imaginable interface (some even claim to support MINT). Whether they work properly or add any actual value is another matter, but there will always be an opportunity for those who make the glue. Can you say "HL7 Interface Engine"?

Sadly, RSNA has recently jumped on the XDS-I.b bandwagon with the announcement of their RSNA Image Share Validation program. To be fair, I was among those who years ago encouraged the RSNA Image Share developers to use out-of-the-box XDS-I.b transactions to implement the original Edge Server to Clearinghouse and PHR connections, in lieu of any standard alternatives (given that they wouldn't just use DICOM). But the government handout from the Recovery Act is drying up, it is clear that patients aren't rushing to pay to subscribe to PHRs, much less image-enabled ones, and frankly, this project has run its course. I am not really sure why RSNA wants to get involved in the image sharing certification business in the first place (which is what the prospectus describes), but in XDS-I.b they may have picked the wrong standard for this day and age.

Of course, maybe we should just give up now and start making a new, even simpler, completely different universal standard that covers everyone's use cases :)
Oops, that was FHIR, wasn't it? Subject for another day perhaps.

David

PS. You may respond that my complaining about the "complexity" of XDS-I.b is a case of the pot calling the kettle black: I am an advocate of DICOM, and DICOM is hardly "simple" in terms of either its encoding or its information model (which is why the official DICOM XML and more recently DICOM JSON representations are, at the very least, superficially attractive), or the size of its documentation (which we have been trying to improve in terms of navigability).

And I would agree with you. But trying to simplify the payload, it turns out, is a lot harder than trying to simplify the exchange and query protocols, and if we can do the latter before yet another bloated and excessively complicated standard is inflicted on the developers and users, why not?


PPS. Few people notice it, but there is actually yet another DICOM standard for exchanging images, and that is the PS3.19 Application Hosting interfaces, which define a SOAP-based WS transport intended for interoperability between host and applications written in different languages and running on the same machine. It is theoretically usable across multiple machines though. Using SOAP to pass parameters seemed like the best alternative at the time to making up something new, particularly given the tooling available to implement it in various popular languages. There has been talk in WG 23 of revisiting this with REST instead, but nothing has got off the ground yet; think JSON with JAX-RS and JAXB, or similar. Since "API" is the buzzword du jour, maybe there is life in that idea!