David Clunie's Blog<br />
David Clunie (http://www.blogger.com/profile/17331067317921452126)<br />
<br />
2017-10-19: Shopping then Sharing: Interoperability to reduce Patient Out-of-pocket Costs<br />
<br />
Short version: Patients can save enormously by shopping for the cheapest scan; electronic transmittal of protocols, reports and images may alleviate some concerns.<br />
<br />
Long version.<br />
<br />
The <a href="http://dclunie.blogspot.com/2016/05/image-sharing-are-we-there-yet-it-seems.html" target="_blank">lack of sharing infrastructure</a> for medical images continues to fester. <br />
<br />
In the United States, the cost of imaging, both acquisition and interpretation, can be ludicrously high and can vary enormously depending on where the scan is done. The recent case of a pediatric MRI for which a <a href="https://www.vox.com/policy-and-politics/2017/10/16/16357790/health-care-prices-problem" target="_blank">Stanford hospital charged $16.5k</a> for the scan alone (not including the anesthesia charges that brought the total to $25k) serves to highlight the issue, as does the recent move by <a href="https://www.ibj.com/blogs/17-the-dose/post/65184-anthem-to-hospitals-no-more-mris-ct-scans-for-outpatients-without-preapproval" target="_blank">Anthem to stop paying</a> for such abuses.<br />
<br />
One of the issues that the <a href="http://www.npr.org/sections/health-shots/2017/09/27/553483496/anthem-says-no-to-many-scans-done-by-hospital-owned-clinics" target="_blank">NPR discussion of the Anthem move</a> raises is described by Leonard Lichtenfeld from the American Cancer Society:<br />
<br />
<i>'They have to go to a new outpatient facility, get the film, get it read and transmitted back to the cancer center," Lichtenfeld says. If, as often happens, the hospital and the imaging center's computer systems don't talk to each other, the patient may have to bring the results back to the doctor on a CD.'</i><br />
<br />
I guess this gives new meaning to the term "out of network" from an infrastructure and technology (as opposed to insurance and billing) perspective.<br />
<br />
So, all we need to do is solve the transmission problem, and everything will be hunky-dory and patients can more easily shop around for the scan with the cheapest out-of-pocket costs. Tedious, but essential, especially if you are "out of network" (from an insurance perspective), as the unfortunate woman ripped off by Stanford apparently was.<br />
<br />
Though personally, if I could save that much as a patient, a faxed or printed report and hand carrying a CD of images would not be the end of the world, so obviously image transmittal is not the only factor.<br />
<br />
Some facilities already have the systems in place to do this (transmit images and reports) for handling outside referrals, undermining Lichtenfeld's assertion.<br />
<br />
To be fair, there may be some value in a specialized interpretation, though that is no doubt difficult to quantify. And in some cases there may even be some value in a specialized acquisition, but is that ever worth a ten-fold difference in cost?<br />
<br />
When visiting large academic institutions that tend to be on the high end of hospital charges, I observe that their equipment used for routine patient care (as opposed to research) is often outdated compared to the latest shiny toy that an unaffiliated facility may have recently installed. It would be interesting to study this question (manufacturer, model and version of scanner at different types of site), but I don't have any contemporary data. Not that there is necessarily anything wrong with using an older scanner. Certainly when I did a lot of multi-center clinical trial work, image quality for routine oncology imaging was not an issue when comparing studies from community centers against those from academic hospitals. There is some <a href="http://www.jacr.org/article/S1546-1440(17)30181-3/fulltext" target="_blank">evidence in the literature</a> to the contrary though. It may be less a case of what you have than how you use it.<br />
<br />
The NPR article uses <a href="http://www.thespinejournalonline.com/article/S1529-9430(16)31093-2/fulltext" target="_blank">a study about reader variability in low back pain MRI interpretation</a> to implicitly support the argument that more expensive scans might be better, though it only goes as far as asserting that imaging studies conducted by qualified providers may not yield comparable results. The study does indeed highlight variation, but does not provide evidence that more expensive acquisitions or interpretations are better. Nor is MRI for low back pain necessarily a great test case (see "<a href="https://www.painscience.com/articles/mri-and-x-ray-almost-useless-for-back-pain.php" target="_blank">worse than useless</a>"). A better example might be oncology, and there is certainly some data to support <a href="http://pubs.rsna.org/doi/abs/10.1148/radiology.210.1.r99ja47109" target="_blank">reinterpretation of outside scans</a>.<br />
<br />
Specialized interpretation, if it truly adds value, or even remote protocolling to assure acquisition quality/relevance, can theoretically be addressed by teleradiology. Worst case, reinterpretation of the shared scans by someone the referring physician knows and respects can be performed (and theoretically reimbursed if medically necessary, although this is challenging). Clearly greater collaboration between the site performing the scan and the person interpreting it, if they were financially and geographically separated, could provide an optimal solution.<br />
<br />
But patients are deluded (or being misinformed) if they think that they will <i>necessarily</i> get better scans or even better interpretations by paying more or going somewhere fancy. It may or may not be the case, and they should expect some supporting evidence (as well as some up-front transparency in charges so they can make a cost/benefit assessment).<br />
<br />
Speaking of radiologists, the ACR had its usual knee-jerk "save reimbursement at all costs" <a href="https://www.acr.org/Advocacy/Economics-Health-Policy/Managed-Care-and-Private-Payer/20171002-Resources-to-Counter-Anthem-Outpatient-Imaging-Policy?utm_campaign=CarouselTracking&utm_medium=HomepageCarousel&utm_source=Use%20ACR%20Resources%20to%20Oppose%20Anthem%20Imaging%20Policy" target="_blank">reaction to the Anthem move</a>, and started a campaign against it. Since when did ACR become apologists for the hospital industry?<br />
<br />
If one looks at the big picture, there is an interesting perverse incentive at work here. Prior to the Anthem move, it seems to have been in a hospital's financial interest to make it as difficult as possible for patients and physicians to share images beyond the enterprise. Anthem's action will reverse that: it eliminates the financial incentive to perform the imaging in the hospital, and to maintain the same quality of care, the hospital will now benefit, in terms of efficiency, from improving interoperability with the outside facilities where the patients will now have to go.<br />
<br />
So everyone will win, except the <a href="https://www.healthline.com/health-news/hospital-ceo-pay-rises-while-americans-in-medical-debt" target="_blank">ludicrously overpaid executives</a> of both for-profit and non-profit hospitals who are ripping us all off.<br />
<br />
Insurance companies believe, and patients need to accept, that scans are largely a commodity, at least in terms of acquisition, if occasionally not interpretation. The technology, standards and products exist to level the playing field of quality and cost (charges, anyway). It is only the perverse incentives that preclude deployment of a more distributed image and report sharing infrastructure, as well as greater control of the referring physicians over the manner in which scans are performed. So if payers can improve the situation, then good luck to them, even if their own motivation is strictly profit (or cost reduction) driven.<br />
<br />
We may never have a "single payer" in the US, and while we could theoretically have a "single imaging sharing network", I dare say there isn't a snowball's chance in hell of that either. But it is possible on a local or regional level, so maybe the payers should be thinking about organizing/funding/implementing that, since some imaging facilities and hospitals seem to have trouble finding their own way out of a paper bag.<br />
<br />
E.g., Anthem could build an imaging protocolling and sharing network, so that in addition to causing such a ruckus, they could provide some tools to circumvent some of the issues allegedly associated with it.<br />
<br />Then ACR could go back to focusing on improving quality and consistency and appropriateness, without trying to defend the indefensible.<br />
<br />
David<br />
<br />
<br />
2017-10-16: AMA Integrated Health Model Initiative - A 15th Standard? Should we be very afraid?<br />
<br />
Short version: Do we need yet another Data Standard Framework? After CPT, can the AMA be trusted not to monopolize and monetize? Who is pulling the strings behind the scenes?<br />
<br />
Long Version.<br />
<br />
Perhaps I am too much of a cynic, and hardly a day goes by without an announcement of some new "initiative", but ...<br />
<br />
Today the AMA announced that they would <a href="https://www.ama-assn.org/ama-unleash-new-era-patient-care" target="_blank">Unleash a New Era of Patient Care</a> (no hyperbole there) in the form of the <a href="https://www.ama-assn.org/integrated-health-model-initiative-ihmi" target="_blank">Integrated Health Model Initiative (IHMI)</a>, with the assertion that "a common data model ... is missing in health care".<br />
<br />
Oh right, I guess we don't have enough standards already. Cue obligatory <a href="https://xkcd.com/927/" target="_blank">XKCD cartoon</a>.<br />
<br />
Indeed one might wonder if the AMA should be in the standards business in the first place.<br />
<br />
And do we trust them to make this an open standard, free to access and free to use?<br />
<br />
The AMA's <a href="https://www.ama-assn.org/ama-unleash-new-era-patient-care" target="_blank">own announcement </a>makes no mention of license or fees or the lack thereof, as far as I could tell.<br />
<br />
This cheery <a href="https://www.forbes.com/sites/brucejapsen/2017/10/16/ama-partners-with-ibm-watson-cerner-on-health-data-model" target="_blank">Forbes article</a> by <a href="https://www.forbes.com/sites/brucejapsen" target="_blank">Bruce Japsen</a>, interviewing <a href="https://www.ama-assn.org/james-l-madara-md" target="_blank">AMA CEO James Madara</a>, asserts that "there are no licensing fees for participants or potential users of what is eventually created", which sounds promising, though it does not necessarily translate to unequivocally open, and hints of hedges.<br />
<br />
But if one actually goes to the <a href="https://www.ama-assn.org/integrated-health-model-initiative-ihmi" target="_blank">AMA's IHMI site</a> and then attempts to "join", one can't get in without accepting a <a href="https://ihmi.communities.ama-assn.org/user_agreement" target="_blank">burdensome agreement</a>, which does not specify what the IHM's licensing terms actually are, but does explicitly warn "some features may require payment for subscription services associated with or in support of the use of IHM". It is not clear whether this applies to just the web site itself, or the IHM, and whether those features will be required for actual use of IHM.<br />
<br />Since I am not willing to agree to terms without knowing what they actually are, I declined, and I guess I will never know what IHM actually is, or whether I could have usefully contributed.<br />
<br />
Given AMA's track record as a selfless, sharing entity (not; see the CPT <a href="https://www.fenwick.com/FenwickDocuments/The_Wrong_Way.pdf" target="_blank">copyright misuse lawsuit</a>, <a href="http://caselaw.findlaw.com/us-9th-circuit/1296863.html" target="_blank">this appeal</a>, and <a href="https://www.techdirt.com/articles/20100105/0333597616.shtml" target="_blank">commentary</a>), can they ever be trusted? Are we really to believe this is a new kinder, gentler AMA?<br />
<br />
A conspiracy theorist might suggest the AMA is seeking to impose yet another tax on every healthcare transaction, this time every electronic one.<br />
<br />
Or that there is some disillusioned major player with their own plan for world data model domination who isn't getting satisfaction from HL7, FHIR, ONC, et al, and is seeking a new umbrella organization to foist its own approach on everyone else.<br />
<br />
One might wonder who is pulling the strings. With IBM Watson, Cerner and Intermountain Healthcare involved, according to Forbes, is this just an end run around Epic?<br />
<br />
Personally, I would not draw such cynical conclusions in the absence of further information, but oops, I can't get to any because of that click through agreement.<br />
<br />
Here's hoping their motives are genuine and their efforts are not duplicative, divisive or anti-competitive.<br />
<br />
But I can't help wonder if we should <a href="https://youtu.be/--hMJPUBwMc" target="_blank">be afraid, be very afraid</a>.<br />
<br />
David<br />
<br />
2016-05-14: Image Sharing: Are we there yet? It seems not.<br />
<br />
Short version: Why are we still using CDs? It's not the lack of standards or commercial solutions; it seems to be the lack of will, a.k.a. incentives.<br />
<br />
Long version.<br />
<br />
In <span class="st"><a href="http://abcnews.go.com/GMA/video/joe-bidens-full-interview-robin-roberts-cancer-moonshoot-39038339" target="_blank">Joe Biden's Full Interview With Robin Roberts on the Cancer Moonshot</a> he rightly bemoans (at 08:20 minutes in) the inability of two prestigious organizations, </span><span class="st">Walter Reed Hospital in Washington, D.C., and MD Anderson Cancer Center in</span><span class="st"> Houston, TX, to share his son's medical imaging data electronically, without resorting to flying discs across the country (and even that apparently required the intervention of his son-in-law, who is a surgeon). Unfortunately, he attributes this to an absence of a "common language", which for this particular case is not true (since we have DICOM, which is the <i>lingua franca</i> of images). Earlier in the interview, the issue of incentives is discussed though.</span><br />
<br />
<span class="st">This experience mirrors my own, dealing with family attending </span><span class="st"><span class="st">Memorial Sloan Kettering Cancer Center (MSKCC) in New York, NY. The only mechanism I have to obtain images from there is again via CD. Speaking to one of the radiologists at Memorial, I was told that the inbound problem is just as bad; they employ 10 (!) FTEs whose only function is to stuff CDs received into drives to import them. Apparently they do have one of the commercial network image sharing alternatives installed, but are planning on ditching it and going with another vendor, not sure why. "Continuing bandwidth issues" were cited as a concern. MSKCC has a limited patient portal, which does have radiology results available through it (plain text of course, nothing structured to download), but apparently making images available (whether </span></span><span class="st"><span class="st"><span class="st"><span class="st">to View, Download or Transmit</span></span>) through the portal is not a priority. It does make paying the bills easier though (I guess that is important for them).</span></span><br />
<br />
<span class="st"><span class="st">Now, it is great that CDs work at all, and work relatively well. And of course they are thoroughly standardized (using the <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part10/chapter_7.html" target="_blank">DICOM PS3.10</a> files that are specified by <a href="http://wiki.ihe.net/index.php/Portable_Data_for_Imaging" target="_blank">IHE PDI</a>), as long as they don't come from older <a href="https://groups.google.com/forum/#!topic/comp.protocols.dicom/zBm71mWeg04" target="_blank">Stentor/Philips crap</a>. But surely, well into the 21st Century, we can do better than "<a href="https://en.wikipedia.org/wiki/Sneakernet" target="_blank">sneaker net</a>", especially between major medical centers.</span></span><br />
<br />
<span class="st"><span class="st">Yesterday, on a call with the <a href="http://siim.org/?page=himss_siim_ei_workgr" target="_blank">HIMSS-SIIM Enterprise Imaging Joint Workgroup</a> Best Practice Image Exchange and Sharing (Team 3) (which I have belatedly joined), there was a discussion about reorganizing the work groups and starting a new one on Standards and Interoperability. I was keen to emphasize that I don't think the interoperability problem is one of a lack of standards or implementation of them, but rather a lack of incentives, funding, prioritization or indeed a clearly articulated value proposition for deploying solutions, using the standards that we already have (or even using a non-standard solution, if it works).</span></span><br />
<br />
<span class="st"><span class="st">When the UK folks were facing the problem of image sharing, and the NHS failed to deliver a suitable central solution, an ad hoc network of push-driven sharing evolved, the <a href="http://www.image-exchange.co.uk/about-the-iep-network/" target="_blank">Image Exchange Portal (IEP)</a>, which has been bought and expanded by Sectra. They claim that:</span></span><br />
<br />
<i><span class="st"><span class="st">"</span></span><span class="st"><span class="st">100% of NHS Acute Trusts in England plus private hospitals are connected to one another via the IEP network".</span></span></i><br />
<br />
<span class="st"><span class="st">As I understand it, these guys were no more incentivized to develop, join or use the IEP sharing than are their counterparts in the US, nor were there any disincentives for not bothering to share images. Perhaps there were just no funds available to employ an army of CD-stuffers to work around the problem, so the pain was being felt by the decision makers. Or perhaps the resources for repeat imaging were more tightly controlled (as opposed to being a potential source of more revenue in the US), so the shared images were the only images available. I am just guessing, but I doubt it was because the Brits are any more altruistic or sensible than their Cousins (I can say that, since I am nominally a Brit, even though I have lived and worked in the US for decades).</span></span><br />
<br />
<span class="st"><span class="st">The Canadians have their much vaunted, centrally funded, regional <a href="http://hospitalnews.com/diagnostic-imaging-repositories-provincial-strategy-begins-with-regional-successes/" target="_blank">Diagnostic Image Repositories (DI-r's)</a>, but am I told that, in some provinces at least, you are lucky if you can get out what you put in, and there is little if any useful access to images submitted by other sites. Some provinces have apparently been able to do better though.</span></span><br />
<span class="st"><span class="st"><br /></span></span>
<span class="st"><span class="st">Regardless, all of us who work in medical imaging IT know that the technology is there, and is affordable, and the workflow is manageable despite having to deal with stupid things like the <a href="http://hitconsultant.net/2016/02/08/31764/" target="_blank">lack of a single national patient identifier</a>. It doesn't really matter for the sharing use case which standard or combination of standards you choose for the transfer, as long as the payload is DICOM. Whether you push them or pull them, use traditional DICOM protocols or <a href="https://en.wikipedia.org/wiki/DICOMweb" target="_blank">DICOMweb</a> or <a href="http://wiki.ihe.net/index.php/Cross-enterprise_Document_Sharing_for_Imaging" target="_blank">XDS-I</a> RAD-69 or XDR-I or some proprietary mechanism, or follow <a href="http://wiki.ihe.net/index.php/Import_Reconciliation_Workflow" target="_blank">IHE Import Reconciliation Workflow (IRWF)</a> to deal with the identifiers or do it your own way, with a little configuration, the images are going to get where they need to be. It is really just a question of motivating sites to get off their collective asses.</span></span><br />
<br />
<span class="st"><span class="st">In the "collective" probably lies part of the problem, since on a large scale, what motivates competitors to share?</span></span><br />
<br />
<span class="st"><span class="st">For once though, the problem can hardly be laid at the door of the evil vendors who might be accused of "<a href="https://www.healthit.gov/sites/default/files/reports/info_blocking_040915.pdf" target="_blank">data blocking</a>". For image sharing, there is an army of vendors willing to help solve your sharing problem, as well as open source components to assemble your own, there are no format issues, the problem is way simpler than that of general EHR interoperability, and there is no debate over <a href="http://blogs.opentext.com/integration-smackdown-documents-versus-api-in-b2b/" target="_blank">documents versus APIs</a> (all of the radiology and cardiology images, at least, are already in DICOM format and document-like in that respect).</span></span><br />
<br />
<div wrap="">
<span class="st"><span class="st">When I discussed this in late 2012 with <a href="https://www.linkedin.com/in/farzad-mostashari-933210" target="_blank">Farzad</a></span></span><a href="https://www.linkedin.com/in/farzad-mostashari-933210" target="_blank"> Mostashari</a>, after expressing my disappointment that the MU2 didn't insist on image sharing, he wrote that:</div>
<div wrap="">
<br /></div>
<div wrap="">
<i>"My hope is that the business case for this is so clear that it will happen regardless (perhaps with some help from convening, best practices, etc) and we can point to the on-the-ground reality in two years as the ultimate refutation of the concerns."</i></div>
<span class="st"><span class="st"> </span></span> <br />
Now here we are three and a half years later, not two, with a plethora of commercial solutions as well as a multitude of standards for image sharing, but the "business case" is apparently not so clear after all, if the Vice President of the United States still needs to arrange to fly CDs around.<br />
<br />
Shame on us all for failing him and his family.<br />
<br />
David<br />
<br />
PS. As far as I have been able to ascertain, the <a href="https://www.federalregister.gov/articles/2016/05/09/2016-10032/medicare-program-merit-based-incentive-payment-system-mips-and-alternative-payment-model-apm" target="_blank">MACRA proposed rule</a> doesn't provide any incentives or requirements for image sharing either. This may be as much because nobody has submitted sharing-related performance measures as because of the lack of central recognition that this is important or a priority. Maybe the VP should submit comments on it!<br />
<br />
PPS. In the same interview, Joe Biden also takes a shot at the much reviled editor of the NEJM, <a href="https://en.wikipedia.org/wiki/Jeffrey_M._Drazen" target="_blank">Jeffrey Drazen</a>, over his ill-considered "data parasites" comments (actually "research parasites", in the <a href="http://www.nejm.org/doi/full/10.1056/NEJMe1516564">editorial co-authored with Deputy Editor Dan Longo</a>). While Drazen may be well on his way to becoming the most hated man in America (perhaps overshadowing <span class="st"><a href="https://en.wikipedia.org/wiki/Martin_Shkreli" target="_blank">Martin Shkreli</a>, the AIDS drug <a href="https://en.wikipedia.org/wiki/Robber_baron_(industrialist)" target="_blank">robber baron</a>), the issues raised in Drazen's editorial are about a different kind of "sharing" than the subject of this post.</span><br />
<br />
<span class="st">No doubt Drazen's comments reflect the opinion of many in the "</span><span class="st"><a href="https://medium.com/tincture/perspective-from-a-data-parasite-5e96dc66ebcd?source=latest---------16" target="_blank">elite healthcare research establishment</a>", who seem to regard the right to solely exploit their taxpayer-funded research and data in order to exclude success by their funding competitors (not to mention their unwillingness to have their own data and analysis scrutinized for integrity and r</span><span class="st"><span class="_Tgc">epeatability</span>) as something akin to the <a href="https://en.wikipedia.org/wiki/Divine_right_of_kings" target="_blank">divine right of kings</a>. Again, this all seems to be a matter of incentives, this time the perverse incentives of the research funding infrastructure that encourage data hoarding rather than sharing due to the competitive nature of the process. NIH, perhaps crippled by the </span><span class="st"><span class="_Tgc"><a href="https://en.wikipedia.org/wiki/Bayh%E2%80%93Dole_Act" target="_blank">Bayh–Dole Act</a>, doesn't seem to have any teeth in its <a href="http://grants.nih.gov/grants/policy/data_sharing/" target="_blank">data sharing policy</a> when it comes to reviewing and approving grant applications or monitoring their performance, so there is no "level playing field" of mandatory and immediate sharing. </span>Since <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/" target="_blank">most of what is published is probably false anyway</a>, perhaps it doesn't matter:(</span><br />
<span class="st"><br /></span>
<span class="st">There is something for everyone in the interview, and the lack of open access to research publications comes in for its share of criticism too. Hear, hear!</span><br />
<span class="st"><br /></span>
<span class="st">I wish the VP every success in his crusade.</span><br />
<span class="st"><br /></span>David Cluniehttp://www.blogger.com/profile/17331067317921452126noreply@blogger.com4tag:blogger.com,1999:blog-1367102802658603789.post-56410817054501093192016-05-08T11:16:00.000-07:002016-05-08T11:16:29.644-07:00To C-MOVE is human; to C-GET, divineSummary: C-GET is superior to C-MOVE for use beyond the firewall; contrary to some misleading reports, it has NOT been retired from DICOM, and implementations do exist.<br />
<br />
Long Version.<br />
<br />
With apologies to <a href="http://www.quotecounterquote.com/2010/12/to-err-is-human-to-forgive-divine.html" target="_blank">Alexander Pope</a>, I wanted to draw attention to what appears to be a common misconception, that DICOM C-GET is <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part01/sect_1.4.2.html" target="_blank">retired</a> or <a href="http://www.dictionary.com/browse/obsolete" target="_blank">obsolete</a> or <a href="http://www.dictionary.com/browse/deprecated" target="_blank">deprecated</a>.<br />
<br />
C-GET is not retired; it most definitely is alive and well, and more importantly, useful.<br />
<br />
C-GET is especially useful for DICOM use over the public Internet, beyond the local area network.<br />
<br />
As you know, by far the most common way to retrieve a study, series or individual instances is to use a C-MOVE request, which instructs the server (SCP) to initiate the necessary C-STORE operations on one or more different connections (associations) to transfer the data.<br />
<br />
This necessitates:<br />
<ul>
<li>the requester being able to listen for and accept inbound connections (i.e., be a C-STORE SCP),</li>
<li>that any impediments on the network (like firewalls) allow such inbound connections,</li>
<li>that the sender be configured with the host/IP address and port of the requester (since only the Destination AET is communicated in the C-MOVE request), and</li>
<li>that <a href="https://en.wikipedia.org/wiki/Network_address_translation" target="_blank">Network Address Translation (NAT)</a> be correctly configured to forward the inbound connections to the requester.</li>
</ul>
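To make the connection topology concrete, here is a toy sketch in plain Python sockets (not DICOM; the "MOVE to port" command and all names are invented for illustration) of the C-MOVE-style pattern, in which the requester must itself accept an inbound connection:

```python
import socket
import threading

def toy_move_scp(payload=b"PIXELDATA"):
    """Toy server: on a MOVE command, open a NEW connection back to
    the requester's listening port and push the payload (the C-MOVE
    pattern, where data flows over separate inbound connections)."""
    srv = socket.create_server(("127.0.0.1", 0))
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        chunks = []
        while (chunk := conn.recv(1024)):
            chunks.append(chunk)
        conn.close()
        dest_port = int(b"".join(chunks).decode().split()[-1])
        # Connect BACK to the requester, like a C-STORE sub-operation.
        with socket.create_connection(("127.0.0.1", dest_port)) as push:
            push.sendall(payload)
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return port

def move_style_retrieve(server_addr):
    """Requester: must listen for the inbound 'C-STORE' connection;
    this is the part that firewalls and NAT tend to break."""
    listener = socket.create_server(("127.0.0.1", 0))
    listen_port = listener.getsockname()[1]
    with socket.create_connection(server_addr) as cmd:
        # DICOM actually sends only a Destination AET here; the SCP
        # must be pre-configured with the matching host/IP and port.
        cmd.sendall(f"MOVE to port {listen_port}".encode())
    conn, _ = listener.accept()
    chunks = []
    while (chunk := conn.recv(1024)):
        chunks.append(chunk)
    conn.close()
    listener.close()
    return b"".join(chunks)

port = toy_move_scp()
print(move_style_retrieve(("127.0.0.1", port)).decode())  # PIXELDATA
```

The point of the sketch is the second, listening socket on the requester's side: substitute a C-STORE SCP for it and a C-MOVE request for the command, and the pre-configuration burden (AET to host/port mapping, firewall and NAT rules) follows immediately.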
By comparison, a C-GET request does not depend on separate associations being established, but rather "turns around" the same connection on which the request is made, and re-uses it to receive the inbound C-STORE operations. I.e., it is just like an <a href="https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol#Request_methods" target="_blank">HTTP GET</a> in that all the data comes back on the same connection. It is similar in functionality to a <a href="http://slacksite.com/other/ftp.html#passive" target="_blank">Passive FTP</a> transfer, although in FTP there are actually two separate connections, both initiated by the requester (one for commands and one for data).<br />
<br />
With all three protocols, DICOM C-GET, HTTP GET and Passive FTP GET, there is:<br />
<ul>
<li>no need for the requester to be able to respond to inbound connections</li>
<li>no need to configure firewalls to allow inbound connections or perform NAT, and</li>
<li>no need (other than for access control) to configure the sender to know anything about the requester.</li>
</ul>
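The same toy framing shows why the C-GET pattern is firewall-friendly: one outbound connection, "turned around" to carry the data back (again plain sockets, not DICOM; all names are invented for illustration):

```python
import socket
import threading

def toy_get_scp(payload=b"PIXELDATA"):
    """Toy server: answer a GET by sending the payload back on the
    SAME connection the request arrived on (the C-GET / HTTP GET
    pattern)."""
    srv = socket.create_server(("127.0.0.1", 0))
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        conn.recv(1024)        # read the toy "C-GET" request
        conn.sendall(payload)  # "turn around" the same connection
        conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return port

def get_style_retrieve(server_addr):
    """Requester: a single outbound connection; no listener, nothing
    for a firewall or NAT to break, and no need for the server to
    know the requester's address in advance."""
    with socket.create_connection(server_addr) as conn:
        conn.sendall(b"GET study")
        chunks = []
        while (chunk := conn.recv(1024)):
            chunks.append(chunk)
        return b"".join(chunks)

port = toy_get_scp()
print(get_style_retrieve(("127.0.0.1", port)).decode())  # PIXELDATA
```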
Of course, firewalls may also restrict outbound connections, but that affects all protocols similarly.<br />
<br />
All three protocols can of course communicate over secured channels, whether by using TLS or a VPN. <br />
<br />
So, if C-GET is so useful, why is it not as commonly implemented?<br />
<br />
Historically, when DICOM was first getting started and being used mostly for <a href="http://dx.doi.org/10.1117/12.174328" target="_blank">mini-PACS</a> clusters of acquisition modalities and workstations, the thinking of the designers went something like this. First, I have to be able to send and receive images by pushing them around, so I have to implement C-STORE as an SCU and SCP. Now, the product manager says I have to allow users to pull them too, so the easiest way is to write a C-MOVE SCU and SCP to command that the transfer takes place, but I can just reuse the existing C-STORE SCU and SCP code that I have already written. I only have a handful of devices to connect on the LAN, so the administrative burden of configuring them all to know about each other is not an issue. <a href="https://en.wikipedia.org/wiki/Q.E.D." target="_blank">QED</a>.<br />
<br />
As smaller systems were scaled to enterprise level, and larger proprietary systems added DICOM Q/R capability to allow the same mini-PACS workstations to gain access to the archive, the use of C-MOVE became entrenched, without much further thought being given to the potential future benefits of C-GET for use beyond the walls of the enterprise or on a really large scale. Much later, IHE specified C-MOVE for the Retrieve Images (RAD-16) transaction (in <a href="https://archive.org/download/iheyr2tf_rev4.0_03-28-2000/iheyr2tf_rev4.0_03-28-2000.pdf" target="_blank">Year 2</a> for 2000), which subsequently became part of the Scheduled Workflow Profile, but did not mention C-GET, presumably because the conventional wisdom at the time was that C-MOVE was much more widely implemented.<br />
<br />
So who does support C-GET?<br />
<br />
A <a href="https://www.google.com/search?q=%221.2.840.10008.5.1.4.1.2.2.3%22+%22DICOM+Conformance+Statement%22" target="_blank">Google search</a> reveals quite a few systems that do. There are some open source or freely available SCUs and SCPs too. When I monitor at Connectathons, it is extremely convenient to be able to retrieve stuff from testers' systems (to compare what they have with what is expected) without having to go and bother them to add my configuration for C-MOVE, and off hand I would guess about 15-25% of the systems respond to a C-GET, including, of course, the central archive, which for the last few years has been <a href="http://www.dcm4che.org/" target="_blank">dcm4chee</a>. <a href="https://www.medicalconnections.co.uk/kb/Medical_Connections_Public_DICOM_Server" target="_blank">Dave Harvey's publicly accessible server</a> and <a href="http://pixelmed.com/publicdicomserver.html" target="_blank">PixelMed's</a> support C-GET, as do clients like <a href="http://www.osirix-viewer.com/PACS.html" target="_blank">Osirix</a>, though I don't think either <a href="http://clearcanvas.ca/Home/Community/OldForums/tabid/526/aff/1/aft/14249/afv/topic/Default.aspx" target="_blank">ClearCanvas</a> or <a href="http://www.k-pacs.net/10555.html" target="_blank">K-PACS</a> do:(<br />
<br />
The tricky thing with implementing C-GET as an SCU is the Association Negotiation, and particularly the (annoying, gratuitous, arbitrary) limit on the total number of Presentation Contexts caused by the "<a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part08/sect_9.3.2.2.html#para_2da8ac76-063d-45a3-ba12-405b08462964" target="_blank">odd integers between 1 and 255</a>" requirement on the single byte Presentation-context-ID. The naive (and inefficient) approach of listing all possible (storage) SOP Classes permuted with all possible Transfer Syntaxes reaches that limit quickly nowadays. Allowing the SCP to choose the Transfer Syntax, and using SOP Classes in Study from an earlier STUDY level C-FIND (or using plausible SOP Classes based on Modalities in Study, or if these are not supported as return keys by the C-FIND SCP, Modality from a SERIES level C-FIND, or worst case, the SOP Class UID from an IMAGE level C-FIND) helps a lot with this, though it does limit the re-usability of the Association if you want to keep it alive in a "connection pool" for later retrievals.<br />
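To make the arithmetic concrete, here is a minimal Python sketch of the Presentation Context budget; the SOP Class and Transfer Syntax counts are illustrative assumptions of the right order of magnitude, not a census of the current standard:

```python
# The Presentation-context-ID is a single byte restricted to odd
# integers between 1 and 255, so an Association can propose at most:
MAX_CONTEXTS = len(range(1, 256, 2))  # 128

# Illustrative counts (assumptions, not exact figures from the standard):
n_storage_sop_classes = 130   # roughly the order of magnitude nowadays
n_transfer_syntaxes = 12      # uncompressed plus the common compressed ones

# Naive approach: one context per (SOP Class, Transfer Syntax) pair.
naive = n_storage_sop_classes * n_transfer_syntaxes          # 1560, way over

# Letting the SCP choose the Transfer Syntax: one context per SOP Class,
# each offering every Transfer Syntax - still uncomfortably close to 128.
scp_chooses = n_storage_sop_classes

# Scoping by a prior STUDY level C-FIND: one context per SOP Class
# actually present in the study, e.g. CT, SR and Secondary Capture.
sop_classes_in_study = 3
scoped = sop_classes_in_study

print(naive, scp_chooses, scoped, "limit:", MAX_CONTEXTS)
```

Note that with one context per SOP Class the full storage list alone can still exceed the limit, which is why the two strategies (SCP-chosen Transfer Syntax plus C-FIND-scoped SOP Classes) work best together.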
<br />
From a performance perspective, single connection C-GET and C-MOVE are similar, which is not surprising since both are often limited by latency effects on the synchronous C-STORE response. In the absence of Asynchronous Operations support, it is obviously easier to accelerate C-MOVE by opening multiple return Associations across which to spread the C-STORE operations, which one can't do with C-GET, unless one selectively retrieves at the IMAGE level, which is possible, but tedious to set up and requires an initial IMAGE level C-FIND to get SOP Instance UIDs. Using large multi-frame image instances mitigates this issue.<br />
<br />
It would be interesting to see, for the simple pull use case, how closely C-GET with Asynchronous Operations support could approach raw socket transfer speeds, and how it would compare with an HTTP GET or Passive FTP GET.<br />
<br />
The security considerations (including channel confidentiality, access control and audit trail) would seem to be similar for C-GET and C-MOVE, and both TLS and user identity communication are available if necessary.<br />
<br />
David<br />
<br />
PS. I was motivated to write this when I noticed that <a href="mailto:s.jodogne@chu.ulg.ac.be" target="_blank">Sébastien Jodogne</a> says in Note 1 of his description of <a href="https://orthanc.chu.ulg.ac.be/book/dicom-guide.html#c-move-query-retrieve" target="_blank">"C-Move: Query/retrieve"</a> documenting his <a href="http://www.orthanc-server.com/" target="_blank">Orthanc server</a>:<br />
<br />
<i>"Even if C-Move may seem counter-intuitive, it is the only way to initiate a query/retrieve. Once upon a time, there was a conceptually simpler C-Get command, but this command is now deprecated."</i><br />
<br />
I asked Sébastien where he got this impression, and he attributes the source of his confusion to <a href="http://dicomiseasy.blogspot.be/2012/01/dicom-queryretrieve-part-i.html" target="_blank">this post</a> by <a href="mailto:roni.zaharia@gmail.com" target="_blank">Roni Zaharia</a>. Both are incorrect in this respect.<br />
<br />
During the great DICOM purge of 2006 (<a href="http://www.dclunie.com/dicom-status/status.html#Supplement98" target="_blank">Sup 98</a>), though the <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part04/sect_C.3.3.html" target="_blank">Patient/Study Only Query/Retrieve Information Model</a> was retired from the <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part04/chapter_C.html" target="_blank">Query/Retrieve Service</a>,
C-GET was left alone, and none of the other Supplements or CPs related
to retirement touched it either. On the contrary, subsequent additions
to the standard to support <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part04/chapter_Y.html" target="_blank">Instance and Frame Level Retrieve</a> and <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part04/chapter_Z.html" target="_blank">Composite Instance Retrieve Without Bulk Data</a> (<a href="http://www.dclunie.com/dicom-status/status.html#Supplement119" target="_blank">Sup 119</a>) extended the use of C-GET significantly.<br />
<br />
Sébastien profusely apologizes for relying on hearsay and failing to check the
standard, and hopes to implement C-GET when he has a chance.<br />
<br />
PPS. I observe in passing that Roni also recommends the use of Patient Root rather than Study Root queries, which I would strongly disagree with. In the early days, many systems' databases were implemented with the study as the top level and the patient's identifiers and characteristics were managed as attributes of the study, if for no other reason than HIS/RIS integration was not as common as it is today, and patient level stuff was often inconsistent and/or incorrect. IHE, for example, when Q/R was added in <a href="https://archive.org/download/iheyr2tf_rev4.0_03-28-2000/iheyr2tf_rev4.0_03-28-2000.pdf" target="_blank">Year Two</a>, specified the Study Root C-FIND as required and the Patient Root as optional for the Query Images (RAD-14) and Retrieve Images (RAD-16) transactions, and that is still true in <a href="http://www.ihe.net/uploadedFiles/Documents/Radiology/IHE_RAD_TF_Vol2.pdf" target="_blank">Scheduled Workflow today</a>. I never use Patient Root if I can avoid it, and Roni's assertion that "everyone supports it" certainly didn't use to be true. <br />
<br />
PPPS. Some old <a href="https://groups.google.com/forum/#!searchin/comp.protocols.dicom/C-GET" target="_blank">comp.protocols.dicom posts on the subject of C-GET</a> include the following, which show the "evolution" of my thinking:<br />
<br />
<a href="https://groups.google.com/d/msg/comp.protocols.dicom/y6v4sJ5T62M/mNdjXr7ZxPIJ" target="_blank"><span class="IVILX2C-sb-X" id="t-t">C-MOVE vs. C-GET</span></a><br />
<a href="https://groups.google.com/d/msg/comp.protocols.dicom/HjKiFHhLJOs/La1md2qBUkcJ" target="_blank"><span class="IVILX2C-sb-X" id="t-t"><span class="IVILX2C-sb-X" id="t-t">Difference between C-GET and C-MOVE</span></span></a><span class="IVILX2C-sb-X" id="t-t"></span><br />
<a href="https://groups.google.com/d/msg/comp.protocols.dicom/JGPsvwXIiZw/J6TtEDMYnCIJ" target="_blank"><span class="IVILX2C-sb-X" id="t-t"><span class="IVILX2C-sb-X" id="t-t">DICOM retrieve (C-GET-RQ) example anyone?</span></span></a><span class="IVILX2C-sb-X" id="t-t"></span><br />
<span class="IVILX2C-sb-X" id="t-t"><a href="https://groups.google.com/d/msg/comp.protocols.dicom/uE4dFEo_MiQ/rP8-VxzXmdsJ" target="_blank">C-GET vs C-MOVE (was Retrieving off-line studies from DICOM archive)</a></span><br />
<a href="https://groups.google.com/d/msg/comp.protocols.dicom/iVRypIoY1Sg/bPEFXPayPaUJ" target="_blank">C-Get versus C-Move, was Re: C-Move</a><br />
<br />David Clunie<br />
<br />
DICOM and SNOMED back in bed together (2016-03-02)<br />
<br />
Summary: Users and commercial and open source DICOM developers can be reassured that they may continue to use the subset of SNOMED concepts in the DICOM standard in their products and software, globally and without a fee or individual license.<br />
<br />
Long Version. <br />
<br />
The <a href="http://www.ihtsdo.org/news-articles/new-global-licensing-agreement-for-snomed-ct-code-inclusion-in-the-dicom-standard">news from IHTSDO</a> and a summary of the relationship can be found at this <a href="http://www.ihtsdo.org/about-ihtsdo/partnerships/dicom">IHTSDO DICOM Partnership page</a>, including links to the <a href="http://www.ihtsdo.org/resource/resource/272">text of the agreement</a> and a <a href="http://www.ihtsdo.org/resource/resource/271">press release</a>.<br />
<br />
DICOM has used <a href="http://www.ihtsdo.org/snomed-ct">SNOMED</a> since the days of the <a href="http://www.ncbi.nlm.nih.gov/pubmed/9865038">SNOMED DICOM Microglossary</a> in the mid-nineties. This was the work of Dean Bidgood, who was not only very actively involved in DICOM but also a member of the SNOMED Editorial Board. As SNOMED evolved over time, it became necessary to reach an agreement with the original producers, the College of American Pathologists. This allowed DICOM to continue to publish and use SNOMED codes in software and products without a fee, and in return DICOM continued to contribute imaging concepts to be added to SNOMED.<br />
<br />
This has worked out really well so far, so it is reassuring that we now have a similar agreement in place with the new owners, IHTSDO. <br />
<br />
The subset of SNOMED concepts that DICOM may use includes all concepts that are currently in the standard as of the 2016a release and that are active in the SNOMED 2016 INT release, as well as those in some upcoming Supplements and CPs. I have been going through and cleaning up any concepts that have been inactivated in SNOMED (due to errors, duplicates, ambiguities, etc.) and adding them to <a href="http://www.dclunie.com/dicom-status/status.html#CP1495">CP 1495</a> to replace them and mark them as retired. This is pretty tedious but with the XML DocBook source of the standard, a lot of the checking can be automated, so this process should converge pretty soon. Note that per both the original agreement with CAP and the new agreement with IHTSDO, there is recognition that products and software that use retired inactive codes may continue to do so if necessary.<br />
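As a toy illustration of that kind of automated check, the sketch below scans a made-up DocBook-like fragment for SRT coded tuples and flags any that appear on an inactive list; the markup, the regular expression, and the inactive set are all hypothetical simplifications, not the real tooling or the real DICOM source structure:

```python
import re
import xml.etree.ElementTree as ET

# Toy stand-in for DocBook table cells containing coded tuples; the real
# DICOM source markup is richer, and the inactive list would come from a
# SNOMED release, not be hard-coded.
docbook = """<table>
  <row><entry>(T-A0100, SRT, "Brain")</entry></row>
  <row><entry>(T-32000, SRT, "Heart")</entry></row>
</table>"""

inactive_in_snomed = {"T-32000"}  # pretend this code was inactivated

tree = ET.fromstring(docbook)
found = []
for entry in tree.iter("entry"):
    # Match the (Code Value, Coding Scheme, "Code Meaning") pattern.
    m = re.match(r'\(([^,]+),\s*SRT,\s*"([^"]+)"\)', entry.text.strip())
    if m:
        found.append(m.group(1))

needs_replacement = [code for code in found if code in inactive_in_snomed]
print(needs_replacement)
```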
<br />
A small subset of codes (for non-human applications) have been handed off by IHTSDO to the maintainers of the <a href="http://vtsl.vetmed.vt.edu/">Veterinary Extension of SNOMED CT</a>, and we have been reassured by those folks that it is OK for us to continue to use them too.<br />
<br />
If anyone actually needs a tabulated list of all the concepts in the SNOMED DICOM subset in some more convenient form than the <a href="http://dicom.nema.org/Partnerships/IHTSDO/Exhibit%20A%20SNOMED%20CT%20DICOM%20subset%2020160216.v1.00.pdf">PDF that lists the concept identifiers</a>, just let me know and I can send you some of my working files. I also have some XSLT style sheets that can be used to trawl the source for both coded tuples and codes in tables, so if you need to do that sort of thing, just let me know (I will add these to the source and rendering archive file in the next release of the DICOM standard).<br />
<br />
David<br />
<br />David Clunie<br />
<br />
How many (medical image exchange) standards can dance on the head of a pin? (2016-03-01)<br />
<br />
Summary: There are too many alternative standards for sharing images. For the foreseeable future, traditional DICOM <a href="http://dicom.nema.org/medical/Dicom/current/output/chtml/part07/sect_7.5.html">DIMSE</a> services will remain the mainstay of modality and intra-enterprise image management, perhaps with the exception of viewers used internally. The <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part18/sect_6.2.html">WADO-URI</a> and <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part18/sect_6.5.html">WADO-RS</a> services are attractive in their simplicity and have sufficient features for many other uses, including submission of other 'ology images using <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part18/sect_6.6.html">STOW</a> (<a href="http://www.ihe.net/uploadedFiles/Documents/Radiology/IHE_RAD_Suppl_WIC.pdf">WIC</a>). If one has not already deployed it (and even then), one might want to give serious consideration to "skipping over" <a href="http://wiki.ihe.net/index.php?title=Cross-enterprise_Document_Sharing_for_Imaging">XDS-I</a> as a dead-end digression and going straight to the more mobile and ZFP friendly <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part18/sect_6.5.html">WADO-RS</a> instead (including a potentially revised <a href="http://www.ihe.net/uploadedFiles/Documents/Radiology/IHE_RAD_Suppl_MHDI.pdf">MHD-I</a>). 
The <a href="http://sequoiaproject.org/rsna-image-share-validation-program/">RSNA Image Share Validation</a> program for <a href="http://wiki.ihe.net/index.php?title=Cross-enterprise_Document_Sharing_for_Imaging">XDS-I</a> is perhaps not such a cool idea, and should be refocused on validating <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part18/sect_6.5.html">WADO-RS</a>-based services. How/if <a href="http://wiki.hl7.org/index.php?title=FHIR">FHIR</a> <a href="https://www.hl7.org/fhir/imagingstudy.html">ImagingStudy</a> and <a href="https://www.hl7.org/fhir/imagingobjectselection.html" target="_blank">ImagingObjectSelection</a> fit in remains to be determined.<br />
<br />
Long Version.<br />
<br />
Do standards have location in space, but not extension, so the answer is an infinite number? Or no location at all, so, perhaps none?<br />
<br />
We certainly have no shortage of standards in general, as the sarcastic quote from <a href="https://en.wikiquote.org/wiki/Andrew_S._Tanenbaum" target="_blank">Andy Tanenbaum</a> (<i>"The nice thing about standards is that you have so many to choose from"</i>) illustrates. This <a href="https://xkcd.com/927/" target="_blank">xkcd cartoon</a> explains one among many reasons for their proliferation.<br />
<br />
Some of the drivers that encourage excessive proliferation of multiple standards for the same thing include:<br />
<ul>
<li>extension of an existing successful standard into new domains to compete with an incumbent </li>
<li>"technology refreshment" (wanting to use the latest and greatest trendy buzzword compliant mechanisms that may or may not offer real benefit)</li>
<li>simpler solutions to address real or perceived complexity of existing standards</li>
<li>"not invented here"</li>
<li>laziness (easier to write than to read)</li>
<li>pettiness (we hate your standard and the horse it rode in on)</li>
<li>low barrier to entry (anyone can use the word "standard")</li>
<li>bad standards (seemed like a good idea to someone at the time) </li>
</ul>
So what does this mean for medical image sharing, both for traditional radiology and cardiology applications, as well as the other 'ologies?<br />
<br />
If we just consider DICOM image and related "payloads" for the moment, and focus strictly on the exchange services, currently one has a choice of several overlapping mainstream "standard" services:<br />
<ul>
<li>the original DICOM <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part04/PS3.4.html">PS3.4</a>/<a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part07/PS3.7.html">3.7</a>/<a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part08/PS3.8.html">3.8</a> "<a href="http://dicom.nema.org/medical/Dicom/current/output/chtml/part07/sect_7.5.html">DIMSE</a>" services (C-STORE, C-MOVE, C-GET) over <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part08/chapter_9.html">ULP</a></li>
<li>the first form of <a href="http://dicom.nema.org/dicom/workshop-03/pres/cordonnier.ppt">Web Access to persistent DICOM Objects</a> (WADO), now called <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part18/sect_6.2.html">WADO-URI</a></li>
<li>IHE <a href="http://wiki.ihe.net/index.php?title=Cross-enterprise_Document_Sharing_for_Imaging">XDS-I.b</a> (and <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part18/sect_6.4.html">WADO-WS</a>) and the related <a href="http://wiki.ihe.net/index.php?title=Cross-Community_Access_for_Imaging">XCA-I</a></li>
<li>the new <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part18/sect_6.5.html">WADO-RS</a> services (branded as <a href="http://dicomweb.org/">DICOMWeb</a>), which evolved out of <a href="https://code.google.com/archive/p/medical-imaging-network-transport/">MINT</a></li>
<li><a href="http://wiki.hl7.org/index.php?title=FHIR">FHIR</a>'s <a href="https://www.hl7.org/fhir/imagingstudy.html">ImagingStudy</a> resource </li>
</ul>
as well as some niche services for specific purposes: <br />
<ul>
<li><a href="https://en.wikipedia.org/wiki/JPIP">JPEG Interactive Protocol (JPIP)</a> using the <a href="http://dicom.nema.org/medical/Dicom/current/output/chtml/part05/sect_8.4.html">DICOM Pixel Data Provider Service</a></li>
<li>the <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part19/chapter_8.html">PS3.19 Application Hosting interfaces</a> (vide infra)</li>
</ul>
Each of these can be considered from many perspectives, including:<br />
<ul>
<li>installed base (for various scenarios)</li>
<li>intra-enterprise (LAN) capability</li>
<li>extra-enterprise (remote, WAN) capability</li>
<li>cross-enterprise (WAN, cross identity and security domain) capability</li>
<li>performance (bandwidth and latency)</li>
<li>functionality (to support simple and advanced use cases)</li>
<li>complexity (from developer, deployment and dependency aspect)</li>
<li>security support</li>
<li>scalability support (server load, load balancing, caching)</li>
<li>reliability support</li>
<li>...</li>
</ul>
However, to cut a long story short, at one end of the spectrum we have the ancient DICOM services. These are used ubiquitously:<br />
<ul>
<li>between traditional acquisition modalities and the PACS or VNA</li>
<li>for pushing stuff around inside an enterprise</li>
<li>for pushing (over secure connections) to central/regional/national archives (like Canadian DIrs)</li>
<li>for interfacing to traditional "workstations" for RT, advanced image processing, etc. </li>
</ul>
Many people hate traditional DICOM for inbound queries, whine about "performance" issues (largely due to poor/lazy implementations that are excessively latency sensitive due to the default protocol's need for acknowledgement), and rarely bother to secure it (whether over TLS or with use of any of its user identity features). Certainly traditional DICOM protocols are excessively complicated and obscurely documented in arcane OSI-reminiscent terminology, making it much harder for newbies to implement it from scratch. But it works just fine, and everybody sensible uses a robust open-source or commercial toolkit to hide the protocol details; but that creates a dependency, which in an ideal world would be avoidable.<br />
<br />
At the other end of the spectrum, there is the closest thing to a "raw socket" (the network developers' ideal), which is an HTTP GET or POST from/to an endpoint specified by a URL. In terms of medical imaging standards this means <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part18/sect_6.2.html">WADO-URI</a> or <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part18/sect_6.5.html">WADO-RS</a> for fetching stuff, <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part18/sect_6.6.html">STOW-RS</a> for sending stuff, and <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part18/sect_6.7.html">QIDO-RS</a> for finding it. <a href="http://wiki.hl7.org/index.php?title=FHIR">FHIR</a>'s <a href="https://www.hl7.org/fhir/imagingstudy.html">ImagingStudy</a> resource also happens to have a means for actually including the payload in the resource as opposed to using WADO URLs.<br />
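For the simple fetch case, the difference between the two styles is mostly where the request details live: WADO-URI puts everything in query parameters on a single endpoint, while WADO-RS puts the study/series/instance hierarchy in the path and selects the representation with an Accept header. This sketch builds both forms of URL (the base URLs and UIDs are made-up examples):

```python
from urllib.parse import urlencode

# Made-up example identifiers, not a real server or study.
study, series, instance = "1.2.3.4", "1.2.3.4.5", "1.2.3.4.5.6"

# WADO-URI: everything is a query parameter on one endpoint.
wado_uri = "https://pacs.example.com/wado?" + urlencode({
    "requestType": "WADO",
    "studyUID": study,
    "seriesUID": series,
    "objectUID": instance,
    "contentType": "application/dicom",  # the PS3.10 file, not a rendering
})

# WADO-RS: the resource hierarchy is in the path, and the representation
# is chosen with an HTTP Accept header rather than a query parameter.
base = "https://pacs.example.com/dicomweb"
wado_rs = f"{base}/studies/{study}/series/{series}/instances/{instance}"
accept = 'multipart/related; type="application/dicom"'

print(wado_uri)
print(wado_rs)
```

Either URL can then be fetched with a plain HTTP GET, which is exactly the point: a developer with no DICOM background can get useful output from a couple of worked examples like these.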
<br />
Nothing is ever as simple as it seems though, and many committee hours have been spent on the low level details, like parameters, accept headers, character sets, media types and transfer syntaxes. There is insufficient experience to know whether the lack of a SOP Class specific negotiation mechanism really matters or not. But certainly for the simple use cases of getting DICOM PS3.10 or rendered JPEG "files", a few examples probably suffice to get a non-DICOM literate developer handwriting the code on either end without resorting to a toolkit or the need for too many dependencies. If one puts aside the growing "complexity" of HTTP itself,
especially <a href="https://tools.ietf.org/html/rfc7540">HTTP 2.0</a> with all of its optimizations, in its
degenerate form, this <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part18/sect_6.2.html">WADO-URI</a> and <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part18/sect_6.5.html">WADO-RS</a> stuff can be really "simple". Theoretically, <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part18/sect_6.5.html">WADO-RS</a> is also supposed to be "RESTful", <a href="https://www.ics.uci.edu/~fielding/pubs/dissertation/fielding_dissertation.pdf" target="_blank">whatever that is</a>, if <a href="https://news.ycombinator.com/item?id=9138700" target="_blank">anyone actually cares</a>.<br />
<br />
But its main claim to fame is there is no <a href="https://www.w3.org/TR/soap/">SOAP</a> involved. On the subject of which ...<br />
<br />
Somewhere in the middle (or off to one side) we have the old-fashioned <a href="https://www.w3.org/TR/soap/">SOAP</a> Web Services based <a href="http://wiki.ihe.net/index.php?title=Cross-enterprise_Document_Sharing_for_Imaging">XDS-I.b</a>, and the retrospectively DICOM-standardized and extended version of its transfer mechanism, <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part18/sect_6.4.html">WADO-WS</a>. <a href="http://wiki.ihe.net/index.php?title=Cross-enterprise_Document_Sharing_for_Imaging">XDS-I.b</a> includes <a href="https://www.w3.org/TR/soap/">SOAP</a> services to interact with a registry to find stuff (documents and image manifests), and then the image manifest can be used to fetch the DICOM images, either using another <a href="https://www.w3.org/TR/soap/">SOAP</a> transaction (RAD 69 based on ITI 42) or various DICOM or WADO mechanisms.<br />
<br />
Born of a <span class="st">well-intentioned</span> but perhaps misguided attempt to leverage the long defunct <a href="https://en.wikipedia.org/wiki/EbXML">OASIS ebXML</a> standard, and built on the now <a href="http://tech.slashdot.org/story/06/12/20/0155238/google-deprecates-soap-api" target="_blank">universally-despised</a> <a href="https://www.w3.org/TR/soap/">SOAP</a>-based web services, the entire <a href="http://wiki.ihe.net/index.php?title=Cross-Enterprise_Document_Sharing">XDS</a> family suffers from being both complex and not terribly developer friendly. Though the underlying <a href="http://wiki.ihe.net/index.php?title=Cross-Enterprise_Document_Sharing">XDS</a> standards are gaining some traction (perhaps because there really weren't too many competing standards for moving documents around), there are not that many <a href="http://wiki.ihe.net/index.php?title=Cross-enterprise_Document_Sharing_for_Imaging">XDS-I.b</a> implementations actually being used, though certainly some vendors have implemented it (and a few aggressively promote it).<br />
<br />
Or to put it another way, with the benefit of <a href="https://en.wiktionary.org/wiki/hindsight_is_20/20">20-20 hindsight</a>, <a href="http://wiki.ihe.net/index.php?title=Cross-enterprise_Document_Sharing_for_Imaging">XDS-I.b</a> is beginning to look like the worst of all worlds - excessively complex, bloated, dependent on a moribund technology and with a negligible installed base.<br />
<br />
What <a href="http://wiki.ihe.net/index.php?title=Cross-enterprise_Document_Sharing_for_Imaging">XDS-I.b</a> does bring to the table is an architectural concept with registries and repositories and sources. So, rather than throw the baby out with the bathwater, there is ongoing IHE work to get rid of the <a href="https://www.w3.org/TR/soap/">SOAP</a> stuff and make <a href="http://wiki.hl7.org/index.php?title=FHIR">FHIR</a>-based <a href="http://wiki.ihe.net/index.php?title=Mobile_access_to_Health_Documents_%28MHD%29">MHD</a> the new profile on which to implement the same architecture (though it is not phrased in terms of "getting rid" of anything, of course, at least not yet). In IHE Radiology there is ongoing work to redo the first try at <a href="http://www.ihe.net/uploadedFiles/Documents/Radiology/IHE_RAD_Suppl_MHDI.pdf">MHD-I</a> to use <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part18/sect_6.2.html">WADO-URI</a> and <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part18/sect_6.5.html">WADO-RS</a> and the <a href="http://wiki.hl7.org/index.php?title=FHIR">FHIR</a> <a href="https://www.hl7.org/fhir/imagingobjectselection.html" target="_blank">ImagingObjectSelection</a> resource as a manifest.<br />
<br />
Of course, it is very easy to be critical of <a href="http://wiki.ihe.net/index.php?title=Cross-enterprise_Document_Sharing_for_Imaging">XDS-I.b</a> in retrospect.<br />
<br />
Long before it became "obvious" (?) that simple HTTP+URL was sufficient for most use cases, as long as XDS-I, and later <a href="http://wiki.ihe.net/index.php?title=Cross-enterprise_Document_Sharing_for_Imaging">XDS-I.b</a>, were the "only" non-DICOM-protocol approaches sanctioned by IHE, we all ran around promoting it as preferable to proprietary solutions, myself included. There was tacit acceptance that DICOM protocol detractors would never be satisfied with a non-port 80 solution, and so XDS-based image exchange was the only theoretical game in town.<br />
<br />
Fortunately, hardly anybody listened.<br />
<br />
I am oversimplifying, as well as eliding
numerous subtleties (e.g., difficulties of cross-community exchange without URL rewriting, or benefits for caching, concerns about how to pass <a href="https://en.wikipedia.org/wiki/Security_Assertion_Markup_Language">SAML assertions</a>, benefits of leveraging same services and architecture as documents). And I am probably underestimating the size of the installed base (just as protagonists probably exaggerate it).<br />
<br />
But the core message is important ... should we abandon <a href="http://wiki.ihe.net/index.php?title=Cross-enterprise_Document_Sharing_for_Imaging">XDS-I.b</a> now, before it is too late?<br />
<br />
I am increasingly convinced that for every objection some XDS-loving Neanderthal raises against using a light-weight HTTP non-SOAP no-action-semantics-in-the-payload URL-only pseudo-RESTful solution (LWHNSNASITPUOPRS), there is a solution somewhere out in the "real" (non-healthcare) world. Religious wars have been fought over less, but I think I have finally come around to the <a href="http://www.somebits.com/weblog/tech/bad/whySoapSucks.html">SOAP Sucks</a> camp, not because <a href="http://wiki.ihe.net/index.php?title=Cross-enterprise_Document_Sharing_for_Imaging">XDS-I.b</a> can't be made to work, obviously it can, but because nobody in this day and age needs to be burdened with trying to do so.<br />
<br />
Since DICOM and HL7 embraced the <a href="https://en.wikipedia.org/wiki/Representational_state_transfer">RESTful</a> way, it really seems like a waste of time to be swimming against the current, so to mitigate the issue of standards proliferation leading to barriers to interoperability, something has to be sacrificed, and the older less palatable approach may need to die.<br />
<br />
Unfortunately, some folks are pulling in the wrong direction. One major imaging vendor (GE) is totally obsessed with <a href="http://wiki.ihe.net/index.php?title=Cross-Enterprise_Document_Sharing">XDS</a>, and some (though not all) of its representatives jump up and down like <a href="https://web.archive.org/web/20170705190016if_/http://skyrocket.me/wp-content/uploads/sites/108/2013/03/cartman-300x272.jpg" target="_blank">Cartman having a tantrum</a> whenever it is suggested that we retire the no-longer-useful and potentially harmful standards like WADO-WS (and even <a href="http://wiki.ihe.net/index.php?title=Cross-enterprise_Document_Sharing_for_Imaging">XDS-I.b</a> itself perhaps). A few small vendors who have bet the farm on <a href="http://wiki.ihe.net/index.php?title=Cross-Enterprise_Document_Sharing">XDS</a> join the chorus, to prove the point that somebody somewhere has actually used <a href="http://wiki.ihe.net/index.php?title=Cross-enterprise_Document_Sharing_for_Imaging">XDS-I.b</a> for something. Right now there is a discussion in IHE Radiology about extending <a href="http://wiki.ihe.net/index.php?title=Cross-enterprise_Document_Sharing_for_Imaging">XDS-I.b</a> to include more of the <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part18/sect_6.4.html">WADO-WS</a> transactions like fetching rendered images, etc., which is quite the opposite of retirement. <br />
<br />
So, as usual, the standards organizations like DICOM and IHE go back to the cycle of developing and promoting the union of alternatives, not the intersection, and almost everyone suffers. Not least of whom is the customer who has to (a) pay for all the development and testing effort for their vendors to maintain all of these competing interfaces, (b) endure poor performance from any one of these interfaces on which insufficient effort has been devoted to optimization, and (c) be restricted in their choice of products when incompatible choices of competing standards have been implemented. Once upon a time the value proposition for IHE was navigating through the morass of standards but now it is an equal opportunity offender.<br />
<br />
Some folks make out like bandits amongst this chaos, of course, including the more agile newbie <a href="https://en.wikipedia.org/wiki/Vendor_Neutral_Archive">VNA</a> vendors who make it their bread and butter to try and support every imaginable interface (some even claim to support <a href="https://code.google.com/archive/p/medical-imaging-network-transport/">MINT</a>). Whether they work properly or add any actual value is another matter, but there will always be an opportunity for those who make the glue. Can you say "<a href="http://wiki.hl7.org/index.php?title=Interface_Engine">HL7 Interface Engine</a>"? <br />
<br />
Sadly, RSNA has recently jumped on the <a href="http://wiki.ihe.net/index.php?title=Cross-enterprise_Document_Sharing_for_Imaging">XDS-I.b</a> bandwagon with the announcement of their <a href="http://sequoiaproject.org/rsna-image-share-validation-program/">RSNA Image Share Validation</a> program. To be fair, I was among those who years ago encouraged the <a href="http://www.rsna.org/Image_Share.aspx">RSNA Image Share</a> developers to use out-of-the-box <a href="http://wiki.ihe.net/index.php?title=Cross-enterprise_Document_Sharing_for_Imaging">XDS-I.b</a> transactions to implement the original Edge Server to Clearinghouse and <a href="https://www.healthit.gov/providers-professionals/faqs/what-personal-health-record">PHR</a> connections, in lieu of any standard alternatives (given that they wouldn't just use DICOM). But the government handout from the Recovery Act is drying up, it is clear that patients aren't rushing to pay to subscribe to <a href="https://www.healthit.gov/providers-professionals/faqs/what-personal-health-record">PHR</a>s, much less image-enabled ones, and frankly, this project has run its course. I am not really sure why RSNA wants to get involved in the image sharing certification business in the first place (which is what the <a href="http://web.archive.org/web/20160226135112/http://sequoiaproject.org/wp-content/uploads/2015/12/RSNA-Image-Share-Validation-Program-Prospectus.pdf">prospectus</a> describes), but in <a href="http://wiki.ihe.net/index.php?title=Cross-enterprise_Document_Sharing_for_Imaging">XDS-I.b</a> they may have picked the wrong standard for this day and age.<br />
<br />
Of course, maybe we should just give up now and start making a new even simpler completely different <a href="https://xkcd.com/927/">universal standard that covers everyone's use cases</a> :)<br />
Oops, that was <a href="http://wiki.hl7.org/index.php?title=FHIR">FHIR</a>, wasn't it? Subject for another day perhaps. <br />
<br />
David<br />
<br />
PS. You may respond that my complaining about the "complexity" of <a href="http://wiki.ihe.net/index.php?title=Cross-enterprise_Document_Sharing_for_Imaging">XDS-I.b</a>
is a case of the pot calling the kettle black: I am an advocate of
DICOM, and DICOM is hardly "simple" in terms of either its encoding or
its information model (which is why the official <a href="http://dicom.nema.org/dicom/2013/output/chtml/part19/chapter_A.html#sect_A.1">DICOM XML</a> and more recently <a href="http://dicom.nema.org/dicom/2013/output/chtml/part18/sect_F.2.html">DICOM JSON</a>
representations are, at the very least, superficially attractive), or
the size of its documentation (which we have been trying to improve in
terms of navigability).<br />
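For what it's worth, that JSON representation really is simple enough to play with using nothing more than a generic JSON library. Here is a small sketch, standard library only; the attribute tags are real DICOM tags, but the dataset itself is invented:

```python
import json

# A minimal, hand-built dataset in the DICOM PS3.18 Annex F JSON model:
# each attribute is keyed by its 8-digit hexadecimal tag and carries an
# explicit "vr" plus a "Value" array. Patient and values are made up.
dataset = {
    "00080060": {"vr": "CS", "Value": ["MR"]},                        # Modality
    "00100010": {"vr": "PN", "Value": [{"Alphabetic": "Doe^Jane"}]},  # Patient's Name
    "00280010": {"vr": "US", "Value": [512]},                         # Rows
}

encoded = json.dumps(dataset, indent=1)   # what would go over the wire
decoded = json.loads(encoded)             # what a recipient would parse
print(decoded["00100010"]["Value"][0]["Alphabetic"])  # Doe^Jane
```

The person-name value nested under "Alphabetic" is the price paid for component groups, but everything else maps about as directly from the binary encoding as one could hope.<br />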
<br />
And I would agree with you. But
trying to simplify the payload, it turns out, is a lot harder than
trying to simplify the exchange and query protocols, and if we can do
the latter before yet another bloated and excessively complicated
standard is inflicted on the developers and users, why not?<br />
<br />
<br />
PPS. Few people notice it, but there is actually yet another DICOM standard for exchanging images, and that is in <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part19/chapter_8.html">PS3.19 Application Hosting interfaces</a>, which define a <a href="https://www.w3.org/TR/soap/">SOAP</a>-based WS transport intended for interoperability between a host and applications written in different languages and running on the same machine. It is theoretically usable across multiple machines though. Using <a href="https://www.w3.org/TR/soap/">SOAP</a> to pass parameters seemed like the best alternative at the time to making up something new, particularly given the tooling available to implement it in various popular languages. There has been talk in WG 23 of revisiting this with REST instead, but nothing has got off the ground yet; think <a href="http://docs.oracle.com/javaee/6/tutorial/doc/gkknj.html#gmfnu">JSON with JAX-RS and JAXB</a>, or similar. Since "API" is the buzzword <i>du jour</i>, maybe there is life in that idea!<br />
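To illustrate what such a REST-flavored replacement might feel like, here is a deliberately toy sketch, Python standard library only; this is emphatically not the actual PS3.19 interface, and the endpoint and parameter names are invented. Parameters travel in the URL, and the result comes back as JSON, which is essentially all the SOAP machinery was being used for:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class ToyHostHandler(BaseHTTPRequestHandler):
    """Hypothetical REST stand-in for a SOAP parameter-passing call."""
    def do_GET(self):
        # Treat the path as a SOP Instance UID being asked about,
        # and answer with a JSON object instead of a SOAP envelope.
        uid = self.path.strip("/")
        body = json.dumps({"sopInstanceUID": uid, "rows": 512}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), ToyHostHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
with urlopen(f"http://127.0.0.1:{server.server_port}/1.2.3.4") as resp:
    result = json.loads(resp.read())
server.shutdown()
print(result["sopInstanceUID"])  # 1.2.3.4
```

Nothing a WSDL toolchain can do that a dozen lines of any modern language can't, which is rather the point.<br />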
<br />David Cluniehttp://www.blogger.com/profile/17331067317921452126noreply@blogger.com5tag:blogger.com,1999:blog-1367102802658603789.post-36525311941721254012015-10-31T10:14:00.000-07:002015-10-31T10:21:28.250-07:00The slings and arrows of outrageous radiologists - I want my FLA.Summary: W<span class="maBody">e don't need fewer
arrows. We need more arrows more often. And we need better arrows (in
the sense that they are hyperlinked to the findings in the report when
images are rendered, i.e., are Findings-Linked Annotations (FLA)). The term "arrows" being a surrogate for "visual indication of location".</span><br />
<br />
Long Version.<br />
<br />
I came across the strangest article about <a href="http://dx.doi.org/10.1016/j.ejrad.2015.08.011" target="_blank">"arrows" in EJR</a>.<br />
<br />
Now, I don't normally read <a href="http://www.ejradiology.com/" target="_blank">EJR</a> because it is a little expensive, it doesn't come along with any professional society membership I have, I don't work at an institution that gets it, most of its articles are not open access (there is an <a href="http://www.ejropen.com/" target="_blank">EJR Open</a> companion journal though), and it doesn't have a lot of informatics content. But this paper happened to be quoted in full for some reason on <a href="http://www.auntminnieeurope.com/index.aspx?sec=sup&sub=pac&pag=dis&itemId=612058" target="_blank">Aunt Minnie Europe</a>, so I got a <a href="http://www.yourdictionary.com/squizz" target="_blank">squizz</a> without having to wait to receive a pre-print from the authors via <a href="http://www.researchgate.net/" target="_blank">ResearchGate</a> or some other mechanism.<br />
<br />
The thesis of the radiologist authors seems to be that "arrows" on images pointing to findings are a bad thing, and that recipients of the report should read the report instead of having access to such visual aids.<br />
<br />
This struck me as odd, from the perspective of someone who has spent the last two decades or so building and evangelizing about standards and systems to do exactly that, i.e., to make annotations on images and semantically link them to specific report content so that they can be visualized interactively (ideally through DICOM Structured Reports, less ideally through the non-semantic but more widely available DICOM Softcopy Presentation States, and in the worst case in a pre-formatted multimedia rather than plain text report).<br />
<br />
What are the authors' arguments against arrows? To summarize (fairly I hope), arrows:<br />
<ul>
<li>are aesthetically ugly, especially if multitudinous, and may obscure underlying features</li>
<li>draw attention away from unmarked, less obvious findings (may lead to satisfaction of search)</li>
<li>are not a replacement for the more detailed account in the report</li>
<li>are superfluous in the presence of the more detailed account in the report</li>
<li>might be removed (or not be distributed)</li>
<li>detract from the role of the radiologist as a "<span class="maBody">readily accessible collaborator"</span></li>
</ul>
For the sake of argument, I will assume that what the authors mean by "arrows" includes any <span class="maBody">"visual indication of location" rendered on an image, passively or interactively. They actually describe them as "</span><span class="maBody">an unspoken directional signal".</span><br />
<span class="maBody"><br /></span>
The authors appear to conflate the presence of arrows with either the absence of, or perhaps the ignorance of, the report ("<span class="maBody">relying on an arrow alone as a manifestation of our special capabilities", "</span><span class="maBody"><span class="maBody">are merely a figurative crutch we can very well do without"</span>).</span><br />
<br />
<span class="maBody">I would never assert that arrows </span><span class="maBody"><span class="maBody">alone </span>(or any form of selective annotation) substitute for a good report, nor, it would seem to me, would it be best or even common practice to fail to produce a full report. The implication in the paper seems to be that when radiologists use arrows (that they expect will be visible to the report recipient), they record less detail about the location in the report, or the recipient does not read the report. Is that actually the case? Do the authors put forth any evidence to support that assertion? No, they do not; nor any evidence about what recipients actually prefer.</span><br />
<br />
<span class="maBody">I would completely agree with the authors that there is an inherent beauty in many images, and they are best served in that respect unadorned. That's why we have buttons to toggle annotations on and off, including not only arrows but those in the corners for demographics and management as well. And why lead markers suck. And who really cares whether we can check to see if we have the right patient or not? OK, so there are safety issues to consider, but that's another story.</span><br />
<span class="maBody"><br /></span>
<span class="maBody">As for concerns about satisfaction of search, one could equally argue that one should not include an impression or conclusion in a report either, since I gather few recipients will take the time to read more than that. Perhaps they should be forced to wade through reams of verbosity just in case they miss something subtle not restated in its entirety in the impression anyway. And there is no rule that says one can't point out subtle findings with arrows too. Indeed, I was led to believe during my training that it was the primary interpreting radiologist's function (and major source of added value) to detect, categorize and highlight (positively or negatively) those subtle findings that might be missed in the face of the obvious.</span><br />
<span class="maBody"><br /></span>
<span class="maBody">Wrt. superfluousness, I don't know about you, but when I read a long prose description in a report that attempts to describe the precise location of a finding, whether it uses:</span><br />
<ul>
<li><span class="maBody">identifiers ("in series 3, on slice 9, approximately 13.8 mm lateral to the left margin of the descending aorta", which assumes incorrectly that the recipient's viewer numbers things the same way the radiologist's does),</span></li>
<li><span class="maBody">approximate regions ("left breast MLO 4 o'clock position"), or</span></li>
<li><span class="maBody">anatomical descriptions ("apical segment of the right lower lobe")</span></li>
</ul>
<span class="maBody">even if I find something on the image that is plausibly or even undeniably associated with the description, I am always left wondering if I am looking at exactly the same thing as the reporting radiologist is talking about, and with the suspicion that I have missed something. My level of uncertainty is significantly higher than it needs to be. Arrows are not superfluous, they are complementary and add significant clarity.</span><br />
<br />
<span class="maBody">Or to put it another way, there is a reason the wax pencil was invented.</span><br />
<span class="maBody"><br /></span>
<span class="maBody">In my ideal world, every significant localized finding in a report would be intimately linked electronically with a specific set of coordinates in an image, whether that be its center (which might be rendered as an arrow, or a cross-hair, or some other user interface element), or its outline (which might be a geometric shape like an ellipse or rectangle, or an actual outline or filled in region that has been semi-automatically segmented, if volume measurements are reported). Further, the display of such locations would be under my interactive control as a recipient (just as one turns on and off CAD marks, or applies presentation states selectively); this would address the "aesthetic" concern of the annotation obscuring underlying structure.</span><br />
<span class="maBody"><br /></span>
<span class="maBody">We certainly have the standards. Coordinate references in reports were one of the core elements of Dean Bidgood's Structured Reporting (SR) initiative in </span><span class="maBody">DICOM ("<a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2233520/" target="_blank">Documenting the information content of images</a>", 1997). I used a (contrived) example of a human-generated report to emphasize the point in Figure 1 of</span><span class="maBody"><span class="maBody"> my 2000 <a href="http://www.pixelmed.com/srbook.html" target="_blank">DICOM SR textbook</a> (long due for revision, I know)</span>. There was even work to port the DICOM SR coordinate reference pattern into HL7 CDA (although of late this has been de-emphasized in favor of leaving these in the DICOM realm and referencing them, e.g., in <a href="http://dicom.nema.org/medical/dicom/current/output/chtml/part20/chapter_9.html#sect_9.1.2.4" target="_blank">PS3.20</a>).</span><br />
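For the non-believers, the coordinate-reference pattern is simple enough to caricature with plain Python dictionaries. This is only a sketch of the shape of the SR content tree, not the output of any real toolkit; the UIDs and coordinates are invented, though the value types and relationship types are the real DICOM SR ones:

```python
# A TEXT finding whose child SCOORD supplies the image coordinates,
# which is in turn SELECTED FROM a referenced IMAGE. All identifiers
# and coordinates below are hypothetical.
finding = {
    "ValueType": "TEXT",
    "ConceptNameCode": ("121071", "DCM", "Finding"),
    "TextValue": "Spiculated mass in the left breast",
    "ContentSequence": [
        {
            "RelationshipType": "INFERRED FROM",
            "ValueType": "SCOORD",
            "GraphicType": "POINT",
            "GraphicData": [230.5, 412.0],  # column, row in image pixels
            "ContentSequence": [
                {
                    "RelationshipType": "SELECTED FROM",
                    "ValueType": "IMAGE",
                    "ReferencedSOPInstanceUID": "1.2.3.4.5.6",  # invented
                }
            ],
        }
    ],
}

def scoord_targets(item):
    """Collect (graphic type, coordinates, image UID) for each SCOORD child."""
    out = []
    for child in item.get("ContentSequence", []):
        if child["ValueType"] == "SCOORD":
            image = next(c for c in child["ContentSequence"]
                         if c["ValueType"] == "IMAGE")
            out.append((child["GraphicType"], child["GraphicData"],
                        image["ReferencedSOPInstanceUID"]))
    return out

print(scoord_targets(finding))  # [('POINT', [230.5, 412.0], '1.2.3.4.5.6')]
```

A viewer walking such a tree knows exactly which pixel the prose is talking about, which is the whole FLA argument in a dozen lines.<br />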
<br />
<span class="maBody">Nor is this beyond the state of the art of authoring and rendering applications, even if it is not commonly implemented or used. The primary barriers to adoption seem to be:</span><br />
<ul>
<li><span class="maBody">the diversity of the heterogeneous mix of image display, voice reporting and report display systems that are difficult to integrate tightly enough to achieve this,</span></li>
<li><span class="maBody">coupled with the real or perceived difficulty of enabling the radiologist to author more highly linked content without reducing their "productivity" (as currently incentivized).</span></li>
</ul>
<span class="maBody">In a world in which </span><span class="maBody"><span class="maBody">the standard of </span><span class="maBody"><span class="maBody">care in the </span>community is </span>the fax of a printed report, possibly coupled with a CD full of images with a brain-dead viewer (and no presentation state or structured report coordinate rendering)</span><span class="maBody">, the issue of any arrows at all is probably moot. The financial or quality incentives are focused on embellishing the report not with clinically useful content but instead with content for reimbursement optimization. The best we can probably do for these scenarios is the (non-interactive) "multimedia report", i.e., the one that has the selected images or regions of images pre-windowed and embedded in the report with arrows and numbers shared with the findings in the prose, or similar. An old concept once labelled as an "<a href="http://dx.doi.org/10.2214/ajr.167.5.8911158" target="_blank">illustrated</a>" report, recently <a href="http://dx.doi.org/10.1016/j.acra.2013.09.002" target="_blank">revisited</a> or <a href="http://dx.doi.org/10.1016/j.jacr.2014.11.009" target="_blank">renamed (MERR)</a>, but still rarely implemented AFAIK.</span><br />
<br />
<span class="maBody">Even within a single enterprise, </span><span class="maBody"><span class="maBody">the "hyperlink" between specific findings in the report content and the image annotations is usually absent. The EHR and PACS may be nominally "integrated" to the point of being able to trigger the PACS viewer whilst reading the report (whether to get Well Meaningful Use Brownie Points or to actually serve the needs of the users), and the </span><span class="maBody"></span>PACS may be able to render the radiologist's arrows (e.g., if they are stored as presentation states in the PACS). While this scenario is way better than having no arrows at all, it is not IMHO as good as "findings-linked annotations" (let's call them FLA, since we need more acronyms like we need a hole in the head). Such limited integrated deployments are typically present when the lowest common denominator for "report interchange" is essentially the same old plain text report, perhaps "masquerading" as something more sophisticated (e.g., by wrapping the text in CDA or DICOM SR, with or without a few section headings but without "semantic" links from embedded findings to image coordinates or references to softcopy presentation states).</span><br />
<br />
<span class="maBody">Likewise, though the <a href="https://www.rsna.org/Reporting_Initiative.aspx" target="_blank">radiology</a> and <a href="http://dx.doi.org/10.1016/j.jacc.2014.03.020" target="_blank">cardiology</a> professional societies have been strongly pushing so-called "structured reporting" again lately, these efforts are pragmatic and only an incremental extension to the lowest common denominator. They are still essentially limited to standardization of layout and section headings, and do not extend to visual hyperlinking of findings to images. Not to dismiss the importance of these efforts; they are a vital next step, and when adopted offer valuable improvements, but </span><span class="maBody"><span class="maBody">IMHO they are </span>not sufficient to communicate most effectively with the report recipients.</span><br />
<span class="maBody"><br /></span>
<span class="maBody">So, as radiologists worry about their inevitable </span><span class="maBody"><span class="maBody">outsourcing and </span>commodification, perhaps they should be more concerned about how to provide added value </span><span class="maBody"><span class="maBody">beyond the traditional verbose prose</span>, rather than bemoaning the hypothetical (if not entirely spurious) disadvantages of visual cues. The ability to "illustrate" a report effectively may become a key component of one's "<a href="https://books.google.com/books?isbn=0684841460" target="_blank">competitive advantage</a>" at some point.</span><br />
<span class="maBody"><br /></span>
<span class="maBody">I suggest that we need more FLA to truly enable radiologists to be "</span><span class="maBody">informative and participatory as caregivers, alerting our colleagues with more incisiveness and counsel" (paraphrasing the authors). That is, to more effectively combine the annotations and the report, rather than to exaggerate the importance of one over the other.</span><br />
<span class="maBody"><br /></span>
David<br />
<br />
PS. Patients read their reports and look at their images too, and they really seem to like arrows, not many of them being trained anatomists.<br />
<br />
PPS. I thought for a moment that the article might be a joke, and that the authors were being sarcastic, but it's Halloween, not April Fools'; the paper was submitted in August and repeated on Aunt Minnie, so I guess it is a serious piece with the intention of being provocative rather than being taken literally. It certainly provoked me!<br />
<br />
PPPS. Do not interpret my remarks to in any way advocate a "burned in" arrow, i.e., one that replaces the original underlying pixel values and which is then sent as the only "version" of the image; that is obviously unacceptable. I understand the authors' article to be referring to arrows in general and not that abhorrent encoding mechanism in particular.<br />
<br />David Cluniehttp://www.blogger.com/profile/17331067317921452126noreply@blogger.com1tag:blogger.com,1999:blog-1367102802658603789.post-89893177524879248142015-10-22T11:26:00.001-07:002015-10-22T11:26:36.745-07:00I think she's dead ... no I'm not ... Is PACS pining for the fiords?Summary: The death of PACS, and its deconstruction, have been greatly exaggerated. Not just recently, but 12 years ago.<br />
<br />
Long Version:<br />
<br />
Mixing quotes from different Monty Python sketches (<a href="http://www.montypython.net/scripts/maryscot.php" target="_blank">Death of Mary Queen of Scots</a>, <a href="http://www.montypython.net/scripts/petshop.php" target="_blank">Pet Shop</a>) is probably almost as bad as mixing metaphors, but as I grow older it is more effort to separate these early associations.<br />
<br />
These lines came to mind when I was unfortunately reminded of one of the most annoying articles published in the last few years, "<a href="http://dx.doi.org/10.1007%2Fs10278-013-9660-1" target="_blank">PACS in 2018: An Autopsy</a>", which is in essence an unapologetic, unsubstantiated promotion of the VNA concept.<br />
<br />
Quite apart from the fact that nobody can agree on WTF a VNA actually is (despite my own lame attempt at a retrospective <a href="https://en.wikipedia.org/wiki/Vendor_Neutral_Archive" target="_blank">Wikipedia definition</a>), this paper is a weird collage of observable technological trends in standards and products, marketing repackaging of existing technology with new labels, and fanciful desiderata that lack real market drivers or evidence of efficacy (or the regulatory (mis-)incentives that sometimes serve in lieu).<br />
<br />
That's fine though, since it is reasonable to discuss alternative architectures and consider their pros and cons. But wait, surprise: there is actually very little, if any, substance there. No discussion of the relative merits or drivers for change? Is this just a fluff piece, the sort of garbage that one might see in a vendor's press release or in one of those junk mail magazines that clutter one's physical mailbox? All hype and no substance? What is it doing in a supposedly peer-reviewed scientific journal like JDI?<br />
<br />
OK, so it's cute, and it's provocative, and let's give the paper the benefit of the doubt and categorize it as editorial rather than scientific, which allows for some latitude.<br />
<br />
And no doubt, somewhat like Keeping Up with the Kardashians and its ilk, since folks seem to be obsessed with train wrecks, it is probably destined to become the "most popular JDI article of all time".<br />
<br />
And let's be even more generous and forgive the drawing of pretty boxes that smells like "<a href="https://en.wikipedia.org/wiki/Marchitecture" target="_blank">Marchitecture</a>". Or that it would be hard for a marketing executive to draft a more buzzword-compliant brochure. And perhaps as an itemized list of contemporary buzzwords, it has some utility.<br />
<br />
My primary issue is with the title, specifically the mention of "autopsy".<br />
<br />
Worse, the author's follow up at the SIIM 2015 meeting in his opening address entitled "<a href="http://siim.org/?page=15next_evolution" target="_blank">The Next Imaging Evolution: A World Without PACS (As We Know It)</a>" perpetuated this theme of impending doom for PACS, a theme that dominated the meeting.<br />
<br />
Indeed, though the SIIM 2015 meeting was, overall, very enjoyable and relatively informative, albeit repetitive, the main message I returned home with was the existence of a pervasive sense of desperation among the attendees, many of whom seem to fear not just commoditization (Paul Chang's theme in past years) but perhaps even total irrelevance in the face of the emerging "threat" that is enterprise image management. I.e., PACS administrators and radiologists are doomed to become redundant. Or at least they are if they don't buy products with different labels, or re-implement the same solutions with different technology.<br />
<br />
When did SIIM get hijacked by fear-mongers and doubters? We should be demanding more rigidly defined areas of doubt and uncertainty ... wait, no, wrong radio show.<br />
<br />
OK, I get that many sites are faced with the challenge of expanding imaging beyond radiology and cardiology, and indeed many folks like the VA have been doing that for literally decades. And I get that Meaningful Use consumes all available resources. And that leveraging commodity technology potentially lowers barriers to entry. And that mobile devices need to be integrated. And that radiology will no longer be a significant revenue stream as it becomes a cost rather than profit center (oops, who said that). But surely the message that change may be coming can be spun positively, as an opportunity rather than a threat, as incremental improvement rather than revolution. Otherwise uninformed decision makers as well as uneducated worker bees who respond to hyperbole rather than substance, or who are seeking excuses, may be unduly influenced in undesirable or unpredictable ways.<br />
<br />
More capable commentators than I have criticized this trend of hyping the supposed forthcoming "death of PACS", ranging from <a href="http://www.auntminnie.com/index.aspx?sec=sup&sub=pac&pag=dis&ItemID=111667" target="_blank">Mike Cannavo</a> to Herman O's <a href="http://blog.otechimg.com/2015/06/siim2015-my-top-ten-of-whats-new.html" target="_blank">review of SIIM 2015</a> and the equally annoying <a href="http://blog.otechimg.com/2015/09/truths-and-myths-about-deconstructed.html" target="_blank">deconstruction mythology</a>.<br />
<br />
Call me a <a href="https://en.wikipedia.org/wiki/Luddite" target="_blank">Luddite</a>, but these sorts of predictions of PACS demise are not new; indeed, I just came across an old RSNA 2003 abstract by Nogah Haramati entitled "<a href="https://www.researchgate.net/publication/266133038_Web-based_Viewers_as_Image_Distribution_Solutions_Is_PACS_Already_a_Dead_Concept" target="_blank">Web-based Viewers as Image Distribution Solutions: Is PACS Already a Dead Concept?</a>". Actually, encountering that abstract was what prompted me to write this diatribe, and triggered the festering irritation to surface. It is interesting to consider the current state of the art in terms of web viewing and what is currently labelled as "PACS" in light of that paper, considering it was written and presented 12 years ago. Unfortunately I don't have the slides, just the abstract, but I will let you know if/when I do get hold of them.<br />
<br />
One has to wonder to what extent recent obsession with this morbid terminology represents irresponsible fear mongering, detachment from whatever is going on in the "real world" (something I am often accused of), self-serving promotion of a new industry segment, extraordinary popular delusions and the madness of crowds, or just a desire to emulate the breathless sky-is-falling reporting style that seems to have made the transition from cable news even to documentary narrative (judging by the "Yellowstone fauna are doomed" program we watched at home on Animal Planet the other night). Where is <a href="https://en.wikipedia.org/wiki/David_Attenborough" target="_blank">David Attenborough</a> when you need him? Oh wait, I think he's dead. No he's not!<br />
<br />
David<br />
<br />
<i><span class="st">plus c'est la même chose</span></i><br />
<br />
<span class="st"></span><span class="st"></span> David Cluniehttp://www.blogger.com/profile/17331067317921452126noreply@blogger.com7tag:blogger.com,1999:blog-1367102802658603789.post-11164069502904533812015-10-04T10:54:00.000-07:002015-10-04T10:54:30.096-07:00What's that 'mean', or is 'mean' 'meaningless'?Summary: The current SNOMED code for "mean" used in DICOM is not defined to have a particular meaning of mean, which comes to light when considering adding geometric as opposed to arithmetic mean. Other sources like NCI Thesaurus have unambiguously defined terms. The STATO formal ontology does not help because of its circular and incomplete definitions.<br />
<br />
Long Version:<br /><br />In this <a href="https://www.youtube.com/v/umy2V114PNs" target="_blank">production company closing logo</a> for Far Field Productions, a boy points to a tree and says "what's that mean?"<br />
<br />
One might well ask when reading DICOM PS3.16 and trying to decide when to use the coded "concept" (R-00317, SRT, "Mean") (SCT:373098007).<br />
<br />
This question arose when Mathieu Malaterre asked about adding "geometric mean", which means (!) it is now necessary to distinguish "geometric" from "arithmetic" mean.<br />
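The distinction is not academic, of course; the two functions only agree when all the values are equal, and Python's statistics module, for one, is careful to name them separately:

```python
from math import isclose
from statistics import geometric_mean, mean

values = [2, 8]

# Arithmetic mean: the sum of the values divided by how many there are.
arithmetic = mean(values)           # (2 + 8) / 2 = 5
# Geometric mean: the n-th root of the product of the values.
geometric = geometric_mean(values)  # sqrt(2 * 8) = 4, up to floating point

print(arithmetic, geometric)
```

Which of these a DICOM measurement encoded with the existing "Mean" code was intended to convey is precisely the question at hand.<br />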
<br />
As you probably know, DICOM prefers not to "make up" its own "concepts" for such things, but to defer to external sources when possible. SNOMED is a preferred such external source (at least for now, pending an updated agreement with IHTSDO that will allow DICOM to continue to add SNOMED terms to PS3.16 and allow implementers to continue to use them without license or royalty payments, like the old agreement). However, when we do this, we do not provide explicit (textual or ontologic) definitions, though we may choose to represent one of multiple possible alternative terms (synonyms) rather than the preferred term, or indeed make up our own "code meaning" (which is naughty, probably, if it subtly alters the interpretation).<br />
<br />
So what does "mean" "mean"?<br />
<br />
Well, SNOMED doesn't say anything useful about (R-00317, SRT, "Mean") (SCT:373098007). The SNOMED "concept" for "mean" has parents:<br />
<br />
SNOMED CT Concept (SNOMED RT+CTV3)<br />
> Qualifier value (qualifier value)<br />
> Descriptor (qualifier value)<br />
> Numerical descriptors (qualifier value)<br />
<br />
which doesn't help a whole lot. This is pretty par for the course with SNOMED, even though some SNOMED "concepts" (not this one) have (in addition to their "Is a" hierarchy) a more formal definition produced by other types of relationship (e.g., "Procedure site - direct", "Method"), etc. I believe these are called "fully defined" (as distinct from "primitive").<br />
<br />
So one is left to interpret the SNOMED "term" that is supplied as best one can.<br />
<br />
UMLS has (lexically) mapped SCT:373098007 to <a href="https://uts.nlm.nih.gov/metathesaurus.html#C1298794;0;1;CUI;2015AA;EXACT_MATCH;*;" target="_blank">UMLS:C1298794</a>, which is "Mean - numeric estimation technique", and unfortunately has no mappings to other schemes (i.e., it is a dead end). UMLS seems to have either consciously or accidentally not linked the SNOMED-specific meaningless mean with any of <a href="https://uts.nlm.nih.gov/metathesaurus.html#C0444504;0;1;CUI;2015AA;EXACT_MATCH;*;" target="_blank">(C0444504 ,UMLS, "Statistical mean")</a>, <a href="https://uts.nlm.nih.gov/metathesaurus.html#C2347634;0;1;CUI;2015AA;EXACT_MATCH;*;" target="_blank">(C2347634, UMLS, "Population mean")</a> or <a href="https://uts.nlm.nih.gov/metathesaurus.html#C2348143;0;1;CUI;2015AA;EXACT_MATCH;*;" target="_blank">(C2348143, UMLS, "Sample mean")</a>.<br />
<br />
There is no UMLS entry for "arithmetic mean" that I could find, but the "statistical mean" that UMLS reports, is linked to the "mean" from NCI Thesaurus, <a href="https://ncit.nci.nih.gov/ncitbrowser/ConceptReport.jsp?dictionary=NCI_Thesaurus&code=C53319&ns=NCI_Thesaurus" target="_blank">(C53319, NCIt, "Mean")</a>, which is defined textually as one might expect, as "the sum of a set of values divided by the number of values in the set". This is consistent with how Wikipedia, the ultimate albeit evolving source of all knowledge, defines "<a href="https://en.wikipedia.org/wiki/Arithmetic_mean" target="_blank">arithmetic mean"</a>.<br />
<br />
SNOMED has no "geometric mean" but UMLS and NCI Thesaurus do. <a href="https://uts.nlm.nih.gov/metathesaurus.html#C2986759;0;1;CUI;2015AA;EXACT_MATCH;*;" target="_blank">UMLS:C2986759</a> maps to <a href="https://ncit.nci.nih.gov/ncitbrowser/ConceptReport.jsp?dictionary=NCI_Thesaurus&code=C94906&ns=NCI_Thesaurus" target="_blank">NCIt:C94906</a>.<br />
<br />
One might expect that one should be able to do better than arbitrary textual definitions for a field as formalized as statistics. Sure enough I managed to find <a href="http://frog.oerc.ox.ac.uk:8080/stato-app/" target="_blank">STATO</a>, a general-purpose STATistics Ontology, which looked promising on the face of it. One can poke around in it <a href="http://bioportal.bioontology.org/ontologies/STATO" target="_blank">on-line</a> (hint: look at the classes tab and expand the tree), or download the OWL file and use a tool like <a href="http://protege.stanford.edu/" target="_blank">Protégé</a>.<br />
<br />
If you are diligent (and are willing to wade through the <a href="http://ifomis.uni-saarland.de/bfo/" target="_blank">Basic Formal Ontology (BFO)</a> based hierarchy): <br />
<br />
entity<br />
> continuant<br />
> dependent continuant<br />
> generic dependent continuant<br />
> information content entity<br />
> data item<br />
> measurement data item<br />
> measure of central tendency<br />
> average value<br />
<br />
one finally gets to a child, "average value", which has an "alternative term" of "arithmetic mean".<br />
<br />
Yeah!<br />
<br />
But wait, what is its definition? There is a textual annotation "definition" that is "a data item that is produced as the output of an averaging data transformation and represents the average value of the input data".<br />
<br />
F..k! After all that work, can you say "circular"? I am sure Mr. Rogers can.<br />
<br />
More formally, STATO says "average value" is equivalent to "is_specified_output_of some 'averaging data transformation'". OK, maybe there is hope there, so let's look at the definition of "averaging data transformation" in the "occurrent" hierarchy (don't ask; read the <a href="https://books.google.com/books?id=AUxQCgAAQBAJ" target="_blank">"Building Ontologies with Basic Formal Ontology" book</a>).<br />
<br />
Textual definition: "An averaging data transformation is a data transformation that has objective averaging". Equivalent to "(has_specified_output some 'average value') or (achieves_planned_objective some 'averaging objective')".<br />
<br />
Aargh!<br />
<br />
Shades of <a href="https://en.wikipedia.org/wiki/Lexical_semantics" target="_blank">lexical semantics</a> (<a href="http://www.cambridge.org/us/academic/subjects/languages-linguistics/semantics-and-pragmatics/lexical-semantics" target="_blank">Cruse</a> is a good read, by the way), and about as useful for our purposes :(<br />
<br />At least though, we know that STATO:'average value' is a sub-class of STATO:'measure of central tendency', which has a textual definition of "a measure of central tendency is a data item which attempts to describe a set of data by identifying the value of its centre", so I guess we are doing marginally better than SNOMED in this respect (but that isn't a very high bar). Note that in the previous sentence I didn't show "codes" for the STATO "concepts", because it doesn't seem to define "codes", and just uses the human-readable "labels" (but Cimino-Desiderata-non-compliance is a subject for another day).<br />
<br />
In my quest to find a sound ontological source for the "concept" of "geometric mean", I was also thwarted. No such animal in STATO yet, apparently, at least as far as I could find (maybe I should ask them).<br />
<br />
So not only does STATO have useless circular definitions but it is not comprehensive either. Disappointed!<br />
<br />
So I guess the best we can do in DICOM for now, given that the installed base (especially of ultrasound devices) probably uses (R-00317, SRT, "Mean") a lot, is to add text that says when we use that code, we really "mean" "mean" in the sense of "arithmetic mean", and not the more generic concept of other things called "mean", and add a new code that is explicitly "geometric mean". Perhaps SNOMED will add a new "concept" for "geometric mean" on request and/or improve their "numerical descriptors" hierarchy, but in the interim either the NCI Thesaurus term <a href="https://ncit.nci.nih.gov/ncitbrowser/ConceptReport.jsp?dictionary=NCI_Thesaurus&code=C94906&ns=NCI_Thesaurus" target="_blank">NCIt:C94906</a> or the UMLS entry <a href="https://uts.nlm.nih.gov/metathesaurus.html#C2986759;0;1;CUI;2015AA;EXACT_MATCH;*;" target="_blank">UMLS:C2986759</a> would seem to be adequate for our purposes. Sadly, the more formal ontologies have not been helpful in this respect, at least not the one I could find, anyway.<br />
<br />
Maybe we should also be extremely naughty and replace all uses of (R-00317, SRT, "Mean") in the DICOM Standard with (R-00317, SRT, "Arithmetic mean"), just to be sure there is no ambiguity in the DICOM usage (and suggest to SNOMED that they add it as an alternative term). This would be less disruptive to the DICOM installed base than replacing the inadequately defined SNOMED code with the precisely defined NCI Thesaurus code.<br />
<br />David <br />
<br />
PS. I italicize "concept" because there is debate over what SNOMED historically and currently defines "concept" to be, quite apart from the philosophical distinctions made by "realist" and "idealist" ontologists (or is it "nominalists" and "conceptualists"?). I guess you know you are in trouble when you invoke Aristotle. Sort of like invoking Lincoln I suppose (sounds better when James McEachin says it).<br />
<br />David Cluniehttp://www.blogger.com/profile/17331067317921452126noreply@blogger.com1tag:blogger.com,1999:blog-1367102802658603789.post-22794396996668310392014-10-26T10:32:00.003-07:002014-10-26T10:43:15.682-07:00Keeping up with Mac Java - Bundling into Executable AppsSummary: Packaging a Java application into an executable Mac bundle is not difficult, but has changed over time; JavaApplicationStub is replaced by JavaAppLauncher; manually building the package content files and hand editing the Info.plist is straightforward, but the organization and properties have changed. Still irritating that JWS/JNLP does not work properly in Safari.<br />
<br />
Long Version.<br />
<br />
I have long been a fan of Macs and of Java, and I have a pathological aversion to writing single-platform code, if for no other reason than that my favorite platforms tend to vanish without much notice. Since I am a command-line weenie, use Xcode only for text editing and never bother much with "integrated development environments" (since they tend to vanish too), I am also a fan of "make", and tend to use it in preference to "ant" for big projects. I am sure "ant" is really cool, but editing all those build.xml files just doesn't appeal to me. This probably drives the users of my source code crazy, but c'est la vie.<br />
<br />
The relevance of the foregoing is that my Neanderthal approach makes keeping up with Apple's and Oracle's changes to the way in which Java is developed and deployed on the Mac a bit of a challenge. I do need to keep up, because my primary development platform is my Mac laptop, since it has the best of all three "worlds" running on it, the Mac stuff, the Unix stuff and the Windows stuff (under Parallels), and I want my tools to be as useful to as many folks as possible, irrespective of their platform of choice (or that which is inflicted upon them).<br />
<br />
Most of the tools in my <a href="http://www.dclunie.com/pixelmed/software/" target="_blank">PixelMed DICOM toolkit</a>, for example, are intended to be run from the command line, but occasionally I try to make something vaguely useful with a user interface (not my forte), like <a href="http://www.dclunie.com/pixelmed/software/webstart/DoseUtilityUsage.html" target="_blank">DoseUtility</a> or <a href="http://www.dclunie.com/pixelmed/software/webstart/DicomCleanerUsage.html" target="_blank">DicomCleaner</a>. I deploy these as <a href="http://docs.oracle.com/javase/tutorial/deployment/webstart/" target="_blank">Java Web Start</a>, which fortunately continues to work fine for Windows, as well as for Firefox users on any platform, but, since an unfortunate "<a href="http://support.apple.com/kb/HT5672" target="_blank">security fix</a>" from Apple, it is not so great in Safari anymore (it downloads the JNLP file, which you have to go find and open manually, rather than automatically starting; blech!). I haven't been able to find a way to restore JNLP files to the "CoreTypes safe list", since the "XProtect.meta.plist" and "XProtect.plist" files in "/System/Library/CoreServices/CoreTypes.bundle/Contents/Resources/" don't seem to be responsible for this undesirable change in behavior, and I haven't found an editable file that is yet.<br />
<br />
Since not everyone likes JWS, and in some deployment environments it is disabled, I have for a while now also been creating selected downloadable executable bundles, both for <a href="http://www.dclunie.com/pixelmed/software/winexe/" target="_blank">Windows</a> and the <a href="http://www.dclunie.com/pixelmed/software/macexe/" target="_blank">Mac</a>.<br />
<br />
Once upon a time, the way to build Mac applications was with a tool that Apple supplied called "<a href="https://developer.apple.com/legacy/library/documentation/Java/Conceptual/Jar_Bundler/Jar_Bundler.pdf" target="_blank">jarbundler</a>". This did the work of populating the tree of files that constitute a Mac application "<a href="https://developer.apple.com/library/mac/documentation/CoreFoundation/Conceptual/CFBundles/BundleTypes/BundleTypes.html" target="_blank">bundle</a>"; every Mac application is really a folder called "something.app", and it contains various property files and resources, etc., including a binary executable file. In the pre-Oracle days, when Apple supplied its own flavor of Java, the necessary binary file was "JavaApplicationStub", and jarbundler would stuff that into the necessary place when it ran. There is <a href="https://developer.apple.com/library/mac/documentation/Java/Conceptual/Java14Development/03-JavaDeployment/JavaDeployment.html" target="_blank">obsolete documentation</a> of this still available from Apple.<br />
<br />
Having used jarbundler once, to see what folder structure it made, I stopped using it and just manually cut and pasted stuff into the right places for each new application, and mirrored what jarbundler did to the Info.plist file when JVM options needed to be added (such as to control the heap size), and populated the resources with the appropriate jar files, updated the classpaths in Info.plist, etc. Automating updates to such predefined structures in the Makefiles was trivial. Since I was using very little, if anything, that was Apple-JRE specific in my work, when Apple stopped doing the JRE and Oracle took over, it had very little impact on my process. So now I am in the habit of using various bleeding edge OpenJDK versions depending on the phase of the moon, and everything still seems to work just fine (putting aside changes in the appearance and performance of graphics, a story for another day).<br />
<br />
Even though I have been compiling to target the 1.5 JVM for a long time, just in case anybody was still on such an old unsupported JRE, I finally decided to bite the bullet and switch to 1.7. This seemed sensible when I noticed that Java 9 (with which I was experimenting) would no longer compile to such an old target. After monkeying around with the relevant javac options (-target, -source, and -bootclasspath) to silence various (important) warnings, everything seemed good to go.<br />
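For the record, the combination amounts to something like the following Makefile fragment (a sketch only; the BOOTJAR7 path is purely illustrative and must point at the rt.jar of a real JDK 7 installation):

```makefile
# Hypothetical Makefile fragment (illustrative only): compile current source
# so that the class files run on a 1.7 JRE. BOOTJAR7 is an assumed path;
# point it at the rt.jar of an actual JDK 7. Recipe lines must begin with a tab.
BOOTJAR7 = /Library/Java/JavaVirtualMachines/jdk1.7.0.jdk/Contents/Home/jre/lib/rt.jar

.SUFFIXES: .java .class
.java.class:
	javac -source 1.7 -target 1.7 -bootclasspath $(BOOTJAR7) $<
```

Without the -bootclasspath, javac (rightly) warns that compiling against a newer bootstrap classpath with an older -source/-target can produce classes that fail at runtime on the old JRE.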
<br />
Until I copied one of these 1.7 targeted jar files into a Mac application bundle, and thought hey, why not rev up the JVMVersion property from "1.5+" to "1.7+"? Then it didn't work anymore and gave me a warning about "unsupported versions".<br />
<br />
Up to this point, for years I had been smugly ignoring all sorts of anguished messages on the <a href="https://lists.apple.com/mailman/listinfo/java-dev" target="_blank">Mac Java mailing list</a> about some new tool called "<a href="http://docs.oracle.com/javase/7/docs/technotes/guides/jweb/packagingAppsForMac.html" target="_blank">appbundler</a>" described by Oracle, and the Apple policy that executable apps could no longer depend on the installed JRE, but instead had to be bundled with their own complete copy of the appropriate JRE (see this <a href="http://docs.oracle.com/javase/7/docs/technotes/guides/jweb/packagingAppsForMac.html#bundle_jre" target="_blank">link</a>). I was content being a fat dumb and happy ostrich, since things were working fine for me, at least as soon as I <a href="http://www.cnet.com/news/how-to-bypass-gatekeeper-in-os-x-mavericks/" target="_blank">disabled</a> all that Gatekeeper nonsense by allowing apps from "anywhere" to run (i.e., not just from the App Store, and without signatures), which I do routinely.<br />
<br />
So, when my exposed ostrich butt got bitten by my 1.7 target changes (or whatever other incidental change was responsible), I finally realized that I had to either deal with this properly, or give up on using and sharing Mac executables. Since I have no idea how many, if any, users of my tools are dependent on these executables (I suspect not many), giving up wouldn't have been so bad except that (a) I don't like to give up so easily, and (b) occasionally the bundled applications are useful to me, since they support such things as putting it in the Dock, dragging and dropping to an icon, etc.<br />
<br />
How hard can this be, I thought? Just run appbundler, right? Well, it turns out that appbundler depends on using ant, which I don't normally use, and its configuration out of the box doesn't seem to handle the JVM options I wanted to specify. One can download it from <a href="http://java.net/projects/appbundler" target="_blank">java.net</a>, and here is its <a href="http://java.net/downloads/appbundler/appbundler.html" target="_blank">documentation</a>. I noticed it seemed to be a little old (two years) and does not seem to be actively maintained by Oracle, which is a bit worrying. It turns out there is a <a href="https://bitbucket.org/infinitekind/appbundler" target="_blank">fork</a> of it that is maintained by others (infinitekind) that has more configuration options, but this all seemed to be getting a little more complicated than I wanted to have to deal with. I found a post from Michael Hall on the Mac Java developers mailing list that mentioned a tool he had written, <a href="http://www195.pair.com/mik3hall/index.html#appconverter" target="_blank">AppConverter</a>, which would supposedly convert the old to the new. Sounded just like what I needed. Unfortunately, it did nothing when I tried it (did not respond to a drag and drop of an app bundle as promised).<br />
<br />
I was a bit bummed at this point, since it looked like I was going to have to trawl through the source of one of the appbundler variants or AppConverter, but then I decided I would first try and just cheat, and see if I could find an example of an already bundled Java app, and copy it.<br />
<br />
AppConverter turned out to be useful after all, if only to provide a template for me to copy, since when I opened it up to show the Package Contents, sure enough, it was a Java application, contained a copy of the java binary executable JavaAppLauncher, which is what is used now instead of JavaApplicationStub, and had an Info.plist that showed what was necessary. In addition, it was apparent that the folder where the jar files go has moved, from being in "Contents/Resources/Java" to "Contents/Java" (and various posts on the Mac Java developers mailing list mentioned that too).<br />
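In other words, the new-style layout looks something like this (a sketch; the jar and icon file names are just illustrative):

```
DicomCleaner.app/
  Contents/
    Info.plist
    MacOS/
      JavaAppLauncher      <- replaces the old JavaApplicationStub
    Java/                  <- jars live here now, not in Contents/Resources/Java/
      DicomCleaner.jar
    Resources/
      DicomCleaner.icns
```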
<br />
So, with a bit of manual editing of the file structure and the Info.plist, and copying the JavaAppLauncher out of AppConverter, I got it to work just fine, without the need to figure out how to run and configure appbundler.<br />
<br />
By way of example, here is the Package Contents of DicomCleaner the old way:<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgAVTiHRepEBnk4BTQ0pduEA_7wN2WJbyN7EXLM4HI0FXVVFIW6iP88kaXEPqqJGBAoJRbW75LFFrfn3bQgwxpmGTj-hOCnCKIcD2Mp6A6HOlKHjlWSbFyRBtaOXkt5X87kBhJyN8jkgaT1/s1600/PackageContents_old.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgAVTiHRepEBnk4BTQ0pduEA_7wN2WJbyN7EXLM4HI0FXVVFIW6iP88kaXEPqqJGBAoJRbW75LFFrfn3bQgwxpmGTj-hOCnCKIcD2Mp6A6HOlKHjlWSbFyRBtaOXkt5X87kBhJyN8jkgaT1/s1600/PackageContents_old.png" height="320" width="309" /></a></div>
<br />
and here it is the new way:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOEGsa8eSCcKKiR93K9OFslA2iXu1ALto5dBin2HXi6VWyIN8xkRJTdAfP7GlTB_4_YydoiAz6VPkn1GFlN_JvEAU2GzYNmifjryOphsPi6l3KFv-ZchhzquPHnc3HL_j2Ji8ssiW6q98e/s1600/PackageContents_new.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOEGsa8eSCcKKiR93K9OFslA2iXu1ALto5dBin2HXi6VWyIN8xkRJTdAfP7GlTB_4_YydoiAz6VPkn1GFlN_JvEAU2GzYNmifjryOphsPi6l3KFv-ZchhzquPHnc3HL_j2Ji8ssiW6q98e/s1600/PackageContents_new.png" height="320" width="288" /></a></div>
<br />
And here is the old Info.plist:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNKW-BEPjeUaLQYAm347emJgIkvzL0HPJLe7SI5RsnzvAytZTuHGKK7G80kY3fFY_OFjB5Uebd8L_XGWw-anj8a04VNahM8kKMfsmRHFzIQd4o_fiMy4tpFyvVo_KSUuqqHWRP466zsRIN/s1600/Info.plist_old.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNKW-BEPjeUaLQYAm347emJgIkvzL0HPJLe7SI5RsnzvAytZTuHGKK7G80kY3fFY_OFjB5Uebd8L_XGWw-anj8a04VNahM8kKMfsmRHFzIQd4o_fiMy4tpFyvVo_KSUuqqHWRP466zsRIN/s1600/Info.plist_old.png" height="253" width="320" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
and here is the new Info.plist:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhE6YW_GYyYWsxFtg77X2CjxlSrynF3aPP8Ej8TLN9Vj2xW14umFusIng53jAkFSZDTRoXxoh9tzS8ALXDqQKTC37gVIei6KcEBd62BtINeg4s4A3AkDf2cHtoRV_jE5Ejln6DUtLpZH-p_/s1600/Info.plist_new.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhE6YW_GYyYWsxFtg77X2CjxlSrynF3aPP8Ej8TLN9Vj2xW14umFusIng53jAkFSZDTRoXxoh9tzS8ALXDqQKTC37gVIei6KcEBd62BtINeg4s4A3AkDf2cHtoRV_jE5Ejln6DUtLpZH-p_/s1600/Info.plist_new.png" height="197" width="320" /></a></div>
Note that it is no longer necessary to specify the classpath (not even sure how to); apparently the JavaAppLauncher adds everything in Contents/Java to the classpath automatically.<br />
<br />
Rather than have all the Java properties under a single Java key, the JavaAppLauncher seems to use a JVMMainClassName key rather than Java/MainClass, and JVMOptions rather than Java/VMOptions. Also, I found that in the absence of a specific Java/Properties/apple.laf.useScreenMenuBar key, another item in JVMOptions would work.<br />
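Transcribed into text (with a hypothetical main class name, and purely illustrative JVM options), the new-style Info.plist amounts to something like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>CFBundleExecutable</key>
	<string>JavaAppLauncher</string>
	<key>JVMMainClassName</key>
	<string>com.example.DicomCleaner</string><!-- hypothetical; substitute the real main class -->
	<key>JVMOptions</key>
	<array>
		<string>-Xmx768m</string><!-- illustrative heap size -->
		<string>-Dapple.laf.useScreenMenuBar=true</string>
	</array>
</dict>
</plist>
```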
<br />
Why whoever wrote appbundler thought that they had to introduce these gratuitous inconsistencies, when they could have perpetuated the old Package Content structure and Java/Properties easily enough, I have no idea, but at least the structure is sufficiently "obvious" so as to permit morphing one to the other.<br />
<br />
Though I had propagated various properties that jarbundler had originally included, and added one that AppConverter had used (Bundle display name), I was interested to know just what the minimal set was, so I started removing stuff to see if it would keep working, and sure enough it would. Here is the bare minimum that "works" (assuming you don't need any JVM options, don't care what name is displayed in the top line and despite the <a href="https://developer.apple.com/library/mac/documentation/CoreFoundation/Conceptual/CFBundles/BundleTypes/BundleTypes.html" target="_blank">Apple documentation's</a> list of "required" properties):<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgohbcr1ztH_5asytTvq_f-gjt4e6fKDcGl38fvoYbvgisuShG8b7fffp1KKsS03Atw5l4QaxQpBDD-6RQco6Su-vDe4lGQtelVSroTpJmkl9RyWesEVFKq5s6z3hXzP3gkme5HuT7mXg2j/s1600/Info.plist_minimal.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgohbcr1ztH_5asytTvq_f-gjt4e6fKDcGl38fvoYbvgisuShG8b7fffp1KKsS03Atw5l4QaxQpBDD-6RQco6Su-vDe4lGQtelVSroTpJmkl9RyWesEVFKq5s6z3hXzP3gkme5HuT7mXg2j/s1600/Info.plist_minimal.png" height="51" width="320" /></a></div>
<br />
To reiterate, I used the JavaAppLauncher copied out of AppConverter, because it worked, and it wasn't obvious where to get it "officially".<br />
<br />
I did try copying the JavaAppLauncher binary that is present in the "com/oracle/appbundler/JavaAppLauncher" in appbundler-1.0.jar, but for some reason that didn't work. I also poked around inside javapackager (vide infra), and extracted "com/oracle/tools/packager/mac/JavaAppLauncher" from the JDKs "lib/ant-javafx.jar", but that didn't work either (reported "com.apple.launchd.peruser ... Job failed to exec(3) for weird reason: 13"), so I will give up for now and stick with what works.<br />
<br />
It would be nice to have an "official" source for JavaAppLauncher though.<br />
<br />
In case it has any impact, I was using OS X 10.8.5 and JDK 1.8.0_40-ea whilst doing these experiments.<br />
<br />
David<br />
<br />
PS. What I have not done is figure out how to include a bundled JRE, since I haven't had a need to do this myself yet (and am not motivated to bother with the AppStore), but I dare say it should be easy enough to find another example and copy it. I did find what looks like a fairly thorough description in this <a href="http://speling.shemnon.com/blog/2014/04/10/getting-your-java-app-in-the-mac-app-store/" target="_blank">blog entry by Danno Ferrin</a> about getting stuff ready for the AppStore.<br />
<br />
PPS. I will refrain from (much) editorial comment about the pros and cons of requiring an embedded JRE in every tiny app; suffice it to say I haven't found many reasons to do it, except for turnkey applications (such as on a CD), where I do this on Windows a bit, just because one can. I am happy Apple/Oracle have enabled it, but surprised that Apple mandated it (for the AppStore).<br />
<br />
PPPS. There is apparently also something from Oracle called "<a href="http://docs.oracle.com/javafx/2/deployment/javafxpackager001.htm" target="_blank">javafxpackager</a>", which is pretty well <a href="http://docs.oracle.com/javafx/2/deployment/self-contained-packaging.htm" target="_blank">documented</a>, and which is supposed to be able to package non-FX apps as well, but I haven't tried it. Learning it looked more complicated than just doing it by hand. Digging deeper, it seems that this has been renamed to just "<a href="http://docs.oracle.com/javase/8/docs/technotes/tools/unix/javapackager.html" target="_blank">javapackager</a>" and is distributed with current JDKs.<br />
<br />
PPPPS. There is apparently an effort to develop a binary app that works with either the Apple or Oracle Package Contents and Info.plist properties, called "<a href="https://github.com/tofi86/universalJavaApplicationStub" target="_blank">universalJavaApplicationStub</a>", but I haven't tried that either. <br />
<br />
<br />David Cluniehttp://www.blogger.com/profile/17331067317921452126noreply@blogger.com6tag:blogger.com,1999:blog-1367102802658603789.post-45337976553489277422013-10-19T10:52:00.000-07:002013-10-19T10:52:09.898-07:00How Thick am I? The Sad Story of a Lonely Slice.Summary: Single slice regions of interest with no multi-slice context or interval/thickness information may need to be reported as area only, not volume. Explicit interval/thickness information can and should be encoded. Thickness should be distinguished from interval.<br />
<br />
Long Version.<br />
<br />
Given a Region of Interest (ROI), no matter how it is encoded (as contours or segmented pixels or whatever), one can compute its area, using the pixel spacing (size) information. If a single planar ROI (on one slice) is grouped with a bunch of siblings on contiguous slices, then one can produce a sum of the areas. And if one knows the (regular) spacing between the slices (the reconstruction interval in CT/MR/PET parlance), one can compute a volume from the sum of the areas multiplied by the slice spacing. Often one does not treat the top and bottom slices specially, i.e., the ROI is regarded as occupying the entire slice interval on every slice. Alternatively, one could consider the top and bottom slices (or both slices, if there are only two) as only partially occupied, and perhaps halve their contribution.<br />
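The arithmetic is trivial, but for concreteness, here is a sketch of both conventions (plain Python, illustrative only, not from any toolkit):

```python
def volume_from_areas(areas_mm2, slice_interval_mm, half_weight_ends=False):
    """Compute an ROI volume (mm^3) from per-slice areas and a regular slice interval.

    If half_weight_ends is True, the top and bottom slices are treated as
    only half occupied; otherwise each slice occupies the entire interval.
    """
    weights = [1.0] * len(areas_mm2)
    if half_weight_ends and len(areas_mm2) > 1:
        weights[0] = weights[-1] = 0.5
    return slice_interval_mm * sum(a * w for a, w in zip(areas_mm2, weights))

# e.g., three slices of 100 mm^2 each, at a 5 mm interval
print(volume_from_areas([100.0, 100.0, 100.0], 5.0))        # 1500.0 (full-slab convention)
print(volume_from_areas([100.0, 100.0, 100.0], 5.0, True))  # 1000.0 (half-weighted ends)
```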
<br />
The slice interval is distinct from the slice "thickness" (Slice Thickness (0018,0050)), since data may be acquired and reconstructed such that there is either a gap between slices, or slices overlap, and in such cases, using the thickness rather than the interval would not return a volume representative of the object represented by the ROI(s). The slice interval is rarely encoded explicitly, and even if it is, may be unreliable, so one should compute the interval from the distance along the normal to the common orientation (parallel slices) using the Image Position (Patient) origin offset and the Image Orientation (Patient) row and column vectors. The Spacing Between Slices (0018,0088) is only officially defined for the MR and NM objects, though one does see it in CT images occasionally. In the past, some vendors erroneously encoded the gap between slices rather than the distance between their centers in Spacing Between Slices (0018,0088), so be wary of it.<br />
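The computation of the interval along the normal can be sketched as follows (plain Python, purely illustrative; in practice the attribute values would come from a DICOM toolkit, and one should first verify that the slices really are parallel):

```python
def slice_spacing(ipp1, ipp2, iop):
    """Distance between two parallel slices along the slice normal.

    ipp1, ipp2: Image Position (Patient) (0020,0032) of the two slices.
    iop: Image Orientation (Patient) (0020,0037), six values,
         row direction cosines then column direction cosines.
    """
    rx, ry, rz, cx, cy, cz = iop
    # slice normal = cross product of row and column direction cosines
    n = (ry * cz - rz * cy, rz * cx - rx * cz, rx * cy - ry * cx)
    # project each origin onto the normal and take the difference
    d1 = sum(n_i * p_i for n_i, p_i in zip(n, ipp1))
    d2 = sum(n_i * p_i for n_i, p_i in zip(n, ipp2))
    return abs(d2 - d1)

# axial slices with identity orientation, 2.5 mm apart along z
print(slice_spacing((0, 0, 10.0), (0, 0, 12.5), (1, 0, 0, 0, 1, 0)))  # 2.5
```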
<br />
This all presupposes that one does indeed have sufficient spatial information about the ROI available, encoded in the appropriate attributes, which is the case for 2D contours defined relative to 3D slices (e.g., SR SCOORDS with referenced cross-sectional images), 3D contours (e.g., SR SCOORD3D or RT Structure Sets), and Segmentation objects encoded as image objects with plane orientation, position and spacing.<br />
<br />
And it works nicely down to just two slices.<br />
<br />
But what if one only has one lonely slice? Then there is no "interval" per se.<br />
<br />
For 2D contours defined relative to 3D image slices one could consult the adjacent (unreferenced) image slices and deduce the slice interval and assume that was applicable to the contour too. But for 3D contours and segmentation objects that stand alone in 3D space, and may have no explicit reference to the images from which they were derived, if indeed there were any images and if indeed those images were not re-sampled during segmentation, then there may be no "interval" information available at all.<br />
<br />
The RT Structure Set does handle this in the ROI Contour Module, by the provision of an (optional) Contour Slab Thickness (3006,0044) value, though it may interact with the associated Contour Offset Vector (3006,0045) such that the plane of the coordinates is not the center of the slab. See PS 3.3 Section C.8.8.6.2.<br />
<br />
The Segmentation object, by virtue of inclusion of the Pixel Measures Sequence (functional group macro), which defines the Pixel Spacing, also requires the presence of the Slice Thickness attribute, but only if Volumetric Properties (0008,9206) is VOLUME or SAMPLED. And wouldn't you know it, the Segmentation IOD does not require the presence of Volumetric Properties :( That said, it is possible to encode it, so ideally one should; the question arises as to what the "thickness" of a segmentation is, and whether one should slavishly copy the slice thickness from the source images that were segmented, or whether one should use the interval (computed if necessary), since arguably one is segmenting the volume, regardless of how it was sampled. We should probably consider whether or not to include Spacing Between Slices (0018,0088) in the Pixel Measures Sequence as well, and to refine their definitions to make this clear.<br />
<br />
The SR SCOORD3D content item attributes do not include interval or thickness. That does not prevent one from encoding a numeric content item to associate with it, though no standard templates currently do. Either way, it would be desirable to standardize the convention. Codes are already defined in PS 3.16 for (112225, DCM, “Slice Thickness”) and (112226, DCM, “Spacing between slices”) (these are used in the Image Library entries for cross-sectional images in the CAD templates).<br />
<br />
Anyhow, from a recipient's perspective, given no explicit information and no referenced images there is no other choice than to report only area. If an image is referenced, and its interval or thickness are available, then one may be tempted to use it, but if they are different, which should one use? Probably the interval, to be consistent with the general case of multiple slices.<br />
<br />
From a sender's perspective, should one explicitly encode interval or thickness information in the RT Structure Set, SR SCOORD3D, and Segmentation objects, even though it is not required? This is probably a good move, especially for single slice ROIs, and should probably be considered for inclusion in the standard via a CP.<br />
<br />
David<br />
<br />David Cluniehttp://www.blogger.com/profile/17331067317921452126noreply@blogger.com1tag:blogger.com,1999:blog-1367102802658603789.post-15617068020300796862013-10-14T16:18:00.001-07:002013-10-14T16:30:37.105-07:00Binge and Purge ... Archive Forever, Re-compress or Discard ... PACS Lifecycle ManagementSummary: Technical solutions and standards exist for implementing a hodge-podge of varied retention policies; teaching and research facilities should hesitate before purging or recompressing, though; separating the decision-making engine from the archive is desirable.<br />
<br />
Long version:<br />
<br />
As we continue to "binge" on imaging modalities that produce ever larger quantities of data, such as MDCT, breast tomosynthesis and maybe one day whole slide imaging, the question of duration of storage becomes more pressing.<br />
<br />
An Australian colleague recently circulated a link to a piece entitled "<a href="http://www.ehi.co.uk/insight/analysis/1168/what-should-we-do-with-old-pacs-images_tcq" target="_blank">What should we do with old PACS images?</a>", in which Kim Thomas from eHealth Insider magazine discusses whether or not to discard old images, and how. The article nicely summarizes the UK situation, and concludes with the usual VNA hyperbole, but fails to distinguish the differences in practice settings in which such questions arise. <br />
<br />
In an operational environment that is focused only on immediate patient care, risk and cost minimization, and compliance with regulatory requirements, the primary questions are whether or not it is cheaper to retain, re-compress or delete studies that are no longer necessary, and whether or not the technology in use is capable of implementing it. In such environments, there is little if any consideration given to "secondary re-use" of such images, such as for research or teaching. Typically a freestanding ambulatory setting might be in such a category, the priorities being quality, cost and competitiveness.<br />
<br />
An extreme case of "early discarding" arises in Australia where, as I understand it, the policy of some private practices (in the absence of any statutory requirement to the contrary) is to hand the images to the patient and discard the local digital copy promptly. Indeed, this no doubt made sense when the medium was radiographic (as opposed to printed) film.<br />
<br />
In many jurisdictions though, there is some (non-zero) duration required by a local regulation specific to medical imaging, or a general regulation for retention of medical records that includes images. Such regulations define a length of time during which the record must be stored and made available. There may be a statutory requirement for each facility to have a written policy in place.<br />
<br />
In the US, the HIPAA Privacy Rule <a href="http://www.hhs.gov/ocr/privacy/hipaa/faq/safeguards/580.html" target="_blank">does not include medical record retention requirements</a>; instead, the rules are defined by the states, and vary (see, for instance, the ONC summary of <a href="http://www.healthit.gov/sites/default/files/appa7-1.pdf" target="_blank">State Medical Record Laws</a>). Though not regulatory in nature, the <a href="http://www.acr.org/~/media/AF1480B0F95842E7B163F09F1CE00977.pdf" target="_blank">ACR–AAPM–SIIM Technical Standard For Electronic Practice of Medical Imaging</a> requires a written policy, and that digital imaging data management systems provide storage capacity capable of complying with all facility, state, and federal regulations regarding medical record retention. The current policy of the ACR Council is described in <a href="http://www.dclunie.com/documents/2012%20Digest%20of%20Council%20Actions%20-%20Appendix%20E%20-%20Ownership%2C%20Retention%20and%20Patient%20Access%20to%20Medical%20Records.pdf" target="_blank">Appendix E <i>Ownership, Retention and Patient Access to Medical Records</i></a> of the <a href="http://www.acr.org/~/media/ACR/Documents/PDF/Membership/Governance/2012%20Digest%20of%20Council%20Actions.pdf" target="_blank">2012 Digest of Council Actions</a>. This seems a bit outdated (it still refers to "magnetic tapes"!). Google did reveal a draft of <a href="http://amclc.acr.org/LinkClick.aspx?fileticket=1TJ6xp5BX78%3D&tabid=61" target="_blank">an attempt to revise this</a>, but I am not sure of its status, and I will investigate whether or not our Standards and Interoperability group can help with the technical details. I was interested, though, to read that:<br />
<br />
"<i>The scope of the “discovery rules” in other states mean that records should conceivably be held indefinitely. Evidence of “fraud” could extend the statute of limitations indefinitely.</i>"<br />
<br />
Beyond the minimum required, whatever that might be, in many settings there are good reasons to archive images for longer.<br />
<br />
In an academic enterprise, the needs of teaching and research must be considered seriously, and the (relatively modest) cost of archiving everything forever must be weighed against the benefit of maintaining a durable longitudinal record in anticipation of secondary re-use.<br />
<br />
I recall as a radiology registrar (resident in US-speak) spending many long hours in film archives digging out ancient films of exotic conditions, using lists of record numbers generated by queries for particular codes (which had been diligently recorded in the limited administrative information system of the day), for the purpose of preparing teaching content for various meetings and forums. These searches went back not just years but decades, if I remember correctly. This would not have been possible if older material had been discarded. Nowadays in a teaching hospital it is highly desirable that "good cases" be identified, flagged, de-identified and stored prospectively (e.g., using the <a href="http://wiki.ihe.net/index.php?title=Teaching_File_and_Clinical_Trial_Export" target="_blank">IHE Teaching File and Clinical Trial Export (TCE) profile</a>). But not everyone is that diligent, or has the necessary technology deployed, and there will remain many situations in which the value of a case is not recognized except in retrospect.<br />
<br />
Retrospective research investigations have a place too. Despite the need to perform prospective randomized controlled trials, there will always be a place for <a href="http://www.ajronline.org/doi/full/10.2214/ajr.183.5.1831203" target="_blank">observational studies in radiology</a>. Quite apart from clinical questions, there are technical questions to be answered too. For example, suppose one wanted to compare the performance of irreversible compression algorithms for a specific interpretation task (or to demonstrate non-inferiority compared to uncompressed images). To attain sufficient statistical power to detect the absence of a small but clinically significant difference in observer performance, a relatively large number of cases would be required. Obtaining these prospectively, or from multiple institutions, might be cost prohibitive, yet a sufficiently large local historical archive might render the problem tractable. The further the question strays from those that might be answered using existing public or sequestered large image collections (such as those available through the <a href="http://ncia.nci.nih.gov/ncia/" target="_blank">NBIA</a> or <a href="http://www.cancerimagingarchive.net/" target="_blank">TCIA</a> or <a href="http://www.loni.usc.edu/" target="_blank">ADNI</a> or <a href="http://www.cardiacatlas.org/" target="_blank">CardiacAtlas</a>), the more often this is true.<br />
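To give a rough sense of why the numbers get large, here is a back-of-envelope sample size sketch using a simple normal approximation for comparing two proportions in a non-inferiority setting. The class name, and the accuracy, margin, and alpha/power figures in the usage below, are illustrative assumptions for the sake of the example, not taken from any particular study design.<br />

```java
// Back-of-envelope sample size for showing that observer performance with
// compressed images is non-inferior to uncompressed images, using the
// standard normal approximation for two proportions with a fixed margin.
// This is a sketch for illustration only, not study-design advice.
public class SampleSizeSketch {
    // p      : assumed accuracy (proportion correct) in both arms
    // margin : non-inferiority margin (smallest difference that matters)
    // zAlpha : normal quantile for the one-sided alpha (e.g., 1.645 for 0.05)
    // zBeta  : normal quantile for power (e.g., 0.84 for 80% power)
    public static long casesPerArm(double p, double margin,
                                   double zAlpha, double zBeta) {
        double n = Math.pow(zAlpha + zBeta, 2) * 2 * p * (1 - p)
                 / (margin * margin);
        return (long) Math.ceil(n);
    }
}
```

With an assumed accuracy of 0.8, a margin of 0.05, one-sided alpha of 0.05 and 80% power, this works out to several hundred cases per arm, which illustrates why a large local historical archive can make such a question tractable.<br />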
<br />
Such questions also highlight the potential danger of using irreversible compression as a means of reducing storage costs for older images. Whilst such a strategy may or may not impinge upon the utility of the images for prior comparison or evidential purposes, it may render them useless for certain types of image processing research, such as CAD, and certainly for research into compression itself.<br />
<br />
Technologically speaking, as the eHI article reminds us, not all of the installed base of PACS have the ability to perform what is colloquially referred to as "life cycle management", especially if it is automated in some manner, based on some set of rules that implement configurable local policy. So, even if one decides that it is desirable to purge, one may need some technology refreshment to implement even a simple retention policy.<br />
<br />
This might be as "easy" as upgrading one's PACS to <a href="http://dclunie.blogspot.com/2013/07/my-pacs-has-fallen-down-and-i-cant-get.html" target="_blank">a more recent version</a>, or it might be one factor motivating a PACS replacement, or it might require some third party component, such as a VNA. One might even go so far as to separate the execution of the purging from the decision making about what to purge, using a separate "rules engine", coupled with a standard like <a href="http://wiki.ihe.net/index.php?title=Imaging_Object_Change_Management" target="_blank">IHE Image Object Change Management (IOCM)</a> to communicate the purge decision (as I discussed in an old thread on <a href="http://www.pacsgroup.org.uk/forum/messages/2/70254.html" target="_blank">Life Cycle Management in the UK Imaging Informatics Group</a>). We added "Data Retention Policy Expired" as a KOS document title in <a href="http://www.dclunie.com/dicom-status/status.html#CP1152" target="_blank">DICOM CP 1152</a> specifically for this purpose. <br />
<br />
One also needs a reliable source of data to drive the purging decision. Some parameters like the patient's age, visit dates, condition and types of procedure should be readily available locally; others may not, such as whether or not the patient has died. As I mentioned in that same UK thread, and has also been discussed in <a href="http://groups.yahoo.com/neo/groups/pacs_admin/search/messages?query=lifecycle" target="_blank">lifecycle, purging and deletion threads in the pacsadmin group</a>, in the US we have the <a href="https://www.ssdmf.com/" target="_blank">Social Security Administration's Death Master File</a> available for this.<br />
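To make the separation of decision-making from purge execution concrete, here is a toy sketch of the sort of rules such a "rules engine" might evaluate. The class, method and field names, and the retention periods, are all illustrative assumptions of mine, not drawn from any real product, regulation or standard; a real policy would come from the applicable state or national requirements.<br />

```java
import java.time.LocalDate;
import java.time.Period;

// Hypothetical sketch of a retention rules engine decision, kept separate
// from the system (PACS, VNA) that actually executes the purge. All names
// and periods here are illustrative only.
public class RetentionPolicy {
    private final Period adultRetention = Period.ofYears(7);       // assumed statutory minimum
    private final Period minorRetainUntilAge = Period.ofYears(21); // assumed: majority plus margin
    private final Period postMortemRetention = Period.ofYears(2);  // assumed post-death period

    public boolean mayPurge(LocalDate studyDate, LocalDate birthDate,
                            LocalDate deathDate, LocalDate today) {
        if (deathDate != null) {
            // a Death Master File (or similar) lookup would feed this parameter
            return today.isAfter(deathDate.plus(postMortemRetention));
        }
        // studies acquired while the patient was a minor are retained longer
        boolean wasMinor = Period.between(birthDate, studyDate).getYears() < 18;
        if (wasMinor) {
            return today.isAfter(birthDate.plus(minorRetainUntilAge));
        }
        return today.isAfter(studyDate.plus(adultRetention));
    }
}
```

The point of the sketch is that the decision depends only on demographic and visit data, which need not live in the PACS; the executor merely needs to be told the verdict, e.g., via IHE IOCM.<br />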
<br />
Since the necessary information to make the decision may not reside in the PACS or archive, but perhaps the HIS or EHR, separating the decision maker from the decision executor makes a lot of sense. Indeed, when you think about it, the entire medical record, not just the images, may need to be purged according to the same policy. So, it seems sensible to make the decision in one place and communicate it to all the places where information may be stored within an enterprise. This includes not only the EHR and radiology, but also the lab, histopathology, cardiology, and the visual 'ologies like ophthalmology, dermatology, etc. Whilst one day all databases, archives and caches may be centralized and consolidated throughout an enterprise (VNA panacea scenario), in the interim, a more loosely coupled solution is possible.<br />
<br />
That said, my natural inclination as a researcher and a hoarder (with a 9 track tape drive and an 8" floppy drive in the attic, just in case) is to keep everything forever. Fortunately for the likes of me, disk is cheap, and even the power and HVAC required to maintain it are not really outrageously priced in the scheme of things. However, if you feel you really must purge, then there are solutions available, and a move towards using standards to implement them.<br />
<br />
David<br />
<br />David Cluniehttp://www.blogger.com/profile/17331067317921452126noreply@blogger.com1tag:blogger.com,1999:blog-1367102802658603789.post-44866183645519150722013-09-29T10:38:00.000-07:002013-09-29T10:46:38.174-07:00You're gonna need a bigger field (not) ... Radix 64 RevisitedSummary: It is easy to fit a long number in a short string field by transcoding it to use more (printable) characters; the question is what encoding to use; there are more alternatives than you might think, but Base64 is the pragmatic choice.<br />
<br />
Long Version.<br />
<br />
Every now and then the subject of how to fit numeric SNOMED <a href="http://www.ihtsdo.org/fileadmin/user_upload/doc/en_us/tig.html?t=trg2main_sctid_datatype" target="_blank">Concept IDs</a> (defined by the SNOMED <a href="http://www.ihtsdo.org/fileadmin/user_upload/doc/en_us/tig.html?t=trg2main_sctid_datatype" target="_blank">SCTID Data Type</a>) into a DICOM representation comes up. These can be up to 18 decimal digits (and fit into a signed or unsigned 64 bit binary integer), whereas in DICOM, the Code Value has an SH (Short String) Value Representation (VR), hence is limited to 16 characters.<br />
<br />
Harry Solomon suggested "Base64" encoding it, either always, or on those few occasions when the Concept ID really was too long (and then using a "prefix" to the value to recognize it).<br />
<br />
The need arises because DICOM has always used the "old fashioned" SNOMED-RT style <a href="http://www.ihtsdo.org/fileadmin/user_upload/doc/en_us/rf1.html?t=trg_app_table_struct_concepts_table_data_fields_snomedid" target="_blank">SnomedID</a> values (like "T-A0100" for "Brain") rather than the SNOMED-CT style SNOMED <a href="http://www.ihtsdo.org/fileadmin/user_upload/doc/en_us/rf1.html?t=trg_app_table_struct_descriptions_table_data_fields_conceptid" target="_blank">Concept ID</a> values (like "12738006"). DICOM was a relatively "early adopter" of SNOMED, and the numeric form did not exist in the early days (prior to the <a href="https://en.wikipedia.org/wiki/Read_code#READ_and_SNOMED" target="_blank">incorporation</a> of the <a href="http://www.connectingforhealth.nhs.uk/systemsandservices/data/uktc/readcodes" target="_blank">UK Read Codes</a> that resulted in SNOMED-CT). Fortunately, SNOMED continues to issue the older style codes; unfortunately, folks outside the DICOM realm may need to use the newer style, and so converting at the boundary is irritating (and needs a dictionary, unless we transmit both). The negative impact on the installed base that depends on recognizing the old-style codes, were we to "change", is a subject for another day; herein I want to address only how it could be done.<br />
<br />
Stuffing long numbers into short strings is a generic problem, not confined to using SNOMED ConceptIDs in DICOM. Indeed, this post was triggered as a result of pondering another use case, stuffing long numbers into Accession Number (also SH VR). So I thought I would implement this to see how well it worked. It turns out that there are a few choices to be made.<br />
<br />
My first pass at this was to see if there was something already in the standard Java class library that supported conversion of arbitrary length base10 encoded integers into some other radix; I did not want to be constrained to only handling 64 bit integers.<br />
<br />
It seemed logical to look at the arbitrary length numeric <a href="http://docs.oracle.com/javase/7/docs/api/java/math/BigInteger.html" target="_blank">java.math.BigInteger</a> class, and indeed it has a radix argument to its String constructor and toString() methods. It also has constructors based on two's-complement binary
representations in byte[] arrays. Sounded like a no brainer.<br />
<br />
Aargh! It turns out that BigInteger has an implementation limit on the size of the radix that it will handle. The maximum radix is 36 (the 10 digits plus 26 lowercase alphabetic characters that is the limit for <a href="http://docs.oracle.com/javase/7/docs/api/java/lang/Character.html#MAX_RADIX" target="_blank">java.lang.Character.MAX_RADIX</a>). Bummer.<br />
<br />
OK, I thought, I will hand write it, by doing successive divisions by the radix in BigInteger, and character encoding the modulus, accumulating the resulting characters in the correct order. Turned out to be pretty trivial.<br />
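For illustration, here is a minimal sketch of that successive-division approach (a sketch, not my actual implementation), with the alphabet passed in as a parameter so that any of the character repertoires discussed below can be plugged in:<br />

```java
import java.math.BigInteger;

// Encode a non-negative arbitrary-precision integer using successive
// division by the radix, accumulating the digit characters in reverse.
// The radix is implied by the length of the supplied alphabet.
public class RadixEncoder {
    public static String encode(BigInteger value, String alphabet) {
        BigInteger radix = BigInteger.valueOf(alphabet.length());
        if (value.signum() == 0) {
            return String.valueOf(alphabet.charAt(0));
        }
        StringBuilder sb = new StringBuilder();
        while (value.signum() > 0) {
            // divideAndRemainder returns {quotient, remainder}
            BigInteger[] qr = value.divideAndRemainder(radix);
            sb.append(alphabet.charAt(qr[1].intValue()));
            value = qr[0];
        }
        return sb.reverse().toString();
    }
}
```

With the RFC 2045 alphabet, the maximum unsigned 64 bit value encodes to "P//////////", matching the table further down.<br />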
<br />
Then I realized that I now had to choose which characters to select beyond the 36 that Java uses. At which point I noticed that BigInteger uses completely different characters than the traditional "Base64" encoding. "<a href="https://en.wikipedia.org/wiki/Base64" target="_blank">Base64</a>" is the encoding used by folks who do anything that depends on <a href="https://en.wikipedia.org/wiki/MIME" target="_blank">MIME</a> content encoding (email attachments or XML files with embedded binary payloads), as defined in <a href="http://www.ietf.org/rfc/rfc2045.txt" target="_blank">RFC 2045</a>. Indeed, there are <a href="https://en.wikipedia.org/wiki/Base64#Implementations_and_history" target="_blank">variants</a> of "Base64" that handle situations where the two characters for 62 and 63 (normally '+' and '/' respectively) are problematic, e.g., in URLs (<a href="http://www.ietf.org/rfc/rfc4648.txt" target="_blank">RFC 4648</a>). RFC 4648 seems to be the most current definition of not only "Base64" and its variants, but also "Base32" and "Base16" and the so-called "extended hex" variants of them.<br />
<br />
If you think about it, based on the long-standing hexadecimal representation convention that uses characters '0' to '9' for numeric values [0,9], then characters 'a' to 'f' for numeric values [10,15], it is pretty peculiar that "Base64" uses capital letters 'A' to 'J' for numeric values [0,9], and uses the characters '0' to '9' to represent numeric values [52,61]. Positively unnatural, one might say.<br />
<br />
This is what triggered my dilemma with the built-in methods of the Java BigInteger. BigInteger returns strings that are a natural progression from the traditional hexadecimal representation, and indeed for a radix of 16 or a radix of 32, the values match those from the RFC 4648 "base16" and "base32hex" (as distinct from "base32") representations. Notably, RFC 4648 does NOT define a "base64hex" alternative to "base64", which is a bit disappointing.<br />
<br />
It turns out that a long time ago (1992), in a galaxy far, far away, this was the subject of a discussion between <a href="https://en.wikipedia.org/wiki/Phil_Zimmermann" target="_blank">Phil Zimmermann</a> (of PGP fame) and <a href="https://en.wikipedia.org/wiki/Marshall_Rose" target="_blank">Marshall Rose</a> and <a href="https://en.wikipedia.org/wiki/Ned_Freed" target="_blank">Ned Freed</a> on the <a href="http://www.imc.org/ietf-822/old-archive1/msg02335.html" target="_blank">MIME working group mailing list</a>, in which Phil noticed this discrepancy and proposed that it be changed. His suggestion was rejected on the grounds that it would not improve functionality, would threaten the installed base, and came at a relatively late stage in the development of the "standard". The choice of encoding apparently traces back to the Privacy Enhanced Mail (PEM) <a href="http://tools.ietf.org/html/rfc989" target="_blank">RFC 989</a> from 1987. I dare say there was no love lost between Phil and the PEM/S-MIME folks, given that they were developers of competing methods for secure email, but you can read the exchange yourself and make up your own mind.<br />
<br />
So I dug a little deeper, and it turns out that <a href="http://pubs.opengroup.org/onlinepubs/009695399/mindex.html" target="_blank">The Open Group Base (IEEE Std 1003.1)</a> (<a href="https://en.wikipedia.org/wiki/POSIX" target="_blank">POSIX</a>, <a href="https://en.wikipedia.org/wiki/Single_Unix_Specification" target="_blank">Single Unix Specification</a>) has a definition for how to encode radix 64 numbers as ASCII characters too, in the specification of the <a href="http://pubs.opengroup.org/onlinepubs/009695399/functions/a64l.html" target="_blank">a64l() and l64a()</a> functions, which uses '.' (dot) for 0, '/' for 1, '0' through '9' for [2,11], 'A' through 'Z' for [12,37], and 'a' through 'z' for [38,63]. Note that this is not part of the <a href="https://en.wikipedia.org/wiki/C_standard_library" target="_blank">C standard library</a>.<br />
<br />
An early attempt at stuffing binary content into printable characters was the "<a href="https://en.wikipedia.org/wiki/Uuencoding" target="_blank">uuencode</a>" utility used in <a href="https://en.wikipedia.org/wiki/Uucp" target="_blank">Unix-to-Unix copy (UUCP)</a> implementations, such as were once used for mail transfer. It used the expedient of adding 32 (the US-ASCII code for space) to the 6-bit (base 64) numeric value, yielding a range of printable characters.<br />
<br />
Of course, from the perspective of stuffing a long decimal value into a short string and making it fit, it doesn't matter which character representation is chosen, as long as it is valid. E.g., a 64 bit unsigned integer, which has a maximum value of <a href="https://en.wikipedia.org/wiki/Integer_%28computer_science%29#Common_long_integer_sizes" target="_blank">18,446,744,073,709,551,615</a> (20 digits), is only 11 characters long when encoded with a radix of 64, regardless of the character choices.<br />
<br />
For your interest, here is what each of the choices described above looks like, for single numeric values [0,63], and for the maximum unsigned 64 bit integer value:<br />
<br />
Extension of Java and base16hex to hypothetical "base64hex":<br />
<span style="font-family: "Courier New",Courier,monospace;">0 1 2 3 4 5 6 7 8 9 a b c d e f g h i j k l m n o p q r s t u v w x y z A B C D E F G H I J K L M N O P Q R S T U V W X Y Z : _<br />f__________</span><br />
<br />
Unix a64l:<br />
<span style="font-family: "Courier New",Courier,monospace;"> . / 0 1 2 3 4 5 6 7 8 9 A B C D E F G H I J K L M N O P Q R S T U V W X Y Z a b c d e f g h i j k l m n o p q r s t u v w x y z<br />Dzzzzzzzzzz</span><br />
<br />
Base64 (RFC 2045):<br />
<span style="font-family: "Courier New",Courier,monospace;"> A B C D E F G H I J K L M N O P Q R S T U V W X Y Z a b c d e f g h i j k l m n o p q r s t u v w x y z 0 1 2 3 4 5 6 7 8 9 + /<br />P//////////</span><br />
<br />
uuencode (note that space is the first character):<br />
<span style="font-family: "Courier New",Courier,monospace;"> ! " # $ % & ' ( ) * + , - . / 0 1 2 3 4 5 6 7 8 9 : ; < = > ? @ A B C D E F G H I J K L M N O P Q R S T U V W X Y Z [ \ ] ^ _<br />/__________</span><br />
<br />
Returning to DICOM then, the choice of what to use for a Short String (SH) VR is constrained to any US-ASCII (ISO IR 6) character that is not a backslash (used as a value delimiter in DICOM) and not a control character. This would exclude the uuencode representation, since it contains a backslash, but any of the other choices would produce valid strings. The SH VR is case-preserving, which is a prerequisite for all of the choices other than uuencode. Were that not the case, we would need to define yet another encoding that was both case-insensitive and free of the backslash character. I can't think of a use for packing numeric values into the Code String (CS) VR, the only DICOM VR restricted to upper case.<br />
<br />
The more elegant choice in my opinion would be the hypothetical "base64hex", for the reasons Phil Z eloquently expressed, but ...<br />
<br />
Pragmatically speaking, since RFC 989/1113/2045/4648-style "Base64" coding is so ubiquitous these days for bulk binary payloads, it would make no sense at all to buck that trend.<br />
<br />
Just to push the limits though, if one uses all 94 printable US-ASCII characters except backslash, one can squeeze the largest unsigned 64 bit integer into 10 rather than 11 characters. However, for the 18 decimal digit longest SNOMED Concept ID, the length of the result is the same whether one uses a radix of 64 or 94, still 10 characters.<br />
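Those length claims are easy to verify by counting successive divisions; a throwaway sketch (illustrative only):<br />

```java
import java.math.BigInteger;

// How many characters does a non-negative value need in a given radix?
// Count how many times it can be divided by the radix before reaching zero.
public class RadixLength {
    public static int digitsNeeded(BigInteger value, int radix) {
        BigInteger r = BigInteger.valueOf(radix);
        int n = 0;
        do {
            n++;
            value = value.divide(r);
        } while (value.signum() > 0);
        return n;
    }
}
```

This confirms that the maximum unsigned 64 bit value needs 11 characters at radix 64 but only 10 at radix 94, while the 18 decimal digit maximum SNOMED Concept ID needs 10 characters either way.<br />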
<br />
David<br />
<br />
<br />David Cluniehttp://www.blogger.com/profile/17331067317921452126noreply@blogger.com4tag:blogger.com,1999:blog-1367102802658603789.post-1401264419103715862013-09-12T12:49:00.000-07:002013-09-12T12:49:22.641-07:00What Template is that?Summary: Determining what top-level template, if any, has been used to create a DICOM Structured Report can be non-trivial. Some SOP Classes require a single template, and an explicit Template ID is supposed to always be present, but if it isn't, the coded Document Title is a starting point, but is not always unambiguous.<br />
<br />
Long Version.<br />
<br />
When Structured Reports were introduced into DICOM (<a href="http://www.dclunie.com/dicom-status/status.html#Supplement23" target="_blank">Supplement 23</a>), the concept of a "template" was somewhat nebulous, and was refined over time. Accordingly, the requirement to specify which template was used, if any, to author and format the content, was, and has remained, fairly weak.<br />
<br />
The original intent, which remains the current intent, is that if a template was used, its identity should be explicitly encoded. The means for doing so is the Content Template Sequence. Originally this was potentially encoded at each content item, but this was later clarified by <a href="http://www.dclunie.com/dicom-status/status.html#CP452" target="_blank">CP 452</a>. In short, the identification applies only to CONTAINER content items, and in particular to the root content item, and consists of a mapping resource (DCMR, in the case of templates defined in PS 3.16) and a string identifier.<br />
<br />
The requirement on its presence is:<br />
<br />
"<i>if a template was used to define the content of this Item, and the template consists of a single CONTAINER with nested content, and it is the outermost invocation of a set of nested templates that start with the same CONTAINER</i>" <br />
<br />
Since the document root is always a container, whenever one of the templates that defines the entire content tree of the SR is used, then by definition, an explicit Template ID is required to be present.<br />
<br />
That said, though most SR producers seem to get this right, sometimes the Template ID is not present, which presents a problem. I don't think this can be excused by lack of awareness of the requirement, or of failure to notice CP 452 (from 2005), since the original requirement in Sup 23 (2000) read:<br />
<br />
"<i>Required if a template was used to define the content of this Item</i>".<br />
<br />
Certainly CP 452 made things clearer though, in that it amended the definition to apply not only to the content item, but also to "its subsidiary" content items.<br />
<br />
Some SR SOP Classes define a single template that shall be used, the KOS being one example and the CAD family (Mammo, Chest and Colon CAD) being others. So, even if an explicit Template ID is not present, the expected template can be deduced from the SOP Class. Sometimes, though, such instances are encoded as generic (e.g., Comprehensive) SR, perhaps because an intermediate system did not support the more specific SOP Class, and so one still needs to check for the template identifier.<br />
<br />
In the absence of a specific SOP Class or an explicit template identifier, what is a poor recipient to do? One clue can be the concept name of the top level container content item, which is always coded, always present, and which is referred to as the "document title". In many cases, within the scope of PS 3.16, the same coded concept is used for only a single root template. For example, (122292, DCM, "Quantitative Ventriculography Report") is used only for TID 3202. That's helpful, at least as long as nobody other than DICOM (like a vendor) has re-used the same code to head a different template.<br />
<br />
Other situations are more challenging. The basic diagnostic reporting templates, e.g., TID 2000, 2005 or 2006, are encoded in generic SOP Classes, and furthermore don't have a single or unique code for the document title; rather, any code can be used, and a defined set of them is drawn from LOINC, corresponding to common radiological procedures. It is not at all unlikely that some other completely different template might be used with the same code as (18747-6, LN, "CT Report") or (18748-4, LN, "Diagnostic Imaging Report"), for instance.<br />
<br />
One case of interest demonstrates that, in the absence of an explicit Template ID, even a specific SOP Class and a relatively specific Document Title are insufficient. For Radiation Dose SRs, the same SOP Class is used for both CT and Projection X-Ray. Both TID 10001 Projection X-Ray Radiation Dose and TID 10011 CT Radiation Dose have the same Document Title, (113701, DCM, "X-Ray Radiation Dose Report").<br />
<br />
One can go deeper into the tree though. One of the children of the Document Title content item is required to be (121058, DCM, "Procedure reported"). For a CT report, it is required to have an enumerated value of (P5-08000, SRT, "Computed Tomography X-Ray"), whereas for a Projection X-Ray report it may have a value of (113704, DCM, "Projection X-Ray") or (P5-40010, SRT, "Mammography"), or something else, because these are defined terms.<br />
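A recipient's heuristic for this Radiation Dose SR case might look something like the following sketch. The Code class, the method signature and the returned strings are purely illustrative assumptions; a real implementation would navigate actual content items in whatever toolkit is in use.<br />

```java
// Hypothetical heuristic for distinguishing the two Radiation Dose SR root
// templates when no explicit Template ID is present, based on the code of
// the "Procedure reported" child of the Document Title. Illustrative only.
public class DoseTemplateGuesser {
    public static class Code {
        public final String value, scheme, meaning;
        public Code(String value, String scheme, String meaning) {
            this.value = value;
            this.scheme = scheme;
            this.meaning = meaning;
        }
    }

    public static String guessTemplate(Code procedureReported) {
        // CT uses an enumerated value, so a match here is reliable
        if ("P5-08000".equals(procedureReported.value)
                && "SRT".equals(procedureReported.scheme)) {
            return "TID 10011";
        }
        // Projection X-Ray values are only defined terms, so anything
        // else can merely be treated as a probable match
        return "TID 10001 (probable)";
    }
}
```

Note the asymmetry: the CT branch is safe because the value is enumerated, whereas the Projection X-Ray branch is a guess because defined terms allow other codes.<br />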
<br />
So, in short, at the root level, the absence of a Template ID is not the end of the world, and a few heuristics might allow a recipient to proceed.<br />
<br />
Indeed, if one is expecting a particular pattern based on a particular template, and that pattern "matches" the content of the tree that one has received, does it really matter? It certainly makes life easier though, to match a top level identifier, than have to write a matching rule for the entire tree.<br />
<br />
Related to the matter of the identification of the "root" or "top level" template is that of recognizing subordinate or "mini" templates. As you know, most of PS 3.16 is taken up not by monstrously long single templates but rather by invocation of sub-templates. So there are sub-templates for identifying things, measuring things, etc. These are re-used inside lots of application-specific templates.<br />
<br />
Certainly "top-down" parsing from a known root template takes one to content items that are expected to be present based on the "inclusion" of one of these sub-templates. These are rarely, if ever, explicitly identified during creation by a Template ID, even though one could interpret that as being a requirement if the language introduced in CP 452 is taken literally. Not all "included" sub-templates start with a container, but many do. I have to admit that most of the SRs that I create do not contain Template IDs below the Document Title either, and I should probably revisit that.<br />
<br />
Why might one want to be able to recognize such a sub-template?<br />
<br />
One example is being able to locate and extract measurements or image coordinate references, regardless of where they occur in some unrecognized root template. An explicit Template ID might be of some assistance in such cases, but pattern matching of sub-trees can generally find these pretty easily too. When annotating images based on SRs, for example, I will often just search for all SCOORDs, and explore around the neighborhood content items to find labels and measurements to display. Having converted an SR to an XML representation also allows one to use XSL-T match() clauses and an XPath expression to select even complex patterns, without requiring an explicit ID.<br />
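As a sketch of that XPath approach: assuming a hypothetical XML rendering in which each content item becomes an "item" element with a "valueType" attribute (real SR-to-XML converters will differ), all SCOORDs can be located with a single expression:<br />

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

// Locate every SCOORD content item in an XML rendering of an SR tree with
// one XPath expression, regardless of where it occurs in the tree. The
// "item" element and "valueType" attribute are a made-up schema for
// illustration only.
public class ScoordFinder {
    public static int countScoords(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(
                            xml.getBytes(StandardCharsets.UTF_8)));
            NodeList hits = (NodeList) XPathFactory.newInstance().newXPath()
                    .evaluate("//item[@valueType='SCOORD']",
                              doc, XPathConstants.NODESET);
            return hits.getLength();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

From each matched node one would then explore the neighboring content items for labels and measurements, as described above.<br />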
<br />
David<br />
<br />
<br />David Cluniehttp://www.blogger.com/profile/17331067317921452126noreply@blogger.com1tag:blogger.com,1999:blog-1367102802658603789.post-85552263942981005672013-09-07T09:21:00.000-07:002013-09-07T09:21:09.245-07:00Share and share alike - CSIDQSummary: Image sharing requires the availability (download and transmission) of a complete set of images of diagnostic quality (CSIDQ), even if, for a particular task, viewing of a lesser quality subset may be sufficient. The user then needs to be able to decide what they need to view on a case-by-case basis.<br />
<br />
Long Version.<br />
<br />
The title of this post comes from the legal use of the term "<a href="http://legal-dictionary.thefreedictionary.com/share+and+share+alike" target="_blank">share and share alike</a>", the equal division of a benefit from an estate, trust, or gift.<br />
<br />
In the context of image sharing, I mean to say that all potential recipients of images, radiologists, specialists, GPs, patients, family, and yes, even lawyers, need to <i>have the means</i> to access the same thing: a complete set of images of diagnostic quality (CSIDQ). Note the emphasis on "have the means". CSIDQ seems to be a less unwieldy acronym than CSoIoDQ, so that's what I will use for notational convenience.<br />
<br />
There are certainly situations in which images of lesser quality (or less than a complete set) might be sufficient, might be expedient, or indeed might even be necessary to enable the use case. A case in point is the need to make an urgent or rapid decision remotely when only a slow link is available.<br />
<br />
For folks defining architectures and standards, and deploying systems to make this happen, it is essential to ensure that the CSIDQ is available throughout. In practice, this translates to requiring that<br />
<ul>
<li>the acquisition modality produce a CSIDQ,</li>
<li>the means of distribution (typically a departmental or enterprise PACS) in the local environment store and make available a CSIDQ,</li>
<li>the system of record where the acquired images are stored for archival and evidential purposes contain a CSIDQ,</li>
<li>any exported CD or DVD contain a CSIDQ,</li>
<li>any point-to-point transfer mechanism be capable of supporting transfer of a CSIDQ,</li>
<li>any "edge server" or "portal" that permits authorized access to the locally stored images be capable of sharing a CSIDQ on request,</li>
<li>any "central" archive to which images are stored also retain and be capable of distributing a CSIDQ,</li>
<li>any "clearinghouse" that acts as an intermediary be capable of transferring a CSIDQ.</li>
</ul>
These requirements apply particularly to the "Download" and "Transmit" parts of the Meaningful Use "View, Download and Transmit" (VDT) approach to defining sharing, as it applies to images and imaging results.<br />
<br />
In other words, it is essential that whatever technologies, architectures and standards are used to implement Download and Transmit, that they be capable of supporting a CSIDQ. Otherwise, anything that is lost early in the "chain of custody", if you will, is not recoverable later when it is needed.<br />
<br />
From a payload perspective, the appropriate standard for a CSIDQ is obviously DICOM, since that is the only widely (universally) implemented standard that permits the recipient to make full use of the acquired images, including importation, post-processing, measurement, planning, templating, etc. DICOM is the only format whose pixel data and meta data all medical imaging systems can import.<br />
<br />
That said, it may be desirable to also provide Download of a subset, or a subset of lesser quality, or in a different format, for one reason or another. In doing so it is vital not to compromise the CSIDQ principle, e.g., by misleading a recipient (such as a patient or a referring physician) into thinking that anything less than a CSIDQ that has been downloaded is sufficient for future use (e.g., subsequent referrals). And it is vital not to discard the DICOM format meta data. EHR and PHR vendors need to be particularly careful about not making expedient implementation decisions in this regard that compromise the CSIDQ principle (and hence may be below the standard of practice, may be misleadingly labelled, may introduce the risk of a bad outcome, and may expose them to product liability or regulatory action).<br />
<br />
Viewing is an entirely different matter, however.<br />
<br />
Certainly, one can download a CSIDQ and then view it, and in a sense that is what the CD/DVD distribution mechanism is ... a "thick client" viewer is either already installed or executed from the media to display the DICOM (IHE PDI) content. This approach is typically appropriate when one wants to import what has been downloaded (e.g., into the local PACS) so that it can be viewed along with all the other studies for the patient. This is certainly the approach that most referral centers will want to adopt, in order to provide continuity of patient care coupled with familiarity of users with the local viewing tools. It is also equally reasonable to use for an "in office" imaging system, as I have <a href="http://dclunie.blogspot.com/2008/04/requirements-for-office-imaging-system.html" target="_blank">discussed before</a>. It is a natural extension of the current widespread CD importation that takes place, and the only difference is the mode of transport, not the payload.<br />
<br />
For sporadic users though, who may have no need to import or retain a local copy of the CSIDQ, many other standard (WADO and XDS-I) and proprietary alternatives exist for viewing. Nowadays web-based image viewing mechanisms, including so-called "zero footprint" viewers, can provide convenient access to an interactively rendered version of that subset of the CSIDQ that the user needs access to, with the appropriate quality, whether using client or server-side rendering, and irrespective of how and in what format the pixel data moves from server to client. Indeed, these same mechanisms may suffice even for the radiologist's viewing interface, as long as the necessary image quality is assured, there is access to the complete set, and the necessary tools are provided.<br />
<br />
The moral is that the choice needs to be made by the user, and perhaps on the basis of whatever specific task they need to perform or question they want to answer. For any particular user (or type of user), there may be no single best answer that is generally applicable. For one patient, at one visit, the user might be satisfied with the report. On another occasion they might just want to illustrate something to the patient that requires only modest quality, and on yet another they might need to examine the study with the diligence that a radiologist would apply.<br />
<br />
In other words, the user needs to be able to make the viewing quality choice dynamically. So, to enable the full spectrum of quality needs, the server needs to have the CSIDQ in the first place.<br />
<br />
David<br />
<br />
PS. By the way, do not take any of the foregoing to imply that irreversibly (lossy) compressed images are not of diagnostic quality. It is easy to make the erroneous assumptions that uncompressed images are diagnostic and compressed ones are not, or that DICOM images are uncompressed (when they may be encoded with lossy compression, including JPEG, even right off the modality in some cases), or that JPEG lossy compressed images supplied to a browser are not diagnostic. Sometimes they are and sometimes they are not, depending on the modality, task or question, method and amount of compression, and certainly last but not least, the display and viewing environment.<br />
<br />
What "diagnostic quality" means and what constitutes sufficient quality and when, in general, and in the context of "<a href="http://www.i3-journal.org/cms/website.php?id=/en/index/read/image_compression.htm" target="_blank">Diagnostically Acceptable Irreversible Compression</a>" (DAIC), are questions for another day. The point of this post is that the safest general solution is to preserve whatever came off the modality. Doing anything less than that might be safe and sufficient, but you need to prove it. Further, regardless of the quality of the pixel data, losing the DICOM
"meta data" precludes many downstream use cases, including even simple
size measurements.<br />
<br />
PPS. This blog post elaborates on a principle that I attempted to convey during my recent testimony to the ONC <a href="http://www.healthit.gov/facas/health-it-standards-committee" target="_blank">HIT Standards Committee</a> <a href="http://www.healthit.gov/policy-researchers-implementers/federal-advisory-committees-facas/clinical-operations" target="_blank">Clinical Operations Workgroup</a> about standards for image sharing, which you can <a href="http://www.healthit.gov/facas/sites/faca/files/2013-08-29_ImageSharingUseCasesAndStandards_Clunie.pptx" target="_blank">see</a>, <a href="http://www.healthit.gov/facas/sites/faca/files/2013-08-29_standards_co_transcript_final.pdf" target="_blank">read</a> or <a href="http://www.healthit.gov/facas/sites/faca/files/2013-08-29_standards_co.mp3" target="_blank">listen to</a> if you have the stomach for it. If you are interested in the entire series of meetings at which other folks have testified or the subject has been discussed, here is a short summary, with links (or you can go to the <a href="http://www.healthit.gov/policy-researchers-implementers/federal-advisory-committees-facas/clinical-operations" target="_blank">group's homepage</a> and follow the calendar link to future meetings, if you are interested in joining them, or to past meetings):<br />
<br />
<a href="http://www.healthit.gov/facas/calendar/2013/04/19/standards-clinical-operations-workgroup" target="_blank">2013-04-19</a> (initial discussion)
<br /><a href="http://www.healthit.gov/facas/calendar/2013/06/14/standards-clinical-operations-workgroup" target="_blank">2013-06-14</a> (RSNA: Chris Carr, David Avrin, Brad Erickson)
<br /><a href="http://www.healthit.gov/facas/calendar/2013/06/28/standards-clinical-operations-workgroup" target="_blank">2013-06-28</a> (RSNA: David Mendelson, Keith Dreyer)
<br /><a href="http://www.healthit.gov/facas/calendar/2013/07/19/standards-clinical-operations-workgroup" target="_blank">2013-07-19</a> (lifeIMAGE: Hamid Tabatabaie, Mike Baglio)
<br /><a href="http://www.healthit.gov/facas/calendar/2013/07/26/standards-clinical-operations-workgroup" target="_blank">2013-07-26</a> (general discussion)
<br /><a href="http://www.healthit.gov/facas/calendar/2013/08/09/standards-clinical-operations-workgroup" target="_blank">2013-08-09</a> (general discussion)
<br /><a href="http://www.healthit.gov/facas/calendar/2013/08/29/standards-clinical-operations-workgroup" target="_blank">2013-08-29</a> (standards: David Clunie)
<br />
<br />Also of interest is the parent HIT Standards Committee:
<br />
<br /><a href="http://www.healthit.gov/facas/calendar/2013/04/17/hit-standards-committee" target="_blank">2013-04-17</a> (establish goal of image exchange)
<br />
<br />And the HIT Policy Committee:
<br />
<br /><a href="http://www.healthit.gov/facas/calendar/2013/03/14/hit-policy-committee" target="_blank">2013-03-14</a> (prioritize image exchange)
<br />
<br />
PPPS. The concept of "complete set of images of diagnostic quality" was first espoused by an AMA Safety Panel that
met with a group of industry folks (2008/08/27) to try to address the
historical "CD problem". The problem was not the existence of the CD transport mechanism, which everyone is now
eager to decry in favor of a network-based image sharing solution, but rather the
problem of <a href="http://dclunie.blogspot.com/2008/08/is-winter-of-discontent-with-cds.html" target="_blank">inconsistent formats, content and viewer behavior</a>. The effort
was triggered by a group of unhappy neurosurgeons in 2006 (<a href="http://www.dclunie.com/documents/AMA-2006-539.pdf" target="_blank">AMA House of Delegates Resolution 539 A-06</a>). They were concerned about potential safety issues caused by inadequate or delayed access or incomplete or inadequately displayed MR images. To cut a long story short, a meeting with industry was proposed (<a href="http://www.dclunie.com/documents/AMA-2007-bot_30a07.pdf" target="_blank">Board of Trustees Report 30 A-07</a> and <a href="http://www.dclunie.com/documents/AMA-2008-523.pdf" target="_blank">House of Delegates Resolution 523 A-08</a>), and that meeting resulted in two outcomes.<br />
<br />
One was the statement that we hammered out together in that clinical-industry meeting, which was attended not just by the AMA and MITA (NEMA) folks, but also representatives of multiple professional societies, including the American Association of Neurological Surgeons, Congress of Neurological Surgeons, American Academy of Neurology, American College of Radiology, American Academy of Orthopedic Surgeons, American College of Cardiology, American Academy of Otolaryngology-Head and Neck Surgery, as well as vendors, including Cerner, Toshiba, Philips, General Electric and Accuray, and DICOM/IHE folks like me. You can read a <a href="http://www.dclunie.com/documents/final%20MRI%20and%20imaging%20summary.doc" target="_blank">summary of the meeting</a>, but the most important part is the <a href="http://www.dclunie.com/documents/Medical%20Imaging%20Recommendations%20Physicians%20and%20Industry.doc" target="_blank">recommendation</a> for a standard of practice, which states in part:<br />
<br />
<i>"The American Medical Association Expert Panel on Medical Imaging (Panel) is concerned whether medical imaging data recorded on CD’s/DVD’s is meeting standards of practice relevant to patient care. </i><br />
<i><br /><b>The Panel puts forward the following statement, which embodies the standard the medical imaging community must achieve. </b></i><br />
<ul>
<li><i><b>All medical imaging data distributed should be a complete set of images of diagnostic quality in compliance with IHE-PDI.</b></i></li>
</ul>
<i><b>This standard will engender safe, timely, appropriate, effective, and efficient care; mitigate delayed care and confusion; enhance care coordination and communication across settings of care; decrease waste and costs; and, importantly, improve patient and physician satisfaction with the medical imaging process.</b>"</i><br /><br />More recently, the recommendation of the panel is incorporated in the AMA's discussion of the implementation of EHRs, in the <a href="http://www.ama-assn.org/assets/meeting/2013a/a13-bot-24.pdf" target="_blank">Board of Trustees Report 24 A-13</a>, which recognizes the need to "disseminate this statement widely".<br />
<br />
The other outcome of the AMA-industry meeting was the development of the <a href="http://wiki.ihe.net/index.php?title=Basic_Image_Review" target="_blank">IHE Basic Image Review (BIR) Profile</a>, intended to standardize the user experience when using any viewer. The original neurosurgeon protagonists contributed actively to the development of this profile, even to the extent of sacrificing entire days of their time to travel to Chicago to sit with us in IHE Radiology Technical Committee meetings. Sadly, adoption of that profile has been much less successful than the now almost universal use of IHE PDI DICOM CDs. Interestingly enough, with a resurgence of interest in web-based viewers, and with many new vendors entering the field, the BIR profile, which is equally applicable to both network and media viewers, could perhaps see renewed uptake, particularly amongst those who have no entrenched "look and feel" user interface conventions to protect.<br />
<br />David Cluniehttp://www.blogger.com/profile/17331067317921452126noreply@blogger.com2tag:blogger.com,1999:blog-1367102802658603789.post-77479047947900482222013-09-06T09:07:00.000-07:002013-09-06T09:07:30.596-07:00DICOM rendering within pre-HTML5 browsersSummary: Retrieval of DICOM images, parsing, windowing and display using only JavaScript within browsers without using HTML5 Canvas is feasible.<br />
<br />
Long Version.<br />
<br />
Earlier this year, someone challenged me to display a DICOM image in a browser without resorting to <a href="https://en.wikipedia.org/wiki/Canvas_element" target="_blank">HTML5 Canvas</a> elements, using only JavaScript. This turned out to be rather fun and quite straightforward, largely due to the joy of Google searching to find all the various concepts and problems that other folks had already explored and solved, even if they were intended for other purposes. I just needed to add the DICOM-specific bits. As a consequence it took just a few hours on a Saturday afternoon to figure out the basics and in total about a day's work to refine it and make the whole thing work.<br />
<br />
The crude demonstration JavaScript code, hard-wired to download, window (using the values in the DICOM header) and render a particular 16 bit MR image, can be found <a href="http://www.dclunie.com/jsdemo/fetchbinary.js" target="_blank">here</a> and executed from this <a href="http://www.dclunie.com/jsdemo/fetchbinary.html" target="_blank">page</a>.
It is fully self-contained and has no dependencies on other JavaScript
libraries. The code is ugly as sin, filled with commented out
experiments and tests, and references to where bits of code and ideas
came from, but hopefully it is short enough to be self-explanatory.<br />
<br />
It seems to work in contemporary versions of Safari, Firefox, Opera, Chrome and even IE (although a little more slowly in IE, probably due to the need to convert some extra array stuff; it worked in IE 10 on Windows 7 but not IE 8 on XP, and I haven't figured out why yet). I was pleased to see that it also works on my Android phones and tablets.<br />
<br />
Here is how it works ...<br />
<br />
First task - get the DICOM binary object down to the client and accessible via JavaScript. That was an easy one, since as everyone probably knows, the infamous <a href="https://en.wikipedia.org/wiki/XMLHttpRequest" target="_blank">XMLHttpRequest</a> can be used to pull pretty much anything from the server (i.e., even though its name implies it was designed to pull XML documents). The way to make it return a binary file is to set the XMLHttpRequest.overrideMimeType parameter, and to make sure that no character set conversion is applied to the returned binary stream. This trick is due to Marcus Granado, whose archived blog entry can be found <a href="http://web.archive.org/web/20071103070418/http://mgran.blogspot.com/2006/08/downloading-binary-streams-with.html" target="_blank">here</a>, and which is also discussed along with other helpful hints at the Mozilla Developer Network site <a href="https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Sending_and_Receiving_Binary_Data?redirectlocale=en-US&redirectslug=DOM%2FXMLHttpRequest%2FSending_and_Receiving_Binary_Data" target="_blank">here</a>. There is a little bit of further screwing around needed to handle various Microsoft Internet Explorer peculiarities related to what is returned, not in the responseText, but instead in the responseBody, and this needs an intermediate VBArray to get the job done (discussed in a <a href="http://stackoverflow.com/questions/1919972/how-do-i-access-xhr-responsebody-for-binary-data-from-javascript-in-ie" target="_blank">StackOverflow thread</a>).<br />
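As an illustration of this first task, here is a minimal sketch (the helper names are invented, and this is not the actual demo code linked above); the charset override stops the browser applying any character set conversion, and masking with 0xFF recovers each byte even in browsers that map high values into a private use area:

```javascript
// Sketch of the binary XMLHttpRequest trick (browser-only for the fetch part).
// The overrideMimeType call with a user-defined charset prevents character
// set conversion, so each "character" carries one byte in its low 8 bits.
function fetchBinary(url, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  xhr.overrideMimeType('text/plain; charset=x-user-defined');
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      callback(binaryStringToBytes(xhr.responseText));
    }
  };
  xhr.send(null);
}

function binaryStringToBytes(s) {
  var bytes = new Uint8Array(s.length);
  for (var i = 0; i < s.length; i++) {
    bytes[i] = s.charCodeAt(i) & 0xFF; // mask off any 0xF7xx private-use padding
  }
  return bytes;
}
```

The IE responseBody/VBArray workaround mentioned above would slot in as an alternate path inside fetchBinary.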
<br />
Second task - parse the DICOM binary object. Once upon a time, using the bit twiddling functions in JavaScript might have been too slow, but nowadays that does not seem to be the case. It was pretty trivial to write a modest number of lines of code to skip the 128 byte preamble, detect the DICM magic string, then parse each data element successively, using explicit lengths to skip those that aren't needed, skipping undefined length sequences and items, keeping track of only the values of those data elements that are needed for later stages (e.g., Bits Allocated), and ignoring the rest. Having written just a few DICOM parsers in the past made this a lot easier for me than starting from scratch. I kept the line count down by restricting the input to explicit VR little endian for the time being, not trying to cope with malformed input, and just assuming that the desired data element values were those that occurred last in the data set. Obviously this could be made more robust in the future for production use (e.g., tracking the top level data set versus data sets nested within sequence items), but this was sufficient for the proof of concept.<br />
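The core of such a parser is small. The following is a sketch along the lines just described (explicit VR little endian only, undefined lengths rejected rather than skipped, and function names invented for illustration):

```javascript
// Sketch of a minimal explicit VR little endian parser: skip the 128 byte
// preamble, check the "DICM" magic string, then walk the data elements.
function parseExplicitVRLittleEndian(bytes) {
  function u16(o) { return bytes[o] | (bytes[o + 1] << 8); }
  function u32(o) { return (u16(o) | (u16(o + 2) << 16)) >>> 0; }
  // VRs that use a 2 byte reserved field followed by a 32 bit length
  var longVRs = { OB: 1, OW: 1, OF: 1, SQ: 1, UT: 1, UN: 1 };

  var offset = 128; // skip the preamble
  var magic = String.fromCharCode(bytes[offset], bytes[offset + 1],
                                  bytes[offset + 2], bytes[offset + 3]);
  if (magic !== 'DICM') throw new Error('not a DICOM PS3.10 file');
  offset += 4;

  var elements = {};
  while (offset + 8 <= bytes.length) {
    var group = u16(offset), element = u16(offset + 2);
    var vr = String.fromCharCode(bytes[offset + 4], bytes[offset + 5]);
    var length, valueOffset;
    if (longVRs[vr]) {
      length = u32(offset + 8);  // 2 reserved bytes, then 32 bit length
      valueOffset = offset + 12;
    } else {
      length = u16(offset + 6);  // 16 bit length
      valueOffset = offset + 8;
    }
    if (length === 0xFFFFFFFF) throw new Error('undefined lengths not handled in this sketch');
    var tag = ('0000' + group.toString(16)).slice(-4) + ',' +
              ('0000' + element.toString(16)).slice(-4);
    elements[tag] = { vr: vr, offset: valueOffset, length: length };
    offset = valueOffset + length;
  }
  return elements;
}

// e.g., to read Bits Allocated (0028,0100) as an unsigned short:
function readUS(bytes, el) { return bytes[el.offset] | (bytes[el.offset + 1] << 8); }
```

A production parser would, as noted, also need to descend into sequences and tolerate malformed input.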
<br />
Third task - windowing a greater than 8 bit image. It would have been easy to just download an 8 bit DICOM image, whether grayscale or color, since then no windowing from 10, 12 or 16 bits to 8 would be needed, but that wouldn't have been a fair test. I particularly wanted to demonstrate that client-side interactivity using the full contrast and spatial resolution DICOM pixel data was possible. So I used the same approach as I have used many times before, for example in the <a href="http://www.pixelmed.com/#PixelMedJavaDICOMToolkit" target="_blank">PixelMed toolkit</a> <a href="http://www.dclunie.com/pixelmed/software/javadoc/com/pixelmed/display/WindowCenterAndWidth.html">com.pixelmed.display.WindowCenterAndWidth</a> class, to build a lookup table indexed by all possible input values for the DICOM bit depth containing values to use for an 8 bit display. I did handle signed and unsigned input, as well as Rescale Slope and Intercept, but for the first cut at this, I have ignored special handling of pixel padding values, and other subtleties.<br />
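The lookup table construction might be sketched as follows, assuming the linear window function of DICOM PS3.14; as in the demo, pixel padding is ignored, and the function name is invented for illustration:

```javascript
// Build, for every possible stored pixel value, the 8 bit display value
// after applying Rescale Slope/Intercept and the DICOM linear window
// function (PS3.14 C.11.2.1.2). The LUT is indexed by the raw stored bits,
// so signed values are reinterpreted by their two's complement position.
function buildWindowLUT(bitsStored, signed, center, width, slope, intercept) {
  var entries = 1 << bitsStored;
  var lut = new Uint8Array(entries);
  var c = center - 0.5, w = width - 1;
  for (var i = 0; i < entries; i++) {
    var stored = (signed && i >= entries / 2) ? i - entries : i;
    var x = stored * slope + intercept;
    var y;
    if (x <= c - w / 2) y = 0;
    else if (x > c + w / 2) y = 255;
    else y = Math.round(((x - c) / w + 0.5) * 255);
    lut[i] = y;
  }
  return lut;
}
```

Rendering a frame is then one array lookup per pixel, which is what makes interactive re-windowing on the client cheap.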
<br />
These first three tasks are essentially independent of the rendering approach, and are necessary regardless of whether Canvas is going to be used or not.<br />
<br />
The fourth and fifth tasks are related - making something the browser will display, and then making the browser actually display it. I found the clues for how to do this in the work of Jeff Epler, who described a tool for<a href="http://emergent.unpythonic.net/software/01126462511-glif" target="_blank"> creating single bit image files in the browser</a> (client side) to use as glyphs.<br />
<br />
Fourth task - making something the browser will display. Since without Canvas one cannot write directly to a window, the older browsers need to be fed something they know about already. An image file format that is sufficient for the task, and which contributes no "loss" in that it can directly represent 8 bit RGB pixels, is <a href="https://en.wikipedia.org/wiki/Graphics_Interchange_Format">GIF</a>. But you say, GIF involves a lossless compression step, with entropy coding using <a href="https://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Welch">LZW</a> (the compression scheme that was at the heart of <a href="http://www.kuro5hin.org/story/2003/6/19/35919/4079" target="_blank">the now obsolete patent-related issues with using GIF</a>). Sure it does, but many years ago, <a href="https://en.wikipedia.org/wiki/Tom_Lane_%28computer_scientist%29" target="_blank">Tom Lane</a> (of <a href="https://en.wikipedia.org/wiki/Libjpeg" target="_blank">IJG</a> fame) observed that because of the way LZW works, with an initial default code table in which the code (index) is the same as the value it represents, as long as one adds one extra bit before each code, and resets the code table periodically, one can just send the original values as if they were entropy coded values. Add a bit of blocking and a few header values, and one is good to go with a completely valid uncompressed (albeit slightly expanded) bitstream that any GIF decoder should be able to handle. This concept is now immortalized in the <a href="http://directory.fsf.org/wiki/Libungif" target="_blank">libungif</a> library, which was developed to be able to create "uncompressed GIF" files to avoid infringing on the Unisys LZW patent. Some of the details are described under the heading of "Is there an uncompressed GIF format?" in the old <a href="http://www.faqs.org/faqs/graphics/fileformats-faq/part1/" target="_blank">Graphic File Formats FAQ</a>, which references Tom Lane's original post. 
In my implementation, I just made 9 bit codes from 8 bit values, added a clear code every 128 values, and made sure to stuff the bits into appropriate length blocks preceded by a length value, and it worked fine. And since I have 8 bit gray scale values as indices, I needed to populate the global color table, mapping each gray scale index to an RGB triplet with the same intensity value (since GIF is an indexed color file format, which is why GIF is lossless for 8 bit single channel data, but lossy (needs quantization and dithering) for true color data with more than 256 different RGB values).<br />
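To make the bit packing concrete, here is a sketch of just the "fake LZW" code stream generation (invented name, not the demo code itself): each 8 bit pixel is emitted verbatim as a 9 bit code, packed least significant bit first as GIF requires, with a clear code inserted often enough that the decoder's code table never forces the code size past 9 bits:

```javascript
// "Uncompressed GIF" LZW code stream: with an 8 bit minimum code size,
// codes 0-255 are the literal pixel values, 256 is the clear code and
// 257 is end-of-information. Periodic clear codes keep the code size at 9.
function packUncompressedLZW(pixels) {
  var CLEAR = 256, EOI = 257, CODE_SIZE = 9;
  var out = [], acc = 0, nbits = 0;
  function emit(code) {
    acc |= code << nbits; // GIF packs codes LSB first
    nbits += CODE_SIZE;
    while (nbits >= 8) { out.push(acc & 0xFF); acc >>= 8; nbits -= 8; }
  }
  emit(CLEAR);
  for (var i = 0; i < pixels.length; i++) {
    if (i > 0 && i % 128 === 0) emit(CLEAR); // reset before the table could outgrow 9 bits
    emit(pixels[i]);
  }
  emit(EOI);
  if (nbits > 0) out.push(acc & 0xFF); // flush the final partial byte
  return out;
}
```

The returned bytes still need to be wrapped in the GIF header, global color table, and the length-prefixed sub-blocks of at most 255 bytes mentioned above.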
<br />
Fifth task - make the browser display the GIF. Since JavaScript in the browser runs in a sort of "sand box" to <a href="https://en.wikipedia.org/wiki/JavaScript#Security">prevent insecure access</a> to the local file system, etc., it is not so easy to feed the GIF file we just made to the browser, say as an updated IMG reference on an HTML page. It is routine to update an image reference with an "http:" URL that comes over the network, but how does one achieve that with locally generated content? The answer lies in the <a href="https://en.wikipedia.org/wiki/Data_URI_scheme" target="_blank">"data:" URI</a> that was introduced for this purpose. There is a whole web site, <a href="http://dataurl.net/">http://dataurl.net/</a>, devoted to this subject. <a href="http://www.websiteoptimization.com/speed/tweak/inline-images/">Here</a>, for example, is a description of using it for inline images. It turns out that what is needed to display the locally generated GIF is to create a (big) string that is a "data:" URI with the actual binary content Base64 encoded and embedded in the string itself. This seems to be supported by all recent and contemporary browsers. I don't know what the ultimate size limits are for the "data:" URI, but it worked for the purpose of this demonstration. There are actually various online "image to data: URI converters" available, for generating static content (e.g., at <a href="http://websemantics.co.uk/online_tools/image_to_data_uri_convertor/" target="_blank">webSemantics</a>, the <a href="http://software.hixie.ch/utilities/cgi/data/data" target="_blank">URI Kitchen</a>) but for the purpose of rendering DICOM images this needs to be done dynamically by the client-side JavaScript. 
Base64 encoding is trivial and I just copied a function from <a href="http://stackoverflow.com/questions/7370943/retrieving-binary-file-content-using-javascript-base64-encode-it-and-reverse-de" target="_blank">an answer on StackOverflow</a>, and then tacked the Base64 encoded GIF file on the end of a "data:image/gif;base64," string, <i>et voilà</i>!<br />
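Putting that last step together, a hand-rolled Base64 encoder (of the sort one might copy from StackOverflow, with invented names here) and the "data:" URI assembly might look like this; in a real page the resulting string would simply be assigned to an IMG element's src:

```javascript
// Base64 encode a byte array and wrap it in a "data:" URI for an IMG src.
var B64 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';

function base64Encode(bytes) {
  var s = '';
  for (var i = 0; i < bytes.length; i += 3) {
    var b0 = bytes[i], b1 = bytes[i + 1], b2 = bytes[i + 2];
    s += B64[b0 >> 2] + B64[((b0 & 3) << 4) | ((b1 || 0) >> 4)];
    s += i + 1 < bytes.length ? B64[((b1 & 15) << 2) | ((b2 || 0) >> 6)] : '=';
    s += i + 2 < bytes.length ? B64[b2 & 63] : '=';
  }
  return s;
}

function gifToDataURI(gifBytes) {
  return 'data:image/gif;base64,' + base64Encode(gifBytes);
}

// in the browser (not runnable here): document.getElementById('image').src = gifToDataURI(gif);
```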
<br />
Anyway, not rocket science, but hopefully useful to someone. I dare say that in the long run the HTML Canvas element will make most of this moot, and there are certainly already a small but growing number of "pure JavaScript" DICOM viewers out there. I have to admit it is tempting to spend a little time experimenting more with this, and perhaps even write an entire IHE Basic Image Review Profile viewer this way, using either Canvas or the "GIF/data: URI" trick for rendering. Don't hold your breath though.<br />
<br />
It would also be fun to go back through previous generations of browsers to see just how far back the necessary concepts are supported. I suspect that size limits on the "data:" URI may be the most significant issue in that respect, but one could conceivably break the image into small tiles, each of which was represented by a separate small GIF in its own small enough "data:" URI string. I also haven't looked at client-side caching issues. These tend to be significant when one is displaying (or switching between) a lot of images or frames. I don't know whether browsers handle caching of "data:" URI objects differently from those fetched via http, or indeed how they handle caching of files pulled via XMLHttpRequest.<br />
<br />
Extending the DICOM parsing and payload extraction stuff to handle other uncompressed DICOM transfer syntaxes would be trivial detail work. For the compressed transfer syntaxes, for single and three channel 8 bit baseline JPEG, one can just strip out the JPEG bit stream from its DICOM encapsulated fragments, concatenate and Base64 encode the result, and stuff each frame in a data:url with a media type of image/jpeg instead of image/gif. Same goes for DICOM encapsulated MPEG, I suppose, though that might really stretch the size limits of the "data:" URI.<br />
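For the single frame case, that stripping-out might be sketched as follows (function name invented; the first item is the Basic Offset Table, which is skipped, and the item walk stops at the (FFFE,E0DD) sequence delimiter):

```javascript
// Walk the items of an encapsulated Pixel Data element, skip the Basic
// Offset Table, and concatenate the remaining fragments into one bit
// stream (e.g., a single JPEG frame). Each item has tag (FFFE,E000) and
// an explicit 32 bit length; the sequence ends with (FFFE,E0DD).
function extractEncapsulatedFrame(bytes, pixelDataValueOffset) {
  function u16(o) { return bytes[o] | (bytes[o + 1] << 8); }
  function u32(o) { return (u16(o) | (u16(o + 2) << 16)) >>> 0; }
  var offset = pixelDataValueOffset, fragments = [], total = 0, first = true;
  while (offset + 8 <= bytes.length) {
    var group = u16(offset), element = u16(offset + 2), length = u32(offset + 4);
    offset += 8;
    if (group !== 0xFFFE || element !== 0xE000) break; // sequence delimitation item
    if (!first) { fragments.push([offset, length]); total += length; }
    first = false; // first item is the Basic Offset Table (possibly empty)
    offset += length;
  }
  var frame = new Uint8Array(total), pos = 0;
  for (var i = 0; i < fragments.length; i++) {
    frame.set(bytes.subarray(fragments[i][0], fragments[i][0] + fragments[i][1]), pos);
    pos += fragments[i][1];
  }
  return frame;
}
```

The concatenated bytes would then be Base64 encoded into a "data:image/jpeg;base64," URI as described above.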
<br />
Since bit-twiddling is not so bad in JavaScript after all, one could even write a <a href="https://en.wikipedia.org/wiki/Lossless_JPEG#Lossless_mode_of_operation">JPEG lossless</a> or <a href="https://en.wikipedia.org/wiki/Lossless_JPEG#JPEG-LS">JPEG-LS</a> decoder in JavaScript that might not perform too horribly. After all, JPEG-LS was based on <a href="http://www.hpl.hp.com/research/info_theory/loco/">LOCO</a> and that was simple enough to <a href="http://www.hpl.hp.com/news/2004/jan-mar/hp_mars.html">fly to Mars</a>, so it should be a cakewalk in a modern browser; it is conceptually simple enough that even I managed to write a <a href="http://www.dclunie.com/jpegls.html">C++ JPEG-LS</a> codec for it, some time back. That said, <a href="http://www.dclunie.com/papers/spie_mi_2000_compression.pdf">modest compression</a> without requiring an image-specific lossless compression scheme can be achieved using gzip or deflate (zip) with <a href="https://en.wikipedia.org/wiki/HTTP_compression">HTTP compression</a>, and may obviate the need to use a DICOM lossless compression transfer syntax, unless the server happens to already have files in such transfer syntaxes.<br />
<br />
Doing the JPEG 12 bit DCT process might be a bit of a performance dog, but you never know until someone tries. Don't hold your breath for these from me any time soon though, but if I get another spare Saturday, you never know ...<br />
<br />
Oops, spoke too soon, someone has already done a pure <a href="https://github.com/notmasteryet/jpgjs">JavaScript JPEG decoder</a> ...<br />
<br />
<a href="https://en.wikipedia.org/wiki/Standing_on_the_shoulders_of_giants"><i>nanos gigantum humeris insidentes</i></a><br />
<br />
David<br />
<br />
David Cluniehttp://www.blogger.com/profile/17331067317921452126noreply@blogger.com4tag:blogger.com,1999:blog-1367102802658603789.post-47400821700484925852013-08-02T11:11:00.000-07:002013-08-02T11:11:51.588-07:00So Many EHR Vendors, So Little TimeSummary: In the US, >=442 vendors need to interface to imaging systems. Good thing we have IID.<br />
<br />
Long Version.<br />
<br />
A report on <a href="http://www.skainfo.com/health_care_market_reports/EMR_Electronic_Medical_Records.pdf">US Physician Office EHR adoption</a> from <a href="http://www.skainfo.com/">SK&A</a> that <a href="http://www.ahier.net/p/home.html">Brian Ahier</a> described in a recent <a href="http://www.ahier.net/2013/07/ehr-vendor-market-share.html">post</a> contains some interesting numbers of relevance to imaging folks. Overall adoption hovers around 50%, but what really intrigued me was the list of vendors with market share ... Allscripts, eClinicalWorks and Epic had about 10% each, another 17 vendors split about half the market (45%), then there were 422 more (!!!) splitting the remaining 25% or so.<br />
<br />
That seemed like an awful lot, and I was wondering if perhaps the "other" category confused different versions or something, or was just an error. So I went to the <a href="http://oncchpl.force.com/ehrcert/ehrproductsearch">ONC's Certified Heath IT Products List</a>, and in the combination of 2011 and 2014 editions, elected to browse all Ambulatory Products, and was rewarded with 3470 products found! That list includes lots of different bits and pieces and versions, but it does confirm the presence of a large number of choices.<br />
<br />
That is a very large number of EHR vendors and systems for PACS and VNA and Imaging Sharing system producers to interface with, in order to View (or Download or Transmit) images.<br />
<br />
It certainly is a very large "n" for the "n:m" combinations of individually customized interfaces, if one goes that route. It is a good thing perhaps that we just finished the <a href="http://dclunie.blogspot.com/2013/07/display-it-now-i-command-you.html">IID</a> profile in IHE, to potentially make it "n+m" instead.<br />
<br />
It is hard to believe that there won't be some very dramatic consolidation some time soon, but no matter how rapidly that occurs, being able to satisfy the Stage 2 menu option that includes image viewing would seem to be a potential discriminator and a competitive advantage.<br />
<br />
This may be particularly true for the smaller players, who clearly seem to be satisfying some customers (judging by the stratification in the SK&A report by practice size). Perhaps the big players are too expensive or too complicated, or too busy to bother with small accounts, as MU obsession consumes all their available resources.<br />
<br />
Imaging vendors that make it easy for small EHR players to access images by implementing the Image Display actor of <a href="http://dclunie.blogspot.com/2013/07/display-it-now-i-command-you.html">IID</a> might help imaging facilities that purchase their products to remain competitive in this age of reimbursement reduction. If their referring providers insist on integration with whichever of the 442 to 3470 EHR products they happen to have, and are not satisfied, those providers can easily switch to another imaging provider.<br />
<br />
Small EHR players that don't take advantage of standards like <a href="http://dclunie.blogspot.com/2013/07/display-it-now-i-command-you.html">IID</a>, and succumb to pressure from even a modest number of PACS vendors to customize to an existing proprietary interface, may run out of resources pretty quickly. Our other imaging standards like <a href="http://www.dclunie.com/dicom-status/status.html#PartPS%203.18">WADO</a> and <a href="http://wiki.ihe.net/index.php?title=Cross-enterprise_Document_Sharing_for_Imaging">XDS-I</a> are good as far as they go, and very important for imaging vendors to support. But they require a level of sophistication on the client side that may be beyond most small EHR vendors, particularly if interactive viewing is required by the referring providers. <a href="http://www.dclunie.com/dicom-status/status.html#PartPS%203.18">WADO</a> and <a href="http://wiki.ihe.net/index.php?title=Cross-enterprise_Document_Sharing_for_Imaging">XDS-I</a> might be the means used to support <a href="http://dclunie.blogspot.com/2013/07/display-it-now-i-command-you.html">IID</a> on the imaging side, but the EHR doesn't need to sweat the details.<br />
<br />
David<br />
<br />David Cluniehttp://www.blogger.com/profile/17331067317921452126noreply@blogger.com0tag:blogger.com,1999:blog-1367102802658603789.post-16379768990953316512013-08-01T17:40:00.002-07:002013-08-01T17:40:52.021-07:00Lumpers vs Splitters - Anatomy and Procedures, Prefetching and BrowsingSummary: For remote access and pre-fetching, should one lump anatomic regions into a small number of categories, or retain the finer granularity inherent in the procedure codes and explicitly encoded in the images?<br />
<br />
Long Version.<br />
<br />
Of late you may have noticed a spate of posts from me about <a href="http://dclunie.blogspot.com/2013/07/out-of-body-experience-anatomical.html">anatomy in the images</a>, <a href="http://dclunie.blogspot.com/2013/07/do-you-want-side-with-that-procedure.html">procedure codes</a>, as well as <a href="http://dclunie.blogspot.com/2013/07/pre-fetching-zombie-apocalypse-or.html">pre-fetching</a>. Needless to say these topics are related, and there is a reason for my recently renewed interest in researching these subjects.<br />
<br />
You may or may not have noticed that in IHE XDS-I.b, there is a bunch of information included in the registry metadata that is specifically described for imaging use (see <a href="http://www.ihe.net/Technical_Framework/upload/IHE_RAD_TF_Vol3.pdf">IHE RAD TF:3</a>, section 4.68.4.1.2.3.2 XDSDocumentEntry Metadata).<br />
<br />
The typeCode is supposed to contain the (single) Procedure Code. Unfortunately, since almost nobody currently uses standard sets of codes, these will usually contain local institution codes. So whilst their display name may be rendered and browsable by a human, they will not easily be recognized by a machine, e.g., for pre-fetching or hanging. The specification currently says typeCode should contain the Requested Procedure Code rather than the Performed Procedure Code, which is an interesting choice, since what was requested is not always what was done.<br />
<br />
There is also an eventCodeList that is currently defined to contain a code for the Modality (drawn from <a href="http://www.dclunie.com/dicom-status/status.html#PartPS%203.16">DICOM PS 3.16</a> CID 29), and one or more Anatomic Region codes (from DICOM CID 4).<br />
<br />
Now, no matter where the anatomic codes come from (be they derived from the local or standard procedure codes, extracted from the images, from some mysterious out of band source, or entered by a human), there is a fairly long list of theoretical values and practical values that are actually encountered, depending on the scenario, whether it be radiology, cardiology, or some other specialty that is a source of images, like ophthalmology.<br />
<br />
There are different potential human users of this information, whether it be radiologists viewing radiology images, those physicians who requested the imaging viewing radiology images (like an ophthalmologist requesting an MR of the orbits), or other specialists viewing their own images (like ophthalmologists, endoscopists, dermatologists, etc.). Even confining oneself to the radiology domain, the reasons for retrieving a set of images may vary.<br />
<br />
One might think that there is no problem, since XDS-I.b requires that the anatomical information be present, and requires that it be drawn from a rich set of choices.<br />
<br />
However, some folks seem to think that the set of choices of anatomical concepts is too rich and too long, and want to cut it down to just a short list, "lumping" a whole bunch of stuff together, rather than leaving it "split" into its fine grained descriptions.<br />
<br />
Why, one might ask, would one ever want to discard potentially useful information by such "coarsening" of the anatomical concepts in advance, when if there was a need to do so, one could easily do it on the querying end, when necessary?<br />
<br />
So I did ask, and the result was a fairly vigorous and prolonged email "debate" back and forth between the "lumpers" and the "splitters". The net result is that neither side is convinced of the merits of the other's argument, and neither is interested in talking to the other anymore. So the process has stalled, and in the interim individual XDS "affinity domains" will do whatever they see fit, with their choices no doubt modulated by what their vendors are able or willing to deliver in this respect.<br />
<br />
An obvious compromise would be to always send both coarse and fine codes. Unfortunately, since the eventCodeList is a flat list of codes, there is no easy way to communicate name-value pairs, and since coarse and fine grained anatomy come from the same coding scheme (SNOMED), there is no easy way to send both and distinguish them (which turns out to be important), at least not without a change to the underlying ITI requirements for XDS, and they are loath to make changes, apparently for fear of invalidating the installed base of XDS systems (modest sized though that might be at this early stage). Getting a slot added to send Accession Number was like pulling teeth from ITI, and nobody has the stomach for a repeat of that tedious exercise.<br />
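To make the flat-list problem concrete, here is a minimal sketch in Python. The code values are illustrative only (the abdomen and colon SNOMED-RT identifiers are shown as plausible examples, not asserted registry content):<br />

```python
# Hypothetical sketch of the problem: an XDS-I eventCodeList is a flat
# list of (codeValue, codingSchemeDesignator) pairs with no slot names.
# If both a coarse ("Abdomen") and a fine ("Colon") anatomy code were
# sent, both would come from the same SNOMED coding scheme, so a
# receiver could not tell which role each code plays.
event_code_list = [
    ("MR", "DCM"),       # modality, drawn from DICOM CID 29
    ("T-D4000", "SRT"),  # Abdomen (coarse) -- illustrative SNOMED-RT value
    ("T-59300", "SRT"),  # Colon (fine) -- illustrative SNOMED-RT value
]

# Both anatomy entries look identical in kind: same scheme, no role label.
anatomy_codes = [code for code, scheme in event_code_list if scheme == "SRT"]
assert anatomy_codes == ["T-D4000", "T-59300"]  # but which one is coarse?
```

Nothing in the list structure itself distinguishes the coarse from the fine code, which is exactly why a convention or a metadata change would be needed.<br />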
<br />
The context in which this arose initially was pre-fetching. One reasonable approach is pre-fetching all those studies in the same coarse group as the current study, and the expectation is that this would be better than pre-fetching everything, or nothing, or relying on workflow related reasons, such as pre-fetching the most recent studies or the study that one actually ordered in the first place, or studies of the same modality, or intended for the same recipient, etc.<br />
<br />
However, one can potentially do a better job of pre-fetching if one applies more granular rules, and this is particularly the case when one has a specific clinical question or task to perform.<br />
<br />
An example may help. Suppose one is interested in, say, a patient's screening virtual CT colonoscopy, whether one is a radiologist reporting it, or the ordering physician. And one wants to compare it with a previous virtual CT colonoscopy. Should one pre-fetch all CTs of the abdomen for comparison (and there may be quite a few given that they are handed out in the emergency room like candies), not to mention whole body CT-PET scans that include the abdomen, etc.? Or should one pre-fetch only CTs of the colon? Now, if one could match procedure codes, and there was only one or a limited number of procedure codes for CT colonoscopy, one could match on that and ignore all the extraneous studies. But we have already established that procedure codes are currently largely non-standardized, and in any reasonably sized enterprise that has grown through acquisition or changed its EHR or RIS lately (can you say MU?), there may be a multitude of different coding schemes used in the archives.<br />
<br />
So, the lumpers would say, send abdomen for the anatomy, and pre-fetch them all. The splitters would say send colon for the anatomy, and pre-fetch whatever comes out of rules you want to apply at the requesting end (lump with other abdomens if you want to, or not, depending on your preference, or the sophistication of your rules, and your knowledge of the question).<br />
<br />
The clinical question really is important. If you are a vascular surgeon wondering about change in size of an aortic aneurysm, you might really want any imaging that included the abdominal aorta, for whatever reason, and not just cardiovascular images, and CT colonoscopy would include useful images in the axial set.<br />
<br />
One can come up with all sorts of similar examples, perfusion brain CT or petrous temporal bone CT versus any head CT, coronary or pulmonary CT angiogram versus any chest CT, etc. Beyond radiology, does an ophthalmologist want all head and neck, or just eyes, or just retinas?<br />
<br />
The "lumping" strategy required also depends on the use, since there may be potential ambiguities. Is a cervical spine lumped into "spine", or does it go with "head and neck", for example? And with multiple contributing sources, will they all implement the same lumping decisions?<br />
<br />
The point being that it is impossible to anticipate the requirements on the receiving end until the question is asked, not when the studies are registered in the first place. Accordingly, in my opinion the richness should be recorded in the registry and available in the query, and the pre-fetching decision-making, including any "lumping" if appropriate, should be performed at the receiving end.<br />
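The receiver-side decision-making argued for here can be sketched as follows; the fine-to-coarse mapping table, the study records, and the function names are all hypothetical:<br />

```python
# Hypothetical receiver-side pre-fetching: the registry keeps the fine-
# grained anatomy, and any "lumping" happens in the querying system's
# own rules. The mapping table and study records are made up.
FINE_TO_COARSE = {
    "colon": "abdomen",
    "liver": "abdomen",
    "pancreas": "abdomen",
    "wrist": "upper extremity",
}

def prefetch(current_anatomy, priors, lump=False):
    """Select which prior studies to pre-fetch for the current anatomy."""
    if lump:
        target = FINE_TO_COARSE.get(current_anatomy, current_anatomy)
        return [p for p in priors
                if FINE_TO_COARSE.get(p["anatomy"], p["anatomy"]) == target]
    # Splitter behaviour: match the fine-grained anatomy exactly.
    return [p for p in priors if p["anatomy"] == current_anatomy]

priors = [
    {"uid": "1", "anatomy": "colon"},  # prior CT colonoscopy
    {"uid": "2", "anatomy": "liver"},  # unrelated abdominal CT
]

# A "splitter" fetches only the colon prior; a "lumper" fetches both.
assert [p["uid"] for p in prefetch("colon", priors)] == ["1"]
assert [p["uid"] for p in prefetch("colon", priors, lump=True)] == ["1", "2"]
```

The point of the sketch is that the coarse behaviour is trivially recoverable from the fine-grained data at the receiving end, but not vice versa.<br />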
<br />
Retaining the more granular information is particularly important when one considers the possibility of using more sophisticated artificial intelligence approaches to pre-fetching, rather than simple heuristics or manually authored rules; you will find some references to those techniques in my recent <a href="http://dclunie.blogspot.com/2013/07/pre-fetching-zombie-apocalypse-or.html">pre-fetching post</a>. Adaptive systems can learn what individual users (or sets of users in the same role) need based on what they are observed to actually view. But even simple rule-based pre-fetchers can be more sophisticated than just using a coarse list (e.g., the <a href="http://www.radmapps.com/Anatomy.html">RadMapps</a> approach based on string study descriptions).<br />
<br />
Besides, if one believes in "lumping", it is not as if the task is very burdensome, no matter where it is performed, given the modest numbers of codes to deal with. Though I described the list of fine grained codes as "fairly long" earlier, it isn't really that long. Even were one to need to select from the list in a user interface, just like for a user interface for procedures (a much longer list than anatomic locations), there are tactics for presenting long lists in an easily navigable manner.<br />
<br />
It is interesting to consider the history of the DICOM list in this respect. Over time the list has grown, from the original 19 that were CR-specific in DICOM 1993, to contain now 112 string values for Body Part Examined, most of which have been added to reflect experience in the field (e.g., what CR vendors started to send when they couldn't find a good match, or what other modalities needed). DICOM defines the SNOMED coded equivalents of all of those, plus various others that are used in specific objects (especially cardiology objects, and those for echocardiography in particular); the total is 340 coded concepts at the moment, many of which are not relevant to describing the anatomic region of a procedure for a registry, and some of which reuse the same code for different meanings in different contexts (e.g., X and endo-X with the same code). This is all summarized in <a href="http://www.dclunie.com/dicom-status/status.html#PartPS%203.16">DICOM PS 3.16</a> Annex L, which is related to CID 4. There are probably a few too many highly specific cardiovascular locations that got pulled in this way. There are a few specialties that have separate lists, e.g., ophthalmology, which have not been folded into Annex L yet, and do not have string equivalents for coded values. These lists may not be perfect, but they are a line in the sand and do reflect what people have asked for, over 20 years of experience with the standard.<br />
<br />
So, in short, no more than a few hundred codes probably need to be mapped from the procedure codes (or acquired by some other means) at the sending end. And at the receiving end, no more than that few hundred codes need to be "lumped" to apply coarse pre-fetching rules, if that floats your boat. And since all the anatomy codes defined in DICOM CID 4 are SNOMED codes, the mapping is already right there for the implementer to extract in the relationships present in the SNOMED files.<br />
<br />
One concern that has been expressed is that there are too many anatomical codes to map to from one's local procedure code, and it is easier to map to a short list. I would argue the opposite, in that it is easier to map "XR wrist" to "wrist" than "lower extremity", or "MR Pituitary" to "pituitary" rather than "head and neck". That is, a literal mapping doesn't require knowledge of anatomy. Not to mention the fact that the better approach is to map one's local procedure codes to standard procedure codes (like SNOMED or LOINC or RadLex Playbook) in the first place, then extract the anatomy automatically from the ontologies that back those standards. <br />
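A minimal sketch of that point, assuming hypothetical local procedure names of the form "modality body-part": the literal fine-grained mapping falls straight out of the name, whereas coarse lumping requires a separate table of anatomical knowledge:<br />

```python
# Hypothetical illustration: mapping a local procedure name to the
# fine-grained anatomy it literally mentions needs no anatomical
# knowledge; the body part is right there in the name.
def fine_anatomy(procedure_name):
    # Assume "XR Wrist", "MR Pituitary", etc.; take everything after
    # the modality token as the body part.
    return procedure_name.split(maxsplit=1)[1].lower()

# Mapping to a coarse region, by contrast, needs an extra knowledge
# table relating each body part to its region (made-up entries).
COARSE_REGION = {"wrist": "upper extremity", "pituitary": "head and neck"}

assert fine_anatomy("XR Wrist") == "wrist"
assert COARSE_REGION[fine_anatomy("MR Pituitary")] == "head and neck"
```
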
<br />
I asked a bunch of radiologists in the US and Australia what their preference was, fine or coarse grained anatomy, and they all expressed a preference for retaining the fine grained concept.<br />
<br />
A similar sentiment was expressed by several UK radiologists in the UK Imaging Informatics Group when <a href="http://www.pacsgroup.org.uk/forum/messages/2/74450.html">a short list was suggested</a>. The interest in "lumping" in the UK is particularly surprising, when one considers that they all have to use the <a href="http://www.datadictionary.nhs.uk/web_site_content/supporting_information/clinical_coding/national_interim_clinical_imaging_procedure_code_set.asp">NICIP</a> codes, which are not only already mapped to SNOMED, but are also already mapped to <a href="http://www.datadictionary.nhs.uk/web_site_content/supporting_information/clinical_coding/opcs_classification_of_interventions_and_procedures.asp?shownav=1">OPCS-4</a>, which already contains fine-grained anatomy codes (their Z codes and O codes). If you read the UK forum posts carefully though, you will see a distinction suggested between using their standard procedure (rather than anatomy) code for plain radiography pre-fetching, versus "lumping" anatomy for cross-sectional modalities.<br />
<br />
Anyhow, I am not certain that I have convinced anyone who already has their mind made up (that coarse codes are sufficient), nor anyone who is for some reason more intimidated by the comprehensive fine grained list in DICOM CID 4 than by a short and arbitrary list.<br />
<br />
Personally though, given the limitations inherent in the XDS metadata model, I remain convinced that the more precise information is valuable, and the coarse information not only limits what a recipient can find but contaminates the information with noise (claiming more territory was imaged than actually was). Not only does this undermine the utility of XDS, but it creates an artificial distinction between what is possible using local PACS protocols like DICOM queries as opposed to cross-enterprise protocols, when instead we should be working to make such artificial distinctions transparent to the user. In my opinion, the remote user deserves the same level of pre-fetching and manual browsing performance that is achievable locally.<br />
<br />
What do you think? <br />
<br />
David <br />
<br />
It is interesting to consider what concepts might be included in a lumped list.<br />
<br />
The original IHE CP, which triggered this debate, proposed a list that consisted of:<br />
<br />
Abdomen<br />Cardiovascular<br />Cervical Spine<br />Chest<br />Entire Body<br />Head<br />Lower Extremity<br />Lumbar Spine<br />Neck<br />Pelvis<br />Thoracic Spine<br />Upper Extremity<br /><br />
Not much use if you are a mammographer looking for last year's priors, for example, so at the very least it would make sense to add Breast.<br />
<br />
The proposed UK forum list was initially:<br />
<br />
Abdo<br />Body (esp for overlapping CT body areas)<br />Chest<br />Head<br />Heart<br />Lower Limb<br />Misc<br />Neck<br />Pelvis<br />Spine<br />Upper Limb<br />Vessels<br /><br />
to which there were later suggestions in the forum to add Breast and Bowel.<br />
<br />
When the ACR ITIC was discussing appropriateness criteria work, it had found it helpful to group procedures for that specific purpose, and the list was:<br />
<br />
Abdomen<br />Breast<br />Cardiac<br />Chest<br />Head<br />Lower extremity<br />Maxface-dental<br />Neck<br />Pelvis<br />Spine<br />Unspecified<br />
Upper extremity<br />Whole body<br /><br />
Another source of interest is the RadLex PlayBook, which categorizes procedures by Body Region (e.g., abdomen), a very short list by comparison with the more fine-grained Anatomic Focus (e.g., pancreas) that is also used. That list is:<br />
<br />
Abdomen<br />Abdomen and Pelvis<br />Bone<br />Breast<br />Cervical Spine<br />Chest<br />Face<br />Head<br />Lower Extremity<br />Lumbar Spine<br />Lumbosacral Spine<br />Neck<br />Pelvis<br />Spine<br />Thoracic Spine<br />Thoracolumbar Spine<br />Upper Extremity<br />Whole Body<br /><br />The Canadian DI Standards collaborative working group (SCWG 10) short list for XDS (after they were not convinced by my argument that no short list is necessary) is currently proposed to be:<br />
<br />
Abdomen<br />Breast<br />Cardiovascular<br />Cervical Spine<br />Chest<br />Entire Body<br />Head<br />Lower Extremity<br />Lumbar Spine<br />Neck<br />Pelvis<br />Thoracic Spine<br />Upper Extremity<br />
<br />
When I asked various radiologists what they would prefer if they were forced to live with a coarse list only, one proposal was:<br />
<br />
Abdomen<br />Breast<br />Cardiac<br />Cardiovascular (not heart)<br />Cervical Spine<br />Chest<br />Entire/Whole Body<br />Facial/dental<br />Head<br />Lower Extremity<br />Lumbar Spine<br />Neck<br />Pelvis<br />Thoracic Spine<br />Unspecified<br />Upper Extremity<br />
<br />
There was then a discussion about whether Face should be separated from Brain within Head, and then what one should do about Base of Skull and Inner Ear, which serves to emphasize my point that it is difficult to come up with a list that satisfies every constituent.<br />
<br />
To be fair, putting aside the fact that "unspecified" is undesirable, and that combined body parts may not be needed since one can send multiple codes (IHE XDS-I permits this), there is a lot of similarity between the proposals.<br />
<br />
One might wonder about the apparent obsession with lumping regions within an upper or lower extremity category, and why one would want shoulders with wrists, etc. I suppose it might reflect the continuum of radiographic views that extend along the limbs (e.g. does humerus include shoulder and elbow). Then again, if one were doing a skeletal survey for metastases one might want a category of Bone instead I suppose, in which would be included Skull, and all Spines, and Pelvis, and Chest (for ribs). Or for a skeletal survey for arthritis, just Joints and Spine perhaps.<br />
<br />What would your list be, if you needed one?<br />
<br />
David Cluniehttp://www.blogger.com/profile/17331067317921452126noreply@blogger.com4tag:blogger.com,1999:blog-1367102802658603789.post-9771787316979086582013-07-29T10:33:00.000-07:002013-07-29T10:45:41.328-07:00Do You Want a Side With That Procedure Code?Summary: Communicating laterality via procedure codes is challenging, and varies between coding systems and across interfaces.<br />
<br />
Long Version.<br />
<br />
There are basically two ways to communicate in a structured manner the laterality of a procedure (i.e., left or right knee, versus both knees, versus an unpaired body part like the pelvis).<br />
<br />
One can either send one code that is defined to mean both the procedure and the laterality, so-called pre-coordination, or send multiple codes (or elements or attributes) with the meanings kept separate. <br />
<br />
<a href="http://www.ihtsdo.org/snomed-ct/" target="_blank">SNOMED</a>, for example, does not pre-coordinate the laterality with the procedure: it specifies only P5-09024 (241641004) as the generic code for MR of the knee. There is a SNOMED generic qualifier for right, G-A100 (24028007). This is a somewhat arbitrary limitation, however, since SNOMED does pre-coordinate the concepts of "MR" and "Knee".<br />
<br />
<a href="http://loinc.org/" target="_blank">LOINC</a>, on the other hand, has pre-coordinated codes for left, right and bilateral procedures. For the MR knee example, these are 26257-6, 26258-4 and 26256-8 respectively.<br />
<br />
The UK has a <a href="http://www.datadictionary.nhs.uk/web_site_content/supporting_information/clinical_coding/national_interim_clinical_imaging_procedure_code_set.asp">National Interim Clinical Imaging Procedure (NICIP)</a> code set, and it also uses pre-coordinated codes, in this case MKNEL, MKNER and MKNEB, respectively. The NICIP code set has the interesting feature of being mapped to SNOMED, which we will return to later.<br />
<br />
So if one has a procedure code with laterality pre-coordinated, one is good to go. These codes can be used in the HL7 V2 ordering messages (Universal Service ID), and passed through the DICOM Modality Worklist and into the images and the MPPS unhindered.<br />
<br />
Better still would be to have the modality extract the laterality from the supplied procedure code and populate the DICOM Laterality attribute (or Image Laterality, depending on the IOD) so as to facilitate downstream use (hanging protocols, etc.) and reduce the need for operator entry or selection. This would be easier, of course, if the laterality concept were sent separately, but it isn't. The extraction is not often (ever?) done, and population of Laterality, if populated at all, remains operator-dependent. Nothing prevents a clever downstream PACS or VNA from extracting this information on ingestion, though, and creating a "better" Laterality value and coercing it in the stored object, if none is already present in the images, or raising an alert if there is a conflict with what is present in the images.<br />
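As a sketch of the kind of extraction suggested, using the L/R/B suffix convention visible in the NICIP codes quoted above (MKNEL, MKNER, MKNEB); a real implementation would use a proper per-code lookup table rather than this suffix heuristic:<br />

```python
# Hypothetical sketch: derive a DICOM Image Laterality style value
# ('L', 'R', 'B') from a pre-coordinated NICIP procedure code, relying
# on the trailing-letter convention of codes like MKNEL/MKNER/MKNEB.
NICIP_SUFFIX_TO_LATERALITY = {"L": "L", "R": "R", "B": "B"}

def laterality_from_nicip(code):
    """Return a laterality value, or None if none is encoded."""
    return NICIP_SUFFIX_TO_LATERALITY.get(code[-1])

assert laterality_from_nicip("MKNEL") == "L"
assert laterality_from_nicip("MKNER") == "R"
assert laterality_from_nicip("MKNEB") == "B"
assert laterality_from_nicip("MKNE") is None  # no laterality encoded
```
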
<br />
Nor is laterality required, or even mentioned, in the Assisted Acquisition Protocol Setting option of <a href="http://wiki.ihe.net/index.php?title=Scheduled_Workflow" target="_blank">IHE Scheduled Workflow (SWF)</a>. There is, however, the possibility of sending laterality information in the protocol codes, as opposed to the procedure codes, but this is not usually done either.<br />
<br />
On the other hand, if one is using SNOMED for one's procedure codes, there are several practical problems. SNOMED's contemporary solution would be to create values that could be sent in a single attribute by using post-coordinated expressions using their "<a href="http://en.wikipedia.org/wiki/SNOMED_CT#Precoordination_and_postcoordination">compositional syntax</a>". For the MR of the right knee example, that might be "241641004 | Magnetic resonance imaging of knee | : 272741003 | laterality | = 24028007 | right".<br />
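The quoted compositional expression can be pulled apart mechanically; here is a minimal parsing sketch (not a general SNOMED compositional grammar parser, just enough to handle the single-refinement form shown above):<br />

```python
# Hypothetical sketch: split a post-coordinated SNOMED CT compositional
# expression into its focus concept and its single refinement.
expr = ("241641004 | Magnetic resonance imaging of knee | : "
        "272741003 | laterality | = 24028007 | right")

focus, refinement = expr.split(":", 1)
attribute, value = refinement.split("=", 1)

def concept_id(term):
    # The concept identifier precedes the first "|"-delimited term.
    return term.split("|")[0].strip()

assert concept_id(focus) == "241641004"      # the procedure
assert concept_id(attribute) == "272741003"  # the "laterality" attribute
assert concept_id(value) == "24028007"       # right
```
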
<br />
This is all very well in theory, and the <a href="http://www.connectingforhealth.nhs.uk/systemsandservices/data/uktc/imaging/nicipfaqs#9-why-is-the">British</a> and the Canadians (well, the <a href="http://miircam.ca/miit2012/7-Larwood.pdf">Ontarians</a> anyway) are very excited about using SNOMED for procedure codes, but there is the small practical matter of implementing this in HL7 V2 and DICOM. Code length limits are (probably) not an HL7 V2 issue, but they certainly are in DICOM.<br />
<br />
Since both modalities and worklist providers can only encode 16 characters in the Code Value (which has an SH Value Representation), we are out of luck trying to encode arbitrary length compositional expressions. Indeed even switching to the SNOMED-CT style numeric Concept IDs (24028007), rather than using the SNOMED-RT style Snomed ID strings (G-A100) that DICOM has traditionally used for Code Value, is a problem. The Concept ID namespace mechanism allows for up to 18 characters, which is too long for DICOM unless there happen to be enough elided leading zeroes, and this is a special problem for national extensions. Bummer.<br />
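The length arithmetic is simple enough to sketch; the 18-digit value below is a made-up placeholder standing in for a long national-extension Concept ID, not a real code:<br />

```python
# DICOM Code Value uses the Short String (SH) Value Representation,
# which is limited to 16 characters.
SH_MAX_LENGTH = 16

def fits_in_code_value(code):
    return len(code) <= SH_MAX_LENGTH

assert fits_in_code_value("G-A100")    # SNOMED-RT style SnomedID: fits
assert fits_in_code_value("24028007")  # short SNOMED CT ConceptID: fits
# An 18-digit ConceptID (hypothetical value) from a national extension
# namespace does not fit:
assert not fits_in_code_value("123456789012345678")
```
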
<br />
Unfortunately, the Code Value length limit cannot be changed since it would invalidate the installed base. There have been various discussions about adding alternative attributes or Base64 encoding to stuff in the longer numeric value, but there is no consensus yet.<br />
<br />
For the time being, for practical use, either laterality has to be pre-coordinated in the single procedure code, or it has to be conveyed as a separate attribute in the DICOM Modality Worklist.<br />
<br />
With respect to the possibility of a separate attribute, a forthcoming white paper from the IHE Radiology Technical Committee, Code Mapping in IHE Radiology Profiles, discusses the flow of codes throughout the system. It mentions the matter of laterality, and what features of <a href="http://wiki.ihe.net/index.php?title=Scheduled_Workflow" target="_blank">IHE Scheduled Workflow</a> can be used if laterality is conveyed separately from the procedure code. In short, there are specific HL7 V2 attributes (OBR-15 in v2.3.1 and OBR-46 in v2.5.1), whose modified use is defined by IHE to convey laterality. And there is an accompanying requirement to append the value to Requested Procedure Description (0032,1060) for humans to read, but that is better than nothing (or depending on a piece of paper or the RIS terminal). But there is no standard way to convey laterality separately and in a structured manner in the DICOM Modality Worklist, which means there is no (automated) way to get it into the images.<br />
<br />
Another effort to standardize procedure codes, the <a href="http://playbook.radlex.org/" target="_blank">RadLex Playbook</a>, also currently defines pre-coordinated codes for left (RPID708) and right (RPID709) MR of the knee. A minor and remediable issue is that it does not currently have a concept for a bilateral procedure, unless one gets more specific and additionally pre-coordinates the use of intravenous contrast. This does highlight that the RadLex PlayBook is a bit patchy at the moment, since it grows over time as new concepts are required when encountered during mapping of local coding schemes. Earlier attempts to include every permutation of the attributes of a procedure resulted in an explosion of largely meaningless concepts and were abandoned, so the current approach is a good one, but these are early days yet. <br />
<br />
On the subject of contrast media, one does not usually use intravenous contrast for MR of joints, unless there is a specific reason (infection, tumor, rheumatoid arthritis). On those occasions when it is required, it is desirable to be able to specify it during ordering or protocolling, and it certainly affects mapping to billing codes. There is also the possibility of intra-articular contrast (MR arthrography) to consider.<br />
<br />
Each of these concepts needs to be pre-coordinated with the side to come up with one code. It can be difficult to determine, unless separate concepts are defined, whether the more general code (contrast not mentioned) is intended to mean the absence of contrast, or if it is just not specified and is a "parent" concept for more specific child concepts that make it explicit. SNOMED, for example, does indeed have concepts for knee MR with IV contrast, P5-09078 (432719005), and knee MR arthrography, P5-09031 (241654006). These are both children of 241641004, implying that the parent is agnostic about contrast. There are no contrast-specific SNOMED concepts that have laterality pre-coordinated, though, as expected.<br />
<br />
So, for codes specific to the MR of the right knee, in LOINC, NICIP and RadLex one finds:<br />
<br />
<table>
<tbody>
<tr><td style="text-align: center;">26258-4</td><td style="text-align: center;">MKNER</td><td style="text-align: center;">RPID709</td><td style="text-align: center;">contrast unspecified</td></tr>
<tr><td style="text-align: center;">36510-6</td><td style="text-align: center;">none</td><td style="text-align: center;">RPID1610</td><td style="text-align: center;">without IV contrast</td></tr>
<tr><td style="text-align: center;">36228-5</td><td style="text-align: center;">MKNERC</td><td style="text-align: center;">RPID1611</td><td style="text-align: center;">with contrast IV</td></tr>
<tr><td style="text-align: center;">26201-4</td><td style="text-align: center;">none</td><td style="text-align: center;">RPID1606</td><td style="text-align: center;">with and without contrast IV</td></tr>
<tr><td style="text-align: center;">43453-0</td><td style="text-align: center;">none</td><td style="text-align: center;">none</td><td style="text-align: center;">dynamic IV contrast</td></tr>
<tr><td style="text-align: center;">36127-9</td><td style="text-align: center;">MJKNR</td><td style="text-align: center;">none</td><td style="text-align: center;">with intra-articular contrast</td></tr>
</tbody></table>
<br />
So LOINC is the only scheme currently that is sufficiently comprehensive in this respect. There is talk of a RadLex-LOINC harmonization effort, which, when underway, should address that gap in RadLex. There is also a new <a href="https://loinc.org/collaboration/ihtsdo/agreement.pdf" target="_blank">LOINC-SNOMED agreement</a> that has recently been announced, which will hopefully result in pre-coordinated LOINC codes being those used "on the wire" (encoded in messages and objects), but with the advantage of the availability of a mapping to their equivalent SNOMED concepts. It will be interesting to see how those who have hitched their wagon to encoding SNOMED on the wire are affected by this new agreement, or whether they switch to using LOINC codes.<br />
<br />
By the way, NICIP has an MRI Knee dynamic code too, MKDYS, but it is not side-specific, so there is some patchiness therein as well.<br />
<br />
NICIP is also interesting because mapping issues related to laterality are explicitly described in <a href="http://www.connectingforhealth.nhs.uk/systemsandservices/data/uktc/imaging/imp_guid0413.pdf">Guidance for the National Interim Clinical Imaging Procedure (NICIP) Mapping Table to OPCS-4</a>. I gather that <a href="http://www.datadictionary.nhs.uk/web_site_content/supporting_information/clinical_coding/opcs_classification_of_interventions_and_procedures.asp?shownav=1">OPCS-4</a> is the UK equivalent of a billing code set, but used for operational and resource management purposes. Specifically, the issue of mapping side-specific NICIP codes to SNOMED's non-specific code is addressed, in the context of a body region multiplier that is needed. To use their example, an MR of the left knee would map from MKNEL to SNOMED 241641004, and thence to U13.3 (MR of bone) and Z84.6 (knee joint) but would need to have laterality post-coordinated with the SNOMED code to translate to Y98.1 (radiology of one body area). Whereas an MKNEB would translate to Y98.2 (radiology of two body areas) (and interestingly a different primary code (U21.1 MR), though that may just be an error in their mapping table).<br />
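The mappings quoted above can be summarized as a lookup table. This is a simplified sketch: the MKNER row is my assumption by symmetry with MKNEL (it is not stated in the guidance), and real NICIP-to-OPCS-4 mapping involves more than three codes per procedure:<br />

```python
# Sketch of the NICIP-to-OPCS-4 laterality handling described above,
# using the mappings quoted from the guidance document (simplified).
NICIP_TO_OPCS4 = {
    # code: (primary, site, body-region multiplier)
    "MKNEL": ("U13.3", "Z84.6", "Y98.1"),  # one body area
    "MKNER": ("U13.3", "Z84.6", "Y98.1"),  # assumed symmetric with MKNEL
    "MKNEB": ("U21.1", "Z84.6", "Y98.2"),  # two body areas
}

# The body-region multiplier is where laterality surfaces on the
# OPCS-4 side: one area for a unilateral exam, two for bilateral.
assert NICIP_TO_OPCS4["MKNEL"][2] == "Y98.1"
assert NICIP_TO_OPCS4["MKNEB"][2] == "Y98.2"
```
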
<br />
Most folks, in the US at least, don't use standard procedure codes for ordering and instead rely on those codes internally developed for use in their "charge master", which may or may not bear some resemblance to billing codes or something that a vendor has supplied. This may change as more robust and well accepted standard schemes are developed and harmonized, or integration is required with other systems for handling appropriateness of ordering and utilization, and reporting of quality measures.<br />
<br />
Regardless, whether one uses standard or local codes, the question of communicating laterality in a structured electronic manner remains a challenging one. It is best addressed by looking at all the systems as an integrated whole, to take advantage of as much automation as possible, without manual re-entry, to improve quality and operational efficiency. Hopefully as many standard attributes and mappings can be leveraged as possible, without local customization.<br />
<br />
David<br />
<br />David Cluniehttp://www.blogger.com/profile/17331067317921452126noreply@blogger.com9tag:blogger.com,1999:blog-1367102802658603789.post-79507395083796513442013-07-28T08:49:00.000-07:002013-07-28T08:49:49.127-07:00Cloudy, With a Chance of CollapseSummary: The Software as a Service (SaaS) business model has long-term viability challenges. Cloud/SaaS enthusiasts beware.<br />
<br />
Long Version.<br />
<br />
I came across a piece of hype about <a href="http://venturebeat.com/2013/05/20/imaging-and-lab-results-are-begging-for-an-electronic-cloud-based-makeover" target="_blank">"cloud-based makeovers" for imaging and lab results in Venture Beat</a>, referenced from a <a href="http://www.linkedin.com/groupItem?view=&gid=73687&type=member&item=260832408" target="_blank">Linked In Clinical Trial Imaging group posting</a>.<br />
<br />
It is nice perhaps that an EHR vendor executive apparently "gets it". What interested me, though, was not the fact that folks were repeating the obvious: that CDs suck, and that it is worth exploring a "cloud" or Software as a Service (SaaS) medical image delivery method, whether for clinical care or clinical trials.<br />
<br />
Rather, at the top of the page, was a link to another article entitled "<a href="http://venturebeat.com/2013/07/24/the-unprofitable-saas-business-model-trap/" target="_blank">the unprofitable SaaS business model trap</a>" by <a href="http://blog.asmartbear.com/jason-cohen" target="_blank">Jason Cohen</a>. Now that was interesting, not because I indulge fantasies of starting a SaaS business (at least not on a very regular basis or very seriously), but because it caused me to start to wonder about how potential customers of SaaS services for EHR and medical image sharing, PACS and VNA assess the potential longevity of any service provider they get into bed with.<br />
<br />
Not that "cloud" and "SaaS" are necessarily synonymous (e.g., see Wikipedia's description of <a href="http://en.wikipedia.org/wiki/Cloud_computing" target="_blank">Cloud Computing</a>, "<a href="http://en.wikipedia.org/wiki/Software_as_a_service" target="_blank">SaaS</a>" and even <a href="http://en.wikipedia.org/wiki/Storage_as_a_service" target="_blank">Storage as a Service (STaaS)</a>, "<a href="http://www.cloud-vs-saas.info/" target="_blank">Cloud Computing vs SaaS</a>", "<a href="http://blogs.boomi.com/bod/2009/03/demystifying-saas-vs-cloud.html" target="_blank">Demystifying SaaS vs. Cloud</a>", "<a href="http://www.accountingweb.com/topic/technology/cloud-computing-versus-software-service" target="_blank">Cloud Computing Versus Software as a Service</a>", "<a href="http://www.cloudtrust.biz/article/cloud_vs_saas.html" target="_blank">Cloud vs SaaS</a>", "<a href="http://www.rackspace.com/knowledge_center/whitepaper/understanding-the-cloud-computing-stack-saas-paas-iaas" target="_blank">Understanding the Cloud Computing Stack: SaaS, PaaS, IaaS</a>"). For the sake of argument, in the context of EHR and image transfer or sharing or distribution or viewing, let us assume that the customer is using a pay-as-you-go (PAYG?) service, which is the issue discussed in the article.<br />
<br />Healthcare use cases have an additional quality and regulatory burden that is inflicted, for better or for worse. This creates the need for even more spending by the provider, beyond the R&D and Admin cited by Cohen. So the long-term viability question should perhaps be even more at the forefront of healthcare customers' minds. Not to mention wasteful certification (aargh!) spending, as well as the costs of integration and the cost/risk of migration at the end of a failed service-provider relationship. Cohen describes 75% annual retention, with the potential for complete customer turnover after 4 years; it would be interesting to see healthcare-specific numbers.<br />
<br />
Some vendors have been successfully offering SaaS in the PACS world for a while. This 2011 <a href="http://www.auntminnie.com/index.aspx?d=1&sec=sup_n&sub=pac&pag=dis&ItemID=97779" target="_blank">Aunt Minnie article</a> summarizes an <a href="http://in-medica.com/research-area/Medical_InMedica/Medical_Imaging_and_Healthcare_IT" target="_blank">InMedica</a> report. It would be interesting to see what the relative proportions are now, and whether the <span><span class="maBody">1% share in 2010 that was </span></span><span><span class="maBody">both storage and software hosted by a third party has grown since, and by how much.</span></span><br />
<br />
<span><span class="maBody">One question I might have for a potential service provider would be how diversified they are, and whether the SaaS offering is their only source of revenue. Diversity though, is no guarantee the provider would not kill off an unprofitable business line, of course. How many large PACS vendors regularly completely change their architecture or even their entire product line and end the lives of their customers' installations, whether a capital acquisition or service was involved?</span></span><br />
<br />
<span><span class="maBody">Sophisticated customers and vendors probably have stock questions and responses in this respect. I do wonder how often the small, inexperienced customer gets sucked in by a "0% APR for an introductory period" pitch, and potentially puts their data at risk in the face of impending penalties or loss of incentives if they don't "go electronic". Or how often the small and enthusiastic provider makes promises with the best of intentions but without the ability to follow through. Or perhaps, without the best of intentions, has a "someone big will buy us for our customer base before our burn rate catches up with us" exit strategy.</span></span><br />
<span><span class="maBody"><br /></span></span>
<span><span class="maBody">On the other hand, worst case, if your SaaS vendor goes under and takes all your data down with them, how bad could it actually be? No worse than an imaging center or clinic or hospital closing, and no longer being a reliable source of priors or historical records, maybe. Here is an interesting article about "<a href="http://library.ahima.org/xpedio/groups/public/documents/ahima/bok1_049257.hcsp?dDocName=bok1_049257" target="_blank">Protecting Patient Information after a Facility Closure</a>" that is worth a read, perhaps with respect to what you might want in a SaaS contract. At least you won't have to worry about migration though, if the data is completely lost.</span></span><br />
<span><span class="maBody"><br /></span></span>
<span><span class="maBody">I guess in the long term, as health care systems globally collapse under the weight of aging and sicker populations, it will merely be a matter of which races to the bottom faster, the non-viable SaaS providers or the non-viable health care providers. Their lost or inaccessible electronic records will probably be the least of our worries. It's a cloudy, gray and rainy day in the North-East today!</span></span><br />
<span><span class="maBody"><br /></span></span>
David<br />
<br />David Cluniehttp://www.blogger.com/profile/17331067317921452126noreply@blogger.com0tag:blogger.com,1999:blog-1367102802658603789.post-31628743029084165532013-07-25T08:42:00.000-07:002013-07-25T08:42:27.288-07:00MU Stage 3 Imaging CommentsSummary: Early this year comments were submitted for MU Stage 3, addressing viewing, downloading and transmitting images and radiation dose information.<br />
<br />
Long Version.<br />
<br />
I should have posted this back in January when I submitted my own comments, but better late than never.<br />
<br />
<br />
The HIT Policy Committee put out a <a href="http://www.regulations.gov/#!docketDetail;D=HHS-OS-2012-0007" target="_blank">Request for Comment Regarding the Stage 3 Definition of Meaningful Use of Electronic Health Records (EHRs), Docket ID: HHS-OS-2012-0007</a>, which was just that, an RFC, and not a proposed rule making. Within it, several issues were raised of relevance to the image sharing community, including the following that I considered it was important to comment on:<br />
<ul>
<li>Moving Stage 2 Menu Item to Core, regarding "imaging results consisting of the image itself and any explanation or other accompanying information are accessible through Certified EHR Technology"</li>
<li>With respect to View, Download and Transmit (VDT), a question was asked about exploring the readiness of vendors and the pros and cons of including certification for actual images, not just reports</li>
<li>With respect to View, Download and Transmit (VDT), a question was asked about exploring the readiness of vendors and the pros and cons of including certification for radiation dosing information from tests involving radiation exposure in a structured field so that patients can view the amount of radiation they have been exposed to</li>
</ul>
If you are interested in reading my comments, you can find them in the docket as <a href="http://www.regulations.gov/#!documentDetail;D=HHS-OS-2012-0007-0082" target="_blank">HHS-OS-2012-0007-0082</a>. I won't repeat them here, though I did just notice a typo (<a href="http://dclunie.blogspot.com/2013/07/display-it-now-i-command-you.html" target="_blank">IID</a> will be tested at the 2014 connectathon, not the 2015 connectathon).<br />
<br />
Other folks also made relevant comments, including MITA (<a href="http://www.regulations.gov/#!documentDetail;D=HHS-OS-2012-0007-0559" target="_blank">HHS-OS-2012-0007-0559</a>), DICOM (<a href="http://www.regulations.gov/#!documentDetail;D=HHS-OS-2012-0007-0575" target="_blank">HHS-OS-2012-0007-0575</a>), and the ACR ITIC (<a href="http://www.regulations.gov/#!documentDetail;D=HHS-OS-2012-0007-0571" target="_blank">HHS-OS-2012-0007-0571</a>).<br />
<br />
The government's site allows you to search the contents of the docket to find relevant comments.<br />
<br />
For example, if you search on the word "DICOM", you will find in addition to the aforementioned, a bunch more from various vendors and facilities, some of which are generally supportive (e.g., <a href="http://www.regulations.gov/#!documentDetail;D=HHS-OS-2012-0007-0216" target="_blank">Aware</a>, <a href="http://www.regulations.gov/#!documentDetail;D=HHS-OS-2012-0007-0010" target="_blank">Green Leaves</a>, <a href="http://www.regulations.gov/#!documentDetail;D=HHS-OS-2012-0007-0411" target="_blank">ACR</a>, <a href="http://www.regulations.gov/#!documentDetail;D=HHS-OS-2012-0007-0541" target="_blank">Siemens</a>, <a href="http://www.regulations.gov/#!documentDetail;D=HHS-OS-2012-0007-0434" target="_blank">lifeIMAGE</a>, <a href="http://www.regulations.gov/#!documentDetail;D=HHS-OS-2012-0007-0330" target="_blank">AAO</a>, <a href="http://www.regulations.gov/#!documentDetail;D=HHS-OS-2012-0007-0409" target="_blank">ACC</a>), some less so (e.g., <a href="http://www.regulations.gov/#!documentDetail;D=HHS-OS-2012-0007-0496" target="_blank">Philips</a>, <a href="http://www.regulations.gov/#!documentDetail;D=HHS-OS-2012-0007-0569" target="_blank">Heart Rhythm Society</a>, <a href="http://www.regulations.gov/#!documentDetail;D=HHS-OS-2012-0007-0279" target="_blank">Boston Medical Center</a>, <a href="http://www.regulations.gov/#!documentDetail;D=HHS-OS-2012-0007-0203" target="_blank">AAFP</a>) and even some still completely opposed, for example, to providing images to patients (<a href="http://www.regulations.gov/#!documentDetail;D=HHS-OS-2012-0007-0102" target="_blank">Intuit Health</a>).<br />
<br />
Even the <a href="http://www.regulations.gov/#!documentDetail;D=HHS-OS-2012-0007-0318" target="_blank">EHRA</a> comments this time, though still expressing concern, were not relentlessly negative, confirming perhaps that the strategy of using a link and having the images supplied by a different type of system does indeed assuage the EHR vendors' concerns expressed last time around.<br />
<br />
One can dig deeper, e.g., by looking for all comments related to "image", though one gets a lot of spurious hits. One also finds individual facilities expressing concern, for example <a href="http://www.regulations.gov/#!documentDetail;D=HHS-OS-2012-0007-0323" target="_blank">Montefiore</a>, who are concerned about the need for integration with radiology practices, and that "interpretation of the image is not within the expertise of the orderer".<br />
<br />
There does seem though, to be a positive trend in the direction of including imaging more comprehensively and in a standard manner in Stage 3, though there is certainly a long way to go yet. Who knows who is listening, whether they have an open mind, and whether any proposed rule making will go as far as imaging-centric folks like me might hope (not to mention what standards, if any, might be required).<br />
<br />
David<br />
<br />David Cluniehttp://www.blogger.com/profile/17331067317921452126noreply@blogger.com0tag:blogger.com,1999:blog-1367102802658603789.post-22371581409859582832013-07-24T12:49:00.000-07:002013-07-24T12:49:04.241-07:00Display It! Now! I Command You!Summary: The new IHE Invoke Image Display (IID) Profile enables an EHR/EMR/PHR/RIS to command a PACS/VNA/Viewer to display one or more imaging studies, without being concerned about where those images live or what form the viewer takes.<br />
<br />
Long Version.<br />
<br />
One of the good things about Meaningful Use is that it has drawn attention to the View use case for images, all limitations with respect to Download and Transmit that I have <a href="http://dclunie.blogspot.com/2012/09/diagnostic-quality-is-vital-download.html" target="_blank">bemoaned before</a> aside. A similar use case is important to the <a href="http://www.pacsgroup.org.uk/cgi-bin/forum/show.cgi?195/70429" target="_blank">UK Imaging Informatics community</a>, and no doubt everywhere else too.<br />
<br />
The ink is drying on the new <a href="http://www.ihe.net/Technical_Framework/upload/IHE_RAD_Suppl_IID.pdf" target="_blank">Invoke Image Display (IID) Profile</a> from the IHE Radiology Technical Committee, which is intended to help with this use case.<br />
<br />
Since I have to give a <a href="https://ihe.webex.com/ec0606l/eventcenter/enroll/join.do?confViewID=1251639481&theAction=detail&confId=1251639481&path=program_detail&siteurl=ihe" target="_blank">Webinar</a> on the subject next week, I thought I would discuss the general principles (you can find the <a href="http://www.dclunie.com/papers/IHE-RAD-IID-Webinar-Clunie_20130708.pdf" target="_blank">slides here</a>).<br />
<br />
IID works with a simple HTTP GET request and some parameters encoded in the URL. One system, like an EHR or EMR or PHR (or RIS or HIS or whatever the "non-image-aware" system is), can request that one or more studies, identified generically by id and date range or recency, or specifically by UID or accession number, etc., be displayed by another system (like a PACS, VNA, Workstation, Viewer, Image Portal (Staff or Patient), Proxy or Gateway or whatever).<br />
<br />
No questions asked. Just display it. No concerns about format. No SOAP. No XML. No REST. No arguments about capabilities. Just do what you are told. This approach appeals to the closet (?) autocrat in me.<br />
<br />
Examples of different requests:<br />
<br />
<pre>http://www.myhospital.org/IHEInvokeImageDisplay ?requestType=PATIENT &patientID=99998410^^^AcmeHospital &mostRecentResults=1
http://www.myhospital.org/IHEInvokeImageDisplay ?requestType=STUDY &accessionNumber=93649236
http://www.myhospital.org/IHEInvokeImageDisplay ?requestType=STUDY &studyUID=1.2.840.113883.19.110.4,1.2.840.113883.19.110.5 &viewerType=IHE_BIR &diagnosticQuality=true</pre>
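For what it's worth, such requests are also trivial to construct programmatically. A minimal Python sketch follows; the host name is the same placeholder used in the examples above, and note that parameter values must be percent-encoded on the wire, which the pretty-printed examples omit for readability:

```python
from urllib.parse import urlencode

# Hypothetical site-configured IID endpoint (placeholder host, as above).
BASE = "http://www.myhospital.org/IHEInvokeImageDisplay"

def iid_url(**params):
    """Build an IID request URL, percent-encoding the parameter values
    (e.g., the ^ delimiters in a patientID, or the comma separating
    multiple studyUIDs)."""
    return BASE + "?" + urlencode(params)

patient_request = iid_url(requestType="PATIENT",
                          patientID="99998410^^^AcmeHospital",
                          mostRecentResults=1)
study_request = iid_url(requestType="STUDY",
                        studyUID="1.2.840.113883.19.110.4,1.2.840.113883.19.110.5",
                        viewerType="IHE_BIR",
                        diagnosticQuality="true")
```

In the encoded form the carets in the patientID become %5E and the comma between studyUIDs becomes %2C; the receiving Image Display is expected to decode them before matching.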
The "viewer" (Image Display), however it is invoked, whether it be on/from a phone, tablet or desktop, within the user's web browser, zero footprint or not, thin or thick client, or even a separate workstation sitting beside the browser computer (e.g. a mammography workstation), has certain minimum responsibilities. They are summarized as interactive viewing. They include navigating within the requested studies (including changing studies and series, and scrolling between images and frames), manipulating the appearance of the displayed image (window, zoom and pan), control over diagnostic quality or not, and key images only or not. The full <a href="http://www.ihe.net/Technical_Framework/upload/IHE_RAD_Suppl_BIR.pdf" target="_blank">Basic Image Review Profile</a> is not required, but is a named type of viewer that may be requested and optionally supported.<br />
<br />
This approach raises the question of how the requester knows which server to call. The answer, in brief, is by configuration (and perhaps matching of report locations to pre-configured lists of servers, etc.). But this is an alternative to having n:m proprietary customizations and configurations of EHR to PACS, and it is an alternative to hardwired URLs (e.g., to proprietary or WADO references to images) that may go stale, and require a separate viewer. And if the approach is adopted then an additional standard endpoint discovery mechanism could be figured out.<br />
<br />
It also avoids questions of security (authentication, authorization and access control) by deferring these to whatever standard mechanisms can be deployed at a lower level that are appropriate for HTTP requests. So whether <a href="http://wiki.ihe.net/index.php?title=Cross-Enterprise_User_Assertion_%28XUA%29" target="_blank">SAML</a>, <a href="http://healthcaresecprivacy.blogspot.com/2013/06/internet-user-authorization-why-and.html" target="_blank">OAuth</a> or something else prevails, or if in the worst case the invoked display requires one to log in yet again (ugh), or is just pre-configured to trust the requester, this again is a matter for site configuration.<br />
<br />
There are other deployment questions that are important to consider, not the least of which is browser capability and permissions to install/execute JavaScript, Java, ActiveX, plug-ins, or whatever, assuming that the requester is even browser-based, and not a thick client or native app performing HTTP requests.<br />
<br />
Regardless, it is expected that the deployment burden is lower with this approach than with proprietary customizations of a combinatorial explosion of pairs of EHR and PACS.<br />
<br />
Thus, IID is one more standard "component" to use as a tool to bring to bear on the non-trivial problem of image distribution and sharing, particularly with loosely coupled non-integrated systems.<br />
<br />
Note also that IID is not confined to staff viewing use cases; there is no reason why the same mechanism can't be used for a patient portal that is not image enabled to request an imaging system to display images for a patient (non-trivial authentication, access control and provisioning issues having been addressed).<br />
<br />
It is also potentially useful for commanding behavior in a workflow managed environment, i.e., to use a workflow application to command a workstation to display something (that it has or knows where to get), rather than having a workstation pull a work list and have a user select from it.<br />
<br />
Historically, to give credit where it is due, the idea came from the IHE Cardiology group. They introduced it as a transaction in their <a href="http://www.ihe.net/Technical_Framework/upload/IHE_CARD_Suppl_IEO_Rev1-1_TI_2010-07-30.pdf" target="_blank">Image Enabled Office Profile</a>, and we have extended it and brought it out as a separate profile so that it may be more generally applicable (and Harry tells me he will update IEO retrospectively to account for our tweaks).<br />
<br />So, get coding ... it would be great to have a few IID implementations register for the <a href="http://www.iheusa.org/docs/IHE-flyer-2014_newdates_nobleed.pdf" target="_blank">IHE NA Connectathon</a> in snowy Chicago in January 2014 to work out the kinks. Maybe I will see you there, if I haven't quit IHE by then because of the Certification nonsense, which continues to spread like a cancer throughout the IHE organization.<br />
<br />
David<br />
<br />David Cluniehttp://www.blogger.com/profile/17331067317921452126noreply@blogger.com2tag:blogger.com,1999:blog-1367102802658603789.post-11158445884856979332013-07-09T07:09:00.004-07:002013-07-09T07:37:48.060-07:00Out of Body Experience - Anatomical Information in ImagesSummary: Anatomical information is sometimes hard to come by in images, but it's not as bad as you might expect. <br />
<br />
Long Version.<br />
<br />
Information about the anatomic region included in a set of images is useful for a number of obvious reasons.<br />
<br />
First and foremost, whether the user be an imaging specialist, a clinician who performs their own imaging, a referring practitioner who has requested imaging or is interested in procedures already performed, or a radiographer/technologist about to begin a new procedure, if one is browsing through a patient's record trying to find the "right" images(s) to answer some clinical question, anatomy, together with modality and approximate date are useful.<br />
<br />
A related use case, and one which is largely behind the scenes but impacts the quality of the user experience, is to pre-fetch images for any of the first set of use cases, and as we discussed last time, <a href="http://dclunie.blogspot.com/2013/07/pre-fetching-zombie-apocalypse-or.html" target="_blank">pre-fetching is back in vogue</a> for one reason or another.<br />
<br />
Hanging protocols are another application, particularly for longitudinal comparison of complex procedures that involve multiple parts, e.g. skeletal surveys.<br />
<br />
So where does the anatomical information come from, in terms of who populates it and in which data elements?<br />
<br />
In an ideal world, the anatomy would be implicit in a standard procedure code that was supplied in the request from the order entry system, which might be refined somewhat during the "protocolling" step in the RIS, then fed to the modality via the modality worklist, amended by the operator if they need to perform something other than what was requested, and then recorded in the images and the performed procedure step, and included in the reports. This procedure code, being standard, would have a standard mapping to its related concepts, i.e., what the general anatomic region was, and what the anatomic focus was.<br />
<br />
Though such standard procedure codes do exist, in SNOMED, LOINC and more recently in RadLex (which has recently been extended to include CR/DX and NM), they aren't widely used. Indeed, as far as I can tell, they aren't used at all yet. In over a decade of performing international multi-center cancer clinical trials in my last job at RadPharm/CoreLab Partners/BioClinica, I never saw a standard value in the Procedure Code Sequence data element of any image (with the occasional exception of US CPT-4 codes, which though arguably "standard" are billing not ordering codes). Most often there was nothing there, or sometimes illegal empty values or garbage dummy values. If anything was present, it was a private or local code.<br />
<br />
That said, there does seem to be reliable standard anatomic information in the image headers a large proportion of the time.<br />
<br />
The history of this begins with the <a href="ftp://medical.nema.org/medical/dicom/1992-1995/">original DICOM standard released in 1993</a>. Prior to that time, there were no data elements defined for describing the anatomy in the ACR-NEMA standards (of <a href="ftp://medical.nema.org/medical/dicom/1985/ACR-NEMA_300-1985.pdf">1985</a> and <a href="ftp://medical.nema.org/medical/dicom/1988/ACR-NEMA_300-1988.pdf">1988</a>). DICOM introduced the Body Part Examined data element at the Series level, primarily for use with projection radiography (CR at the time). The original list of defined terms was relatively short: ABDOMEN, ANKLE, BREAST, CHEST, CLAVICLE, COCCYX, CSPINE, ELBOW, EXTREMITY, FOOT, HAND, HIP, KNEE, LSPINE, PELVIS, SHOULDER, SKULL, SSPINE and TSPINE. Being defined terms, vendors (and users) are permitted to extend this list, as long as they don't duplicate the meaning of an existing term, and for example Fuji CR describes in its conformance statements also sending HEAD, NECK, LOW_EXM, UP_EXM and TEST.<br />
<br />
How did CR modalities obtain a value to populate this data element? Simple: they asked the operator. In the case of Fuji CR, the image processing and parameters applied to make an interpretable image are body part specific, and so the operator selection serves multiple purposes, applying the right processing and populating the DICOM data element. Over time, more general image processing algorithms have evolved that may not require anatomical information, but as X-Ray generators and tubes have become integrated, the body part specific selection of X-Ray technique factors provides another source of this information.<br />
<br />
The Digital X-Ray object, introduced in 1998, both to support digital detectors and to improve upon the CR object in DICOM, went one step further and "coded" the anatomy more formally. I.e., rather than using a single string value, a triplet of coding scheme (e.g., SRT for SNOMED), code value (e.g., T-04000) and code meaning (e.g., "Breast") was used in a data element called Anatomic Region Sequence. A list of SNOMED codes for useful anatomic regions was provided, longer this time, 73 if I have counted those listed in <a href="ftp://medical.nema.org/medical/dicom/final/sup32_ft.pdf">Supplement 32</a> correctly. Included was a mapping from the "older" Body Part Examined string values to the new SNOMED codes, the list of standard values having grown slightly in the interim.<br />
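The shape of that mapping is easy to illustrate. Here is a tiny Python subset, restricted to codes quoted elsewhere in this post; the real Supplement 32 table is much longer, and the function name is mine:

```python
# Illustrative subset of the Body Part Examined -> Anatomic Region Sequence
# mapping; only codes quoted in this post are included. The real table in
# the standard covers many more body parts.
BODY_PART_TO_CODE = {
    "CHEST":  ("T-D3000", "SRT", "Chest"),
    "BREAST": ("T-04000", "SRT", "Breast"),
}

def anatomic_region(body_part_examined):
    """Return the (code value, coding scheme designator, code meaning)
    triplet for a legacy Body Part Examined string, or None if unmapped."""
    return BODY_PART_TO_CODE.get(body_part_examined.strip().upper())
```

A receiver holding such a table can upgrade a legacy CR header into the coded form used by DX and later objects, at least for the standard string values.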
<br />
Some of these new codes remained at the same general level of specificity as the historical Body Part Examined values, e.g., (T-D3000, SRT, "Chest") and CHEST. Others were very specific and for particular uses of radiography, such as to support particular views (e.g., (T-61300, SRT, "Submandibular Gland") to describe submandibular sialograms); others were specialty-specific (i.e., support was added for not only general radiography, but also mammography and dentistry). As an aside, a much richer description of the projection or view was also added, including codes for eponymous views (such as (R-102AE, SRT, "Waters"), etc.). The approach used at the time was to go through the classic projection radiography textbooks, enumerate all documented techniques, describe their anatomy and other dimensions, and add data elements and coded values for each, and then iterate with radiologists and applications specialists to assure comprehensive coverage. Some implementers expressed skepticism about burdening the console/QC station/plate reader operator, but with education about the possibility of using integrated generator/gantry information to capture the data, and the need to orient the image correctly and document its orientation, progress was made. I used to preach about this in my <a href="http://www.dclunie.com/papers/DR_Clunie_20051202.pdf" target="_blank">RSNA Refresher Course on Digital Radiography</a>.<br />
<br />
Over the years, all subsequent new DICOM image objects have been defined to use Anatomic Region Sequence, but Body Part Examined remains popular, and has been retrofitted with standard string values for a broad range of purposes, and the list now contains 112 standard values (including, for example, GALLBLADDER and SUBMANDIBULAR). This has been done largely in recognition of the fact that the CR object has not gone away (despite the DX object being superior in every way, though I am not biased at all). Sadly, many PACS and viewers are still too dumb to handle coded triplets for display or switching. To be fair, if a PACS or viewer is going to allow the user or site to customize behavior based on some of these values, it is easier to develop a configuration user interface that allows them to enter plain text strings to match, rather than force them to think about codes or choose from a pre-populated drop down list of SNOMED codes (that may not be up to date).<br />
<br />
The list of body parts and anatomic region codes has been extended to cover the cross-sectional modalities too. In the early days, there was absolutely no indication of body part in CT and MR images. The standard described the use of Body Part Examined in the General Series module, so it was available, but you may recall that there was nowhere in the user interface on the console to enter it. There was no cutesy little homunculus to point and click to select the protocol, in which the anatomy was implicit. Before the days of modality worklist, there was no place to copy it from (not to say that anatomy is explicit in MWL either, but it can be derived from the Requested Procedure Code, or Scheduled Protocol Code Sequence, or nowadays the Protocol Context Sequence). Indeed, there were no standard protocols and one had to select (or type in) all the technique parameters individually every time. The best one could hope for was something meaningful in Study Description (more on that later).<br />
<br />
CT and MR operators nowadays have it pretty easy by comparison, and as vendors have made the user interface more automated and graphical and intelligent, more information has become available for re-use. Many contemporary CT and MR modalities are indeed populating Body Part Examined and/or Anatomic Region Sequence, using values derived from operator protocol selection (and in some cases IHE <span class="st"><i>Assisted Acquisition Protocol Setting</i></span>).<br />
<br />
Ultrasound is a tricky modality, being so operator-dependent in terms of positioning, as well as requiring discipline in terms of selecting from the user interface a description of each captured image. After an abortive attempt in the original DICOM standard to define encoding of ultrasound images, which included stuffing body part information into a value of the Image Type data element, a much cleaner Ultrasound IOD was quickly released, in <a href="ftp://medical.nema.org/medical/dicom/final/sup05_ft.pdf" target="_blank">Supplement 5</a>. It was one of the first to use the Anatomic Region Sequence with codes, as described earlier, thanks to the influence of Dean Bidgood. Unfortunately, it seems that very few, if any, ultrasound devices actually provide a means for the user to populate this attribute. Nor is Body Part Examined populated in ultrasound as far as I can tell.<br />
<br />
Which brings us back to the question of reality. What does one actually see in real world image objects received from various sites? Are these Body Part Examined and/or Anatomic Region Sequence being populated? Do they contain standard values or non-standard strings or codes? Even if they are populated, are they correct and reliable?<br />
<br />
The bottom line seems to be that in this day and age, for many modalities, they are often being populated, and if populated they are much more often using standard rather than non-standard values, and appear to be reliable when populated. This may be contrary to some peoples' beliefs or observations, but I can only report my own experience in this respect. As I mentioned before, in my former cancer clinical trials life, I had the opportunity to monitor images from literally thousands of sites around the globe, for most modalities (ultrasound being a major exception), from all vendors and vintages of machine. I can't report exact figures, but on several occasions in the past I examined what we were receiving to ascertain the feasibility of various efforts to improve the workflow of comparing successive time points, for both projection radiography and nuclear medicine bone scans as well as cross-sectional modalities.<br />
<br />
In general, for projection radiography with CR, Body Part Examined is populated with a standard value about 75% of the time, is empty or absent about 10%, and contains a non-standard value about 15% of the time. Spot checks on individual images showed that the value sent is rarely incorrect.<br />
<br />
This is surprisingly good for CR perhaps, which one might expect to be the least reliable, given the ease with which some vendors allow their sites to customize what can be put in there. If one inspects what non-standard customized values are being sent, they fall into several categories:<br />
<ul>
<li>local language equivalents, e.g., BASSIN rather than PELVIS, BRUSTKORB rather than CHEST</li>
<li>extensions that include the view too, e.g. CHEST_PA</li>
<li>reasonable values that we should probably add to the standard list, e.g., FOREARM</li>
<li>incorrectly spelled equivalents, e.g. "L SPINE" with a space or "L_SPINE" with an underscore, instead of the standard "LSPINE"</li>
<li>incorrectly capitalized equivalents, e.g., "Chest" instead of "CHEST" </li>
<li>literal copies (sometimes capitalized) of some procedure or billing code, e.g., "CHEST 1 VIEW" or "XR ACUTE ABDOMEN W/PA CXR"</li>
</ul>
Not infrequently, non-standard values are not only non-standard, they are illegal. The CS (Code String) value representation does not permit lowercase letters, most special characters, or accented characters, for example, and is limited in length to 16 bytes.<br />
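Since the CS constraints are so simple, it is easy to check candidate values mechanically. A minimal sketch (the pattern and function name are mine, not from any DICOM toolkit):<br />

```python
import re

# DICOM CS (Code String) VR: uppercase letters, digits, SPACE and
# underscore only, at most 16 characters; illustrative only.
CS_PATTERN = re.compile(r'[A-Z0-9 _]*')

def is_legal_cs(value):
    """Return True if value is a legal CS (Code String) value."""
    return len(value) <= 16 and CS_PATTERN.fullmatch(value) is not None

print(is_legal_cs('LSPINE'))     # standard value -> True
print(is_legal_cs('L SPINE'))    # non-standard but legal -> True
print(is_legal_cs('Chest'))      # lowercase -> False
print(is_legal_cs('XR ACUTE ABDOMEN W/PA CXR'))  # too long, and '/' is illegal -> False
```

Note that this catches the lowercase, accented and procedure-code-copy categories above, but of course cannot tell a legal non-standard value like "L SPINE" from a standard one; that needs a list of the standard defined terms.<br />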
<br />
I can see why non-English-speaking sites are tempted to replace all the codes with local language equivalents, since the literally encoded value may be displayed in some modality and PACS user interfaces, or at least in some configuration screens, such as for hanging protocols. But they really shouldn't, since the standard values are supposed to be used regardless of the locale, and the user interface should perform the translation. This is just a bad, though understandable, practice.<br />
<br />
One of the strengths of using Anatomic Region Sequence instead of Body Part Examined is that it is local-language independent: one can send, and recognize, the same code value regardless of the code meaning. I.e., one can send (T-D3000, SRT, "Chest") or (T-D3000, SRT, "Thorax") or (T-D3000, SRT, "<span class="short_text" id="result_box" lang="es"><span class="hps alt-edited">Tórax</span></span>") or (T-D3000, SRT, "<span class="short_text" id="result_box" lang="no"><span class="hps">Brystet</span></span>") or (T-D3000, SRT, "胸郭") and they all mean the same thing. The idea is that hanging protocols, routers, pre-fetchers or just ordinary human-readable browsers should recognize the code (T-D3000, SRT) and render to the user whatever is the locale-appropriate string. The code meaning encoded in the message is only there as a fallback in case the code is unrecognized (and indeed it used to be optional in DICOM when coded tuples were first introduced). Theoretically, at least; unfortunately, the lowest common denominator in localization of PACS and viewing applications is probably not up to substituting code meanings yet, probably as a result of users having higher priorities than localization (or their requirements not being taken seriously by the vendors).<br />
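The lookup such a user interface needs is trivial; here is a toy sketch, in which the locale table is entirely hypothetical (a real system would use a full terminology resource) and the code T-D9999 is made up for the fallback case:<br />

```python
# Toy locale table; illustrative only, not from any real terminology.
LOCALIZED_MEANINGS = {
    ('T-D3000', 'SRT'): {'en': 'Chest', 'de': 'Brustkorb', 'es': 'Tórax', 'fr': 'Thorax'},
}

def render_meaning(code_value, coding_scheme, encoded_meaning, locale='en'):
    """Prefer the locale-appropriate string for a recognized code,
    falling back to the code meaning encoded in the message."""
    entry = LOCALIZED_MEANINGS.get((code_value, coding_scheme))
    if entry and locale in entry:
        return entry[locale]
    return encoded_meaning

print(render_meaning('T-D3000', 'SRT', 'Thorax', 'es'))   # Tórax
print(render_meaning('T-D9999', 'SRT', 'Abdomen', 'es'))  # unrecognized code, falls back to Abdomen
```

The point is that the encoded meaning is only consulted on the last line; whatever string the sender happened to encode is otherwise irrelevant.<br />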
<br />
For cross-sectional modalities, given their history, I was expecting a lot worse than I actually observed. For CT, for example, about 60% of the time there is no value sent. No surprise there, but it could be much worse, and this is a sign of improvement. About 35% of the time there is a standard value, and about 5% of the time there is a non-standard value. For MR one sees values much less frequently; roughly 85% of the time there is no value, 10% a standard value, and 5% a non-standard value. For PET, though, neither Body Part Examined nor Anatomic Region Sequence is ever sent, which is pretty lame (how hard is it to send the code for "whole body" anyway?).<br />
<br />
Nuclear medicine is a mess. Like the ultrasound objects, the NM objects were revised early and redefined to include Anatomic Region Sequence. One standard value one sees fairly often is ("T-11000", "SRT", "Skeletal") for whole body bone scans, not surprising in an oncology practice. For historical reasons, the coding scheme may be "99SDM" or "SNM3" rather than "SRT", the price NM pays for being an early adopter of coded tuples. That said, one also sees a lot of private codes from one particular vendor, who sends "99NMG" for the coding scheme, and then sends codes that include not only the anatomy but also the view, which is the wrong thing to do since there is a separate coded data element for that.<br />
<br />
Interestingly, I do not see very many combined body parts showing up, apart from TLSPINE. This is probably a consequence of the fact that Body Part Examined is a Series-level attribute (and Anatomic Region Sequence is image-level); in other words, two different Series in a single Study may have different values for these attributes. This is important to account for if one wants to come up with a single anatomic descriptor of the entire procedure, so a system may need the ability to detect and combine these. DICOM defines a bunch of these combined parts, and adds more as they are conceived of (for example, I recently realized we don't have a good value for aortic arch plus carotids plus Circle of Willis for MRAs). There is a trivial example of how to do this using the available combinations defined in DICOM in com.pixelmed.anatproc.CombinedAnatomicConcepts in my <a href="http://www.pixelmed.com/#PixelMedJavaDICOMToolkit" target="_blank">PixelMed toolkit</a> if you are interested; i.e., one doesn't need the complete SNOMED ontology to recognize the relationships, only a tiny subset of it (more on that in a later blog post perhaps).<br />
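A toy equivalent of that combination logic, using a hand-picked handful of combined concepts (the real table in DICOM, and in CombinedAnatomicConcepts, is much longer), might look like:<br />

```python
# A tiny illustrative subset of the combined body parts DICOM defines.
COMBINED = {
    frozenset({'TSPINE', 'LSPINE'}): 'TLSPINE',
    frozenset({'CSPINE', 'TSPINE'}): 'CTSPINE',
    frozenset({'ABDOMEN', 'PELVIS'}): 'ABDOMENPELVIS',
}

def combine_body_parts(series_values):
    """Reduce the per-Series Body Part Examined values of a Study to a
    single descriptor where possible, combining pairs for which a
    combined concept is defined."""
    parts = list(dict.fromkeys(series_values))  # unique, order preserved
    changed = True
    while changed and len(parts) > 1:
        changed = False
        for i in range(len(parts)):
            for j in range(i + 1, len(parts)):
                combined = COMBINED.get(frozenset({parts[i], parts[j]}))
                if combined:
                    parts = [p for k, p in enumerate(parts) if k not in (i, j)]
                    parts.append(combined)
                    changed = True
                    break
            if changed:
                break
    return parts

print(combine_body_parts(['TSPINE', 'TSPINE', 'LSPINE']))  # ['TLSPINE']
```

Values for which no combination is defined are simply left as separate entries, which is about all one can do without a fuller ontology.<br />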
<br />
On the subject of tools as well as limited structured anatomical information, I cannot finish without mentioning Study Description and its ilk, Series Description and maybe even Protocol Name. Worse even than non-standard string values in Body Part Examined, these descriptive data elements can contain anything at all. Indeed that was their intent: to be a human-readable description, not something machine-recognized. Originally, the modality operator typed in free-text values, and often they still have that flexibility, or at least the ability to edit what is pre-populated by protocol selection. Sadly, since Study Description and Series Description are the most frequently populated data elements in practice, and incredibly useful for human browsing, it has become commonplace to try to match or parse their content to dictate downstream behavior, such as hanging protocol selection or matching.<br />
<br />
Anyhow, given a site-specific set of such description data element values, one can either parse them and try to find anatomic words or phrases, in order to be adaptable to local variations, or one can just do a straight match on the entire string. In order to better support some of my use cases, particularly extracting anatomy for radiation dose extraction projects, I spent a while working on the description-parsing problem, with some success. You can find in the com.pixelmed.anatproc package a bunch of attempts to do this, both for cross-sectional and projection radiography, as well as for multiple languages. By comparison, you might want to look at the <a href="http://www.radmapps.com/" target="_blank">RadMapps</a> approach, which just does a straight full-string mapping; this requires one to build a mapping once for any site's list of descriptions, and then maintain it as they evolve. This is the approach being used for the <a href="https://nrdr.acr.org/Portal/DIR/Main/page.aspx" target="_blank">ACR's Dose Index Registry</a>, for example, where they only have to cover a small subset of all possible procedures. In these approaches, there is some blurring between purely anatomical information and other interesting things one might want to extract, like why the procedure is being performed or the particular manner in which it is being performed (such as being a CT angiogram, or being thin slice, etc.), but the anatomy is a key part of the process. For some use cases it may not even be necessary to extract the anatomy separately, since the goal may be to map to a particular standard procedure code.<br />
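For a flavor of the parsing approach, here is a toy sketch; the keyword table is illustrative only, with nowhere near the coverage of the com.pixelmed.anatproc dictionaries:<br />

```python
# Toy multi-language keyword table; illustrative only.
ANATOMY_KEYWORDS = {
    'CHEST': ['chest', 'thorax', 'torax', 'lung'],
    'HEAD': ['head', 'brain', 'skull'],
    'ABDOMEN': ['abdomen', 'abdo', 'liver'],
    'PELVIS': ['pelvis', 'bassin'],
}

def extract_anatomy(description):
    """Return the standard body parts whose keywords appear in a
    Study/Series Description, in order of first appearance."""
    text = description.lower()
    hits = []
    for body_part, words in ANATOMY_KEYWORDS.items():
        positions = [text.find(w) for w in words if w in text]
        if positions:
            hits.append((min(positions), body_part))
    return [part for _, part in sorted(hits)]

print(extract_anatomy('CT CHEST/ABDO/PELVIS W CONTRAST'))  # ['CHEST', 'ABDOMEN', 'PELVIS']
print(extract_anatomy('IRM BASSIN'))                       # ['PELVIS']
```

The full-string mapping alternative would replace all of this with a single dict lookup from the complete description to a code, at the cost of having to enumerate and maintain every description a site uses.<br />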
<br />
Indeed, one might suspect that the primary reason for the popularity of VNAs and the dreaded "<a href="http://en.wikipedia.org/wiki/Vendor_Neutral_Archive#Dynamic_Tag_Morphing" target="_blank">dynamic tag morphing</a>" is to deal with the impedance mismatch between the way different vendors and sites have their modalities populating Study and Series Description and the limited configurability of some PACS hanging protocols that depend on these. Of course, I hate to say it, but the "dynamic tag morpher" is probably a good tool to do the extraction or matching of descriptive attributes to populate structured attributes with standard codes for procedures and anatomy, if it has the sophistication required; i.e., use it not just to "clean up" descriptive attributes, but to augment the header with codes extracted from them. Better of course would be to get it right "first", i.e., off the modality or fixed during ingestion, and for everyone to use the same standard codes as the interoperable set, rather than have to "dynamically" coerce the values to match the varied expectations of the recipients.<br />
<br />
The bottom line is that reliable anatomical information is almost certainly available somewhere, if you are willing to go to the trouble of extracting it. In decreasing order of desirability, increasing order of difficulty, and increasing order of likelihood of availability, it may be found:<br />
<ul>
<li>implicit in a standard Procedure Code Sequence value, supplied by the worklist and encoded in the header</li>
<li>in a standard Body Part Examined value or Anatomic Region Sequence code, extracted from the worklist procedure code, from an automatically or operator-selected protocol, or from an operator-selected dropdown</li>
<li>extracted by matching or parsing the Study and/or Series Description or Protocol Name data element string values </li>
</ul>
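That fallback order can be sketched as a simple cascade; in this illustrative fragment the dataset is just a dict standing in for a parsed DICOM header, and the keys merely mirror the DICOM attribute keywords:<br />

```python
def best_anatomy(dataset, parse_description):
    """Try the sources of anatomical information in decreasing order of
    desirability, returning the first that yields anything; the
    parse_description callable stands in for a description parser."""
    procedure_code = dataset.get('ProcedureCodeSequence')
    if procedure_code:
        return ('procedure code', procedure_code)
    region = dataset.get('AnatomicRegionSequence') or dataset.get('BodyPartExamined')
    if region:
        return ('body part / anatomic region', region)
    for key in ('StudyDescription', 'SeriesDescription', 'ProtocolName'):
        parsed = parse_description(dataset.get(key, ''))
        if parsed:
            return ('parsed from ' + key, parsed)
    return ('unknown', None)

print(best_anatomy({'BodyPartExamined': 'CHEST'}, lambda s: []))
# ('body part / anatomic region', 'CHEST')
```

In a real system each tier would also need the Series-level versus image-level and combined-body-part handling discussed above, but the shape of the cascade is the point here.<br />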
David <br />
<br />
PS. Before someone asks: in DICOM, laterality is conveyed separately, encoded in either Laterality or Image Laterality (or in some cases Frame Laterality), and not pre-coordinated with (built into) the Anatomic Region Sequence or Body Part Examined. The opposite is true for Procedure Code Sequence, which has no separate laterality modifier, and for which laterality needs to be pre-coordinated.<br />
<br />David Clunie<br />
<br />
2013-07-06: Pre-Fetching: Zombie Apocalypse or Nirvana?<br />
<br />
Summary: Pre-fetching is back, driven by sluggish access to cloud-based archives and the need for a "local cache".<br />
<br />
Long Version.<br />
<br />
Like characters in a bad horror movie, or an eighties band, pre-fetching is back, resurrected from the dead (if it ever was truly dead).<br />
<br />
For a while, with the concept of "all images spinning all the time for all users", we thought we were on a roll in terms of on-demand access; assuming all those images were spinning "locally", that is. Tape and optical disk were going the way of the dodo, and we no longer had to listen to StorageTek marketing presentations about hierarchical storage masquerading as scientific abstracts at SPIE and SCAR (SIIM). At worst, one could approach image egalitarianism, i.e., all image access equally fast or slow for everyone, if one also made available equal bandwidth.<br />
<br />
Not so, it would seem.<br />
<br />
When the HIPAA Security Rule required everyone in the US to have a means of disaster recovery, and reliable off-site archives came into vogue, it was not expected that these archives would necessarily have on-demand access performance, though it created an obvious opportunity for off-site access. Likewise with the DI-r's in Canada. But nowadays the distinction between the off-site archive and the only archive you have is becoming blurred, as everyone jumps on the "cloud" (aka Software as a Service (<a href="http://en.wikipedia.org/wiki/Software_as_a_service">SaaS</a>), or Storage as a Service (<a href="http://en.wikipedia.org/wiki/Storage_as_a_service">STaaS</a>), formerly Application Service Provider (<a href="http://en.wikipedia.org/wiki/Application_service_provider">ASP</a>)) bandwagon, based on the naive assumption that if it is good for streaming movies on your smart phone or tablet, the "cloud" must be good for everything else too.<br />
<br />
The aggressive marketing of the Vendor Neutral Archive (<a href="http://en.wikipedia.org/wiki/Vendor_Neutral_Archive">VNA</a>) concept, often implemented as, or confounded with, cloud storage, has resulted in the introduction of another "layer" between the PACS user and where the images are, in some cases.<br />
<br />
Some disks and arrays and their interfaces are also cheaper, and potentially slower, than others, so even in the absence of awful media like tape and optical disk, the concept of different "tiers" of storage performance (in terms of either access or, in some cases, reliability) has not gone away either. Obsession with regulatory and legal issues has led many people to initially purchase far more expensive storage than is perhaps the minimum necessary to do the "caring for the patient" part of the job, and left a nasty (expensive) taste in some customers' mouths. Regardless, it is hard to argue with the economies of scale a provider like Amazon might be able to obtain (as long as it isn't branded "medical", aka unnecessarily regulated, excessively expensive, and ripe for profit taking).<br />
<br />
Anyhow, the buzzword <i>du jour</i>, much bandied about at the last SIIM, was "local cache". I.e., the images that you can access in reasonable time because they live on site and are optimized for performance, and perhaps are already "inside" your PACS and don't need to be retrieved from some other person's product (like a VNA). As opposed to those that are not, for which access performance may suck. Even if you don't have a PACS per se, or access images through it, but perhaps use a (buzzword alert) "universal viewer", the performance difference between images cached in a local server rather than pulled from off-site on demand may be "noticeable", to put it mildly.<br />
<br />
I was interested in a comment from someone (can't remember who it was, or what system or architecture they were using), who reported that a colleague genuinely thought that the "A" flag in their study browser stood for "Absent". Apparently it really stands for "Archived", but they drew their own conclusion based on their experience. [Update: Skip Kennedy claims responsibility for telling me this :)]<br />
<br />
So, whether you want one or not, it sounds like a "local cache" is in your future, if you don't already have one, whether it be for radiologists' priors or for other users' access to contemporary or older procedures.<br />
<br />
How do images get into such a cache in the first place? If the cache is the PACS, the obvious way is to keep the recent stuff, i.e., stuff that was recently acquired, or imported from CD or received from outside for contemporary patient care events (even if they are in the ED or the clinic and have nothing to do with radiology, i.e., are not read again). If the cache is not the PACS, but some pseudo-pod of the off-site archiving allowed to extrude into your local area network (i.e., the on-site box bit of the off-site archiving solution), then likewise, anything recent can be routed to it. But the PACS or local box may fill up, and hence a purging strategy is required (assuming failure and buying more disks are not options, which this discussion presupposes). Not every PACS can do this but let's assume it can. It might even do so intelligently (e.g., purge dead people (assuming Haley Joel Osment doesn't take up radiology), adults, acute not chronic conditions, etc.), but that is a digression.<br />
<br />
Sooner or later the priors that are potentially useful for new procedures or for clinical care will be purged and access will be slow or non-existent. Enter the pre-fetcher, which tries to bring some intelligence to bear (<a href="http://grcpublishing.grc.nasa.gov/WordOfWeekArchive/week52.cfm">?bare</a>) on the problem of what to fetch back and when, and hopefully do it in time. The literature from the 1990's and early 2000's is replete with articles about this (just search the SCAR/SIIM, CARS, SPIE Medical Imaging conference proceedings, journals like JDI and even RadioGraphics, as well as text books like Bernie Huang's). If you are interested, a couple of classics are <a href="http://dx.doi.org/10.1117/12.19036">Levin and Fielding from SPIE MI 1990</a>, <a href="http://www.ncbi.nlm.nih.gov/pubmed/9608932">Siegel and Reiner JDI 1998</a>, <a href="http://www.ncbi.nlm.nih.gov/pubmed/10847367">Andriole et al JDI 2000</a>, <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC131032/">Bui et al in JAMIA 2001</a>, and the work of <a href="http://faculty.utah.edu/u0358028-Olivia_Sheng/bibliography/index.hml">Olivia Sheng</a>'s group and <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3611617/">Okura et al JDI 2002</a> on artificial intelligence methods. Approaches range from the simple expedient of the age of the study, through using the modality, the body part or the clinical question. The relevance of the body part in particular will be discussed in a follow up post here, and was my motivation for addressing this topic in the first place.<br />
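To make that range of approaches concrete, here is a toy rule-based relevance scorer; the weights and attributes are entirely illustrative, not drawn from any of the papers cited above:<br />

```python
from datetime import date

def prior_relevance(prior, current, today=None):
    """Toy score for whether a prior study is worth pre-fetching ahead
    of a current procedure; the weights are illustrative only."""
    today = today or date.today()
    score = 0.0
    if prior['modality'] == current['modality']:
        score += 2.0
    if prior['body_part'] == current['body_part']:
        score += 3.0
    years_old = (today - prior['study_date']).days / 365.25
    score += max(0.0, 2.0 - years_old)  # recency bonus decays over ~2 years
    return score

current = {'modality': 'CT', 'body_part': 'CHEST'}
priors = [
    {'modality': 'CT', 'body_part': 'CHEST', 'study_date': date(2013, 1, 15)},
    {'modality': 'MR', 'body_part': 'HEAD',  'study_date': date(2008, 6, 1)},
]
ranked = sorted(priors, key=lambda p: prior_relevance(p, current, date(2013, 7, 6)), reverse=True)
print(ranked[0]['body_part'])  # CHEST; the recent same-modality, same-body-part prior wins
```

The AI methods in the literature replace these hand-tuned weights with learned ones, and add inputs like the clinical question, but the shape of the problem, rank the candidate priors and fetch the top few within the cache and bandwidth budget, is the same.<br />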
<br />
One of the important things to bear in mind is that pre-fetching is relevant not just for radiologists' priors before reporting the current procedure. It is also important for the clinicians, who may well be interested to know, even outside the context of a current radiological procedure, that other procedures have been performed in the past, whether locally or at other facilities, and who want to access them without delay at the time of patient consultation, surgery or some other intervention. Figuring out what is relevant for a clinician may be considerably more complicated (to optimize) in some of these scenarios than finding priors for radiology reporting, and some of the systems in these users' offices may be much less robust. In particular, local cache sizes and bandwidth may be relatively low, so not only is fast on-demand access for large studies like whole body CTs, PETs and breast tomosynthesis challenging, but excessive pre-fetching of all images for every scheduled patient encounter may overwhelm resources, and hence needs to be selective and optimized. An interesting twist to this pre-fetching scenario is that there may be no RIS involved, and hence no access to certain events and information; on the other hand, the report will likely have been completed and more information may be available from the <a href="http://ed-informatics.org/healthcare-it-in-a-nutshell-2/emr-vs-ehr-vs-phr/">EMR/EHR/PHR</a>.<br />
<br />
Another SIIM theme this year, the decomposition of traditional PACS into its various component parts, archive, display and workflow, for example, seems to be well under way, with new hardware and software technology being brought to bear on classical problems, or having to leverage classical solutions. Hopefully lessons learned in the 1990's will be effectively reapplied, rather than needing to be reinvented. New factors, such as the ability to pre-fetch from central repositories and other facilities, will add interesting challenges, or opportunities if you choose to look at them that way. Likewise, the PACS migration problem potentially overlaps with pre-fetching when the decision is made to migrate patients or studies only on anticipated need, rather than all in advance.<br />
<br />
Don't forget though, that "A" should be for "Accessible" not "Absent", and whether it is "Archived" or not should be irrelevant to the users' experience.<br />
<br />
It is good to know that accessibility Nirvana (the <a href="http://en.wikipedia.org/wiki/Nirvana">goal</a>, not the <a href="https://en.wikipedia.org/wiki/Nirvana_%28band%29">band</a>) is just around the corner, once again.<br />
<br />
David<br />
<br />
PS. And yes, before you comment about it, I know about "server side rendering", and about Citrix, and why sometimes the images don't have to live locally, if these mechanisms float your boat.<br />
<br />
PPS. Just for clarity, I am obviously not talking about the use of the term "cache" in the HTTP protocol sense, by which means, as Jim Philbin regularly reminds us, non-specific "stuff" that has not changed in its content can be served up closer to where it is needed by various caching proxies using technology that has nothing to do with medical imaging applications. This is one of the major justifications for the WADO-RS DICOM stuff that grew out of the MINT project. Though, of course, if it hasn't been pre-fetched, it won't have been seen by the HTTP caches recently either, and even if it has been pre-fetched, it still might not be cached in the intervening proxies on the way to the user.<br />
<br />David Clunie<br />
<br />
2013-07-03: My PACS has fallen down, and I can't get it upgraded<br />
<br />
Summary: Many people have PACS that are not the latest version, and hence cannot use new features; new features are not added to old PACS versions.<br />
<br />
Long Version.<br />
<br />
In my travels preparing for the Breast Tomo forum that Rita Zuley and I hosted at SIIM (<a href="http://www.siim2013.org/digital_breast_tomosynthesis_ed_forum.shtml" target="_blank">Digital Breast Tomosynthesis & the Informatics Infra-Structure: How DBT Will Kill Your PACS/VNA</a>), I was surprised to discover that the key question was not just "Does your PACS vendor support the DICOM Breast Tomosynthesis SOP Class?", as one might have expected, or even "Do you have the bandwidth/storage/memory/display hardware to handle the large data volume?".<br />
<br />
Rather, it was "Do you even have the current version of your PACS?"<br />
<br />
This rather surprised me initially, but made sense when I thought of some of the barriers to upgrading, like the need for a fork-lift in some cases (or, more seriously, the cost of the necessary server-side hardware). The site that initially exposed me to this dilemma has a problem that may be slightly unusual: extensive customization of additional services added on to a much older version of the PACS, which they cannot do without.<br />
<br />
To try to get a better handle on how widespread this problem was, I did a little survey on a couple of forums, like <a href="http://health.groups.yahoo.com/group/pacs_admin/" target="_blank">pacsadmin</a> and <a href="https://groups.google.com/forum/#!forum/comp.protocols.dicom">comp.protocols.dicom</a><span id="goog_303938106">. The response wasn't great, and in retrospect I should probably not have chosen returning an Acrobat form by email as the survey mechanism, but the online survey tools I checked out first had some limitations too.</span><br />
<span id="goog_303938106"><br />Anyhow, since I promised to share the survey results, and did at SIIM, here goes. I ultimately got 23 responses.</span><br />
<span id="goog_303938106"><br /></span>
<span id="goog_303938106">Systems were from</span><br />
<ul>
<li><span id="goog_303938106">different countries (18 US, 2 Canada, 2 Europe, 1 Asia),</span></li>
<li><span id="goog_303938106">various settings (13 metropolitan, 2 rural, 8 mixed),</span></li>
<li><span id="goog_303938106">various scales (5 multi-enterprise, 10 enterprise, 4 multi-departmental, 3 departmental and 1 sub-departmental) and</span></li>
<li><span id="goog_303938106">multiple vendors (2 Agfa, 2 DR, 3 Fuji, 6 GE, 2 InteleRad, 2 McKesson, 2 Merge, 1 Philips, 2 Sectra, 1 Siemens).</span></li>
</ul>
<span id="goog_303938106">Only 5 (22%) reported that they had the current (i.e., latest) version of their PACS in use, but 14 (61%) did say that they planned to deploy the current version within 3 months to 1 year (2 in 3 months, 4 more in 6 months, 8 more within 1 year).</span><br />
<br />
<span id="goog_303938106">The structured capture of reasons for not having the latest included:</span><br />
<ul>
<li><span id="goog_303938106">cost (5)</span></li>
<li><span id="goog_303938106">resources for deployment (1)</span></li>
<li><span id="goog_303938106"><span id="goog_303938106">resources for </span>validation (4)</span></li>
<li><span id="goog_303938106">Meaningful Use distraction (3)</span></li>
<li><span id="goog_303938106">custom RIS interface (1)</span></li>
<li><span id="goog_303938106">custom reporting/speech interface (0)</span><span id="goog_303938106"> </span></li>
<li><span id="goog_303938106">custom data mining interface (0)</span></li>
<li><span id="goog_303938106"><span id="goog_303938106">custom other interface (0)</span> </span></li>
<li><span id="goog_303938106">awaiting vendor change (2)</span></li>
<li><span id="goog_303938106">awaiting VNA (0)</span></li>
<li><span id="goog_303938106">other reasons (13) </span></li>
</ul>
<span id="goog_303938106">Some of the other reasons for delaying that were described in text comments (and which overlapped with some of the structured questions) included the need for validation and user feedback, new features not being "significant enough" (so waiting for next version), server hardware replacement being needed, completing an interim version that needs to be installed first, awaiting a possible vendor change, or the practice of waiting for a while until a release has been generally available (presumably to see what problems it has).</span><br />
<br />
<span id="goog_303938106">The remainder said they were not going to deploy the current version either for more than 2 years (2 sites) or ever (2 sites). Reasons cited were that the PACS was externally managed & the supplier refuses, or it already "works" so no need for it.</span><br />
<br />
<span id="goog_303938106">In terms of what they were missing out on by not upgrading:</span><br />
<ul>
<li><span id="goog_303938106">media export (2), import (2)</span></li>
<li><span id="goog_303938106">key images (1)</span></li>
<li><span id="goog_303938106">annotations (3)</span></li>
<li><span id="goog_303938106">3D (4), fusion (4)</span></li>
<li><span id="goog_303938106">DCE (4), breast DCE (3)</span></li>
<li><span id="goog_303938106">IHE Mammo Profile (3)</span></li>
<li><span id="goog_303938106">Breast tomo (3)</span></li>
<li><span id="goog_303938106">JPEG 2000 (4)</span></li>
<li><span id="goog_303938106">WADO (2)</span><span id="goog_303938106">, XDS-I.b (2)</span></li>
</ul>
<span id="goog_303938106">Other stuff mentioned as missing was remote caching, life cycle management and auto-deletion, increased exam capacity, reasonable performance (!), and some new SOP Classes (unspecified).</span><br />
<span id="goog_303938106"><br /></span>
<span id="goog_303938106"></span><br />
<span id="goog_303938106">Note that the survey did not include the initial site that prompted my interest, which has too much customized stuff that depends on an obsolete version, and which was certainly missing out on Mammo tomo.</span><br />
<br />
<span id="goog_303938106">This was not a very scientific survey, and the respondents may well have been biased by the context in which the questions were asked, and selectively been more likely to respond if they had an older PACS version perhaps.</span><br />
<br />
<span id="goog_303938106">The information that Julian Marshall from Hologic presented at the same forum also suggested that there was poor uptake of new SOP Classes (and sufficient hardware performance) to cope with breast tomo.</span><br />
<span id="goog_303938106"><br /></span>
<span id="goog_303938106">Hopefully SIIM will post the slides and transcript on their web site soon, but in the interim, here are <a href="http://www.dclunie.com/papers/SIIM2013BreastTomoForum_Clunie.pdf">my slides from the forum</a>, and if you need any images to kill your lame old PACS with, try these <a href="http://www.dclunie.com/pixelmedimagearchive/upmcdigitalmammotomocollection/index.html">tomo ones</a>. If you have any of your own to contribute, let me know and I will provide a place to share them.</span><br />
<span id="goog_303938106"><br /></span>
<span id="goog_303938106">David</span><br />
<span id="goog_303938106"><br /></span>
<span id="goog_303938106">PS. Interestingly nobody mentioned that a reason was that their PACS vendor had failed and gone out of business, which I guess is a good thing :) Or even mentioned that they had been acquired by another vendor, which is interesting too. Too small a sample, methinks.</span><br />
<br />
<span id="goog_303938106">PPS. Here is a link to the <a href="http://www.dclunie.com/surveys/PACSUpdateStatusSurvey_2013_distributed.pdf">survey form used</a>, in case you are interested, or want to complete it yourself; I will continue collating results. </span><br />
<span id="goog_303938106"><br /></span>
<br />
David Clunie<br />