Sunday, June 3, 2007

On the lack of DICOM Police, the example of IHE content profiles, and the need for usability standards and cross-certification ...

Summary: Neither DICOM nor IHE may be sufficient to solve users' real-world problems with the usability of imaging devices; neither a hypothetical DICOM police nor the existing IHE Connectathon process would solve this problem; and there may be a need for a new type of "usability" standard and certification process, perhaps even to the extent of cross-certification of combinations of devices.

Long version:

As everyone is fond of saying, there are no "DICOM police".

NEMA, for example, specifically disclaims responsibility for policing or enforcing compliance to the DICOM standard. There is, for example, no DICOM "certification".

Nor is there an "IHE police", nor, for the time being, IHE "certification".

Some folks are under the mistaken impression that successful participation at an IHE Connectathon represents some sort of certification, but what is tested at IHE is not necessarily a product and may be a prototype, and often is not representative of what you can go out and buy, now or ever. Furthermore, the IHE tests are limited in scope and depth, not only by the bounds of the "profiles" being tested but also by the rigor of the tests themselves. For example, though vendors may demonstrate transfer of images within a specified workflow with the correct identifiers during the Connectathon, whether those images will be usable in any meaningful fashion by the receiver is not tested. These issues may be addressed over time as the IHE testing approach matures and is revised, and as more "content" profiles like NM and Mammo are developed and tested. The Connectathon is a fantastic cooperative effort and an enormous investment of time and resources that results in considerable progress, but the fact remains that products are not certified during this process.

Hence the publicly posted "Connectathon Results" are only a guide to what vendors might or might not choose to make available as product, and one is left to rely on so-called "self-certification" by the vendors. Vendors dutifully provide DICOM Conformance Statements and IHE Integration Statements, which guide users with respect to what features are supposed to be available and outline what a product is supposed to do, but it seems that, not infrequently, products remain deficient in some small or significant way, either with respect to what is claimed or even to correct implementation of the underlying standard.

Who then, will police the compliance of the vendor in this respect? Currently, this is left to the users, or the experts with whom they consult. The vendors mostly appear to act in good faith, but when problems arise some are none too swift to acknowledge that they are at fault or to provide a solution.

But even if there were a DICOM (or IHE) police, would it actually help the users?

Take for example the matter of compliance with the standard with respect to the encoding of images for a particular modality, say projection X-ray using the DICOM DX image object. Consider a frontal chest X-ray, which, depending on whether it is taken AP or PA, might, from the perspective of the pixels read out from the detector, have either the left or the right side of the patient orientated towards the right side of the image. Now, the DICOM standard does NOT say that the transmitted image must be oriented in any particular manner; rather, it says that the orientation of the rows and columns must be sent. In this case the row orientation might be sent as towards the patient's left, meaning that the pixel data, if rendered that way, would look the way (most) radiologists would expect, or it might be sent as towards the patient's right, meaning that the receiver could use this orientation to flip the image into the expected orientation.

And therein lies the rub, since no standard, DICOM or IHE, currently requires that the receiver flip the image into the "desired" orientation for display based on the encoded orientation parameters. So a completely DICOM (and IHE) compliant storage SCU (Acquisition Modality actor) could encode an image in one orientation, and a DICOM (and IHE) compliant storage SCP (Image Display actor) could display it, and the user would still be unsatisfied and have to manually flip the image. No DICOM (or IHE) police or certification or anything else would be able to solve this problem for the user, beyond explaining it.
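To make the gap concrete, here is a minimal sketch of what a receiver could do, but is not required by any standard to do, with the encoded orientation. It assumes the pydicom and numpy Python libraries and a hypothetical file name, and it handles only an upright frontal projection (obliques and rotated acquisitions are ignored); Patient Orientation (0020,0020) carries the row and column directions as anatomical codes (L, R, A, P, H, F).

    # A minimal sketch (not anyone's actual product): normalize a frontal
    # chest X-ray so that rows run towards the patient's left and columns
    # towards the feet, based on Patient Orientation (0020,0020).
    import numpy as np
    import pydicom

    def normalize_frontal(ds):
        """Flip pixels so rows run to the patient's left, columns to the feet."""
        pixels = ds.pixel_array
        row_dir, col_dir = ds.PatientOrientation  # e.g. ['R', 'F'] needs a flip
        if row_dir.startswith('R'):     # rows run towards the patient's right ...
            pixels = np.fliplr(pixels)  # ... so mirror horizontally
        if col_dir.startswith('H'):     # columns run towards the head ...
            pixels = np.flipud(pixels)  # ... so mirror vertically
        return pixels

    ds = pydicom.dcmread('chest_dx.dcm')  # hypothetical file name
    display_pixels = normalize_frontal(ds)

The point is not the half-dozen lines of code, but that nothing obliges the Image Display actor to execute them.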

Conversely, if the modality were not to send the orientation at all, violating the DICOM standard in this respect, but the pixels happened to be oriented correctly, the user experience would be satisfactory, and no problem would be perceived (except perhaps for the absence of an orientation marker to indicate the side). Indeed, this would typically be the case for devices that use the older CR image object in DICOM, which allows the orientation to be empty, ostensibly on the grounds that sometimes it won't be known (e.g., there is a plate reader but no means for the operator to enter this information on the QC workstation, if there is one).

The acquisition modality vendors may solve this problem by making the sending device configurable in such a manner as to "flip" the images as necessary to give the expected result at the other end, either automatically or with the assistance of the operator, but the fact remains that this sort of configurability is not required by the standards.
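On the receiving side, about the best a display can do with a legally empty orientation is to render the pixels as sent; continuing the hypothetical sketch above:

    # Defensive handling for CR objects whose Patient Orientation is
    # legally empty: there is nothing to normalize against, so display
    # the pixels as sent (and perhaps warn that the side is unverified).
    def render_pixels(ds):
        orientation = ds.get('PatientOrientation')
        if not orientation:           # absent or zero-length, as CR permits
            return ds.pixel_array     # display as sent
        return normalize_frontal(ds)  # otherwise flip as sketched above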

Another example would be the matter of display shutters, used for instance to blank out the perimeter around a circular or rectangular angiography or RF acquisition, so that it remains black regardless of whether the image is inverted or not. The DICOM standard defines how their existence is encoded within an image, but does not mandate their application by the display (unlike in a presentation state). I was recently reminded of this when there was a compatibility issue between one vendor's acquisition device and another's PACS. The modality was sending a display shutter and the PACS was ignoring it, and the resulting white background was unacceptable to the user. A modality vendor would typically provide a configuration option to burn in the background as black in this case (resulting in white when inverted, but you can't configure around everything) in order to handle the lame PACS, but this particular modality did not have that feature. The PACS vendor had, I am told, only just released display shutter capability in a new and expensive release, so the user was essentially out of luck. Again, there would be no help from the DICOM police in this regard, assuming they could only act within the bounds of the "law" (what is written in the standard). Furthermore, it is very difficult to ascertain a priori from conformance statements what is possible in these situations, there typically being little if any documentation of the scope of configuration possible on the sending end, or of the display behavior on the receiving end.
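For what it is worth, honoring the shutter is not a great deal of work. Here is a rough sketch of what a display might do, again assuming pydicom and numpy, handling only the rectangular and circular shapes, and glossing over details a real implementation would need (polygonal shutters, multi-frame objects, and the fact that Shutter Presentation Value is strictly a P-value rather than a stored pixel value):

    # A rough sketch of a display honoring a DICOM display shutter:
    # blank everything outside the shutter region, so the surround
    # stays dark even when the grayscale is inverted.
    import numpy as np
    import pydicom

    def apply_shutter(ds):
        pixels = ds.pixel_array.copy()
        rows, cols = pixels.shape
        r = np.arange(rows)[:, None]  # row indices, 0-based
        c = np.arange(cols)[None, :]  # column indices, 0-based
        inside = np.ones((rows, cols), dtype=bool)
        shapes = ds.ShutterShape      # may be multi-valued
        if 'RECTANGULAR' in shapes:
            # DICOM edge positions are 1-based, hence the -1 adjustments
            inside &= ((c >= ds.ShutterLeftVerticalEdge - 1) &
                       (c <= ds.ShutterRightVerticalEdge - 1) &
                       (r >= ds.ShutterUpperHorizontalEdge - 1) &
                       (r <= ds.ShutterLowerHorizontalEdge - 1))
        if 'CIRCULAR' in shapes:
            center_row, center_col = ds.CenterOfCircularShutter
            inside &= ((r - (center_row - 1)) ** 2 +
                       (c - (center_col - 1)) ** 2
                       <= ds.RadiusOfCircularShutter ** 2)
        # blank with the encoded presentation value, defaulting to black
        pixels[~inside] = ds.get('ShutterPresentationValue', 0)
        return pixels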

So, one is inevitably led to the conclusion that the standards are insufficient to satisfy users' needs in this regard, and that DICOM police or certification, whilst arguably necessary, would not be sufficient in their own right.

Or, to put it another way, there seems to be a need for "usability" standards, perhaps layered on top of the DICOM and IHE standards. This is an area that vendors may be reluctant to address, since such standards might potentially erode what they see as "added value" (though many users might argue the same are "bare necessities"), and they are a source of risk, in that products failing to offer the complete spectrum of "usability" requirements might be unmarketable.

There are two categories of precedent for this sort of thing that may be relevant. One category includes the IHE Radiology "content" profiles, specifically NM and Mammography; the other is the federally-mandated certification effort, exemplified currently by the Certification Commission for Healthcare Information Technology (CCHIT).

The IHE content profiles differ from much of the other radiology work in IHE in that they are less about workflow and more about modality-display interaction. Anyone with NM experience knows exactly how woeful most general-purpose PACS are with respect to handling NM images, whether in terms of providing interface tools with which NM physicians are comfortable or layout and reconstruction capability appropriate to different types of acquisition, not to mention analytic tools for quantitative measurements, especially cardiac. The NM folks (in the form of the SNM) finally said enough was enough and ultimately decided to work through the IHE framework to achieve their goal. I have little experience in this area, so cannot say to what extent this profile has actually influenced purchasable products or helped users in the real world, but this effort paved the way for content profiles that specified image display behavior in detail.

The IHE Mammo profile, on the other hand, is one that I was directly involved in. In this case a bunch of very disgruntled users, who had faced the realities of owning multiple vendors' FFDM equipment and trying to use it in high-volume environments, expressed their disappointment at a special SCAR session, which resulted in the formation of a sub-committee in IHE to address the concerns, and ultimately in a profile that specified mutually compatible requirements for both modalities and displays.

The process by which the Mammo profile was developed is instructive. First the users expressed their concerns and requirements with respect to real world experience with products; second, the FFDM and dedicated display system vendors admitted that there were problems and expressed willingness to engage in a dialog; third, everyone met together face-to-face to hash out what the priorities were and where there was common ground. There was considerable argument on the fringes, especially with respect to exactly how much application behavior could be standardized or required as a minimum, and which of several competing solutions to choose for particular problems when there existed an installed base of incompatible solutions, but ultimately a reasonable compromise was reached. The users insisted that deployment be swift and arranged a series of public demonstrations at short intervals to ensure that progress was made.

What distinguishes the Mammo profile is that it is very specific about how displays behave and, in particular, what features they must have: e.g., the ability to display current versus prior images at the same size, regardless of vendor and detector pitch, to display true size, to display CAD marks, to annotate in a particular way to meet regulatory and quality requirements, and which DICOM header attributes to use in what manner to implement those features. Further, given the different types of processing and grayscale contrast from the various detectors, the display is required to implement all of the possible grayscale contrast windowing and lookup table mechanisms, not just a vendor-specific subset. That is, in some cases the vendors agreed to standardize on the "intersection" of the various possibilities, and in other cases on the "union" of all of them, depending on the impact on the installed base and the usability of the feature.
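To illustrate just one of those requirements, the "same size" and "true size" behavior comes down to simple arithmetic on Imager Pixel Spacing (0018,1164); a hedged sketch, in which the monitor pitch and spacing values are invented example numbers:

    # "Same size" and "true size" arithmetic, sketched; magnification
    # at the detector is deliberately ignored here.
    MONITOR_PITCH_MM = 0.165  # assumed pitch of a 5 MP mammo display

    def true_size_zoom(imager_pixel_spacing_mm):
        """Screen pixels per image pixel for life-size rendering."""
        return imager_pixel_spacing_mm / MONITOR_PITCH_MM

    def same_size_zoom(current_spacing_mm, prior_spacing_mm, current_zoom):
        """Zoom for the prior to match the current image's displayed size."""
        return current_zoom * (prior_spacing_mm / current_spacing_mm)

    # A 0.070 mm/pixel current hung beside a 0.100 mm/pixel prior:
    # the prior needs about 1.43 times the current image's zoom.
    print(same_size_zoom(0.070, 0.100, 1.0))  # -> 1.428...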

This cooperative effort seems successful so far, though I am biased in this assessment, having been intimately involved. However, is it scalable to more ambitious "content", "functional" or "usability" specifications, either within IHE or elsewhere?

The mammography effort was made considerably easier by the fact that the digital mammography user and vendor communities are relatively small and tightly focused, if by no other factor than the regulatory burden imposed by MQSA. Everyone knows everyone else, basically everybody gets along and likes one another, and it is hard to take too unreasonable a stance in this group for very long. A certain amount of "cat herding" was required of course, but on a level of difficulty scale of 1 to 10, I would rate this one about a 4.

One risk to scalability is that "users" will not bother to ask for the IHE profile in their RFPs and contracts, and will buy whatever non-compliant lame "mammo package" their existing PACS vendor deigns to offer and force their radiologists to use it. This risk could be mitigated if the FDA were to require that only certified products be used for primary interpretation, but this would be a very special case, since mammography is about the only area in which the FDA has authority to regulate the practice of medicine, and it is not generally applicable. Other organizations, like JCAHO or third-party payors, could require certified compliance, but would there be any benefit for them in doing so?

Another risk with respect to generalizing the approach is the lack of interest by users in developing usability standards. The mammography and NM examples were perhaps atypical in that there were highly motivated individuals to champion the cause, who devoted enormous amounts of time and energy with the support of their organizations. Is this degree of user involvement likely to be repeatable in other areas, where the problems may not be so acutely felt, where the scope is broader, or where the problem is larger in scale?

Likewise, there is the risk that the vendors will be unresponsive to such efforts. Both DICOM and IHE development have been characterized by the active participation (some might say total domination) of vendors and have as a consequence been at least somewhat successful. Externally imposed standards to which there may be outright vendor opposition would be less likely to be successful.

On the subject of scale, it is potentially enormous, if one were to go to the extent of defining the required functionality of an entire PACS with respect to usability of workflow and display. Anyone who has written requirements specifications and test scripts for the implementation of such products is familiar with the level of effort, but then again, since this has already been done internally by vendors many times over, it is not a new experience.

To that end it may be instructive to review the work of CCHIT so far; kick-started by federal funding and a requirement to certify ambulatory EHRs, this effort has produced some interesting materials, even if one is not a fan of the politics involved. On their web site you can find documentation of their process, the functional requirements against which certification takes place, and the actual test scripts that are used, as well as the public comments received as these materials were being developed, which give an interesting insight into the vendors' opinion of the process and its expense, as well as the heavy-handedness of the CCHIT.

I have no involvement in this process at all, so I can't speak to its success or value so far, and you can read the materials as well as I can. It is interesting, though, to review the functionality criteria for an ambulatory EHR and envisage how one might write similar criteria for a PACS; likewise, to review the test scripts for these criteria from the perspective of perhaps testing an Image Display with the same approach. To return to the example at the beginning of this entry, one could envisage a criterion for a PACS such as "shall be able to display a frontal chest X-ray rotated or flipped into the correct orientation based on the DICOM orientation description" and a corresponding test script entry with a range of test materials that included images encoded in a manner that required such flipping. This is exactly the sort of testing that we did for the IHE Mammo profile.
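An automated fragment of such a test script might look like the following; it reuses the hypothetical normalize_frontal() sketch from earlier in this entry, and the file names, orientations and reference renderings are all invented for illustration:

    # A hypothetical test script entry: feed the display a set of test
    # images whose Patient Orientation values differ, and check that
    # each is rendered in the conventional orientation.
    import numpy as np
    import pydicom

    TEST_CASES = [
        ('pa_chest_rows_to_left.dcm', ['L', 'F']),   # already conventional
        ('ap_chest_rows_to_right.dcm', ['R', 'F']),  # needs a horizontal flip
    ]

    def test_frontal_orientation():
        for filename, expected in TEST_CASES:
            ds = pydicom.dcmread(filename)
            assert list(ds.PatientOrientation) == expected  # sanity-check input
            rendered = normalize_frontal(ds)
            # compare against a reference rendering of the same phantom
            reference = np.load(filename.replace('.dcm', '_reference.npy'))
            assert np.array_equal(rendered, reference)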

If this were to be done, would self-attestation or self-certification be sufficient, or would there need to be, in addition, external verification and certification such as CCHIT performs?

Who would require either form of certification? The users themselves? The payors? The government?

What would be the appropriate organization to perform such work? Would CCHIT take on imaging, or do they have enough on their plate, not to mention no expertise in this area? Could or would IHE do it, particularly now that it has grown well beyond radiology into other domains that have their own issues and priorities? Would ACR, who are all very eager to "accredit" modalities, be interested in or capable of this? SIIM would perhaps be a logical choice, were it not for the apparent influence vendors have on their decision-making process about things controversial. How about RSNA, or are they too invested in IHE already to begin a separate effort if one were thought to be necessary?

Or is there a need for yet another independent organization to do this? If so, who would start it? Who would run it? Who would pay for it?

And ultimately, would "standalone" certification against criteria be of sufficient benefit? It would be a start, but if there is one thing that the IHE Connectathons have demonstrated, it is that the proof is in the testing of multiple devices working together. To that end, does one need an infrastructure to support certification of permutations and combinations of devices inter-operating together, either in a test environment or in the field?

One could envisage an approach in which the two (or more) vendors involved submitted a "joint application" for certification of a combination, evaluated against specific criteria, based on the first actual deployment. Funding, implementing, monitoring and promulgating this information would be a challenge, but perhaps not an insurmountable one.

Imagine, in the display shutter example, that the forward-thinking purchaser of the PACS had included in their support contract a requirement that the PACS vendor participate in such cross-certification activities as new modalities were acquired by the site; likewise, before accepting the new modality, the site would have required the same of the modality vendor. If both had been previously cross-tested satisfactorily they would already be certified, and indeed the purchaser would have known this by consulting the certifying authority's web site; any limitations would have been publicly documented and disclosed. If the particular combination had not, then a first-time test would need to be performed against the certification criteria, supervised by some sort of "designated examiner" trained and licensed by the certifying authority. The result, whether successful or not, would be promulgated in full. Fees to cover the cost would be payable by the pair of vendors, who would recover this in their service contracts or purchase price. If one or other of the vendors refused to participate, then the user could still execute the (publicly available) test script themselves at their own expense, with or without an examiner; the results could still be promulgated with or without either vendor's prior approval, and failure might be a clue to the user not to accept the modality, or to plan to replace their PACS.

So we have come full circle, in that this is exactly the sort of paradigm that the IHE Connectathon supports. Except that it would involve products rather than experimental or prototype systems; the details of test script execution would be fully public, rather than categorized as a simple pass/fail or prevented from disclosure by confidentiality agreements; a considerably more comprehensive range of old and new products would be tested; the result would be a formal certification; the criteria would be at a level that addressed functionality and usability, not just message transfer and workflow; and the users and sites could specify certifications as criteria in their purchase and support contracts.

Or perhaps, the "great learning experience" for engineers, which is essentially what the Connectathon is, could be translated into a formalized process of direct, rather than only indirect (albeit very important), benefit to the user.

David

3 comments:

Herman said...

I agree with David that a DICOM or IHE "police" would not be effective without knowing what and how to police, due to the lack of use case definitions and real-life practical situations. The good news is that there are many test cases developed for IHE; however, they need work. The available tools for interoperability testing are great (the MESA tools), and the test images are also very useful, but in my opinion they should be extended and expanded.

Then there is the problem that what is tested during the IHE Connectathons are often prototypes, as witnessed by the fact that in many cases code changes are made on the spot to pass the IHE criteria (compare the pass/fail results from the tests at the beginning of the Connectathon to those at the end of the week).

Interestingly enough, some larger organizations such as the VA have developed their own test criteria, which, for example, test specifically for the MASK issue that David mentioned. However, in the case of the VA, the criteria also test for specific requirements (such as the abbreviated Patient ID) which are not necessarily applicable to all other institutions.

Ideally, it would be great if CCHIT would take this on, but why not use IHE for this effort? I know it might be a radical thought, but I think we should eliminate half of the IHE profiles. I think we are getting too far ahead of ourselves in specifying new profiles while the majority of new devices do not even support the basic ones, such as Scheduled Workflow. If we then put all that effort into expanding the existing use cases and generating test cases and images, we have a much better chance of interoperability.

There remains the issue that vendors show up with prototype software at the Connectathon, which can be dealt with by a clear identification of the software and hardware version that was tested, and an easy manner of retesting the device as part of an acceptance test in front of the customer (hospital).

So, my recommendation is to use the existing IHE organization, but refocus the effort on quality instead of quantity (more extensive testing of fewer profiles), make the tests easily repeatable (develop a GUI or something instead of running command-line tests), and get a clear identification, and possibly registration, of the version that is tested.

Herman O.

Mike Henderson said...

A normative, use-case-based conformance template has been adopted into the HL7 standard, effective with HL7 Version 2.5. The conformance template includes a dynamic profile that, together with the use case it supports, can be used as a basis for functional requirements.

VA has developed a profile document that uses the HL7 conformance template to enumerate full constraints on the HL7 transactions in the IHE Scheduled Workflow profile. The dynamic profile sections of the VA document provide precise functional requirements for commercial systems with which the VistA HIS interoperates via HL7 messaging.

Unknown said...

A little about my background: I have a degree in Computer Science and Computer Engineering from a top school, and so I consider my technical ability pretty good. I also happen to be working at a radiation oncology clinic. I have been implementing a simple PACS server using DCMTK for internal use by reading the DICOM documentation. So I have had the opportunity to see DICOM both from the end user side (day to day) and from the vendor side (implementing DICOM). I must say, it shows all the signs of having too many cooks in the kitchen. I see images generated by CT machines that can't be read by the treatment workstation. Strange private tags. Odd connection handling, you name it. And this software costs how much!? Compare it to the software at BestBuy and it is a truly sad situation. The high quality of products like QuickBooks didn't come from certification, standards or testing. It came about because people demanded better software. Instead of trying to attack this problem from a standards (and thus vendor-controlled) angle, why not start doing product reviews for users? For example, praise companies that are going the extra step of providing interoperability. Condemn the ones that are using private tags. Slowly, market forces will push everyone to a 'standard'.