Summary: Is the national health care IT standards agenda going to ignore the lessons of the past and the progress made so far? If so, what impact will it have on imaging?
Long Version:
This morning I was sent a link to a report on Realizing the Full Potential of Health Information Technology to Improve Healthcare for Americans: The Path Forward released on December 8th, 2010 from the President's Council of Advisors on Science and Technology (PCAST). There is a video available of the press conference at which it was released, as well as some older video from July 16th, 2010 of a panel discussion about some of its content. The noble goals that the speakers in the videos espouse seem to be somewhat at odds with the actual content.
Already various healthcare IT bloggers involved in standards development and deployment have commented in some detail about the technical content of the report, including Keith Boone, John Halamka, and Joyce Sensmeier.
These bloggers all seem slightly dismayed that the report seems to dismiss, or at least not give adequate recognition to, many existing efforts that are well underway. In my own review I see that HL7 is barely acknowledged, CDA is attributed to the ONC rather than HL7, IHE gets a passing mention only, and there is no mention at all of XDS, XUA, XCA, BPPC or any of the many other IHE efforts to move information back and forth across communities.
Instead, amongst the otherwise pretty reasonable content of the report, there is the unsubstantiated assertion that "document" or "record" oriented interchange is insufficient, that "tagged data elements" with accompanying metadata are necessary, and that a new standard for that needs to be written.
If this were a broad non-technical review, I would have no issue with any of that, but it isn't. It is mostly a broad review, but dives into extreme detail about certain technical issues, including security and privacy and access control solutions, implying that somehow the narrowly focused technical solutions proposed are the only solutions applicable to the broad aims of the overall report. This I find distinctly surprising.
But back to imaging, and what prompted me to write this blog entry, given that the HIT professionals and their organizations are more than capable of speaking for themselves. A surprising aspect of the report is the mammogram use case, starting on page 41.
In this use case, a patient has had mammograms performed at multiple locations, and her current physician needs to retrieve them, given, and I am selectively quoting here, "enough identifying information about the patient to allow the data to be located", "privacy protection information—who may access the mammograms, either identified or deidentified, and for what purposes", and "the provenance of the data—the date, time, type of equipment used, personnel (physician, nurse, or technician), and so forth".
OK, sounds like your standard cross-community access to DICOM images use case, something that IHE RadTech is specifically addressing as a profile this year (XCA-I), which involves a relatively simple extension to the existing cross community access (XCA) profile used in the XDS world. Now, I don't mean to pretend that cross-community access control (e.g., via XUA) is easy, nor that reconciliation of patient identity across communities (PIX) is easy either. Merely to point out that the problems in this area are lack of deployment and shared infrastructure or the incentives to build such (as the PCAST report rightly emphasizes elsewhere), and NOT lack of standards. We already have standard mechanisms to provide images with the level of completeness and quality required for the specific use case, ranging from pre-windowed downsampled lossy compressed images for undemanding review applications, through to the full set of diagnostic quality images required for more sophisticated uses.
Since we already have standards to do exactly this, it is perhaps not the ideal use case for PCAST to pick. The idea that a mammogram image (or set of four standard view images, all 40 to 120 MB of them) can somehow be treated as a single "data element", like say, a single blood glucose value, flies in the face of decades of experience of an entire industry.
That said, if the example had been a "tagged data element" such as the BI-RADS category from the body of a series of mammography reports from different locations, the example would have been more plausible. Indeed, the notion of being able to query and retrieve that part of the content of what would traditionally be handled as entire documents is very attractive on the face of it, and undoubtedly a desirable goal, though one that does not require rejection of the traditional structured document oriented paradigm to achieve. Nor does the report address the key barrier to adoption of structured as opposed to unstructured content to facilitate data element extraction and query, which seems to be a combination of a lack of tools and incentives to author structured content in the first place.
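To make the "tagged data element" idea concrete, here is a minimal sketch of pulling a single element, such as a BI-RADS assessment category, out of structured report content. The content tree here is modeled as plain nested dicts; real DICOM SR or CDA traversal is considerably more involved, and the concept names are illustrative assumptions, not standard codes.

```python
# Illustrative only: structured content modeled as nested dicts of the form
# {"concept": ..., "value": ..., "children": [...]}. A real implementation
# would walk DICOM SR content items or CDA entries by coded concept.

def find_element(node, concept_name):
    """Depth-first search for the first content item with the given concept."""
    if node.get("concept") == concept_name:
        return node.get("value")
    for child in node.get("children", []):
        result = find_element(child, concept_name)
        if result is not None:
            return result
    return None

report = {
    "concept": "Mammography Report",
    "children": [
        {"concept": "Findings", "children": [
            {"concept": "Assessment Category", "value": "BI-RADS 2"},
        ]},
    ],
}

print(find_element(report, "Assessment Category"))  # BI-RADS 2
```

The point is that this kind of extraction works perfectly well against content that arrives as a traditional structured document; it does not require abandoning the document-oriented paradigm.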
Regardless, a new "tagged data element" approach is not a pre-requisite for progress, and we do not need to wait for new standards to be promulgated for this before realizing most, if not all of the benefits of connectivity and interoperability.
Nor frankly, do we need to be able to query across enterprises at a particularly granular level, say for which operator performed a study, or what kVp they used. The metadata envisioned by IHE is very narrowly focused for this reason, and the notion of exposing "all available metadata", whilst theoretically possible, has enormous performance implications. This is certainly not the biggest fish to fry in the short term.
The entire continuum from images through documents to "atomic" data elements shares common barriers ... lack of any connection at all between enterprises and communities, lack of deployment of existing standard mechanisms for patient identity reconciliation, and lack of deployment of existing standard mechanisms for provisioning and controlling access.
Even on the access control front the report seems to ignore existing standards and infrastructure. After a basic tutorial on security principles, tied to the "tagged data element" concept, the mammography use case is revisited from this perspective, on page 51.
To paraphrase, an access control service authenticates the clinician, assures the patient has granted them access, and provides the locations of the encrypted mammography images, and supplies the necessary decryption keys. Then, and I quote, "in this and all scenarios, data and key are brought together only in the clinician’s computer, and only for the purposes of immediate display; decrypted data are not replicated or permanently stored locally".
Really? It is hard to imagine how to enforce that. And how practical is it, given that you generally need a pretty clever viewer, if not an entire mammography workstation or plugin to your PACS viewer, to effectively utilize a mammogram? Given that the standard of care currently seems to be to import outside studies into the PACS for use as priors and for distribution throughout the enterprise to allow them to be used with the (expensive and advanced) tools that expert physicians are used to using, the report's proposal seems unrealistic.
The mammography imaging use case almost reduces to the absurd the notion that this form of restricted access without local importation is workable. Even in the case of a local doctor's EHR, I would imagine that they would want to record even a simple data element, such as a blood glucose value, imported from outside systems, rather than be restricted to accessing it on demand only. Even to present the prior values in graphical form or as a flow sheet, which would seem highly desirable for enhanced decision making, would be very challenging if "outside" data points could not be persisted locally and permanently. Indeed, I dare say there might be a record keeping requirement to maintain such information.
Contrast the PCAST report's proposal with what is already well standardized; as described before, with XDS-I used in conjunction with XCA and XUA and PIX mechanisms, the XDS-I Imaging Document Source would provide the DICOM images to the clinician once authorized, encrypted in transit but not otherwise, via the protocol and in the format requested by the recipient, and importable into a "real" mammography display system in the normal manner. All without having to have every DICOM-compatible viewer or display system be updated to support some mythical new "universal language" based on "tagged data elements", and without being prevented from importing the images for transient or persistent use as is necessary to provide quality care. See John Moehrke's blog for a primer on what security features IHE has to offer.
Anyway, without getting too strident about it, at best I find some of the technical content of the PCAST report a disappointment. At worst, I see it as a distraction from the most important items of the national agenda, well espoused in other parts of the report, which include finding a way to provide the proper incentives to get connectivity adopted in a manner that improves the quality and efficiency of care, preferably in a manner that gives patients granular control over access to their information.
Also surprising is the choice of the mammography imaging use case as the poster child for PCAST, given that the ONC has essentially ignored imaging in its initial stages of "meaningful use", probably quite reasonably in terms of return on investment, but much to the chagrin of the various professional societies, judging by the ACR/ABR/SIIM/RSNA joint comments and MITA comments. When later stages of "meaningful use" get around to imaging, they will probably emphasize reporting, decision support and avoidance of unnecessary imaging (probably much more important goals), by which time, even in the absence of specific incentives, distributed cross-community image exchange via the "cloud" will probably be commonplace. One could certainly leave the RSNA show a week or so ago with the impression that this is a solved problem, and high time too.
David
Sunday, December 12, 2010
Wednesday, November 24, 2010
RSNA 2010 RFID Update
By the way, RSNA is still using RFID tags, as described in a previous blog entry ... I checked my badge that arrived in the mail recently and indeed it contains such a tag.
Since last year, when I microwaved it for too long and it caught fire and I then wandered around the whole week with a black hole on my chest, this year I will use the knife or the chisel.
Just to confirm that indeed the vendors do have access to the tracking information, here is the link to the RFID Exhibitor Package Order Form, a press release from the provider, and the usual spiel in the RSNA materials about what the RFID tag will be used for and how to opt out.
David
Dose Matters at RSNA 2010
Summary: Look for vendors offering the NEMA X-25 Dose Check feature and DICOM Radiation Dose SR (RDSR) (IHE REM profile) output from their CT modalities, and products able to store and process RDSRs for dose monitoring, alerting and registry submission. Bring along a list of your installed base of CT, PACS and RIS model and version numbers, and ask your vendors when Dose Check and RDSR capability will be supported. Don't forget to ask your PACS, CD burning and importing, cloud/Internet storage and distribution, Modality Worklist (MWL), reporting system, ordering and decision support vendors about this too. Visit the RSNA dose demo at Booth 2852, Exhibit Hall A, South Building.
Long Version:
In my last blog entry, I discussed the need for tools for monitoring and controlling radiation dose from CT, and with RSNA's Annual Scientific Pilgrimage to Chicago coming up next week, I thought I would consider the progress in the last six months, and what attendees might want to focus on. Undoubtedly the CT vendors will be heavily focused on what new dose-reduction technology they can deliver in new products, but do not lose sight of the importance of evaluating the monitoring and management technology as well.
One notable event was the release in November of a public letter from the FDA to MITA (NEMA), the vendors' trade organization, summarizing their investigation of the brain perfusion incidents.
In October, NEMA released the X-25 Computed Tomography Dose Check standard, which you can download from here. This feature, which the vendors had already committed at the FDA Public Meeting to develop and implement, is intended to "notify and alert the operating personnel ... that prepare and set the scan parameters — prior to starting a scan — whether the estimated dose index is above the value defined and set by the ... institution ... to warrant notification to the operator". Clearly this requires two things, 1) the implementation of the feature in the scanner, and 2) suitable values to be configured by the institution. No doubt the vendors will promulgate default levels, and organizations like AAPM or ACR might provide them, or the local medical physicists may decide for themselves. Eventually the X-25 feature will get folded into the CT manufacturer's safety bible, IEC 60601-2-44.
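The two-level scheme described above can be sketched as a simple comparison of an estimated dose index against institution-configured values. This is a hedged illustration of the concept only, assuming CTDIvol in mGy as the index; the actual NEMA X-25 behavior (per-protocol-element notifications versus accumulated examination alerts) is richer than this.

```python
# Sketch of the Dose Check concept: compare an estimated dose index for a
# proposed scan against the notification value and the (higher) alert value
# configured by the institution. Parameter names are illustrative.

def dose_check(estimated_ctdivol, notification_value, alert_value):
    """Return 'alert', 'notification', or 'ok' for a proposed protocol element."""
    if estimated_ctdivol >= alert_value:
        return "alert"
    if estimated_ctdivol >= notification_value:
        return "notification"
    return "ok"

# e.g., a head perfusion protocol estimated at 600 mGy against a 50 mGy
# notification value and a 1000 mGy alert value:
print(dose_check(600.0, 50.0, 1000.0))  # notification
```

Proceeding despite a notification or alert is then what triggers the who, what, when and why audit trail recording discussed below.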
The RSNA meeting will be an opportunity for you to ask the CT sales people and application specialists about the Dose Check feature, and particularly how and when they plan to retrofit the scanners you already have installed to support it and how much it will cost (if anything). A commitment to the FDA is one thing, but there is nothing like evidence of demand from the customers to motivate product managers to deliver.
X-25 distinguishes between "notifications" for "protocol elements" prior to scanning, and alerts for the "examination" that accumulate what has been done so far. There is also an alert prior to saving (not just attempting to perform) a protocol that exceeds limits, which specifically helps to address a concern that arose in the Cedars-Sinai perfusion incident. Proceeding despite a notification or alert requires the recording of who, what, when and why in an audit trail. DICOM is working on additional information to be included in the Radiation Dose Structured Report (RDSR) to record the X-25 parameters and audit trail information (see CP 1047). You might also want to ask the modality vendors at RSNA when they plan to implement CP 1047, which should be made final text at the Jan 2011 WG 6 meeting. If you are looking for dose monitoring systems that can process RDSRs, you might also want to ask them about when they plan to be able to provide you with a human-readable report of CP 1047 X-25 events.
On the subject of RDSR, one vendor, GE, has already provided a list of which models and versions of scanner support RDSR, and which earlier models produce dose screen secondary capture images; you can find the list here. Hopefully other vendors, perhaps at RSNA, will provide a similar list, and I will tabulate them on the Radiation Dose Informatics web site on the Software and Devices page. In lieu of information supplied by the vendors, I will also tabulate information based on which scanners and models I encounter RDSR objects from, so feel free to submit samples to me if you encounter them.
When shopping for new CT scanners or upgrades next week, asking the vendors for RDSR support is something obvious that you should do, but even if you are not buying new equipment, it is reasonable to ask about upgrading your installed base. If I were you, I would bring along a complete list of all the equipment that I was responsible for, including models and versions, and ask the sales people very specifically which of those on my list can be upgraded, and when, and which of those will never be upgraded. Not only will this serve to alert the product managers to your concern about this issue, but the answers will help you plan your own dose monitoring strategy. If you don't get the answer that you want to hear (all your scanners will soon support RDSR), then you are going to need to develop a strategy that perhaps involves a third-party solution that can either OCR the dose screens if the scanners produce them, or provide a means for operator data entry and transcription of the information displayed on the console.
As for "dose monitoring systems", or whatever the name the industry is going to converge on for monitoring and reporting CT scanner dose output, the upcoming RSNA is an opportunity to look around for vendors of those systems too. It remains to be seen whether this feature becomes routinely embedded in the PACS or the RIS, or whether for the time being or indefinitely, it will be the province of dedicated third-party systems (I will maintain a list of the latter at the Software and Devices page, which is, so far, depressingly short).
In the IHE REM profile, the modality can send RDSRs either to the Image Manager/Image Archive (IM/IA) (usually the PACS) or directly to a Dose Information Reporter (DIR), which might be the RIS or a third-party system, or such a system may query the PACS. The REM design assumes that since RDSRs are DICOM objects, the PACS is the logical actor to persist and distribute them.
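The routing decision in that paragraph amounts to recognizing an incoming object as an RDSR and fanning it out accordingly. A minimal sketch, using the X-Ray Radiation Dose SR Storage SOP Class UID to identify RDSRs; the dict-as-dataset representation is a stand-in for a real DICOM toolkit, and the actor names are just those from the REM profile description above.

```python
# Recognize an RDSR by its SOP Class UID so that, in addition to being
# archived, it can be forwarded to the Dose Information Reporter (DIR).
# The UID below is X-Ray Radiation Dose SR Storage.

XRAY_RDSR_SOP_CLASS = "1.2.840.10008.5.1.4.1.1.88.67"

def is_rdsr(dataset):
    return dataset.get("SOPClassUID") == XRAY_RDSR_SOP_CLASS

def route(dataset):
    """Everything goes to the Image Manager/Archive; RDSRs also go to the DIR."""
    destinations = ["IM/IA"]
    if is_rdsr(dataset):
        destinations.append("DIR")
    return destinations

print(route({"SOPClassUID": XRAY_RDSR_SOP_CLASS}))  # ['IM/IA', 'DIR']
```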
However, RDSR output from the modality is not going to be of much immediate use to you if a) your PACS won't accept and store them, and b) you don't have something that will display their content and, more importantly, produce management reports of dose output, if not alerts and notifications when limits are exceeded. At the very least, you can start storing these RDSRs in the PACS now, so that when you do settle on a dose management solution, you will be able to use your historical data, as both a benchmark for your local historical practice, as well as for individual patient dose management decisions (recognizing the limitations of using dose output as a surrogate for effective dose to the patient).
Accordingly, not only do you need to be asking your CT vendor for RDSR output, but you need to be asking your PACS vendor if they will accept, store and faithfully regurgitate RDSRs, even if they do not yet have plans to render and collate the contents.
This also includes recording RDSRs on CDs, since referring physicians want to know about dose too, as does the next facility in the chain that is going to import these CDs. So your third-party CD burning and import and viewer vendors are also candidates for interrogation next week about RDSR support. You also need to ask any Internet distribution and storage vendors offering "CD substitutes" in the "cloud" about this too.
Your RIS vendor doesn't escape either. Though they may not be planning on offering RDSR management, they will still be providing Modality Worklists (MWL) to the CT scanners. It turns out that it is really important to convey the age, sex, height and weight information, as well as anatomic and procedure codes, if downstream one is to make size-appropriate use of the dose output information (which after all is based on standard sized phantoms and needs adjustment for kids and for particularly small or large people). The CT scanner vendors are well aware of these issues, and hopefully can reliably copy the information from the worklist into the RDSR (another question to ask your scanner vendor if you want to get into that much detail with them).
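The worklist-to-RDSR hand-off described above is mechanically simple, which is why it is reasonable to expect scanner vendors to get it right. A sketch, assuming dict-based datasets and common DICOM attribute keywords for the size-relevant demographics; this is illustrative, not any vendor's or toolkit's actual API.

```python
# Copy the size-relevant patient attributes from a Modality Worklist (MWL)
# response item into the dose record, so downstream consumers can make
# size-appropriate adjustments. Keywords mirror common DICOM attribute names.

DEMOGRAPHICS = ("PatientAge", "PatientSex", "PatientSize", "PatientWeight")

def copy_demographics(mwl_item, rdsr):
    """Copy size-relevant attributes present in the worklist item into the RDSR."""
    for keyword in DEMOGRAPHICS:
        if keyword in mwl_item:
            rdsr[keyword] = mwl_item[keyword]
    return rdsr

rdsr = copy_demographics(
    {"PatientAge": "007Y", "PatientSex": "M", "PatientWeight": "25"}, {})
print(rdsr)  # {'PatientAge': '007Y', 'PatientSex': 'M', 'PatientWeight': '25'}
```

Of course, this only works if the RIS populated those attributes in the worklist in the first place, which is the point of interrogating your RIS vendor.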
Finally, when generating the radiology report, it is good practice if not required by regulation (such as in Germany or now California with SB 1237), to include information about the radiation dose, and a creative reporting system vendor could automatically copy information directly from the RDSR for the study into the report template being populated by the radiologist. Now is the time to get the reporting system vendors thinking about this, particularly since some of them already offer features for doing the same sort of thing from other types of structured report "input", notably for ultrasound and echocardiography. Even the ordering and decision support system vendors should not be immune to your questions, since they too can take advantage of patient-specific historical information acquired from RDSRs.
In conclusion, next week you have the opportunity to put penetrating questions about radiation dose to everyone you meet with a product that is involved in any part of the imaging chain, from ordering all the way through to reporting.
If you want to get a more detailed briefing, perhaps prior to visiting the vendors' booths, and see some of the components of the IHE REM profile in action, feel free to come and visit the RSNA Image Sharing and Radiation Dose Monitoring demonstration. A group of CT modality, PACS and dose reporting vendors, together with some academic groups, the ACR and myself will be participating. Strangely enough this will actually be held in the Technical Exhibits area, rather than in the Lakeside Learning Center, specifically Booth 2852, Exhibit Hall A, South Building. Email me if you can't find the demo or have any questions about it.
David
PS. By the way, while I am thinking of it, if you use lossy compression in your archive, make sure it is turned off for series that contain dose screens (or indeed any secondary captures containing text and graphics like perfusion curves), since not only will it make them look like crap, but it will also degrade the performance of, if not entirely cripple, any OCR that you might later apply.
Monday, May 31, 2010
Dose Matters
Summary: Reducing the radiation exposure from diagnostic imaging is an increasing priority; standards exist for encoding dose information but are not yet widely adopted, though soon will be given regulatory pressure and industry commitments; few tools, commercial or open source, exist yet for monitoring and reporting radiation dose.
Long Version:
You would have to have been living on a desert island or under a rock to not be aware that there is a heightened sensitivity amongst the general populace and the regulatory authorities to the matter of radiation dose exposure from diagnostic imaging and the risk of cancer. Whether it be well-publicized disasters like the Jacoby Roth or Cedars-Sinai incidents, or general concern related to dose from procedures like virtual colonoscopy, or articles evaluating the contribution of diagnostic imaging as a source of exposure, the need to deal with the matter is inescapable. This is true regardless of whether you are a "believer" in the linear no-threshold model, which says that no amount of radiation is safe, or not. The FDA is going to require that efforts be made to reduce the dose delivered by both CT and fluoroscopy, as discussed in their initiative white paper and reviewed at the recent public meeting, though they have been working on this for some time. Vendors are already delivering equipment incorporating dose saving technology. Attention is being drawn to the radiation dose caused by the ordering of repeat or low-yield procedures, as well as optimal strategies for pediatric imaging (Image Gently).
Yet so much remains in the hands of the user in terms of ordering as well as performance of the examination. If you cannot measure it, you cannot improve it (Lord Kelvin), so the question arises as to how one can track the amount of radiation being delivered, either to the population, or at a site, or to an individual, and hence benchmark one's own performance then make improvements to the process. Surprisingly, though devices have long been required to provide visual feedback to the operator at the console, it has proven remarkably difficult to get this information out of the scanners and into some sort of database or registry that can be searched or monitored.
DICOM has a number of ways that dose information can be encoded, but for the last few years has been focusing on the Radiation Dose Structured Report (SR), with the goal of having the modalities produce this directly. Many people expect that dose information would be in the image headers, but the image is the wrong place to encode this; images may be transmitted before the study is complete and hence not contain the cumulative information, and more than one image may be reconstructed from the same irradiation event, creating the risk that the dose may be counted more than once. Further, not all originally acquired images are necessarily retained (e.g., thin slices from CT), and a large volume of images is a poor means of communicating what is essentially a small amount of information. Once upon a time, it was thought that the modality performed procedure step (MPPS) might be a suitable mechanism to communicate this information, but it was soon realized that there is no easy way to persist what is essentially a transient message, not to mention that MPPS is relatively poorly adopted.
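To make the double-counting point concrete, here is a minimal sketch (in Python, using an invented flat record structure rather than a real parsed SR tree, which is considerably more involved) of accumulating dose per patient while de-duplicating on the irradiation event UID:

```python
# Hypothetical sketch: accumulate CT dose (DLP) per patient from
# RDSR-like records, de-duplicating on the irradiation event UID so
# that an event reported more than once is only counted a single time.
# The flat record structure is invented for illustration; real RDSRs
# are DICOM SR trees that require proper parsing.

def cumulative_dlp(records):
    """records: iterable of (patient_id, irradiation_event_uid, dlp_mGy_cm)."""
    seen = set()    # irradiation event UIDs already counted
    totals = {}     # patient_id -> cumulative DLP
    for patient_id, event_uid, dlp in records:
        if event_uid in seen:
            continue    # same event reported twice; do not double count
        seen.add(event_uid)
        totals[patient_id] = totals.get(patient_id, 0.0) + dlp
    return totals
```

The same keying principle applies whatever the dose metric; the point is that the irradiation event, not the image, is the unit of exposure.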
To meet the users' immediate needs, some vendors have gone so far as to provide images that are saved screens containing the text of the delivered dose information. Both GE and Philips do this, and there is a large installed base of such scanners as well as archives full of such information. Though Philips had the foresight to also encode the same information in the header attributes of these images (albeit in a non-standard way), both as plain text and as individual elements, unfortunately GE did not, so many folks who want to perform a retrospective review of their dose information need to manually examine these images, or develop some optical character recognition (OCR) software.
For the time being, there is a relative paucity of tools available both to handle information from legacy devices, as well as to use more standard approaches, including that espoused in the IHE Radiation Exposure Monitoring (REM) profile, which is based on modalities producing DICOM Radiation Dose SR objects, and provides specific actors for consuming and reporting information, including transmission to registries, such as the ACR's Dose Index Registry. The good news is that there has been significant activity at recent IHE Connectathons with respect to implementing REM; you can review these yourself at the connectathon results page, where you can see which vendors have specific offerings in this field. MITA, the modality vendors' industry trade group, has made a strong commitment not only to dose reduction in general, and the CT Radiation Dose Check feature, but also to retrofitting at least the current platform in the installed base to produce DICOM SR objects.
At a recent teleconference of the newly convened Quality and Safety Subcommittee of the RSNA's Radiology Informatics Committee (RIC), it was apparent that several academic groups have been working in this field, and the need to make available open source tools was highlighted, if for no other reason than to serve until the industry catches up and provides a robust infrastructure.
To this end I thought I would externalize some of my own primitive efforts, as extensions to my Pixelmed Java DICOM toolkit. Specifically, I put together a little application called DoseUtility, which brings together a number of components that I have been working on, including the construction and validation of Radiation Dose SR objects, as well as the ability to perform OCR on GE dose screen saved images. I have already used the validator to good effect during the last few connectathons, and the experience constructing and testing it has led to a number of proposed changes to the standard and the IHE profile.
Eventually I hope to extend this tool and its components to provide a complete infrastructure for dose management, at least from the DICOM and IHE side of the problem. Currently it focuses on CT, but it will be extended to fluoroscopy and projection X-ray soon, as well as injected dose from NM and PET, as those standards evolve.
I dare say that the various academic groups who have been working on the same types of problems may well have much more sophisticated tools, likely more easily integrated with their own PACS and RIS, perhaps taking advantage of proprietary APIs. As yet, I am unfamiliar with the specifics of most of them, but I will make a catalog of whatever becomes available.
David
Wednesday, April 28, 2010
This blog has moved
This blog is now located at http://dclunie.blogspot.com/.
You will be automatically redirected in 30 seconds, or you may click here.
For feed subscribers, please update your feed subscriptions to
http://dclunie.blogspot.com/feeds/posts/default.
Monday, April 13, 2009
To push or to pull: that is the question
"Whether 'tis nobler in the mind to suffer
The slings and arrows of outrageous delay and inconvenience,
Or to take arms against a lack of bandwidth,
And by anticipating avoid it?"
Summary: Sharing of images across the network is a potentially attractive alternative to CDs, but image sets are large, bandwidth is limited, lossy compression is controversial, security infrastructure is non-existent, and recipients are busy and impatient; why have to pull images on demand slowly or with poor quality, when one can anticipate where they are needed and push or pre-fetch ? The standards and technology exist now, not tomorrow.
Long version:
There is a renewed enthusiasm for image sharing using the network.
This idea is not new, and for many years now the momentum has been growing to establish standards and build infrastructure to support images as part of the distributed electronic record. In the current US political climate, the huge cost and disparate availability of health care (and imaging utilization in particular) has IT proponents, who are as eager to jump on the stimulus package gravy train as anyone else, seeking to find a "meaningful use" of IT to address the image sharing problem.
The current generation of computer literate doctors is used to the convenience of the Internet, and on-demand access to arbitrary information à la Google. It is reasonable for them to demand that they have access with a similar level of convenience to patient information, including images.
However, this requirement is easy to demand but not so easy to satisfy. Practical realities intrude: radiological images are more complex than consumer grade images, need more manipulation for adequate interactive visualization, tend to be very large individually (e.g., digital mammograms) and occur in very large sets (e.g., thin slice CT or CT/PET). Yet bandwidth, particularly in the "last mile" from the providers to the Internet, is limited.
Some tout the use of lossy image compression as a panacea, yet this remains controversial and adequately powered studies to "prove" that such compression does not lower the quality of care are few in number. Others say the bandwidth problem will go away over time, yet in underserved rural areas, and particularly in medical offices, high-speed DSL or cable access is limited; for large institutions with very large volumes, very high bandwidth "pipes" may add significantly to operational cost. Even with high bandwidth, high latency can degrade the transfer rates achieved and impact any interactive protocol perceptibly. Like healthcare in general, not everyone has equal access at equal cost.
Leaving the DICOM images "on the server" and interacting remotely with an application, either using a proprietary approach like Terarecon's, or a generic application sharing approach like Citrix, or web browser approach that serves up consumer format images on demand, is certainly possible. These approaches introduce new classes of problems such as access control and familiarity with the user interface. One frequently hears from radiologists who serve a number of hospitals, about how irritating it is to have to learn the remote interface of each of the different installed PACS, for example. This is exactly the same problem that the AMA has raised about the different viewers on different vendors' CDs. Provisioning every possible user with the appropriate identity and authentication information, and then assuring they have access to what they should have and nothing else, is also obviously a major administrative task. In the absence of a national or regional infrastructure for centralizing such provisioning, or a framework of "trust" between providers, this will remain a difficult problem. Providing patients with access to their own information and images adds another dimension to the scale and complexity problem.
For years now IHE has been promoting its cross-enterprise document sharing (XDS) architecture as a potential solution. The idea is to have each source register what it has available with a centrally accessible registry, and then consumers use the location information in the registry to go back to the source repository to pull what they need. The underlying technology is appropriately buzzword compliant (XML and SOAP and all that), and there is an additional layer to deal with the number and size of images (XDS-I, currently undergoing revision to become XDS-I.b using MTOM/XOP to efficiently handle the binary image data). However, this architecture still presupposes an unparalleled (and as yet largely unimplemented) degree of cooperation between everyone involved in the sharing problem.
Healthcare providers do not normally cooperate, at least in the US; indeed the very essence of the healthcare system encourages them to compete, and cooperation is anathema to them. Does it make sense to rely on the future deployment of an infrastructure that involves cooperation and yet likely with additional cost associated with it and little incentive to participate ? Who are the providers already interested in providing information to ? Their "customers" obviously, the referring doctors who order (or in civilized countries "request") the imaging services in the first place.
These referring doctors span the gamut in terms of technologic sophistication and requirements. Some may be satisfied with just the report. Many though, and often it depends on the specific patient and their condition, will need some access to the images. A significant proportion will need access to the original DICOM images in order to perform their own interpretation or to use their own visualization or planning tools. Yet these are busy people who have neither the patience nor the time to waste, nor are they reimbursed, for screwing around with artificial technological barriers to using the images, such as network delays or unfamiliar user interfaces.
Should it not be a simple matter in this day and age to send the images to where they are needed, just as one sends (faxes or emails) the report, a well established practice ?
Obviously this is possible. No imaging facility is going to perform an examination without knowing who ordered (requested) it, so the information about where to send it exists. If the potential recipients had a system capable of receiving it, this process could be automated.
Just as I have advocated in the past that referring doctors set up a system in their office and have their staff handle CD importing, so that such images are ready to view in their system when they need them, one could envisage the same or a similar in-office system with a port listening to the outside world ready to receive incoming images. Just like the fax machine that is sitting there waiting to receive phone calls.
Do the standards and technology exist to do this safely and securely right now ? Of course they do. All one would need is to perform an ordinary DICOM network transfer of the images from the sending site (imaging center) to the receiving site (referring doctor). Should it be a secure transfer to protect confidentiality ? Of course it should, but one does not need to set up a VPN to every possible referring doctor, nor from every possible sending site, since DICOM already defines transport over TLS (SSL), the same encryption protocol that one uses for ecommerce with sites with whom one has no pre-established relationship.
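As a concrete illustration, here is a minimal sketch, using nothing but the Python standard library's ssl module, of the kind of client-side TLS context a sending site might use to wrap its DICOM association; certificate verification against well-known CAs and hostname checking are on by default, and the hostname mentioned in the comment is of course hypothetical:

```python
import ssl

# Sketch of the client-side TLS setup a DICOM-over-TLS sender might use.
# The receiving site's certificate is verified against well-known CAs,
# just as a web browser would verify it; no pre-established VPN needed.
def make_tls_context():
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse obsolete protocols
    # context.wrap_socket(sock, server_hostname="pacs.example.com") would
    # then carry the ordinary DICOM association over the encrypted channel.
    return context
```

The DICOM messages themselves are unchanged; TLS is simply a transport wrapper, which is exactly what makes retrofitting it tractable.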
Does one need any identification or authentication infrastructure to achieve these ? Beyond perhaps checking that the receiving site has a valid TLS certificate (signed by a well-known certificate authority, just like for web browsing), the answer is no. The fact that the recipient ordered (requested) the examination should be sufficient to establish that they are entitled to access the images, for example. By analogy, one does not require any special authentication to receive the faxed report.
Would recipients potentially be vulnerable to "DICOM image spam" ? Well, theoretically, if some attacker were that determined, but this would easily be solved by "filtering" on a list of known and approved source sites.
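Such filtering amounts to little more than an allowlist check at association time; a toy sketch (the source site entries, keyed here on calling AE title and peer host, are entirely hypothetical):

```python
# Sketch of "filtering" inbound associations on a list of known and
# approved source sites, keyed on calling AE title and peer host.
# The approved entries are hypothetical examples.
APPROVED_SOURCES = {
    ("IMAGING_CTR_1", "203.0.113.10"),
    ("IMAGING_CTR_2", "203.0.113.22"),
}

def accept_association(calling_aet, peer_host):
    """Reject any association not from a pre-approved source site."""
    return (calling_aet, peer_host) in APPROVED_SOURCES
```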
Is there any risk to the integrity of the sending site ? Well no, because this is an outbound transfer (push), and there is no need for the sending site to respond to queries (unless for some reason, it wants to).
This is pretty easy stuff to set up, and apart from the encryption layer, involves nothing that imaging vendors are not already intimately familiar with. No fancy web services stuff, no XML or SOAP messages. Just plain old boring store-and-forward point-to-point DICOM. And there are certainly already software tool kits that provide support for the secure transfer of DICOM images over TLS. Some of these tool kits also support the use of the various standard lossless and lossy compression "transfer syntaxes" that DICOM defines, including JPEG 2000, which can be used as appropriate and negotiated automatically depending on the receiving system's capabilities. Is DICOM the fastest possible network transfer protocol ? Well arguably not, depending on the latency of the network and the quality of the implementation, but in a store-and-forward paradigm this is much less of a factor, and there are many ways to optimize DICOM transfers if required, without throwing away the interoperability of a well known protocol.
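The automatic negotiation mentioned above reduces to the sender proposing the transfer syntaxes it can encode, in order of preference, and settling on the first one the receiver also accepts. A toy sketch (the UIDs are real DICOM transfer syntax UIDs; the preference ordering is merely illustrative):

```python
# Sketch of transfer syntax selection during association negotiation:
# take the first of the sender's proposals that the receiver supports.
JPEG2000_LOSSLESS = "1.2.840.10008.1.2.4.90"  # JPEG 2000 Lossless Only
JPEG_LOSSLESS = "1.2.840.10008.1.2.4.70"      # JPEG Lossless SV1
EXPLICIT_VR_LE = "1.2.840.10008.1.2.1"        # Explicit VR Little Endian
IMPLICIT_VR_LE = "1.2.840.10008.1.2"          # Implicit VR Little Endian

def negotiate(proposed_in_order, receiver_supports):
    for transfer_syntax in proposed_in_order:
        if transfer_syntax in receiver_supports:
            return transfer_syntax
    return None   # no acceptable presentation context; association fails
```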
What about confirming the success of the transfer ? One could use the existing DICOM Storage Commitment in the same way IHE uses it between modalities and the PACS, and/or one could include a "manifest" of what should have been sent, e.g., as a DICOM SR the way the IHE Teaching File and Clinical Trial Export (TCE) profile does.
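Checking receipt against such a manifest is, at its core, a set difference: the sender lists the SOP Instance UIDs that should have arrived, and the receiver reports anything missing. A minimal sketch:

```python
# Sketch of verifying receipt against a manifest of SOP Instance UIDs
# (e.g., conveyed as a DICOM SR manifest per the IHE TCE profile).
def missing_instances(manifest_uids, received_uids):
    """Return the sorted list of UIDs promised but not yet received."""
    return sorted(set(manifest_uids) - set(received_uids))
```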
What about the matter of inconsistent patient identifiers ? How is the receiving site going to know how to match the incoming images that use the imaging center's patient identifier against their own internal patient identifier ? This is certainly a non-trivial problem, but just as when paying an invoice a business normally tracks the orderer's purchase order number in addition to its own numbering system, there is no reason why an imaging system cannot do the same. There are certainly HL7 and DICOM attributes related to dealing with this class of problem, but in the short term and in the absence of a consistent convention for handling this, it may be necessary to have a heuristic matching algorithm and/or human oversight of this "import reconciliation" problem. Perhaps one day there will be a national patient identifier to reduce the complexity of this problem, but there will always be errors that need reconciliation. The same class of problem exists with CDs, and the IHE Import Reconciliation Workflow (IRWF) profile provides ways to deal with this, either in an unscheduled manner by using patient identity queries, or in a scheduled manner, whereby the system that placed the order in the first place could be expecting the result in the form of images and perform the matching against a reduced set of potential alternatives.
Note that this entire solution avoids the need for any type of centralized infrastructure. It just needs the sending site to know the "DICOM address" (host, port and AET) of the ordering (requesting) doctor's site to which to send the images. This could be configured in the system in advance, just like the fax number for the report, and it could be included in every order (printed or electronic) to allow manual or automatic addition of new sites.
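The pre-configured routing described above is nothing more exotic than an address book; a toy sketch, in which every entry (doctor, host, port, AE title) is hypothetical:

```python
# Sketch of a pre-configured "DICOM address book" mapping an ordering
# (requesting) doctor to the host, port and AE title of the system that
# should receive the images. All entries are hypothetical.
ADDRESS_BOOK = {
    "Dr. Smith": ("office.smith.example.com", 11112, "SMITH_OFFICE"),
    "Dr. Jones": ("office.jones.example.com", 2762, "JONES_TLS"),
}

def destination_for(referring_doctor):
    """Look up where to push the images, or None if not yet configured."""
    return ADDRESS_BOOK.get(referring_doctor)
```

New entries could be added manually or harvested from the order itself, exactly as the post suggests for the fax number of the report.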
Ideally the sending capability would be built in to imaging centers' information systems and PACS. Could one retrofit an existing RIS/PACS with this capability with a third-party device or piece of software ? Certainly; one could envisage a system in which the modality worklist provider was polled on a regular basis to extract information about what examinations had been requested, and within the worklist entries there should be identification of the referring doctor. Such a system would then query the PACS to see what images were available for these requests, retrieve them, and forward them on to the pre-configured recipients site. Other DICOM services, such as Modality Performed Procedure Step (MPPS) and Instance Availability Notification (IAN) might be of additional assistance in making this process more reliable or timely, and in particular help assure that a complete set of images was transferred. Alternatively, rather than polling the MWL provider, one might listen to an HL7 ADT and Order Entry feed to extract the order information or gather additional details.
The bottom line though is that the images could be in the hands of the remote referring doctor before the radiologist has even had a chance to look at them, a state that has become well established as appropriate within a typical enterprise's PACS and hence should be available to outsiders as well.
What if a mistake is made, and the images need to be corrected later ? This is the same class of problem that one faces with film, or faxed reports or CDs, and in the short term there likely needs to be a human process involved to be sure that everyone is notified. That said, the more immediate and automated transfers become, the more this is potentially an issue; it is shared by all distributed infrastructures, whether point-to-point or centralized or federated. IHE has started to define transactions for flagging images as rejected (using a DICOM Key Object Selection Document with a defined title), with the intent that the corrected images then be resent. This work has been started in the Image Rejection Note Stored transaction of the IHE Mammography Acquisition Workflow supplement.
What if there are multiple potential recipients, i.e., a "cc list" on the order, such as is often the case when a specialist orders (requests) the examination with the intent of referring the patient onwards, as well as sending a copy to the primary care doctor ? Simple, forward the images to everyone on the cc list. From a consent and HIPAA Privacy Rule authorization perspective, it would be the responsibility of the person writing the order (request) to be sure that everyone on the cc list was appropriately authorized.
What if the patient wants a copy ? Well, it is unlikely that they would have their own personal receiving setup, and unreasonable to expect the imaging provider to support every such recipient (at least until this became as ubiquitous as email). There is always CD of course, but if the patient had a personal electronic health record provider (whoever that might be), they would be able to designate that provider's address as a target, and the imaging provider could send the images there as well. Likely there would be a few such providers configured in advance and it would merely be a matter of recording which one with the patient's registration information.
Are there other use-cases beyond the simple "order imaging, perform imaging, send to orderer" example ? Certainly there are. The typical emergency case referral, in which a patient is imaged at the first site then transferred for further care, is an example of where the same point-to-point store-and-forward paradigm can be used. Though in this case, one needs an infrastructure with sufficient bandwidth to cope with the disaster scenarios where a lot of images on multiple patients need to be transferred very quickly; as a consequence, a more formal arrangement between the two sites is probably necessary than the more ad hoc "email like" pattern for an arbitrary and extensible set of referring doctors.
Teleradiology use-cases, either for a specialist radiologist consultation, or primary interpretation "at home", or even a preliminary interpretation off-shore, are other examples in which exactly the same store-and-forward paradigm is applicable. This is nothing new, and people have been doing exactly this for many years, using DICOM C-STORE transactions with or without compression in some cases and proprietary protocols in others. Some such teleradiology scenarios could be better supported by removing the patient's true identity first and replacing it with a reversible pseudonym (e.g., for specialty or off-shore teleradiology), but that is a subtlety and not a pre-requisite.
All that is new here is essentially recognition that every potential recipient needs a secure DICOM "address", just like an email address, that sending sites be configured to support a multitude of them, and that recipients need to have an Internet connected "DICOM listener" ready to receive images into their own preferred viewing system. I.e., it is a matter of taking well-established existing technology and making it routine rather than occasional.
Does this undermine the need for centralized and regional archives and repositories and registries, and web services orientated infrastructures that are more easily integrated with other sources of information than images ? No, certainly it does not, since there are many other use-cases in which the doctor needs to search for information whose need cannot be so easily anticipated. Still though, many of those use-cases can make use of a certain amount of prior knowledge to optimize the doctor's experience, for example by pre-fetching relevant prior or current images to the local system, again to prevent interactive delays or the need to use unfamiliar user interfaces. After all, it is a rare patient that is seen without an appointment.
However, in the interim, there is no need to wait for these archives and repositories and registries to be built, administered or paid for by someone (else).
In the longer term there will no doubt be competing protocols to DICOM network services for the store-and-forward transaction (which might be zip file encapsulated, and secure or grid ftp based) and for retrieval transactions (which might be web services based). I am sure that both sending and receiving systems will grow to support multiple different transactions as this shakes itself out. The store-and-forward payload will always remain pure DICOM of course, since there is no competition for the "file format" itself (as opposed to the interactive on demand display use case, for which protocols like JPIP and its ilk show promise).
But you don't need to wait for a new infrastructure, or new standards, or a new incentive (reimbursement or regulatory) model to deal with some of the easy use-cases. Just go ahead and do it with DICOM.
David
The slings and arrows of outrageous delay and inconvenience,
Or to take arms against a lack of bandwidth,
And by anticipating avoid it?"
Summary: Sharing of images across the network is a potentially attractive alternative to CDs, but image sets are large, bandwidth is limited, lossy compression is controversial, security infrastructure is non-existent, and recipients are busy and impatient; why have to pull images on demand slowly or with poor quality, when one can anticipate where they are needed and push or pre-fetch ? The standards and technology exists now, not tomorrow.
Long version:
There is a renewed enthusiasm for image sharing using the network.
This idea is not new, and for many years now the momentum has been growing to establish standards and build infrastructure to support images as part of the distributed electronic record. In the current US political climate, the huge cost and disparate availability of health care (and imaging utilization in particular) has IT proponents, who are as eager to jump on the stimulus package gravy train as anyone else, seeking to find a "meaningful use" of IT to address the image sharing problem.
The current generation of computer literate doctors is used to the convenience of the Internet, and on-demand access to arbitrary information à la Google. It is reasonable for them to demand that they have access with a similar level of convenience to patient information, including images.
However, this requirement is easy to demand but not so easy to satisfy. Practical realities intrude: radiological images are more complex than consumer grade images, need more manipulation for adequate interactive visualization, tend to be very large individually (e.g., digital mammograms) and occur in very large sets (e.g., thin slice CT or CT/PET). Yet bandwidth, particularly in the "last mile" from the providers to the Internet, is limited.
Some tout the use of lossy image compression as a panacea, yet this remains controversial and adequately powered studies to "prove" that such compression does not lower the quality of care are few in number. Others say the bandwidth problem will go away over time, yet in underserved rural areas, and particularly in medical offices, high-speed DSL or cable access is limited; for large institutions with very large volumes, very high bandwidth "pipes" may add significantly to operational cost. Even with high bandwidth, high latency can degrade the transfer rates achieved and impact any interactive protocol perceptibly. Like healthcare in general, not everyone has equal access at equal cost.
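To make the bandwidth concern concrete, a back-of-envelope calculation shows why pulling a large study interactively is painful; every number in this sketch is an illustrative assumption, not a measurement:

```python
# Back-of-envelope transfer-time estimate; all numbers are illustrative
# assumptions, not measurements.

def transfer_minutes(num_images, bytes_per_image, link_mbps, efficiency=0.7):
    """Minutes to move a study; efficiency discounts overhead and latency."""
    total_bits = num_images * bytes_per_image * 8
    return total_bits / (link_mbps * 1_000_000 * efficiency) / 60

# A thin-slice CT: ~2000 slices of 512x512 16-bit pixels (~0.5 MB each),
# i.e. about 1 GB uncompressed, over a 10 Mbps "last mile" link:
print(round(transfer_minutes(2000, 512 * 1024, 10), 1))  # roughly 20 minutes
```

Twenty-ish minutes is tolerable for an overnight push but hopeless for a doctor waiting at a workstation, which is the whole argument for anticipating the need.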
Leaving the DICOM images "on the server" and interacting remotely with an application, either using a proprietary approach like Terarecon's, or a generic application sharing approach like Citrix, or a web browser approach that serves up consumer format images on demand, is certainly possible. These approaches introduce new classes of problems, such as access control and familiarity with the user interface. One frequently hears from radiologists who serve a number of hospitals about how irritating it is to have to learn the remote interface of each of the different installed PACS, for example. This is exactly the same problem that the AMA has raised about the different viewers on different vendors' CDs. Provisioning every possible user with the appropriate identity and authentication information, and then assuring they have access to what they should have and nothing else, is also obviously a major administrative task. In the absence of a national or regional infrastructure for centralizing such provisioning, or a framework of "trust" between providers, this will remain a difficult problem. Providing patients with access to their own information and images adds another dimension to the scale and complexity problem.
For years now IHE has been promoting its cross-enterprise document sharing (XDS) architecture as a potential solution. The idea is to have each source register what it has available with a centrally accessible registry, and then consumers use the location information in the registry to go back to the source repository to pull what they need. The underlying technology is appropriately buzzword compliant (XML and SOAP and all that), and there is an additional layer to deal with the number and size of images (XDS-I, currently undergoing revision to become XDS-I.b using MTOM/XOP to efficiently handle the binary image data). However, this architecture still presupposes an unparalleled (and as yet largely unimplemented) degree of cooperation between everyone involved in the sharing problem.
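The registry/repository separation is easy to sketch. The following toy fragment illustrates the pattern (register metadata centrally, pull the document from the source); the class and field names are purely illustrative and bear no relation to actual XDS metadata:

```python
# Toy sketch of the register-then-pull pattern: metadata goes to a central
# registry, the documents stay in the source repository.

class Registry:
    def __init__(self):
        self.entries = []  # metadata only, no documents

    def register(self, patient_id, doc_type, repository, doc_id):
        self.entries.append({"patient": patient_id, "type": doc_type,
                             "repository": repository, "doc_id": doc_id})

    def find(self, patient_id, doc_type):
        return [e for e in self.entries
                if e["patient"] == patient_id and e["type"] == doc_type]

class Repository:
    def __init__(self):
        self.documents = {}

    def store(self, doc_id, payload):
        self.documents[doc_id] = payload

    def retrieve(self, doc_id):
        return self.documents[doc_id]

# Source side: store locally, register centrally.
registry, imaging_center = Registry(), Repository()
imaging_center.store("1.2.3.4", b"<DICOM study bytes>")
registry.register("PID-42", "CT", imaging_center, "1.2.3.4")

# Consumer side: query the registry, then pull from the named repository.
hit = registry.find("PID-42", "CT")[0]
study = hit["repository"].retrieve(hit["doc_id"])
```

Note that even this toy requires every party to agree on the registry, the patient identifiers and the metadata vocabulary, which is exactly the cooperation problem.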
Healthcare providers do not normally cooperate, at least in the US; indeed the very essence of the healthcare system encourages them to compete, and cooperation is anathema to them. Does it make sense to rely on the future deployment of an infrastructure that requires cooperation, likely comes with additional cost, and offers little incentive to participate ? Who are the providers already interested in providing information to ? Their "customers" obviously, the referring doctors who order (or in civilized countries "request") the imaging services in the first place.
These referring doctors span the gamut in terms of technologic sophistication and requirements. Some may be satisfied with just the report. Many though, and often it depends on the specific patient and their condition, will need some access to the images. A significant proportion will need access to the original DICOM images in order to perform their own interpretation or to use their own visualization or planning tools. Yet these are busy people who have neither the patience, nor the time to waste, nor are reimbursed for, screwing around with artificial technological barriers to using the images, such as network delays or unfamiliar user interfaces.
Should it not be a simple matter in this day and age to send the images to where they are needed, just as one sends (faxes or emails) the report, a well established practice ?
Obviously this is possible. No imaging facility is going to perform an examination without knowing who ordered (requested) it, so the information about where to send it exists. If the potential recipients had a system capable of receiving it, this process could be automated.
Just as I have advocated in the past that referring doctors set up a system in their office and have their staff handle CD importing, so that such images are ready to view in their system when they need them, one could envisage the same or a similar in-office system with a port listening to the outside world ready to receive incoming images. Just like the fax machine that is sitting there waiting to receive phone calls.
Do the standards and technology exist to do this safely and securely right now ? Of course they do. All one would need is to perform an ordinary DICOM network transfer of the images from the sending site (imaging center) to the receiving site (referring doctor). Should it be a secure transfer to protect confidentiality ? Of course it should, but one does not need to set up a VPN to every possible referring doctor, nor from every possible sending site, since DICOM already defines transport over TLS (SSL), the same encryption protocol that one uses for ecommerce with sites with whom one has no pre-established relationship.
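As a rough sketch of what the secure channel amounts to (not a DICOM implementation), the Python standard library alone can establish a TLS connection with certificate verification against well-known CAs; a real system would then speak the DICOM upper layer protocol over that socket:

```python
# Sketch only: the TLS channel underneath a secure DICOM transfer.
# create_default_context() verifies the server certificate against
# well-known CAs and checks the hostname, just like a web browser;
# no VPN or pre-established relationship is needed.

import socket
import ssl

def open_secure_channel(host, port=2762):  # 2762 is the registered DICOM TLS port
    context = ssl.create_default_context()
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)

# The security posture is visible without a live connection:
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED  # server certificate required
assert ctx.check_hostname                    # and it must match the hostname
```

The point is that the hard cryptographic part is commodity infrastructure; only the DICOM association on top of it is domain-specific.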
Does one need any identification or authentication infrastructure to achieve these ? Beyond perhaps checking that the receiving site has a valid TLS certificate (signed by a well-known certificate authority, just like for web browsing), the answer is no. The fact that the recipient ordered (requested) the examination should be sufficient to establish that they are entitled to access the images, for example. By analogy, one does not require any special authentication to receive the faxed report.
Would recipients potentially be vulnerable to "DICOM image spam" ? Well, theoretically, if some attacker were that determined, but this would easily be solved by "filtering" on a list of known and approved source sites.
Is there any risk to the integrity of the sending site ? Well no, because this is an outbound transfer (push), and there is no need for the sending site to respond to queries (unless for some reason, it wants to).
This is pretty easy stuff to set up, and apart from the encryption layer, involves nothing that imaging vendors are not already intimately familiar with. No fancy web services stuff, no XML or SOAP messages. Just plain old boring store-and-forward point-to-point DICOM. And there are certainly already software tool kits that provide support for the secure transfer of DICOM images over TLS. Some of these tool kits also support the use of the various standard lossless and lossy compression "transfer syntaxes" that DICOM defines, including JPEG 2000, which can be used as appropriate and negotiated automatically depending on the receiving system's capabilities. Is DICOM the fastest possible network transfer protocol ? Well arguably not, depending on the latency of the network and the quality of the implementation, but in a store-and-forward paradigm this is much less of a factor, and there are many ways to optimize DICOM transfers if required, without throwing away the interoperability of a well known protocol.
What about confirming the success of the transfer ? One could use the existing DICOM Storage Commitment in the same way IHE uses it between modalities and the PACS, and/or one could include a "manifest" of what should have been sent, e.g., as a DICOM SR the way the IHE Teaching File and Clinical Trial Export (TCE) profile does.
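Verifying receipt against such a manifest is conceptually just a set comparison; a minimal sketch (with made-up UIDs) might look like:

```python
# Sketch of confirming completeness against a manifest: the sender ships a
# list of SOP Instance UIDs it intends to send, the receiver checks what
# actually arrived. UIDs below are made up.

def check_manifest(manifest_uids, received_uids):
    manifest, received = set(manifest_uids), set(received_uids)
    return {
        "complete": manifest <= received,
        "missing": sorted(manifest - received),
        "unexpected": sorted(received - manifest),
    }

result = check_manifest(
    manifest_uids=["1.2.3.1", "1.2.3.2", "1.2.3.3"],
    received_uids=["1.2.3.1", "1.2.3.3"])
print(result["missing"])  # ['1.2.3.2']
```

Anything in the "missing" list is grounds for a retry or a phone call, which is exactly what Storage Commitment and a manifest buy over blind fire-and-forget.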
What about the matter of inconsistent patient identifiers ? How is the receiving site going to know how to match the incoming images that use the imaging center's patient identifier against their own internal patient identifier ? This is certainly a non-trivial problem, but just as when paying an invoice a business normally tracks the orderer's purchase order number in addition to its own numbering system, there is no reason why an imaging system cannot do the same. There are certainly HL7 and DICOM attributes related to dealing with this class of problem, but in the short term and in the absence of a consistent convention for handling this, it may be necessary to have a heuristic matching algorithm and/or human oversight of this "import reconciliation" problem. Perhaps one day there will be a national patient identifier to reduce the complexity of this problem, but there will always be errors that need reconciliation. The same class of problem exists with CDs, and the IHE Import Reconciliation Workflow (IRWF) profile provides ways to deal with this, either in an unscheduled manner by using patient identity queries, or in a scheduled manner, whereby the system that placed the order in the first place could be expecting the result in the form of images and perform the matching against a reduced set of potential alternatives.
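A heuristic matching step of the kind suggested above might, as a minimal sketch, compare normalized name and birth date against the local registry and defer anything ambiguous to a human; the field names and matching rule here are illustrative assumptions, not IRWF requirements:

```python
# Crude "import reconciliation" heuristic: match incoming demographics
# against the local patient registry when the sender's patient ID means
# nothing locally. Fields and thresholds are illustrative assumptions.

def normalize(name):
    # Collapse case, whitespace and DICOM's '^' name delimiters.
    return "".join(name.upper().split()).replace("^", "")

def match_patient(incoming, local_patients):
    """Return (local_id, confident) or (None, False) if no single match."""
    candidates = [
        p for p in local_patients
        if normalize(p["name"]) == normalize(incoming["name"])
        and p["birth_date"] == incoming["birth_date"]]
    if len(candidates) == 1:
        return candidates[0]["id"], True
    return None, False  # ambiguous or unknown: route to human oversight

local = [
    {"id": "MRN001", "name": "DOE^JANE", "birth_date": "19700101"},
    {"id": "MRN002", "name": "DOE^JOHN", "birth_date": "19650315"},
]
mrn, ok = match_patient({"name": "Doe^Jane", "birth_date": "19700101"}, local)
print(mrn, ok)  # MRN001 True
```

The essential design point is the fall-through to a human: an automated matcher should only ever commit when the match is unique.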
Note that this entire solution avoids the need for any type of centralized infrastructure. It just needs the sending site to know the "DICOM address" (host, port and AET) of the ordering (requesting) doctor's site to which to send the images. This could be configured in the system in advance, just like the fax number for the report, and it could be included in every order (printed or electronic) to allow manual or automatic addition of new sites.
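Such configuration need be no more than a table mapping each orderer to a (host, port, AE title) triple, the DICOM equivalent of the fax-number list; the entries below are of course fictitious (2762 being the port registered for DICOM over TLS):

```python
# A minimal "DICOM address book". All hosts and AE titles are fictitious.

DICOM_ADDRESS_BOOK = {
    "Dr. Smith (Orthopedics)": ("dicom.smithortho.example.com", 2762, "SMITH_ORTHO"),
    "Dr. Jones (Family Practice)": ("images.jonesfp.example.com", 2762, "JONES_RX"),
}

def resolve_destination(referring_doctor):
    """Return the (host, port, AE title) triple, or None if not configured."""
    return DICOM_ADDRESS_BOOK.get(referring_doctor)

host, port, aet = resolve_destination("Dr. Smith (Orthopedics)")
print(aet)  # SMITH_ORTHO
```

An unresolved orderer simply falls back to today's manual process (CD or fax), so the scheme degrades gracefully as new sites are added.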
Ideally the sending capability would be built in to imaging centers' information systems and PACS. Could one retrofit an existing RIS/PACS with this capability with a third-party device or piece of software ? Certainly; one could envisage a system in which the modality worklist provider was polled on a regular basis to extract information about what examinations had been requested, and within the worklist entries there should be identification of the referring doctor. Such a system would then query the PACS to see what images were available for these requests, retrieve them, and forward them on to the pre-configured recipients' sites. Other DICOM services, such as Modality Performed Procedure Step (MPPS) and Instance Availability Notification (IAN) might be of additional assistance in making this process more reliable or timely, and in particular help assure that a complete set of images was transferred. Alternatively, rather than polling the MWL provider, one might listen to an HL7 ADT and Order Entry feed to extract the order information or gather additional details.
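The polling logic of such a retrofit forwarder reduces to a simple loop; in this sketch the worklist poll, PACS query and store operations are stubbed out (in a real system they would be C-FIND against the MWL SCP, C-FIND/C-MOVE against the PACS, and C-STORE to the recipient respectively):

```python
# Sketch of the retrofit "forwarder": the three callables stand in for
# real DICOM operations and are stubbed here for illustration.

def forward_completed_studies(poll_worklist, query_pacs, dicom_send,
                              address_book, already_sent):
    """Push newly available studies to each order's referring doctor."""
    sent_now = []
    for entry in poll_worklist():
        destination = address_book.get(entry["referring_doctor"])
        if destination is None:
            continue  # no configured recipient; needs manual handling
        for study_uid in query_pacs(entry["accession_number"]):
            if (study_uid, destination) in already_sent:
                continue  # don't re-send on every polling cycle
            dicom_send(destination, study_uid)
            already_sent.add((study_uid, destination))
            sent_now.append(study_uid)
    return sent_now

# Exercise the loop once with trivial stubs:
log, seen = [], set()
sent = forward_completed_studies(
    poll_worklist=lambda: [{"referring_doctor": "Dr. Smith",
                            "accession_number": "ACC1"}],
    query_pacs=lambda accession: ["1.2.3.1"],
    dicom_send=lambda dest, uid: log.append((dest, uid)),
    address_book={"Dr. Smith": ("host.example.com", 11112, "SMITH")},
    already_sent=seen)
print(sent)  # ['1.2.3.1']
```

The "already sent" record is what keeps a periodic poll idempotent; MPPS or IAN events could replace the polling entirely where they are available.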
The bottom line though is that the images could be in the hands of the remote referring doctor before the radiologist has even had a chance to look at them, a practice that is well established as appropriate within a typical enterprise's PACS and hence should be available to outsiders as well.
What if a mistake is made, and the images need to be corrected later ? This is the same class of problem that one faces with film, or faxed reports or CDs, and in the short term there likely needs to be a human process involved to be sure that everyone is notified. That said, the more immediate and automated transfers become, the more this is potentially an issue; it is shared by all distributed infrastructures, whether point-to-point or centralized or federated. IHE has started to define transactions for flagging images as rejected (using a DICOM Key Object Selection Document with a defined title), with the intent that the corrected images then be resent. This work has been started with the Image Rejection Note Stored transaction of the IHE Mammography Acquisition Workflow supplement.
What if there are multiple potential recipients, i.e., a "cc list" on the order, such as is often the case when a specialist orders (requests) the examination with the intent of referring the patient onwards, as well as sending a copy to the primary care doctor ? Simple, forward the images to everyone on the cc list. From a consent and HIPAA Privacy Rule authorization perspective, it would be the responsibility of the person writing the order (request) to be sure that everyone on the cc list was appropriately authorized.
What if the patient wants a copy ? Well, it is unlikely that they would have their own personal receiving setup, and unreasonable to expect the imaging provider to support every such recipient (at least until this became as ubiquitous as email). There is always CD of course, but if the patient had a personal electronic health record provider (whoever that might be), they would be able to designate that provider's address as a target, and the imaging provider could send the images there as well. Likely there would be a few such providers configured in advance and it would merely be a matter of recording which one with the patient's registration information.
Are there other use-cases beyond the simple "order imaging, perform imaging, send to orderer" example ? Certainly there are. The typical emergency case referral, in which a patient is imaged at the first site then transferred for further care, is an example where the same point-to-point store-and-forward paradigm can be used. In this case, though, one needs an infrastructure with sufficient bandwidth to cope with the disaster scenarios where a lot of images on multiple patients need to be transferred very quickly; as a consequence, a more formal arrangement between the two sites is probably necessary than the more ad hoc "email like" pattern for an arbitrary and extensible set of referring doctors.
Teleradiology use-cases, either for a specialist radiologist consultation, or primary interpretation "at home", or even a preliminary interpretation off-shore, are other examples in which exactly the same store-and-forward paradigm is applicable. This is nothing new, and people have been doing exactly this for many years, using DICOM C-STORE transactions with or without compression in some cases and proprietary protocols in others. Some such teleradiology scenarios could be better supported by removing the patient's true identity first and replacing it with a reversible pseudonym (e.g., for specialty or off-shore teleradiology), but that is a subtlety and not a pre-requisite.
All that is new here is essentially recognition that every potential recipient needs a secure DICOM "address", just like an email address, that sending sites be configured to support a multitude of them, and that recipients need to have an Internet connected "DICOM listener" ready to receive images into their own preferred viewing system. I.e., it is a matter of taking well-established existing technology and making it routine rather than occasional.
Does this undermine the need for centralized and regional archives and repositories and registries, and web services orientated infrastructures that are more easily integrated with other sources of information than images ? No, certainly it does not, since there are many other use-cases in which the doctor needs to search for information whose need cannot be so easily anticipated. Still though, many of those use-cases can make use of a certain amount of prior knowledge to optimize the doctor's experience, for example by pre-fetching relevant prior or current images to the local system, again to prevent interactive delays or the need to use unfamiliar user interfaces. After all, it is a rare patient that is seen without an appointment.
However, in the interim, there is no need to wait for these archives and repositories and registries to be built, administered or paid for by someone (else).
In the longer term there will no doubt be competing protocols to DICOM network services for the store-and-forward transaction (which might be zip file encapsulated, and secure or grid ftp based) and for retrieval transactions (which might be web services based). I am sure that both sending and receiving systems will grow to support multiple different transactions as this shakes itself out. The store-and-forward payload will always remain pure DICOM of course, since there is no competition for the "file format" itself (as opposed to the interactive on demand display use case, for which protocols like JPIP and its ilk show promise).
But you don't need to wait for a new infrastructure, or new standards, or a new incentive (reimbursement or regulatory) model to deal with some of the easy use-cases. Just go ahead and do it with DICOM.
David
Saturday, November 29, 2008
RSNA 2008 RFID Tracking of Attendees
Summary: RSNA is tracking attendees in the vendors' exhibit areas with RFID tags, with very little notice to the attendees; if you value your privacy, opt out or destroy the RFID tag in the back of your badge.
Long version:
I rarely duplicate an entire post from something that I have contributed to another forum, Aunt Minnie on this occasion, but in this case I feel strongly enough to reproduce the material in its entirety here.
Last year I got a bit annoyed that RSNA had deployed RFID tags in the attendees' badges, for the purpose of piloting tracking attendance in the technical exhibits (i.e., vendors' booths), after Dalai pointed this out in his blog. See "http://www.auntminnie.com/forum/tm.aspx?m=120792". Mostly I was concerned about it not being made very clear to folks that this was going on, rather than because there was anything particularly nefarious about it.
This year, RSNA is again using this technology, and if you look for example at the back of my badge, you can see it taped underneath a label that identifies it:

In the RSNA Pocket Guide, the subject is also specifically mentioned, with instructions on where to go to "opt out" if you want:

Here is an article from RSNA 2008 for the exhibitors, entitled "Increasing Revenue with RFID Exhibit Booth Tracking", which puts the objectives in perspective. Note that this is not a totally clandestine effort, and though in my opinion notice to registrants is hardly prominent, it was mentioned in the "November RSNA News", which contains similar text to what is in the pocket guide. What really bothers me is that there seems to be no mention of it at all in the "Registration Materials", at least as far as I can find (please correct me if I am wrong about this).
Now, whilst I am happy for RSNA to know that I attended, and happy to know which scientific sessions I participated in to help their planning, I am not at all happy about providing that information to the vendors. So, whilst I do not yet know what their "opt out" mechanism is, I suspect it is to record your details to be excluded from the reports sent to the vendors (they did that on request last year in my case).
So this year I am going to be proactive and remove or destroy the RFID tag that is in my badge. This is actually easier said than done, because it turns out they are tough little f..rs. The sticky label on the back of the badge will not peel off cleanly. Attacking the chip or antenna with a scalpel reveals that they are very hard, and without any way of confirming that the device is actually no longer working, doing a really good job (e.g., on the chip with a hammer) is going to make a mess of the badge. A Google search on the Internet (see for example, "How to kill your RFID chip") reveals that a short time in a microwave oven does the job, though at the risk of starting a fire, which doesn't sound cool. Also, most attendees won't have a microwave in their hotel room. I tried it on my wife's badge first (!), and when that didn't catch fire, did my own, and whacked the chip with a hammer, nailed it with a punch a couple of times, and cut the antenna. That said, I would still rather peel the whole thing off if it didn't look like the whole badge would tear apart.
Anyway, if you value your privacy, as I do, then I suggest you find a way to deactivate the device before you go wandering around, and if you forget, make sure to go and opt out to prevent the information being disseminated.
David
PS. Another thing that bothered me last year was that the signage that notifies attendees that this sort of monitoring is going on was not terribly prominent. I will update this post as I wander around and investigate.
Saturday, November 22, 2008
The DICOM Exposure attribute fiasco
Summary: The original ACR-NEMA standard specified ASCII numeric data elements for Exposure, Exposure Time and X-Ray Tube Current that could be decimal values; for no apparent reason DICOM 3.0 in 1993 constrained these to be integers, which for some modalities and subjects are too small to be sufficiently precise; CPs and supplements since have been adding new data elements ever since to fix this with different scaling factors and encodings, so now receivers are faced with confusion; ideally receivers should look for all possible data elements and chose to display the most precise. Next time we do DICOM, we will do it right :)
Long Version:
Just how difficult can those of us who write standards for a living actually make an implementer's life ? Pretty difficult, is the answer, though largely this occurs as we strive to avoid breaking the installed base of existing applications that might never be upgraded.
Today I was responding to a question from a software engineer at a vendor of veterinary radiology equipment who had come to realize that the "normal" attribute for encoding Exposure Time was insufficiently precise, given that it was restricted to being an Integer String, and small things, like cats, may have exposure times shorter than a whole second. I say "normal attribute", because the original CR IOD, and most other IODs since, have used this and other attributes with similarly constrained encoding to describe X-Ray technique, and in some cases made these attributes mandatory or conditional. The attributes I am talking about are Exposure, Exposure Time and X-Ray Tube Current.
A naive approach would be to just change the VR for the existing data element, say from Integer String (IS) to Decimal String (DS), which would then allow fractional values. The problem with this solution would be that recipients that expected a string formatted in a particular manner might fail, for example if the parser, or display text field or database column did not expect decimal values. I.e., existing implementations might be broken, which is something we always try to avoid when "correcting" the standard.
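A trivial illustration of the breakage: a fielded receiver that parses an Integer String value with an integer parser will simply fail on a decimal value, and that is exactly the installed base one dare not break. A deliberately naive sketch:

```python
# A deliberately naive receiver of the sort deployed in the field: it
# parses an Exposure Time value assuming the VR is Integer String (IS).

def parse_integer_string(value):
    # An IS value is by definition a whole number; int() enforces that.
    return int(value.strip())

print(parse_integer_string("500"))   # fine for a whole number of ms
try:
    parse_integer_string("40.5")     # a fractional exposure for a small cat
except ValueError:
    print("existing IS parser rejects the decimal value")
```

The same breakage would occur in a fixed-format display field or an integer-typed database column, which is why the VR cannot simply be "upgraded" in place.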
You might well ask why the standard makes the distinction between integer strings and decimal strings in the first place, or indeed allows for both binary and string encoding of integers and floating point values. For example, a number might be encoded as an integer string (IS), decimal string (DS), unsigned 16 bit short (US) or 32 bit long (UL) or signed 16 bit (SS) or signed 32 bit (SL) binary integer, or as a 32 bit (FL) or 64 bit (FD) IEEE floating point binary value. The original ACR-NEMA standard had fewer and less specific encoding choices; it specified only four choices for value representation, 16 bit binary (BI), 32 bit binary (BD), ASCII numeric (AN) and ASCII text (AT). Note that there was no distinction between signed and unsigned binary values, and no distinction between integer and decimal string numeric values, and no way to encode floating point values in a binary form (indeed the standard for encoding binary floating point values, IEEE 754, was released in the same year as the first ACR-NEMA standard, 1985, and certainly was not universally adopted for many years). Anyway, if you review the list of data elements, the authors of the ACR-NEMA standard seem to have taken the approach of encoding:
Unfortunately, even though the DICOM standard introduced the concept of sending not only the value of a data element but also its type in the message, using the so-called "explicit value representation" transfer syntaxes, the new standard continued to support, and indeed require as the default, the "implicit value representation" that was equivalent to the way some vendors had implemented the ACR-NEMA standard over the network. Requiring only explicit VR would have allowed recipients to use the VR transmitted to decide what to do with the value, and opened the door to "fixing" incorrect VRs in the data dictionary. One could have required that recipients check and use the explicit VR. Unfortunately, by permitting implicit VR transfer syntaxes, the VR has to remain fixed forever, otherwise receivers have no way of knowing what to do with a value that is of an unexpected form. I am told that there was significant discussion of this issue with respect to the 1992 RSNA demonstration, and that implicit VR was allowed for the demonstration to maximize participation, with the intent that it not be included in the standard published in 1993, but there was not sufficient support to follow through with this improvement after all. In hindsight it is easy to criticize this short-sighted decision. On interchange media, added in 1995, only explicit VR transfer syntaxes are permitted, but by then it was too late.
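The difference is easy to see at the byte level. In this sketch of a single little-endian data element, the explicit VR form carries the two-character VR code in the message, while the implicit form does not, so the receiver's data dictionary is the only source of truth and the VR of an existing element can never safely change:

```python
# Byte-level sketch of implicit vs. explicit VR encoding of one
# little-endian data element (tag, [VR], length, value).

import struct

def encode_explicit_vr(group, element, vr, value):
    # Short-form explicit VR: 4-byte tag, 2-byte VR code, 2-byte length, value.
    return (struct.pack("<HH", group, element) + vr
            + struct.pack("<H", len(value)) + value)

def encode_implicit_vr(group, element, value):
    # Implicit VR: 4-byte tag, 4-byte length, value; no VR on the wire at all.
    return struct.pack("<HHI", group, element, len(value)) + value

# Exposure Time (0018,1150), Integer String value "500", space-padded
# to even length as DICOM requires:
explicit = encode_explicit_vr(0x0018, 0x1150, b"IS", b"500 ")
implicit = encode_implicit_vr(0x0018, 0x1150, b"500 ")
assert b"IS" in explicit      # the receiver is told the VR...
assert b"IS" not in implicit  # ...or must already know it from its dictionary
```

With explicit VR a receiver could at least detect (and adapt to) a changed VR; with implicit VR it can only misinterpret the bytes, which is the crux of the problem described above.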
So what does all this mean for our exposure-related attributes ? Given that one cannot reasonably change the VR of an existing data element, the only option was to add a new one. So this is what CP 77 did:
There are several problems, other than the VR and the scaling factor, with this approach of fixing inappropriate VRs by adding optional attributes that mean the same thing as what they are intended to "replace", without actually retiring and removing the old attribute. Specifically:
The problem with these new data elements is that now that they are in the data dictionary, some creative implementers of non-enhanced images have started to stuff them into the "old" IODs in order to send values with greater precision, instead of sending the intended CP 77 and CP 187 data elements. Strictly speaking this is legal as a so-called "Standard Extended SOP Class", but it creates an even greater problem for the receivers. When I first encountered someone doing this, I added a specific check to my dciodvfy validator to display an error if these attributes are present when they should not be in the DX IOD, and I have subsequently added the check to other "old" IODs as well, including CR, XA/XRF and CT; I also implemented some limited consistency checking when multiple attributes for the same concept are present, since I encountered examples where completely different values were present that made no sense at all. As more and more modalities implement the Enhanced family of objects, however, and include the ability to "fall back" to sending the "old" objects if the SCP does not support the new ones, and do it by copying the "new" attributes from the functional group sequences into the top level datasets of old IOD objects rather than converting them to the "old" attributes, we may see more proliferation of a multitude of different data elements in which the exposure parameters might be encoded.
So back to the problem of what a poor receiver (of non-enhanced IOD) images is to do ? The bottom line in my opinion is that a modern receiver should check for the presence of any of the alternative attributes that encode the exposure parameters, and use whatever they find in order of greater precision. I implemented this rather crudely recently in the com.pixelmed.display.DemographicAndTechniqueAnnotations class in my PixelMed toolkit, if you are interested in taking a look at one approach to this; look for the use of the getOneOfThreeNumericAttributesOrNull() method.
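As a minimal sketch of that receiver-side strategy, one might check a list of alternative elements in decreasing order of precision and use the first one found; the element keywords and scalings below illustrate the pattern but should be treated as assumptions rather than an authoritative list:

```python
# Receiver-side selection among alternative exposure-time elements.
# The keys are hypothetical keywords for: a decimal element already in ms,
# an integer element in microseconds, and the original integer-ms element;
# precedence runs from most to least precise.

def best_exposure_time_ms(dataset):
    """Return exposure time in ms from the most precise element present."""
    if "ExposureTimeInms" in dataset:      # decimal, already in ms
        return float(dataset["ExposureTimeInms"])
    if "ExposureTimeInuS" in dataset:      # integer microseconds
        return int(dataset["ExposureTimeInuS"]) / 1000.0
    if "ExposureTime" in dataset:          # original Integer String, in ms
        return float(int(dataset["ExposureTime"]))
    return None

# A 40.5 ms feline exposure survives only via the finer-grained element:
print(best_exposure_time_ms({"ExposureTime": "40",
                             "ExposureTimeInuS": "40500"}))  # 40.5
```

A production version would also cross-check the alternatives for consistency, as dciodvfy does, rather than silently trusting the first hit.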
If the foregoing sounds a little critical and sarcastic, it is intended to be. I continue to amaze myself with my own poor expedient decisions, lack of consistency and frequent carelessness when working on corrections and additions to the DICOM standard, and so this missive is intended to be as self-deprecating as it is critical of my contemporaries and predecessors. Much as we would like to change DICOM to make it "perfect", the need to correct problems and add functionality yet avoid breaking things that already work and avoid raising the implementation hurdle too high to be realistic are overriding; the result of compromise is significant "impurity".
If we ever had the chance to start DICOM all over again and "do it right", I am sure that despite our best intentions we would still manage to screw it up in equally egregious ways. We sometimes joke about doing a new standard called just "4", so-called because it would be the successor to DICOM 3.0, would not necessarily be just about images, and which would be an opportunity to skip the past the morass that is HL7 version 3. I doubt that we would really do much better and would no doubt encounter Fred Brooks' "second system syndrome". Indeed, DICOM 3.0 being the successor to ACR-NEMA already suffers in that respect, perhaps being accurately described as an "elephantine, feature-laden monstrosity". From what little I know about HL7 v3, it is not exempt either.
David
Long Version:
Just how difficult can those of us who write standards for a living actually make an implementer's life ? Pretty difficult, is the answer, though largely this occurs as we strive to avoid breaking the installed base of existing applications that might never be upgraded.
Today I was responding to a question from a software engineer at a vendor of veterinary radiology equipment who had come to realize that the "normal" attribute for encoding Exposure Time was insufficiently precise, given that it was restricted to being an Integer String, and small things, like cats, may have exposure times shorter than a whole second. I say "normal attribute", because the original CR IOD, and most other IODs since, have used this and other attributes with similarly constrained encoding to describe X-Ray technique, and in some cases made these attributes mandatory or conditional. The attributes I am talking about are:
- Exposure (0018,1152), which is IS VR
- Exposure Time (0018,1150), which is IS VR
- X-Ray Tube Current (0018,1151), which is IS VR
A naive approach would be to just change the VR for the existing data element, say from Integer String (IS) to Decimal String (DS), which would then allow fractional values. The problem with this solution would be that recipients that expected a string formatted in a particular manner might fail, for example if the parser, or display text field or database column did not expect decimal values. I.e., existing implementations might be broken, which is something we always try to avoid when "correcting" the standard.
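To make that failure mode concrete, here is a minimal sketch (hypothetical receiver code, not from any real toolkit) of what happens when a receiver that parses Exposure Time as an Integer String suddenly receives a decimal string:

```java
public class IsVersusDs {
    // A receiver written against the IS VR will typically parse like this:
    static int parseExposureTimeAsIS(String value) {
        return Integer.parseInt(value.trim());
    }

    public static void main(String[] args) {
        // An integral millisecond value parses fine
        System.out.println(parseExposureTimeAsIS("50"));
        try {
            // A fractional value (the sort a cat might need) breaks the parser
            parseExposureTimeAsIS("2.5");
        } catch (NumberFormatException e) {
            System.out.println("broken receiver: " + e.getMessage());
        }
    }
}
```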
You might well ask why the standard makes the distinction between integer strings and decimal strings in the first place, or indeed allows for both binary and string encoding of integers and floating point values. For example, a number might be encoded as an integer string (IS), decimal string (DS), unsigned 16 bit short (US) or 32 bit long (UL) or signed 16 bit (SS) or signed 32 bit (SL) binary integer, or as a 32 bit (FL) or 64 bit (FD) IEEE floating point binary value. The original ACR-NEMA standard offered fewer and less specific encoding choices; it specified only four value representations: 16 bit binary (BI), 32 bit binary (BD), ASCII numeric (AN) and ASCII text (AT). Note that there was no distinction between signed and unsigned binary values, no distinction between integer and decimal string numeric values, and no way to encode floating point values in a binary form (indeed the standard for encoding binary floating point values, IEEE 754, was released in the same year as the first ACR-NEMA standard, 1985, and was certainly not universally adopted for many years). Anyway, if you review the list of data elements, the authors of the ACR-NEMA standard seem to have taken the approach of encoding:
- structural elements related to the encoding of the message (like lengths and offsets) and pixel value related (rows, columns, bits allocated) stuff as binary (16 or 32 bit as appropriate),
- "real world" things as ASCII numeric, even things that could have been binary integers, like counts of numbers of images, etc.
Unfortunately, even though the DICOM standard introduced the concept of sending not only the value of a data element but also its type in the message, using the so-called "explicit value representation" transfer syntaxes, the new standard continued to support, and indeed require as the default, the "implicit value representation" that was equivalent to the way some vendors had implemented the ACR-NEMA standard over the network. Requiring only explicit VR would have allowed recipients to use the VR transmitted to decide what to do with the value, and opened the door to "fixing" incorrect VRs in the data dictionary. One could have required that recipients check and use the explicit VR. Unfortunately, by permitting implicit VR transfer syntaxes, the VR has to remain fixed forever, otherwise receivers have no way of knowing what to do with a value that is of an unexpected form. I am told that there was significant discussion of this issue with respect to the 1992 RSNA demonstration, and that implicit VR was allowed for the demonstration to maximize participation, with the intent that it not be included in the standard published in 1993, but there was not sufficient support to follow through with this improvement after all. In hindsight it is easy to criticize this short-sighted decision. On interchange media, added in 1995, only explicit VR transfer syntaxes are permitted, but by then it was too late.
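The practical consequence can be seen by sketching how a data element header is read under each transfer syntax (simplified to little endian and short-form explicit VR lengths only; the stand-in dictionary is illustrative):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Map;

public class VrHeaders {
    // Explicit VR little endian (short form): group(2) element(2) VR(2 chars) length(2).
    // The sender tells the receiver the VR, so the dictionary could evolve.
    static String readExplicitVr(ByteBuffer b) {
        b.order(ByteOrder.LITTLE_ENDIAN);
        int group = b.getShort() & 0xffff;
        int element = b.getShort() & 0xffff;
        String vr = "" + (char) b.get() + (char) b.get();
        int length = b.getShort() & 0xffff;
        return String.format("(%04x,%04x) %s length=%d", group, element, vr, length);
    }

    // Implicit VR little endian: group(2) element(2) length(4) -- no VR on the wire,
    // so the receiver must already "know" it from its own data dictionary,
    // which is why the VR assigned to a tag can never safely change.
    static String readImplicitVr(ByteBuffer b, Map<Integer, String> dictionary) {
        b.order(ByteOrder.LITTLE_ENDIAN);
        int group = b.getShort() & 0xffff;
        int element = b.getShort() & 0xffff;
        int length = b.getInt();
        String vr = dictionary.getOrDefault((group << 16) | element, "UN");
        return String.format("(%04x,%04x) %s length=%d", group, element, vr, length);
    }

    public static void main(String[] args) {
        // Exposure Time (0018,1150), 2 byte value, explicit VR "IS"
        ByteBuffer explicitHeader = ByteBuffer.wrap(new byte[] {
            0x18, 0x00, 0x50, 0x11, 'I', 'S', 0x02, 0x00 });
        System.out.println(readExplicitVr(explicitHeader));

        // Same element, implicit VR: the receiver's dictionary supplies "IS"
        ByteBuffer implicitHeader = ByteBuffer.wrap(new byte[] {
            0x18, 0x00, 0x50, 0x11, 0x02, 0x00, 0x00, 0x00 });
        System.out.println(readImplicitVr(implicitHeader,
            Map.of((0x0018 << 16) | 0x1150, "IS")));
    }
}
```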
So what does all this mean for our exposure-related attributes ? Given that one cannot reasonably change the VR of an existing data element, the only option was to add a new one. So this is what CP 77 did:
- it described the problem with all three data elements
- it described the historic lack of constraints in ACR-NEMA
- it only fixed the problem for one of the data elements (Exposure (0018,1152)), without further explanation as to why only that one was addressed
- it added a new data element, Exposure in μAs (0018,1153), to the data dictionary and added it as an optional attribute in the CR Image Module
- it defined the new attribute to have a scaling factor 1,000 times different from that of the original attribute, which was defined to be in mAs (as is normally displayed to the user)
- it gave the new attribute a VR of IS
One might well ask several questions about this:
- why didn't CP 77 just make the new data element a DS, keeping the same units that were used previously, which are the normal units in which a user expects to see the value displayed ?
- why not just call the data element something like Exposure (Decimal), or indeed use the same name and rename the old one to Exposure (Retired) or similar ?
- why was the old attribute in the CR Image Module not simply retired or deprecated in some other way ?
CP 187 subsequently did much the same for the remaining two data elements, so a sender now has three alternative attributes to choose from:
- Exposure Time in μS (0018,8150), which is DS VR
- Exposure in μAs (0018,1153), which is IS VR
- X-Ray Tube Current in μA (0018,8151), which is DS VR
There are several other problems than the VR and the scaling factor with this approach of fixing inappropriate VRs by adding optional attributes that mean the same thing as what they are intended to "replace", without actually retiring and removing the old attribute. Specifically:
- How is a poor receiver to know which to use if it receives both (the sensible answer is to use the more precise one instead of the less precise one, but the standard does not require that) ?
- What about an old receiver that has never heard of the new attribute (it will display the old less precise one) ?
- Should a sender send both a less precise and a more precise value, just to allow such old receivers to display something rather than nothing (almost certainly yes) ?
Later still, the Enhanced family of objects defined yet another set of attributes for the same concepts, this time with a binary floating point VR:
- Exposure Time in ms (0018,9328), which is FD VR
- X-Ray Tube Current in mA (0018,9330), which is FD VR
- Exposure in mAs (0018,9332), which is FD VR
The problem with these new data elements is that now that they are in the data dictionary, some creative implementers of non-enhanced images have started to stuff them into the "old" IODs in order to send values with greater precision, instead of sending the intended CP 77 and CP 187 data elements. Strictly speaking this is legal as a so-called "Standard Extended SOP Class", but it creates an even greater problem for the receivers. When I first encountered someone doing this, I added a specific check to my dciodvfy validator to display an error if these attributes are present when they should not be in the DX IOD, and I have subsequently added the check to other "old" IODs as well, including CR, XA/XRF and CT; I also implemented some limited consistency checking when multiple attributes for the same concept are present, since I encountered examples where completely different values were present that made no sense at all. As more and more modalities implement the Enhanced family of objects, however, and include the ability to "fall back" to sending the "old" objects if the SCP does not support the new ones, and do it by copying the "new" attributes from the functional group sequences into the top level datasets of old IOD objects rather than converting them to the "old" attributes, we may see further proliferation of the multitude of different data elements in which the exposure parameters might be encoded.
So back to the problem of what a poor receiver of non-enhanced IOD images is to do ? The bottom line in my opinion is that a modern receiver should check for the presence of any of the alternative attributes that encode the exposure parameters, and use whatever it finds, in decreasing order of precision. I implemented this rather crudely recently in the com.pixelmed.display.DemographicAndTechniqueAnnotations class in my PixelMed toolkit, if you are interested in taking a look at one approach to this; look for the use of the getOneOfThreeNumericAttributesOrNull() method.
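A hedged sketch of that "use the most precise value present" logic for exposure time follows; a Map of keyword to string value is a toy stand-in for a real DICOM dataset, and the method here is illustrative rather than the actual PixelMed implementation:

```java
import java.util.Map;

public class ExposureTimePicker {
    // Return exposure time in ms, preferring the most precise encoding present:
    // Exposure Time in ms (0018,9328) FD, then Exposure Time in uS (0018,8150) DS,
    // then plain Exposure Time (0018,1150) IS, which is in whole ms.
    static Double exposureTimeInMs(Map<String, String> dataset) {
        if (dataset.containsKey("ExposureTimeInms")) {          // (0018,9328), FD, ms
            return Double.parseDouble(dataset.get("ExposureTimeInms"));
        }
        if (dataset.containsKey("ExposureTimeInuS")) {          // (0018,8150), DS, us
            return Double.parseDouble(dataset.get("ExposureTimeInuS")) / 1000.0;
        }
        if (dataset.containsKey("ExposureTime")) {              // (0018,1150), IS, ms
            return (double) Integer.parseInt(dataset.get("ExposureTime"));
        }
        return null;    // nothing to display
    }

    public static void main(String[] args) {
        // An object carrying both old and new attributes; the more precise one wins
        System.out.println(exposureTimeInMs(Map.of(
            "ExposureTime", "3", "ExposureTimeInuS", "2500")));  // 2.5, not 3.0
        // An old object with only the IS attribute still displays something
        System.out.println(exposureTimeInMs(Map.of("ExposureTime", "50")));  // 50.0
    }
}
```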
If the foregoing sounds a little critical and sarcastic, it is intended to be. I continue to amaze myself with my own poor expedient decisions, lack of consistency and frequent carelessness when working on corrections and additions to the DICOM standard, and so this missive is intended to be as self-deprecating as it is critical of my contemporaries and predecessors. Much as we would like to change DICOM to make it "perfect", the need to correct problems and add functionality yet avoid breaking things that already work and avoid raising the implementation hurdle too high to be realistic are overriding; the result of compromise is significant "impurity".
If we ever had the chance to start DICOM all over again and "do it right", I am sure that despite our best intentions we would still manage to screw it up in equally egregious ways. We sometimes joke about doing a new standard called just "4", so-called because it would be the successor to DICOM 3.0, would not necessarily be just about images, and would be an opportunity to skip past the morass that is HL7 version 3. I doubt that we would really do much better, and would no doubt encounter Fred Brooks' "second-system effect". Indeed, DICOM 3.0, being the successor to ACR-NEMA, already suffers in that respect, perhaps being accurately described as an "elephantine, feature-laden monstrosity". From what little I know about HL7 v3, it is not exempt either.
David
Sunday, November 16, 2008
Basic CD viewer requirements; extending PDI; software for sending images on CD media
Summary: IHE is defining requirements for basic CD viewers; PDI is being extended to add DVD, USB, compression and encryption; IHE PDI and DICOM CD media require viewers and importers to understand what is on the media; as compression, encryption and new types of images are used, receiving software struggles to keep up; this can be alleviated by executable software on the media that can decompress, decrypt and convert new image types to whatever has been negotiated with the recipient and then transmit them via the local DICOM network.
Long Version:
Since the cardiology community first began standardizing, promoting and adopting DICOM CDs as a means of interchange of images in the early 1990's, and radiology has rapidly caught up, CDs have proven to be wildly successful despite legitimate complaints about interoperability and ease of use. The PDI promotion effort by IHE initially focused on reducing confusion by insisting on only uncompressed images on CD, to reduce the burden on any device or software that the recipient may have installed. Dependence on on-board viewers was somewhat discouraged by IHE, both because of the potential security risk to executing externally supplied code and the variation in features that such viewers support.
As I have discussed previously, referring physicians who are the victims of a multitude of different viewers are "encouraging" us to improve the situation, both by endorsing the use of PDI as opposed to proprietary media, as well as joining with IHE to develop standards for what viewers are required to be able to do, in a manner that makes them intuitive to use. This latter effort is the Basic Image Review Profile. Last week we had our first Radiology Technical Committee meeting to discuss the requirements for this profile. The involvement of the users who are interested in this was extremely encouraging ... no fewer than three neurosurgeons attended the meeting to contribute! We discussed what features any basic viewer should have with respect to loading studies, navigating through them using thumbnails, comparing series side-by-side with synchronized scrolling, panning, zooming and windowing, making simple distance and angle measurements, displaying any report if present, and printing. We also discussed hardware and software requirements for such a viewer, agreeing that it had to run on Windows (blech, but that's reality), and more controversially, to what extent elements of the user interface could be standardized in appearance to make unfamiliar viewers intuitively easy to use. Tooltips are one obvious means to assist with ease of use, but we also agreed to at least attempt to define what tools should be visible in the main interface and what they should look like (e.g., hand for pan, magnifying glass for zoom, etc.). We know there is a balance between consistency across vendors and the added value of proprietary look and feel, but hope that some consensus can be achieved on general principles. One item that everyone seems agreed on is the concept that the "basic" interface should be uncluttered, and "advanced" features should not be visible until they are called for, so the profile may well end up specifying what shall not be there in addition to what shall.
In the same meeting we also discussed extensions to PDI. For some time many applications have been limited by the size of datasets relative to the capacity and speed of uncompressed CD media. Accordingly, after our informal interoperability tests of DVD readability earlier this year at the Connectathon, the idea of extending PDI to support DVD as well as CD has been accepted, and at the same time it makes sense to add support for compression (as DICOM requires for DVD support) as well as for faster media like USB memory sticks and the like. The fuss about encryption of portable media makes this an opportune time to deal with that issue as well, to make sure that there is not a proliferation of proprietary alternatives to the DICOM secure media standard.
Yet extending PDI raises the bar for recipients that want to use their own pre-installed software or devices to display or to import media that may be compressed or encrypted in a manner that older software does not support. At the same time, we are well aware that any media may contain a multitude of different types of images, presentation states, key object selection and structured report documents, and IHE does not constrain this. What this means in practice is that though a viewer or importer (such as a PACS) may well support most of the image types, there may be content that is not successfully displayed or imported, the consequences of which may be unfortunate. The Basic Image Review Profile will address this for on-board viewers by adopting the fundamental principle that a compliant viewer on the media shall be able to view all the DICOM content on the media. That is a "no-brainer", but it doesn't help the pre-installed viewer or importer.
A solution that I have proposed for this that may help is to introduce the concept of "sending software" on the media. That is, even if one does not want to view the content on the media using an on-board viewer, which may or may not be present, easy to use, or even possible to execute on your hardware, it may be possible to execute software that helps to import the content into your own locally installed software. The requirements that I have drafted so far for the PDI extensions supplement include the ability to:
- allow the user to enter the recipient's network location (IP, port, AET)
- read all the content of the media via the DICOMDIR
- select what to send
- coerce patient & study identifiers using local values supplied by the user
- decrypt content if encrypted using the password supplied by the user
- decompress content if the receiving device doesn't support compression
- convert instances whose SOP classes the receiver does not support to one that it does
- transfer everything
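A skeleton of such a sending pipeline might look like the following; everything here is hypothetical scaffolding rather than code from any actual toolkit, with each stub corresponding to one of the requirements above and merely recording its invocation so the order of operations is visible:

```java
import java.util.ArrayList;
import java.util.List;

public class MediaSendPipeline {
    // Records what each stage did, in order, for illustration
    final List<String> log = new ArrayList<>();

    List<String> readDicomDir()            { log.add("read DICOMDIR"); return List.of("IMG0001", "SR0001"); }
    List<String> select(List<String> all)  { log.add("select"); return all; }
    void decryptIfNeeded(char[] password)  { log.add("decrypt"); }
    void coerceIdentifiers(String localId) { log.add("coerce PatientID to " + localId); }
    void decompressIfNeeded()              { log.add("decompress"); }
    void convertUnsupportedSopClasses()    { log.add("convert SOP classes"); }
    void sendAll(String host, int port, String aet) {
        log.add("send to " + aet + "@" + host + ":" + port);
    }

    void run() {
        List<String> instances = select(readDicomDir());
        decryptIfNeeded("secret".toCharArray());          // password supplied by the user
        coerceIdentifiers("LOCAL-12345");                 // local values supplied by the user
        decompressIfNeeded();
        convertUnsupportedSopClasses();
        sendAll("localhost", 11112, "IMPORTER");          // the suggested localhost default
        System.out.println(log);
    }

    public static void main(String[] args) { new MediaSendPipeline().run(); }
}
```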
Ideally, the "sending software" present on the media would be multi-platform, and it is certainly possible to do that (say with Java and on-board JREs for the popular platforms, in case they are not already installed). But at the bare minimum, given the prevalence of Windows, the requirements are that it executes:
- from the media without installation
- on desktop Windows operating systems (XP or later)
- without requiring the presence of or installation of supporting frameworks (e.g., .NET or JRE), other than to be able to execute them from the media if required
- without requiring administrative privileges
A potential problem is the need for the user to supply network parameters for the recipient (in the absence of self-discovery support, something not very widespread, unfortunately), and at the other end for the receiving PACS or workstation to be willing to accept inbound objects from a strange source (some are "promiscuous" in this respect, others are not). In the case where the media sending software is executed on the same machine as the "workstation" (or pre-installed viewer) into which the images are going to be imported, this should be less of a problem. Indeed defaulting to sending to port 104 or 11112 on the localhost with a pre-defined AET might well work for this and we could consider defining that in the IHE PDI profile option.
Anyway, though obviously the "sending software" option is not something ordinary users such as referring physicians will want to have to deal with, since their pre-installed or on-board Basic Image Review Profile viewer should cope most of the time, it provides a means of "last resort", if you will, for support personnel to extract content from media that for some reason is unreadable locally through normal means. It also provides a means of helping the enterprise-to-enterprise interchange use-case, when the receiving PACS does not support the more modern DICOM objects that advanced modalities produce, more modern compression techniques such as JPEG 2000, or the encryption that is being mandated by some jurisdictions specifically for this use-case.
David
Friday, November 7, 2008
UK Encryption Update
Summary: Encryption is not required for CDs given to patients in the UK
Long Version:
In the discussion on AuntMinnie on this subject, Brandon Bertolli from London provided an update of the UK situation that clarifies when encryption is expected to be used, or not used. Specifically, a note in a letter from NHS Chief Executive David Nicholson to the president of the British Orthopaedic Association, dated 29 October 2008, includes important statements:
- "Patients can continue to be given their own images on CD to carry away with them ... provided that the CDs are given directly to the patient, they are made aware of the risks and they take responsibility for their safekeeping, there is no fundamental problem if these are not encrypted."
- "If ... a CD needs to be used, which is possibly the case if the X-Ray is taken in a non acute setting ... then it should be encrypted ... alternatively it can be given to the patient and therefore encryption would not be necessary."
- "Naturally images will need to continue to be used for teaching, and the system for protecting data on CDs should not prevent entirely legitimate teaching activities ... if the teaching is outside the clinical environment then as long as the data on the CD contains no patient identifiable information then there is no need for it to be encrypted."
It seems very clear that the NHS is taking action primarily for transfers between organizations and between providers, which is as it should be. But the need for encryption can still not be dismissed lightly and is described in the letter as "good practice" even for CDs for patients. So we do need to make sure that we promote the appropriate standards for media creation vendors to implement so as to avoid the NHS or anybody else needing to adopt proprietary schemes for such transfers.
But the sky over Britain's CD users is not falling after all.
David
PS. Here is the scanned-in text of the letter and the accompanying note (with thanks to Miss Clare Marx, who kindly provided a copy of the entire letter):
Wednesday, November 5, 2008
CD Encryption Revisited - UK Leads the Charge
Summary: UK NHS demands encryption of image CDs; should we use device or file-based encryption, standard or proprietary, password or public-key based ?
Long Version:
In a previous post I talked about Media Security and Encrypted DICOM CDs, and this topic has also come up on Aunt Minnie. Whilst there has been a general concern that the threat to privacy is small and the risk to usability high, it seems that in the UK at least, this discussion has been pre-empted by a decision by the NHS to require encryption, outlined in a letter from the NHS Chief Executive, David Nicholson. I quote from this letter:
Long Version:
In a previous post I talked about Media Security and Encrypted DICOM CDs, and this topic has also come up on Aunt Minnie. Whilst there has been a general concern that the threat to privacy is small and the risk to usability high, it seems that in the UK at least, this discussion has been pre-empted by a decision by the NHS to require encryption, outlined in a letter from the NHS Chief Executive, David Nicholson. I quote from this letter:
- "You are aware that there is a mandatory requirement that all removable data, including laptops, CDs, USB Pens etc must be encrypted."
- "The encryption mandate applies equally to PACS images whether on CD or back-up tapes."
- "There could be occasional exceptions on patient safety grounds ..."
- "The CD and the password MUST be transferred by different routes."
Regardless, it would seem that the writing is on the wall for encryption of DICOM media, and solutions will need to be provided, even though the inconvenience and risk to patient safety will likely be significant. Accordingly, we have been considering a number of strategies to address this need, specifically, the encryption of an entire set of files (or an entire device), such as the open-source cross-platform TrueCrypt approach, or the encryption of individual files, such as by using the Cryptographic Message Syntax (CMS) that was designed for secure email (S/MIME) and which is already included in the DICOM standard for secure media. Further, one needs to make a choice between a password-based mechanism (so-called Password Based Encryption (PBE)), or a scheme that depends on the use of public keys and certificates and so forth, dependent on there being a Public Key Infrastructure (PKI) for senders and recipients.
The primary advantage of encrypting the entire file set or device would seem to be that one could do that, then present the encrypted set as if it were an ordinary filesystem, and the effect would be completely transparent to applications like DICOM viewers and PACS importers, once the decryption had been activated by the user entering a password or the appropriate private key being matched. Unfortunately, great as this sounds, it turns out that one needs to install some software into the operating system (like a device driver) to actually make this happen, and this requires administrative privileges. Either recipients need to have software pre-installed on their machine by someone appropriately authorized, or they need to have the right to do this themselves, for example when auto-running such a tool from the media itself. The latter is indeed supported by TrueCrypt, for example, but how likely is it that the average doctor receiving media will have such privileges, and how safe would it be (in terms of the risk of viruses) to allow them to do so? This may be a showstopper for what otherwise seems on the face of it like the most expedient solution. There is also the matter that TrueCrypt is not a standard per se, nor is it included in other standards like DICOM, but the latter could easily be rectified since the format is fully documented and free from intellectual property restrictions.
By contrast, what seems like a more complex approach, inclusion of support for encryption directly into the DICOM viewing or importing software, may actually be a more effective solution, since it requires no additional permissions or privileges on the part of the user. Since often a viewer is supplied on the media anyway, that viewer can support the encryption mechanism used for the files. As long as the encryption scheme is a standard one, then other software can also view or import the media, if that other software also supports the standard scheme. In the interim, whilst other viewers and importers are being "upgraded" to support encryption, one could add to the on-board viewer the capability to not only decrypt and view the files, but also to send the decrypted images over a DICOM network to a PACS or workstation (preferably allowing editing of the Patient ID field to allow for reconciliation of different sites' identifiers in the process).
As mentioned, DICOM already defines the use of CMS for this purpose for secure media, though to my knowledge this feature has never been implemented in a commercial product. Further, in anticipation of this need we have been working on adding a standard password-based mechanism to augment the public-key approach used in the existing standard, specifically in DICOM CP 895, so that now we have the option of using either PBE or a PKI as the situation warrants. There are free and open-source encryption libraries that support CMS as well as the underlying encryption schemes like AES, for example the excellent Bouncy Castle libraries, and others and I have begun testing this concept using them. Indeed, you can download from here a small test dataset that I created, encrypted with the CP 895 mechanism of the DICOM Secure Media profile.
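To make the password-based idea concrete, here is a minimal sketch of the primitive underlying such a scheme, using only the standard Java crypto APIs. To be clear, this is not the actual CP 895 mechanism, which wraps the content-encryption key in a CMS EnvelopedData structure (for which one would reach for something like the Bouncy Castle CMS classes); the class name, algorithm choices, and iteration count here are illustrative assumptions of my own, not what the profile mandates. The essence is the same, though: a key is derived from the password with PBKDF2, then used to encrypt the payload with AES.

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

/** Illustrative password-based encryption sketch (PBKDF2 + AES-GCM); not the CMS structure itself. */
public class PbeSketch {

    // Derive a 256-bit AES key from the password; the salt and iteration
    // count would normally be recorded alongside the ciphertext.
    static SecretKeySpec deriveKey(char[] password, byte[] salt) throws Exception {
        SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        byte[] keyBytes = f.generateSecret(
                new PBEKeySpec(password, salt, 100_000, 256)).getEncoded();
        return new SecretKeySpec(keyBytes, "AES");
    }

    static byte[] encrypt(char[] password, byte[] salt, byte[] iv, byte[] plain) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, deriveKey(password, salt), new GCMParameterSpec(128, iv));
        return c.doFinal(plain);
    }

    static byte[] decrypt(char[] password, byte[] salt, byte[] iv, byte[] cipherText) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, deriveKey(password, salt), new GCMParameterSpec(128, iv));
        return c.doFinal(cipherText);
    }

    public static void main(String[] args) throws Exception {
        SecureRandom rng = new SecureRandom();
        byte[] salt = new byte[16], iv = new byte[12];
        rng.nextBytes(salt);
        rng.nextBytes(iv);

        byte[] payload = "pretend this is a DICOM file".getBytes(StandardCharsets.UTF_8);
        byte[] sealed = encrypt("correct horse".toCharArray(), salt, iv, payload);
        byte[] opened = decrypt("correct horse".toCharArray(), salt, iv, sealed);
        System.out.println(new String(opened, StandardCharsets.UTF_8));
    }
}
```

A wrong password simply fails to decrypt (with GCM, the authentication tag check throws), which is the behavior one would want from an on-board viewer prompting the recipient.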
Regardless of which technical approach prevails, in all likelihood the simpler password-based mechanisms will be deployed, if only because of the complete lack of an existing PKI in most health care environments. Obviously, the privacy protection from encryption is only as good as the password chosen. Though security folks talk about long and complex passwords and phrases to improve protection, one does have to wonder how in reality imaging centers will choose passwords, and to what extent they will be based on well-known information that is memorable and predictable to simplify use, balanced against the relatively low perceived likelihood and consequences of a security breach. Further, there has yet to be discussion of good security practices and procedures for exchanging the media and the passwords separately, and what the recipient should do in this regard. For example, should the password be included in the printed report that is faxed or emailed to the intended recipient? Should the patient have a copy of this for their long-term use? I would certainly expect so, but inevitably the patient will store the report with the CD, which rather defeats the point!
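As a crude illustration of why the security folks fuss about password choice, one can estimate password strength as length times the log of the size of the character pool drawn from. This naive upper bound ignores dictionary words and predictable patterns, which is precisely the weakness of the memorable passwords an imaging center is likely to pick; the sketch below is my own illustration, not any standard's requirement.

```java
/** Naive password entropy estimate: length x log2(character pool size). */
public class PasswordEntropy {
    static double estimateBits(String pw) {
        int pool = 0;
        if (pw.chars().anyMatch(Character::isLowerCase)) pool += 26;
        if (pw.chars().anyMatch(Character::isUpperCase)) pool += 26;
        if (pw.chars().anyMatch(Character::isDigit))     pool += 10;
        if (pw.chars().anyMatch(c -> !Character.isLetterOrDigit(c))) pool += 32; // punctuation, space, etc.
        return pw.length() * (Math.log(pool) / Math.log(2));
    }

    public static void main(String[] args) {
        // A short memorable password versus a long passphrase.
        System.out.printf("%.0f bits%n", estimateBits("mercy1"));                      // roughly 31 bits
        System.out.printf("%.0f bits%n", estimateBits("correct horse battery staple")); // roughly 164 bits
    }
}
```

Even this flattering estimate puts a typical short password within reach of offline brute force, and it says nothing about the far worse case where the password is the patient's name or date of birth.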
None of these mechanisms address the concern that if a password is lost or not transmitted, or the recipient cannot for some reason run the on-board viewer, then the patient's safety and convenience are potentially at risk. In a network-based scenario, emergency access can be granted on demand, perhaps simply recording an auditable event that such emergency access by an authenticated but otherwise unauthorized individual was granted. With physical media, however, the sender and recipient are decoupled; indeed the recipient may not even be known a priori, such as when a patient takes their images for a second opinion, or for use as priors at a subsequent event. In such cases, loss of or lack of access to the password becomes problematic. The problem is exacerbated in regions, such as Australia, where it is not traditional for the imaging facility to provide long-term archival of images. One could imagine a scenario in which a woman has her screening mammogram recorded on an encrypted CD, the radiology center does not archive the images, and next year they cannot be used as priors because she has forgotten or lost the password.
Conceivably one could use a more complex form of encryption that allowed for escrow of additional keys, permitting recovery through some central authority, but such escrow schemes have been widely unpopular in the security community for many reasons. In the absence of an infrastructure to support this, all CDs could include the use of an additional key that was "well known" to some central authority, but of course eventually someone might be able to compromise such a key (consider the DVD Content Scramble System (CSS), for example).
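The multi-recipient idea behind such escrow is easy to picture: CMS EnvelopedData permits the single content-encryption key to be wrapped once per recipient, so a disc could carry that key wrapped under both the patient's password-derived key and an escrow authority's key, and either party could recover the images independently. Here is a hypothetical sketch of that structure using standard Java key wrapping; the class and key names are my own illustration, not part of any profile.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

/** Escrow sketch: one content key, wrapped separately for two independent recipients. */
public class EscrowSketch {
    static byte[] wrap(SecretKey kek, SecretKey contentKey) throws Exception {
        Cipher c = Cipher.getInstance("AESWrap");
        c.init(Cipher.WRAP_MODE, kek);
        return c.wrap(contentKey);
    }

    static SecretKey unwrap(SecretKey kek, byte[] wrapped) throws Exception {
        Cipher c = Cipher.getInstance("AESWrap");
        c.init(Cipher.UNWRAP_MODE, kek);
        return (SecretKey) c.unwrap(wrapped, "AES", Cipher.SECRET_KEY);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(128);
        SecretKey contentKey = gen.generateKey();   // encrypts the images once
        SecretKey patientKek = gen.generateKey();   // would be derived from the patient's password
        SecretKey escrowKek  = gen.generateKey();   // held by a hypothetical central authority

        byte[] forPatient = wrap(patientKek, contentKey);
        byte[] forEscrow  = wrap(escrowKek, contentKey);

        // Either key holder recovers the same content key independently.
        boolean same = java.util.Arrays.equals(
                unwrap(patientKek, forPatient).getEncoded(),
                unwrap(escrowKek, forEscrow).getEncoded());
        System.out.println(same); // prints: true
    }
}
```

The downside, as noted above, is that the escrow key is then a single point of failure for every disc that carries it.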
So, though we do not yet have broad consensus on the standard mechanism that the industry should adopt, globally and not just in the UK, we are making some progress. Next week we will be meeting as the IHE Radiology Technical Committee, and encryption is one of the topics for discussion for this year's extensions to PDI. The agenda is here, should you be interested in attending.
Though improving interoperability and reducing the barriers to viewing images on media has always been our primary goal, and encryption has the potential to threaten that objective, hopefully we will have a clear technical direction shortly for those folks who may no longer have the option of avoiding media encryption.
David
Saturday, October 4, 2008
A little PACS history
There has been a lot of discussion lately about certain PACS features and how long the ideas have been known to the community.
A good "snapshot" of features is available in the military specification for the MDIS (Medical Diagnostic Image Support) system, which later became the Siemens Gammasonics, Lockheed Martin, Loral and finally GE PACS - the precursor of the first "Centricity PACS". You can find a link to a scanned, OCR'd copy of the MDIS RFP here. This document is dated 28 March 1990.
If one turns, for example, to the Soft Copy Image Display (SCID) requirements in section 4.4, one will see such features as have been taken for granted since the early days of PACS and are now ubiquitous:
- 4.4.3.2. Pictorial Patient Directory. The workstation shall display the images from a patient's master "folder" and individual image subfolders (e.g., chest, bone, GI). Anatomic region subcategories shall be possible within each subfolder (e.g., elbow, ankle). Single or multiple images shall be easily selectable for full resolution viewing.
- 4.4.3.3. Worklist. The workstation shall automatically generate a worklist of unread exams to enable each radiologist to review the amount of work ready for their review. The worklist can be created by radiologist or by type of workstation (e.g. CT review workstation), as determined at each site.
- 4.4.3.4. Image Rearrangement and Display. The workstation shall display multiple reduced resolution images on a selected monitor with the ability to easily rearrange these images on the same monitor. It shall also allow rearrangement of the images easily from monitor to monitor on the same workstation.
- 4.4.3.5. Image Paging. Quick paging through multiple user selected images of an exam displayed on a single monitor shall be provided.
- 4.4.3.6. Default Display Protocol. This required function displays the images of a patient study in a user-selectable default protocol, activated each time the individual user logs on the workstation. The default display shall be modality and body part specific. It shall be a site-specific requirement- i.e. each MDIS site shall be capable of setting their own parameters.
(For example - a patient has new and previous posterioranterior (PA) and lateral chest studies to be interpreted. The radiologist viewing the study prefers to view the PA images on the central two monitors and the lateral images on the outer two monitors of a four monitor workstation. The radiologist also prefers to view the lateral images with the anterior border of the chest closest to the left monitor edge, the new PA images on the right central monitor, and the previous PA image on the left central monitor.)
Additionally, the images shall automatically be presented in an upright as well as correct right/left orientation.
- 4.4.3.7. Image Enhancements Defaults. The workstation shall include user selectable image enhancement defaults for grayscale window and leveling, variable edge enhancement, and inverse video, activated each time the individual user logs on the workstation.
I am not sure exactly who the authors of this document were, or I would give them credit here, but I intend to research that a little further amongst some of my older colleagues (:)). Indeed, I think I will begin a little "PACS History" section in my FAQ, and start to accumulate links to documents that describe the early days, or at least references to papers and conference proceedings where the copyright is held by some publisher. Anyone who wants to contribute links or documents, please feel free to email me.
PS. I have to say that I am most impressed by Acrobat 9 Mac's OCR capabilities, which I rarely use. The original scanned paper PDF of the hand-typed MDIS RFP document that I submitted for OCR, primarily to index it for searching, also allows me to cut and paste the above paragraphs with only a few edits for punctuation and without a single meaningful error; most impressive.
Thursday, August 28, 2008
Is the winter of discontent with CDs finally upon us?
It is not news that I have been whining about systems that produce non-DICOM and non-PDI compliant CDs for a long time now. Yet some folks continue to believe that it is acceptable for vendors and sites to create proprietary CDs with proprietary viewers on board, despite the fact that these offer no advantage over standard CDs. I am often criticized for harping on this subject to the exclusion of all others (most recently in response to a comment on Sam's blog), but I make no apologies about it, since I believe it is indeed the primary interoperability issue facing the digital imaging community at the present time.
Well, those of us who have been focusing on the radiology-centric aspects of this mess have now been joined in battle by the community of physicians out there in the real world who have been struggling to deal with this nonsense.
The American Medical Association, as a consequence of complaints initiated by the American Association of Neurological Surgeons with respect to viewing MRIs, has produced a report from their board of trustees that resulted in a resolution on "Development of Standards for MRI Equipment and Interpretation to Improve Patient Safety". Note that the discontent being expressed by the AMA is not confined to neurosurgeons, but involves everyone who receives CDs. Further, the emphasis is on safety; specifically, if media is unreadable or unusable or takes too long to use, then the safety of the patient may be at risk. This activity has been going on for several years, though few people outside of the groups involved have been aware of it.
Yesterday, I attended an interesting meeting in DC at the AMA office, which involved many of the stakeholders mentioned in the resolution, including vendor representatives from MITA (NEMA) as well as the ACR. Those referring physicians present made it abundantly clear that swift, dramatic and effective action by industry and by radiology facilities is expected without delay, and that delay will result in engagement of the regulators and the legislators.
During the course of that meeting we came to the consensus that emphasis would be placed on establishing that the standard of care will be compliance with the IHE PDI specification, and in the absence of any explicit enforcement mechanism, promulgating this as an AMA principle may suffice. Woe betide anyone who then expects to get paid for producing non-compliant CDs (since payers might not pay for less than the standard of care when they become aware of the issue), or who expects to prevail in the civil courts in the event of a negligence action caused by an unfortunate outcome from inability to read a CD.
The other outcome of the meeting was acceptance of the goal of defining a set of minimal functional requirements for a "simple viewer" that would set the lower bounds on what such a viewer would do (and to some extent, how it would do it). An IHE effort, with evaluation of compliance performed by users (not radiologists or engineers) was proposed as the mechanism to implement this.
There was debate, but not consensus, about actually standardizing aspects of the user interface, including what icons should be used and what they should look like. Though this may seem impractical, given the installed base and the investment by each vendor in their own look and feel, the matter arises because physicians faced with a completely unknown and unexpected interface have great difficulty figuring out how to make a viewer perform even basic tasks. This problem needs to be solved somehow.
Regardless, the writing is on the wall for proprietary media, and vendors who create it, or permit their users to create it, and sites that provide it. Let them all be "in the deep bosom of the ocean buried".
David