Saturday, October 31, 2015

The slings and arrows of outrageous radiologists - I want my FLA.

Summary: We don't need fewer arrows. We need more arrows more often. And we need better arrows, in the sense that they are hyperlinked to the findings in the report when images are rendered, i.e., are Findings-Linked Annotations (FLA). Throughout, the term "arrows" is a surrogate for "visual indication of location".

Long Version.

I came across the strangest article about "arrows" in EJR.

Now, I don't normally read EJR because it is a little expensive, it doesn't come along with any professional society membership I have, I don't work at an institution that gets it, most of its articles are not open access (there is an EJR Open companion journal though), and it doesn't have a lot of informatics content. But this paper happened to be quoted in full for some reason on Aunt Minnie Europe, so I got a squizz without having to wait to receive a pre-print from the authors via ResearchGate or some other mechanism.

The thesis of the radiologist authors seems to be that "arrows" on images pointing to findings are a bad thing, and that recipients of the report should read the report instead of having access to such visual aids.

This struck me as odd, from the perspective of someone who has spent the last two decades or so building and evangelizing about standards and systems to do exactly that, i.e., to make annotations on images and semantically link them to specific report content so that they can be visualized interactively (ideally through DICOM Structured Reports, less ideally through the non-semantic but more widely available DICOM Softcopy Presentation States, and in the worst case in a pre-formatted multimedia rather than plain text report).

What are the authors' arguments against arrows? To summarize (fairly I hope), arrows:
  • are aesthetically ugly, especially if multitudinous, and may obscure underlying features
  • draw attention from unmarked less obvious findings (may lead to satisfaction of search)
  • are not a replacement for the more detailed account in the report
  • are superfluous in the presence of the more detailed account in the report
  • might be removed (or not be distributed)
  • detract from the role of the radiologist as a "readily accessible collaborator"
For the sake of argument, I will assume that what the authors mean by "arrows" includes any "visual indication of location" rendered on an image, passively or interactively. They actually describe them as "an unspoken directional signal".

The authors appear to conflate the presence of arrows with either the absence of, or perhaps the ignorance of, the report ("relying on an arrow alone as a manifestation of our special capabilities", "are merely a figurative crutch we can very well do without").

I would never assert that arrows alone (or any form of selective annotation) substitute for a good report; nor, it would seem to me, would it be best or even common practice to fail to produce a full report. The implication in the paper seems to be that when radiologists use arrows (that they expect will be visible to the report recipient), they record less detail about the location in the report, or the recipient does not read the report. Is that actually the case? Do the authors put forth any evidence to support that assertion? No, they do not; nor any evidence about what recipients actually prefer.

I would completely agree with the authors that there is an inherent beauty in many images, and they are best served in that respect unadorned. That's why we have buttons to toggle annotations on and off, including not only arrows but those in the corners for demographics and management as well. And why lead markers suck. And who really cares whether we can check to see if we have the right patient or not? OK, so there are safety issues to consider, but that's another story.

As for concerns about satisfaction of search, one could equally argue that one should not include an impression or conclusion in a report either, since I gather few recipients will take the time to read more than that. Perhaps they should be forced to wade through reams of verbosity just in case they miss something subtle not restated in its entirety in the impression anyway. And there is no rule that says one can't point out subtle findings with arrows too. Indeed, I was led to believe during my training that it was the primary interpreting radiologist's function (and major source of added value) to detect, categorize and highlight (positively or negatively) those subtle findings that might be missed in the face of the obvious.

Wrt. superfluousness, I don't know about you, but when I read a long prose description in a report that attempts to describe the precise location of a finding, whether it uses:
  • identifiers ("in series 3, on slice 9, approximately 13.8 mm lateral to the left margin of the descending aorta", which assumes incorrectly that the recipient's viewer numbers things the same way the radiologist's does),
  • approximate regions ("left breast MLO 4 o'clock position"), or
  • anatomical descriptions ("apical segment of the right lower lobe")
even if I find something on the image that is plausibly or even undeniably associated with the description, I am always left wondering if I am looking at exactly the same thing as the reporting radiologist is talking about, and with the suspicion that I have missed something. My level of uncertainty is significantly higher than it needs to be. Arrows are not superfluous, they are complementary and add significant clarity.

Or to put it another way, there is a reason the wax pencil was invented.

In my ideal world, every significant localized finding in a report would be intimately linked electronically with a specific set of coordinates in an image, whether that be its center (which might be rendered as an arrow, or a cross-hair, or some other user interface element), or its outline (which might be a geometric shape like an ellipse or rectangle, or an actual outline or filled in region that has been semi-automatically segmented, if volume measurements are reported). Further, the display of such locations would be under my interactive control as a recipient (just as one turns on and off CAD marks, or applies presentation states selectively); this would address the "aesthetic" concern of the annotation obscuring underlying structure.
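To make the idea concrete, such findings-linked annotations might be modelled along the following lines. This is only a minimal sketch; every class and field name here is a hypothetical illustration, not drawn from any standard (in DICOM SR the equivalent linkage is a TEXT content item with an SCOORD child referencing an IMAGE).

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    graphic_type: str            # e.g. "POINT" (arrow/cross-hair) or "ELLIPSE" (outline)
    graphic_data: list[float]    # flattened (column, row) pixel coordinates
    sop_instance_uid: str        # the image the coordinates apply to
    frame_number: int = 1

@dataclass
class Finding:
    finding_id: str
    text: str                    # the prose sentence from the report
    annotations: list[Annotation] = field(default_factory=list)

# One report finding, linked to a point on a specific (made-up UID) image;
# a viewer could render the point as an arrow, under interactive user control.
finding = Finding(
    finding_id="f1",
    text="8 mm nodule in the apical segment of the right lower lobe",
    annotations=[Annotation("POINT", [211.0, 164.0], "1.2.840.99999.1.2.3")],
)

# Resolving a finding to its image locations (the "hyperlink"):
for a in finding.annotations:
    print(finding.finding_id, a.graphic_type, a.graphic_data, a.sop_instance_uid)
```

The point of the structure is that the link runs from the finding to the coordinates, not the other way around, so a recipient's viewer can toggle the rendering per finding.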

We certainly have the standards. Coordinate references in reports were one of the core elements of Dean Bidgood's Structured Reporting (SR) initiative in DICOM ("Documenting the information content of images", 1997). I used a (contrived) example of a human-generated report to emphasize the point in Figure 1 of my 2000 DICOM SR textbook (long due for revision, I know). There was even work to port the DICOM SR coordinate reference pattern into HL7 CDA (although of late this has been de-emphasized in favor of leaving these in the DICOM realm and referencing them, e.g., in PS3.20).
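The shape of the DICOM SR coordinate-reference pattern can be sketched as a content tree in which a TEXT finding is INFERRED FROM an SCOORD (spatial coordinates) content item, which is in turn SELECTED FROM an IMAGE reference. Real SR uses coded concept names and binary encoding; the plain dictionaries below are a simplified, non-normative model intended only to show the nesting (the SOP Instance UID is made up; the SOP Class UID is that of CT Image Storage).

```python
finding = {
    "value_type": "TEXT",
    "concept": "Finding",
    "value": "8 mm nodule in the apical segment of the right lower lobe",
    "children": [
        {
            "relationship": "INFERRED FROM",
            "value_type": "SCOORD",
            "graphic_type": "POINT",
            "graphic_data": [211.0, 164.0],   # (column, row) in image pixels
            "children": [
                {
                    "relationship": "SELECTED FROM",
                    "value_type": "IMAGE",
                    "referenced_sop_class_uid": "1.2.840.10008.5.1.4.1.1.2",
                    "referenced_sop_instance_uid": "1.2.840.99999.1.2.3",
                }
            ],
        }
    ],
}

def image_references(item):
    """Walk the content tree and yield every referenced image instance UID."""
    if item["value_type"] == "IMAGE":
        yield item["referenced_sop_instance_uid"]
    for child in item.get("children", []):
        yield from image_references(child)

print(list(image_references(finding)))
```

A rendering application walking such a tree knows not just that there is an arrow somewhere, but which sentence of the report each set of coordinates belongs to.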

Nor is this beyond the state of the art of authoring and rendering applications, even if it is not commonly implemented or used. The primary barriers to adoption seem to be:
  • the diversity of the heterogeneous mix of image display, voice reporting and report display systems that are difficult to integrate tightly enough to achieve this,
  • coupled with the real or perceived difficulty of enabling the radiologist to author more highly linked content without reducing their "productivity" (as currently incentivized).
In a world in which the standard of care in the community is the fax of a printed report, possibly coupled with a CD full of images with a brain-dead viewer (and no presentation state or structured report coordinate rendering), the issue of any arrows at all is probably moot. The financial or quality incentives are focused on embellishing the report not with clinically useful content but instead with content for reimbursement optimization. The best we can probably do for these scenarios is the (non-interactive) "multimedia report", i.e., one that has the selected images or regions of images pre-windowed and embedded in the report with arrows and numbers shared with the findings in the prose, or similar. An old concept once labelled as an "illustrated" report, recently revisited or renamed (MERR), but still rarely implemented AFAIK.
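Such a pre-formatted multimedia report can be sketched in a few lines: each numbered finding in the prose shares its number with an arrow on a copy of the key image embedded next to it. This is a toy illustration; the file names and layout are entirely hypothetical.

```python
# Hypothetical findings: (number shared with the arrow, prose, annotated key image).
findings = [
    (1, "8 mm nodule, apical segment of the right lower lobe", "key_image_1.png"),
    (2, "4 o'clock left breast mass, MLO view", "key_image_2.png"),
]

# Build an HTML table pairing each numbered finding with its key image;
# the arrow bearing the same number would already be rendered into the image copy.
rows = "\n".join(
    f'<tr><td>[{n}]</td><td>{text}</td>'
    f'<td><img src="{img}" alt="annotated key image {n}"></td></tr>'
    for n, text, img in findings
)
html = f"<table>\n{rows}\n</table>"
print(html)
```

The result is static rather than interactive, which is exactly its limitation: the recipient cannot toggle the arrows off, but at least the numbers tie each one unambiguously to a sentence of the prose.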

Even within a single enterprise, the "hyperlink" between specific findings in the report content and the image annotations is usually absent. The EHR and PACS may be nominally "integrated" to the point of being able to trigger the PACS viewer whilst reading the report (whether to get Meaningful Use Brownie Points or to actually serve the needs of the users), and the PACS may be able to render the radiologist's arrows (e.g., if they are stored as presentation states in the PACS). While this scenario is way better than having no arrows at all, it is not IMHO as good as "findings-linked annotations" (let's call them FLA, since we need more acronyms like we need a hole in the head). Such limited integrated deployments are typically present when the lowest common denominator for "report interchange" is essentially the same old plain text report, perhaps "masquerading" as something more sophisticated (e.g., by wrapping the text in CDA or DICOM SR, with or without a few section headings but without "semantic" links from embedded findings to image coordinates or references to softcopy presentation states).

Likewise, though the radiology and cardiology professional societies have been strongly pushing so-called "structured reporting" again lately, these efforts are pragmatic and only an incremental extension to the lowest common denominator. They are still essentially limited to standardization of layout and section headings, and do not extend to visual hyperlinking of findings to images. Not to dismiss the importance of these efforts; they are a vital next step, and when adopted offer valuable improvements, but IMHO they are not sufficient to communicate most effectively with the report recipients.

So, as radiologists worry about their inevitable outsourcing and commodification, perhaps they should be more concerned about how to provide added value beyond the traditional verbose prose, rather than bemoaning the hypothetical (if not entirely spurious) disadvantages of visual cues. The ability to "illustrate" a report effectively may become a key component of one's "competitive advantage" at some point.

I suggest that we need more FLA to truly enable radiologists to be "informative and participatory as caregivers, alerting our colleagues with more incisiveness and counsel" (paraphrasing the authors). That is, to more effectively combine the annotations and the report, rather than to exaggerate the importance of one over the other.


PS. Patients read their reports and look at their images too, and they really seem to like arrows, not many of them being trained anatomists.

PPS. I thought for a moment that the article might be a joke, and that the authors were being sarcastic, but it's Halloween, not April Fools' Day, the paper was submitted in August and repeated on Aunt Minnie, so I guess it is a serious piece with the intention of being provocative rather than being taken literally. It certainly provoked me!

PPPS. Do not interpret my remarks to in any way advocate a "burned in" arrow, i.e., one that replaces the original underlying pixel values and which is then sent as the only "version" of the image; that is obviously unacceptable. I understand the authors' article to be referring to arrows in general and not that abhorrent encoding mechanism in particular.

1 comment:

David Clunie said...

Since I mentioned "multimedia reports" in this blog post, it reminds me to draw your attention to an excellent new book I just finished reading, Radiology Reporting, written by a guru of "structured reporting" (lower case 's'), Curt Langlotz.

In his closing view of what reporting should be like in 2025, pp 209-210, he bemoans the woeful state of current offerings in this respect.

I highly recommend it to all radiologists, experienced or trainee, referring physicians, and PACS, RIS, reporting and EMR/EHR informaticists and engineers, if only so that they can improve their understanding of what they are missing out on and/or failing to deliver.