Saturday, October 31, 2015

The slings and arrows of outrageous radiologists - I want my FLA.

Summary: We don't need fewer arrows. We need more arrows more often. And we need better arrows (in the sense that they are hyperlinked to the findings in the report when images are rendered, i.e., are Findings-Linked Annotations (FLA)). The term "arrows" here is a surrogate for "visual indication of location".

Long Version:

I came across the strangest article about "arrows" in EJR.

Now, I don't normally read EJR because it is a little expensive, it doesn't come along with any professional society membership I have, I don't work at an institution that gets it, most of its articles are not open access (there is an EJR Open companion journal though), and it doesn't have a lot of informatics content. But this paper happened to be quoted in full for some reason on Aunt Minnie Europe, so I got a squizz without having to wait to receive a pre-print from the authors via ResearchGate or some other mechanism.

The thesis of the radiologist authors seems to be that "arrows" on images pointing to findings are a bad thing, and that recipients of the report should read the report instead of having access to such visual aids.

This struck me as odd, from the perspective of someone who has spent the last two decades or so building and evangelizing about standards and systems to do exactly that, i.e., to make annotations on images and semantically link them to specific report content so that they can be visualized interactively (ideally through DICOM Structured Reports, less ideally through the non-semantic but more widely available DICOM Softcopy Presentation States, and in the worst case in a pre-formatted multimedia rather than plain text report).

What are the authors' arguments against arrows? To summarize (fairly I hope), arrows:
  • are aesthetically ugly, especially if multitudinous, and may obscure underlying features
  • draw attention from unmarked less obvious findings (may lead to satisfaction of search)
  • are not a replacement for the more detailed account in the report
  • are superfluous in the presence of the more detailed account in the report
  • might be removed (or not be distributed)
  • detract from the role of the radiologist as a "readily accessible collaborator"
For the sake of argument, I will assume that what the authors mean by "arrows" includes any "visual indication of location" rendered on an image, passively or interactively. They actually describe them as "an unspoken directional signal".

The authors appear to conflate the presence of arrows with either the absence of, or perhaps the ignorance of, the report ("relying on an arrow alone as a manifestation of our special capabilities", "are merely a figurative crutch we can very well do without").

I would never assert that arrows alone (or any form of selective annotation) substitute for a good report, nor, it would seem to me, would it be best or even common practice to fail to produce a full report. The implication in the paper seems to be that when radiologists use arrows (that they expect will be visible to the report recipient), they record less detail about the location in the report, or the recipient does not read the report. Is that actually the case? Do the authors put forth any evidence to support that assertion? No, they do not; nor any evidence about what recipients actually prefer.

I would completely agree with the authors that there is an inherent beauty in many images, and they are best served in that respect unadorned. That's why we have buttons to toggle annotations on and off, including not only arrows but those in the corners for demographics and management as well. And why lead markers suck. And who really cares whether we can check to see if we have the right patient or not? OK, so there are safety issues to consider, but that's another story.

As for concerns about satisfaction of search, one could equally argue that one should not include an impression or conclusion in a report either, since I gather few recipients will take the time to read more than that. Perhaps they should be forced to wade through reams of verbosity just in case they miss something subtle not restated in its entirety in the impression anyway. And there is no rule that says one can't point out subtle findings with arrows too. Indeed, I was led to believe during my training that it was the primary interpreting radiologist's function (and major source of added value) to detect, categorize and highlight (positively or negatively) those subtle findings that might be missed in the face of the obvious.

Wrt. superfluousness, I don't know about you, but when I read a long prose description in a report that attempts to describe the precise location of a finding, whether it uses:
  • identifiers ("in series 3, on slice 9, approximately 13.8 mm lateral to the left margin of the descending aorta", which assumes incorrectly that the recipient's viewer numbers things the same way the radiologist's does),
  • approximate regions ("left breast MLO 4 o'clock position"), or
  • anatomical descriptions ("apical segment of the right lower lobe")
even if I find something on the image that is plausibly or even undeniably associated with the description, I am always left wondering if I am looking at exactly the same thing as the reporting radiologist is talking about, and with the suspicion that I have missed something. My level of uncertainty is significantly higher than it needs to be. Arrows are not superfluous, they are complementary and add significant clarity.

Or to put it another way, there is a reason the wax pencil was invented.

In my ideal world, every significant localized finding in a report would be intimately linked electronically with a specific set of coordinates in an image, whether that be its center (which might be rendered as an arrow, or a cross-hair, or some other user interface element), or its outline (which might be a geometric shape like an ellipse or rectangle, or an actual outline or filled in region that has been semi-automatically segmented, if volume measurements are reported). Further, the display of such locations would be under my interactive control as a recipient (just as one turns on and off CAD marks, or applies presentation states selectively); this would address the "aesthetic" concern of the annotation obscuring underlying structure.

We certainly have the standards. Coordinate references in reports were one of the core elements of Dean Bidgood's Structured Reporting (SR) initiative in DICOM ("Documenting the information content of images", 1997). I used a (contrived) example of a human-generated report to emphasize the point in Figure 1 of my 2000 DICOM SR textbook (long due for revision, I know). There was even work to port the DICOM SR coordinate reference pattern into HL7 CDA (although of late this has been de-emphasized in favor of leaving these in the DICOM realm and referencing them, e.g., in PS3.20).
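
To make the FLA idea concrete, here is a minimal sketch of the SR content-tree fragment involved: a TEXT finding, INFERRED FROM an SCOORD point, SELECTED FROM a referenced image. I am assuming Python with pydicom purely for illustration (the standard doesn't mandate any toolkit), and the codes, coordinates and UIDs are placeholders; this is a fragment, not a complete, valid SR instance.

  from pydicom.dataset import Dataset

  def code(value, scheme, meaning):
      # build a single code sequence item
      c = Dataset()
      c.CodeValue = value
      c.CodingSchemeDesignator = scheme
      c.CodeMeaning = meaning
      return c

  # TEXT content item holding the prose finding
  finding = Dataset()
  finding.RelationshipType = 'CONTAINS'
  finding.ValueType = 'TEXT'
  finding.ConceptNameCodeSequence = [code('121071', 'DCM', 'Finding')]
  finding.TextValue = 'Nodule in the apical segment of the right lower lobe'

  # SCOORD content item: the point on the image the finding refers to
  scoord = Dataset()
  scoord.RelationshipType = 'INFERRED FROM'
  scoord.ValueType = 'SCOORD'
  scoord.GraphicType = 'POINT'
  scoord.GraphicData = [256.0, 178.0]  # column, row in image pixel space

  # IMAGE content item: which image the coordinates apply to
  ref = Dataset()
  ref.ReferencedSOPClassUID = '1.2.840.10008.5.1.4.1.1.2'  # CT Image Storage
  ref.ReferencedSOPInstanceUID = '1.2.3.4.5.6.7.8.9'       # placeholder UID

  image = Dataset()
  image.RelationshipType = 'SELECTED FROM'
  image.ValueType = 'IMAGE'
  image.ReferencedSOPSequence = [ref]

  scoord.ContentSequence = [image]
  finding.ContentSequence = [scoord]

The point being that the coordinates travel with, and are semantically bound to, the specific finding, so a capable viewer can render the "arrow" (or cross-hair, or outline) on demand.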

Nor is this beyond the state of the art of authoring and rendering applications, even if it is not commonly implemented or used. The primary barriers to adoption seem to be:
  • the heterogeneous mix of image display, voice reporting and report display systems, which are difficult to integrate tightly enough to achieve this,
  • coupled with the real or perceived difficulty of enabling the radiologist to author more highly linked content without reducing their "productivity" (as currently incentivized).
In a world in which the standard of care in the community is the fax of a printed report, possibly coupled with a CD full of images with a brain-dead viewer (and no presentation state or structured report coordinate rendering), the issue of any arrows at all is probably moot. The financial or quality incentives are focused on embellishing the report not with clinically useful content but instead with content for reimbursement optimization. The best we can probably do for these scenarios is the (non-interactive) "multimedia report", i.e., the one that has the selected images or regions of images pre-windowed and embedded in the report with arrows and numbers shared with the findings in the prose, or similar. An old concept once labelled as an "illustrated" report, recently revisited or renamed (MERR), but still rarely implemented AFAIK.

Even within a single enterprise, the "hyperlink" between specific findings in the report content and the image annotations is usually absent. The EHR and PACS may be nominally "integrated" to the point of being able to trigger the PACS viewer whilst reading the report (whether to get Meaningful Use Brownie Points or to actually serve the needs of the users), and the PACS may be able to render the radiologist's arrows (e.g., if they are stored as presentation states in the PACS). While this scenario is way better than having no arrows at all, it is not IMHO as good as "findings-linked annotations" (let's call them FLA, since we need more acronyms like we need a hole in the head). Such limited integrated deployments are typically present when the lowest common denominator for "report interchange" is essentially the same old plain text report, perhaps "masquerading" as something more sophisticated (e.g., by wrapping the text in CDA or DICOM SR, with or without a few section headings but without "semantic" links from embedded findings to image coordinates or references to softcopy presentation states).
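
For contrast, here is an equally minimal sketch (same pydicom assumption, placeholder values) of roughly what the presentation state flavor of an "arrow" looks like. GSPS has no explicit ARROW primitive; a text object with a visible anchor point, which most viewers render as a leader line or arrow from the text to the point of interest, is a common way to achieve the effect.

  from pydicom.dataset import Dataset

  ref = Dataset()
  ref.ReferencedSOPClassUID = '1.2.840.10008.5.1.4.1.1.2'  # CT Image Storage
  ref.ReferencedSOPInstanceUID = '1.2.3.4.5.6.7.8.9'       # placeholder UID

  # text object whose anchor point is the finding location
  text_obj = Dataset()
  text_obj.AnchorPointAnnotationUnits = 'PIXEL'
  text_obj.UnformattedTextValue = 'RLL nodule'
  text_obj.AnchorPoint = [256.0, 178.0]  # column, row of the finding
  text_obj.AnchorPointVisibility = 'Y'   # viewers draw a line/arrow to this

  annotation = Dataset()
  annotation.GraphicLayer = 'FINDINGS'
  annotation.ReferencedImageSequence = [ref]
  annotation.TextObjectSequence = [text_obj]

  gsps_fragment = Dataset()
  gsps_fragment.GraphicAnnotationSequence = [annotation]

Note what is missing: nothing in this object links the annotation back to a specific finding in the report text; any correspondence is purely conventional, which is exactly the gap that FLA closes.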

Likewise, though the radiology and cardiology professional societies have been strongly pushing so-called "structured reporting" again lately, these efforts are pragmatic and only an incremental extension to the lowest common denominator. They are still essentially limited to standardization of layout and section headings, and do not extend to visual hyperlinking of findings to images. Not to dismiss the importance of these efforts; they are a vital next step, and when adopted offer valuable improvements, but IMHO they are not sufficient to communicate most effectively with the report recipients.

So, as radiologists worry about their inevitable outsourcing and commodification, perhaps they should be more concerned about how to provide added value beyond the traditional verbose prose, rather than bemoaning the hypothetical (if not entirely spurious) disadvantages of visual cues. The ability to "illustrate" a report effectively may become a key component of one's "competitive advantage" at some point.

I suggest that we need more FLA to truly enable radiologists to be "informative and participatory as caregivers, alerting our colleagues with more incisiveness and counsel" (paraphrasing the authors). That is, to more effectively combine the annotations and the report, rather than to exaggerate the importance of one over the other.

David

PS. Patients read their reports and look at their images too, and they really seem to like arrows, not many of them being trained anatomists.

PPS. I thought for a moment that the article might be a joke, and that the authors were being sarcastic, but it's Halloween, not April Fools' Day, the paper was submitted in August and repeated on Aunt Minnie, so I guess it is a serious piece with the intention of being provocative rather than being taken literally. It certainly provoked me!

PPPS. Do not interpret my remarks as in any way advocating a "burned in" arrow, i.e., one that replaces the original underlying pixel values and which is then sent as the only "version" of the image; that is obviously unacceptable. I understand the authors' article to be referring to arrows in general and not that abhorrent encoding mechanism in particular.

Thursday, October 22, 2015

I think she's dead ... no I'm not ... Is PACS pining for the fiords?

Summary: The death of PACS, and its deconstruction, have been greatly exaggerated. Not just recently, but 12 years ago.

Long Version:

Mixing quotes from different Monty Python sketches (Death of Mary Queen of Scots, Pet Shop) is probably almost as bad as mixing metaphors, but as I grow older it is more effort to separate these early associations.

These lines came to mind when I was unfortunately reminded of one of the most annoying articles published in the last few years, "PACS in 2018: An Autopsy", which is in essence an unapologetic unsubstantiated promotion of the VNA concept.

Quite apart from the fact that nobody can agree on WTF a VNA actually is (despite my own lame attempt at a retrospective Wikipedia definition), this paper is a weird collage of observable technological trends in standards and products, marketing repackaging of existing technology with new labels, and fanciful desiderata that lack real market drivers or evidence of efficacy (or the regulatory (mis-)incentives that sometimes serve in lieu).

That's fine though, since it is reasonable to discuss alternative architectures and consider their pros and cons. But wait, surprise, there is actually very little if any substance there? No discussion of the relative merits or drivers for change? Is this just a fluff piece, the sort of garbage that one might see in a vendor's press release or in one of those junk mail magazines that clutter one's physical mailbox? All hype and no substance? What is it doing in a supposedly peer-reviewed scientific journal like JDI?

OK, so it's cute, and it's provocative, and let's give the paper the benefit of the doubt and categorize it as editorial rather than scientific, which allows for some latitude.

And no doubt, somewhat like Keeping Up with the Kardashians and its ilk, since folks seem to be obsessed with train wrecks, it is probably destined to become the "most popular JDI article of all time".

And let's be even more generous and forgive the drawing of pretty boxes that smells like "Marchitecture". Or that it would be hard for a marketing executive to draft a more buzzword compliant brochure. And perhaps as an itemized list of contemporary buzzwords, it has some utility.

My primary issue is with the title, specifically the mention of "autopsy".

Worse, the author's follow-up at the SIIM 2015 meeting in his opening address entitled "The Next Imaging Evolution: A World Without PACS (As We Know It)" perpetuated this theme of impending doom for PACS, a theme that dominated the meeting.

Indeed, though the SIIM 2015 meeting was, overall, very enjoyable and relatively informative, albeit repetitive, the main message I returned home with was the existence of a pervasive sense of desperation among the attendees, many of whom seem to fear not just commoditization (Paul Chang's theme in past years) but perhaps even total irrelevance in the face of the emerging "threat" that is enterprise image management. I.e., PACS administrators and radiologists are doomed to become redundant. Or at least they are if they don't buy products with different labels, or re-implement the same solutions with different technology.

When did SIIM get hijacked by fear-mongers and doubters? We should be demanding more rigidly defined areas of doubt and uncertainty ... wait, no, wrong radio show.

OK, I get that many sites are faced with the challenge of expanding imaging beyond radiology and cardiology, and indeed many folks like the VA have been doing that for literally decades. And I get that Meaningful Use consumes all available resources. And that leveraging commodity technology potentially lowers barriers to entry. And that mobile devices need to be integrated. And that radiology will no longer be a significant revenue stream as it becomes a cost rather than profit center (oops, who said that). But surely the message that change may be coming can be spun positively, as an opportunity rather than a threat, as incremental improvement rather than revolution. Otherwise uninformed decision makers as well as uneducated worker bees who respond to hyperbole rather than substance, or who are seeking excuses, may be unduly influenced in undesirable or unpredictable ways.

More capable commentators than I have criticized this trend of hyping the supposed forthcoming "death of PACS", ranging from Mike Cannavo to Herman O's review of SIIM 2015 and the equally annoying deconstruction mythology.

Call me a Luddite, but these sorts of predictions of PACS demise are not new; indeed, I just came across an old RSNA 2003 abstract by Nogah Haramati entitled "Web-based Viewers as Image Distribution Solutions: Is PACS Already a Dead Concept?". Actually, encountering that abstract was what prompted me to write this diatribe, and triggered the festering irritation to surface. It is interesting to consider the current state of the art in terms of web viewing and what is currently labelled as "PACS" in light of that paper, considering it was written and presented 12 years ago. Unfortunately I don't have the slides, just the abstract, but I will let you know if/when I do get hold of them.

One has to wonder to what extent recent obsession with this morbid terminology represents irresponsible fear mongering, detachment from whatever is going on in the "real world" (something I am often accused of), self-serving promotion of a new industry segment, extraordinary popular delusions and the madness of crowds, or just a desire to emulate the breathless sky-is-falling reporting style that seems to have made the transition from cable news even to documentary narrative (judging by the "Yellowstone fauna are doomed" program we watched at home on Animal Planet the other night). Where is David Attenborough when you need him? Oh wait, I think he's dead. No he's not!

David

plus c'est la même chose

Sunday, October 4, 2015

What's that 'mean', or is 'mean' 'meaningless'?

Summary: The SNOMED code for "mean" currently used in DICOM is not defined as any particular kind of mean, which comes to light when considering adding geometric as opposed to arithmetic mean. Other sources like NCI Thesaurus have unambiguously defined terms. The STATO formal ontology does not help because of its circular and incomplete definitions.

Long Version:

In this production company closing logo for Far Field Productions, a boy points to a tree and says "what's that mean?"

One might well ask when reading DICOM PS3.16 and trying to decide when to use the coded "concept" (R-00317, SRT, "Mean") (SCT:373098007).

This question arose when Mathieu Malaterre asked about adding "geometric mean", which means (!) it is now necessary to distinguish "geometric" from "arithmetic" mean.
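
For anyone rusty on the distinction, the two are not interchangeable, as a trivial worked example shows (exp of the mean of the logs being one way to compute the geometric mean):

  import math

  values = [2.0, 8.0]

  # arithmetic mean: (2 + 8) / 2 = 5.0
  arithmetic = sum(values) / len(values)

  # geometric mean: sqrt(2 * 8) = 4.0, computed as exp(mean(log(x)))
  geometric = math.exp(sum(math.log(v) for v in values) / len(values))

  print(arithmetic, geometric)  # 5.0 4.0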

As you probably know, DICOM prefers not to "make up" its own "concepts" for such things, but to defer to external sources when possible. SNOMED is a preferred such external source (at least for now, pending an updated agreement with IHTSDO that will allow DICOM to continue to add SNOMED terms to PS3.16 and allow implementers to continue to use them without license or royalty payments, like the old agreement). However, when we do this, we do not provide explicit (textual or ontologic) definitions, though we may choose to represent one of multiple possible alternative terms (synonyms) rather than the preferred term, or indeed make up our own "code meaning" (which is naughty, probably, if it subtly alters the interpretation).
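
To illustrate where the ambiguous code actually shows up, here is a minimal sketch (pydicom again, with the "Velocity" concept name deliberately drawn from a made-up 99EXAMPLE scheme as a placeholder) of the common TID 300-style pattern, in which (R-00317, SRT, "Mean") appears as a derivation concept modifier on a NUM measurement:

  from pydicom.dataset import Dataset

  def code(value, scheme, meaning):
      # build a single code sequence item
      c = Dataset()
      c.CodeValue = value
      c.CodingSchemeDesignator = scheme
      c.CodeMeaning = meaning
      return c

  # the measured value itself, with UCUM units
  mv = Dataset()
  mv.NumericValue = '42.5'
  mv.MeasurementUnitsCodeSequence = [code('cm/s', 'UCUM', 'cm/s')]

  # the derivation modifier: this is where the ambiguous "Mean" lives
  derivation = Dataset()
  derivation.RelationshipType = 'HAS CONCEPT MOD'
  derivation.ValueType = 'CODE'
  derivation.ConceptNameCodeSequence = [code('121401', 'DCM', 'Derivation')]
  derivation.ConceptCodeSequence = [code('R-00317', 'SRT', 'Mean')]

  # the NUM content item (concept name is an illustrative placeholder)
  num = Dataset()
  num.RelationshipType = 'CONTAINS'
  num.ValueType = 'NUM'
  num.ConceptNameCodeSequence = [code('V-001', '99EXAMPLE', 'Velocity')]
  num.MeasuredValueSequence = [mv]
  num.ContentSequence = [derivation]

Nothing in that object says whether the mean is arithmetic or geometric; the recipient just gets "Mean".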

So what does "mean" "mean"?

Well, SNOMED doesn't say anything useful about (R-00317, SRT, "Mean") (SCT:373098007). The SNOMED "concept" for "mean" has parents:

> SNOMED CT Concept (SNOMED RT+CTV3)
  > Qualifier value (qualifier value) 
     > Descriptor (qualifier value) 
        > Numerical descriptors (qualifier value)
 

which doesn't help a whole lot. This is pretty much par for the course with SNOMED, even though some SNOMED "concepts" (not this one) have (in addition to their "Is a" hierarchy) a more formal definition produced by other types of relationship (e.g., "Procedure site - direct", "Method"), etc. I believe these are called "fully defined" (as distinct from "primitive").

So one is left to interpret the SNOMED "term" that is supplied as best one can.

UMLS has (lexically) mapped SCT:373098007 to UMLS:C1298794, which is "Mean - numeric estimation technique", and unfortunately has no mappings to other schemes (i.e., it is a dead end). UMLS seems to have either consciously or accidentally not linked the SNOMED-specific meaningless mean with any of (C0444504, UMLS, "Statistical mean"), (C2347634, UMLS, "Population mean") or (C2348143, UMLS, "Sample mean").

There is no UMLS entry for "arithmetic mean" that I could find, but the "statistical mean" that UMLS reports is linked to the "mean" from NCI Thesaurus, (C53319, NCIt, "Mean"), which is defined textually as one might expect, as "the sum of a set of values divided by the number of values in the set". This is consistent with how Wikipedia, the ultimate albeit evolving source of all knowledge, defines "arithmetic mean".

SNOMED has no "geometric mean" but UMLS and NCI Thesaurus do. UMLS:C2986759 maps to NCIt:C94906.

One might expect that one should be able to do better than arbitrary textual definitions for a field as formalized as statistics. Sure enough I managed to find STATO, a general-purpose STATistics Ontology, which looked promising on the face of it. One can poke around in it on-line (hint: look at the classes tab and expand the tree), or download the OWL file and use a tool like Protégé.
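
If you would rather poke at it programmatically than in Protégé, here is a minimal sketch using owlready2, assuming the usual OBO Library PURL for the STATO OWL file (which I have not verified to be stable):

  from owlready2 import get_ontology

  # load STATO from the (assumed) OBO Library PURL
  onto = get_ontology("http://purl.obolibrary.org/obo/stato.owl").load()

  # find classes whose rdfs:label is "average value"
  for cls in onto.search(label="average value"):
      print(cls.iri)
      print(list(cls.label))  # human-readable labels (no "codes", note)
      print(list(cls.is_a))   # parents and restrictions, including the
                              # 'averaging data transformation' axiom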

If you are diligent (and are willing to wade through the Basic Formal Ontology (BFO) based hierarchy):

entity
> continuant
  > dependent continuant
    > generic dependent continuant
      > information content entity
        > data item
          > measurement data item
            > measure of central tendency
              > average value

one finally gets to a child, "average value", which has an "alternative term" of "arithmetic mean".

Yeah!

But wait, what is its definition? There is a textual annotation "definition" that is "a data item that is produced as the output of an averaging data transformation and represents the average value of the input data".

F..k! After all that work, can you say "circular"? I am sure Mr. Rogers can.

More formally, STATO says "average value" is equivalent to "is_specified_output_of some 'averaging data transformation'". OK, maybe there is hope there, so let's look at the definition of "averaging data transformation" in the "occurrent" hierarchy (don't ask; read the "Building Ontologies with Basic Formal Ontology" book).

Textual definition: "An averaging data transformation is a data transformation that has objective averaging". Equivalent to "(has_specified_output some 'average value') or (achieves_planned_objective some 'averaging objective')".

Aargh!

Shades of lexical semantics (Cruse is a good read, by the way), and about as useful for our purposes :(

At least though, we know that STATO:'average value' is a sub-class of STATO:'measure of central tendency', which has a textual definition of "a measure of central tendency is a data item which attempts to describe a set of data by identifying the value of its centre", so I guess we are doing marginally better than SNOMED in this respect (but that isn't a very high bar). Note that in the previous sentence I didn't show "codes" for the STATO "concepts", because it doesn't seem to define "codes", and just uses the human-readable "labels" (but Cimino-Desiderata-non-compliance is a subject for another day).

In my quest to find a sound ontological source for the "concept" of "geometric mean", I was also thwarted. Apparently no such animal in STATO yet, as far as I could find (maybe I should ask them).

So not only does STATO have useless circular definitions but it is not comprehensive either. Disappointed!

So I guess the best we can do in DICOM for now, given that the installed base (especially of ultrasound devices) probably uses (R-00317, SRT, "Mean") a lot, is to add text that says when we use that code, we really "mean" "mean" in the sense of "arithmetic mean", and not the more generic concept of other things called "mean", and add a new code that is explicitly "geometric mean". Perhaps SNOMED will add a new "concept" for "geometric mean" on request and/or improve their "numerical descriptors" hierarchy, but in the interim either the NCI Thesaurus term NCIt:C94906 or the UMLS entry UMLS:C2986759 would seem to be adequate for our purposes. Sadly, the more formal ontologies have not been helpful in this respect, at least the one I could find anyway.

Maybe we should also be extremely naughty and replace all uses of (R-00317, SRT, "Mean") in the DICOM Standard with (R-00317, SRT, "Arithmetic mean"), just to be sure there is no ambiguity in the DICOM usage (and suggest to SNOMED that they add it as an alternative term). This would be less disruptive to the DICOM installed base than replacing the inadequately defined SNOMED code with the precisely defined NCI Thesaurus code.

David

PS. I italicize "concept" because there is debate over what SNOMED historically and currently defines "concept" to be, quite apart from the philosophical distinctions made by "realist" and "idealist" ontologists (or is it "nominalists" and "conceptualists"). I guess you know you are in trouble when you invoke Aristotle. Sort of like invoking Lincoln I suppose (sounds better when James McEachin says it).