Excuses, Excuses, Excuses…Must Be The Russians’ Fault, But The AI STILL Isn’t Taking Your Job

Forgive me, loyal readers (both of you), for I have sinned. My last post was in May, months ago, and I’ve not posted since. This is unacceptable, and I humbly accept my penance of getting my writing back on track.

But there are reasons for my sloth. No, the Nuance Ransomware foolishness is not at fault. I don’t use Nuance, and I do apply security patches the moment they are released. Unlike Nuance. (I guess if you can convince the healthcare world that Speech Recognition actually works, you must feel invincible.) As you know, I’m going on another RAD-AID trip to Tanzania in a few weeks. My task there is to aid the growth of Nuclear Medicine at the Aga Khan Hospital in Dar es Salaam. To that end, I’ve been quite busy putting together a number of lectures covering at least the basics of NM. Creating these talks is certainly a labor of love, and I’m hoping I’ve hit close to the mark on the level of complexity I’ll be presenting. I found on my previous trip to Ghana that the physicians were nothing short of brilliant; what I had to offer was not knowledge per se, but rather 28 years of experience in private practice. I suspect I’ll find the same in Tanzania. Please stay tuned for my daily (mostly) log of the trip. I do plan to make a 48-hour excursion to the Ngorongoro crater, and I’m not sure what sort of WiFi might be found there, so expect a gap. I’m hoping to spend one afternoon on the nearby island of Zanzibar as well; it is said to have some incredible beaches, and, well, it’s Zanzibar!

I’m taking a break from taking a break in writing due to a recent, rather sad post on Aunt Minnie from a fellow (I chauvinistically assume) named shouldadonerads:

Hi everyone,

I was seeking some advice on a peculiar situation I’ve gotten myself into. I graduated med school in May. I applied for rads and received 20+ interviews. However, lurking these forums and others, I couldn’t stop thinking about the AI scare and ended up applying to another specialty concurrently and ranking both. I ended up matching into the other specialty. I am currently in a transitional year and set to start the other specialty in 7/2018. However, I feel a deep sense of regret and realize now that rads is really where my passion lies.

Is it still possible or advisable for me to get a rads spot (I want DR, not IR)? If so, how should I go about this? Thank you very much.

On a side note, the amount of negativity I encountered on forums and even my interview trail was ridiculous. On more than one occasion I had a PD/faculty member question why anyone would go into radiology now. I accept sole responsibility for the predicament I’m in. But just as a side note, for those that are here in positions of interacting with medical students, please keep in mind what you say really affects prospective trainees and their view of the fields, possibly a lot more than you think.

Emphasis mine.

This makes me sad. No, this makes me angry. Really angry. You folks out there pushing the “AI will replace radiologists” meme are HURTING people. Like Dr. Shouldadonerads. And you are hurting the profession. Why are you doing this?

There are a number of possible explanations. Some truly believe that machines will someday (soon) take our jobs. Some might have invested in AI startups (or IBM). Some might think there are too many radiologists, and wish to thin the herd. And some are just jerks, trolls who want to make trouble. After all, they say, we have self-driving cars and Google can recognize a picture of a cat. Thus, robot-rads are obviously just around the corner. Right.

I cannot say with absolute certainty that machines won’t be able to read studies, render final reports, and displace rads, but I seriously doubt that this will happen. Conversely, I would take with a Mt. Everest-sized (or, shortly for me, Kilimanjaro-sized) grain of salt any claims that they will. I’ve had the opportunity, as I’ve reported previously, to speak with many of the principals of IBM’s Watson Health, and they insist that Watson will be a tool to be used by Radiologists, nothing more, nothing less. And I’ve even had a long chat with the person whom I most respect in this space, Dr. Eliot Siegel, who has been researching AI applications in Radiology for a very long time. Dr. Siegel is adamant that we are not going to be replaced. The key is to control the development of radiologic AIs, he says, and I think he is absolutely correct.

While Wall Street has an imperfect record of accurate predictions, the old phrase, “Follow the money!” tends to be a safe recommendation. Hugh Harvey, a British Radiologist quite actively involved in commercial AI medical applications, writes in “Where to Invest in Radiology AI”:

Avoid companies claiming to replace humans. Not one single company has ever got FDA approval for a clinical diagnostic device that is not overseen by a human. Instead, to reduce regulatory burden, look for companies producing software that works alongside and augments humans, known as Clinical Decision Support. These may be triage systems, quantitative analysis tools, registration or segmentation systems. If you absolutely must invest in a diagnostic service, be sure to have deep pockets – FDA fees for PMAs start at $250,000. Good luck to you!

Dr. Harvey also dismisses Geoffrey Hinton’s famous dismissal of Radiologists: “We should stop training radiologists right now,” declared Google’s Hinton. Not so fast, says Dr. Harvey:

I’m a huge evangelist of AI in radiology, but also a pragmatist and a realist. I do not subscribe to the ideology that radiologists will be replaced in a mere 5 years time, but I do believe that radiologists will be incredibly well served and augmented by AI within the next decade. My respect for Geoffrey Hinton is immense; he is quite literally the godfather of image perception, after all. However, his famous quote over-eggs the pudding quite considerably (and I’m sure that if pressed he would clarify and cushion this statement!). For starters, his implication is that the only thing a radiologist does is interpret images — a huge misrepresentation of an entire profession. He also assumes hospitals will accept new technology unquestioningly. I only need point to the abysmal uptake of CADx software over the past decade to demonstrate how difficult it is to infuse new tech into the clinical frontline…

AI promises huge amounts of future reward, but total replacement of radiologists is not happening in the foreseeable future.

The nay-sayers thus have either been taken in by hype, or have some reason for spreading it.

Some hype is good; it helps drive research, bring investment, raise awareness, and create competition. But hype can also be detrimental; it can lead to over-promising, lack of investment in improving current practice, and rushed unscientific approaches to problems… As we start to drop over the hype apex into the trough of disillusionment, we will start to see excitement wear off rapidly as reality sets in.

Graphically, this is Gartner’s Hype Cycle, and clearly we are still at the Peak of Inflated Expectations. As for investing, do take Dr. Harvey’s advice (again, emphasis mine):

Invest in companies that will help grow radiology AI as a sector, not just the end products. If I had a multi-million fund to invest, I wouldn’t even look for companies involved in image interpretation. What is sorely needed in the field is not the algorithms (these are the fruit) – it’s the infrastructure behind it (the trees) that’s important. Invest in the orchard!

{snip}

My final piece of advice is simple: be a tortoise, not a hare. You are in for the long haul. Do not expect significant return in under a 3 year timescale. Spread your investments and plan for a 5-10 (even 15) year period of scaling. Those who invest wisely now and choose companies that can scale smartly on focused problems can lead the market infrastructure. Those who rush and over-promise will only have to play catch-up later down the line.

While there are a good number of small-fry out there working on this piece of AI, that piece of machine-learning, the other bit of image recognition, etc., I would have thought IBM’s Watson is closer to becoming our little electronic helper than anything else out there. I’ve always had tremendous respect for IBM (although I still favor Macs over PCs) and if any company can get there, it should be IBM. But even Big Blue is seeing a tinge of red…

While I don’t think Watson is a “joke” as per the title of this Forbes piece, I think we need to realize that even he (it?) isn’t there yet:

In February 2017, M.D. Anderson Cancer Center canceled a promising, but troubled contract with IBM for its Watson platform. “The breakup with M.D. Anderson seemed to show IBM choking on its own hype about Watson,” Freedman added. “The University of Texas, which runs M.D. Anderson, announced it had shuttered the project, leaving the medical center out $39 million in payments to IBM—for a project originally contracted at $2.4 million.”

It’s unclear, however, what the root of the problem was for M.D. Anderson. “Most of the criticism of Watson, even from M.D. Anderson, doesn’t seem rooted in any particular flaw in the technology. Instead, it’s a reaction to IBM’s overly optimistic claims of how far along Watson would be by now,” Freedman added. “After four years it had not produced a tool for use with patients that was ready to go beyond pilot tests.”

The medical community was similarly concerned about Watson’s shortcomings at M.D. Anderson. “A university audit of the project exposed many procurement problems, cost overruns, and delays. Although the audit took no position on Watson’s scientific basis or functional capabilities, it did describe challenges with assimilating Watson into the hospital setting,” said Charlie Schmidt, writing for the Journal of the National Cancer Institute. “Experts familiar with Watson’s applications in oncology describe problems with the system’s ability to digest written case reports, doctors’ notes, and other text-heavy information generated in medical care.”

One could say a radiology report is text-heavy.

Why aren’t Watson and his AI cousins there yet?

A team of Booz Allen Hamilton experts and an MD blogging for Health Affairs explained this challenge. “Human intelligence outperforms machine-learning applications in complex decision making routinely required during the course of care, because machines do not yet possess mature capabilities for perceiving, reasoning, or explaining,” explained Ernest Sohn, a chief data scientist in Booz Allen’s Data Solutions and Machine Intelligence group; Joachim Roski, a principal at Booz Allen Hamilton; Steven Escaravage, vice president in Booz Allen’s Strategic Innovation Group; and Kevin Maloy, MD, assistant professor of emergency medicine at Georgetown University School of Medicine. “Moreover, despite significant progress, even state-of-the-art machine-learning algorithms often cannot deliver sufficient sensitivity, specificity, and precision (that is, positive predictive value) required for clinical decision making.”
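To make the quoted point concrete, here is a toy sketch (entirely my own illustration; the function names and numbers are invented, not from the article) showing why an algorithm with seemingly strong sensitivity and specificity can still have a positive predictive value far too low for clinical decision making:

```python
# Toy confusion-matrix metrics for a hypothetical screening algorithm.
# tp/fn/tn/fp = true positives, false negatives, true negatives, false positives.

def sensitivity(tp, fn):
    """True positive rate: of the patients with disease, how many were flagged."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: of the healthy patients, how many were cleared."""
    return tn / (tn + fp)

def ppv(tp, fp):
    """Positive predictive value (precision): of the flagged patients, how many are truly diseased."""
    return tp / (tp + fp)

# Invented numbers: 100 diseased patients (90 caught, 10 missed),
# 1000 healthy patients (900 cleared, 100 false alarms).
tp, fn, tn, fp = 90, 10, 900, 100

print(sensitivity(tp, fn))  # 0.9
print(specificity(tn, fp))  # 0.9
print(ppv(tp, fp))          # ~0.47 — fewer than half the flags are real disease
```

The point: when disease is rare relative to the screened population, even 90% sensitivity and specificity leave a PPV under 50%, which is exactly the gap the Booz Allen authors describe.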

Right now, it all comes back to hype:

As the M.D. Anderson fiasco illustrates, IBM fell into the trap of over-promising and under-delivering. “IBM claimed in 2013 that ‘a new era of computing has emerged’ and gave Forbes the impression that Watson ‘now tackles clinical trials’ and would be in use with patients in just a matter of months,” Freedman noted.

As to whether Watson will ever be useful in clinical situations? “This is hard,” opined Stephen Kraus, a partner at Bessemer Venture Partners. “It’s not happening today, and it might not be happening in five years. And it’s not going to replace doctors.”

It may be that a successful AI will come from the heart (or bowels) of Google. Or Facebook. I’m thinking Apple, personally. And don’t count Watson out. Not at all. The folks at IBM are some of the best in the world at what they do, and Watson still has great potential. He just got caught at the Peak of Inflated Expectations.

But to Dr. Shouldadonerads, and all the medical students out there who are listening to the trash-talk… The announcement of the death of Radiology is incredibly premature. (And some of those bleating it are immature.) I cannot envision AI taking your job, my job, or anyone else’s job as a Radiologist for the working lifetime of any of you out there. IT WILL NOT HAPPEN.

What WILL happen, eventually, is that AI will be at your side, well, on your workstation, and it will assist you. It will flag things you should see, suggest what those things might be, give you ready access to the patient’s medical record and a host of other things. Think of AI as your butler, your medical student/scut monkey, your pal, your friend. It is NOT your competition.

Look at it this way. When you plan a romantic evening, you might want your butler/ladies’ maid to prepare a nice dinner, put rose-petals on the bed, draw a nice bath. And then LEAVE. You really don’t want the butler to take care of the, ummm, pièce de résistance, now do you?

Your patients don’t either.

via Blogger http://ift.tt/2hf9CJ6 July 30, 2017 at 09:40PM
