The Inmates Are Running The Asylum

I have stolen my title from a book of the same name by one Alan Cooper. Mr. Cooper is Founder and Chairman of the Board of the Cooper company, which is devoted to leading-edge, customer/user oriented product and software design. From the company website:

For over 30 years Alan Cooper has been a pioneer of the modern computing era. His groundbreaking work in software invention, design and construction has influenced a generation of programmers and business people—and helped a generation of users. Alan is the author of two best-selling books, About Face: The Essentials of User Interface Design and The Inmates Are Running the Asylum, and his visionary ideas and outspoken style make him a popular speaker. Whether you know him as the “Father of Visual Basic,” the inventor of personas, or the guy who thinks software should be spanked, we know him as the man whose ideas are the foundation of what we do.

Mr. Cooper has a rather radical outlook on software and indeed other high tech offerings. From his “Inmates” book:

No matter how early in the development process specifications are drafted, they cannot substitute for interaction design. And no matter how hard they try, programmers cannot consistently arrive at a successful design. Not only are their methods, training, and aptitude wrong for the job, but they are caught in a strong conflict of interest between serving the user’s needs and making their programming job easier. Yet, in company after company, software engineers are allowed to control the development process, often from start to finish. Sometimes, their control is overt, but more typically it is indirect.

Hence, the inmates (the programmers…no offense!) are running the asylum. This is more or less what yours truly has been saying for a while in reference to PACS. The vendors are allowing their engineers and programmers to create technological magnum opuses which allow too many permutations, and ultimately get in the way of my reading my studies. Why is this tolerated? Cooper compares this to a dancing bear: we are all so impressed with the fact that the bear is dancing at all that we fail to observe just how badly it is choreographed.

So few software-based products have exhibited any real dancing ability that most people are honestly unaware that things could be better, a lot better. Most people using spreadsheets and word processors on modern computers imagine that all the problems that a computer can solve have been solved, and solved adequately if not well…Bill Gates once observed, with uncharacteristic cynicism, that the way you made software user friendly was by making a rubber stamp and stamping each box with the legend “USER FRIENDLY.” Unintentionally, his method has become the computer industry’s real method.

Mr. Cooper’s premier design company will help one create a software product (or hardware, for that matter) that does not suffer from these maladies. The company motto is: “Companies are more successful when their products help users achieve their goals.” Accomplishing this gargantuan task takes several steps, including Planning, Research, Modeling, Requirements, Framework, Refinement, and Support. Of these, modeling may be the most characteristic of Cooper’s contributions.

Are you frustrated by trying to develop a product that serves thousands–or even millions–of different users? Imagine if instead of trying to please everyone, you could design for three distinct people you knew very well. Wouldn’t that make your job easier?

This is the power of personas, a technique for modeling users that we invented here at Cooper. As we interview and observe users in their natural habitats, we look for patterns of behavior and goals shared by multiple people. Each distinct behavior pattern becomes the basis for a persona: a description of an archetypal user.

Personas help everyone from programmers to executives understand users in a way no other tool can: at a gut level. Personas help guide design decisions, end those lengthy arguments about what users need, and get everyone to agree on what product you’re building.

You don’t need an enormous project to enjoy the benefits of personas. For short projects, personas may just be quick sketches based on limited data. For large initiatives with distributed teams, a well-researched and thoroughly documented set of personas can be an essential reference and communication tool.

In an article about Cooper, Allison J. Head further describes this process:

The gist of Cooper’s argument is fairly straightforward: There will be far greater success designing an interface that meets the goals of one specific person, instead of trying to design for the various needs of many. At first blush, though, it may seem downright counterintuitive to design for just one person, whether hypothetical or not. How can designing for a single soul possibly ensure an interface that supports the needs of many users? But as an interface becomes more layered and complex and tries to serve an ever-widening audience base, Cooper’s argument holds true.

As long as personas are developed with diligence, the planning and development tool has three key benefits for interface design projects of all kinds. First, personas introduce teams to hypothetical users who have names, personal traits, and habits that in a relatively short time become believable constructs for honing design specifications. Second, personas are stand-ins with archetypal characteristics that represent a much larger group of users. Third, personas give design teams a strong sense of what users’ goals are and what an interface needs to fulfill them.

She goes on to describe what may happen if programmers and designers don’t use personas:

One of the best arguments for using personas comes from some misguided design efforts at Microsoft. When the software giant geared up to redesign its well-known Microsoft Office Suite for a 1997 release, the research team soon discovered that many of the features users wanted already existed. In fact, four out of five of the features users requested for Office 97 came with Office 95. The outcome of Microsoft’s design approach is just what Cooper warns against. In trying to support the diverse tasks of many conceivably different software users, Microsoft cobbled together a product that was bloated with capabilities and ended up satisfying few users.

I think most would agree. I am reminded of some PACS systems I have used….

Anyway, how might this approach be used to create a PACS? Can we assemble a “Joe Radiologist” persona that would cover most radiologists? Well, let’s see…. What do radiologists do all day with respect to PACS? We sit in front of the darn things and read studies. That pretty much is all we do, except for the interventional types and the barium-slingers. The real question is how Joe Radiologist wants to interface with his PACS. That is indeed the question. If I were taking this approach, I would talk with all of my partners, and ask them what they like and what they don’t like about the PACS systems we already use. In fact, I would post the question here on the blog, and open it up to as many rads as possible. I would then synthesize and distill this wide pool of answers down to a usable persona.
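For the technically inclined, that synthesize-and-distill step could be sketched in a few lines of code. To be clear, everything here (the survey fields, the tallying, the 80% threshold) is my own hypothetical illustration of the idea, not Cooper's actual method:

```python
from collections import Counter

def build_persona(surveys, threshold=0.8):
    """Distill a pile of radiologist survey answers into one archetype.

    Each survey is a dict of habit -> True/False ("uses keyboard
    shortcuts", "relies on hanging protocols", etc.). A habit makes it
    into "Joe Radiologist" only if a large majority share it.
    """
    counts = Counter()
    for answers in surveys:
        for habit, present in answers.items():
            if present:
                counts[habit] += 1
    n = len(surveys)
    return {habit for habit, c in counts.items() if c / n >= threshold}

# Hypothetical poll of three partners:
surveys = [
    {"keyboard_shortcuts": True, "hanging_protocols": True, "3d_tools": False},
    {"keyboard_shortcuts": True, "hanging_protocols": True, "3d_tools": True},
    {"keyboard_shortcuts": True, "hanging_protocols": False, "3d_tools": False},
]
joe = build_persona(surveys, threshold=0.8)
print(sorted(joe))  # only the near-universal behaviors survive: ['keyboard_shortcuts']
```

The point of the threshold is exactly the Saab-ignition-key caveat above: a feature loved by one partner but ignored by the rest should not end up defining "Joe."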

The key to this process is, I believe, to drop all preconceived notions, but not to forget lessons learned. If you are going to build a PACS (or a car, or a toilet-paper holder) you would like to make it usable and comfortable. There should be adequate features to cover at least 95% of the situations that will arise (easy with toilet-paper holders, harder with cars, easier than some think with PACS). Many products out there are obsessed with how to handle something that Joe Radiologist might have to do once a week, or once a month, and pay little attention to the repetitive grind that makes up 95% of our work. And many vendors are stuck in the rut of doing things the same way, version after version.

Therefore, it is critical to see how a wide swath of your potential customers would use your product, and it is very important to understand how they work with your competitors’ versions of the same thing. When Saab designs a new car, for example, it is probably wise to determine if the 99% of the world that buys cars other than Saabs would appreciate having the ignition key on the center console, which is where Saab (and no one else) puts it. If you are a PACS company, it is probably not terribly wise to gear your Joe Radiologist persona to those who are mostly users of your own PACS, and then proceed to come up with a next-generation PACS with its ignition key on the center console. I should also add that real radiologists need to be reintroduced to the process; don’t go with the stereotyped “Joe” for a prolonged period without a reality-check from the flesh-and-blood version. Trust me, if these caveats to the Cooper approach are discarded, the result will not be what “Joe Radiologist” wants at all.

One of my partners told me in all seriousness that I should create my own PACS system. I wish I had the time and the wherewithal to do so. Of course, out of vanity, my persona would be used instead of “Joe’s”, which would naturally yield the most wonderful system ever. Absolutely. Guaranteed.

A Survivor Of Voice/Speech Recognition Speaks Out

(Dalai’s note: I was approached by a colleague, and asked if I would post his VR/SR experiences online, but anonymously so he would not have the vendor in question breathing down his neck. I am more than happy to do so, and here it is….)

I will start by telling of our voice recognition (VR) experience.

About one and a half years ago we launched VR in our hospital system. It was an integrated “front end” VR product that resided on our PACS workstations. You may or may not have preconceived ideas about what quality of product we bought; suffice it to say we purchased what, for all intents and purposes, seemed to be the best VR product on the market. No, we didn’t buy some outdated piece of garbage.

It was a planned 6-week launch, with training provided primarily by the VR company. Initially, we were expecting things to be only a little stormy; how wrong we were. First, we had scheduling problems with training: some rads were unavailable, and the amount of time dedicated to each rad was short, only a few hours. In addition, the rads were learning while still having to attend to some clinical duties, which was a BIG MISTAKE.

So after a few weeks the trainers left, and there were still rads who had almost zero actual training. Our IT/IS support staff tried to help out as much as possible. It was just over a month later when things in the department finally caved in. We were chronically short-staffed, radiologist-wise, and work wasn’t being finished. Those of us who took to VR reasonably well were being completely overrun by those who did not, both by having to field multiple near-hysterical calls and by a markedly increased workload. There were multiple unexplained system crashes (denied initially, but fixed somewhat in a later patch) and greatly degraded workstation performance.

We had numerous direct meetings between the administration, the radiologists, and the VR company. Here is what we got: repeated requests for the radiologists to document every occurrence of a problem (yeah, right; like we weren’t totally swamped as it was, what’s one more thing to do), repeated denials that their software was to blame for workstation slowness and crashes (of course, later we would learn that yes, it was their software causing the crashes and performance issues), and repeated promises to fix the problems.

After almost 2 months of this, I, being the “radiologist champion” for VR and about 2 seconds from a complete nervous breakdown, told myself, “Either I stop using VR or I hand in my 90-day notice and look for a new job.” I turned off the VR product and told my fellow radiologists that I would no longer be fielding their questions about “how to make VR work”. The rest of the group followed suit immediately. It imploded overnight. The problems that still existed at this time were:

1) Random crashes requiring a system reboot
2) Unexplained, or rather un-admitted, slowness of the workstation
3) Consistent patterns of errors in VR; it did not appear to be learning, no matter how many times it was corrected
4) Almost no one in the department (IT/IS included) really knew how to use the advanced functions; macro generation was “trial and error”
5) Lack of advanced support; many questions to our IS/IT people generated the response, “I’ll send an email to the VR company and see what they say”
6) Many errors sent for “transcriptionist editing” came back uncorrected; the error rate of “transcriptionist editing” was far greater than with the old system. Many report errors escaped radiologist review as well and went out finalized. Many, many months later I would learn we had some actual “transcriptionist sabotage” going on, and this was NEVER disclosed to the radiologists.
7) A horrific loss of productivity in an already short-staffed group.

VR died for a long time; of course, the hospital had a vested interest in making this work.

We recently had a visit to a “model group” for VR and got to see them in action. This model group had watched us and decided to purchase the product anyway. They are a large group; they own the transcription, PACS, and other infrastructure and “contract” these services out to hospitals, so this was a business decision by a group that owns the entire operation. They did things very differently and claim a 90+% “sign off” rate. Here is what we observed and were told:

They began by sending their IT folks to the VR company for intensive training. They refused the VR company’s offer to do radiologist training; they were going to handle that all in-house. Prior to launch, their entire IT/IS support staff had been using the product for “months” and were fluent in its use, before a radiologist ever touched it. After launch, they had their trainers go back for additional training to learn to “hack” the word lexicon to overcome patterns of repeated errors. This has worked very well for them, although it is a process discouraged by the VR company. What we learned, to our dismay, was that VR was not just “appearing” not to learn certain repeated phrases or words; it REALLY WASN’T learning them at all, no matter how many times we made corrections manually. The “model group” also made skilled VR support staff available 24/7 to respond immediately to radiologist problems with VR.
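I have no idea how this vendor's engine works internally, but the behavior we saw is exactly what you would get if radiologist corrections were merely logged and never fed back into the recognition lexicon, while the "hack" merges them in. A toy illustration (class, method names, and internals are entirely mine, not the vendor's):

```python
class ToyVREngine:
    """Toy speech engine: a fixed lexicon plus a correction log.

    The point: if corrections only go into a log and are never merged
    into the lexicon, the engine makes the same mistake forever, no
    matter how many times the radiologist fixes the report.
    """
    def __init__(self, lexicon):
        self.lexicon = dict(lexicon)    # spoken word -> what the engine emits
        self.corrections = {}           # logged... and never applied

    def recognize(self, dictation):
        return [self.lexicon.get(word, word) for word in dictation.split()]

    def correct(self, wrong, right):
        self.corrections[wrong] = right  # the radiologist's fix goes here and dies

    def hack_lexicon(self):
        """What the 'model group' did: merge the corrections into the lexicon."""
        for wrong, right in self.corrections.items():
            for key, value in self.lexicon.items():
                if value == wrong:
                    self.lexicon[key] = right

engine = ToyVREngine({"pneumothorax": "new mother axe"})
print(engine.recognize("no pneumothorax seen"))  # wrong
engine.correct("new mother axe", "pneumothorax")
print(engine.recognize("no pneumothorax seen"))  # STILL wrong, correction only logged
engine.hack_lexicon()
print(engine.recognize("no pneumothorax seen"))  # fixed only after the lexicon hack
```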

They did a “staged” launch of VR, working one section at a time. Each radiologist was initially afforded a full half day of one-on-one training with no clinical duties (no phone calls, no tech interruptions, no exams needing to be read, NOTHING); after that, each received 3 full days of one-on-one training while working (they estimated a total of 24-28 training hours for each rad). By one-on-one, they mean the trainer was physically in the same room or sitting outside the radiologist’s door. They held section meetings every morning to discuss VR’s use and share tips, and they relied on a lot of “peer pressure” to make things go smoothly. The entire launch was slated to take about 1 year.

They made sure their workstations greatly exceeded the VR company’s recommendations. They made sure their transcriptionists were folded into other duties as the need for so many of them disappeared. I would add here that prior to VR, this group’s report turnaround was poor: over 2 days for routine stuff.

We were also told the group is now considering “financially penalizing” rads who use too much transcriptionist editing; this had not yet been implemented, and the penalties under consideration were not shared with us.

We then got to observe some rads in action. It really wasn’t magic. First, they have employed radiology “assistants”. These folks do a lot of the “scut work”, phone calls and so on; they basically make sure the radiologists are not disturbed, and the hospital pays for them. One rad was a “macro master”, although he appeared to spend much more time looking at the VR screen than at the exam. One rad absolutely hated the VR system, so all is not roses at this model group. The next few rads showed all the problems we have seen all along: lots of little annoying errors. Every rad admitted it slowed them down, but they had “made up” for it through intense training, heavy assistant use, awesome IS support staff, and excellent equipment. Most of them did feel that it was “worth it” for various reasons.

They did confirm many things we radiologists already knew: the software’s inability to learn certain words on its own, or rather its pattern of making the same errors repeatedly; the VR company’s gross underestimation of the training and IS support staff needed; the performance issues on the VR company’s “recommended” systems. Basically, this is something the radiologists need to “buy into”. They did admit to the limitations of VR, but said it is improving, and “As long as the radiologists see continual improvement they will stick with it”. They also, on their own, identified and developed work-arounds for many conflicts the software was having with other office software products.

Where do we go from here? A few in my group had prior experiences with VR, all universally negative. I had hoped, as did the administration, that “This is such a superior product, it will be different this time”. It was very disappointing to see not only every fear confirmed, but also some new ones realized. Any trust between the VR company and the radiologists has been destroyed beyond repair. This experience has certainly made us much more cautious in trusting the word of any vendor, especially software vendors, as I feel they may not fully grasp that they are selling a medical product and not a video game (i.e., doctors have no more tolerance for “buggy” software than they would for a “buggy” CT scanner). So it stands with my group, many of whom now have the well-justified attitude of “Hell will freeze over before I turn on VR again”. I am not sure this can ever be turned around. I hope, if nothing else, this review serves as a warning to radiologists, administrators, and most importantly VR vendors that VR is not something to be taken on lightly.

In a similar vein to Dalai Lama here are my 10 commandments for VR:

1) Voice recognition (VR) exists as only one of multiple possible solutions to the problem of radiology report turnaround time, the underlying problem being the near-instantaneous distribution of images through PACS unaccompanied by the radiologist’s report.
2) VR will only see its maximum benefit (i.e., near-instantaneous report generation) IF the radiologists self-edit and sign off the majority of reports.
3) VR, with or without self-editing, slows radiologists down. It costs productivity, as radiologists are now forced to spend additional time either self-editing or reviewing the VR-generated reports for errors. Do not assume your radiologists are going to quietly eat this productivity loss.
4) The loss of radiologist productivity can be compensated, to some degree, by the use of radiologist assistants to do some tasks formerly done by the radiologist. There is also a gain from the lessened need to re-review cases that have been dictated but not transcribed. Despite this, there may still be a long-term net productivity loss. Again, do not assume your radiologists are going to quietly eat it.
5) Poor system performance is a guaranteed PACS/VR killer.
6) VR and PACS are by definition “beta” software. There is no such thing as a perfect PACS or VR product; there are only “acceptable” performance parameters that will improve over time. VR may never be perfect.
7) The hospital shall accept its responsibility for supporting VR by ensuring its IT/IS support staff are well versed in PACS and VR and available as long as the radiologists are working. The IS/IT support staff shall have a full working knowledge of VR and be capable of training radiologists before a radiologist ever touches the software.
8) VR requires far more IT/IS/training support than the VR companies will admit.
9) VR is not a big money saver if properly implemented. Any savings from transcriptionists will be quickly eaten up by the need for hardware upgrades, software upgrades, training, and additional support staff.
10) Before implementing VR, ask yourself, “Am I doing this for the right reasons, and am I willing to commit additional resources, possibly indefinitely, to the proper launching and support of VR?”

PACS Abuse

An observant AuntMinnie poster found this lovely little item from the Cleveland Clinic:

Sub-specialized medical care is the standard of care today. Only an orthopaedic surgeon would perform an anterior cruciate ligament reconstruction, and only a neurosurgeon would remove a brain tumor. It is still common however, for general radiologists with no sub-specialized training to interpret the images that lead to these procedures.

This lack of sub-specialized radiology interpretations previously was necessary because of the need to have a radiologist in close proximity to the imaging center where procedures were performed. Now, through the use of new developments in information technology systems, Cleveland Clinic’s Department of e-Radiology offers sub-specialized radiology interpretations to referring physicians anywhere in the country. Better quality of medicine is possible through the use of interpretations and protocols developed and performed by Cleveland Clinic sub-specialty-trained academic radiologists.

A number of factors made growth in e-Radiology possible, most notably the imaging explosion of the past few years, which is expected to grow at the rate of 8% to 10% per year into the foreseeable future. Two additional factors are the national shortage of radiologists and the rapid developments in information technology that have allowed image interpretation to occur independent of geographical restrictions.

A large portion of the imaging growth is in outpatient imaging, with an estimated half billion outpatient imaging examinations projected to be performed in 2008. In 2002, 84% of magnetic resonance, 64% of computed tomography and 98% of positron emission tomography scans were done on an outpatient basis, and these percentages are expected to continue to increase over the next ten years. Only 67% of imaging is currently done in hospitals, down from 95% in 1980. Outpatient imaging accounted for $14.8 billion in Medicare revenue for hospitals in 2001 and generated $3 billion in profits.

Radiologist shortages have been significantly resolved through the use of teleradiology. Over the past few years, dramatic advances in information technology and the availability of cost-effective, high-speed telecommunication lines have removed almost all geographic restrictions on interpreting images. It is now possible to send digital images securely over the Internet on a virtually real-time basis to radiologists located anywhere such telecommunication lines exist. The radiologist is then able to dictate the studies using voice recognition software so that the images and report are available online for the referring physician within minutes via a HIPAA-compliant Web browser.

This is just plain disturbing on several levels. First off, the piece insults general radiologists, who are for the most part quite good at what they do. Will a super-specialized rad be better for the esoterica of his specialization? Probably, but on a day-to-day and head-to-head comparison, I think we would find that the generalist can hold his own. I am very distressed in this regard by the use of the term “standard of care.” This is a legal term, and the Cleveland Clinic is basically saying that the generalists out in the boonies are beneath that standard and are therefore committing malpractice!

Secondly, there is note of the explosion of imaging, but no decrying of the self-referral that is inherent in many of the “outpatient” studies that would be included in this designation.

Finally, the anonymous author of this piece is using the advent of my beloved PACS (technically teleradiology, but they have become essentially one and the same these days) to facilitate and even justify their desire to swoop in and snatch exams away from the lowly generalists.

I always advise my gentle readers to “follow the money,” and here, this is not a difficult task. Our friend at Cleveland Clinic is telling you that the lowly generalists out in the boonies can’t do an adequate job, but in years past, you had to settle for this because they were the only game in town. But wait, we now have PACS, and so we are finally able to help you poor downtrodden masses who have the bad fortune to live somewhere other than Cleveland. Just sign this reading contract, and pay us our fee, and we’ll all live happily ever after.

This drivel should be beneath an institution with the stature of the Cleveland Clinic. It is an abuse of PACS technology to generate revenue that really belongs elsewhere. AND, it encourages self-referral in a round-about way. I do hope everyone at the Cleveland Clinic will enjoy Hillary-Care, which they are helping to bring about with their greedy approach to radiology and medicine.

Stupid Pet, I Mean PACS Tricks

It’s amazing that a little glitch can cause a big pain. What’s even more amusing is the fact that this particular glitch is shared by Impax 6.x and GE Centricity 2.x.

Here’s the scenario…I’m looking at a CT, cine’ing through the slices. I want to change window settings, say from a mediastinal window to a bone window. Shouldn’t be a problem, especially if I have the window/level preset to a keyboard shortcut. BUT…sometimes, only the slice that is on the screen will have the parameters changed, with all other slices remaining at the original level. And just why does this happen only sometimes?

Our Agfa PACS admin actually figured this one out, and it turns out to be the explanation for Centricity’s bad behaviour as well. If you do nothing else to the scan other than change the W/L settings, there is no problem, and such changes apply to all slices. However, if you happen to stop and measure something, then all bets are off. In their infinite wisdom, the programmers decided that a measurement should only apply to the slice on the screen. That makes sense, I guess, but their solution was to switch the tool application mode (my term, but it works) from applying a change to all slices to applying it to just one. And it doesn’t return to the “all slices” mode when you go back to the window/level control; I have to switch it back manually, which, as a lazy slug, I resent. Worse, I don’t even realize that I have to make the change until I’m flipping through the images and see one slice at lung windows and the rest at a different setting altogether.
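My best guess at what's going on under the hood (the class, names, and structure below are mine, not Agfa's or GE's actual code): the viewer keeps one "apply scope" flag shared by every tool, and the measurement tool flips it to single-slice without anything ever flipping it back:

```python
class ToyViewer:
    """Toy CT viewer reproducing the shared tool-scope bug.

    Hypothetical reconstruction of the behavior described above, not
    vendor code: one scope flag is shared by all tools, and measuring
    silently changes it for the window/level tool too.
    """
    ALL, SINGLE = "all_slices", "current_slice"

    def __init__(self, n_slices, window="mediastinal"):
        self.windows = [window] * n_slices
        self.current = 0
        self.scope = self.ALL          # one flag shared by every tool

    def measure(self):
        self.scope = self.SINGLE       # sensible for a measurement...
        # ...but nothing resets it when you pick another tool

    def set_window(self, preset):
        if self.scope == self.ALL:
            self.windows = [preset] * len(self.windows)
        else:                          # the bug bites here
            self.windows[self.current] = preset

viewer = ToyViewer(5)
viewer.set_window("bone")
print(viewer.windows)   # all five slices change, as expected
viewer.measure()        # stop to measure something...
viewer.set_window("lung")
print(viewer.windows)   # only the current slice changed!
```

The obvious fix, in this sketch at least, is to give each tool its own scope (or reset the shared flag when the active tool changes) instead of letting the measurement tool leave its single-slice preference behind for everyone else.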

I have to throw in one more little trick from Impax. At the bottom of the viewer, if you right-click on an UNMARKED point (even my Agfa PACS admin didn’t know about this tool), you get a little sub-menu with three selections. The first is a limited history, which will bring up a list of the last 20 studies that were marked as Dictated. (This, of course, doesn’t help if you lost something that was marked “Dictation started”.) The second entry on this secret menu allows one to auto-hide the toolbars, which I have never seen anyone use. The final entry, “Show Navigation Toolbar at the top of the screen”, does just that. Sadly, there is a significant lag in the activation of any of these items, and the grouping is unintuitive enough that when I go after the history, I accidentally hit “Show Toolbar at top” as often as not. I don’t want the toolbar up there, and worse yet, it often disrupts the position of the other buttons at the top of the viewer, requiring me to log off and back on again to fix it. Really fun trick, guys.

How did we get these stupid PACS tricks? I will guarantee you that the developers didn’t demo or test these options in an actual production environment. I’m sure these functions made perfect sense on paper, but they yield yet another hindrance to my workflow out here in the real world of private practice.

Would you guys at GE and Agfa mind fixing this stuff, please?

TWO $1000 Mikes Disabled By 3mm Piece of Plastic…A bit of bad design

Our suburban hospital just replaced their antiquated tank with the modern, computer-based Fusion dictation system from Dolbey. This is a digital voice system, but it does NOT include speech recognition, thank Heavens.

I got there today to read out the weekend carnage, and what do I find? The laser barcode scanner on the microphone isn’t working right. This microphone is a nice little piece of equipment, a Philips SpeechMike classic with barcode reader, which retails for over $1000. I asked the director of the department if we had another one, and yes, we did, but that one had a malfunctioning barcode reader as well. I peeked inside the little red window, and activated the laser scanner…yes, really bad idea for someone who makes a living with his eyes! I could see the laser, but it appeared to be hitting above the top of the window, and not getting out to scan the barcode. There is a tiny adjustment knob at one side, and this was a little loose, and didn’t seem to change anything inside.

The Fusion system depends on barcoding at the moment; when we get this site on PACS later this year, the demographics should download from PACS automatically. Still, that is several months away. So, I could either figure out what was wrong, key in the accession numbers manually (by using the tiny trackball on the mike to select numbers on the on-screen keypad), or I could just do one long dictation, reading off the accession numbers and hoping for the best.

Being a tinkerer at heart, and having total disregard for warranty voidance, I took screwdriver in hand and disassembled the offending mike. Inside the bottom, pictured below, you can see the red window to the left, and a mirror to the right.

The next image is a little blurry (sorry, my phone doesn’t have good macro capability!) but it tells the whole story. The black box at the top of the microphone is the laser assembly. The black bar-like thing next to the red crayon is a black diffuser, which is used to direct the beam a bit so you can hold the mike more naturally. It is supposed to sit atop the laser. But the little tiny black C-shaped thing between the two is one of two tiny brackets that holds the diffuser in place and allows the little knob thing to ratchet it and hold it in place. The problem is, this tiny $.001 piece of plastic broke right off of its base (the roughly triangular thing to the left at the top of the laser housing), and rendered the whole assembly useless. Bad design. Really bad design.

Being too lazy to do anything else, I simply left the back off of the mike, and used the microswitch that is at my thumb as seen below to activate the laser. It is a little bright, and must be pointed perpendicular to the barcode, but it works. Of course, normally the microswitch would be keyed by a plastic trigger in the base. Notice the high-tech rubber band that keeps the cord from detaching. Nothing like the feel of bare circuit board in your hands…

So, problem solved temporarily. I do expect a replacement mike, with perhaps several in reserve, until Philips redesigns this really bad implementation. And, don’t even think of telling me that opening the case voids the warranty…..