For me, the reading radiologist, the GUI, or Graphical User Interface, IS the PACS. I don’t (usually, anyway) tap into the service areas or play with the RAID, or even shove a CR cassette into the reader. What I do, day in and day out, is sit at the workstation and interpret images. As a Nuclear Radiologist, and not an interventionalist, I spend more than 80% of my working time with the mouse in my right hand and the microphone in my left. If I’m not talking into the mike, I’m not making money for my group, right?
It is no secret that I do like the Amicas LightBeam system, and Agfa Impax 4.5 to a lesser extent, and I don’t like GE Centricity or ScImage. I won’t bore anyone with a feature-by-feature comparison, and I don’t want to reveal anything proprietary anyway (although I would be bloody surprised if any major company out there doesn’t know most of the details of the competition’s products). So, what makes a good GUI? In my humble opinion, the key is to provide functionality without breaking my concentration on the image itself. In other words, the system needs to help me do my work without getting in my way.
There are a lot of little functions Amicas provides that really speed things along. Let me cite just one: the spine-labeling tool. I can label an entire MRI of the spine in less than 5 seconds. Amicas seems to have been one of the few companies to notice that three-dimensional localizing data is included in the DICOM header; they use this to propagate the label to all other planes after you have manually labeled the levels of one image of one projection. With Agfa, this simple procedure is painful and takes several minutes instead of seconds.
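The trick isn’t magic once you know that every DICOM image carries Image Position (Patient), Image Orientation (Patient), and Pixel Spacing tags, which together pin each pixel to a 3D point in the patient. Here’s a rough sketch of how label propagation could work; the function names and all the tag values are mine, invented for illustration, not anyone’s actual implementation:

```python
# Sketch: propagating a spine label from one plane to another using the
# 3D geometry in the DICOM header. All values below are invented.

def pixel_to_patient(ipp, iop, spacing, row, col):
    """Map a (row, col) pixel to 3D patient coordinates (mm).

    ipp:     Image Position (Patient), the 3D location of pixel (0, 0)
    iop:     Image Orientation (Patient), row then column direction cosines
    spacing: Pixel Spacing as (row_spacing, col_spacing) in mm
    """
    row_dir, col_dir = iop[:3], iop[3:]  # direction cosines
    return tuple(
        ipp[k] + row_dir[k] * spacing[1] * col + col_dir[k] * spacing[0] * row
        for k in range(3)
    )

def nearest_axial_slice(point, axial_z_positions):
    """Pick the axial slice whose table position is closest to the label."""
    z = point[2]
    return min(range(len(axial_z_positions)),
               key=lambda i: abs(axial_z_positions[i] - z))

# A label placed on a sagittal image at pixel (row=10, col=20):
sag_ipp = (0.0, -100.0, 50.0)       # pixel (0,0) in patient space, mm
sag_iop = (0, 1, 0, 0, 0, -1)       # columns run A->P, rows run S->I
sag_spacing = (1.0, 1.0)            # mm per pixel

label_xyz = pixel_to_patient(sag_ipp, sag_iop, sag_spacing, 10, 20)
axial_z = [50.0, 45.0, 40.0, 35.0]  # table positions of an axial series
print(nearest_axial_slice(label_xyz, axial_z))  # slice index 2
```

Once the label exists as a patient-space coordinate, dropping it onto the matching slice of any other series or plane is just this kind of arithmetic, which is presumably why Amicas can do it instantly.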
But I digress. To avoid getting anyone (more) upset with me, let’s stick to generalities. The following description includes elements of systems I’ve worked with, demo’d, or read about, as well as some elements I’ve just made up all by myself. The guts beneath the system are left to those better versed in such things, but I think there is no turning back from the web-based or TCP/IP approach. It makes sense not to reinvent the wheel. The web was designed to move data back and forth seamlessly and safely. True, the old ARPANET could never have envisioned transmission of gigabit-size datasets, but today, this is routine.
The Dalai-PACS system has two main elements, the worklist and the viewer. Both have to be customizable per user sign-on. The worklist has to show the important information (name, exam, priors, etc.), provide some alert mechanism for status, such as color-coding, and must of course let other users know if an exam is in use. Many vendors have something like this today, and Amicas has the best implementation in my humble opinion. Distributing exams to individual sign-ons makes no sense today, and distribution to designated workstations makes even less. That made sense back in 1997.
The viewer is what makes or breaks the thing. The Dalai-PACS viewer is intuitive, and as automated as possible without taking control away from the user. Automation basically is a step beyond most of today’s implementations of hanging protocols, though some hanging protocols are getting pretty smart. I want my system to know what prior(s) I need pulled and how to display and link them in the viewer. Obviously, linkage of CTs must be done by table position. With today’s technology, the user will still have to match the position manually. I have heard about an attempt to use surface mapping and contour matching to do this automatically; that would be most welcome. I should be able to link studies with a minimum of effort: select matching slices and click a button. I have this with Amicas today, with instant, one-click (two at most) synchronization even if I have three different windows of the same exam, or three different sequences for that matter. With Centricity, just linking a dual-window study with its prior takes about ten clicks on a good day: you must go to a little drop-down menu, select link or break link, and have each and every window set at the proper level, or you’re not going to get what you want.
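The one-click version of this could work something like the following: once the user pairs a single slice from each exam, the offset between table positions links everything else. This is a hypothetical sketch, not any vendor’s actual code, and the positions are made up:

```python
def link_by_table_position(cur_z, prior_z, cur_idx, prior_idx):
    """Link two CT series after one manual slice match.

    cur_z, prior_z:     table positions (mm) of each series' slices
    cur_idx, prior_idx: the pair of slices the user matched by hand
    Returns, for each current slice, the index of the nearest prior slice.
    """
    # The single manual match defines a constant shift between the exams.
    offset = prior_z[prior_idx] - cur_z[cur_idx]
    return [
        min(range(len(prior_z)), key=lambda j: abs(prior_z[j] - (z + offset)))
        for z in cur_z
    ]

cur = [0.0, 5.0, 10.0, 15.0]          # current exam table positions
prior = [2.0, 7.0, 12.0, 17.0, 22.0]  # prior exam, shifted and longer
print(link_by_table_position(cur, prior, 0, 0))  # [0, 1, 2, 3]
```

Note that this stays correct even when the two exams have different slice counts or starting positions; it only breaks down when the patient’s anatomy shifts relative to the table, which is exactly the problem the contour-matching idea would solve.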
The next question is the display of 3D. Do we launch a Voxar or TeraRecon window, or should it be incorporated as a window within the regular viewer? I personally like to view an exam with simultaneous displays of soft-tissue, bone, and lung windows; in a 2×2 display, the fourth window could be a volume rendering or a coronal reconstruction. The major 3D programs from Voxar, GE AW, TeraRecon, Siemens InSpace/Leonardo, etc., can all be set up to do this, with varying degrees of difficulty. (Note that I don’t mention Vital Images… We own three Vitreas; one blew a hard disk drive, and Vital would not respond to TEN calls requesting service. They won’t be considered for anything in my place again.) One of my former colleagues believes that dual 3D display is the way to go for each and every CT, with isovolumetric voxel acquisition. Personally, I tend to be a little hesitant about acquiring sub-mm sections and doing full-fledged 3D analysis on a 500-1000 slice dataset on every single abominable pain witch-hunt. To my knowledge, dual 3D display is available today from ScImage, GE AW, and Philips, and is said to be in development by Siemens. Honestly, I don’t use it when I have to use ScI, since setting up a viewing session is a pain, and keeping the two exams synched is a bigger pain. I think my former colleague who believes in this approach does it more for speed, and probably syncs the axial images only with a thick MIP. To me, 3D is for problem-solving, and new and old studies can be satisfactorily compared for now with linking of the axial images. I’ll likely change my mind when someone comes up with the automatic matching thing.
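For what it’s worth, the thick MIP my former colleague relies on is conceptually simple: at each pixel, take the maximum intensity through a slab of adjacent axial slices, which makes thin-section exams readable at speed. A toy sketch in plain Python, with invented data:

```python
def thick_mip(volume, center, half_thickness):
    """Maximum-intensity projection through a slab of axial slices.

    volume:         list of slices, each a 2D list of pixel intensities
    center:         index of the slice of interest
    half_thickness: slices to include on each side of center
    """
    lo = max(0, center - half_thickness)
    hi = min(len(volume), center + half_thickness + 1)
    slab = volume[lo:hi]
    rows, cols = len(slab[0]), len(slab[0][0])
    # For each pixel, keep the brightest value seen anywhere in the slab.
    return [[max(s[r][c] for s in slab) for c in range(cols)]
            for r in range(rows)]

# Three tiny 2x2 "slices"; project a 3-slice slab centered on slice 1:
vol = [[[1, 2], [3, 4]],
       [[5, 0], [1, 9]],
       [[2, 2], [2, 2]]]
print(thick_mip(vol, 1, 1))  # [[5, 2], [3, 9]]
```

Real implementations do this on the GPU over hundreds of sub-mm sections, but the principle is the same: a thick MIP collapses the slab’s contrast into one slice, which is why syncing only the MIPped axials is fast but not a substitute for true 3D problem-solving.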
Every system has to have the requisite markup, windowing, series, and image tools, and these just need to be logically arranged, not scattered like one of my son’s LEGO constructions. The most often-used functions should be available with a right click, and the user should be able to determine which those are. However, I’m not sure I like the approach of having ALL functions user-customizable. Agfa 4.5 gives me literally 500 tweakable features, and these change with the modality; in the end, most users would be better off with a somewhat narrower set of options. Likely 95% of users would want very similar deployments. There should be integration with the HIS/RIS and instant availability of prior reports and demographics, and in the age of the IHE, perhaps Pathology and other results as well.
It goes without saying that the thing has to work every single time, without losing studies repeatedly in transmission, or requiring the three-finger salute 20 times a day. If I mark something as “Dictated”, it had better stay marked. The “skin” of the GUI should look professional, but not distracting. I really don’t need the bridge of the Enterprise, unless someone can include a transporter in the whole package and beam me to the beach in time for my sunset margarita. The buttons and icons need to be similarly clean and clear, and not done in Microsoft Paint by a junior-high student. It is clear that some companies expend a great deal of their graphic arts dollars on their websites and AuntMinnie.com ads, and very little on the interface of their products.
There are about a thousand other factors to include, not counting all the tiny little details that are critical to making a system a success. Consider this a primitive first-pass at a very complex topic. I would welcome your comments.