This blog is finally starting to stimulate the kind of banter I had in mind. The discussion of web-based PACS has generated even more commentary. You can read the full versions by clicking on the “Comments” link below the last post. Here are some pertinent excerpts:
Anonymous writes: “Once you are in the application (after it is launched), the web is really out of the equation. You are running an application that is installed on your local machine. Most (all I would hope) applications have some capability to auto download new clients as they are available. So, to make a long story short, web or not, if an application can be downloaded and installed/configured from a web browser and if that same application will communicate on standard web ports, then to me everything else is the same.”
Peter then comments:
“I would much rather see a well designed, small fat client install which uses the internet to communicate to a backend database than…a ‘thin’ web app…. An easily available web site with a small, intuitive install would be so much more preferable to a web app. Limiting the customizations possible by the end-user saves a lot of support time in the long run.”
Uhhhh…can a client be small and fat? Sorry, don’t set me up with a straight line! I think we all may be talking about the same thing, but with different buzz words. Let’s take a look at the way a generic “web-based” system functions, and see where we might have some common ground on this part of the wider issue….
Those I’ve tried all seem to work a good deal like both Anonymous and Peter suggest. You go to a website with your browser, which acts as a gateway. Once you have logged in, the actual viewing client is examined and updated if needed, then launched. This client is generally a Java applet or ActiveX control or the like. A third component acts as a conduit for the image transmission through the web (usually on https port 443), and a worklist of some sort is displayed within the browser, either with HTML or maybe Java. If you have Voxar 3D, that is a completely separate program that either taps your PACS database or intertwines somehow with locally-cached images.
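The launch sequence above (check the installed viewer against what the gateway advertises, update if needed, then launch) can be sketched in a few lines. This is purely illustrative: the function names, version format, and messages are my own assumptions, not any vendor's actual API.

```python
# Hedged sketch of the generic "web-based" PACS launch sequence: compare
# the locally installed viewer version against what the gateway advertises,
# update if needed, then launch. All names here are assumptions for
# illustration only.

def parse_version(v: str) -> tuple:
    """Turn a version string like '4.5.1' into (4, 5, 1) so it compares numerically."""
    return tuple(int(part) for part in v.split("."))

def needs_update(installed: str, advertised: str) -> bool:
    """True when the gateway offers a newer viewer client."""
    return parse_version(advertised) > parse_version(installed)

def launch_viewer(installed: str, advertised: str) -> str:
    # In a real system this step would pull the Java applet or ActiveX
    # control down over HTTPS (port 443); here we just report the decision.
    if needs_update(installed, advertised):
        return f"downloading viewer {advertised}, then launching"
    return f"launching installed viewer {installed}"

print(launch_viewer("4.5", "6.0"))   # update path
print(launch_viewer("6.0", "6.0"))   # already current
```

The point is simply that the browser and gateway handle distribution and freshness; once the client is current, it runs locally like any installed program.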
Now, what do we mean by “thin” and “thick” clients? Here is one of the better definitions I found whilst Googling, from
“Solving the ‘thin client’—‘thick client’ dilemma” by Robert Barnett in “A Forms Perspective”:
‘Thin Client’
A simple program or hardware device that relies on having most or all of its functionality supplied by a network server. It is similar to a dumb terminal in that it gets all of its information from the network. For example, a simple HTML form filled out in a web browser is considered to be processed by a ‘thin client’ since much of the form’s functionality is supplied by the server.
‘Thick Client’ (or ‘Fat Client’)
A program that is stored locally on the user’s computer rather than the server. For example, word processing software used to write letters and other documents generally resides on the user’s computer rather than the server. Even when the software resides on the server it is actually on space allocated to the user and is, in reality, just an extension of the user’s computer. The term can also be hardware related, referring to fast stand-alone PCs that have large amounts of memory and high-volume hard drives that run programs locally rather than off the server.
I think we will find that essentially all systems use mainly thick clients for viewing. You are not just tunneling into the server and watching the images being manipulated there; rather, you are pulling the images to your client and playing with them on your very own computer. I can think of two instances in which I have used a thin client, based on the above definition. First, when I was in junior high sometime in the last century, we had the great privilege of using a mainframe via acoustic-coupled modem over POTS (Plain Old Telephone Service) with a teletype at 110 baud. Much more recently, I have tried out a TeraRecon Aquarius via the internet. I loaded a thin client on my laptop, and all the image manipulation was done on the Aquarius, wherever it was. The images were, of course, spectacular, but the process was completely bogged down by bandwidth.

Therein lies the problem with thin clients: while it might be feasible to do all the crunching on a central computer, you still have to get the results out to the viewers in the boonies. Not a big problem in-house, especially with gigabit ethernet, but even DSL or cable speeds may not be up to the task.

I’m going to go out on a limb on this one, and come down hard in favor of the thick(er) clients. I just bought a number of Dell Precision 670 computers from the Dell Outlet Site for my group. For $4000 each, we get dual Xeon 3.6 GHz processors with 1 MB cache, 4 GB of RAM, 200 or so GB hard drives, and a 256 MB dual-DVI nVidia graphics card. That’s more computer power than all of NASA had at the time of the moon shots (and probably more than the Shuttles themselves have today). These machines can do a great job with 3D processing and cine-style viewing of 8 or 16 windows simultaneously. In the end, they represent a cheaper approach: the bandwidth needed to push thin-client results outside the main hospital would be prohibitive. So, to me, the thick client approach wins.
A thick-client viewer is not bound to a web-browser, but the two can play nicely. Think of Adobe Reader, a client used to read .pdf files. It is downloaded (though the user has to initiate this) from the web, and its various incarnations can work as a plug-in within the browser, or as an independent app. Most importantly (and depending on its timing, sometimes annoyingly), Adobe will give you the opportunity to upgrade to the latest and greatest when such is available. Likewise with PACS viewers: usually they are downloaded with the initial connection, and the opportunity to upgrade is usually given upon subsequent sign-ons.
As far as communications are concerned, many transmission problems were solved by the internet long ago. The ’net is designed to transmit information from one point to another in packets, with self-healing redundancy: if one route is cut, another is found. From the computer’s-eye view, the transmission of data via the ’net is not particularly different from transmission through the local intranet; the same TCP/IP protocol is utilized by both. For those who have suffered through dial-up and even ISDN connections, home broadband is nearly a miracle for telerad/remote PACS applications.
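To make that "same protocol either way" point concrete, here is a toy sketch: a tiny echo server runs on the loopback interface, and the client code that talks to it is byte-for-byte what you would use to reach a host across the internet; only the address differs. The server, port, and payload are all invented for the demonstration.

```python
# Sketch: from the application's view, TCP/IP is the same whether the peer
# is down the hall or across the internet -- only the address changes.
# A toy echo server runs on loopback here, but send_over_tcp() would be
# identical for any remote host.
import socket
import threading

def echo_server(sock: socket.socket) -> None:
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)          # echo the bytes straight back

def send_over_tcp(host: str, port: int, payload: bytes) -> bytes:
    # The exact same call works for an intranet address or a public
    # hostname; TCP/IP does not care which network the route crosses.
    with socket.create_connection((host, port)) as s:
        s.sendall(payload)
        return s.recv(1024)

server = socket.socket()
server.bind(("127.0.0.1", 0))       # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

reply = send_over_tcp("127.0.0.1", port, b"CT slice 42")
print(reply)
```

Routing, retransmission, and path healing all happen below this API, which is why a PACS client written against TCP sockets works unchanged from the reading room or from home.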
I think most of us agree that a web-based system should operate from one main database. There should be direct access to this archive, whether from within the enterprise or without. Here seems to be the main differentiator between classic and web-based systems: The old architecture requires an additional “box” with a partially-mirrored database for outside consumption. I have posted elsewhere that a web-based system should have each slice or image addressed by its URL, thus adhering to the internet’s conventions.
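The "each image has its own URL" idea can be sketched as a simple addressing convention. The path layout below is hypothetical, as are the server name and identifiers; a real web-based system would define its own scheme. It just shows how study, series, and slice identifiers might map onto one unique, bookmarkable HTTPS address.

```python
# Hypothetical sketch of per-image URL addressing for a web-based PACS.
# The path layout, hostname, and identifiers are invented for illustration.
from urllib.parse import quote

def image_url(server: str, study: str, series: str, instance: int) -> str:
    """Build a unique, bookmarkable URL for one DICOM slice."""
    return (f"https://{server}/studies/{quote(study)}"
            f"/series/{quote(series)}/images/{instance}")

url = image_url("pacs.example.org", "CT-2005-0142", "axial-5mm", 37)
print(url)
# https://pacs.example.org/studies/CT-2005-0142/series/axial-5mm/images/37
```

Once every slice is addressable this way, anything that speaks HTTPS (a browser, a viewer client, a referring physician's portal) can request it directly from the one main database, with no mirrored "outside" box required.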
The sum of all this drivel is that a web-based PACS system mimics any other web product; it uses the web’s protocols and tools rather than reinventing the wheel. The example of Adobe Reader actually is quite pertinent…instead of reading .pdf files, we look at DICOM images with a web-based PACS. Now here is a little riddle for you: If I have a conventional PACS, say, like Agfa IMPAX 4.5, and I put new software on its web appendage, in this case the Web1000, such that this former appendage is now a true web server, tapping the main IMPAX database, what do I have? Answer: IMPAX 6.0. Riddle 2: Is this new system now web-based? Answer: Probably……