Merge RE-Focuses with Fovia . . . And Drops A Name

Over two years ago, AMICAS announced collaboration with Fovia Medical, Inc. Read my post about it HERE.

On September 27, 2011, Fovia sent out this announcement:

Fovia Medical, Inc., a world leader in volume rendering technology, and Merge Healthcare, a leading provider of enterprise imaging and interoperability solutions, announced today that Merge will enhance and standardize its product offering by incorporating Fovia’s High Definition Volume Rendering® technology across its entire radiology PACS platform.

Merge, which provides image management as part of its comprehensive suite of radiology solutions, will now be able to fully capitalize on Fovia’s advanced visualization tools, therefore providing the world’s highest quality medical imaging to all of its PACS customers.

Fovia’s HDVR® server-side platform has enabled Merge to deliver high performance 3D imaging through its web-based architecture. With HDVR seamlessly integrated into Merge PACS, the company has taken advantage of Fovia’s scalable, CPU-based solution using entirely off-the-shelf hardware. By standardizing its various PACS products on an HDVR platform, Merge will be able to deliver tightly integrated, advanced visualization to its entire customer base, while simultaneously meeting their changing demands with Fovia’s flexible and highly customizable thin-client solution.

“Fovia’s powerful HDVR software provides Merge with the ability to offer unparalleled quality and performance to all of our PACS customers – both locally and over the Internet,” said Paul Merrild, Senior Vice President – Solutions Management. “Combining Fovia’s embedded software with our PACS products and services will boost radiology productivity by providing a unified workflow and consistent user experience, including the ability to view extremely large data sets interactively and in real-time.”

Ken Fineman, Chief Executive Officer of Fovia, stated, “We have built a strong relationship with Merge over the past year and are very impressed with their focus on customer satisfaction and end-to-end radiology solutions. Merge’s extensive, remote unified solution will provide a game-changing, streamlined experience for radiologists, and we are thrilled that our natively integrated HDVR will be showcased in such a robust offering.”

Which all sounds pretty similar to the 2009 announcement. (FYI, here is the 2009 press release from Fovia.) I’m assuming that the 3D engine on my current AMICAS PACS is made by Fovia, which makes me wonder about the purpose of these announcements. I do have it on good authority that the HDVR platform is executed in software on the production server, and does NOT require an additional rendering server.

You’ll notice something rather sad in the current announcement; in fact, it heralds the end of an era. Nowhere is the word “AMICAS” mentioned; the system is now called Merge PACS.  Oh, well.  It had to happen, I guess.
With the name change come some improvements, a few of which I think are in response to my whining, I mean request for change.  Merge announced some of the improvements here:

New Release: Merge PACS v6.2. From improved launching time to Instant Messenger, Merge PACS just keeps getting better! Below are some of the highlights of the new 6.2 release that will help you improve efficiency:

  • Halo will be operational as a native 64-bit viewer
  • Improved launching time for Halo
  • Bidirectional XML support which will improve third party integrations
  • PowerScribe 360 dictation integration
  • Instant Messenger for communication between Halo 6.3 workstations
  • Improved Mammography location algorithm to ensure accurate location of the breast tissue line

Embedded viewer rapid launch is intended for use with an EMR for rounding. The viewer will stay alive in the background, enabling rapid restart. The TSM migration tool will be used for moving studies from old jukeboxes and other storage devices to new archive subsystems, notably including ECM.

Did You Know? HALO
Q: How do I turn off the Patient Record option in HALO?
A: Go to User and Groups Privileges in the Management Pages section. Select the check box located about a third of the way down in the Group Privileges section. This will turn off Patient Record for all of your radiologists. If any of the radiologists want to keep Patient Record, you will need to enable it at the User level instead of the Group level.

The latter bit about turning off the Patient Record has made a huge difference for my group of cranky rads.  The “Patient Record” is a sort of iSite-like extension to the Real Time Worklist (RTWL), which gives more direct access to prior exams.  It was added more or less to satisfy a few users of both AMICAS and Stentor/iSite, and frankly, it bolluxed up the whole RTWL, which to me is what made AMICAS, well, AMICAS.  I was quite thrilled to find the way to get rid of it, and so were my colleagues.

The release notes for v6.2 mention a number of other new and improved features, including Merge Messenger, an IM subroutine to allow communication between online users, 64-bit support, a new Exit button, and some other needed tweaks. The “voice clip” button will be moved to a more convenient location on the main tool-bar, something we had requested a while back to speed us along.  If you are an AMICAS, I mean Merge PACS user, you will realize how this might help: in the current version, the voice clip button is tied to the study-level toolbar, and might be on the left OR the right screen, a bit confusing for something we use on many studies. It will now be placed (if desired, I think) on the main patient toolbar in a fixed location.

And coming in version 6.3 (or maybe later…check in at RSNA)–a PET/CT viewer! Too bad my AMICAS, I mean Merge PACS hospital doesn’t have a PET/CT.

Ah, progress. I will miss the AMICAS name, however. Please feel free to send in photos (or the actual item) of anything you might have with the AMICAS amoeba logo…it might be valuable someday.

Calgary Approved

Way back last year, when I wrote an article about iPad apps for radiology, I featured the ResolutionMD program from Calgary Scientific, one of the first apps to use server-side processing and rendering as opposed to being an on-board thick client.

Calgary has just received approval from the FDA for ResMD to be used as a mobile, diagnostic application, diagnostic being the key word here. From Calgary’s website:

Calgary Scientific Inc. has received clearance from the United States (US) Food and Drug Administration (FDA) to market its industry-leading medical imaging application, ResolutionMD™ Mobile, as a mobile diagnostic application, in the US. With this secure, highly scalable, server-based software solution, physicians located anywhere can rapidly access, view, and interact with patient images and reports stored within any healthcare facility, and render a clinical diagnosis using their mobile devices. Prior non-diagnostic versions of this product are currently distributed and licensed under Original Equipment Manufacturer (OEM) agreements with global leaders in medical imaging and information technology. In addition to the recent FDA clearance, ResolutionMD Mobile has also been licensed by Health Canada and bears the CE Mark for distribution in Europe.

Those global leaders include GE, whose iPhone/iPad app is simply a rebadged version of ResMD.

Calgary’s approach is at least in theory more secure, as patient data is never really resident on the mobile device:

. . .The mobile app offers unparalleled performance, advanced capabilities and complete security, utilizes minimal bandwidth and offers strong performance even on 3/4G wireless, and ensures that no highly sensitive or confidential patient information is ever retained on the mobile device. The patient image data cannot be lost or stolen, as in the case of traditional mobile-device rendered software, which requires that data to be sent to the mobile device. . .

And there’s more:

ResolutionMD Mobile incorporates many advanced capabilities including:

  • Interactive 2D, MIP/MPR and 3D advanced visualization
  • Anywhere, anytime access to and fully interactive use of MR and CT images
  • Supports HIPAA Compliance – User authentication and encrypted communications further protect confidential patient information
  • Concurrent support for Multiple Devices and Mobile Operating Platforms – The solution enables ResolutionMD Mobile capabilities on devices such as the Apple iPhone and iPad, as well as Android devices. Future versions of the server-based software solution will support an expanded list of mobile devices and operating systems, and supplementary applications to the FDA for clinical diagnostic imaging approval on those devices other than the iPhone and iPad will occur in the near future.
  • Ambient Light User Assessment – Provides sample images to the user to assess the viewing suitability of current lighting conditions.

ResolutionMD Mobile is currently available on Apple iPhone and iPad devices through the Apple App Store. A demonstration version (ResolutionMD Mobile Lite) is also available on the Apple App Store, providing access to anonymized patient images. The ResolutionMD Server is available from Calgary Scientific Inc., as well as from its family of global leading OEM partners in medical imaging and information technology.

Clearly, this system is in direct competition with Mobile MIM, which was FDA-approved for mobile use back in February as I outlined here. It’s hard to say which approach will win out, and most likely both apps will find their way into the clinical armamentarium.  I personally give a very slight edge to thin-client, server-side processing, BUT we have to keep in mind that this approach is totally dependent on bandwidth, which the thick-client approach isn’t; how much of an impact that has on usability remains to be seen.  I’m sure we’ll see others in this venue in the near future.

Another Nail in Speech Recognition’s Coffin

I have not been a fan of Speech Recognition (it is NOT VOICE RECOGNITION!!!) for a number of reasons. First, it takes my eyes off of the image, where they are paid to be, and forces me to look at the developing report. Second, it turns me into an unpaid editor, doing the transcriptionist’s job for free. And finally, it doesn’t work very well at all.

That last point is often disputed, but a recent article in the American Journal of Roentgenology reiterates the fact. And it is a fact. Basma et al. took the following approach:

We scrutinized 615 reports for errors: 308 reports generated with ASR (data from the hospital at which ASR had been used for 2 years) and 307 reports generated with conventional dictation transcription (data from the hospital that continued to rely on transcriptionists for report generation). A total of 33 speakers made the 615 reports; 11 speakers used both ASR and conventional dictation transcription. . .

The voice recognition software used was Speech Magic (version 6.1, service pack 2, Nuance). ASR reports were verified and signed by the author as they were generated. If the speaker was a fellow or resident, a staff member was responsible for reviewing the case before dictation of the report. Dictation was completed with a handheld speech microphone (ProPlus LFH5276, Philips Healthcare).

Conventional dictation transcription was undertaken using the E-RIS transcription system, version 1.44 (Merge Technology). Transcription was completed by transcriptionists experienced in breast imaging reporting. Once transcribed, reports were sent to the original speaker for electronic amendment and verification.

All reports dictated by attending radiologists or trainees were reviewed on the radiology information system at an electronic PACS workstation, corrected for errors, and verified, making these reports immediately available on the hospital clinical information system. The speaker assumed complete responsibility for report production, including correcting typographic errors generated by the voice recognition software or the transcriptionist.

And the result?

Among the 308 reports generated with ASR, 159 reports (52%) contained at least one error compared with 68 of the 307 reports (22%) generated with conventional dictation transcription (p < 0.01). Reports generated with ASR were also more likely than conventional reports to contain at least one major error (23% vs 4%, p < 0.01).

A total of 230 errors were found in 159 ASR reports. The most common error types were added word (46 instances, 20% of total ASR errors), word omission (43 instances, 19%), word substitution (39 instances, 17%), and punctuation error (49 instances, 21%). A total of 77 errors were found in 68 conventional dictation transcription reports. The most common error types were word substitution (15 instances, 19% of total conventional report errors), word omission (13 instances, 17%), added word (11 instances, 14%), and punctuation error (14 instances, 18%). . .

Our data showed that breast imaging reports generated with ASR are 8 times as likely as reports generated with conventional dictation transcription to contain major errors, after adjustment for native language, academic rank of the speaker, and breast imaging modality. Twenty-three percent of the reports generated with ASR reviewed in this study contained at least one error that could have affected understanding of the report or altered patient care.

They conclude:

Complex breast imaging reports generated with ASR were associated with higher error rates than reports generated with conventional dictation transcription. The native language and the academic rank of the speaker did not have a strong influence on error rate. Conversely, the imaging modality used, such as MRI, was found to be a predictor of major errors in final reports. Careful editing of reports generated with ASR is crucial to minimizing error rates in breast imaging reports.

This comes as no big surprise.  You might wonder why so many ASR mistakes get through.  Jay Vance, CMT, author of the AHDI Lounge Blog (Association for Healthcare Documentation Integrity, not ADHD…) comments about this on HISTALK:

“…why the radiologist didn’t catch the mistakes on the screen when using speech recognition…”

As someone intimately familiar with speech recognition editing, I can tell you the eye tends to “see” what the brain tells you SHOULD be there rather than what actually IS there. This is a well-known phenomenon among SR editors. Add to that the fact that the physicians dictating these reports using front-end SR are in a hurry to just get it over with, and it’s no surprise to see such a high error rate.

“Also keep in mind that this compared only two transcription options, with the third being back-end speech recognition…which I believe has much higher accuracy…”

You make a valid point, but the issue isn’t the comparative accuracy of front-end versus back-end SR. The comparison is between reports reviewed by a second pair of eyes versus those which are not. Whether a report is transcribed “from scratch” or edited from a SR draft, in both cases there is a skilled healthcare documentation specialist reviewing the original dictation. With front-end SR, however, it’s once-and-done, which of course is the holy grail of clinical documentation. The problem, as this study clearly shows, is that once-and-done dramatically increases the risk of medical error. Unfortunately, that risk doesn’t seem to get factored into the ROI when the front-end SR vendor is making the sales pitch.

We in the medical transcription field are doing our best to highlight the crucial risk management/clinical documentation improvement role our practitioners perform as a matter of course, a role that up to this point seems to have been taken for granted. Studies like this help prove what we’ve been saying all along: removing skilled healthcare documentation professionals from the process puts patients at risk, not to mention increasing liability for healthcare providers and jeopardizing reimbursements due to improper documentation. That’s a message we’re determined to deliver to the rest of the healthcare community as well as the public at large.

Emphasis mine.  But that says it all.  ASR has at least the potential to put patients at risk and increase liability.  ASR does improve turn-around time, but really only for those sites that don’t have adequate transcription personnel. And you can see the price that one might possibly pay to save the cost of a few FTEs.

It will not be darkening my door for the foreseeable future.

Visitors From Afar

As one of only a few sites running Agfa IMPAX 6.5, we were asked to host some visitors from afar, who are using an earlier version of IMPAX. They must decide if they should

  1. Stick with what they have, which includes using another vendor’s product for web-access
  2. Move to IMPAX 6, which is why they came here, or
  3. Jump ship to another vendor
Tough choices. I must add that their site has about a zillion Terabytes of data, and the thought of migrating is painful, and the cost steep, though still in the realm of possibility.
The group was interested in speaking to me, The Dalai Lama of PACS, in hopes that I might have some wisdom beyond the rather high level of expertise these folks brought with them. Frankly, I suspect they were disappointed. I gave a rather disjointed tour through IMPAX 6.5, and it had the bad taste to actually run properly for the visitors, just like your car won’t make that annoying noise once you actually get it to the shop.
I’ve chronicled our trials and tribulations with IMPAX 6.x over the years, so I won’t rehash them here. Suffice it to say that when it works, it works OK. When it doesn’t work, well, just ask the folks in Western Australia how much fun that can be.
They say familiarity breeds contempt, but in my experience, I think familiarity more commonly breeds comfort, or at least complacency. It’s hard to diss an IMPAX completely, especially when we’ve had mostly smooth sailing. The most recent gripe hasn’t been with IMPAX per se, but with an integration to a digital voice system that got more and more wonky until it quit altogether following server upgrades. There have been numerous little things like this which get us upset. Add that to the old tool-toggling crap, and to weird things like the way someone else can dive into a study I’m already reading, and you have those special things that drive me to drink…more than I already do.
What I can say with confidence is that Agfa’s upcoming Agility, IMPAX 7, if you will, will be significantly different, and almost certainly better, than its predecessor.  This should probably figure in our visitors’ equation somewhere. Does that answer the question?

Delousing Denoising CT

I mentioned below about the possibility of a SPECT/CT purchase, which may not happen for a while, especially given the fact that no one yet makes a 64-slice version. But there does seem to be some chance to purchase a shiny new CT scanner and a bundled advanced imaging package. Our hospital needs to upgrade for a number of reasons, not the least of which is the desire to decrease radiation dose.

Most of the technology and mathematics involved in cutting dose is beyond the scope of this article (and my meager intelligence). Better detectors and newer tubes play a part, of course, and there are other ways we can improve our acquisition.

Remember, the whole point of X-ray-based imaging is to see what’s inside the patient’s body, and we generally do this by mapping in some fashion the density (more accurately, the attenuation coefficient) of the various tissues. In the X-ray world, which includes Computed Tomography as most use the term, we do this by passing a beam of X-rays through the victim, I mean patient, and do something to detect it at the other end. Wilhelm Roentgen, our HERO, discovered that placing a piece of film on the other side of the body part would act as a detector, showing us how much radiation got through which parts, yielding a nice map of Mrs. Roentgen’s hand.

Not very sharp, I’m afraid, and the amount of radiation Roentgen’s old Crookes tube produced would scare the pants off any self-respecting health physicist today. Various improvements have come about in the last 110 years or so, including far more sophisticated tubes, better film, the addition of a phosphorescent screen, Bucky grids, and of course, the conversion from film to Digital and Computed Radiography (DR and CR).

CT is a different but related X-ray animal. Click on the animation below:

You don’t want to hear about the math (which has been around since it was worked out by the Austrian mathematician Johann Radon in 1917), but basically, passing a beam of X-rays through an object from various perspectives yields data that can be reconstructed into an image of the original object. Here is how it works with a CT of the head:
Ever since CT came about, there have been attempts to lower the dosage of X-rays required to get a decent picture. In the last year or so, perhaps due to scary articles like THIS, dose reduction has become quite in vogue. I don’t mean to be flippant, as radiation is something that deserves respect and careful handling, much like electricity, but I just have to laugh a bit about the mania that has taken over. The potential dangers are nothing new, and we really don’t need to panic. We’ve been dealing with this for quite a while.

The main principle of handling radiation in the imaging world is called ALARA: As Low As Reasonably Achievable. That doesn’t mean we avoid necessary studies, but we are simply attempting to do what we can to lower the dose, and still answer the question that prompted the exam in the first place. Working within this framework gives us some direction.

So…in very basic terms, we can reduce the dose by decreasing the radiation passed through the patient. We can use higher energy beams which do indeed pass through the patient more cleanly, if you will, stopped by less tissue, and therefore depositing less energy, but, alas, we reach a point where we don’t get much detail. We can send fewer X-rays through the patient, but too few and we don’t see, ummm, stuff. OR, we can increase the sensitivity of the detector, so we don’t need so many X-rays. OR…. we can use mathematics to recover information from a crappy image, thus lowering the dose and “rescuing” the picture later.
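
X-ray detection is a photon-counting process, so the noise follows Poisson statistics, and relative noise scales as one over the square root of the dose. A quick NumPy sketch (simulated counts, not real detector data) makes the point:

```python
import numpy as np

rng = np.random.default_rng(42)

# Mean photon counts reaching one detector element at two dose levels.
full_dose, quarter_dose = 10000, 2500

# X-ray detection is a counting process, so readings are Poisson-distributed.
full = rng.poisson(full_dose, size=100_000)
quarter = rng.poisson(quarter_dose, size=100_000)

# Relative noise (std/mean) scales as 1/sqrt(counts):
# quartering the dose roughly doubles the relative noise.
cv_full = full.std() / full.mean()
cv_quarter = quarter.std() / quarter.mean()

print(f"relative noise at full dose:    {cv_full:.4f}")
print(f"relative noise at quarter dose: {cv_quarter:.4f}")
print(f"ratio: {cv_quarter / cv_full:.2f}")
```

The ratio comes out very close to 2, which is why cutting the dose costs more image quality than intuition might suggest.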

You REALLY don’t want the details of the mathematics involved here, but the newest scanners use something called iterative reconstruction to this end. Let me just borrow the definition from the Wikipedia:

The reconstruction of an image from the acquired data is an inverse problem. Often, it is not possible to exactly solve the inverse problem directly. In this case, a direct algorithm has to approximate the solution, which might cause visible reconstruction artifacts in the image. Iterative algorithms approach the correct solution using multiple iteration steps, which allows to obtain a better reconstruction at the cost of a higher computation time.

In computed tomography, this approach was the one first used by Hounsfield. There are a large variety of algorithms, but each starts with an assumed image, computes projections from the image, compares the original projection data and updates the image based upon the difference between the calculated and the actual projections.

There are typically five components to iterative image reconstruction algorithms:[2]

  1. An object model that expresses the unknown continuous-space function f(r) that is to be reconstructed in terms of a finite series with unknown coefficients that must be estimated from the data.
  2. A system model that relates the unknown object to the “ideal” measurements that would be recorded in the absence of measurement noise. Often this is a linear model of the form Ax.
  3. A statistical model that describes how the noisy measurements vary around their ideal values. Often Gaussian noise or Poisson statistics are assumed.
  4. A cost function that is to be minimized to estimate the image coefficient vector. Often this cost function includes some form of regularization.
  5. An algorithm, usually iterative, for minimizing the cost function, including some initial estimate of the image and some stopping criterion for terminating the iterations.
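
That loop is simpler than it sounds. Here’s a toy NumPy sketch using a hypothetical 2x2 “scanner” whose projections are just row and column sums, updated with a basic Landweber-style correction step (an illustration of the idea, not any vendor’s actual algorithm):

```python
import numpy as np

# Toy "scanner": a 2x2 image, measured by its two row sums and two
# column sums (4 projections). A maps image pixels -> projections.
A = np.array([
    [1, 1, 0, 0],   # row 0 sum
    [0, 0, 1, 1],   # row 1 sum
    [1, 0, 1, 0],   # column 0 sum
    [0, 1, 0, 1],   # column 1 sum
], dtype=float)

true_image = np.array([1.0, 2.0, 3.0, 4.0])   # flattened 2x2 object
measured = A @ true_image                      # the "sinogram"

# Iterative reconstruction: start from a guess, forward-project it,
# compare with the measurement, and nudge the guess toward agreement.
x = np.zeros(4)
step = 0.1
for _ in range(500):
    residual = measured - A @ x   # mismatch: actual vs computed projections
    x = x + step * (A.T @ residual)

print(np.round(x, 2))   # prints [1. 2. 3. 4.]
```

Five hundred passes of “plug it back in until it looks right,” and the original object pops out of nothing but its projections.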

Aren’t you glad you asked? Basically, you keep plugging your image back into the computer until it looks good. The major vendors all have iterative recon in one form or another. Now, I must give credit to GE, whose new VEO system (which just won FDA approval) goes this one better, and I’ll let my friends from Medgadget tell you how:

For decades, the standard CT image reconstruction algorithm has been filtered back projection, which uses mathematical methods to reconstruct tomographic images from the projections that are obtained by the circling detectors. More recently, a new reconstruction algorithm, adaptive statistical iterative reconstruction (ASIR), has been introduced that performs modeling of the noise distribution, cutting radiation dose by up to 80% for many applications.

Model-based iterative reconstruction (MBIR), employed by Veo, goes a step further by incorporating a physical model of the CT system into the reconstruction process to characterize the data acquisition phase, including noise, beam hardening, and scatter. It has the potential to cut radiation doses even more but is computationally more demanding, leading to longer reconstruction times (which will gradually become less of a problem with ever increasing computing power). It may potentially deliver lower noise, increased resolution, improved low contrast detectability and fewer artifacts. Veo is available on the GE Discovery CT750 HD system, and is suitable for use throughout the body.

This is really, really clever. The resulting image (bottom pane) looks pretty good as compared to standard reconstruction (top pane):

I’m assuming the modeling has to be done for each individual machine, because there are variances in even the most precisely-made product. No doubt there is scanning of some standard phantom followed by back-tracking to form a mathematical version of what the scanner looks like to the average photon. Keep in mind, though, this is all done in software, not in hardware, and software can be reverse-engineered. Thus, I doubt GE is going to have this exclusively for much longer. Still, credit where credit is due. This was a rather brilliant innovation. It does seem to take a LOT of computing power to run these numbers, however, and the reconstruction is far from instant. I’m thinking GE needs to set up something like the old SETI@home desktop program wherein concerned individuals could donate their computers’ idle time to the processing of medical images.

No one has mentioned applying the above techniques to old scanners, but there are a lot of them out there, and they need some low-dose love, too. I’ve encountered two vendors who promise to provide that love. For a price, that is, and substantially more than what love goes for on the street corner not far from one of our hospitals.

The good folks from Sapheneia in Sweden are very anxious to sell you this:

The Sapheneia product Clarity Server is a software providing image quality enhancement optimized for greater diagnostic confidence. Clarity image processing algorithms enable radiologists to lower radiation dose exposures during image acquisition.

Clarity directly addresses the continued medical community concerns of increased radiation exposure to primarily pediatric and female patients and safety concerns for the clinical staff.

Clarity incorporates image-filtering techniques that are configurable based on medical modality and medical observer. Clarity utilizes 3D information for the image optimization and enables both noise reduction and edge enhancement in the same image.

Clarity is applicable to current CT modality technology and supports older generation technology, extending the lifetime of existing instrumentation.

To be characteristically blunt and pugilistic, the Clarity Server is a computer that sits between the CT and the PACS, massaging the data, and prettifying the images that you have deliberately scanned at suboptimal parameters, in hopes of recovery to robust diagnostic status. I’m not so sure about this approach. (Could you tell?) While I haven’t confirmed it, the third paragraph from Sapheneia suggests to me that all they are doing is simple digital filtration of the images, smoothing (Gaussian noise reduction) and then edge-enhancing, as you can do with the free Photoshop clone, Gimp2. This should be something that is built into your PACS viewer, and so it has been, at least partially, in AMICAS PACS since version 3.x. OK, AMICAS just gives you a few steps of edge enhancement, but it proves the concept. One press of the “S” key sharpens every CT slice in your study.

Here’s a single slice with progressing grades of edge enhancement:


Mild Enhancement

Moderate Enhancement
Too Much Enhancement!

Now, let’s try an experiment with Gimp…We’ll take the mildly-enhanced image and smooth it and then sharpen it a bit..

Mild Enhancement
Mild Enhancement with Smoothing and Resharpening

I didn’t spend a lot of time on the processed image, but you get the idea…you can, to some extent, decrapify, I mean denoise, a suboptimal image.
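
The smooth-then-resharpen trick from the Gimp experiment takes only a few lines of NumPy (a synthetic test image, a 3x3 mean filter, and an unsharp mask; purely illustrative, not what any vendor ships):

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic "CT slice": a bright square on a dark background,
# corrupted with noise to mimic a low-dose acquisition.
image = np.zeros((64, 64))
image[20:44, 20:44] = 100.0
noisy = image + rng.normal(0, 10, image.shape)

def box_blur(img):
    """3x3 mean filter (the 'smoothing' step)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

# Smooth to suppress noise, then unsharp-mask to restore edge contrast:
#   sharpened = smoothed + k * (smoothed - blur(smoothed))
smoothed = box_blur(noisy)
sharpened = smoothed + 1.5 * (smoothed - box_blur(smoothed))
# (sharpened is the "resharpened" result you'd display in a viewer.)

# Noise in a flat background region drops substantially after smoothing.
print(f"noise before: {noisy[:16, :16].std():.1f}")
print(f"noise after:  {smoothed[:16, :16].std():.1f}")
```

That’s the whole concept in miniature: trade a little sharpness for a lot of noise, then buy some of the sharpness back at the edges.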

The real question we have to ask is whether or not data gets lost in the process. Sapheneia shows charts indicating that various parameters are improved, mainly SNR and CNR (signal-to-noise and contrast-to-noise ratio, respectively), but that doesn’t necessarily mean that some details aren’t smoothed out in the process. I haven’t yet found a paper that proves or disproves my paranoia, but I’m going to keep looking. In the meantime, Sapheneia has apparently jacked up the price on the magic box considerably in light of the dose mania over here in the States.
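
For the record, SNR and CNR are simple summary statistics. Here’s what they measure, with made-up region values for illustration (the numbers are entirely hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated pixel samples from two tissue regions in one slice:
# a lesion and its background, with the same noise level (sigma = 12).
lesion = rng.normal(60.0, 12.0, 500)       # mean attenuation 60
background = rng.normal(40.0, 12.0, 500)   # mean attenuation 40

# SNR: how far the signal stands above its own noise floor.
snr = lesion.mean() / lesion.std()
# CNR: how far the lesion stands out from the background, per unit noise.
cnr = abs(lesion.mean() - background.mean()) / background.std()

print(f"SNR: {snr:.1f}")
print(f"CNR: {cnr:.1f}")
```

Note that any smoothing filter shrinks the denominators and makes both numbers look better, which is exactly why good SNR/CNR charts don’t by themselves prove that fine detail survived.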

Another vendor offers similar denoising as part of their advanced imaging suite. Vital Images, now owned by Toshiba, has made it through my door in spite of misgivings from way back, not to mention the fact that Larry D. still works for them, and the new Vitrea looks pretty good. It includes a denoisify function which applies similar filtering to legacy CT images. It can be toggled on and off, something which I’m not sure the Sapheneia box can do. From Vital:

The Noise Reduction menu
Most of the Vitrea protocols contain a new menu button in the bottom right side of the MPR viewer and the 3D viewer to open the menu for the Noise Reduction settings.
The menu contains a list of predefined filters, along with tools to create, save, and modify custom filters.

Applying an existing filter
To apply an existing filter in both the MPR and 3D viewers, open the Noise Reduction menu and select one of the available filters. The filters are sorted from the most conservative at the top of the list (preserving small details – less blurry, but with limited noise reduction) to the most aggressive at the bottom (strongly reducing noise, but possibly blurring thin details in the images).

The Classic filters
Two numbers (Classic) or three numbers (Advanced) follow the custom filter names; they also appear in any filter annotations in the MPR or 3D images. Example: Smooth_25x30.

The first number (here: 25) is the smoothness of the result. Small smoothness values (such as 4 or 8) will preserve the image sharpness but with limited noise reduction. Larger values (12-40) strongly reduce the noise with an increasing compromise in image sharpness.

The second number (here: 30) is the Contrast. A low contrast value will make the filter more sensitive to the orientation and strength of the images edges. In other words, it will preserve structures with a low contrast but it will limit the denoising strength.

Higher contrast values preserve only the stronger structures and strongly reduce the noise, at the expense of blurring low-contrast details.

The Advanced filters
The Advanced filters have 3 values: Smoothness, Contrast and Structure.

Smoothness and Contrast work as discussed above (see The Classic filters).

The Structure number indicates how much 3D structure (small vessels, focal lesions, etc.) you want to preserve, at the expense of reduced noise reduction. Low values for Structure preserve the details, but may also preserve speckles in the images and reduce the denoising strength. High values for Structure will remove speckles but may preserve fewer edges.

Sure sounds like simple image filtering to me. These can be applied to 3D renderings, a nice touch.
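For the curious, “simple image filtering” with a smoothness knob and a contrast knob sounds a lot like bilateral (edge-preserving) filtering. Here’s a toy 1-D sketch of that idea; this is my guess at the general flavor of thing under the hood, not Vital’s actual algorithm, and the parameter names are mine:

```python
import numpy as np

def bilateral_1d(signal, smoothness=3, contrast=30.0):
    """Toy edge-preserving filter in the spirit of the 'Classic' filters:
    'smoothness' sets the neighborhood size, 'contrast' sets how large an
    intensity step counts as a real edge to be preserved."""
    out = np.empty_like(signal, dtype=float)
    for i in range(len(signal)):
        lo, hi = max(0, i - smoothness), min(len(signal), i + smoothness + 1)
        window = signal[lo:hi].astype(float)
        # Range weights: neighbors differing by much more than ~'contrast' HU
        # contribute little, so sharp edges survive the averaging.
        w = np.exp(-((window - signal[i]) ** 2) / (2.0 * contrast ** 2))
        out[i] = (w * window).sum() / w.sum()
    return out

# A noisy step edge: the flat regions get smoothed, the edge stays sharp.
noisy = np.array([10, 12, 9, 11, 10, 90, 91, 89, 92, 90], dtype=float)
smoothed = bilateral_1d(noisy)
```

Crank the contrast parameter way up and every edge gets averaged away with the noise, which is exactly the trade-off the Vitrea documentation describes.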

Now, there are far more advanced methods to denoisify than simple filters, and if you really want to punish yourself, have a look at THIS thesis about using wavelets (the same stuff used in teleradiology compression) to do the job. I’m not going there, but you’ll be glad to know there’s a wavelet-denoising plugin for GIMP that does it for you.
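If you want a taste of the wavelet approach without reading the whole thesis, here’s a toy one-level Haar shrinkage. Real denoisers use multi-level decompositions and fancier wavelets, but the principle is the same: transform, throw away the small (mostly noise) coefficients, transform back:

```python
import numpy as np

def haar_denoise(signal, threshold=5.0):
    """One-level Haar wavelet shrinkage (toy version, even-length input):
    split into pairwise averages and details, soft-threshold the details
    (where the noise lives), then reconstruct."""
    x = np.asarray(signal, dtype=float)
    avg = (x[0::2] + x[1::2]) / 2.0   # low-pass: pairwise averages
    det = (x[0::2] - x[1::2]) / 2.0   # high-pass: pairwise differences
    # Soft thresholding: small detail coefficients are shrunk to zero.
    det = np.sign(det) * np.maximum(np.abs(det) - threshold, 0.0)
    out = np.empty_like(x)
    out[0::2] = avg + det
    out[1::2] = avg - det
    return out

# Small wiggles (noise) vanish; the big step between 13 and 100 survives.
noisy = np.array([10.0, 14.0, 9.0, 13.0, 100.0, 104.0, 99.0, 103.0])
clean = haar_denoise(noisy, threshold=5.0)
```

Note that the big jump survives because it lands in the averages, not the details; the same idea, taken over several decomposition levels, is what the teleradiology compression codecs exploit.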

In the end, we have a few choices to achieve dose reduction. We could all go out and buy new scanners with the most efficient X-ray tubes and detectors, and the best iterative reconstruction. These babies bring the dose for a cardiac CT, for example, below 1 mSv, which is very, very low indeed. But if you don’t have a couple of million lying around, should you invest in one of the after-the-fact denoisers? I’m not yet certain. They can definitely give you a prettier picture, but will they obscure important findings in the process? That is indeed the Million Dollar question.

Of course, the most-overlooked path to dose reduction, like the best but least-respected contraceptive, is abstinence. Think before you order a scan.

Nahhhhhh. Never mind. 

Oh, SPECT/CT Vendors…

I’ve got a little money burning a hole in my pocket…

We need a SPECT/CT, and last year, at RSNA, I had the chance to evaluate the main contenders, which are discussed in THIS post. The Siemens Symbia T-16 and the GE Discovery NM/CT 670 appear to be good machines, although I was not that impressed with the sample SPECT data from either one. Siemens probably has the better gamma camera and GE has a newer flavor of CT.  Decisions, decisions.

But Houston (or Milwaukee/Erlangen), we have another problem:  To justify spending what we have to spend on this thing, we will need to use it as a stand-alone CT at least some of the time. So I need a real workhorse CT, one that can do bread-and-butter scanning.  Which in this day and age includes cardiac work. Thus, I need a 64-slicer attached to my SPECT.

Right now, the only 64-slice SPECT/CT is the GE Discovery NM/CT 570c:

The “c” is for Cardiac, and you’ll note that the heads are fixed at 90 degrees, with relatively small detector faces. But it demonstrates the principle I seek.
So, dear vendors, if you are really interested in selling me a SPECT/CT, bolt one of your fine 64-slice CT scanners to your best SPECT camera, and we’ll talk.  
Is there a discount for cash?

You Can’t Make This Stuff Up…

Be prepared to cringe at this whale (or eel) of a tale….

Eel removed from man’s bladder after entering penis during beauty spa
An erratic eel wriggled its way up a man’s penis and into his bladder following an accident during an unorthodox beauty spa in China.

Zhang Nan was bathing with live eels to cleanse his skin when one rogue serpent took a liking to his manhood.

Thinking that the eels would make him look ten years younger, Nan dived into the water and let them feast upon layers of dead skin.

The eel treatment in question is a similar concept to the popular London spas that offer fish pedicures.

But after lying in the spa bath, Nan felt a sharp pain and realised a small eel was working its way up his urethra and into his bladder.

‘I climbed into the bath and I could feel the eels nibbling my body. But then suddenly I felt a severe pain and realised a small eel had gone into the end of my penis,’ the 56-year-old from Honghu, Hubei province said.

‘I tried to hold it and take it out, but the eel was too slippery to be held and it disappeared up my penis.’

(OK, that’s enough cringing now… it’s horrible, though, we know…)

Rushing himself to hospital, the man underwent a three-hour operation to remove the six-inch eel, which was dead by the time doctors found it.

Surgeon Jin Wang said that, because of the eel’s slippery nature, it was able to make a smooth entry into the genitals of Nan.

‘The diameter of the urethra in a man’s penis is just a little narrower, but because eels are quite slippery, its body worked as a lubricant and so it got into the penis smoothly,’ he said.

(Really – stop cringing – we can see you…)

Believe it or not, Nan’s case follows a similar incident when a 14-year-old boy in India had to undergo emergency surgery.

In a case study published by urologists Dr G Vezhaventhan and R Jeyaraman, they described how they removed a 2cm-long fish from the boy’s bladder.

The teenager said that while holding the fish he had gone to the toilet and, while urinating, the fish had ‘slipped from his hand and entered his urethra’.


Yup.  You can’t get this news on Fox, now can you?  Hat tip to Drudge, by the way.