Once in a while (a GREAT while, it seems, for me), one is struck with a simple solution to a complex problem. I have mentioned in earlier posts that linking between a new and an old CT (or MR, for that matter) is critical for interpretation. This would be an easy proposition if patients could be positioned at precisely the same spot on the scanner table every single time, and of course if they held completely still. Then there's the breathing aspect, but I've found in general that patients who don't breathe at all tend not to pay their bills, so we have to live with that.
In the good old days, a couple of years ago that is, the only real option for comparing studies was to link by image number. In other words, if you scrolled down three slices on the new, you would scroll down three slices on the old. This would be OK if your scans were both performed with the same slice thickness. That is not always the case, especially if you have installed a new scanner since the patient’s last exam. So, the modern systems (except, of course, for our friends at Image Technology Laboratories, who don’t think this situation ever occurs) match scans by table position. This represents a considerable improvement, although all it really does is advance the old study intermittently to more or less match the position of the pertinent slice of the new scan. One still has to orient one study with the other. I try to pick some landmark, say the carina, or the SMA, find the slice on each study that demonstrates it well, and then I link the two together.
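To make the table-position approach concrete, here's a minimal sketch of what the PACS is presumably doing under the hood: for each slice you scroll to on the new study, it hunts for the old slice whose table (z) position is closest. The function name and the millimeter values are my own illustration, not any vendor's actual code.

```python
from bisect import bisect_left

def nearest_slice(table_positions, target_z):
    """Return the index of the slice whose table position is closest
    to target_z. table_positions must be sorted ascending (in mm)."""
    i = bisect_left(table_positions, target_z)
    if i == 0:
        return 0
    if i == len(table_positions):
        return len(table_positions) - 1
    # Pick whichever neighboring slice is closer to the target position.
    if table_positions[i] - target_z < target_z - table_positions[i - 1]:
        return i
    return i - 1

# Hypothetical example: new scan at 2.5 mm slices, old scan at 5 mm slices.
new_z = [i * 2.5 for i in range(20)]   # 0.0, 2.5, ..., 47.5
old_z = [i * 5.0 for i in range(10)]   # 0.0, 5.0, ..., 45.0

# Scrolling to slice 6 on the new study (z = 15.0 mm) jumps the old
# study to slice 3 (z = 15.0 mm).
print(nearest_slice(old_z, new_z[6]))  # -> 3
```

Notice what this does and doesn't buy you: the old study advances intermittently to keep pace, but if the patient was positioned even a centimeter differently, every "matched" pair is off by that same centimeter, which is exactly why one still has to anchor the two studies on a landmark by hand.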
OK, here’s where my idea comes in. I won’t bother to try to patent it, because it is really just an extension (or a subset) of the fusion software used to match PETs and CTs. Those get automatically matched these days because the gantries are combined and the patient (hopefully) doesn’t move much. We have a computer from Hermes that supposedly will stretch and deform and magnify the PET to conform to the CT. Sadly, it doesn’t work very well unless you tweak it to death. But it tries to match without any real help from humans.
My idea is to use a simplified version of this approach to link new and old scans. Instead of having the computer grind away forever trying to match the scans, let the user do it: mark congruent points on each study, say the sternal notch, the carina, the SMA, and the symphysis, just to use my personal favorites. In the simplest implementation, slice incrementation could be adjusted to match the position relative to those marked points, rather than table position per se. The more points you place, the better the match, although I assume most people aren’t going to want to place more than three. The wider the distribution in the z-axis (head-to-toe), the better the match as well. In a really whiz-bang set-up, the scans could be treated as volumes and the old one morphed to the points marked. Some folks from Voxar hinted to me that they were working on a surface-mapping approach to this problem, but so far, several years later, no such luck. My approach is a lot easier, and therefore cheaper, and therefore more likely to appear on a PACS near you.
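In its simplest form, the idea above amounts to a piecewise-linear map in z: between any two marked landmark pairs, positions on the new study are mapped proportionally onto the old study. Here's a sketch of that mapping; the landmark coordinates are made-up numbers standing in for the sternal notch, carina, and SMA, and the function names are mine.

```python
from bisect import bisect_left

def map_z(landmarks, z_new):
    """Map a z position on the new study to the corresponding z on the
    old study, interpolating linearly between user-marked landmark pairs.
    landmarks: list of (z_new, z_old) tuples, sorted by z_new."""
    zs_new = [pair[0] for pair in landmarks]
    i = bisect_left(zs_new, z_new)
    if i == 0:
        i = 1                      # above the top landmark: extrapolate
    elif i == len(landmarks):
        i = len(landmarks) - 1     # below the bottom one: extrapolate
    (n0, o0), (n1, o1) = landmarks[i - 1], landmarks[i]
    fraction = (z_new - n0) / (n1 - n0)
    return o0 + fraction * (o1 - o0)

# Hypothetical landmark pairs (new z, old z) in mm -- say the old study
# was acquired with the patient shifted and scanned at a different pitch.
landmarks = [(0.0, 12.0), (60.0, 75.0), (180.0, 200.0)]

# Halfway between the first two landmarks on the new study lands
# halfway between them on the old study, wherever that happens to be.
print(map_z(landmarks, 30.0))  # -> 43.5
```

Feed the result to the usual nearest-slice lookup and you have landmark-relative linking instead of raw table-position linking. The reason more points and a wider head-to-toe spread help is visible in the math: each segment carries its own stretch factor, so extra landmarks let the map absorb local differences in positioning and breathing, while extrapolation beyond the outermost points is the least trustworthy part.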
If someone gets the volumetric approach down, the next step would of course be linked 3D studies, including MPRs and volume renderings. Again, I have heard that Siemens was working on that sort of thing for InSpace (courtesy of Eliot Fishman responding to my question on CTIsUs.com), but again, this has yet to see the light of day. My pals at ScImage did create a dual-MPR display (one can do the same with GE AW4.1 and Philips, I mean Sectra). ScI’s program suffers from their usual confusion as to where it thinks you have clicked, not to mention half a dozen other problems, and using the dual-MPR is so tedious that I don’t bother with it. Now, watch them be the first to run with my idea. That’s OK, as long as everyone else does so as well.
Phew. Having ideas is hard work. I think I’ll go back to bashing.