Image compression algorithms (also known as codecs, short for coder/decoder) each have different tradeoffs in terms of compression ratio, encode time and decode time. Each codec places different demands on the underlying IT infrastructure - CPU, memory, storage and network. The codecs in use therefore have a direct impact on the cost of an IT system and on the productivity of radiologists interpreting the images (e.g. relative value units, or RVUs).

CPU, memory, disk and network have each been the bottleneck for imaging at one time or another. These bottlenecks are eventually resolved as technology improves, but medical images have continued to grow in size and quantity, which keeps pushing the limits. The first digital medical image was a 128x128 CT image produced in 1975. At the time, x-rays were the main modality in radiology and were developed on film. Computers were not powerful enough to display x-ray images, so digital radiography simply wasn't possible. CT was the first digital modality because it required computers to perform complex math to generate the images from the detector data, and it was easier to display a CT image on a monitor than it was to print it at the time.

Figure 1: One of the first CT images ever produced.

I don’t know what bit depth this original CT image was reconstructed at, but based on the screen capture above it was likely just a few bits (perhaps 4). Storing a 128x128x4 bit image requires 8 KB of RAM, which was an incredible amount of memory in 1975 - something like having a computer today with 10 terabytes of RAM (possible, but extremely expensive). In 1975, the cost of RAM was likely the limiting factor preventing radiology from going digital. Use of Windows for medical imaging wasn’t possible until the 640K memory barrier was broken with Windows NT 3.5, and it wasn’t mainstream until Windows 2000. Before the year 2000, Unix workstations (mainly SGI and Sun) had to be used to display radiology images.

Over time, memory got cheaper and the bottleneck shifted to network speed. Ethernet finally emerged as the way forward in the early 90s, and that is when radiology started to become digital. 10 Mbps Ethernet was fast enough for some images, but it wasn’t until 100 Mbps arrived in 1996 that radiology departments could go completely digital. Network speeds have continued to increase rapidly, with 1 Gbps readily available to workstations and 50+ Gbps across the enterprise backbone. To put this in perspective, here is how long it takes to transmit a few studies over common network speeds:

Study Type | 1 Mbps | 10 Mbps | 100 Mbps | 1 Gbps | 50 Gbps
1,000 512x512 CT images, uncompressed (512 MB) | 68 min | 6.8 min | 40 sec | 4 sec | 80 ms
1,000 512x512 CT images, JPEG 2000 (~190 MB) | 25 min | 2.5 min | 15 sec | 1.5 sec | 30 ms
2 2048x2560 CR images, uncompressed (20 MB) | 2.6 min | 16 sec | 1.6 sec | 0.16 sec | 3.2 ms
2 2048x2560 CR images, JPEG 2000 (10 MB) | 1.3 min | 8 sec | 0.8 sec | 0.08 sec | 1.6 ms

Figure 2: Time to transmit various study types
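For readers who want to check the math, the figures in the table reduce to a simple calculation: transfer time equals size in megabits divided by link speed in megabits per second. Here is a minimal sketch of that arithmetic (the study sizes come from the table above; the function and variable names are my own):

```typescript
// Minimal sketch of the transfer-time arithmetic behind Figure 2.
// Study sizes come straight from the table; time = size / bandwidth.

function transferSeconds(sizeMB: number, linkMbps: number): number {
  return (sizeMB * 8) / linkMbps; // 1 megabyte = 8 megabits
}

const studies = [
  { name: "CT study, uncompressed", sizeMB: 512 },
  { name: "CR study, uncompressed", sizeMB: 20 },
  { name: "CR study, JPEG 2000", sizeMB: 10 },
];
const linksMbps = [1, 10, 100, 1_000, 50_000];

for (const study of studies) {
  for (const link of linksMbps) {
    const t = transferSeconds(study.sizeMB, link);
    console.log(`${study.name} over ${link} Mbps: ${t.toFixed(2)} s`);
  }
}
// e.g. 512 MB over 1 Mbps ≈ 4096 s (~68 min); 20 MB over 1 Gbps ≈ 0.16 s
```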

With the advent of 100 Mbps Ethernet, the disk subsystem became the next bottleneck. Single hard disks just weren’t fast enough to fill a 100 Mbps connection and certainly couldn’t handle sustained reading and writing (e.g. writing images from a new study being received while reading images for a different study being viewed). RAID allowed multiple disks to be joined together to improve throughput, but concurrent access was still a problem. Large RAM caches were necessary to avoid thrashing the disks and to allow sustained writes and reads. Fast disks were still quite expensive at this time, so a hierarchical storage manager (HSM) was used to move data between different tiers of storage - recent data was stored on fast RAID, older data on slower disks, and data older than 90 days was moved to tape. The limitations of the disk subsystem required that data be auto-routed from the central archive to the viewing workstations before it could be viewed. The networks and disks just weren’t fast enough to “stream” the data to the end user on demand, especially not during a spike condition (multiple users requesting data at the exact same time).
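To make the HSM idea concrete, here is a minimal sketch of the kind of age-based tiering rule described above. The three tiers and the 90-day tape cutoff come from the text; the 7-day fast-RAID cutoff and all names are illustrative assumptions, not the policy of any particular product.

```typescript
// Illustrative hierarchical storage manager (HSM) tiering rule, as described above:
// recent studies on fast RAID, older studies on slower disk, >90 days old on tape.

type StorageTier = "fast-raid" | "slow-disk" | "tape";

function selectTier(studyDate: Date, now: Date = new Date()): StorageTier {
  const msPerDay = 24 * 60 * 60 * 1000;
  const ageDays = (now.getTime() - studyDate.getTime()) / msPerDay;

  if (ageDays <= 7) return "fast-raid";  // recently acquired, likely to be read soon (7-day cutoff is illustrative)
  if (ageDays <= 90) return "slow-disk"; // older but still occasionally requested
  return "tape";                         // archived; retrieval requires a restore step
}

console.log(selectTier(new Date("2020-01-01"))); // "tape" once the study is more than 90 days old
```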

In ~2000, demand for radiology images expanded beyond the radiology department to the entire enterprise. Many radiology departments had gone fully digital but were still printing film so physicians could view the images. Physicians wanted instant access to images, and the radiology department wanted to eliminate the costs associated with printing film. The need for enterprise access to images emerged at a time when PACS systems were already struggling to support radiology departmental needs and were not ready to scale to full enterprise access. Dr. Paul Chang at the University of Pittsburgh Medical Center was one of the first charged with finding a truly enterprise imaging solution and couldn’t find a product on the market that met his needs. Dr. Chang decided the only way to meet UPMC’s needs was to build it himself and worked with John Huffman from SGI to create a discrete wavelet transform (DWT) based codec and protocol named Dynamic Transfer Syntax (DTS). DTS was validated in the demanding trauma workflow at UPMC and ultimately led to Dr. Chang and John Huffman launching a company named Stentor to commercialize the technology. DTS was rebranded as iSyntax in a product named iSite Enterprise (and eventually iSite PACS). I was fortunate enough to be one of the first engineers hired at Stentor and worked closely with Dr. Paul Chang and John Huffman on the development of iSite. I personally witnessed first-hand the benefits of progressive image display in the clinical setting and have been convinced ever since that all medical images should leverage DWT based compression.

Figure 3: The Stentor login page for iSite Enterprise which provided enterprise access to images via Internet Explorer
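To make “DWT based” a little more concrete: a discrete wavelet transform naturally produces a multi-resolution representation of an image - the low-pass (approximation) subband is a downsampled version that can be displayed immediately, and the detail subbands refine it to full fidelity. The toy one-level Haar transform below illustrates the idea on a single row of pixels; it is only a sketch of the general principle, not the actual DTS/iSyntax transform.

```typescript
// Toy one-level Haar wavelet transform on a single row of pixel values.
// The "approximation" half is a 2x-downsampled version of the signal that can be
// displayed immediately; the "detail" half is what is needed for full resolution.

function haarForward(row: number[]): { approx: number[]; detail: number[] } {
  const approx: number[] = [];
  const detail: number[] = [];
  for (let i = 0; i < row.length; i += 2) {
    approx.push((row[i] + row[i + 1]) / 2); // local average -> low-resolution preview
    detail.push((row[i] - row[i + 1]) / 2); // local difference -> refinement data
  }
  return { approx, detail };
}

function haarInverse(approx: number[], detail: number[]): number[] {
  const row: number[] = [];
  for (let i = 0; i < approx.length; i++) {
    row.push(approx[i] + detail[i], approx[i] - detail[i]);
  }
  return row;
}

const pixels = [100, 102, 98, 96, 200, 204, 50, 54];
const { approx, detail } = haarForward(pixels);
console.log(approx);                      // [101, 97, 202, 52] -- half-resolution preview
console.log(haarInverse(approx, detail)); // [100, 102, 98, 96, 200, 204, 50, 54]
```

A 2D image is handled by applying the same split along rows and then columns, and repeating on the approximation subband to build a pyramid of resolutions.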

Not only did iSyntax provide an excellent user experience, but it also reduced the strain on the disk subsystem, which had become the bottleneck for scaling to the enterprise. iSyntax allowed lower resolutions of the images to be viewed quickly and then improved incrementally as fast as the disk and network subsystems were able to deliver the data. This smoothed out the spike condition, allowing the system to scale to the enterprise. It also happened to work well over lower-bandwidth networks, because the viewer could still provide a fast time to first image and improve the image as quickly as mathematically possible given the available bandwidth. Stentor was acquired by Philips in 2005 and the product has since been renamed IntelliSpace PACS. I left Stentor right as it was acquired by Philips and don’t know exactly what has changed since then in terms of compression. I have consistently heard from radiologists who use iSite that nothing compares to its speed – even now, 15 years after Philips took over.
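In rough terms, a progressive viewer behaves like the sketch below: request the coarsest resolution level first, paint it, then keep refining as bandwidth allows. The fetchResolutionLevel and paint functions are hypothetical placeholders, not the iSyntax or IntelliSpace API; the point is that the first, smallest request gives a fast time to first image, and a spike of simultaneous users initially costs the server only those small level-0 payloads.

```typescript
// Conceptual sketch of progressive (coarse-to-fine) image delivery.
// fetchResolutionLevel() and paint() are hypothetical placeholders.

async function displayProgressively(
  imageId: string,
  maxLevel: number,
  fetchResolutionLevel: (id: string, level: number) => Promise<Uint8Array>,
  paint: (refinement: Uint8Array, level: number) => void
): Promise<void> {
  for (let level = 0; level <= maxLevel; level++) {
    const refinement = await fetchResolutionLevel(imageId, level); // smallest payload first
    paint(refinement, level); // image on screen after level 0; sharper after each level
  }
}
```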

While JPEG 2000 did provide resolution scalability, a client/server protocol was required to fully leverage it. JPEG standardized a protocol named JPEG 2000 Interactive Protocol (JPIP), but unfortunately few companies embraced it. I am not exactly sure why JPIP never caught on, but I suspect the high computation requirements of JPEG 2000 were the main reason. The Stentor iSyntax codec was far more computationally efficient and was plenty fast on the standard desktop hardware available in the enterprise.

Since JPIP did not gain traction and most PACS vendors didn’t have the ability to create an iSyntax-like codec or protocol, they were forced to rely on expensive disk subsystems and less sophisticated streaming mechanisms to deliver images to the enterprise. The disk subsystem continued to be the main limiting factor until ~2008, when high-speed storage networks (SAN, iSCSI) became popular. These high-speed storage networks emerged at the same time as VNAs and are one of the key reasons VNAs succeeded - they enabled IT to take control of the storage tier from the PACS vendors. Today we have solid state drives (SSD), which largely eliminate the storage bottleneck. High-speed storage networks are well established, so migrating from HDD to SSD has been transparent to most users. Today we also have other storage options such as object storage, content addressable storage and cloud storage, all of which provide functionality that enables IT to drive down costs and ensure availability of data.
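As a quick illustration of one of those options, content addressable storage derives an object’s key from a hash of its bytes, so identical objects deduplicate to the same key and the key doubles as an integrity check on read. A minimal sketch (names are illustrative) using Node’s built-in crypto module:

```typescript
// Minimal sketch of a content-addressable key: the storage key is a hash of the
// object's bytes, so identical objects map to the same key (deduplication) and
// the key can be recomputed on read to verify integrity.
import { createHash } from "node:crypto";

function contentAddress(objectBytes: Buffer): string {
  return createHash("sha256").update(objectBytes).digest("hex");
}

const dicomBytes = Buffer.from("example DICOM object bytes");
console.log(contentAddress(dicomBytes)); // hex digest used as the storage key
```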

The CPU has rarely been a bottleneck in medical imaging, other than with JPEG 2000. CPUs have always been good at image compression, and most of the JPEG standards have been fast enough to encode and decode on standard hardware. When JPEG 2000 first came out, however, the CPU immediately became a bottleneck, resulting in an end user experience that just wasn’t acceptable as demands for increased radiologist productivity grew.

The CPU has also been a bottleneck for web-based viewing. JavaScript is the language of the web, and it wasn’t until the browser wars of the late 2000s that JavaScript became fast enough to do image compression. Today JavaScript can perform computation at about half the speed of native code and will approach native speed using WebAssembly. The slower speed of JavaScript and the lack of the SIMD acceleration commonly used in image compression are still barriers to using image compression on the web - especially on lower power mobile devices. The CornerstoneJS open source project supports decoding of all common transfer syntaxes including JPEG 2000 and JPEG-LS, but the performance is quite a bit lower than native codecs - especially those that leverage SIMD.

Figure 4: Decoding a 512x512 CT image compressed with JPEG 2000 using JavaScript via CornerstoneJS in 72 ms. The commercially available Kakadu library, which leverages SIMD and native code optimizations, can decode the same image 10x faster.
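For what it’s worth, timing a decode in the browser is straightforward; a measurement like the 72 ms above boils down to something like the sketch below. decodeJpeg2000 is a placeholder for whichever decoder is under test (a JavaScript codec, a WebAssembly build, etc.), not a specific library call.

```typescript
// Rough benchmarking sketch for a browser-side decoder. decodeJpeg2000() is a
// placeholder for whatever codec is under test; performance.now() gives
// sub-millisecond timestamps in the browser.

async function timeDecode(
  encodedBytes: Uint8Array,
  decodeJpeg2000: (bytes: Uint8Array) => Promise<Uint16Array>
): Promise<number> {
  const start = performance.now();
  await decodeJpeg2000(encodedBytes); // decode a single 512x512, 16-bit CT frame
  return performance.now() - start;   // elapsed milliseconds
}
```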

With this context, it is clear that image compression is necessary to reduce strain on the network and disk subsystems. It is also necessary to reduce storage costs. In the next article, we will look at real world use of compression in medical imaging and how it impacts IT system performance.