TomoShop®, X-ray CT reconstruction software with artifact reduction tools (ring artifact, metal artifact, cupping artifact, etc.), a 2D/3D viewer, measurement functions, and functions for connecting CT to 3D printing, rapid prototyping, etc.

Volume Visualization



Volume Rendering Method

When 3D reconstruction is applied to the set of projection data acquired by a cone-beam CT system, the result is called a 3D slice image; in mathematical terms it is a three-dimensional scalar field. The most common method for displaying this 3D image (3D scalar field) is called volume rendering.

Because volume rendering easily produces high-quality images of the entire volume data, it is widely used in engineering, medicine, and other scientific fields.

However, volume rendering has drawbacks: it takes a long time because a large amount of data must be processed, and it requires a large amount of memory in the PC environment. Another issue is that with ordinary volume rendering software, when the user manipulates the image during examination (rotating it, or flipping it vertically or horizontally), the quality of the 3D image tends to drop dramatically.

In contrast, the GPU-based rendering method of TomoShop® produces high-quality rendered images with fast, responsive operation.

Ray Casting Method

TomoShop® renders 3D images using the ray casting method. Because ray casting is derived directly from the rendering equation, it can render very high-quality images; for this reason, ray casting is also called a direct rendering method.

The ray casting method used by the TomoShop® series visualizes the interior of an object with a semi-transparent display (please see the diagram below). This internal visualization is created by assigning a color (RGB value) and an opacity (α value) to each voxel according to its CT value s(x).

A target object can be rendered with emphasis by increasing the opacity α of the voxels belonging to that object (see Equation A below). The product of the color (RGB) and the opacity (α) of each voxel is then accumulated along the line of sight (ray); when the total opacity reaches 1, or when the ray exits the target volume area, processing of that pixel on the drawing surface is complete, and the accumulated sum is displayed as the pixel value.
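The accumulation just described can be sketched as front-to-back alpha compositing along a single ray. The function below is a minimal illustration, not TomoShop® internals: the array-based transfer function and the assumption that CT samples are already quantized to table indices are simplifications for the sketch.

```python
import numpy as np

def composite_ray(ct_samples, color_tf, alpha_tf):
    """Front-to-back compositing of one ray.

    ct_samples : 1-D array of CT samples along the ray, already
                 quantized to indices into the transfer-function tables.
    color_tf   : (N, 3) array mapping a CT index to an RGB color.
    alpha_tf   : (N,) array mapping a CT index to an opacity in [0, 1].
    """
    pixel = np.zeros(3)      # accumulated RGB for this pixel
    accumulated_alpha = 0.0  # total opacity so far

    for s in ct_samples:
        color, alpha = color_tf[s], alpha_tf[s]
        # contribution of this sample: its opacity times the
        # transparency still remaining in front of it
        weight = alpha * (1.0 - accumulated_alpha)
        pixel += weight * color
        accumulated_alpha += weight
        # early ray termination: total opacity has reached 1
        if accumulated_alpha >= 1.0 - 1e-6:
            break
    return pixel, accumulated_alpha
```

For example, with a transfer function that makes index 1 fully opaque red, the first opaque sample terminates the ray and determines the pixel color.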


Equation A) The equation that generates the pixel value I on the drawing surface (the standard emission/absorption volume-rendering integral):

I = ∫ c(s(x(λ))) · α(s(x(λ))) · exp( −∫₀^λ α(s(x(λ′))) dλ′ ) dλ

* λ is the distance from the viewpoint along the line of sight (ray), and x(λ) is the corresponding position inside the volume.


In Equation A above, c(·) is a function giving the color (RGB value) emitted at each point. Using these functions, a display color is assigned to the CT value s(x) of the volume data. Usually, c(·) and α(·) are defined as functions of s(x); in practice, however, the computation is carried out in the following two stages.

The first stage performs classification. Classification decides the color contribution of each point by passing its CT value s(x) through a conversion function, also referred to as a transfer function, that maps the CT value to a color (RGB value) and an opacity (α value). This stage represents the effect of the light emitted from each point. The transfer function is defined by the user; since it does not depend on the viewpoint, it can be changed even in the middle of rendering. The diffuse-reflection and specular-reflection components of the light are also incorporated here.
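As a minimal sketch of classification, a transfer function can be represented by user-placed control points and interpolated; the knot arrays below are illustrative assumptions, not TomoShop®'s actual transfer-function format.

```python
import numpy as np

def classify(ct_values, ct_knots, rgb_knots, alpha_knots):
    """Piecewise-linear transfer function: CT value -> (RGB, alpha).

    ct_knots    : increasing CT values where the user placed control points
    rgb_knots   : (K, 3) colors at those control points
    alpha_knots : (K,) opacities at those control points
    """
    ct = np.asarray(ct_values, dtype=float)
    # interpolate each color channel independently between control points
    rgb = np.stack([np.interp(ct, ct_knots, rgb_knots[:, c]) for c in range(3)],
                   axis=-1)
    alpha = np.interp(ct, ct_knots, alpha_knots)
    return rgb, alpha
```

Because the mapping depends only on the CT value, the tables can be rebuilt at any time without restarting the render, which is why the transfer function can be edited mid-rendering.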

In the next stage, shading is applied. Shading adds, at each point, color information computed from the positional relationship between the surface normal at that point and the direction of the light source.
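Shading at one sample point can be sketched Phong-style as below; the coefficients k_diffuse, k_specular, and shininess are illustrative defaults, not TomoShop® parameters. In CT volumes the surface normal is typically estimated from the (negative) gradient of the CT values.

```python
import numpy as np

def shade(normal, light_dir, view_dir, base_color,
          k_diffuse=0.7, k_specular=0.3, shininess=16.0):
    """Simple Phong-style shading at one sample point."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    # diffuse term: angle between surface normal and light direction
    diffuse = max(np.dot(n, l), 0.0)
    # specular term: reflection of the light direction about the normal
    r = 2.0 * np.dot(n, l) * n - l
    specular = max(np.dot(r, v), 0.0) ** shininess
    return k_diffuse * diffuse * np.asarray(base_color) + k_specular * specular
```

With the light and viewer both facing the surface head-on, the diffuse and specular terms are maximal, adding a white highlight on top of the base color.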

The classification result and the shading result are calculated individually and can be combined by addition. To visualize the distribution of the input CT values, an image can be drawn using classification alone. To view the geometry of an object, shading should be used so that the 3D shape can be recognized accurately.

By implementing the ray casting method with CUDA, the volume visualization features of TomoShop® can draw volumes on an ordinary PC.


Depending on the type of shading, TomoShop® provides three volume visualization modes (MIP, VR, and LER), as well as multiplanar reconstruction (MPR).


Maximum intensity projection (MIP)

Maximum intensity projection projects the volume in an arbitrary direction; for each pixel on the drawing surface, the largest CT value among the voxels along the projection path becomes the pixel value.
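In array terms, an axis-aligned MIP is a one-line reduction; the helper below is a minimal sketch (an arbitrary oblique projection direction would require resampling the volume first).

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection along one axis: each pixel of the
    output image is the largest CT value on its projection path."""
    return volume.max(axis=axis)
```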



Volume rendering (VR)

Volume rendering sets the opacity to vary continuously over a certain range of CT values, and then performs shading by calculating the transmission and reflection of light.

Thus this method gives the 3D image a more realistic appearance.


Local edge rendering (LER)

The local edge rendering method sets the opacity of each area at the edge of the object and makes the internal area transparent.

Mathematically speaking, when the local edge magnitude at a sample is less than or equal to a threshold, the opacity of the voxel is set to 0, so that the edges of the object are displayed clearly.
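A minimal sketch of this thresholding, assuming the local edge magnitude is the gradient magnitude of the CT values (the edge opacity value is an illustrative assumption, not a TomoShop® parameter):

```python
import numpy as np

def local_edge_opacity(ct_volume, threshold, edge_alpha=0.8):
    """Local edge rendering opacity map: voxels whose gradient magnitude
    is <= threshold become fully transparent (opacity 0)."""
    gx, gy, gz = np.gradient(ct_volume.astype(float))
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    return np.where(grad_mag <= threshold, 0.0, edge_alpha)
```

For a volume with a sharp step in CT values, only the voxels at the step boundary receive nonzero opacity; the uniform interior and exterior become transparent.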



Multiplanar reconstruction (MPR)

MPR is a method for extracting and displaying an arbitrary cross-section of the volume.
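Extracting an arbitrary (possibly oblique) plane can be sketched as below; the plane parameters and the nearest-neighbor sampling are simplifications for the sketch (a production implementation would interpolate trilinearly), not TomoShop®'s actual MPR code.

```python
import numpy as np

def mpr_slice(volume, origin, u_dir, v_dir, size, step=1.0):
    """Sample an arbitrary cross-section of a volume.

    origin       : a point on the cutting plane (voxel coordinates)
    u_dir, v_dir : orthogonal unit vectors spanning the plane
    size         : (rows, cols) of the output image
    """
    rows, cols = size
    out = np.zeros(size)
    origin, u, v = map(np.asarray, (origin, u_dir, v_dir))
    for i in range(rows):
        for j in range(cols):
            # position of this output pixel inside the volume
            p = origin + step * (i * u + j * v)
            idx = np.rint(p).astype(int)  # nearest-neighbor sampling
            if np.all(idx >= 0) and np.all(idx < volume.shape):
                out[i, j] = volume[tuple(idx)]
    return out
```

For an axis-aligned plane the result matches a simple array slice; oblique planes are handled the same way, just with tilted basis vectors.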

To make volume visualization easy to manipulate, TomoShop® provides simple parameter-setting functions for the following purposes:

  • Rendering direction
  • Type of camera (perspective projection, orthogonal projection)
  • Cylinder cutout and the box cutout
  • Realistic display of materials
  • Others

TomoShop® also provides mouse-driven tools, for example for easily changing the camera position and direction.

Please feel free to inquire.

Copyright © IKEDA Co., Ltd All Rights Reserved.