doi:10.1016/j.culher.2011.08.004
Received 20 December 2010; Accepted 31 August 2011. Available online 13 October 2011.
Computed tomography (CT) technology has greatly contributed to the feasibility and convenience of detecting and visualizing the internal material constitution and geometrical fabrication of museum artifacts. This paper presents a case study of 3D virtual reconstruction for the CT-acquisition-based study of a cultural heritage artifact. It documents the complete procedure, including the preprocessing, segmentation and visualization of the data, providing both coarse interactive exploration and integrated high-quality renderings. A parallel aim was to use only open source tools and free software for segmentation and visualization, thus providing full transparency of the adopted methodology and 3D visualization methods, as well as a cost-effective solution for users of ordinary CPU-based PCs. Furthermore, the challenges posed by the large data volumes involved have been addressed through preprocessing, a segmentation scheme and linked front-to-back management, keeping both interaction and high-quality rendering available.
Keywords: Cultural heritage; Artifacts; CT; Segmentation; Front-end; Back-end; Surface rendering; Volume rendering; Gelato; Manta
1. Introduction

New opportunities and challenges for the development of PC-based Virtual Reconstruction (VR) applications in the field of Cultural Heritage (CH) have been a direct effect of advances in surveying and visualization technology. So far, laser scanning has tended to be the main survey technique for 3D virtual reconstruction of cultural heritage in the museum field. Several major three-dimensional digitization projects, such as the Digital Michelangelo Project, the digital preservation of Aleijadinho's sculpture of the prophet Joel, and the digital delineation of Romanesque churches within the “Merindad de Aguilar de Campoo” medieval area (north of Spain), all adopted laser scanning as the acquisition technique for digitizing cultural heritage artifacts and sites. However, a high level of detail and accuracy, in particular including the interior construction of an object, can be obtained with synchrotron radiation CT technology, but not with laser scanning. A few works published so far have demonstrated how cultural heritage can benefit greatly from CT technology applied to artifact or archaeological site analysis, documentation, preservation and restoration. Three-dimensional rendering techniques have been used for the representation of CT data in the cultural heritage world, as well as for assisting constituent analysis and for studying fabrication techniques. Open source tools have become an important part of developer and user toolsets for the implementation of such studies, providing full transparency of the adopted methodology. Topics such as the “true-to-lifeness” of virtual reconstructions and the efficient representation of virtual objects are still important issues in the field of VR applied to cultural heritage, as is the choice of an appropriate digitization strategy.
Based on these observations, it is likely that PC-based applications using CT technology will become more usable and widespread in the field of cultural heritage, and specifically in artifact studies.
The case study documented in this paper is a continuation of the Prayer Nut Project of the Delft University of Technology and the Rijksmuseum, one of whose main goals is to determine the exact composition and fabrication of the prayer nut, a cultural heritage artifact held by the Rijksmuseum, and hence to reveal aspects of the medieval culture surrounding the nut. Up to now, the relatively simple virtual reconstruction of the original CT data suffered from severe ring artifacts, which complicated virtual exploration. In addition, it was observed that higher-quality volume rendering of the large dataset would enable higher-precision observation. To address these problems, we have developed a pipeline consisting of preprocessing, segmentation and high-quality 3D rendering for large volume data. This paper documents the pipeline, as well as new findings that were revealed using these methods.
2. Research aims
The objective of this paper is to document a complete workflow, using open source tools and free software on an ordinary PC, for the preparation and detailed visualization of optimized CT volume data and derived attributes, in the context of cultural heritage study. Preprocessing and segmentation are required for CT volume data visualization: the former yields more precise data with fewer artefacts, while the latter facilitates better comprehension of the internal construction. By following a similar 3D virtual reconstruction workflow, historians and archaeologists, who may not be experienced in traditional radiology or computer technology, can benefit from an accelerated cultural heritage study.
3. Materials and methods
3.1. Data acquisition and preprocessing
The prayer nut, a handcrafted artifact owned by the Rijksmuseum (inventory no. BK-1981-1), is a 16th-century spherical micro-woodcut (Fig. 1). It measures 4 cm in diameter and consists of two hemispheres connected by a small hinge so that it can be opened. Tomographic images of the prayer nut were recorded using monochromatic X-rays at 30 keV, available at the ID17 biomedical beamline of the European Synchrotron Radiation Facility (ESRF); the set-up has been described in detail previously. In summary, the prayer nut was placed in the fan beam and rotated about a vertical axis for computed tomography imaging. For each tomographic slice, 1440 projections at 0.25° intervals were recorded over the 180° rotation. As the beam is 0.7 mm high on the detector side, only a few horizontal detector lines (11 lines) are illuminated at a time. Consequently, to obtain tomographic data of the whole object, the prayer nut was moved incrementally along its longitudinal axis to project the next slice (11 lines), until the entire sphere had been covered.
The axis of rotation of the prayer nut could shift slightly due to the object translation described above. Without adjusting the reconstruction centre of each slice (11 lines of images per slice) of projection data, the reconstructed images were fuzzy, with streak artefacts visible in the sagittal plane, as shown in Fig. 2. Therefore, before applying the filtered back-projection (FBP) algorithm, the reconstruction centre of every even slice was shifted by a one-pixel offset to the left on the projection data, eliminating the streak artefacts. In addition, ring artefacts, normally caused by non-uniformity of the CT equipment or by reconstruction algorithm issues and commonly seen in tomographic images, were observed in the original data. A moving average filter was applied to the sinograms before tomographic reconstruction, using Idltomo (in-house dedicated software of the ESRF, Grenoble), to eliminate the ring artefacts. Idltomo can be accessed through an ESRF account when using their radiation facility for data acquisition, and is free for non-commercial research use.
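The idea behind moving-average ring suppression can be sketched in a few lines of NumPy. This is a minimal illustration, not Idltomo's actual interface: the function name and window size are our own choices, and real compensation schemes (e.g. Boin and Haibel's) refine this basic subtraction.

```python
import numpy as np

def suppress_rings(sinogram, window=9):
    """Suppress ring artifacts by flattening the fixed detector pattern.

    Ring artifacts appear as vertical stripes in the sinogram (one column
    per detector element). We estimate the stripe pattern as the deviation
    of the column-wise mean from its moving average, and subtract it from
    every projection row.
    """
    profile = sinogram.mean(axis=0)              # mean response per detector column
    kernel = np.ones(window) / window
    smooth = np.convolve(profile, kernel, mode="same")
    stripes = profile - smooth                   # fixed-pattern deviation per column
    return sinogram - stripes[np.newaxis, :]
```

Applied to each sinogram before FBP reconstruction, this removes the constant per-detector offsets that would otherwise reconstruct into concentric rings.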
Furthermore, in this case data preprocessing was essential for enabling interactive visualization of the large volume data. Due to the high-precision data acquisition, the original reconstruction of the top half of the prayer nut resulted in 667 image slices, each with a resolution of 1201 × 1201 pixels. This amounts to 3.7 GB (floating point) of data for this half of the prayer nut, which is challenging to load into RAM for rendering on an ordinary PC. Using TEEM, a coordinated collection of open-source libraries for representing, processing, and visualizing scientific raster data, we applied margin-cropping, careful data quantization, down-sampling, and data reformatting to yield 736 MB of volume data in 8-bit form for further manipulation.
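The preprocessing steps above (margin-cropping, quantization to 8 bits, block-averaged down-sampling) can be approximated in NumPy. This is an illustrative sketch, not the actual TEEM invocation; the function name, crop margin and down-sampling factor are our own assumptions.

```python
import numpy as np

def preprocess(volume, crop_margin=50, factor=2):
    """Crop empty margins, quantize float data to 8 bits, and down-sample.

    Mirrors the TEEM-based steps described in the text; the margin and
    factor values are illustrative only.
    """
    m = crop_margin
    v = volume[m:-m, m:-m, m:-m] if m else volume
    # Linear quantization of the float range onto 0..255.
    lo, hi = v.min(), v.max()
    q = ((v - lo) / (hi - lo) * 255.0).astype(np.uint8)
    # Down-sample by an integer factor using block averaging.
    z, y, x = (s // factor * factor for s in q.shape)
    q = q[:z, :y, :x].reshape(z // factor, factor,
                              y // factor, factor,
                              x // factor, factor)
    return q.mean(axis=(1, 3, 5)).astype(np.uint8)
```

Each step reduces the memory footprint: cropping discards empty space, 8-bit quantization cuts floating-point storage by a factor of four, and down-sampling by two reduces the voxel count eightfold for the interactive front-end.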
3.2. Segmentation

To better understand the physical construction of the object and the parts it consists of, segmentation into sub-objects is necessary. Once the sub-objects, such as hinges and pins, are segmented, this opens the way to both virtual and physical exploration of the inner workings of the object. By using a 3D printer, we can construct a physical copy of all parts, which can then be manipulated and examined without risk of damaging the original. This imposes quality constraints on the computed segmentation, as even small errors can prevent a successful assembly.
Visually, the prayer nut consists of two parts connected by a hinge. However, the halves can be decomposed further into components, as the inner relief has been created separately from the outer shell, and some pins can be seen that fix the components together. After an extensive manual slice-by-slice examination of the CT dataset, we were able to discern 11 separable components in the upper half: the four pins attaching the inner relief to the outer shell; three crucifixes, a flag, a cane and a spear attached to the base relief; the arc ceiling of the inner relief; and finally a fiber knot hidden in the hollow space between the inner relief and the outer shell (Fig. 4).
To extract all identified components from the dataset, we have chosen slice-by-slice segmentation. Due to the many complex shape structures, and especially the connections between them, more straightforward thresholding and region-growing techniques do not cope well. The open source MITK toolkit was used for interactive slice-by-slice region-growing, as well as for performing region union and intersection, a combination that worked well in our case.
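The core of the slice-wise approach can be illustrated with a minimal intensity-based region grower. This is a toy stand-in for MITK's interactive tools, not their actual API; the function name and tolerance parameter are our own.

```python
from collections import deque

import numpy as np

def grow_region(slice2d, seed, tol=10):
    """Grow a region from a seed pixel within a single CT slice.

    Accepts 4-connected neighbours whose intensity lies within `tol`
    of the seed value; a minimal sketch of slice-wise region growing.
    """
    h, w = slice2d.shape
    ref = int(slice2d[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(int(slice2d[ny, nx]) - ref) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```

Per-slice masks obtained this way can then be combined across slices with Boolean unions and intersections (e.g. `np.logical_or`) to assemble each 3D component, mirroring the region union/intersection operations used in MITK.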
Some components that were originally thought to be a single piece of wood turned out to be composed of multiple parts. For example, close-up study of the spear (enlarged in Fig. 4) revealed that it can be separated into two parts: the upper part is held in the guard's hand, whereas the lower part is integrated with the base relief. We chose to treat the upper part as an independent unit during the segmentation process. Similar assumptions were made in other cases, as our prime goal is to segment the artifact so it can be practically prototyped and easily visualized, not necessarily to correctly reverse-engineer the assembly process. That said, decomposing the data in this way still supports comprehension of its construction.
3.3. System architecture
The visualization platform is implemented on an ordinary PC with a 3.0 GHz Intel Core Duo CPU, 8 GB of RAM, and an NVIDIA Quadro FX 1700 graphics card. The architecture is composed of a front-end and a back-end. The user interface (UI) forms the front-end, which facilitates interactive exploration of down-sampled, low-resolution data. The back-end contains two integrated renderers, which manage the high-resolution data to create high-quality renderings. Functions such as rotating, zooming, clipping and component selection are available in the front-end UI. To switch seamlessly from low-quality interactive rendering to high-quality offline rendering, the crucial parameters, such as the geometrical transformation, transfer function, component selection and clipping planes, are linked between the UI and the back-end render tools (Fig. 3). Furthermore, the current design allows additional back-ends to be created, for example for photorealistic rendering or CAD/CAM export, without any changes to the front-end application.
Fig. 3. System architecture: the original volume data is quantized to 8 bits, segmented and down-sampled during preprocessing. The front-end uses the low-resolution data, while the back-end uses the full high-resolution dataset. When a high-quality rendering is requested, the front-end render parameters are collected and sent to the selected high-quality rendering back-end.
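The parameter linking between front-end and back-ends can be sketched as a small serializable state object. All names here are hypothetical illustrations of the design, not DeVIDE's or the back-ends' actual interfaces.

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class RenderParams:
    """Render state linked between the interactive front-end and the
    high-quality back-ends (illustrative field names, not a real API)."""
    camera: tuple = (0.0, 0.0, 5.0)          # eye position
    transfer_function: list = field(
        default_factory=lambda: [(0, 0.0), (255, 1.0)])  # (value, opacity) pairs
    clip_planes: list = field(default_factory=list)
    components: list = field(default_factory=list)       # selected sub-objects

def hand_off(params, backend="gelato"):
    """Serialize the current front-end state for a selected back-end."""
    return json.dumps({"backend": backend, **asdict(params)})
```

Because only this parameter set crosses the front/back-end boundary, a new back-end (e.g. a CAD/CAM exporter) only needs to consume the same serialized state, which is why back-ends can be added without changing the front-end.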
The front-end was developed using the open source software package DeVIDE, which provides a rapid visual data-flow programming environment. This package allowed us to rapidly construct and quickly change the processing workflow, enabling us to focus on algorithmic rather than engineering contributions. To conserve memory and keep the application interactive, the datasets in the front-end are down-sampled and quantized to 8-bit precision in a preprocessing step.
On the back-end, two high-performance packages provide the necessary rendering services. We selected NVIDIA Gelato and Manta for this task, as both are free and interoperate with our visualization platform. Having such high-quality photorealistic renderers available has aided tremendously in visualizing the intricate surface details.
As intricate surface details are visualized well with Gelato, we have set it as the default back-end when high-quality renderings are required. Although Gelato ships with a simple and powerful C++ API that offers tight integration by treating it as a library, it cannot render volume data directly, as it is limited to rendering surfaces. To extract these surfaces, a separate processing step is implemented that makes use of the isomofo tool, a large-surface extraction toolkit based on the marching cubes algorithm.
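The geometric core of marching cubes is linear interpolation along grid edges where the scalar field crosses the isovalue. The sketch below shows only that edge-interpolation step; a full implementation, such as the one isomofo uses, additionally triangulates these crossing points per cube via a 256-entry case table. The function name is our own.

```python
import numpy as np

def edge_vertices(volume, iso):
    """Find isovalue crossings on the axis-aligned edges of a voxel grid.

    For every grid edge where the scalar field crosses `iso`, emit the
    linearly interpolated crossing point; these points are the vertices
    that marching cubes connects into triangles.
    """
    pts = []
    for axis in range(3):
        a = volume
        b = np.roll(volume, -1, axis=axis)
        sl = [slice(None)] * 3
        sl[axis] = slice(0, volume.shape[axis] - 1)  # drop wrapped-around edge
        a, b = a[tuple(sl)], b[tuple(sl)]
        cross = (a - iso) * (b - iso) < 0            # sign change => crossing
        for idx in zip(*np.nonzero(cross)):
            t = (iso - a[idx]) / (b[idx] - a[idx])   # linear interpolation
            p = list(idx)
            p[axis] += t
            pts.append(tuple(p))
    return pts
```

Running this over the 667-slice volume at a suitable isovalue is what produces the multi-million-triangle meshes mentioned in the results; the 8-million-triangle count reported there comes from isomofo, not from this toy sketch.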
The other back-end, Manta, is a ray-tracing system focused on interactivity and the flexibility to accommodate a wide range of demands from different problem domains. Manta provides a multi-threaded, scalable parallel rendering pipeline, complemented by a set of software mechanisms and data structures. These acceleration techniques allow the renderer to operate at interactive rates, even on large volume data. Manta is designed to be more general-purpose than typical renderers, and has been applied to a number of graphics and visualization problems, from triangle-mesh rendering to time-varying multi-modal sphere-glyph and volume rendering. Thanks to its flexibility and good performance, Manta was also chosen to be embedded as a back-end in our visualization system. To quickly render the segmented surface, we make use of Manta's accelerated octree raycaster, for which the volumes are preprocessed by the Octisovol tool.
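The acceleration structure behind octree raycasting can be illustrated with a min/max pyramid: a ray traversal can skip any brick whose [min, max] range excludes the isovalue. This is our own simplification of the kind of preprocessing Octisovol performs, not its actual format or API.

```python
import numpy as np

def build_minmax_pyramid(volume):
    """Build a min/max pyramid over 2x2x2 voxel bricks.

    Each level stores, per brick, the minimum and maximum scalar value
    of its children; during raycasting, bricks whose [min, max] range
    excludes the isovalue can be skipped without sampling their voxels.
    """
    levels = []
    lo, hi = volume, volume
    while min(lo.shape) > 1:
        z, y, x = (s // 2 * 2 for s in lo.shape)

        def pool(a, op):
            # Group 2x2x2 bricks and reduce each to a single value.
            a = a[:z, :y, :x].reshape(z // 2, 2, y // 2, 2, x // 2, 2)
            return op(a, axis=(1, 3, 5))

        lo, hi = pool(lo, np.min), pool(hi, np.max)
        levels.append((lo, hi))
    return levels
```

The empty-space skipping this enables is the reason the Manta path renders a frame in minutes rather than the 40 minutes needed for the exhaustive Gelato rendering, as reported in the results below.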
4. Results and conclusion
Snapshots of the generated 3D visualizations are shown in Fig. 4.
The visualizations were generated on an ordinary desktop PC with a 3 GHz Intel Core Duo processor, 8 GB of RAM, and an NVIDIA Quadro FX 1700 graphics card. For the most complex shape, the interactive coarse rendering can be generated in under two seconds per frame. The corresponding high-quality Gelato rendering takes 40 minutes per frame, using the full 8-million-triangle mesh extracted by isomofo. The Manta rendering takes around two minutes per frame, thanks to the algorithmic speedup provided by the octree-structured data generated by Octisovol.
In conclusion, the work presented here focuses on preprocessing, segmentation, and the construction of a three-dimensional visualization platform for a cultural heritage artifact surveyed by CT scanning technology. Preprocessing and segmentation are important for optimizing CT data, in order to avoid misunderstanding during cultural heritage studies. Three-dimensional visualization is shown to be an effective way of representing cultural heritage artifacts. Our platform can handle larger volume data by coupling front-end interactive overviews with back-end high-quality renderings, and by integrating other available tools and software. Two different approaches, surface rendering and volume rendering, were considered for the 3D visualization of the segmented sub-objects. The two approaches differ in rendering speed and in how they facilitate the semantic linking of interesting sub-objects to their corresponding data. The integrated open source and free software components provide independence from the system used, both for development and for usage. The pipeline can aid cultural heritage studies and provide suitable representations of the artifact, using open source and free software on an ordinary CPU-based PC. It is a low-cost solution for the rendering of large volume datasets, can be implemented relatively easily, and can be extended with additional rendering tools.
Acknowledgements

We would like to thank the ESRF for providing the data acquisition facilities, and the China Scholarship Council (CSC) for the financial support that enabled the first author to join the Prayer Nut Project.
References

F. Bruno, S. Bruno, G. De Sensi, M.L. Luchi, S. Mancuso and M. Muzzupappa, From 3D reconstruction to virtual reality: a complete methodology for digital archaeological exhibition. J. Cult. Herit., 11 (2010), pp. 42–49.
M. Levoy, The digital Michelangelo project, Comput. Graph. Forum 18 (1999).
B.T. Andrade, C.M. Mendes, J. Santos Jr., O.R.P. Bellon, L. Silva, 3D preserving XVII century barroque masterpiece: challenges and results on the digital preservation of Aleijadinho's sculpture of the Prophet Joel, J. Cult. Herit. (2011), doi:10.1016/j.culher.2011.05.003.
 P.M. Lerones, J.L. Fernandez, A.M. Gil, J. Gomez-Garcia-Bermejo and E.Z. Casanova, A practical approach to making accurate 3D layouts of interesting cultural heritage sites through digital models. J. Cult. Herit., 11 (2010), pp. 1–9.
 E.H. Lehmann, P. Vontobel, E. Deschler-Erb and M. Soares, Non-invasive studies of objects from cultural heritage. Nucl. Instrum. Methods Phys. Res. Sect. A, 542 (2005), pp. 68–75.
 M.P. Morigi, F. Casali, M. Bettuzzi, D. Bianconi, R. Brancaccio, S. Cornacchia, A. Pasinia, A. Rossi, A. Aldrovandi and D. Cauzzi, CT investigation of two paintings on wood tables by Gentile da Fabriano. Nucl. Instrum. Methods Phys. Res. A, 580 (2007), pp. 735–738.
 D. Green and R. Mustalish, Digital Technologies and the Management of Conservation Documentation, Mellon Foundation, New York (2009).
K. Zhang, H. Bao, Research on the application of industrial CT for relics image reconstruction, in: Asia-Pacific Conference on Information Processing, Shenzhen, 2009, pp. 404–408.
F. Cesarani, M.C. Martina, A. Ferraris, R. Grilletto, R. Boano, E.F. Marochetti, A.M. Donadoni and G. Gandini, Whole-body three-dimensional multidetector CT of 13 Egyptian human mummies. Am. J. Roentgenol., 180 (2003), pp. 597–606.
R.L. Abel, S. Parfitt, N. Ashton, S.G. Lewis, B. Scott and C. Stringer, Digital preservation and dissemination of ancient lithic technology with modern micro-CT. Comput. Graph., 35 (2011), pp. 878–884.
 M.P. Morigi, F. Casali, M. Bettuzzi, R. Brancaccio and V. D’Errico, Application of X-ray computed tomography to cultural heritage diagnostics. Appl. Phys. A, 100 (2010), pp. 653–661.
 A. Guarnieri, F. Pirotti and A. Vettore, Cultural heritage interactive 3D models on the web: an approach using open source and free software. J. Cult. Herit., 11 (2010), pp. 350–353.
 G. Pavlidis, A. Koutsoudis, F. Arnaoutoglou, V. Tsioukas and C. Chamzas, Methods for 3D digitization of cultural heritage. J. Cult. Herit., 8 (2007), pp. 93–98.
 P. Reischig, J. Blaas, C. Botha, A. Bravin, L. Porra, C. Nemoz, A. Wallert and J. Dik, A note on medieval microfabrication: the visualization of a prayer nut by synchrotron-based computer X-ray tomography. J. Synchrotron Radiat., 16 (2009), pp. 310–313.
 J. Barrett and N. Keat, Artifacts in CT: recognition and avoidance. Radiographics, 24 (2004), pp. 1679–1691.
 M. Boin and A. Haibel, Compensation of ring artefacts in synchrotron tomographic images. Opt. Expr., 14 (2006), pp. 12071–12075.
 D. Maleike, M. Nolden, H.P. Meinzer and I. Wolf, Interactive segmentation framework of the Medical Imaging Interaction Toolkit. Comput. Methods Programs Biomed., 96 (2009), pp. 72–83.
C.P. Botha, F.H. Post, Hybrid scheduling in the DeVIDE dataflow visualisation environment, in: Proceedings of Simulation and Visualization, 2008.
J. Bigler, A. Stephens, S.G. Parker, Design for parallel interactive ray tracing systems, in: IEEE Symposium on Interactive Ray Tracing 2006, September 18–20, Salt Lake City, UT, USA, 2006, pp. 187–196.