Babylon time travels to the Age of Technology (part 2)

The Babylonian project is moving forward in the Imaging Lab.

All of the cuneiform tablets have undergone three of the four planned imaging processes. During reflectance transformation imaging (RTI), each object was placed on a support under the RTI dome. Each face and each edge was imaged, for a total of six RTI image sets per rectangular tablet. The RTI data will be processed in specialized software in which the light direction can be manipulated and shaders applied to emphasize the surface geometry of the object. This data will then be further processed to create a 3D model of each face of the object.
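
For the technically inclined, here is a rough idea of what that processing involves. A common way of building a relightable RTI image is the Polynomial Texture Map (PTM) approach: for every pixel, a simple function of the light direction is fit to the brightness values observed across the 45 captures, and those fitted coefficients are what the viewer later uses to relight the surface interactively. The sketch below, in Python/NumPy, only illustrates that idea; it is not the specific software we use, and the array shapes and names are assumptions.

```python
import numpy as np

# Minimal sketch of the per-pixel fitting step behind PTM-style RTI
# relighting. Each pixel's brightness is modeled as a biquadratic
# function of the light direction (lu, lv), fit by least squares
# over the 45 captures.

def fit_ptm(images, light_dirs):
    """images: (45, H, W) grayscale stack; light_dirs: (45, 2) lu, lv components."""
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # Design matrix: [lu^2, lv^2, lu*lv, lu, lv, 1] for each capture
    A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    pixels = images.reshape(len(images), -1)            # (45, H*W)
    coeffs, *_ = np.linalg.lstsq(A, pixels, rcond=None)
    return coeffs.reshape(6, *images.shape[1:])          # (6, H, W)

def relight(coeffs, lu, lv):
    """Re-render the surface under a new, interactively chosen light direction."""
    basis = np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
    return np.tensordot(basis, coeffs, axes=1)           # (H, W)
```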

Cuneiform is one of the earliest known systems of writing and is distinguished by the wedge-shaped marks in the clay tablets. These wedge shapes were made by pressing a reed stylus into the clay. In addition to the inscribed text, many tablets were also impressed with cylinder seals. When rolled across the wet clay, these seals left behind raised images and text that identified the seal owner, functioning much like a ‘signature’ does today. By manipulating the light in the final RTI visualizations, the shadows emphasize not only the depressions in the clay but the raised sections as well.

Once the objects had been imaged in the RTI array, they were imaged with a 3D laser scanner. Small objects were secured on a sturdy stand and scanned with a NextEngine laser scanner; the scanner controlled the stand and rotated it automatically. The larger objects were scanned with a ShapeGrabber laser scanner on a manually operated turntable. The point clouds obtained through the scanning process were coarsely aligned, cleaned, merged, and reconstructed in editing software (MeshLab) to yield comprehensive 3D models of the objects.
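
To give a sense of what that post-processing looks like, the sketch below runs through the same basic steps (refining a coarse alignment with ICP, merging the scans, thinning the cloud, and reconstructing a surface) using the open-source Open3D library. We did this work interactively in MeshLab, so the library choice, file names, and parameter values here are illustrative assumptions rather than our actual settings.

```python
import open3d as o3d

# Illustrative pipeline: refine, merge, clean, and reconstruct a set of
# laser scans into a single mesh. File names and parameters are made up.
scans = [o3d.io.read_point_cloud(f"tablet_scan_{i:02d}.ply") for i in range(8)]
for s in scans:
    s.estimate_normals()

merged = scans[0]
for scan in scans[1:]:
    # Assume the scans are already coarsely aligned (e.g. by hand);
    # point-to-plane ICP then refines the fit against the growing model.
    icp = o3d.pipelines.registration.registration_icp(
        scan, merged, max_correspondence_distance=1.0,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
    merged += scan.transform(icp.transformation)

merged = merged.voxel_down_sample(voxel_size=0.2)   # thin out duplicated points
merged.estimate_normals()
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(merged, depth=9)
o3d.io.write_triangle_mesh("tablet_model.ply", mesh)
```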

Multispectral imaging (MSI) was then performed on each of the demonstration pieces; the resulting images will be used to check for traces of pigment. Multispectral imaging covers not only the visible range but the ultraviolet (UV) and infrared (IR) as well. An image was captured in each of eight bands (violet, dark blue, light blue, green, yellow, orange, light red, and red).
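
As a rough illustration of how those band images can be used together, the sketch below stacks the captures into a spectral cube and normalizes them against a white reference so the bands can be compared directly; a simple band ratio can then highlight subtle differences such as pigment traces. The file names, the white-reference step, and the choice of ratio are assumptions for the sake of example, not our actual processing.

```python
import numpy as np
import imageio.v3 as iio

# Illustrative sketch only: stack the eight single-band captures into a
# spectral cube and flat-field them against a white reference so the
# bands are directly comparable. File names are placeholders.
bands = ["violet", "dark_blue", "light_blue", "green",
         "yellow", "orange", "light_red", "red"]

cube = np.stack([iio.imread(f"tablet_{b}.tif").astype(float) for b in bands], axis=-1)
white = np.stack([iio.imread(f"white_ref_{b}.tif").astype(float) for b in bands], axis=-1)

reflectance = cube / np.clip(white, 1e-6, None)   # (H, W, 8) spectral cube

# A simple band ratio can make subtle surface differences (e.g. possible
# pigment traces) stand out more than any single image does on its own.
ratio = reflectance[..., bands.index("red")] / np.clip(
    reflectance[..., bands.index("violet")], 1e-6, None)
```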

The last step is to photograph the objects with our Hasselblad camera. Objects will be placed on a backdrop-lined copy stand and photographed under strobe lighting. The images will then be edited and processed in image manipulation software (Photoshop).

We have lots of data to process and 3D models to make!!  Stay tuned for part 3!

Envelope.jpg

This object is actually one half of a clay envelope; it once held a clay letter. After the author had finished writing and the clay letter had dried, a piece of clay was wrapped around it and sealed as the ‘envelope’. We don’t have the original letter, but the clay of the envelope was still wet when it was sealed around the letter, so the letter left its impression behind. What you are actually looking at is the inside of the envelope: a ‘mirror’ image of the letter pressed into the wet clay. By scanning this piece, we hope to invert the image and make the text of the letter easier to read.
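
To illustrate what we mean by inverting the image: if the scan is resampled into a height map of the envelope's interior, mirroring it left-to-right and flipping the relief turns the negative, mirror-image impression back into something that reads like the original letter. The little sketch below is only meant to show the idea; the height map and function name are hypothetical, not part of an actual pipeline.

```python
import numpy as np

def invert_impression(height_map: np.ndarray) -> np.ndarray:
    """Turn the mirror-image impression back into a readable relief.

    height_map: 2D array of surface heights sampled from the scan
    (a hypothetical intermediate, not a file the scanner produces).
    """
    mirrored = np.fliplr(height_map)  # undo the left/right mirroring
    return -mirrored                  # depressions become raised strokes
```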

RTIbabylon_blog.jpg

RTI array in action as it photographs one of the clay tablets with 45 different lights from 45 different directions.

RTIpropped_blog.jpg

Since the clay tablets have writing on every side, they needed to be propped up so that RTI images could be captured of each edge of the tablets. This gives us a complete set of images of all of the writing on the tablet. This is the tablet in which Gimillu is accused of hiring a contract killer.

Yinglights.jpg

Ying Yang checks to make sure there are 45 image files of the object being lit from 45 different angles, one for every light on the RTI array.

3DModel1_blog.jpg

This sealed envelope from the Old Assyrian period gets a spin on the turntable to be scanned for a 3D model. Notice the round impressions stamped into the clay by an individual’s ‘signature’ seal.

5Multiplicationtable_blog.jpg

This tablet is a student’s exercise: the multiplication table for 5s. Here it is going for a 360-degree spin to be scanned for its 3D model with a NextEngine scanner. See, kids? Even back then, they had homework!

3Dmodel_blog.jpg

When the objects are laser scanned for a NextEngine 3D model, the original color information is recorded as part of the scan. Sometimes, however, removing the color information makes certain features easier to read; in this case, the cuneiform is much easier to read without it.

cookbook3D_blog.jpg

Here an ancient cookbook is scanned with the ShapeGrabber to acquire a 3D model of the tablet. Notice that the scanner is angled so that it can capture the edges of the tablet.

Gilgamesh_blog.jpg

This clay tablet, which tells part of the story of Gilgamesh in cuneiform, waits for its turn to be scanned by the ShapeGrabber 3D scanner.

Gilgamesh_blog.jpg

Here, the Gilgamesh tablet undergoes Multispectral Imaging (MSI) to determine if there are any residual pigments on the tablet. There are 8 filters on the camera that run through not only the visible spectrum but the ultraviolet (UV) and infrared (IR) as well.

Imaging Forum for Cultural Heritage Collections

On August 23, 2013, YDC2 hosted an Imaging Forum for Yale curatorial staff from around campus to learn about recent developments in imaging methods and techniques and to discuss how computational imaging technologies might be used to further curatorial research goals. Held at the Conference Center on West Campus, the Forum opened with a welcome by Meg Bellinger, director of YDC2. Professor Holly Rushmeier, Chair of the Computer Science Department, gave an overview of computational imaging, covering techniques such as multispectral imaging (MSI), reflectance transformation imaging (RTI), and 3D scanning. Louis King, Digital Information Architect for YDC2, talked about the new tools available at Yale for digital image viewing and analysis as part of the Digitally Enabled Scholarship with Medieval Manuscripts project and explained the underlying Content Platform. The audience was then given a quick look at some of the projects under way in the new Imaging Lab and in cultural heritage computing, spanning the Yale Center for British Art, the Peabody Museum, the Yale University Art Gallery, and Computer Science. (See slides here.)

After the presentations, the participants of the Forum toured the Imaging Lab facility in the Collection Studies Center.  Representatives from all of the museums and Computing and the Arts demonstrated imaging technologies in action at six separate stations throughout the Lab.  The Forum concluded with a lunch talk given by Dr. Ruggero Pintus, a postdoctoral fellow in cultural heritage computing, on understanding 3D imaging methods and techniques.

Several project ideas were generated at this Forum, and we look forward to the future projects that will result from it!

Opening talk at the Curatorial Forum in the Conference Center

Holly Rushmeier, Chair of the Computer Science Department, reviewed computational imaging techniques such as MSI, RTI, and 3D, and their applications for research and teaching.

Louis King, Digital Information Architect for YDC2, demonstrated the new image viewing tool for the Digitally Enabled Scholarship with Medieval Manuscripts project (DESMM).

Melissa Fournier, Manager of Imaging Services and Intellectual Property, demonstrated the implementation of the JPEG 2000 zoom feature for the Yale Center for British Art online collection.

Ben Diebold, Senior Museum Assistant at the Yale Art Gallery, reviewed how the Art Gallery used the new capacity of the YDC2 Imaging Lab to photograph their Indo-Pacific Textiles.

Dr. Ying Yang, a postdoctoral fellow in the Computer Science department, reviewed the automatic document layout analysis of massive sets of illuminated medieval manuscripts.

Larry Gall, Head of Computer Systems at the Yale Peabody Museum of Natural History, reviewed the Peabody’s current imaging project using robotic book scanners to digitize museum ledgers, field notebooks and similar documentation.

John ffrench, Director of Visual Resources at the Yale University Art Gallery, explained the importance of the large, open studio space in the YDC2 Imaging Lab as well as the benefits of having a built-in easel, a catwalk and a cove wall.

Melissa reviewed the Imaging Lab’s large color proofing area (complete with black-out curtains) and the importance of having the proper lighting when color proofing.

Larry demonstrated the Kirtas book scanning machine, explaining that with the robotic arm, an average 300-page book can be scanned in eight minutes.

Kurt Heumiller, Digital Imaging Technician at the Yale Center for British Art, demonstrated the 40″x60″ vacuum copy stand with Hasselblad camera. The vacuum allows the photographer to keep an item flat and in a fixed position. The amount of suction can also be controlled depending on the fragility of the item being photographed.

Dr. Ruggero Pintus, postdoctoral fellow for the Computer Science department, explained 3D and multispectral imaging methods and techniques.

Dr. Pintus demonstrated the Reflectance Transformation Imaging (RTI) dome by running it through a photography cycle with all 45 lights. An object is placed on the table in the center of the dome, and a camera is mounted to the arm on top of the dome. One light is lit and a photo is taken; that light is then turned off, the next light is turned on, and another photo is taken. The process repeats until 45 images have been acquired, one for every light. The computer then compiles the images into a single file that lets the user see the object lit from all different angles, and the light on the object can be manipulated with the computer’s mouse.

Lunchtime was a chance for people from various departments to discuss ideas and projects with colleagues they wouldn’t normally get the chance to interact with.

Have multispectral camera, will travel!

The Alexander Pope project continues this week. We are back at the Yale Center for British Art to do some scientific imaging on the marble bust of the poet Alexander Pope by the artist Louis Francois Roubiliac. Ruggero Pintus and Ying Yang, Postdoctoral Fellows in the Computer Science Department, are using the Imaging Lab’s QSI multispectral camera and a xenon light to measure the quantity of electromagnetic radiation that is reflected by the material of the bust. They will take a total of 8 images from every angle: 1 in the UV (ultraviolet) range, 6 in the visible range, and 1 in the IR (infrared). Each series of 8 photographs captures the optical properties of the object more accurately than an ordinary photograph, which only retains information from the red, green, and blue channels. This information will give conservators the proper tools to study the spatial variation of the material properties.

Ruggero Pintus describes a multispectral image as one that captures image data at specific frequencies across the electromagnetic spectrum. Spectral imaging can extract information that the human eye fails to capture, and it aims to describe the reflective properties of a surface. Multispectral images provide a more precise color analysis, which makes them suitable for the monitoring and restoration of artwork as well as for any research that requires high-quality color information.

Ruggero Pintus and Ying Yang, Postdoctoral Fellows for the Computer Science department, set up the multispectral camera and xenon light to take multispectral scans of the Alexander Pope bust. They will be measuring the quantity of electromagnetic radiation that is reflected by the material of the bust, which in this case is marble.

Ying prepares to calibrate the multispectral camera and xenon light by using the color chart and silver ball.

In this photo, the bust is being illuminated with a xenon light source, which emits light in the ultraviolet, visible, and infrared bands. Each spectral band contains a continuous range of wavelengths. For each band, Ruggero and Ying will measure the quantity of radiation that is reflected by the material. This will allow them to study the geometry and optical properties of the bust.

Ying rotates the bust in preparation for the next set of photos.

 

Have 3D scanner, will travel!

We have another exciting project this week!  We packed up the ShapeGrabber 3D scanner from the YDC2 Imaging Lab and set up shop temporarily at the Yale Center for British Art, where Ruggero Pintus and Ying Yang, Postdoctoral Fellows for the Computer Science department, performed 3D scans of a marble bust of the esteemed poet Alexander Pope.

3D laser scanners are best for capturing surface topography. The scanner rapidly passes a laser beam over an object’s surface to take measurements from many points on the object. The resulting dense grid of 3D points is called a ‘point cloud’, which requires post-processing to convert it into a usable format. An accurate 3D reconstruction can help authenticate works of art and can be a valuable tool for conservators.

The Yale Center for British Art (YCBA) and Waddesdon Manor (the Rothschild Foundation and the National Trust) are co-organizing a major exhibition on the sculptural images of Alexander Pope, which will open at the YCBA in spring 2014 and at Waddesdon Manor in summer 2014.  The focus of the exhibition will be the series of busts of Pope made by the French émigré sculptor Louis Francois Roubiliac. The exhibition will assemble the signed and documented versions of Roubiliac’s busts of Pope, which span the years from 1738 to 1760, as well as a number of the adaptations and copies that were modeled after them.

By performing 3D scans of all of the busts, the YCBA aims not only to explore the complex relationship between these various versions but also to shed new light on the hitherto little-understood processes of sculptural production and replication in eighteenth-century Britain. The project offers a unique opportunity to study the objects side by side, both visually and technically, revealing similarities and differences in handling, surfaces, dimensions, construction, and materials.

The ShapeGrabber packed and all ready to go to the Yale Center for British Art to begin scanning!

Ruggero Pintus rotates the bust of Alexander Pope a few degrees so that the camera can acquire a new scan.

The scanner is placed level with the bust to capture straight-on scans.

Close up of the laser sweeping over the bust as it completes a scan.

Ying Yang checks to make sure the new angle of the laser is capturing the data from the underside of the bust. By lowering the scanner and angling the laser up, scans of the underside of the shoulders, chin, nose and ears of the bust can all be captured. The data in these scans will then be aligned with the data from the scans taken with the camera level with the bust.

Ruggero looks on as the laser acquires data from scans of the top of the bust. By moving the scanner to a higher position, the laser is now able to scan the top of the shoulders and the head of the bust. The scans of these areas will be added to the scans from the other two positions and will be compiled into a digital 3D rendering of the Pope bust.

Ying and Ruggero align and combine all of the scans to produce a 3D image of the bust.