Engineering Design Group Scans Musical Instruments

In early April, a group from an undergraduate Engineering Design course contacted the IPCH Digitization Lab concerning the prospect of 3D scanning objects held in the Yale Collection of Musical Instruments. The group of five wanted to focus their final project on bringing some of the collection’s rare instruments to the forefront for public interpretation and interaction. They decided to scan select instruments in order to create 3D models and 3D print outs for incorporation into an interactive museum kiosk. A few objects from instrument groups that are less represented in the museum’s halls were selected for scanning from collections storage.

The students coordinated with Susan Thompson, a curator at the collection, as well as collections interns Kelly Hill and Katrin Endrikat, for transportation and handling of the instruments. Once on West Campus, the instruments, a crecelle and a sansa, were carefully placed in front of NextEngine triangulation laser scanners to collect data points corresponding to their exterior geometry. A student learned about the scanning process from the Digital Imaging Specialist, who guided him through acquiring data for the project. Another student later continued scanning. Post-processing and work on the kiosk and user interface were shared by the group.

Creating 3D models was just one facet of the group’s final project. Their ultimate goal was to design a digital interface featuring the 3D models and information about these unusual instruments. To increase visitor engagement, the group designed a game that both educates the public and encourages user interaction. They presented their work on April 29 at the Yale CEID.

The group stands behind their work with representatives from the collection, Susan Thompson and Kelly Hill as well as their teaching fellow, Matthew Reagor. From left to right: Summer Wu, Zobia Chunara, Matthew Reagor, Daniel Fischer, Kelly Hill, Susan Thompson, Trey LaChance, and Cameron Yick. Image courtesy of Zobia Chunara.

 

A close up of the kiosk. Image courtesy of Zobia Chunara.

IPCH 3D Scans Cycadeoidea Fossil

In the late 1800s, amid the hubbub of the Black Hills Gold Rush and the lawlessness of Deadwood, a great scientific discovery was made in Dakota Territory. A specimen dealer unearthed a treasure-trove of beautifully preserved petrified tree trunks. The stout tree trunks, with diamond shaped cavities from which leaves once sprouted, were unlike any contemporary tree trunks in the area. They harkened back to an older time and were subtropical in appearance. These fossils were classified as Cycadeoidea.

Cycadeoidea were very common during the Jurassic and Cretaceous, when dinosaurs last roamed the Earth. They date to a time when the global climate was comparatively warm and, as a result, sea levels were considerably higher. It is estimated that during this period North America was rotated roughly 45 degrees from its present orientation, toward the Prime Meridian, and was situated closer to the equator than it is today. This gave present-day South Dakota a warm climate, which fostered conditions quite hospitable to Cycadeoidea.

In 1898, George Reber Wieland, a paleontologist from the Peabody Museum who was on assignment collecting vertebrate fossils in South Dakota on behalf of O. C. Marsh, learned about the rich Cretaceous Cycadeoidea trunks in the Black Hills. Intrigued by the fossilized plants found there, Wieland shifted his focus to collecting and studying the 120-million-year-old Cycadeoidea specimens. He is credited with amassing a collection of around one thousand specimens.

A view of a Cycadeoidea fossil locality in South Dakota in 2012. Image taken by Shusheng Hu.

This collection, still the largest Cycadeoidea collection in the world, is housed here at Yale and currently cared for by the Collections Manager of Paleobotany, Shusheng Hu. Shusheng approached the Digitization Lab in October of 2014 with the idea of digitizing the well-preserved structures of a Cycadeoidea trunk as part of a greater conservation and research project. This project, led by the curator of Paleobotany, Dr. Peter Crane, focuses on the origin of early angiosperms, or flowering plants. The Cycadeoidea specimens are viewed as crucial to this project as they may provide new information about the origin of angiosperms. Since these specimens are quite heavy and precarious to move, creating 3D models has the capacity to greatly aid research, teaching and exhibition!

Scanning the Cycadeoidea trunk on location at the Yale Peabody Museum. Image taken by Shusheng Hu.

The Cycadeoidea trunk was acquired via ShapeGrabber triangulation laser scanner on location at the Peabody Museum.  The fossil was rotated between scans taken from different angles and objects were placed in the foreground to aid in alignment. Once the acquisition was complete, much time was invested in post-processing. Individual scans were cleaned, aligned and refined in order to yield the final geometry of the model.
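
For readers curious about what that post-processing involves in practice, here is a minimal sketch of the kind of clean-align-merge-reconstruct pipeline described above, written with the open-source Open3D library. The lab’s actual work was done in other tools (e.g. MeshLab), and the file names and parameters below are purely illustrative.

```python
import open3d as o3d

# Load the individual scans as point clouds (hypothetical file names).
scan_files = ["scan_00.ply", "scan_01.ply", "scan_02.ply"]
scans = [o3d.io.read_point_cloud(f) for f in scan_files]

# Clean each scan: drop statistical outliers (stray points, supports, etc.).
scans = [s.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)[0] for s in scans]

# Refine the alignment of each scan against the first one with point-to-point ICP,
# assuming the scans are already coarsely pre-aligned.
reference = scans[0]
merged = reference
for scan in scans[1:]:
    icp = o3d.pipelines.registration.registration_icp(scan, reference, 2.0)
    merged = merged + scan.transform(icp.transformation)

# Reconstruct a single surface from the merged point cloud (Poisson reconstruction).
merged.estimate_normals()
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(merged, depth=9)
o3d.io.write_triangle_mesh("cycadeoidea_model.ply", mesh)
```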

Visualizing geometry of the 3D model in MeshLab. Snapshots taken with the Lambertian Lit Sphere radiance scaling shader applied.

This geometry has the potential to be interacted with and analyzed remotely by paleobotanists and enthusiasts alike. The 3D model has been incorporated into visualizations for education and outreach.

The 3D model, visualizations and print of this fossil were created with contributions and assistance from Chelsea Graham of the Yale IPCH Digitization Lab, Shusheng Hu of the Yale Peabody Museum, Holly Rushmeier of the Department of Computer Science and Ngoc Doan of the Yale CEID. Special thanks must also be given to Tim White and Annette Van Aken of the Yale Peabody Museum for coordinating and providing transportation of the equipment.

 

Early Arthropod Fossils Imaged in the Digitization Lab

During the spring and summer of 2014, Peter Van Roy, an Associate Research Scientist at the Department of Geology and Geophysics funded by the Yale Peabody Museum, conducted high-resolution photography of large anomalocaridid arthropod fossils in the IPCH Digitization Lab. The fossils that Peter imaged were uncovered during expeditions in southeastern Morocco, a region to which the Yale Peabody Museum has been conducting expeditions since 2009. These anomalocaridid fossils were discovered in the Fezouata formations, which are muddy deposits that date back to the Early Ordovician (ca 480 million years old).

Dorsal view of a complete specimen of Aegirocassis benmoulae, a giant filter feeding anomalocaridid from the Early Ordovician. Photography by Peter Van Roy, Yale University

Lateral view of a complete specimen of Aegirocassis benmoulae, a giant filter feeding anomalocaridid from the Early Ordovician (ca 480 million years old). Photography by Peter Van Roy, Yale University

The Fezouata deposits consist of several thousand feet of shales and siltstones that accumulated in relatively shallow waters on the shelf off the ancient paleocontinent of Gondwana over a period of some 8 million years. During the Early Ordovician, the area where the Fezouata formations formed was situated close to the South Pole. The sediments contain an exceptionally well-preserved and diverse fauna, which provides unparalleled insights into the composition and functioning of Ordovician marine ecosystems. The animals that are preserved were rapidly entombed by storm-generated mudflows and include many delicate soft-bodied forms that under normal circumstances would have no chance of fossilization. Because swimming animals could more easily escape these mudflows, the fauna is mainly composed of benthic, or bottom-dwelling, animals.

Among the swimming forms that have been discovered are several anomalocaridid fossils. Anomalocaridids are very early representatives of the Arthropoda, which is the most successful and diverse animal group on the planet, and includes, among many others, familiar creatures like horseshoe crabs, scorpions, spiders, millipedes and centipedes, crabs, lobsters, butterflies, ants, beetles, etc. The Fezouata specimens are the youngest unequivocal anomalocaridids that have been found to date; all other anomalocaridid fossils date back to the Cambrian period, with the oldest material being around 530 million years in age. Because they are such ancient creatures, they are of critical importance for understanding the origins and early evolution of Arthropoda.

Complete filter feeding appendage of Aegirocassis benmoulae. Photography by Peter Van Roy, Yale University

To our modern eyes, anomalocaridids look very alien: they have a head with a pair of spinose grasping appendages and a circular mouth surrounded by toothed plates; their elongate, segmented bodies carry lateral flaps which they used for swimming. It was long believed that they had only one set of segmentally arranged flaps on each side of the body, but the Moroccan material has shown they actually possessed two sets, with gills attaching to the upper set – a finding which has important implications for our understanding of how modern arthropod limbs evolved. While most anomalocaridids were predators, the biggest Moroccan specimens were filter-feeders, gently harvesting plankton from the ocean. At lengths of at least 7 feet, they are true giants, and rank among the very biggest arthropods to have ever lived. Interestingly, they foreshadow the appearance of giant filter-feeding whales and sharks much later, and provide a much older example of massive filter-feeders evolving from within a predatory group during a diversification of plankton.

Detailed glimpse of an intricate filter feeding apparatus of Aegirocassis benmoulae. Photography by Peter Van Roy, Yale University

Peter’s high-resolution photography of the Yale Peabody Museum specimens has had an impact beyond documentation. His images facilitated study of the large specimens and have led to discoveries about the construction and morphology of the creature’s lateral flaps and gills. The images also helped inform renowned natural history illustrator Marianne Collins, who worked closely with Peter to bring this ancient, extinct giant back to life through her stunning artistic renderings!

Artistic rendering of Aegirocassis benmoulae filter feeding on a plankton cloud. © Marianne Collins/ArtofFact

For more information about Peter’s findings, please see the Yale News feature http://news.yale.edu/2015/03/11/giant-sea-creature-hints-early-arthropod-evolution, Peter’s popular science article https://theconversation.com/fossils-of-huge-plankton-eating-sea-creature-shine-light-on-early-arthropod-evolution-38520#comment_619019, or delve deeper by reading his team’s recent Nature publication http://www.nature.com/nature/journal/vaop/ncurrent/full/nature14256.html!

Potential Mercurian Meteorite Visits Lab

In September 2014, the Yale Peabody Museum’s Collections Manager for Mineralogy and Meteoritics, Stefan Nicolescu, brought a very rare meteorite – known as NWA 7325 – to the IPCH Digitization Lab for 3D laser scanning. The unique specimen is thought to be the first Mercurian meteorite ever found! Prior to scanning, it spent nine months on display in a special exhibit at the Yale Peabody Museum entitled “From Mercury to Earth? A meteorite like no other.”

Exhibit panel provided courtesy of the Yale Peabody Museum

This meteorite is quite old. Its age has been determined to be 4562.8 ± 0.3 million years! That means the meteorite predates Earth (which boasts an age of ~ 4540 million years) and is only about four to five million years younger than the first solids in our Solar System (estimated to have formed 4567.18 ± 0.5 million years ago)!

After the meteorite was dislodged from its parent body, it is estimated that it spent over 20 million years traveling through the Solar System before eventually falling to Earth. As it fell through the Earth’s atmosphere, the surface of the meteorite melted then solidified, forming a crust. A few thousand years later, in 2012, a meteorite hunter discovered it scattered in 35 pieces.  The largest piece was sliced to generate thin sections and bits for analysis before its owner lent it to the Yale Peabody Museum for display. This piece weighs in at 79.2 grams and measures 3 x 3.5 x 4.5 centimeters.

The meteorite is dark green in appearance, with a light green crust. Over time, while resting on the desert floor in Morocco, terrestrial material precipitated onto its surface.  This precipitate is manifested as areas of yellow-whitish material consisting of calcium carbonate pigmented by iron oxides and hydroxides.

© Stefan Ralew http://sr-meteorites.de/; side of cube is 1 cm

Upon scientific analysis, a telltale extraterrestrial signature was detected – the presence of meteoritic iron, which is quite rich in nickel. In contrast, terrestrial iron is devoid of nickel. This was an immediate indication that the rock was not formed on Earth! The meteorite is a fully crystallized rock; it is quite similar to terrestrial igneous rocks; however, both its mineral composition and chemical signature are dissimilar to any other known rock. The individual minerals are not incredibly unusual; it is simply that their combination has never before been observed in a rock. If the meteorite is not derived from the planet Mercury, it must be from a part of the asteroid belt (located between Mars and Jupiter) that has not yet been sampled!

The meteorite was digitally acquired via NextEngine triangulation laser scanner in the Digitization Lab at the Yale Institute for the Preservation of Cultural Heritage (IPCH) on a Monday. By Wednesday of the same week, Stefan had a scaled 3D print in hand courtesy of the Yale Center for Engineering, Innovation and Design (CEID).

The primary objective of the process was to create a digital surrogate of the meteorite for applications in packaging, exhibition and education.

Visualizing geometry of the 3D model in MeshLab. Snapshot taken with the Lambertian radiance scaling shader applied

Stefan, who was responsible for taking the meteorite back to its owner in Germany, aimed to use the 3D geometry acquired through scanning to create stable and sturdy packaging. The 3D print out was used directly to mold clasps to support the meteorite in a custom-made box.

3D print out courtesy of Yale CEID http://ceid.yale.edu/ Image taken by Fred E. Davis and provided courtesy of the Yale Peabody Museum; side of cube is 1 cm

The main goal of creating a high-resolution 3D model was to print a copy of the meteorite in order to keep a tangible, tactile version of it at Yale.  Stefan commissioned Michael Anderson, a natural history artist with the Yale Peabody Museum who also mounted the packaging clasps, to paint the 3D print. The result is outstanding!

Artistic rendering by Michael Anderson. Image taken by Fred E. Davis and provided courtesy of the Yale Peabody Museum; side of cube is 1 cm

Bon voyage! The print out (on a stand) and the meteorite (in the box) displayed for one final viewing. Custom-made ironwood & turquoise box crafted by Larry Favorite http://www.favoritedesigns.com/ Note clasp on top

The meteorite was scanned, post-processed, printed, painted and photographed with contributions and assistance from Chelsea Graham of the Yale IPCH Digitization Lab, Ellen Su formerly of the Yale CEID and Stefan Nicolescu, Jessica Utrup, Michael Anderson and Fred E. Davis of the Yale Peabody Museum.

For more information about NWA 7325’s time at Yale, please see http://news.yale.edu/2013/11/25/mercury-morocco-and-onward-yale-meteorite-s-tale

 

Babylon time travels to the Age of Technology (part 2)

The Babylonian project is moving forward in the Imaging Lab.

All of the cuneiform tablets have undergone three of the four planned imaging processes.  During RTI, each object was placed on a support under the RTI dome.  Imaging was carried out for each face and each edge, for a total of six image sets for each rectangular tablet. The data obtained through RTI will be processed in specialized software where the light direction can be manipulated and shaders can be applied to emphasize the surface geometry of the face of the object. This data will then be further processed to create a 3D model for each face of the object.
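
As an illustration of what that processing does, the classic RTI approach fits, for every pixel, a low-order polynomial (a polynomial texture map, or PTM) describing brightness as a function of light direction; a new light direction can then be plugged back in to relight the surface. The sketch below shows the idea with numpy. It is not the lab’s actual software, and the input arrays (captured images and their corresponding light directions) are assumed.

```python
import numpy as np

def fit_ptm(images, light_dirs):
    """images: (N, H, W) grayscale captures; light_dirs: (N, 3) unit light vectors."""
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # Classic six-term PTM basis: lu^2, lv^2, lu*lv, lu, lv, 1
    basis = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)  # (N, 6)
    n, h, w = images.shape
    pixels = images.reshape(n, -1)                           # (N, H*W)
    coeffs, *_ = np.linalg.lstsq(basis, pixels, rcond=None)  # (6, H*W) per-pixel fit
    return coeffs.reshape(6, h, w)

def relight(coeffs, lu, lv):
    """Re-render the surface under a new light direction (lu, lv)."""
    basis = np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
    return np.tensordot(basis, coeffs, axes=1)               # (H, W) relit image
```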

Cuneiform is one of the earliest known systems of writing and is distinguished by the wedge shaped marks in the clay tablets.  These wedge shapes were made by pressing a reed stylus into the clay.  In addition to the inscribed text, many tablets were also impressed with cylinder seals.  When rolled across the wet clay, these seals left behind raised images and text that identified the seal owner and thus functioned much like a ‘signature’ does today.  By manipulating the light in the final RTI visualizations, the shadows emphasize not only the depressions in the clay but the raised sections as well.

Once the objects were imaged in the RTI array, they were then imaged with a 3D laser scanner.  Small objects were secured on a sturdy stand and were scanned with a NextEngine laser scanner. The scanner coordinated with the stand and rotated it automatically.  The larger objects were scanned with a ShapeGrabber laser scanner on a manually mechanized turntable. The point clouds obtained through the scanning process were coarsely aligned, cleaned, merged, and reconstructed in editing software (MeshLab) to yield comprehensive 3D models of the objects.
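
Because each scan is taken after the turntable has rotated the object by a known amount, a coarse alignment can be obtained simply by undoing that rotation before the finer alignment and merging. A small sketch of that idea with numpy (the actual alignment was done in MeshLab, and the vertical rotation axis here is an assumption):

```python
import numpy as np

def rotation_about_z(angle_deg):
    """Rotation matrix about the (assumed) vertical turntable axis."""
    a = np.radians(angle_deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def coarse_align(points, turntable_angle_deg):
    """points: (M, 3) cloud captured after the table turned by turntable_angle_deg.
    Undo the rotation so all scans share one coordinate frame."""
    return points @ rotation_about_z(-turntable_angle_deg).T
```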

Multispectral imaging, which will be used to check for traces of pigment, was then performed on each of the demonstration pieces. An image was taken in each of eight bands (violet, dark blue, light blue, green, yellow, orange, light red, red). Multispectral imaging covers not only the visible range but ultraviolet (UV) and infrared (IR) as well.

The last step is to photograph the objects with our Hasselblad camera. Objects will be placed on a backdrop-lined copy stand and photographed under strobe lighting. Images will then be edited and processed in image manipulation software (Photoshop).

We have lots of data to process and 3D models to make!!  Stay tuned for part 3!

This object is actually one half of a clay envelope that once held a clay letter. After the author had finished writing and the clay letter had dried, a piece of clay was wrapped around it and sealed as the ‘envelope’. While we don’t have the original letter, the clay of the envelope was still wet when it was sealed around the letter, so what you are actually looking at is the inside of the envelope, bearing the ‘mirror’ image of the letter impressed into the wet clay. By scanning this piece, we hope to invert the image to make reading the text of the letter easier.
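
One way to make such a reversed impression readable is to mirror the scanned geometry: reflect the vertex coordinates across one axis and flip the triangle winding so the surface normals stay consistent. A minimal sketch, assuming the mesh is available as numpy arrays (this is an illustration, not the project’s actual workflow):

```python
import numpy as np

def mirror_mesh(vertices, faces, axis=0):
    """vertices: (V, 3) coordinates; faces: (F, 3) vertex indices."""
    mirrored = vertices.copy()
    mirrored[:, axis] *= -1.0          # reflect across the chosen axis
    flipped_faces = faces[:, ::-1]     # reverse winding so normals still point outward
    return mirrored, flipped_faces
```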

RTI array in action as it photographs one of the clay tablets with 45 different lights from 45 different directions.

Since the clay tablets had writing all over, they needed to be propped up so that RTI images could be obtained of each edge of the tablets. This will give us a complete set of images of all of the writing on the tablet. This is the tablet in which Gimillu is accused of hiring a contract killer.

Ying Yang checks to make sure there are 45 image files of the object being lit from 45 different angles–one for every light on the RTI array.

This sealed envelope from the Old Assyrian period gets a spin on the turntable to be scanned for a 3D model. Notice the round impressions stamped into the clay from an individual’s ‘signature’ seal.

This tablet is a student’s exercise: the multiplication table for 5s. Here it is going for a 360 degree spin to be scanned for its 3D model with a NextEngine scanner. See, kids? Even back then, they had homework!

When the objects are laser scanned for a NextEngine 3D model, the original color information is recorded as part of the scan. However, removing the color information sometimes makes certain features easier to read. In this case, it is much easier to read the cuneiform.

Here an ancient cookbook gets scanned by a ShapeGrabber scanner to acquire a 3D model of the tablet.  Notice the scanner is angled so that it can scan the edges of the tablet.

This clay tablet, which tells part of the story of Gilgamesh in cuneiform, waits for its turn to be scanned by the ShapeGrabber 3D scanner.

Here, the Gilgamesh tablet undergoes Multispectral Imaging (MSI) to determine if there are any residual pigments on the tablet. There are 8 filters on the camera that run through not only the visible spectrum but the ultraviolet (UV) and infrared (IR) as well.

A trip to Babylon (part 1)

We have all heard about Babylon, already legendary in antiquity for its great walls, its man-made terraces of flora known as the Hanging Gardens (one of the Seven Wonders of the World), and especially for its great learning and culture. But what happens when the past meets modern technology?

Recently, YDC2 had the opportunity to partner with the Yale Babylonian Collection.  This collection has the largest assemblage of seals, documents and other ancient Mesopotamian artifacts in the U.S. and is one of the leading collections of cuneiform tablets in the world.  This collection is also noteworthy for its close ties to an academic department where it upholds the University’s mission of teaching and learning.  To learn more about the Yale Babylonian Collection, please visit their website.

The major aim of this joint demonstration project is to create documentation of cuneiform tablets for application in research. Fourteen objects, covering a variety of themes, have been selected. Here are some of the highlights: among the Old Assyrian tablets, we have a marriage proposal and a fragment of an envelope. An interesting feature of this fragment is that it still retains the ‘mirror’ image of the letter it once enveloped, because the clay was still wet when it was wrapped around the letter. There are a few mathematical tablets to be imaged. One is a demonstration of finding the diagonal of a square using the square root of 2. Another is an unsolved math problem which scholars, to this day, are still trying to solve. We will also be imaging a letter from Nebuchadnezzar, as well as the first known example of a contract killing and a tablet containing part of the story of Gilgamesh.
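
If the square-root-of-2 tablet is the famous one from the Yale Babylonian Collection, YBC 7289 (an assumption on our part), the computation it records can be checked in a few lines: the diagonal of a square equals its side times √2, and the sexagesimal value 1;24,51,10 written on the tablet is a remarkably good approximation.

```python
import math

# The tablet's value for sqrt(2), written in base 60 as 1;24,51,10.
digits = [1, 24, 51, 10]
sqrt2_babylonian = sum(d / 60**i for i, d in enumerate(digits))

print(sqrt2_babylonian)                      # 1.4142129629...
print(math.sqrt(2))                          # 1.4142135623...
print(abs(math.sqrt(2) - sqrt2_babylonian))  # error of roughly 6e-7

# Diagonal of a square with side 30, as drawn on the tablet:
print(30 * sqrt2_babylonian)                 # 42.4263888..., i.e. 42;25,35 in base 60
```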

For the first time ever, these pieces will be imaged using RTI (Reflectance Transformation Imaging), 3D imaging and MSI (Multispectral Imaging) as well as having high resolution photos taken of them.

Imaging of these objects began this week so stay tuned for the next part of this three part story!

The Babylonian Collection Reading Room contains an extensive Assyriological research library. In the background is a replica of the famous Stele of Hammurabi.  Photo taken by Lee Payne.

Several rooms of Sterling Memorial Library were designed specifically for the Babylonian Collection; as a result, the window medallions display Mesopotamian themes.  Photo taken by Lee Payne.

An example of an unopened envelope, with the tablet visible inside. This is a record concerning barley for beer, ca. 2100 B.C.E.  Photo taken by Lee Payne.

 

Alexander Pope: 3D model or ‘bust’!

Early last summer, YDC2 worked with the post-docs to image a bust of Alexander Pope by Louis Francois Roubiliac (see our post, “Have 3D scanner, will travel”).  On February 6, the YDC2 Imaging Lab, along with the Computer Science department, continued the collaboration with the Yale Center for British Art (YCBA) on the second part of this project.

The YCBA was planning a new exhibit: Fame and Friendship: Pope, Roubiliac, and the Portrait Bust in Eighteenth-Century Britain. Louis Francois Roubiliac produced eight sculptural representations of Alexander Pope, which are now spread out among different collections around the world. This exhibit would be the first time all eight busts would be together in approximately 50 years. The YCBA requested the application of 3D laser scanning to yield digital replicas, in the hope of determining the chronology of the busts’ creation. The 3D models would give researchers not only surface geometry but also the dimensions of Pope’s features and how tool mark placement varied between busts.

As the busts could not travel to the Imaging Lab, Jessica Slawski, Chelsea Graham, and Ying Yang set up the ShapeGrabber 3D laser scanning equipment in the YCBA. Chelsea, along with Ruggero Pintus and Ying Yang, Postdoctoral Fellows for the Computer Science department, began scanning the busts on February 6.  Once the busts were unpacked, they were photographed by YCBA staff.  After their photo shoot, the busts then began the 3D scanning process which took about 4 hours per bust.  When this process was complete, the busts were moved to the exhibit area to be installed.  Due to time constraints, only 4 of the busts were able to be 3D scanned before they were installed for the exhibit.  The hope is that the remaining 4 busts will be imaged during the exhibit de-installation.  The post processing of these 3D models will take up to 30 hours per bust.  Once the models are done, researchers will be able to overlay the models on top of each other to compare features and tool marks.
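
Once two bust models have been aligned, the comparison essentially amounts to measuring, point by point, how far one surface lies from the other, and coloring the model by that deviation so differences in features and tool marks stand out. A hedged sketch of that step using the open-source Open3D library (file names are placeholders, and the researchers’ actual comparison workflow may differ):

```python
import numpy as np
import open3d as o3d

# Two bust models, assumed to already be aligned in the same coordinate frame.
bust_a = o3d.io.read_point_cloud("pope_bust_a.ply")
bust_b = o3d.io.read_point_cloud("pope_bust_b.ply")

# Per-point nearest-neighbour distances from bust A to bust B.
distances = np.asarray(bust_a.compute_point_cloud_distance(bust_b))
print("mean deviation:", distances.mean())
print("max deviation:", distances.max())

# Color bust A by deviation so local differences stand out when viewed.
normalized = distances / distances.max()
colors = np.stack([normalized, np.zeros_like(normalized), 1.0 - normalized], axis=1)
bust_a.colors = o3d.utility.Vector3dVector(colors)
o3d.io.write_point_cloud("pope_bust_deviation.ply", bust_a)
```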

For more information on the Pope project, please see the following articles:

YCBA Pope Bust Scanning Project

The many faces of Alexander Pope: Illuminating art history through digital imaging

Chelsea Graham and Ying Yang set up the ShapeGrabber 3D laser scanner to start scanning the Pope busts for the Yale Center for British Art exhibit.

While many of the busts were marble, this one was unique as it was made out of terracotta. Terracotta bust of Alexander Pope by Louis Francois Roubiliac. On loan from the Barber Institute. Photo by Chelsea Graham.

Ying Yang and Chelsea Graham examine the 3D model of the Pope bust compiled from the scans of the ShapeGrabber laser scanner. The empty spots on the 3D model denote areas that the laser scanner was not able to reach. The bust needed to be turned so the laser could reach these areas and produce scans of them, which would then be added to the 3D model to fill in the missing spaces.

Once all of the scans of the bust have been completed, Chelsea Graham checks to make sure there are no ‘holes’, or missing data, in the 3D model. When this is confirmed, the model will then be post processed and the image ‘cleaned up’ to remove any extra data that was scanned along with the bust.

Once the 3D model was completed on the computer, it was then fed to a Makerbot 3D printer. The Makerbot printer lays down layer after layer of warm plastic and slowly builds the Pope bust from the ground up. The columns are support structures for various features on the bust such as the shoulders, ears and nose. Once the bust is finished printing and is cooled, these support structures can be broken off without harming the 4 inch bust underneath.  Photo by Chelsea Graham.

 

 

Imaging Forum for Cultural Heritage Collections

On August 23, 2013, YDC2 hosted an Imaging Forum for Yale curatorial staff from around campus to learn about recent developments in imaging methods and techniques and to discuss how computational imaging technologies might be used to further curatorial research goals.  Held at the Conference Center on West Campus, the Forum started off with a welcome introduction by Meg Bellinger, director of YDC2.  The talks included an overview of computational imaging by Professor Holly Rushmeier, Chair of the Computer Science Department, covering techniques such as multispectral imaging (MSI), reflectance transformation imaging (RTI), and 3D. Louis King, Digital Information Architect for YDC2, talked about the new tools available at Yale for digital image viewing and analysis as part of the Digitally Enabled Scholarship with Medieval Manuscripts project, and explained the underlying Content Platform. The audience was then given a quick view into some of the projects underway in the new Imaging Lab and in cultural heritage computing, spanning the Yale Center for British Art, the Peabody Museum, the Yale University Art Gallery, and Computer Science. (See slides here)

After the presentations, the participants of the Forum toured the Imaging Lab facility in the Collection Studies Center.  Representatives from all of the museums and Computing and the Arts demonstrated imaging technologies in action at six separate stations throughout the Lab.  The Forum concluded with a lunch talk given by Dr. Ruggero Pintus, a postdoctoral fellow in cultural heritage computing, on understanding 3D imaging methods and techniques.

Several possible project ideas were generated at this forum, and we look forward to the future projects that will result from it!

Opening talk at the Curatorial Forum in the Conference Center

Holly Rushmeier, Chair of the Computer Science Department, reviewed computational imaging techniques such as MSI, RTI and 3D, and their applications for research and teaching.

Louis King, Digital Information Architect for YDC2, demonstrated the new image viewing tool for the Digitally Enabled Scholarship with Medieval Manuscripts project (DESMM).

Melissa Fournier, Manager of Imaging Services and Intellectual Property, demonstrated the implementation of the JPEG 2000 zoom feature for the Yale Center for British Art online collection.

Ben Diebold, Senior Museum Assistant at the Yale Art Gallery, reviewed how the Art Gallery used the new capacity of the YDC2 Imaging Lab to photograph their Indo-Pacific Textiles.

Dr. Ying Yang, a postdoctoral fellow in the Computer Science department, reviewed the automatic document layout analysis of massive sets of illuminated medieval manuscripts.

Larry Gall, Head of Computer Systems at the Yale Peabody Museum of Natural History, reviewed the Peabody’s current imaging project using robotic book scanners to digitize museum ledgers, field notebooks and similar documentation.

John ffrench, Director of Visual Resources at the Yale University Art Gallery, explained the importance of the large, open studio space in the YDC2 Imaging Lab as well as the benefits of having a built-in easel, a catwalk and a cove wall.

Melissa reviewed the Imaging Lab’s large color proofing area (complete with black-out curtains) and the importance of having the proper lighting when color proofing.

Larry demonstrated the Kirtas bookscanning machine, explaining that with the robotic arm, an average 300 page book can be scanned in 8 minutes.

Kurt Heumiller, Digital Imaging Technician at the Yale Center for British Art, demonstrated the 40″x60″ vacuum copy stand with Hasselblad camera.  The vacuum allows the photographer to keep an item flat and in a fixed position.  The amount of suction can also be controlled depending on the fragility of the item being photographed.

Dr. Ruggero Pintus, post doctoral fellow for the Computer Science department,  explained 3D and multispectral imaging methods and techniques.

Dr. Pintus demonstrates the Reflectance Transformation Imaging (RTI) dome to the crowd by running it through a photography cycle with all 45 lights.  An object is placed on the table in the center of the dome.  A camera is mounted to the arm on top of the dome.  One light is lit and a photo is taken.  The light is then turned off, the next light is turned on, and another photo is taken.  The process repeats until 45 images have been acquired, one for every light.  The computer then compiles the images into one relightable image and allows users to see the object lit from all different angles.  The light on the object in the image can then be manipulated with the computer’s mouse.

Lunchtime was a chance for people from various departments to discuss ideas and projects with others that they normally wouldn’t get the chance to interact with.

Multispectral Search for Medieval Manuscripts

Analytical or computational imaging offers important new tools for scholarship in the humanities. Computational imaging technologies can support the collection of replicable data about cultural heritage objects that allow scholars and conservators to answer questions about artifacts that cannot be answered by means of simple visual inspection. YDC2 is supporting a project that will use multispectral imaging (MSI) of medieval manuscripts to provide additional data about the inks and pigments used in their creation as part of the Digitally Enabled Scholarship with Medieval Manuscripts project.

With invaluable support from Beinecke Library colleagues Ray Clemens, Curator for Early Books and Manuscripts, and Chris Edwards, Head of the Digital Studio, Ruggero Pintus and Ying Yang, Post-Doctoral Fellows from the Computer Science department, were last week able to multispectrally image the entire third recension of the Confessio Amantis, 1392-93, by John Gower.  These images of the Gower manuscript promise to reveal additional information about how it was written and illustrated, while the set-up process was important preparation for taking the technology on the road to other repositories of medieval manuscripts. Many thanks to the Beinecke Rare Book and Manuscript Library for supporting this work.

A multispectral image is one that captures image data at specific frequencies across the electromagnetic spectrum.   Spectral imaging can allow extraction of additional information the human eye fails to capture.  Multispectral imaging aims at providing a description of the reflective properties of a surface.  Multispectral images provide a more precise color analysis which makes these images suitable for the monitoring or restoration of artwork as well as any research activities that require high quality color information.

Ruggero and Ying explained that the entirety of the Gower manuscript was acquired with an 8-band, high-resolution multispectral camera and a xenon light source that emits from ultraviolet to infrared wavelengths, in a completely dark room. The raw data is a set of high dynamic range (16-bit) images. The multispectral camera resolution is 2504×3326 pixels (8.3 Mpixels), and each multispectral acquisition yields eight images (one for each band).  This set-up has spectral sampling intervals (about 50 nm) similar to those employed by Ricciardi et al. [1], who demonstrated that the limited set of pigments used in manuscript illuminations can be at least separated, and sometimes identified, even when mixed, by acquiring images at moderate spectral sampling intervals (50 nm) under low light levels (∼150 lux) while having high spatial sampling (∼250 pixels per inch). Thus, the acquired data will allow us to study the distribution of different elements across the manuscript page, to map out similarities in measured color/material/pigment, and to achieve a more robust, objective specification of them.

[1] Ricciardi, P., Delaney, J.K., Glinsman, L.D., Thoury, M., Facini, M. & de la Rie, E.R. 2009. Use of Visible and Infrared Reflectance and Luminescence Imaging Spectroscopy to Study Illuminated Manuscripts: Pigment Identification and Visualization of Underdrawings. In: L. Pezzati & R. Salimbeni, eds. Proceedings of SPIE, O3A: Optics for Arts, Architecture, and Archaeology II. Bellingham, WA: SPIE, vol. 7391, pp. 739106–12.
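
The calibration targets described in the captions below are what make it possible to convert the raw 16-bit captures into approximate reflectance, so that pixel values can be compared across bands and pages. A minimal sketch of that normalization step with numpy and imageio; the band labels, file names and dark-frame step are assumptions rather than the project’s actual pipeline.

```python
import numpy as np
import imageio.v3 as iio

# Hypothetical labels for the eight filter bands (six visible, one UV, one IR).
bands = ["uv", "b450", "b500", "b550", "b600", "b650", "b700", "ir"]

reflectance_cube = []
for band in bands:
    raw = iio.imread(f"gower_page_{band}.tif").astype(np.float64)    # 16-bit capture
    dark = iio.imread(f"dark_{band}.tif").astype(np.float64)         # sensor dark frame
    white = iio.imread(f"spectralon_{band}.tif").astype(np.float64)  # white reference
    reflectance = (raw - dark) / np.clip(white - dark, 1e-6, None)
    reflectance_cube.append(reflectance)

cube = np.stack(reflectance_cube, axis=-1)   # (H, W, 8) reflectance cube
# Pixels with similar spectra across the eight bands likely share the same pigment,
# so the cube can be clustered or compared band-by-band to map materials.
```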

This is the setup used to multispectrally scan a manuscript at the Beinecke library. The multispectral camera is mounted on a copy stand with a 1 meter column. The xenon light source is mounted on a separate tripod. Color targets are used to calibrate the camera. Once this is completed, the manuscript will be laid out on the copy stand and images of each page will be taken one at a time.

The color target and spectralon (white and grey target) are used to help calibrate the 8 multispectral camera filters: 6 in the visible range, 1 in the ultraviolet (UV) range and 1 in the infrared (IR) range.

This photo shows the manuscript open so the right pages can be acquired by the multispectral camera.

The manuscript is supported by foam as the book is rotated and the photographing of the left pages begins. A velvet ‘snake’ weight is used to hold the right pages down so that they don’t move into the shots of the left pages.

This is a closeup of a section of the Gower manuscript.

This is a closeup view of some of the intricate scrollwork found in the Gower manuscript.

Have multispectral camera, will travel!

The Alexander Pope project continues this week.  We are back at the Yale Center for British Art to do some scientific imaging on the marble bust of the poet Alexander Pope by the artist Louis Francois Roubiliac.  Ruggero Pintus and Ying Yang, Postdoctoral Fellows at the Computer Science Department, are using the Imaging Lab QSI multispectral camera and a xenon light to measure the quantity of electromagnetic radiation that is reflected by the material of the bust.  They will take a total of 8 images from every angle: 1 from the UV (ultraviolet) range, 6 from the visible light range, and 1 from the IR (infrared). Each series of 8 photographs is a more accurate way to acquire the optical properties of the studied object than an ordinary photograph, which only retains information from the red, green and blue channels.  This information will give conservators the proper tools to study the spatial variation of the material properties.

Ruggero Pintus describes a multispectral image as one that captures image data at specific frequencies across the electromagnetic spectrum.   Spectral imaging can allow extraction of additional information the human eye fails to capture.  Multispectral imaging aims at providing a description of the reflective properties of a surface.  Multispectral images provide a more precise color analysis which makes these images suitable for the monitoring or restoration of artwork as well as any research activities that require high quality color information.

Ruggero Pintus and Ying Yang, Postdoctoral Fellows for the Computer Science department, set up the multispectral camera and xenon light to take multispectral scans of the Alexander Pope bust. They will be measuring the quantity of electromagnetic radiation that is reflected by the material of the bust, which in this case is marble.

Ying prepares to calibrate the multispectral camera and xenon light by using the color chart and silver ball.

In this photo, the bust is being illuminated with a xenon light source that emits light across the ultraviolet, visible and infrared bands. Each spectral band contains a continuous range of wavelengths. For each band, Ruggero and Ying will measure the quantity of radiation that is reflected by the material. This will allow them to study the geometry and optical properties of the bust.

Ying rotates the bust in preparation for the next set of photos.