Printing a Plantation: Using Photogrammetry and 3D Printing to Bring Archaeology to the Public

Nestled among the dense forests and sprawling agricultural fields of Gloucester, Virginia, is a little-known early Colonial property called Fairfield Plantation. Patented in 1648, Fairfield was the home of the Burwell family, one of the wealthiest and most politically influential families in Colonial Virginia. Their property reflected their prominent role in society, featuring a striking manor house and numerous outbuildings positioned at the heart of a substantial plantation. The house grew and evolved with the family over time, and remained home to the Burwells until they sold it in 1787. The property changed hands several times in the years that followed, until 1897, when the house burned and was left to ruin. Over a century later, Fairfield Foundation archaeologists are bringing new life to this important historic property using the latest innovations in 3D technology.

Figure 1
Historic photograph of the Fairfield Plantation manor house. (Courtesy of the Colonial Williamsburg Foundation)
Figure 2
Archaeologists working with Adventures in Preservation workshop participants at Fairfield Plantation. (Courtesy of the Fairfield Foundation)

The Fairfield Foundation is a non-profit organization headquartered in Gloucester, Virginia that has been promoting and involving the public in hands-on archaeology, preservation, and education activities within Virginia’s Middle Peninsula and surrounding areas since 2000. Their primary research site is Fairfield Plantation, where thousands of students, interns, and volunteers have learned about archaeology and colonial history, making the site a valuable educational resource within our community.

Earlier this summer we initiated a project utilizing 3D technology to digitally record, reimagine, and recreate the historic landscape at Fairfield Plantation. Our goal is to develop an interactive 3D printed model that will bring the experience of archaeology to the community, and ultimately draw more attention and visitation to the site. Using a Phantom 4 Pro drone (affectionately named Major Tom), we have begun documenting the ruins and surrounding landscape by flying over the site and capturing hundreds of photographs, which are later transformed into highly detailed 3D models using Agisoft PhotoScan. These models will later be 3D printed to develop a tangible replica of the site.

Figure 3
3D model of a collapsed chimney at Fairfield Plantation. (Courtesy of the Fairfield Foundation)

What makes this project unique is that instead of having one solid model, we will be printing each test unit individually and repeating the documentation and printing process over time so that each layer we excavate in the field can be incorporated into the printed model as a removable piece. Members of the public will be able to take the model apart layer by layer and experience the same process of discovery that archaeologists do. We will also use the digital model as a basis for digitally reconstructing the house, which will be printed and incorporated into the replica. This replica will bring Fairfield Plantation to life, providing residents and visitors to Gloucester a chance to interact with the past and connect with local history. When finished, the model will be housed and publicly accessible at the Fairfield Foundation’s headquarters, the Center for Archaeology, Preservation, and Education (CAPE) in Gloucester Courthouse.

Figure 4
Ashley McCuistion flies the drone over the manor house ruins at Fairfield Plantation. (Courtesy of the Fairfield Foundation)

This project challenges people to experience history in a new, tangible way, and brings Fairfield Plantation into a local and global spotlight. Digital models and printed replicas of the site will be an integral part of lesson plans we will make available to individuals and classrooms around the world, drawing new attention to the rich history of Virginia as seen through Fairfield Plantation. It also brings the Fairfield Foundation new opportunities for public outreach and education, and places the organization at the forefront of a growing digital preservation movement in archaeology.

For future updates about this project, visit our blog at www.fairfieldfoundation.org/blog, and check out the 3D models we’ve produced of Fairfield Plantation at www.sketchfab.com/fairfieldfoundation.

 


Ashley McCuistion is an archaeologist with the Fairfield Foundation, a non-profit organization based in Gloucester, Virginia. In addition to being in the field, she is the Public Outreach Coordinator for the organization and Project Manager for the Fairfield Modeling Project. Ashley received her B.S. in Anthropology from Virginia Commonwealth University and her M.A. in Archaeology (pending thesis completion) from Indiana University of Pennsylvania. She has been researching creative ways to incorporate 3D scanning and printing technology into public archaeology since 2012.

Nephelococcygia (Cloud Watching) for Public Outreach

One of the limitations I often face when using 3D technology for public outreach is the still very small database of models to pull from. The process of creating a 3D model of an artifact or site is time-consuming, and most of us simply do not have time to model everything we would like to. Therefore, it is in our best interest to find ways of making 3D technology more appealing to our peers in other branches of archaeology. For this post, I tested the CloudCompare software and found that it has a lot of potential.

Testing the Program

Recently, the American Civil War monument that I used as the subject for my first photogrammetry model was knocked over. The Sons of the Confederacy group here in Tallahassee, Florida believes that it was accidentally hit by a work vehicle and was not intentionally vandalized. I saw this as a perfect opportunity to test out CloudCompare and see what changes might have occurred to the monument.

The CloudCompare software overlays the point cloud you get from either photogrammetry or laser scanning of the original model on the point cloud of another version to produce a very nice visual comparison of the two. This is the first comparison using CloudCompare:

Note the blue in the center of the monument, indicating little change, and the red edges, suggesting a lot of change. For this first comparison, I aligned the bases, which revealed that the marker’s position had changed (it now sits more square than before). This is something that I had missed while photographing and looking at the models separately. If you click on annotation 3, you can very clearly see the new footprint for the marker on its base. This also might explain why I could not get CloudCompare’s auto-alignment feature to work as I was expecting.

Also in this comparison, you can see the two large chips (marked by annotation 1) that were removed from the base. While these show up, comparing the base is problematic: the original model had a lot of grass growing right next to the stone, while the recent version has the grass cleared away. This resulted in a lot of false positives when looking for damage on the base.

Since it was the marker that was knocked over though, I really wanted to see how that had changed. After aligning the face of the models, I got some more interesting results.

The first thing you might notice is that the top edge of the plaque and the edge of the oval on the plaque are red. I suspect that what we are seeing is one version of the models not generating this edge very well.

Annotation 2 highlights another piece of damage that I had previously missed, in the form of a new notch in the marker’s edge. If you look back at the new version of the completed model, these notches look pretty uniform, and this marker has been moved before. This makes me wonder if the equipment used to hoist the stone is causing this damage.

Also note that the cosmetic damage to the plaque does not show up. I was hoping that the program might track color changes as well, but you cannot have everything. There is also a chip missing from the top-front edge that is not highlighted as much as it probably should be. I suspect some fiddling with the settings would make this pop out more.

Conclusion

As a test, this monument was the perfect subject. I saw the changes that I expected to see, saw changes I had missed, and learned about some of the limitations of CloudCompare. The potential for this program as a research tool for documenting sites and artifacts is pretty obvious. However, I see a more direct use in our public outreach programs as well.

In particular, the Florida Public Archaeology Network has a program called Heritage Monitoring Scouts where we are organizing public volunteers to help monitor coastal sites that are threatened by sea level rise. If we were to train certain volunteers to take pictures for photogrammetry purposes, we could use CloudCompare to help document the degradation of these resources over time. This would be very useful information and I suspect that participants would find it rewarding as well.
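For anyone curious what such a comparison involves under the hood, the core cloud-to-cloud distance can be sketched in a few lines of Python. This is a simplified stand-in (nearest-neighbor distance via a KD-tree), not CloudCompare’s actual algorithm, and the point clouds here are synthetic:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(reference, compared):
    """For each point in `compared`, find the distance to its
    nearest neighbor in `reference`; a simple stand-in for
    CloudCompare's cloud-to-cloud distance tool."""
    tree = cKDTree(reference)
    distances, _ = tree.query(compared)
    return distances

# Two toy "scans" of the same object: the second has one edge
# shifted slightly, mimicking a marker nudged on its base.
rng = np.random.default_rng(0)
before = rng.uniform(0, 1, size=(1000, 3))
after = before.copy()
after[after[:, 0] > 0.9, 0] += 0.05  # shift points on one edge

d = cloud_to_cloud_distances(before, after)
print(f"median change: {np.median(d):.4f}")  # near zero (the "blue" areas)
print(f"max change:    {d.max():.4f}")       # the shifted edge (the "red")
```

Coloring each point by its distance is essentially what produces the blue-to-red scale in the comparisons above.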

Any other ideas? How would you use this software?


Tristan Harrenstein is trained as an archaeologist and has a passion for outreach and education. This is fortunate as he is also an employee of the Florida Public Archaeology Network, an organization dedicated to promoting the preservation and appreciation of our archaeological resources. He shares a blog space with his boss (Barbara Clark) and you can read more blog posts on other archaeology related subjects that tickle our fancy here.

Autodesk ReMake – Review

Over the last couple of years, we have seen the 3D archaeology community grow at an incredible rate. As awareness of the potential of these tools has grown, I have been asked several times about the best way to get started. Depending on their needs, I have generally recommended Agisoft PhotoScan as the current standard in archaeology circles. PhotoScan offers plenty of community support, flexibility in subject scale, few limitations, and high accuracy; it is relatively easy to use, and it has recently come down in price.

However, to anyone currently looking to get into photogrammetry for interpretive and educational purposes, I have a new recommendation. Autodesk ReMake is perfect for any program that just needs to make a model of an object without investing too much time and money.

Pricing

The professional version of ReMake currently runs at $30 a month or $300 a year. You could compare this to PhotoScan’s regular professional license of $3,499, but they are really not comparable products. A more appropriate comparison is to PhotoScan’s standard license, which is currently available for a one-time fee of $179. At that price, ReMake really does not make financial sense.

However, ReMake has free and educational licenses, which make it worth considering. Some features available in the full version are missing from these licenses: neither can freely use “Ultra quality” processing (though I have not had any issues), and the free version is limited to 50 photos (though you can do an awful lot with 50 photos). For the rest of this post, I will be talking about these two licenses.

Making a Model

If you want to get an idea of what it takes to make a model in PhotoScan, I suggest checking out the guide I assembled here. ReMake simplifies this process immensely. You merely select your photos, name the model, decide if you want the auto-crop or smart-texture options, and then hit “Start.” As we are working with the free or educational licenses, the program will then upload your photos to the cloud, process them, and notify you when your model is available to download.

It really is that simple. There are some limited editing options for a finished model, but I have not had much need for them. As you can see, the results are excellent and more than serviceable. Occasionally, I have gotten ReMake to assemble a model that PhotoScan could not figure out.

 

Features/Limitations

For ReMake, features and limitations go hand in hand. If you are using one of the free licenses, then you only have the option of using the cloud for processing your model; you cannot create it on your computer (locally). If you are like me, this is perfect, because my computer struggles to do anything while a PhotoScan model is processing. Also, unlike programs such as 123D Catch, you still retain full rights to your model.

If you want a fully 3D object in ReMake (one with a completed bottom), then you will have to use a second program, such as Blender or MeshLab, to assemble it. Technically, you can do this in PhotoScan, which can automatically stitch two models together to create a complete model. In practice, this has been far more finicky, so I do not expect to miss it.

Finally, there is no masking option in ReMake. This means that, if your background is not out of focus, you will run into problems. To be fair, this will cause difficulties in PhotoScan too, though there you can overcome them with effort. That being said, the model below turned out pretty well, and this is one that PhotoScan struggles to make sense of.

 

ReMake is a much simpler and more accessible program than PhotoScan. This naturally means that you have less control over your model and fewer options. If a model does not turn out well, for example, you do not have the option of spending hours trying to coax the program into a result (which actually appeals to me somewhat). On the other hand, if you have a casual, non-scientific project where you just need a model for a demonstration or for printing, it is excellent.



The Basic PhotoScan Process – Step 7

Photogrammetry has incredible potential in archaeological research and education. However, despite Agisoft PhotoScan’s relatively simple initial workflow, things get complicated pretty quickly. Those of us using the program tend to learn by solving problems as they occur, but this is a very piecemeal, time-consuming, and often frustrating process. Currently, anyone getting started with the program must either go through the same thing, or find someone to offer guidance.

In this series I will assemble all the separate tips that I have learned or found into a step-by-step guide on the basic process (posted weekly). I do not consider myself an expert in PhotoScan, so if you are familiar with the program and have any corrections or additions, please let me know. Each week, the previous step will be edited to include any comments and placed under the “Resources” menu to serve as a guide for beginners.

The previous steps can be found here.

Step 7: Building the Texture

Time for the final step in this series: texturing your model! If you only plan to use your model for 3D printing, then this part is unnecessary. However, one of the strengths of photogrammetry is that the photos themselves can be used to make your model look like the original object.

Changing the High Contrast Images

First of all, if you used edited, high-contrast images in Step 2, we will want to change them back so the final texture is accurate (otherwise, skip this part and move on to the “Texturing” section). To do this:

  1. Right-click one of your images in PhotoScan and select “Change Path…” on the menu that appears.
  2. A window will pop up; navigate to your original, unedited photos.
  3. Only the image that you right-clicked on will show up; that is fine. Select it and click “Open.”
  4. A window titled “Relocate Photos” will appear. Select “Entire workspace” and hit “OK.”

You now have a model built with the high contrast images and the unedited images in place for your texture.

Texturing

To add a texture to your model, go to your “Workflow” menu one last time and select “Build Texture…” A window will pop up with some more options.

  • “Mapping mode” should be on “Generic” most of the time. “Adaptive Orthophoto” might be good if you are working with aerials or a relatively flat subject.
  • “Blending mode” should be on “Mosaic (default)” as this will choose the most appropriate photo for the texture.
  • “Texture size/count” will depend on the detail you want and your system requirements. The first box indicates the dimensions of the texture image. Larger numbers will give you finer detail (so long as your photos are in a high enough resolution) but can be very taxing on your computer’s RAM if they are too big. The second box will help you get around that by producing multiple files instead of just one large one.
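The “Texture size/count” trade-off is easy to estimate: an uncompressed RGB texture needs roughly width × height × 3 bytes of memory, so doubling the dimension quadruples the cost. A quick back-of-the-envelope check (plain arithmetic, not PhotoScan’s actual memory model):

```python
def texture_megabytes(size, count=1, channels=3):
    """Approximate uncompressed memory for `count` square
    textures of size x size pixels, with `channels` bytes
    per pixel (RGB = 3)."""
    return size * size * channels * count / 1024 ** 2

for size in (2048, 4096, 8192):
    print(f"{size} x {size}: {texture_megabytes(size):.0f} MB")
# 2048 -> 12 MB, 4096 -> 48 MB, 8192 -> 192 MB: doubling the
# dimension quadruples the memory, which is why splitting one
# huge texture into several smaller files can help.
```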

Under the “Advanced” tab you will find:

  • An “Enable color correction” checkbox which is supposed to even out lighting from photo to photo. I leave this unchecked.
  • An “Enable hole filling” checkbox which will attempt to add a texture to places that were not covered by the photos. The program’s attempt to fill these spaces in is usually obvious, but, depending on the object, the size of the hole, and how concerned you are with accuracy, it is often better than nothing.

Click “OK” when you are satisfied. When PhotoScan is done processing, you have a finished 3D model! You can now upload it directly to Sketchfab under the “File” menu or export it (OBJ or STL is recommended) to print or to work with in Blender or MeshLab.

 


That concludes the Basic PhotoScan Process series but there is plenty more to talk about. This final entry will be added to the permanent page next week. If you have a tutorial series you would like to see or do, just let us know!

The Basic PhotoScan Process – Step 6


Step 6: Building the Mesh

You have created your dense point cloud and it is time to turn all of those points into a “solid” object. Before proceeding though, check your dense cloud for any errors and remove them the same way you did with the sparse cloud. I always have some cleaning up to do, but not nearly as much since I started doing the “Gradual Selection” process in Step 4.


Jeremiah Stager points out that you can further clean up the dense cloud by going to Tools > Dense Cloud > Select Points by Color. He recommends starting with white or black and increasing the tolerance to remove points that are errors in the model.

Now, go to your “Workflow” drop-down menu again and select “Build Mesh…” and the relevant window will pop up with new options.

  • “Surface Type” should be left at “Arbitrary” unless you are modeling from aerial photography.
  • “Source Data” should be your dense cloud. If the menu says “Sparse cloud” switch it.
  • “Face Count” puts a limit on how many triangles PhotoScan will apply. When we turn these points into a solid object, the resulting model is made up of lots of little triangles; each triangle is a face. When making a model to share on Sketchfab, 500,000 faces is a good upper limit. I have had good luck with up to 1 million faces, though this is close to what my computer can comfortably handle (once I apply a texture in Step 7). If you put “0” in this box, you are placing no limit on the number of faces. Be careful with this: I have accidentally created models with over 22 million faces this way. This made my computer cry.

Under the “Advanced” tab:

  • “Interpolation” describes how much the program automatically fills in holes. “Enabled (default)” is the middle ground, where small holes are filled in. If you need perfect accuracy and do not mind holes, you can select “Disabled”; otherwise, “Extrapolated” makes sure there are no holes left.
  • “Point Classes,” I think, are mostly relevant to aerial photographs. Here is a tutorial on creating them if you need it; otherwise, ignore this option.

Click “OK” when you are satisfied. The progress window will pop up again. This may take up to an hour.
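Once the mesh is built, a quick sanity check after exporting it as an OBJ is to count the face records in the file and confirm the result respects the face limit you set. This is a minimal sketch that assumes a plain, uncompressed OBJ with one face per “f” line:

```python
def count_obj_faces(path):
    """Count face records in a Wavefront OBJ file: vertex
    lines start with "v ", face lines start with "f "."""
    with open(path) as f:
        return sum(1 for line in f if line.startswith("f "))

# A hand-written OBJ containing a single triangle, just to
# demonstrate the counter.
with open("triangle.obj", "w") as f:
    f.write("v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n")

print(count_obj_faces("triangle.obj"))  # → 1
```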