Blog

Against the Flow: Reconstructive Modeling of a Watermill

As an archaeologist, my first experience with modeling came almost as an afterthought to my thesis work. I had an opportunity to present my work as a poster at the Southeastern Archaeological Conference in Athens, Georgia, and wanted to show people, in a readily accessible visual format, what the watermill structure looked like before weather, moss, and debris created what exists today. Much like when an artifact is printed and made available to the public, I wanted to connect the public to my mill. Once I realized its outreach capabilities, the model I had created became a centerpiece of my research.

The mill as it stands today on the left and a scaled reconstruction on the right. (Image by author)
Brick and gear attachment piece recovered during excavations. (Image by author)

I began this project as an attempt to recreate the mill digitally and then develop a scale model. The mill is a mid-nineteenth-century watermill that is the focus of my master’s thesis. As it stands today, the mill is a dilapidated concrete foundation nestled on the shore of a creek in Central Florida; the wheel and gears are missing, as is the mill house that would have held the milling equipment. Severe disturbance from both water intrusion and modern construction around the area made finding smaller artifacts difficult, and in the end only a few bricks and some metal pieces associated with the axle were recovered.

Since so little was found, I was forced to rethink what I could learn from the mill foundation itself rather than from a collection of artifacts. As a result, I turned my focus to determining the size of the wheel, the size of the gear, and, from that information, how much power the mill would have been able to produce. This led me to modeling.

Mill as it looked when finished in Tinkercad. From here I was able to have it printed. (Image by author)

While in the field I took measurements with surveying equipment as well as hand tools. I later used this information to recreate the mill in Tinkercad, an excellent free, browser-based modeling site. I decided to create a scale model of the mill for several reasons:

  1. I wanted to experiment with different wheel and gear sizes and see how they would have fit within the existing foundation.
  2. I needed to see the relationship between the wood inserts in the floor of the gearbox and a gear.
  3. I wanted to be able to show future audiences how the mill would have looked when first constructed.
Gearbox with wood inserts pointed out. These were found after excavating the box. (Image by author)
The model in Tinkercad showing the correlation of the gear and where it would have been positioned over the wood insert. (Image by author)

The mill, recreated in Tinkercad, was designed at a scale of 1 inch to 1 foot, though it was scaled down further when printed to reduce cost. With the 3D reconstruction, I was able to visually determine that the wheel would most likely have been between 4 and 5 feet in diameter with a maximum width of 2 feet; any smaller and the wheel would have lost efficiency. I also found that a gear would have aligned perfectly above the wood inserts in the bottom of the gearbox, perhaps serving as a buffer in case the gear bounced against the floor.
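For anyone who wants to try the power calculation mentioned earlier, the standard back-of-the-envelope formula for waterpower is P = ρ × g × Q × H × η: water density times gravity times flow rate times head times wheel efficiency. The short Python sketch below runs that calculation with entirely assumed flow, head, and efficiency values; the real numbers for any particular mill would have to come from measurements at the site.

```python
# Back-of-the-envelope waterwheel power estimate.
# All inputs below are assumed illustration values,
# not measurements from the mill site.

RHO = 1000.0  # density of water, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def waterwheel_power(flow_m3s, head_m, efficiency):
    """Hydraulic power captured by the wheel, in watts."""
    return RHO * G * flow_m3s * head_m * efficiency

flow = 0.15       # m^3/s of creek water reaching the wheel (assumed)
head = 1.4        # m of vertical drop, in line with a 4-5 ft wheel (assumed)
efficiency = 0.6  # fraction captured; varies widely by wheel type (assumed)

watts = waterwheel_power(flow, head, efficiency)
print(f"{watts:.0f} W, or about {watts / 745.7:.1f} horsepower")
```

With these assumed inputs the wheel delivers roughly 1,200 W, a little under 2 horsepower, which is the kind of figure a scale reconstruction lets you sanity-check against the foundation's dimensions.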

The mill was printed using an online printing service, and I took the model with me as a visual aid for a poster presentation I gave at the 2016 Southeastern Archaeological Conference. Watching grown archaeologists spin the wheel on my model only confirmed my belief in its ability to capture the public’s interest.

Modeling artifacts has brought them out of the lab, and scale models of buildings and machines will bring large or immovable structures into the hands of the public. The educational material this type of reconstructive modeling could produce is vast: math (scale modeling teaches ratios), history (changing technologies can be recreated to give temporal context), physics (calculating the horsepower and energy of machines), communication (creating modern instructions for operating a historic machine), and more. Making sure that math, history, science, and language arts are all covered is the intent of any archaeologist working in schools.

Next week I will review Tinkercad and talk more about its potential in an educational setting.

Final mill model. The wheel and gear are one unit and spin freely on the foundation, allowing audiences to physically experience the mill turning. (Image by author)

 


Elizabeth Chance Campbell is a Master’s student at the University of Central Florida and will be defending her thesis, on an 1866 watermill, in the spring. She worked in a low-income middle school for five years, where she taught students with learning disabilities, before moving to Georgia, where her wife is stationed in the Air Force. She hopes to take her experience as an educator and as an archaeologist to the next level by creating lessons that can be incorporated into classroom settings with students of all levels.

The Basic PhotoScan Process – Step 7

Photogrammetry has incredible potential in archaeological research and education. However, despite Agisoft PhotoScan’s relatively simple initial workflow, things get complicated pretty quickly. Those of us using the program tend to learn by solving problems as they occur, but this is a piecemeal, time-consuming, and often frustrating process. Currently, anyone getting started with the program must either go through the same thing or find someone to offer guidance.

In this series I will assemble all the separate tips that I have learned or found into a step-by-step guide on the basic process (posted weekly). I do not consider myself an expert in PhotoScan, so if you are familiar with the program and have any corrections or additions, please let me know. Each week, the previous step will be edited to include any comments and placed under the “Resources” menu to serve as a guide for beginners.

The previous steps can be found here.

Step 7: Building the Texture

Time for the final step in this series: texturing your model! If you only plan to use your model for 3D printing, this part is unnecessary. However, one of the strengths of photogrammetry is that the photos themselves can be used to make your model look like the original object.

Changing the High Contrast Images

First of all, if you used edited, high-contrast images in Step 2, you will want to change them back so the final texture is accurate (otherwise, skip this part and move on to the “Texturing” section). To do this:

  1. Right-click one of your images in PhotoScan and select “Change Path…” on the menu that appears.
  2. A window will pop up; navigate to your original, unedited photos.
  3. Only the image that you right-clicked will show up; that is fine. Select it and click “Open.”
  4. A window titled “Relocate Photos” will appear. Select “Entire workspace” and hit “OK.”

You now have a model built with the high-contrast images and the unedited images in place for your texture.
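If you would rather script this step, PhotoScan Pro includes a Python console (Tools > Console) that can swap every path in one pass. The sketch below is modeled on relocation scripts from the Agisoft forums for the 1.2-era API; the assignable camera.photo.path attribute, the folder names, and the assumption that your edited and original photos share file names are all things to verify against your own version before relying on it.

```python
# Sketch: point every camera back at the unedited originals.
# Assumes a 1.2-era PhotoScan Pro Python API and that the originals
# share file names with the edited copies. Verify before use.
import PhotoScan

EDITED_DIR = "high_contrast"  # hypothetical folder names;
ORIGINAL_DIR = "originals"    # substitute your own paths

chunk = PhotoScan.app.document.chunk
for camera in chunk.cameras:
    path = camera.photo.path
    if EDITED_DIR in path:
        # Rewrite the folder portion of the path, keeping the file name.
        camera.photo.path = path.replace(EDITED_DIR, ORIGINAL_DIR)
```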

Texturing

To add a texture to your model, go to your “Workflow” menu one last time and select “Build Texture…” A window will pop up with some more options.

  • “Mapping mode” should be on “Generic” most of the time. “Adaptive Orthophoto” might be good if you are working with aerials or a relatively flat subject.
  • “Blending mode” should be on “Mosaic (default),” as this will choose the most appropriate photo for the texture.
  • “Texture size/count” will depend on the detail you want and what your system can handle. The first box sets the dimensions of the texture image. Larger numbers will give you finer detail (so long as your photos are of high enough resolution) but can be very taxing on your computer’s RAM if they are too big. The second box helps you get around that by producing multiple smaller files instead of one large one.

Under the “Advanced” tab you will find:

  • An “Enable color correction” checkbox, which is supposed to even out lighting from photo to photo. I leave this unchecked.
  • An “Enable hole filling” checkbox, which will attempt to add a texture to places that were not covered by the photos. The program’s attempt to fill these spaces in is usually obvious, but depending on the object, how big the hole is, and how concerned you are with accuracy, it is better than nothing.

Click “OK” when you are satisfied. When PhotoScan is done processing, you have a finished 3D model! You can now upload it directly to Sketchfab under the “File” menu or export it (OBJ or STL is recommended) to print or to work with in Blender or MeshLab.
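For the script-inclined, the Pro edition’s Python console can drive the same texturing settings. This is only a sketch, assuming the 1.2-era API; the function and enum names (buildUV, buildTexture, GenericMapping, MosaicBlending) may differ in other versions, so check your API reference.

```python
# Sketch: build a texture from the Python console
# (PhotoScan Pro, 1.2-era API; names may vary by version).
import PhotoScan

chunk = PhotoScan.app.document.chunk

# "Mapping mode: Generic" from the dialog; lays out the UV map.
chunk.buildUV(mapping=PhotoScan.GenericMapping)

# "Blending mode: Mosaic," color correction off, hole filling on,
# and a single 4096 x 4096 texture image.
chunk.buildTexture(blending=PhotoScan.MosaicBlending,
                   color_correction=False,
                   size=4096,
                   fill_holes=True)
```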

 


That concludes the Basic PhotoScan Process series, but there is plenty more to talk about. This final entry will be added to the permanent page next week. If you have a tutorial series you would like to see or to write, just let us know!

The Basic PhotoScan Process – Step 6

Photogrammetry has incredible potential in archaeological research and education. However, despite Agisoft PhotoScan’s relatively simple initial workflow, things get complicated pretty quickly. Those of us using the program tend to learn by solving problems as they occur, but this is a piecemeal, time-consuming, and often frustrating process. Currently, anyone getting started with the program must either go through the same thing or find someone to offer guidance.

In this series I will assemble all the separate tips that I have learned or found into a step-by-step guide on the basic process (posted weekly). I do not consider myself an expert in PhotoScan, so if you are familiar with the program and have any corrections or additions, please let me know. Each week, the previous step will be edited to include any comments and placed under the “Resources” menu to serve as a guide for beginners.

The previous steps can be found here.

Step 6: Building the Mesh

You have created your dense point cloud, and it is time to turn all of those points into a “solid” object. Before proceeding, though, check your dense cloud for any errors and remove them the same way you did with the sparse cloud. I always have some cleaning up to do, but not nearly as much since I started using the “Gradual Selection” process in Step 4.


Jeremiah Stager points out that you can further clean up the dense cloud by going to Tools > Dense Cloud > Select Points by Color. He recommends starting with white or black and increasing the tolerance to remove points that are errors in the model.

Now, go to your “Workflow” drop-down menu again and select “Build Mesh…” The relevant window will pop up with new options.

  • “Surface Type” should be left at “Arbitrary” unless you are modeling from aerial photography.
  • “Source Data” should be your dense cloud. If the menu says “Sparse cloud,” switch it.
  • “Face Count” puts a limit on how many triangles PhotoScan will apply. When the points are turned into a solid object, the resulting model is made up of lots of little triangles; each triangle is a face. When making a model to share on Sketchfab, 500,000 faces is a good upper limit. I have had good luck with up to 1 million faces, though this is close to what my computer can comfortably handle (once I apply a texture in Step 7). If you put “0” in this box you are placing no limit on the number of faces. Be careful with this: I have accidentally created models with over 22 million faces this way. This made my computer cry.

Under the “Advanced” tab:

  • “Interpolation” determines how much the program automatically fills in holes. “Enabled (default)” is the middle ground, where small holes are filled in. If you need perfect accuracy and do not mind holes, you can select “Disabled”; otherwise, “Extrapolated” makes sure there are no holes left.
  • “Point Classes” are, I believe, more relevant to aerial photography. Here is a tutorial on creating them if you need it; otherwise, ignore this option.

Click “OK” when you are satisfied. The progress window will pop up again. This may take up to an hour.
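The same settings can also be driven from the Pro edition’s Python console. A minimal sketch, assuming the 1.2-era API (enum names such as Arbitrary, DenseCloudData, and EnabledInterpolation may differ in your version):

```python
# Sketch: build the mesh from the Python console
# (PhotoScan Pro, 1.2-era API; names may vary by version).
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Mirrors the dialog above: arbitrary surface, dense cloud source,
# default interpolation, and a 500,000-face cap.
chunk.buildModel(surface=PhotoScan.Arbitrary,
                 source=PhotoScan.DenseCloudData,
                 interpolation=PhotoScan.EnabledInterpolation,
                 face_count=500000)
```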

Hands on the Past: the Ferry Farm Touchbox, Virtual Curation, and Tactile Archaeology

This week’s entry is a repost from Bernard Means of the Virtual Curation Laboratory. Dr. Means and the VCL have been using 3D prints as a tool for working with the visually impaired for over three years now. For a more recent entry from earlier this year, see Jedi Master of 3D Printing: Creating Access Passes to the Past.

By Bernard K. Means, Director of the Virtual Curation Laboratory

Melanie Marquis demonstrates the Touchbox. (Image by author)

This past Friday (March 22, 2013), I had the opportunity to speak with Melanie Marquis, laboratory supervisor at George Washington’s Ferry Farm, regarding a Touchbox she had developed for blind and other visually impaired visitors. Standard museum displays—with artifacts and text protected behind clear acrylic or glass case fronts—are inaccessible to those who cannot see or have difficulty seeing. The Touchbox was developed to make sure that these visitors also have the opportunity to learn and experience the rich history embedded in the archaeological landscape at Ferry Farm—a history that includes American Indian artifacts spanning millennia, objects associated with a young George Washington and his family, and items recovered from a significant Union encampment dating to the American Civil War. The Touchbox includes large-print and Braille maps of the Ferry Farm archaeological investigations, unprovenienced artifacts that can be safely handled, and some objects purchased from thrift shops that are analogues of materials recovered archaeologically.

Raised map with Braille showing the Ferry Farm landscape. (Image by author)
Plastic replica (left) of an 18th century brush (right) recovered at Ferry Farm. (Image by author)

What’s lacking from the Touchbox are key items recovered from Ferry Farm’s rich past that are too sensitive or fragile to be handled by any visitor to the site. Fortunately, our work at the Virtual Curation Laboratory allows us to create plastic replicas of artifacts from Ferry Farm that can be incorporated into the Touchbox.  We’ve been working with Ferry Farm’s artifact analyst, Laura Galke, over the last year-and-a-half to create virtual avatars of many significant small finds, including American Indian stone tools, 18th century wig curlers and buckle fragments, the Masonic pipe that may have belonged to George Washington, and Minié balls from the Civil War occupation—among other objects.  And, we have created plastic replicas using the MakerBot Replicator that is normally housed in the Virtual Curation Laboratory @ Virginia Commonwealth University.  The plastic replicas we create are scaled exactly the same as their more fragile actual analogues, and thus enable a tactile appreciation of Ferry Farm’s past.

Plastic replica (left) of an Adena point (right) from Ferry Farm. (Image by author)

Over the coming weeks, we will be creating plastic replicas of small finds virtually curated from George Washington’s Ferry Farm for specific inclusion into the Touchbox.  We here at the Virtual Curation Laboratory are excited about our chance to make Ferry Farm’s history available to a wider audience.


In the Virtual Curation Laboratory, a team of Virginia Commonwealth University undergraduate students and alumni works under project director Dr. Bernard K. Means to digitally preserve the past and share it with the world. Check out and download digital artifact models on our Sketchfab page.

The Basic PhotoScan Process – Step 5

Photogrammetry has incredible potential in archaeological research and education. However, despite Agisoft PhotoScan’s relatively simple initial workflow, things get complicated pretty quickly. Those of us using the program tend to learn by solving problems as they occur, but this is a piecemeal, time-consuming, and often frustrating process. Currently, anyone getting started with the program must either go through the same thing or find someone to offer guidance.

In this series I will assemble all the separate tips that I have learned or found into a step-by-step guide on the basic process (posted weekly). I do not consider myself an expert in PhotoScan, so if you are familiar with the program and have any corrections or additions, please let me know. Each week, the previous step will be edited to include any comments and placed under the “Resources” menu to serve as a guide for beginners.

The previous steps can be found here.

Step 5: Building the Dense Cloud

Now that PhotoScan knows which points you want it to work with, it is time to ink in those lines. Go back to your “Workflow” drop-down menu and select “Build Dense Cloud…” Another window pops up. In the new window, the drop-down menu next to “Quality” will let you choose (you guessed it!) the quality of your dense cloud. I switch between “High” and “Medium” myself, but steer clear of “Ultra High” unless you have a really beefy computer.

Under the “Advanced” section you will find “Depth Filtering.” This is all about removing outlier points. If you have, and want, lots of fine detail in your model, choose “Mild” from the drop-down menu. If you are making the model for 3D printing, or do not care about small details like a crazy person, select “Aggressive.” Click “OK.”

That is all for Step 5! I told you the rest was simpler. The progress window will pop up, and processing will take anywhere from an hour to several hours to complete.
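For completeness, this step can also be kicked off from the Pro edition’s Python console. A minimal sketch, assuming the 1.2-era API (the quality and filter enum names may differ in your version):

```python
# Sketch: build the dense cloud from the Python console
# (PhotoScan Pro, 1.2-era API; names may vary by version).
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Mirrors the dialog above: High quality with Mild depth filtering.
# Swap in MediumQuality or AggressiveFiltering to match your needs.
chunk.buildDenseCloud(quality=PhotoScan.HighQuality,
                      filter=PhotoScan.MildFiltering)
```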