The Basic PhotoScan Process – Step 5

Photogrammetry has incredible potential in archaeological research and education. However, despite Agisoft PhotoScan’s relatively simple initial workflow, things get complicated pretty quickly. Those of us using the program tend to learn by solving problems as they occur, but this is a very piecemeal, time-consuming, and often frustrating process. Currently, anyone getting started with the program must either go through the same thing, or find someone to offer guidance.

In this series I will assemble all the separate tips that I have learned or found into a step-by-step guide on the basic process (posted weekly). I do not consider myself an expert in PhotoScan. If you are familiar with the program and have any corrections or additions, please let me know. Each week, the previous step will be edited to include any comments and placed under the “Resources” menu to serve as a guide for beginners.

The previous steps can be found here.

Step 5: Building the Dense Cloud

Now that PhotoScan knows what points you want it to work with, it is time to ink in those lines. Go back to your “Workflow” drop-down menu and select “Build Dense Cloud…” Another window pops up. In the new window, the drop-down menu next to “Quality” will let you choose the (you guessed it!) quality of your dense cloud. I switch between “High” and “Medium” myself, but steer clear of “Ultra High” unless you have a really beefy computer.

Under the “Advanced” section you will find “Depth Filtering.” This is all about removing outlier points. If you have, and want, lots of fine detail in your model, choose “Mild” from the drop-down menu. If you are making the model for 3D printing, or do not care about small details like a crazy person, select “Aggressive.” Click “Okay.”
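
If you would rather run this step from PhotoScan’s Python console than the menus, a minimal sketch is below. It assumes the 1.2.x-style scripting module (imported as PhotoScan; the call and argument names changed in later releases) and an already-aligned active chunk.

```python
import PhotoScan

# Work on the chunk that is currently active in the open project.
chunk = PhotoScan.app.document.chunk

# "Quality" and "Depth Filtering" from the dialog map onto these two arguments.
# MildFiltering keeps fine surface detail; AggressiveFiltering strips it out
# for cleaner 3D prints. Expect a long run at High or Ultra High quality.
chunk.buildDenseCloud(quality=PhotoScan.MediumQuality,
                      filter=PhotoScan.MildFiltering)
```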

That is all for Step 5! I told you the rest was simpler. The progress window will pop up, and the build will take anywhere from an hour to several hours to complete.

The Basic PhotoScan Process – Step 4

Photogrammetry has incredible potential in archaeological research and education. However, despite Agisoft PhotoScan’s relatively simple initial workflow, things get complicated pretty quickly. Those of us using the program tend to learn by solving problems as they occur, but this is a very piecemeal, time-consuming, and often frustrating process. Currently, anyone getting started with the program must either go through the same thing, or find someone to offer guidance.

In this series I will assemble all the separate tips that I have learned or found into a step-by-step guide on the basic process (posted weekly). I do not consider myself an expert in PhotoScan. If you are familiar with the program and have any corrections or additions, please let me know. Each week, the previous step will be edited to include any comments and placed under the “Resources” menu to serve as a guide for beginners.

The previous steps can be found here.

Step 4: Cleaning the Sparse Point Cloud

Once processing is complete, you will have what is called a sparse point cloud. This is essentially a rough sketch before you ink the lines in during Step 5.

Hopefully, you will be able to make out the shape of your subject, though there will be a lot of points that you either do not want or that do not make sense. If you hit the camera icon on your toolbar, some blue squares will pop up with the name of your photos attached to each. These are your camera positions in relation to the subject when you took that photo. Pretty neat!

Checking Alignment

The photo in the top-right did not align and is not being used by the program. (Image by author)

Before proceeding with the cleanup, scroll through your photos and make sure there is a green check mark in the top-right corner of each. This mark tells us that PhotoScan successfully aligned that picture (at least it thinks it did). If a photo failed to align then the program could not figure out where it was supposed to go. This is almost always a result of an out of focus picture, bad lighting, or not enough overlap between photos.

To fix this, try right-clicking the offending photo and selecting “Reset Camera Alignment,” then right-clicking it again and selecting “Align Selected Cameras.” This is a long shot, but it does work occasionally. If the photo still will not align, you can try messing around with placing markers, but you might be better off just removing or retaking the photos.

Gradual Selection

The first thing you should do to clean up your model is to use the “Gradual Selection” tool. This process is pulled directly from dinsaurpaleo’s blog and boy does it make a big difference. His description is a little hard to follow, so I will include it here again.

1. Critical! Right-click your “Chunk” on the left and select “Duplicate Chunk.” Make sure the “Copy of Chunk” is in bold text before proceeding (this tells you which chunk you are working on). This way you will keep the original sparse cloud unaltered in case something goes wrong.

2. Under the “Edit” drop-down menu, select “Gradual Selection…” and a window will appear.

3. In the window, click the drop-down menu next to “Criterion” and select “Reconstruction Uncertainty.”

4. Next to “Level” enter “10” and select “Okay.”

You will see that a lot of the points in your sparse cloud turned pink. That means they are selected.

5. Hit your “Delete” key and those pink dots will disappear. (Yes, I’m serious. You won’t be sorry!)

You are better off without all those points. (Image by author)

6. Repeat steps 2-5 one more time.

7. Open your “Tools” drop-down menu and select “Optimize Cameras…” and a window will pop up.

8. Select all checkboxes except for the last two (I admit to having no idea what these really do) and click the “Okay” button.

This will take a few minutes to reset your photos without the inaccurate points you just deleted. If you deleted a lot of points, you might get a “Some cameras have insufficient number of projections and will be reset. Continue?” popup. If you do, click “Yes” and the program will try to reset the photos based on the remaining points.

9. Now, go back to your “Gradual Selection…” window and the “Reprojection Error” option should already be selected next to “Criterion.”

10. If the slider total is less than “1” you can skip to number 15.

11. Otherwise, set the “Level” to “1” and select “Okay.”

12. Hit your “Delete” key to delete the selected points.

13. Open your “Tools” drop-down menu again and select “Optimize Cameras.”

14. Your previous setting should still be selected. Click the “Okay” button.

15. Go back to your “Gradual Selection…” window one more time and choose the “Projection Accuracy” option next to “Criterion.”

16. Play around with the number until about 10% of your points are selected and click “Okay.”

17. Hit your “Delete” key.

And you are done! Hopefully, your sparse cloud looks a lot more like the object you were trying to model. I have, occasionally, had this process be too aggressive with a weak sparse point cloud, resulting in big holes in the next step. That is why we made a duplicate!
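
If you are scripting your workflow, the sketch below is a rough equivalent of steps 1-17, again assuming the 1.2.x-style PhotoScan module. The optimizeCameras() defaults stand in for the dialog checkboxes (their keyword names vary by version), and the Projection Accuracy level is a placeholder you would tune by eye.

```python
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.chunk.copy()  # step 1: work on a duplicate so the original sparse cloud stays safe

def select_and_delete(criterion, level):
    """Run Gradual Selection for one criterion/level and delete the selected points."""
    point_filter = PhotoScan.PointCloud.Filter()
    point_filter.init(chunk, criterion=criterion)
    point_filter.selectPoints(level)
    chunk.point_cloud.removeSelectedPoints()

# Steps 2-6: two passes of Reconstruction Uncertainty at level 10.
for _ in range(2):
    select_and_delete(PhotoScan.PointCloud.Filter.ReconstructionUncertainty, 10)

# Steps 7-8: re-optimize the alignment (library defaults used in place of the checkboxes).
chunk.optimizeCameras()

# Steps 9-14: Reprojection Error at level 1, then optimize again.
select_and_delete(PhotoScan.PointCloud.Filter.ReprojectionError, 1)
chunk.optimizeCameras()

# Steps 15-17: Projection Accuracy; 3 is only a placeholder, adjust it until
# roughly 10% of the remaining points are selected.
select_and_delete(PhotoScan.PointCloud.Filter.ProjectionAccuracy, 3)
```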

All that is left to do is some last tidying up. Spin your model around and use your “Freeform Selection Tool” on your toolbar to select and delete any leftover points that you do not want to model. This includes background points and the inevitable nonsense points floating around your model.

The “Freeform” (red), “Resize” (blue), and the “Rotate” (green) tools. Useful Tip: Quickly swap between these tools and your cursor with the spacebar. (Image by author)

Once everything is cleaned up to your satisfaction, use the “Resize Region” and “Rotate Region” tools to manipulate the box surrounding your subject until it is just bigger than what you want to model. This reduces the area PhotoScan has to process and speeds everything up.
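
The bounding box can also be nudged from the Python console. The short sketch below assumes the 1.2.x PhotoScan module’s chunk.region attributes, and the 80% shrink factor is purely illustrative.

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk
region = chunk.region  # the grey bounding box: a center, a size, and a rotation

# Shrink the box to 80% of its current size around the same center so PhotoScan
# only has to process the space immediately around your subject.
scale = 0.8
region.size = PhotoScan.Vector([region.size[0] * scale,
                                region.size[1] * scale,
                                region.size[2] * scale])
chunk.region = region
```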

Step 4 is done! This part is by far the most involved because it will define the quality of the rest of your model. Things are simpler from here on out.

The Basic PhotoScan Process – Step 3

Photogrammetry has incredible potential in archaeological research and education. However, despite Agisoft PhotoScan’s relatively simple initial workflow, things get complicated pretty quickly. Those of us using the program tend to learn by solving problems as they occur, but this is a very piecemeal, time-consuming, and often frustrating process. Currently, anyone getting started with the program must either go through the same thing, or find someone to offer guidance.

In this series I will assemble all the separate tips that I have learned or found into a step-by-step guide on the basic process (posted weekly). I do not consider myself an expert in PhotoScan. If you are familiar with the program and have any corrections or additions, please let me know. Each week, the previous step will be edited to include any comments and placed under the “Resources” menu to serve as a guide for beginners.

The previous steps can be found here.

Step 3: Align Photos

Now that PhotoScan knows which pictures you want it to work with, it is time to line them up spatially in relation to each other. Fortunately, the program does all the work here.

Once again, go to your “Workflow” menu and this time select “Align Photos.” A window will pop up (pictured below) with some settings to play with. If you want a really thorough explanation for what each does I recommend checking out: http://www.agisoft.com/forum/index.php?topic=3559.0.

In a nutshell though:

  • “Accuracy” is exactly what it sounds like. Higher will get you better results, but at the cost of a longer processing time and a model that is more taxing on your computer. I use “Medium” or “High” as the “Highest” setting takes a very long time on my computer and I am not sure it makes that big of a difference.
  • “Pair Preselection” is set to “Disabled” by default. Change this option to “Generic” to greatly speed up the processing time.

If you click on the “Advanced” tab you will get several more options.

  • “Key Point Limit” describes the maximum number of points the program will try to draw from a photo. A higher setting will improve camera alignment and increase processing times. The default is 40,000, and the author of the post above could not see a difference with anything above that point. Because I have trouble with the idea of letting go of accuracy, I keep this set at 1,000,000.
  • “Tie Point Limit” sets how many of these points the program will use to align the photos to decrease processing time. If you set this to 10,000 the program will only use the ten thousand best points from whatever you set the “Key Point Limit” to. To use all the points, set this to “0.” If you feel that the photos are taking too long to align, then it might be worth adjusting this setting.
  • “Constrain Features by Mask” is a setting you will probably only need if you used a turntable for your pictures and have applied a mask to your photos. Leave this unchecked otherwise.
  • “Adaptive Model Fitting” improves your camera alignment in ways I do not understand. Leave this checked.

Once everything is set, hit “Okay” and let PhotoScan do its thing. Depending on your settings and how many pictures it has to process, this could take anywhere from a few minutes to hours.
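
For anyone scripting this step, the dialog settings map onto a pair of calls in the Python console. The sketch below assumes the 1.2.x-style PhotoScan module (later versions split “Pair Preselection” into separate flags).

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk

# "Align Photos" dialog settings mapped onto the scripting call.
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,             # "Accuracy"
                  preselection=PhotoScan.GenericPreselection,  # "Pair Preselection"
                  keypoint_limit=40000,                        # "Key Point Limit"
                  tiepoint_limit=0)                            # "Tie Point Limit" (0 = no limit)

# Estimate camera positions and build the sparse point cloud.
chunk.alignCameras()
```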

The Basic PhotoScan Process – Step 2

Photogrammetry has incredible potential in archaeological research and education. However, despite Agisoft PhotoScan’s relatively simple initial workflow, things get complicated pretty quickly. Those of us using the program tend to learn by solving problems as they occur, but this is a very piecemeal, time-consuming, and often frustrating process. Currently, anyone getting started with the program must either go through the same thing, or find someone to offer guidance.

In this series I will assemble all the separate tips that I have learned or found into a step-by-step guide on the basic process (posted weekly). I do not consider myself an expert in PhotoScan. If you are familiar with the program and have any corrections or additions, please let me know. Each week, the previous step will be edited to include any comments and placed under the “Resources” menu to serve as a guide for beginners.

The previous steps can be found here.

Step 2: Add Photos

There is no model until you tell the program which photos to use. While in PhotoScan, simply select the “Workflow” dropdown menu and click on “Add Photos” or “Add Folder.” Navigate to your photos, select them, and click “Open.” That is it!

Okay, not quite. One way to make sure your photos align as well as possible, and to increase the detail of your model, is to use a program like Camera Raw. All you need to do (if using Camera Raw) is open all of your pictures together and increase the contrast to just before the shadows start merging together. Then save these versions with the exact same names in a separate folder; do not save over your originals! If you follow this step, add these high-contrast pictures instead of the originals.
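
If you script your workflow, the equivalent of “Add Photos”/“Add Folder” is a couple of lines in the Python console. The sketch below assumes the 1.2.x-style PhotoScan module, and the folder path is a made-up example pointing at the high-contrast copies.

```python
import glob
import os
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.chunk if doc.chunk else doc.addChunk()  # reuse the active chunk or create one

# Hypothetical folder holding the contrast-adjusted copies of your photos.
photo_dir = "C:/models/my_artifact/high_contrast"
photos = sorted(glob.glob(os.path.join(photo_dir, "*.jpg")))

chunk.addPhotos(photos)
```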

Flat lighting (left) has some hard-to-see details. Slightly adjusting the contrast (middle) can help this texture pop out for PhotoScan. Don’t take it too far though, or you will lose details (right). (Images by author)

That is all for this week. Look for step three next week when things really start to get complicated!

FPAN 3D Public Archaeology

Since Hurricane Matthew threw off our schedule, this week we are reposting an entry from the Florida Public Archaeology Network’s Northwest region’s blog, Going Public: The Dirt on Public Archaeology. Here, Kevin Gidusko talks about FPAN’s efforts to use 3D modeling and printing for public outreach and education.

Last year, several FPAN staff began to engage with a new type of technology that was making waves in the field of archaeology: 3D visualization of archaeological sites and artifacts (see here and here). Three-dimensional models of artifacts and archaeological sites have been around for a few years now, though for much of that time the hardware and software required to undertake a project was a bit cost-prohibitive, at least to us. However, as cost and 3D technology began to make a pivot towards more public use, we jumped at the opportunity to see what we could do with it. We were lucky to have colleagues, such as those at the VCU Virtual Curation Laboratory, who had ventured into the field already and were able to give us much-needed pointers. At FPAN, we immediately saw how this emerging technology could couple with our archaeology education outreach to engage the public in new and exciting ways.

Recently, we saw that the new Smithsonian National Museum of African American History and Culture has partnered with Google to assist in engineering a portion of the museum that will provide access to more of the museum’s collections through 3D visualization. Many museums have the mass of their collections in storage, due to the fact that there is simply not enough space to display these items, or that the items are too fragile for display. This new, interactive display will allow visitors to follow their personal interests through a vast collection of artifacts that have been modeled and interpreted by museum staff. In a way, this creates a unique visitor experience for each and every person that comes to the museum. Astonishing!

But, FPAN got there first! We have been busy creating 3D models of artifacts and archaeological sites that the public is able to interact with through our Sketchfab site. There, you can see unique items up close and personal. You can even, if you are able to, download the item for 3D printing. Of course, this is just some friendly bragging; several groups preceded us and there are sure to be many who will follow us in utilizing this new technology. What is important is that the public now has ways to interact with archaeological resources from around the world in ways they have never been capable of before. Want to visit the British Museum, but can’t afford an airline ticket? Have a lunch break to check out what archaeologists in Korea are working on? Want to see what a shipwreck looks like, but don’t want to bother with all that pesky SCUBA diving? Take a peek here. Certainly, this new trend in interpretation and engagement is taking solid hold, and we’re happy to see the Smithsonian embrace the technology for the public!

FPAN will continue incorporating 3D technology into our future outreach and will also apply it to current curriculum. Plenty of projects are in the works, and we are currently working on incorporating 3D models into our Project Archaeology curricula for Kingsley Plantation and Florida Lighthouses. But stay tuned for more!

You can learn more by checking out the links above, or you can swing by the 3D Public Archaeology Working Group Facebook page where professionals from around the world share ideas, answer questions, and show what they’re currently working on. No prior knowledge needed; we’re happy to talk anytime!

Text and Pics: Kevin Gidusko
Models: Kevin Gidusko and Tristan Harrenstein