3D Scanning from Photographs with 123D Catch
A few weeks ago, while researching options for creating 3-dimensional models from photographs, I came across a really amazing piece of software that is in beta right now (and free!) called 123D Catch. It’s from Autodesk, which is well known for its CAD and 3D modeling software. Unfortunately it is only available for Windows right now, which is a real shame, but that wasn’t going to stop me from taking it for a test drive. The promise of the software is that you can take any old camera, photograph an object from all sides, and send the pictures to Autodesk’s servers, which will send you back a 3D model. Sounds like science fiction, right? Let’s give it a shot and see if we can get something scanned in and printed out.
The first thing we need to do is grab photos of our subject, in this case Perry the Platypus. I took roughly 40 shots while walking around the object and saved them all to a folder on the desktop. Then we open up the program and select the images.
Once you’ve grabbed your images you can click the button to create a photo scene. I chose to have it email me when the process is complete because, depending on the complexity of the model, you could be waiting a few hours for the file to come back. All the processing happens on their servers (which is great, because they probably have much more powerful machines doing the work than this old Windows laptop I had lying around). Once I got the e-mail that it was done, I downloaded the file and opened it up in 123D.
The first thing I do after opening the file is export it as an .obj file that I can edit in a variety of programs. Here I have opened the .obj file in Meshmixer, where I use the lasso tool to discard the background objects and zero in on just the model I want to eventually print. Ideally I’d also flatten the bottom, but I’m still learning and haven’t found an easy way to do that yet. Since this object is sitting on a table anyway, I’ll just print it on a raft to give it a solid base. The second screenshot shows the final .stl file exported from Meshmixer and opened in Meshlab. I’d say Perry is looking pretty good so far; let’s see if we can get him in plastic.
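For the curious, the “flatten the bottom” step can also be done with a few lines of script instead of a GUI tool. The sketch below is just one possible approach, not what Meshmixer actually does: it reads the vertex lines of a Wavefront .obj file and clamps every vertex within a small clearance of the lowest point down onto a single base plane, so the scan sits flat. It assumes the model’s up axis is Z, and the clearance value and function name are my own invention for illustration.

```python
def flatten_base(obj_lines, clearance=2.0):
    """Clamp every vertex within `clearance` units of the model's lowest
    point down to that lowest point, giving the mesh a flat base.
    `obj_lines` is a list of lines from a Wavefront .obj file."""
    # First pass: collect all vertex positions ("v x y z" lines).
    verts = []
    for line in obj_lines:
        if line.startswith("v "):
            x, y, z = map(float, line.split()[1:4])
            verts.append((x, y, z))

    z_min = min(z for _, _, z in verts)   # lowest point of the scan
    cutoff = z_min + clearance            # everything below this gets flattened

    # Second pass: rewrite vertex lines, leaving faces/normals untouched.
    out, vi = [], 0
    for line in obj_lines:
        if line.startswith("v "):
            x, y, z = verts[vi]
            vi += 1
            if z < cutoff:
                z = z_min
            out.append(f"v {x} {y} {z}")
        else:
            out.append(line)
    return out
```

A mesh library like trimesh could do a cleaner job (actually slicing the geometry with a plane rather than squashing vertices), but this keeps the idea visible in plain Python.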
Because his arms extend outward, we need to print with support, which is as easy as selecting “Exterior Support” from the dropdown in the Print-o-matic settings when generating the gcode. After an hour and a half of printing we get something that looks like this.
This was actually my first time printing anything that needed support material and it worked pretty well. The support structure peels away fairly easily and after a few minutes of cutting and cleanup we’re left with a great little plastic model of our stuffed friend. I’ve included a photo below of the two of them hanging out together as well.
I’d say for a first stab at scanning and printing this was a huge success. Clearly I could calibrate a bit more and learn how to clean up 3D models before printing them, but it’s amazing that this was accomplished with no expensive software or devices (the photos were all taken with my phone). We’re going to look into building a turntable with a clean white background to automate the process of grabbing the still images. We’ve already seen a lot of interest from folks in Historic Preservation as well as the Art Department here on campus in capturing real-life objects as 3D models. I’m hopeful this will be a low-tech way of accomplishing that task and will break new ground for using this device around campus.