Structure Sensor – Machine vision’s new tools

Microsoft’s Kinect brought an interesting new technology to the living room on the Xbox 360.  It could see people, estimate how tall you were, and track how you posed your body.  And Kinect was easily hacked to do far more (http://www.kinecthacks.com/), which arguably spurred more advances in machine vision than the previous 20 years combined.

Machine vision is about to take another massive leap with Occipital’s Structure Sensor (http://structure.io/).  It’s like a Kinect, but higher resolution, faster, and it attaches to your iPad.  The combination of an excellent SDK, excellent sensor hardware, the fast 64-bit A7 processor, and full mobility is going to open the floodgates to whole new domains of easily accessed tools.  Already I’ve had the pleasure of scanning people and producing 3D models in under a minute each, ad hoc, on the spot.  Just whip out the iPad with the Structure Sensor attached and walk around your subject to capture a full 3D model.
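Under the hood, each depth frame from a sensor like this is just a grid of distances, and back-projecting it through the camera intrinsics yields the point cloud that gets fused into a mesh as you walk around. Here’s a minimal sketch of that back-projection step in Python; the resolution and intrinsics are placeholder values, not the Structure Sensor’s actual calibration:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into a 3D point cloud.

    depth: (H, W) array of distances along the camera's z axis.
    fx, fy, cx, cy: pinhole camera intrinsics, in pixels.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx   # pixel column -> metres at this depth
    y = (v - cy) * depth / fy   # pixel row    -> metres at this depth
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]   # drop pixels with no depth reading

# Hypothetical 640x480 frame with placeholder intrinsics.
depth = np.full((480, 640), 1.5)   # pretend everything is 1.5 m away
cloud = depth_to_points(depth, fx=570.0, fy=570.0, cx=320.0, cy=240.0)
print(cloud.shape)                 # (307200, 3)
```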

Conquer Mobile had the opportunity to meet the crew at Emily Carr’s Intersections Digital Studio (IDS). I scanned Maria Lantin’s bust and pulled it into VR as a 10 ft tall statue.  It took less than 10 minutes, but the effect was amazing.  You felt like you were experiencing a sculpture, walking around and seeing it from all angles as though you were in a park looking up at an empress of some great civilization.  Aside from the great effect on the ol’ ego, it goes to show that digitizing and importing our real world into virtual worlds is going to get a whole lot easier.
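For the curious, the 10 ft statue is nothing more exotic than a uniform scale applied to the scanned mesh before it goes into the VR scene. A quick sketch with the trimesh library, assuming a y-up model and hypothetical file names:

```python
import trimesh

TARGET_HEIGHT_M = 3.048  # 10 ft in metres

# Load the scanned bust (hypothetical file exported from the scanner).
bust = trimesh.load("bust_scan.obj")

# Uniform scale so the bounding-box height (y extent) hits the target.
bust.apply_scale(TARGET_HEIGHT_M / bust.extents[1])

bust.export("bust_scan_10ft.obj")
```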

In the case of digitizing for fit, we tried an experiment where my head was scanned and David Clement (http://davidonthings.wordpress.com/) fitted a 3D model of his custom head-mounted display (HMD) to my head model.  It became immediately apparent how to cut and shape the pieces to fit, where the gaps were, and how the display could be supported.  The challenge with building any physical thing is that people’s body shapes are so very different.  When you’re going for ergonomic comfort, you have to gather lots of physical shape data and check how your adjustment mechanisms work.  David has been gathering lots of head and eye data to explore how to make a highly ergonomic, ultra-light HMD.
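Checking fit like this boils down to measuring clearance: for every vertex on the HMD model, how far away is the nearest point on the head scan? Here’s a rough sketch of that query using scipy; the file names are hypothetical, and a real pipeline would first align the two meshes (e.g. with ICP) before measuring:

```python
import trimesh
from scipy.spatial import cKDTree

# Hypothetical inputs, assumed already in the same coordinate frame.
head = trimesh.load("head_scan.obj")
hmd = trimesh.load("hmd_shell.obj")

# Nearest head-scan vertex for every HMD vertex (approximates
# surface-to-surface distance when the scan is dense).
tree = cKDTree(head.vertices)
gaps, _ = tree.query(hmd.vertices)   # one distance per HMD vertex, metres

print(f"min clearance: {gaps.min() * 1000:.1f} mm")
print(f"max gap:       {gaps.max() * 1000:.1f} mm")

# Vertices within 2 mm of the head are candidate contact/pressure points.
contact = hmd.vertices[gaps < 0.002]
print(f"{len(contact)} HMD vertices within 2 mm of the head")
```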

With prosthetics, the problem of ergonomics is even more apparent.  A recent TED talk by David Sengeh (https://www.ted.com/talks/david_sengeh_the_sore_problem_of_prosthetic_limbs) describes how his team customizes prosthetic sockets by combining body scanning with 3D printing.   And now, with the recently opened MakerLabs (http://www.makerlabs.ca/) offering open access to a high-end powder 3D printer, you can 3D print your scanned shapes in full colour.

These models were built with a fixed voxel volume, sized for the power and memory of an iPad, but the sensor captures over a hundred times this fidelity, and it can be synchronized with the iPad’s camera for full-colour model capture.  Occipital has acquired Skanect (http://skanect.occipital.com/) and is creating a wireless bridge, so you’ll be able to walk around while the big iron computer integrates all the data.  This means direct capture of full-fidelity colour models.
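That fixed voxel volume is a truncated signed-distance (TSDF) grid: each depth frame nudges a per-voxel running average of distance-to-surface, and memory grows with the cube of the resolution, which is exactly why the iPad caps it. Here’s a toy single-frame version of the fusion update, simplified to a fixed camera pose and placeholder intrinsics rather than real tracking:

```python
import numpy as np

N = 64                      # memory is O(N^3) -- the iPad cap in a nutshell
VOX = 1.0 / N               # voxel size for a 1 m cube
TRUNC = 4 * VOX             # truncation band around the surface
tsdf = np.ones((N, N, N))   # signed distance per voxel, clamped to [-1, 1]
weight = np.zeros((N, N, N))

def integrate(depth, fx, fy, cx, cy):
    """Fuse one depth frame; camera at the origin, looking down +z."""
    # World position of every voxel centre (cube starts 0.5 m out).
    ii, jj, kk = np.indices((N, N, N))
    x = (ii + 0.5) * VOX - 0.5
    y = (jj + 0.5) * VOX - 0.5
    z = (kk + 0.5) * VOX + 0.5
    # Project each voxel centre into the depth image.
    u = np.round(fx * x / z + cx).astype(int)
    v = np.round(fy * y / z + cy).astype(int)
    h, w = depth.shape
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.where(ok, depth[np.clip(v, 0, h - 1), np.clip(u, 0, w - 1)], 0)
    sdf = d - z                       # positive in front of the surface
    ok &= (d > 0) & (sdf > -TRUNC)    # skip voxels far behind the surface
    s = np.clip(sdf / TRUNC, -1.0, 1.0)
    # Weighted running average: the standard TSDF fusion update.
    tsdf[ok] = (tsdf[ok] * weight[ok] + s[ok]) / (weight[ok] + 1)
    weight[ok] += 1

integrate(np.full((480, 640), 1.0), 570.0, 570.0, 320.0, 240.0)
```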

With these tools, mobile digitization of our physical world is going to get a whole lot more interesting.
