Wednesday, December 2, 2015

Using Computer Vision Based Positional Tracking And A 360 Camera To Make a Freely Navigable Room

I have a vision that one day I will be able to take a 360 camera, quickly walk through a room, and then, with no additional input from me, have that footage automatically converted into a virtual reconstruction of the room that you could walk around in virtual reality.  I want to make the process so easy that anyone, even the most non-tech-savvy person, could do it.

So today's experiment is a first pass at that.  The way I figured I'd approach this is that if I can assess my position in the room at all times while walking through it with a 360 video camera running, then I can map frames from the video to floor coordinates, and then as the user "walks" around the room I can show the appropriate frame according to the user's floor coordinates.
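The frame-to-floor-coordinate lookup can be sketched in a few lines.  This is just a minimal pure-Python illustration of the idea, not my actual script; the sample coordinates, frame indices, and the `nearest_frame` helper are all hypothetical:

```python
import math

# Hypothetical mapping built during the walkthrough: at each sampled
# moment, record the estimated floor position and the video frame index.
frame_positions = [
    (0.0, 0.0, 0),    # (x_meters, y_meters, frame_index)
    (0.5, 0.0, 30),
    (1.0, 0.2, 60),
    (1.5, 0.6, 90),
]

def nearest_frame(x, y, samples):
    """Return the frame index recorded closest to the viewer's (x, y)."""
    return min(samples, key=lambda s: math.hypot(s[0] - x, s[1] - y))[2]

# As the viewer "walks", show whichever frame was captured nearest to them.
print(nearest_frame(0.9, 0.1, frame_positions))  # -> 60
```

A denser walkthrough gives a denser sample set, which is what makes the playback feel like smooth movement rather than jumping between photos.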

Ideally, I would have some sort of high tech positional tracking like the Vive Lighthouse or the Oculus positional tracker.  But requiring those would probably contradict my goal of making this accessible to anyone who is not tech savvy.  So since I don't have either of those, I instead coded up a rudimentary color-based object tracker, and using some geometry and the orientation of the camera, I made a Python script that automatically calculates my position in the room at each frame.  So then the process becomes this:
  1. Walk through room with 360 video camera
  2. Load video into my software and in the first frame mark some object that can be seen everywhere in the room
  3. Let software run and reconstruct floor positions
  4. Done
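The color-based tracking behind step 3 can be sketched like this.  A real implementation would run on actual video frames (OpenCV's `cv2.inRange` and `cv2.moments` do the heavy lifting there); this pure-Python version just shows the idea, and the color-similarity threshold is an assumption:

```python
# Minimal sketch of color-based marker tracking on an equirectangular
# 360 frame, represented here as a 2D list of (r, g, b) tuples.

def track_marker(frame, target_rgb, threshold=60):
    """Find the centroid (col, row) of pixels close to target_rgb,
    or None if no pixel matches (e.g. the marker is occluded)."""
    xs, ys = [], []
    tr, tg, tb = target_rgb
    for row_i, row in enumerate(frame):
        for col_i, (r, g, b) in enumerate(row):
            # Manhattan distance in RGB as a cheap color-similarity test
            if abs(r - tr) + abs(g - tg) + abs(b - tb) < threshold:
                xs.append(col_i)
                ys.append(row_i)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def azimuth_deg(col, frame_width):
    """In an equirectangular frame, a pixel column maps linearly to a
    horizontal viewing angle: columns 0..width -> -180..+180 degrees."""
    return (col / frame_width) * 360.0 - 180.0

# Tiny 4x8 synthetic frame: black everywhere except a red "marker" blob.
W, H = 8, 4
frame = [[(0, 0, 0)] * W for _ in range(H)]
frame[1][5] = frame[2][5] = frame[1][6] = frame[2][6] = (255, 0, 0)

centroid = track_marker(frame, target_rgb=(255, 0, 0))
print(centroid)                     # (5.5, 1.5)
print(azimuth_deg(centroid[0], W))  # 67.5 degrees right of center
```

Losing the marker (the `None` case) is exactly what happened with the hanging light in the dining room.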
The red is the window I am tracking to calculate my position
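One way the geometry part can work, sketched very roughly: if the tracked marker's floor position and its height relative to the camera are known, the marker's elevation angle in the frame gives range, and its azimuth gives direction.  The numbers below are illustrative, not measured, and I'm assuming the azimuth is already in world coordinates (i.e. camera yaw has been compensated for using the camera's orientation):

```python
import math

MARKER_POS = (0.0, 0.0)  # marker's floor position (meters), assumed known
MARKER_HEIGHT = 1.2      # marker height above the camera lens (meters), assumed

def camera_floor_position(azimuth_deg, elevation_deg):
    """Estimate the camera's floor (x, y) from the marker's viewing angles.
    Range from elevation: tan(elevation) = height / horizontal_distance."""
    d = MARKER_HEIGHT / math.tan(math.radians(elevation_deg))
    az = math.radians(azimuth_deg)
    # The camera sits distance d from the marker, opposite the viewing direction.
    x = MARKER_POS[0] - d * math.sin(az)
    y = MARKER_POS[1] - d * math.cos(az)
    return (x, y)

# Marker seen straight ahead (azimuth 0) at 45 degrees up: camera is 1.2 m away.
print(camera_floor_position(0.0, 45.0))  # approximately (0.0, -1.2)
```

This is also why an occluded marker is fatal: with only one tracked object, there is no redundancy in the position estimate.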
Follow the link below to see the end result.  Use the arrow keys to look around and the W key to walk forward.  (or if you are on mobile, just swipe around and tap on the screen to walk forward)


As you can see, it worked okay but not perfectly.  There are still a number of issues to be solved.  For example, it lets you pass right through the couch.  And sometimes the position estimation is a little off, so you get some weird jumps.  You will also notice that you can't go into the dining room.  I tried to include it, but the hanging light blocked the marker, which messed up the positional tracking.  You may also notice that one of the windows is colored red.  This window is what I set as the object to be tracked.
There are a lot of things I could do to make the tracking more robust, and with some more work I am confident I could make it so the user doesn't even have to mark an initial tracking object at the beginning.  I am experimenting with that now using OpenCV and feature detection.  Hopefully my next blog update will have some positive results from that.  Also, in the future I hope to take all this positional information and all these still frames from various positions and use them to automatically generate a 3D model.


2 comments:

  1. Very cool first steps! Are you open sourcing your software?

    1. I may, I will have to see how far I can take it and how useful it is. Right now it is pretty simple and probably not worth open sourcing, but in the future it might be worth sharing the code
