Wednesday, December 16, 2015

Take A Virtual Walk Through The Christmas Lights On Temple Square

Every year at Christmastime, the folks at Temple Square in Salt Lake City, Utah put on a brilliant display of Christmas lights.  No one knows exactly how many lights there are, but some people estimate it is just under a million. In addition to the lights, there is a giant nativity scene, a bunch of smaller nativity scenes from around the world, and various groups singing Christmas songs.

I figured this would be the perfect type of event to create a Walkabout virtual walk-through for.  This Walkabout consists of over 130 full 360 degree panoramas linked together so that you can "walk" almost anywhere you want across the 10 acres of Temple Square.

Click on the link below to start your Walkabout through this wonderful display of lights.

START YOUR WALKABOUT THROUGH THE TEMPLE SQUARE LIGHTS

A Few Instructions For First Time Users


You can click and drag the images to look in any direction you want, including up or down.  Little green spheres will appear to show which ways you can walk.


If you are on a mobile device, look for the compass-mode icon.  If you tap on it, you will go into compass mode.  This will allow you to look around just by moving your phone in whatever direction you want to look.


If you own a Google Cardboard virtual reality headset, look for the Google Cardboard icon. Click it to go into virtual reality mode and have a truly immersive viewing experience.

Want to see more cool stuff? Check us out on Facebook



Wednesday, December 2, 2015

Using Computer Vision Based Positional Tracking And A 360 Camera To Make a Freely Navigable Room

I have a vision that one day I will be able to take a 360 camera, quickly walk through a room, and then, with no additional input from me, have that footage automatically converted into a virtual reconstruction of the room that you could walk around in virtual reality.  I want to make the process so easy that anyone, even the least tech-savvy person, could do it.

So today's experiment is a first pass at that.  The way I figured I'd approach this: if I can assess my position in the room at all times while walking through it with a 360 video camera rolling, then I can map frames from the video to floor coordinates, and then as the user "walks" around the room I can show the appropriate frame according to the user's floor coordinates.
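To make that mapping concrete, here is a minimal sketch of the frame lookup in Python (the names and data layout are just illustrative, not my actual viewer code): once every frame has an estimated floor position, the viewer simply shows whichever frame was captured closest to where the user is standing.

import math

# frame_positions: list of (frame_index, x, y) tuples produced by the
# positional-tracking step, where x and y are floor coordinates in the room.
def nearest_frame(frame_positions, user_x, user_y):
    # Show whichever frame was captured closest to the user's current spot.
    best_index, best_dist = None, float("inf")
    for frame_index, x, y in frame_positions:
        dist = math.hypot(x - user_x, y - user_y)
        if dist < best_dist:
            best_index, best_dist = frame_index, dist
    return best_index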

Ideally, I would have some sort of high-tech positional tracking like the Vive Lighthouse or the Oculus positional tracker, although requiring those would probably contradict my goal of making this accessible to anyone who is not tech savvy.  So since I don't have either of those, I instead coded up a rudimentary color-based object tracker, and using some geometry and the orientation of the camera, I made a Python script that automatically calculates my position in the room at each frame (there is a rough sketch of the tracker right after the steps below).  So then the process becomes this:
  1. Walk through room with 360 video camera
  2. Load video into my software and in the first frame mark some object that can be seen everywhere in the room
  3. Let software run and reconstruct floor positions
  4. Done
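The tracker itself is conceptually simple: threshold the marked color in each frame, take the centroid of the resulting blob, and convert the centroid's column into a horizontal bearing (in an equirectangular 360 frame, the column maps directly to an angle).  Here is a rough sketch of that idea using OpenCV; the HSV range and the equirectangular math are assumptions for illustration, not the exact values from my script.

import cv2
import numpy as np

def marker_bearing(frame_bgr, lower_hsv=(0, 120, 80), upper_hsv=(10, 255, 255)):
    # Return the horizontal bearing (in degrees) of the color-marked object
    # in an equirectangular 360 frame, or None if the marker is not visible.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    moments = cv2.moments(mask)
    if moments["m00"] == 0:
        return None
    cx = moments["m10"] / moments["m00"]   # centroid column of the marker
    width = frame_bgr.shape[1]
    return (cx / width) * 360.0 - 180.0    # column -> bearing in degrees

With the camera's orientation known for each frame, bearings to the fixed marker taken from successive frames can be triangulated with some basic trigonometry to estimate where on the floor the camera was standing.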
The red highlight marks the window I am tracking to calculate my position.
Follow the link below to see the end result.  Use the arrow keys to look around and the W key to walk forward (or, if you are on mobile, just swipe around and tap the screen to walk forward).


As you can see, it worked okay but not perfectly.  There are still a number of issues to be solved.  For example, it lets you pass right through the couch.  Sometimes the position estimation is a little off, so you get some weird jumps.  You will also notice that you can't go into the dining room; I tried to include it, but the hanging light blocked the marker, which messed up the positional tracking.  You may also notice that one of the windows is colored red.  That window is what I set as the object to be tracked.
There are a lot of things I could do to make the tracking more robust, and with some more work I am confident I could make it so the user doesn't even have to mark an initial tracking object at the beginning.  I am experimenting with that now using OpenCV and feature detection; hopefully my next blog update will have some positive results from that.  Also, in the future I hope to take all this positional information and all these still frames from various positions and use them to automatically generate a 3D model.
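For anyone curious, that feature-detection experiment looks roughly like this (a sketch of the direction, not finished results): detect and match ORB features between consecutive frames with OpenCV, and use those matched points, instead of a hand-marked object, to reason about how the camera moved.

import cv2

# ORB feature detector plus brute-force Hamming matching between frames.
orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_frames(prev_gray, curr_gray):
    # Detect features in two consecutive grayscale frames and return the
    # strongest matches; their displacement hints at the camera's motion.
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return []
    matches = matcher.match(des1, des2)
    return sorted(matches, key=lambda m: m.distance)[:100]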