
Tuesday, October 22, 2013

A Sample Trip: Existing Maps vs. 4-Dimension Global Maps


To illustrate some of the challenges for existing maps, and how 4-Dimension Global Maps would address these challenges, let’s take a sample trip.

My friend is legally blind. That doesn’t mean he can’t walk around and paddle with us, but it does mean that we sometimes have to tell him which way to go, warn him to watch out for rocks or roots, and help him choose and serve food at potluck suppers. It also doesn’t mean he is helpless: he has published several scholarly books, still writes erudite articles and funny poems, and is an engaging conversationalist. But it does mean there is no way he can drive!

So let’s see how he would do with current maps (I’ll use Google Maps, but others are similar) in his Autonomous Vehicle on a trip from Saranac Lake to the Mirror Lake Inn in Lake Placid, NY.

The first part of the trip is easy: just follow Route 86 toward Lake Placid.
We’ll assume, for the time being, that the Autonomous Vehicle can figure out details like stop signs and traffic lights. And there is nothing particularly challenging on this road, although there was a very long detour for about six months that wasn’t well marked at two of the turns, so you needed to know where you were going, or guess! That’s an example of the difference between a static map and a dynamic map, and between a 3-D map and a 4-D map.
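To make that distinction concrete, here is a minimal sketch in Python of how a map could layer time-stamped conditions on top of a static road description. This is purely my illustration: the class names, the segment length, and the detour dates are all invented, not taken from any real mapping API.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Annotation:
    """A time-bounded condition on a road segment, e.g. a detour."""
    kind: str           # "detour", "closure", "construction", ...
    starts: datetime
    ends: datetime
    note: str = ""

@dataclass
class RoadSegment:
    """The static description of a stretch of road."""
    name: str
    length_m: float
    annotations: list = field(default_factory=list)  # the dynamic layer

def active_annotations(segment, when):
    """The 4-D question: what is true of this road *at this time*?"""
    return [a for a in segment.annotations if a.starts <= when <= a.ends]

# A purely static map just says "Route 86, drivable."
route_86 = RoadSegment("Route 86", 14_500.0)

# The dynamic layer records the six-month detour, so a vehicle that
# consults the map during that window knows to reroute in advance.
route_86.annotations.append(Annotation(
    kind="detour",
    starts=datetime(2013, 4, 1),   # illustrative dates
    ends=datetime(2013, 10, 1),
    note="Two of the detour turns are poorly marked.",
))

print(active_annotations(route_86, datetime(2013, 6, 15)))  # detour active
print(active_annotations(route_86, datetime(2013, 11, 1)))  # back to normal
```

A static map only ever answers the first kind of query; the dynamic layer is what would let the vehicle avoid guessing at those poorly marked turns.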

Then we turn left onto Mirror Lake Drive.

The map says we arrive at Mirror Lake Inn. But that’s not the end of the driving part, unless you want a fairly long walk up a steep hill and a search for the entrance!

The map shows some buildings, but no roads on the property. Of course we can switch to satellite view and get more information.
Hmm, I see a triangle and some roads, but not where to go.

The Autonomous Vehicle won’t know where to go to find the entrance, and my friend can’t read signs until he is almost on top of them. The entrance sign is very discreet and tricky to find, and what you would think is the main entrance isn’t.

And of course there are details like where the Autonomous Vehicle should park: there are parking restrictions depending on whether you are a guest, and so on. If the front lot is full, as it is during the busy season, then where does it park?
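As a thought experiment, here is what a property-level entry in the kind of map I’m proposing might carry. Everything below, the names, the coordinates, and the parking rules, is invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ParkingLot:
    name: str
    capacity: int
    occupied: int
    allowed: set   # who may park here: {"guest", "staff", "visitor"}

@dataclass
class Property:
    name: str
    entrance: tuple   # (lat, lon) of the actual entrance door
    dropoff: tuple    # where a vehicle should stop for passengers
    lots: list = field(default_factory=list)

def choose_lot(prop, role):
    """Return the first permitted lot with space, or None."""
    for lot in prop.lots:
        if role in lot.allowed and lot.occupied < lot.capacity:
            return lot
    return None

inn = Property(
    name="Mirror Lake Inn",
    entrance=(44.2832, -73.9788),   # illustrative coordinates
    dropoff=(44.2833, -73.9790),
    lots=[
        ParkingLot("front", 20, 20, {"guest"}),               # full
        ParkingLot("overflow", 40, 12, {"guest", "visitor"}),
    ],
)

lot = choose_lot(inn, role="guest")
print(lot.name if lot else "no parking available")   # -> overflow
```

With an entry like this, the vehicle could stop at the drop-off point, let my friend out at the real entrance, and then pick a lot it is actually allowed to use.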

Yahoo Maps gives a bit more detail in the satellite view, clearly taken at a different time, but shows the entrance at the other end of the building.


Clearly we need more information. Let’s take a look inside a building. I don’t have the detailed layout of the Inn, so this is an office, but it will illustrate the challenges.

This floor plan includes all the furniture, so it is more detailed than a typical office layout. It could be a static map, that is, one showing the furniture in its nominal places, or it might be a dynamic map showing the current position of the furniture. As you can see, navigating to the office chair requires going around the sofa and armchair to the right and then through a narrow opening.
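Once the floor plan and furniture are known, getting to the chair is a standard search over the free space. The toy sketch below, my own illustration rather than any vehicle’s actual planner, treats the room as an occupancy grid and finds a route around the blocked cells with a breadth-first search; without the map, the vehicle is reduced to the trial and error described next.

```python
from collections import deque

# 0 = free, 1 = blocked (sofa, armchair, walls). A toy layout in
# which the direct path is blocked, so the route must swing right
# and come back through a narrow opening, as in the floor plan.
GRID = [
    [0, 0, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]

def find_path(grid, start, goal):
    """Breadth-first search over free cells; returns a list of cells."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # no route exists

print(find_path(GRID, start=(0, 0), goal=(4, 0)))
```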

Now suppose we didn’t have the floor plan with the furniture layout: could an Autonomous Vehicle operating on just its own sensors get to the desk on the left?
This view from the entrance isn’t very helpful, so let’s move forward and enter the office.
Hmm, our Autonomous Vehicle sees that the path directly forward is blocked, but should it go right or left? Or can it even get there from here? It’s going to take some trial and error to figure out how to get there without the floor plan view, which is not really acceptable.

Now let’s add some dynamic elements and see some of the other challenges waiting for our poor Autonomous Vehicle.

Our intrepid Autonomous Vehicle is coming down the hall to enter the office, as shown by the blue arrow. Unfortunately, a person, represented in green, is just coming down the hall from the left, and a golden retriever is rushing to greet her from the right. Will they collide? Should the Autonomous Vehicle speed up, slow down, or do something else? The Autonomous Vehicle’s sensors alone can’t yet pick up our green person or the golden retriever because of the walls in the way, so we may be headed for a collision.

You might ask how the proposed 4-Dimension Global Map would handle this situation.

Before the Autonomous Vehicle even entered the building, it would have obtained a copy of the local 4-D map showing the static view of the floor plan (2-D) and some information about the doors, furniture, and other features (3-D). To match the images from the Autonomous Vehicle’s sensors against the 3-D map, we need to know several things: how tall the various items are, what color they are, and so on. For example, there is a big difference between navigating around a sofa and navigating over a rug (later I’ll describe how we convey information like object identification, colors, etc.). This might be enough information if there were no other objects and no motion.
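As a sketch of the kind of per-object detail I have in mind (the field names are my own invention, not a proposed standard), each entry in the static layer might carry just enough physical description to be matched against what the cameras see:

```python
from dataclasses import dataclass

@dataclass
class StaticObject:
    """One entry in the static (2-D + 3-D) layer of the local map."""
    object_id: str
    kind: str          # "sofa", "rug", "desk", "door", ...
    x_m: float         # position on the floor plan (the 2-D part)
    y_m: float
    height_m: float    # the 3-D part the sensors can verify
    color: str         # helps match camera imagery to the map
    traversable: bool  # a rug can be driven over; a sofa cannot

office = [
    StaticObject("sofa-1", "sofa", 2.0, 1.5, 0.90, "gray", False),
    StaticObject("rug-1", "rug", 3.0, 2.0, 0.02, "red", True),
]

# Matching rule: anything tall and non-traversable is an obstacle.
obstacles = [o for o in office if not o.traversable]
print([o.object_id for o in obstacles])   # -> ['sofa-1']
```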

Now for the 4-D part: we need the current locations of the furniture. For example, the desk chairs may be left almost anywhere, and you need their orientation if you are going to sit down in one. And we have people, pets, and other moving objects to consider. Our 4-D map would include information about each object: its type, identity, current location, velocity, and acceleration. This allows the Autonomous Vehicle to decide what to do: slow down, stop, back up, or prepare for a hug from Jane or Goldie jumping up on you. ☺
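Here is a hedged sketch of how that decision might be made from such records. Jane, Goldie, and every number below are illustrative; the prediction is just the constant-acceleration formula p + v·t + ½·a·t².

```python
from dataclasses import dataclass

@dataclass
class DynamicObject:
    """One entry in the dynamic (4-D) layer of the local map."""
    identity: str
    kind: str                # "person", "dog", "vehicle", ...
    pos: tuple               # (x, y) in meters
    vel: tuple               # (vx, vy) in m/s
    acc: tuple = (0.0, 0.0)  # (ax, ay) in m/s^2

def position_at(obj, t):
    """Predict position t seconds ahead: p + v*t + a*t^2/2."""
    return tuple(p + v * t + 0.5 * a * t * t
                 for p, v, a in zip(obj.pos, obj.vel, obj.acc))

def too_close(a, b, t, radius=1.0):
    ax, ay = position_at(a, t)
    bx, by = position_at(b, t)
    return (ax - bx) ** 2 + (ay - by) ** 2 < radius ** 2

av = DynamicObject("av-1", "vehicle", pos=(0.0, 0.0), vel=(1.5, 0.0))
jane = DynamicObject("Jane", "person", pos=(3.0, 4.0), vel=(0.0, -1.2))
goldie = DynamicObject("Goldie", "dog", pos=(3.0, -5.0), vel=(0.0, 2.5))

# Check the next three seconds in small steps; if any predicted
# conflict appears, the vehicle slows down rather than pressing on.
horizon = [t / 10 for t in range(1, 31)]
if any(too_close(av, other, t) for t in horizon for other in (jane, goldie)):
    print("slow down and yield")   # Goldie crosses our path at t = 2 s
else:
    print("proceed")
```

Because the walls hide Jane and Goldie from the vehicle’s own sensors, this only works if their positions and velocities come from the shared map rather than from the vehicle itself, which is exactly the point of making the map global and 4-dimensional.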


