
Monday, December 2, 2013

Why Do We Need Dynamic 4-Dimension Maps?


Navigation information for our maps comes in very different time-frames: changing in decades, years, days, hours, minutes, or continuously.
  • “Motionless” or Static map information includes objects that are not readily movable, such as roads, sidewalks, traffic lights (but not their current light color), signs, buildings, walls, and doors. Although these can change, the changes can be handled as more frequently changing items, such as a detour while a road is being repaired. Other changes are permanent, such as a building fire or demolition, and can be carried as temporary changes until the Motionless database is updated – this is one of the reasons for having change-management procedures that assure you have the latest information.
  • “Moveable” or Less Static map information includes objects that can be moved but are not in motion for some significant period of time: a stalled car, a pothole, furniture. This category can also include other less static information, such as weather- and traffic-related information: a detour, street lights are on, the street light at Main and 1st Streets is out, the road surface is wet and slick, and it’s raining lightly.
  • “Mobile” or Dynamic map information includes objects that are in motion or changing frequently, such as: vehicles, pedestrians, animals, and other moving objects. It can also include relatively frequently changing information, such as the current status of a traffic light.
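These three tiers could be sketched as data. The tier names, refresh intervals, and schema below are illustrative assumptions, not a real map format:

```python
from enum import Enum

class UpdateTier(Enum):
    STATIC = "static"            # roads, buildings: refreshed with map releases
    LESS_STATIC = "less_static"  # potholes, detours: refreshed in minutes/hours
    DYNAMIC = "dynamic"          # vehicles, pedestrians: refreshed continuously

# Rough refresh intervals in seconds for each tier (illustrative only).
REFRESH_SECONDS = {
    UpdateTier.STATIC: 180 * 24 * 3600,  # ~6 months, like a GPS map update
    UpdateTier.LESS_STATIC: 15 * 60,     # ~15 minutes
    UpdateTier.DYNAMIC: 0.1,             # ~10 Hz sensor cadence
}

def needs_refresh(tier: UpdateTier, age_seconds: float) -> bool:
    """Return True when a map item of this tier is stale."""
    return age_seconds > REFRESH_SECONDS[tier]
```

A navigation system could run such a check per item to decide what to re-request, rather than refreshing the whole map at one rate.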

The timeliness of information is very important for information retrieval and storage. For example, knowing the exact location of cars traveling down a street 10 minutes before you arrive is useless, although you may want to know how crowded the street is; even better, you would like a prediction of how crowded it will be when you plan to arrive in 10 minutes. Knowing there are 3 parking spaces available in the block you want is useful even if you won’t arrive for 10 minutes, and that information gets more useful the closer you get.

Thus we can usefully acquire the information on Motionless objects well before we plan to enter the area. In fact our GPS units probably have a lot of this information stored from months, or even years before, depending on how often you update your maps. And we use that information to help plan our trip, so we need it way ahead of leaving on the trip.

Maps also include varying levels of detail and accuracy, corresponding to zooming in and out. Thus if you are driving along a street you don’t need to know that the curb has a 1” protrusion unless you are driving too close to the curb or trying to get around some obstacle in the road. Similarly if you are walking down a sidewalk you don’t need to know the floor plan for the adjacent building, and you only need to know about the cars passing at the moment if you intend to cross the street or if one of them is headed over the curb toward you. So we need clever algorithms to determine what information you get and when you get it.
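One way to sketch such an algorithm is a level-of-detail filter: keep an object in the working map only if its detail level is warranted by its distance from the planned path. The detail levels and radius thresholds here are illustrative assumptions:

```python
import math

# Max distance (meters) at which each detail level still matters
# (illustrative numbers, not calibrated values).
DETAIL_RADIUS = {
    "coarse": 5000.0,  # road network
    "medium": 500.0,   # lanes, signals, adjacent traffic
    "fine": 50.0,      # curbs, small protrusions, obstacles
}

def relevant(obj_pos, path_points, detail: str) -> bool:
    """Keep the object if its nearest planned-path point is within
    the radius for its detail level."""
    dmin = min(math.dist(obj_pos, p) for p in path_points)
    return dmin <= DETAIL_RADIUS[detail]
```

So a 1-inch curb protrusion ("fine") 60 meters off your path would be filtered out, while a signal 60 meters away ("medium") would be kept.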

My GPS, and my computer, contain map information for the entire US, and I update it about every 6 months. This is generally sufficient for driving. However, to get more detail for paddling and hiking I need to buy more detailed topographical maps of portions of the country. My computer can store all the data, but my GPS can only store a portion of the data (I know, I could get a bigger memory card), so I have to choose where I am likely to go and load the appropriate detailed data into the GPS unit. None of this helps much with tide levels or seasonal water fluctuations (I have come very close to getting stuck a half mile from shore on a mud flat and only escaped by visually finding slightly deeper troughs – the mud was too deep to walk out, we really would have been stuck until the tide came in, and it was getting dark).

This linkage between scale and “accuracy” can be dangerous. For example, we were paddling along a lake shore and my GPS showed a straight shore to our left; however, there was clearly land blocking our path. When I zoomed in on the GPS, a small island appeared, and while it was only about 20’ across, it stuck out about 300’ into our path – fortunately we don’t paddle that fast, and the island was clearly visible (it wasn’t foggy :-)).

This suggests an innovation in map information and display: provide more detailed and accurate information along your intended path. An example of this is subway maps, which use a variable scale so you can see what is happening even though stations are tightly grouped downtown and sparser in the suburbs, so a map to a fixed scale would obscure the detail of the denser stations, where there is a lot more usage. However, we need to be cleverer than this, for example, a vehicle or other object might be on a collision path, but is off your intended path (I had a friend whose car was totaled when she started through an intersection with a green light and was hit broadside by a heavy truck tire that had come off a stopping truck approaching the intersection – more about this errant tire later).
There are other types of data that help in the Navigation process, and that is Plans and Projections of future actions, plus History of past actions:
  • Plans include the route we are planning on taking, or that our Autonomous Vehicle is planning for us. They also include reservations and other planning information; for example, knowing that I have tickets for a concert tonight at 8 pm will help in predicting my Autonomous Vehicle’s behavior – if we are late it will be looking for the fastest route and might even act a bit more aggressively than usual. Information such as planned road construction and detours should also be included. As we discussed above, the planned route helps define what information we need. This can also include other relevant plans, for example, the pattern of a traffic signal (e.g., 60 seconds green, 10 seconds yellow, 60 seconds red).
  • Projections attempt to predict the future to assist in navigation and planning. Forecasts of weather and traffic delays are widely used in planning our trips. For example, if I need to catch a plane from Newark airport to Chicago at 10 am Monday morning, a lot of information goes into the decision of when I should leave my house and which route to take. Is the flight on time or delayed? What is the weather projected to be, both on my route and on the route of the incoming flight? What are the predicted traffic delays along the possible routes as a function of time (from experience, leaving 15 minutes earlier in the morning can make the difference between a clear ride on the Garden State and an extra half-hour of traffic delays, especially on a Monday morning, and it matters whether this was a good beach weekend)? Note that the Plans described above will be a big help in making these projections. As more and more of the traffic files Plans, we get better and better projections of what is going to happen, and the system can even optimize across all the plans, for example, suggesting that some people leave a few minutes earlier or later, or take alternative routes. And it will help in scheduling traffic control, such as stoplights (if those still exist once all traffic is Autonomous Vehicles :-)). The History described below will also be useful in making projections.
  • History can include both details of individual trips, and statistical summaries of variables of interest from weather to traffic volumes and delays as functions of time of day, day of the week, season, and holidays. I’ve already talked about privacy issues, so the data on individual trips may be restricted, or have a cost to use the information, while the statistical summaries may not be restricted or have a cost for using them {74 November 11, 2013 – Privacy and the Autonomous Age}. History information is useful in making Projections not only for travel, but for example, planning changes in transportation and other infrastructure, market studies for store locations and all sorts of ventures.
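The interplay of Plans, Projections, and History could be sketched as follows: use historical delay statistics for a route and hour to project a trip, then pick the departure time with the shortest projected duration. All route names, hours, and delay figures below are made-up illustrations:

```python
# Mean historical delay in minutes, keyed by (route, departure hour).
# These numbers are illustrative assumptions, not real statistics.
HISTORY = {
    ("garden_state", 7): 5.0,
    ("garden_state", 8): 30.0,
    ("turnpike", 8): 20.0,
}

def project_duration(route: str, depart_hour: int, base_minutes: float) -> float:
    """Projected trip time: base driving time plus the historical
    delay for this route and hour (zero if we have no history)."""
    return base_minutes + HISTORY.get((route, depart_hour), 0.0)

def best_departure(route: str, base_minutes: float, hours=(7, 8)) -> int:
    """Pick the departure hour with the shortest projected trip."""
    return min(hours, key=lambda h: project_duration(route, h, base_minutes))
```

With these sample numbers, the 15-minutes-earlier intuition falls out directly: the 7 am departure projects far shorter than 8 am on the Garden State.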

Let's assume that our Autonomous Vehicle is entering an area with no information other than the maps we could download today for our GPS. The Autonomous Vehicle has only its sensors to rely on for navigation. This is analogous to driving an old-fashioned HCV (Human-Controlled Vehicle :-)), where you only have your senses to rely on. This isn’t a problem as long as the road ahead is clearly visible and there aren’t any surprises like potholes masquerading as puddles. However, if another vehicle comes around a blind corner (perhaps your line of sight is blocked by a fence or building), that can be fatal (I have a friend who was driving along when a vehicle towing a big travel trailer was approaching, and suddenly a motorcycle pulled out to pass the trailer – major injuries for the motorcycle rider).

So how do we solve a problem like limited Point-of-View? We need more sensors. I don’t think it’s likely that we will have sensors at every corner; who would pay for them and maintain them?

A much more plausible solution is that every Autonomous Vehicle would share its information with nearby Autonomous Vehicles. Each Autonomous Vehicle already has all the sensors and the computing hardware and software to process the data for navigation, and local networking communications gear is inexpensive, plus everyone gains from this sharing, so the incremental cost provides a direct benefit – everyone wins.

Back to our blind corner: our Autonomous Vehicle broadcasts that we are approaching the corner from the South with our location and speed, and the other Autonomous Vehicle broadcasts that it is approaching from the West, so both Autonomous Vehicles immediately recognize the impending problem and can take appropriate action. We have both Points-of-View. This information could be sent pretty simply.
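Such a broadcast really could be pretty simple. Here is a minimal sketch; the field names are my own assumptions, not a real vehicle-to-vehicle standard (production systems use defined message sets such as SAE J2735 Basic Safety Messages):

```python
import json
import time

def make_broadcast(vehicle_id, lat, lon, heading_deg, speed_mps):
    """Announce who we are, where we are, and where we're headed."""
    return json.dumps({
        "id": vehicle_id,
        "lat": lat, "lon": lon,
        "heading": heading_deg,  # degrees clockwise from North
        "speed": speed_mps,      # meters per second
        "time": time.time(),     # sender's timestamp
    })

def decode_broadcast(msg: str) -> dict:
    """Parse a neighbor's broadcast back into a dict."""
    return json.loads(msg)
```

Our vehicle approaching from the South and the other from the West each send one of these; each decodes the other's message and sees the collision course at the blind corner.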

Now suppose there are more Autonomous Vehicles nearby, this is New Jersey after all. We need a way to share the information from all those Points-of-View and make sense of all the information, and be sure that it is accurate.

Again we start with each Autonomous Vehicle having the same Motionless, static map information – we actually need to check that each Autonomous Vehicle is using the same information; if one is using old information we might have a serious situation, so we use version control.
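One simple form that version check could take (an illustrative sketch, not a proposed standard) is comparing a cryptographic fingerprint of the static map data before merging anyone's dynamic updates:

```python
import hashlib

def map_fingerprint(map_bytes: bytes) -> str:
    """A compact fingerprint of the static map a vehicle carries."""
    return hashlib.sha256(map_bytes).hexdigest()

def same_base_map(fp_a: str, fp_b: str) -> bool:
    """Vehicles should only merge each other's dynamic data when
    their static base maps match; otherwise one must update first."""
    return fp_a == fp_b
```

Exchanging a 64-character fingerprint is far cheaper than exchanging the maps themselves, and a mismatch immediately flags the vehicle with stale data.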

Consider the case where we are just entering the local area where the other Autonomous Vehicles have already formulated a consensus view of the local situation. We need an efficient mechanism to get all of the relevant information (assume we’ve defined what we need in level of detail and accuracy as described above, based on our planned route). A good way to represent that information is as a map showing the relevant Autonomous Vehicles and other features, such as potholes, pedestrians and deer, that are not included in the Motionless Map data or reflect changes in that data.

So one of the Autonomous Vehicles that has a strong signal with us can send us the information. The other Autonomous Vehicles nearby will check the transmission to be sure that it was sent correctly, and that our Autonomous Vehicle received it correctly – we are dealing with life and death situations, so we want everything to have lots of redundancy. (I’m not trying to be specific about the details of the protocols, rather just to give a feasible solution: so for example, several different Autonomous Vehicles might participate in sending the information to our Autonomous Vehicle, and the information might arrive in successive levels of increasing detail and decreasing relevance – we need to know that we are about to hit a deer before learning about a car stalled 1 mile down the road.)

While our Autonomous Vehicle has been receiving the map information, our sensors have been doing their job and formulating their view of the situation. If our results agree with the consensus map we received, our Autonomous Vehicle signals agreement. If there is some discrepancy, then we enter a process of conflict resolution: perhaps we have a unique Point-of-View so our Autonomous Vehicle’s information fills in gaps for the other Autonomous Vehicles, or perhaps our sensors are faulty, or perhaps someone is trying to spoof us, or … there are lots of cases to be resolved, and I won’t go into details here, but it’s critical that these get worked out, so we need an excellent management process.
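The agreement step could be sketched as simple majority voting over each vehicle's observation of a feature, with dissenters flagged for the conflict-resolution process. The structure below is an illustrative assumption; a real system would weight sensor quality, Point-of-View, and trust:

```python
from collections import Counter

def consensus(observations: dict):
    """observations maps vehicle_id -> what that vehicle observed
    (e.g., 'pothole' vs 'clear' at some location).
    Returns (majority view, list of dissenting vehicle ids)."""
    counts = Counter(observations.values())
    majority, _ = counts.most_common(1)[0]
    dissenters = [vid for vid, obs in observations.items() if obs != majority]
    return majority, dissenters
```

A dissenter isn't necessarily wrong: it may have a unique Point-of-View, a faulty sensor, or be a spoofing attempt, which is exactly why the resolution process matters.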

As we move along the process continues with Autonomous Vehicles providing new data, all the Autonomous Vehicles checking the data, each one calculating the changes to the map, and one of the Autonomous Vehicles sending out the updates, which are then checked by all the other Autonomous Vehicles.

One key way to reduce the amount of information being sent is to predict the position and velocity of moving objects by integrating their acceleration. So we only need to be notified when the acceleration changes (other than frequent checks that the calculations are matching reality, but because each Autonomous Vehicle is doing those calculations and matching with their observations, we only need to flag when there is a change).
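The idea above is essentially dead reckoning under constant acceleration: every vehicle runs the same prediction, so a broadcast is needed only when an object's acceleration actually changes. A one-axis sketch (the tolerance value is an illustrative assumption):

```python
def predict(pos, vel, acc, dt):
    """Constant-acceleration kinematics for one axis:
    x = x0 + v0*t + 0.5*a*t^2,  v = v0 + a*t."""
    new_pos = pos + vel * dt + 0.5 * acc * dt * dt
    new_vel = vel + acc * dt
    return new_pos, new_vel

def needs_update(observed_acc, broadcast_acc, tol=0.2):
    """Broadcast a correction only when the observed acceleration
    has drifted beyond tolerance from the last broadcast value."""
    return abs(observed_acc - broadcast_acc) > tol
```

Between broadcasts, every peer advances its copy of the map with `predict`; the network carries only the exceptions, not a continuous stream of positions.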

We progress with new Autonomous Vehicles joining the process as they enter the local area and other Autonomous Vehicles exiting our local network as they leave the area. And finally we exit the local area. (I’ll get back to Cloudlets in a bit, for now I’m just focusing on the map information process.)