How do Cloudlets work in C4 (pronounced “See
Forth”) – Cloudlet Communications, Computing & Control?
Let’s follow an Autonomous Vehicle through a scenario and
see how it operates and reacts to other Cloudlets.
Let’s start with our Autonomous Vehicle out of range of
local communications with any other Autonomous Objects. In this situation, the
Autonomous Vehicle forms a Cloudlet of its own, doing the same sorts of things
it would in a larger Cloudlet, but without any other Autonomous Objects to
communicate with inside the Cloudlet. Its information is limited to what its
sensors can detect, plus the static maps it has acquired, and possibly dynamic
local information from Autonomous Objects that previously traversed this local area.
(The date and time stamp indicates how out of date this
dynamic local information is; recall there aren’t any other Autonomous Objects
currently in the local area – I’ll discuss this further later in the scenario.)
Functioning as a Cloudlet, this Autonomous Vehicle creates
and stores a current 4-Dimension Local Dynamic Map based on the information it has
available. It sends out “Cloudlet here” signals and, as a Cloudlet, communicates with higher
levels in the communications hierarchy. For example, if it finds discrepancies between
the local map it creates and the static map information, it enters them into the
management protocols. Note that because there is only a single Autonomous Object in
the Cloudlet, this information will be treated as less reliable than a report
from a larger Cloudlet, which would have included multiple points-of-view and extensive
internal validation (for example, to account for variation in sensors) before
making such a report. Accuracy and redundancy are key features of the C4
system because of the critical uses of the information.
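The text doesn’t define the management-protocol message format, so here is a minimal sketch in Python of what a map-discrepancy report might look like. The class name, fields, and the saturating reliability formula are all assumptions of mine; the key idea from the text is that a single-observer Cloudlet’s report is weighted lower than one cross-checked by multiple points-of-view.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class MapDiscrepancyReport:
    """A Cloudlet's report that its Local Dynamic Map disagrees with the static map."""
    position: tuple        # (x, y, z) of the discrepancy, in some shared map frame
    description: str
    num_observers: int     # Autonomous Objects in the reporting Cloudlet
    timestamp: datetime

    def reliability(self) -> float:
        """Crude reliability score: saturates toward 1.0 as more
        independent observers confirm the same discrepancy."""
        return self.num_observers / (self.num_observers + 2)

# A lone-vehicle Cloudlet vs. an eight-vehicle Cloudlet reporting the same thing.
solo = MapDiscrepancyReport((120.0, 45.0, 0.0), "lane closure not on static map",
                            num_observers=1, timestamp=datetime.now(timezone.utc))
group = MapDiscrepancyReport((120.0, 45.0, 0.0), "lane closure not on static map",
                             num_observers=8, timestamp=datetime.now(timezone.utc))
assert solo.reliability() < group.reliability()
```

Higher levels in the hierarchy could then combine the score with the timestamp to decide how much weight the report carries.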
Now suppose this
Autonomous Vehicle detects another Cloudlet. The first step is to establish
communications Cloudlet-to-Cloudlet. The Cloudlets exchange significant
information about travel conditions (e.g., slippery road, accident ahead,
potholes). The Cloudlets then determine their relationship to each other – for
example, whether or not they should merge.
Let’s assume for this scenario that they will not merge.
There are many possible reasons for this: they could be on separate roads
(e.g., one is on an overpass, or going in a different direction on a divided
road). Note that if this were an A-Way, the A-Way would likely block local
transmissions between separate A-Ways and pass through any relevant information.
Assume our Autonomous Vehicle is approaching a Cloudlet going in the opposite direction on an undivided road. This approaching Cloudlet is a cluster of Autonomous Vehicles. Because the Cloudlets are passing each other rapidly, and will not have comparable points-of-view, they decide not to merge into a single Cloudlet, but to continue to function Cloudlet-to-Cloudlet.
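The text doesn’t give numeric criteria for the merge decision, but the two factors it names – passing each other rapidly, and not having comparable points-of-view – can be sketched as a simple predicate. The threshold values and function name below are illustrative assumptions, not part of C4:

```python
import math

# Hypothetical thresholds; C4 itself does not specify numeric values.
MAX_CLOSING_SPEED = 10.0              # m/s: faster than this, views won't stay comparable
MAX_HEADING_DIFF = math.radians(45)   # roughly "travelling together"

def should_merge(our_heading: float, their_heading: float,
                 closing_speed: float) -> bool:
    """Decide whether two Cloudlets in local range should merge.

    Cloudlets passing quickly in opposite directions keep separate
    identities and cooperate Cloudlet-to-Cloudlet instead.
    Headings are in radians; closing_speed in m/s.
    """
    # Smallest angle between the two headings, in [0, pi].
    heading_diff = abs((our_heading - their_heading + math.pi) % (2 * math.pi) - math.pi)
    return heading_diff <= MAX_HEADING_DIFF and closing_speed <= MAX_CLOSING_SPEED

# Oncoming traffic on an undivided road: opposite headings, high closing speed.
assert not should_merge(0.0, math.pi, closing_speed=30.0)
# A vehicle slowly catching up in the same direction would merge.
assert should_merge(0.0, math.radians(10), closing_speed=2.0)
```

A real implementation would also fold in the road-topology cases mentioned above (separate roads, overpasses, A-Ways), which this sketch omits.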
One important set of information to share is the Physical
Boundary of the Cloudlets (unless stated otherwise, this always includes 3-dimensional
information on position, velocity, and acceleration). For example, is one of
the Autonomous Vehicles in the approaching Cloudlet passing others, and thus in
the lane occupied by our intrepid Autonomous Vehicle? In
this case, the passing vehicle would have its Plan in place, showing when it
will complete the passing maneuver, and thus our Autonomous Vehicle can
determine whether there is danger of a collision. Or, is the Cloudlet
behaving abnormally? For example, if one of the vehicles were controlled by an
intoxicated driver, it might be weaving and even crossing the lane boundary,
causing our Autonomous Vehicle to report this anomalous behavior using the
Management Protocol, and potentially to take evasive action depending on
distance, speed, and acceleration. Hopefully we don’t have to worry about
intoxicated Autonomous Vehicles, because that’s one of the major advantages of
Autonomous Vehicles over human drivers (over 90% of crashes are due to human
error).
The corresponding Physical Boundary information sent to the other
Cloudlet about our Cloudlet would indicate that their Autonomous Vehicles
should not pass and need to maintain suitable clearance. For example, our
Autonomous Vehicle might be carrying a wide load, and thus other Autonomous
Vehicles should give it additional space.
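The collision check described above – comparing the passing vehicle’s shared Plan against our own motion – reduces to a timing comparison in the simplest constant-speed case. This is a sketch under that simplifying assumption; the function name, parameters, and safety margin are mine, not part of C4:

```python
def passing_is_safe(gap_m: float, our_speed: float, their_speed: float,
                    plan_complete_s: float, margin_s: float = 3.0) -> bool:
    """Check whether an oncoming vehicle's passing maneuver, per its
    shared Plan, finishes before the two vehicles would meet head-on.

    gap_m: current separation along the road (m).
    our_speed / their_speed: speeds toward each other (m/s), assumed constant.
    plan_complete_s: time (s) at which its Plan says it returns to its own lane.
    """
    time_to_meet = gap_m / (our_speed + their_speed)  # closing from both ends
    return plan_complete_s + margin_s < time_to_meet

# 400 m apart, both at 20 m/s: the vehicles meet in 10 s. A maneuver that
# completes in 4 s leaves a margin; one that needs 9 s does not.
assert passing_is_safe(400.0, 20.0, 20.0, plan_complete_s=4.0)
assert not passing_is_safe(400.0, 20.0, 20.0, plan_complete_s=9.0)
```

With the full Physical Boundary data (position, velocity, and acceleration), the same check would integrate the actual trajectories rather than assume constant speeds.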
When sharing map information, maps are coded to indicate
whether they are the most recent version. Today, as an airplane approaches
an airport, it gathers the latest weather and other airport information, and
when opening communication with the tower it first gives the code of that
information, e.g., ZYING, so the tower knows it has the latest information.
Thus, if the approaching Cloudlet has detected a change in the static map
information, the difference will be immediately clear and our Cloudlet can request the new
information.
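As a sketch, the version-code exchange at handshake could work as follows. I’m assuming monotonically increasing version numbers purely for illustration; the text does not specify the actual coding scheme:

```python
def handshake_map_versions(our_version: int, their_version: int) -> str:
    """Exchange map-version codes at Cloudlet handshake (analogous to an
    aircraft quoting its information code to the tower). Returns what,
    if anything, this side should do next. Versions are assumed to
    increase monotonically with each map update."""
    if our_version == their_version:
        return "in-sync"               # both hold the latest map; nothing to send
    if our_version < their_version:
        return "request-their-update"  # they have detected a newer change
    return "offer-our-update"          # we hold the newer map

assert handshake_map_versions(41, 41) == "in-sync"
assert handshake_map_versions(40, 41) == "request-their-update"
assert handshake_map_versions(42, 41) == "offer-our-update"
```

The point of exchanging compact codes first is that the (potentially large) map data only moves when the codes disagree.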
Note that we do not need to know information about the individual
components of other Cloudlets, and vice versa; that’s one of the key
differences between a Cloudlet and an Autonomous Vehicle, even if the Cloudlet
contains only one Autonomous Vehicle. This has implications for privacy, which
I will discuss later (I know I’m putting off a lot until later, but life is
complex).
Our Autonomous Vehicle, and thus our Cloudlet, has indicated our travel plan – for example, we may be turning right at the next intersection. This helps determine how much of the information in the other Cloudlet’s Local 4-D Map it needs to share. In this case, most of the information beyond the next intersection is irrelevant, although the shared portion would still include information on Cloudlets that may reach the intersection before we do, and thus remain relevant.
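The filtering rule in the paragraph above can be sketched directly: keep everything up to the intersection where we turn, and beyond it keep only objects that can reach that intersection before we do. The entry schema below is hypothetical; C4 does not fix a wire format:

```python
def relevant_map_entries(entries: list, intersection_pos_m: float,
                         our_eta_s: float) -> list:
    """Filter another Cloudlet's Local 4-D Map entries against our travel
    plan: we turn at the intersection, so entries beyond it are dropped
    unless they describe something that reaches the intersection first.

    Each entry is a dict: {'pos_m': distance along the road,
    'eta_s': time at which the object reaches the intersection,
    or None if it never does}.
    """
    keep = []
    for e in entries:
        if e['pos_m'] <= intersection_pos_m:
            keep.append(e)   # before our turn: always relevant
        elif e['eta_s'] is not None and e['eta_s'] < our_eta_s:
            keep.append(e)   # beyond the turn, but arrives there before us
    return keep

entries = [
    {'pos_m': 50.0,  'eta_s': None},  # pothole before the intersection
    {'pos_m': 300.0, 'eta_s': 8.0},   # Cloudlet beyond it, arriving in 8 s
    {'pos_m': 500.0, 'eta_s': None},  # static detail well past our turn
]
assert relevant_map_entries(entries, intersection_pos_m=200.0, our_eta_s=12.0) == entries[:2]
```

Sharing only this filtered subset keeps Cloudlet-to-Cloudlet traffic proportional to what the receiver can actually use.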
If there are discrepancies between the information from the
two Cloudlets, management protocols exist to help discover the cause of
the discrepancy and resolve it, or to report it to higher levels in the
management protocol if it cannot be resolved. For example, our Cloudlet might
sense that there is something ahead that the other Cloudlet does not sense.
In this case, because the two Cloudlets are separate, there is not as much
opportunity for overlapping points-of-view, mutual calibration of sensors, etc.,
so it may simply be that the offending object is blocked from the other
Cloudlet’s point-of-view. I’m reminded of the scene in Goldfinger where
James Bond is driving down an alley and sees headlights approaching; he fires
his Aston Martin’s machine guns, but to no effect; at the last minute he sees
that he is approaching a mirror – sensors don’t always give the full picture.
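The resolve-or-escalate flow described above can be summarized as a small decision function. The outcome labels and the occlusion check are illustrative assumptions; the text only says that unexplained disagreements are reported upward:

```python
def resolve_discrepancy(we_detect: bool, they_detect: bool,
                        their_view_blocked: bool) -> str:
    """Sketch of the management-protocol flow when two separate Cloudlets
    disagree about an object ahead. Separate Cloudlets lack the overlapping
    points-of-view a merged Cloudlet would have, so occlusion is checked
    before escalating to higher levels."""
    if we_detect == they_detect:
        return "agree"                    # no discrepancy to resolve
    if we_detect and their_view_blocked:
        return "explained-by-occlusion"   # the object is hidden from their view
    return "escalate"                     # unresolved locally: report upward

assert resolve_discrepancy(True, True, False) == "agree"
assert resolve_discrepancy(True, False, True) == "explained-by-occlusion"
assert resolve_discrepancy(True, False, False) == "escalate"
```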