
On the Road and Off the Map: Maps for Self-Driving Cars in an Over-Paved World

Self-driving or autonomous cars promise a change in patterns of mobility more radical than any previous change in transportation.  Already, cars are able to signal when they approach the edges of traffic lanes, alerting their human drivers to impending danger.  The promise of self-driving cars has generated increasing optimism in the United States and Japan, as the next generation of vehicles in a culture ready to embrace the new, perhaps because they offer the very possibility of constant motion in a country of speed.  But by removing the routes of human motion, and the ways humans move through road systems, from direct intelligence, the maps being designed for autonomous vehicles to navigate the roadways of America and beyond suggest a new nature of space, as much as of transportation or transit:  and the maps for self-driving cars, while not designed for human readers, suggest a scary landscape rarely open to surprises and eerily empty of any sign of human habitation.  The maps for autonomous vehicles are, commonsensically, absent of human presence in the automotive landscape they reveal–and that has grown up around them.  They are the creation of an over-paved world.

For in promising to synthesize, compress, and make available amazing amounts of spatial information and data, sufficient to process the roadways that increasingly clog much of the inhabited world, they are maps for the age of the Anthropocene, when ever-greater spaces are being paved.  Yet even after the arrival of promising “autonomous vehicles” from Tesla, which has introduced an Autopilot feature able to maneuver on well-marked highways, and tests of urban driving by Uber, General Motors, and of course Google, relying only on sensors to navigate space remains of limited safety in many areas:  vehicles are forced to integrate Lidar, mid- and low-range radar, camera-based sensors, and road maps in real time, and have difficulty calibrating for road conditions and weather with the efficiency human drivers do.   The absence of a clear road map for integrating these sources is paralleled by the inability to synthesize contingent information in the maps themselves, which in their absence of selectivity offer oddly hyper-rich levels of information.
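To make the sensor-fusion problem described above a little more concrete, the sketch below shows, in rough outline, how range estimates from lidar, radar, a camera, and a stored road map might be combined into a single figure.  The sensor names, weights, and function here are illustrative assumptions for this post, not any manufacturer’s actual method; real driving stacks rely on far more elaborate probabilistic filtering.

# A minimal, hypothetical sketch of sensor fusion: combining lidar, radar,
# and camera range estimates with a prior drawn from a stored road map.
# The weights and sensor list are illustrative assumptions only.

def fuse_range_estimates(lidar_m, radar_m, camera_m, map_prior_m,
                         weights=(0.4, 0.3, 0.2, 0.1)):
    """Return one distance-to-obstacle estimate (in meters) as a
    confidence-weighted average of whichever sources are available."""
    readings = (lidar_m, radar_m, camera_m, map_prior_m)
    weighted_sum = 0.0
    total_weight = 0.0
    for value, weight in zip(readings, weights):
        if value is None:          # a sensor may drop out in rain, glare, etc.
            continue
        weighted_sum += weight * value
        total_weight += weight
    if total_weight == 0.0:
        raise ValueError("no usable sensor data")
    return weighted_sum / total_weight

# Example: the camera is blinded by weather, so its reading is missing.
print(fuse_range_estimates(lidar_m=41.8, radar_m=43.0,
                           camera_m=None, map_prior_m=40.0))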

The notion of processing such comprehensive maps was far away when DARPA sent out a call in 2003 inviting engineers to design self-driving cars that could navigate a one-hundred-and-forty-two-mile-long course across the desert near Barstow, CA, to Primm, Nevada, without giving them a sense of its coordinates, on a race-course filled with gullies, turns, rocks, switchbacks and obstacles–from train tracks to cacti–in the hope of integrating GPS and sensors to create a car able to navigate space with as complete an image of road conditions as possible.  If the rugged, rigged-out vehicles recalled the first run of a Mad Max film in their outsized, paramilitary appearance, designed as if to master landscape of any sort, they were so overfitted with what seemed futuristic sensors tantamount to signage–

 

[Images: DARPA Grand Challenge vehicles and event poster]

–as to seem to wrestle with the fundamental problem of mastering spatial information that the new generation of autonomous vehicles has placed front and center.

DARPA’s top-down attempt to stage a race of autonomous vehicles was intended to keep soldiers out of harm’s way in a military context.  But the attempt to generate a new sort of military vehicle raised compelling questions about integrating a range of spatial signs into an apparatus of machine vision, laser range-finding data, and satellite imagery, and suffered from an inability to take in environmental information–no car completed the course as it was staged, and the vehicle that traveled furthest went only seven and a half miles.  Even on a course located in the desert–still the preferred site for testing most self-driving cars, given the lack of weather and the better-kept road surfaces that minimize unpredicted external influences–the relation of car to world was less easily negotiated than many had thought.

While the DARPA Grand Challenge was not immediately successful, it set the basis for future collaboration between machine-learning and automotive companies around notions of remote sensing.  It placed front and center the problem that remains:  how to establish more than a one-dimensional picture of the road ahead, so that the car can navigate it most easily.  And by 2007 the Urban Challenge invited autonomous vehicles to navigate the streets of an urban environment in Victorville, Calif., against moving traffic and obstacles and following traffic regulations, in ways that lifted a corner on the mappability of the future of driverless cars.

Although a network of readable roads, equipped with recognizable signage, remains the most profitable starting point for the development of self-driving cars, the machine-readable road maps eerily naturalize the parameters of the roads in their content, and absent humans from their surface.  Despite the recourse to satellite photography and attempts to benefit from aerial views, the notion of a map for the autonomous vehicle was barely conceived at the time of those challenges.  But in the almost fifteen years since, the maps being developed for self-driving cars have grown into an industry of their own, promising to orient cars to machine-readable records of the roadways in real time.
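One way to picture what such a machine-readable record contains is the sketch below, which models a single lane segment and a crude real-time lookup.  The field names and schema are hypothetical illustrations, not the format of any actual HD-map provider, but they suggest how these maps encode geometry and rules while carrying no trace of human presence.

# A hypothetical sketch of one record in a machine-readable road map.
# The fields are illustrative assumptions, not any vendor's actual schema.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LaneSegment:
    segment_id: str
    centerline: List[Tuple[float, float]]   # (latitude, longitude) vertices
    width_m: float                           # lane width in meters
    speed_limit_kph: float
    successor_ids: List[str]                 # segments a car may enter next

def nearest_segment(segments, position):
    """Return the lane segment whose first centerline vertex is closest to
    the car's (lat, lon) position -- a crude stand-in for the continuous
    map matching an autonomous vehicle performs in real time."""
    lat, lon = position
    return min(segments,
               key=lambda s: (s.centerline[0][0] - lat) ** 2 +
                             (s.centerline[0][1] - lon) ** 2)

# Example: two hypothetical desert-highway segments and one query position.
highway = [
    LaneSegment("seg-001", [(34.901, -116.823), (34.902, -116.821)],
                3.7, 105.0, ["seg-002"]),
    LaneSegment("seg-002", [(34.902, -116.821), (34.903, -116.819)],
                3.7, 105.0, []),
]
print(nearest_segment(highway, (34.9025, -116.8205)).segment_id)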



Filed under 3-D maps, autonomous cars, HD Maps, machine-readable maps, self-driving cars