Self-driving or autonomous cars promise a change in patterns of mobility more radical than any previous shift in transportation. Yet while they depend on maps, the maps made for self-driving cars are perhaps unlike any others: they are not made or designed for human eyes.
They suggest a deep change in mapping: the assembly of a machine-readable record of each edge of the road. Roads are scanned, integrated with LIDAR imagery of the environment, and augmented with real-time feedback loops in ways that seem to provide a virtual 1:1 map of the automotive environments in which cars navigate. Yet how intelligent are these “intelligent maps” by which self-driving cars integrate and position themselves in the networks of roads? As Shannon Mattern has argued, “With the stakes so high, we need to keep asking critical questions about how machines [are able] to conceptualize and operationalize space” in ways that recognize the human actors in that space, and about how these expanding networks of mapping serve as actants, shifting the networks of on-road behavior. How, as Mattern puts it, will “artificial intelligences, with their digital sensors and deep learning models,” which perpetuate one image of space, “intersect with cartographic intelligences and subjectivities beyond the computational ‘Other’”? How, put differently, will such maps register the people who also occupy the sidewalks and the other cars on the highway (whether as drivers or passengers), and how will they be affected by them?
The lack of a clearly sensory cartography of place is not only inherent in the ghostly nature of LIDAR views of streets and street settings; it is also rooted in the trust we are increasingly inclined to place in machines to read space accurately and comprehensively, and indeed with greater continuity and precision than most maps can hope to contain. The amazement at the possibilities of mechanical sensing is, in a sense, pushed to new limits in the promises of self-driving cars, which have quickly gained multiple evangelists. We already have cars able to signal their approach to the edges of traffic lanes, alerting their human drivers to impending danger. The promises of self-driving cars have generated increasing optimism in the United States and Japan, as the next generation of driving vehicles in cultures ready to embrace the new, perhaps because they promise the very possibility of constant motion in a country of speed. But by removing the routes of human motion, and how humans move through road systems, from direct intelligence, the maps being designed for autonomous vehicles to navigate the roadways of America and beyond suggest a new nature of space as much as of transportation or transit: the maps for self-driving cars, while not designed for human readers, suggest a scary landscape rarely open to surprises and eerily empty of any sign of human habitation.
The maps for autonomous vehicles are, commonsensically, absent of human presence in the automotive landscape they reveal, and that has grown up around them. They are the creation of an over-paved world, and also of a readiness to accept the growth of this over-paved world: while based on LIDAR sensing, much of the sensing that goes into their construction or appears in rough cuts will end up on the cutting-room floor, as the maps focus on the qualities, contours, or criteria of roads, in ways that naturalize the man-made features that will be sensed by machines, including variations in traffic flows, rather than familiarity with the surrounding landscape, weather, or even road conditions. The promises of reduced commute hours, expanded public transit lines, fewer fatalities, and an economy of passenger-friendly vehicles seem to depend, however, on the “intelligence” of these maps, and on how a computer intelligence can provide an intelligent reading of an automatically sensed environment of other self-driving cars, presumably programmed to drive like humans, or at least to register their own motions. While the licensing of map data will in fact mean the broadest-ever generation and destruction of cartographic data, unless someone develops a deep interest in historical road conditions recorded in real time, digital sensors and deep learning models are promised to save the day by rendering static maps finally obsolete. But will the maps for self-driving cars be able to interact adequately with the cartographic intelligences of human drivers, or of the humans who will presumably still people the world and its street intersections?
And what will even guarantee that self-driving cars do not go off the roads? The absence of human intelligence from the maps for self-driving cars creates a code-space that seems to depend on its interaction with human intelligence far more than its maps register at first sight. The simulated scenarios that engineers have created for such self-driving cars seek to “provide a view of the world that a driver alone cannot access, seeing in every direction simultaneously, and on wavelengths that go far beyond the human senses,” but they by nature depend on the ability to translate real-time scenarios, rendered in HD maps as well as topological models, into the car’s actual course.
For in promising to synthesize, compress, and make available amazing amounts of spatial information, data sufficient to process the rapid increase of roadways that increasingly clog much of the inhabited world, they are maps for the age of the Anthropocene, when ever-increasing spaces are being paved. Yet even after the arrival of promising “autonomous vehicles” from Tesla, which has introduced a new Autopilot feature able to maneuver on well-marked highways, and tests of urban driving by Uber, General Motors, and of course Google, relying only on sensors to navigate space remains of limited safety in many areas: vehicles are forced to integrate LIDAR, mid- and low-range radar, camera-based sensors, and road maps of real-time situations, and have difficulty calibrating road conditions and weather with the efficiency of human drivers. The absence of a clear road map for this integration, however, is paralleled by the inability to synthesize contingent information in the maps themselves, which in their absence of selectivity offer oddly hyper-rich levels of information.
The notion of processing such comprehensive maps was far away when DARPA sent out a call in 2003 inviting engineers to design self-driving cars that could navigate a one-hundred-and-forty-two-mile-long course in the desert, from near Barstow, CA, to Primm, Nevada, without giving them a sense of its coordinates. On a race course filled with gullies, turns, rocks, switchbacks, and obstacles, from train tracks to cacti, the teams hoped to integrate GPS and sensors to create a car able to navigate space with as complete an image of road conditions as possible. If the rugged nature of these rigged-out vehicles recalled the first run of a Mad Max film in their outsized, paramilitary nature, designed as if to master landscape of any sort, they were so over-fitted with machinery, with what seemed futuristic sensors tantamount to signage, that they seemed to wrestle with the fundamental problem of mastering spatial information that the new generation of autonomous vehicles has placed front and center.
DARPA’s top-down attempt to stage a race of autonomous vehicles was intended, in a military context, to keep soldiers out of harm’s way. The attempt to generate a new sort of military vehicle raised compelling questions about integrating a range of spatial signs into an apparatus of machine vision, laser range-finding data, and satellite imagery, but it suffered from an inability to take in environmental information: no cars completed the course as it was staged, and the vehicle that traveled furthest went only seven and a half miles. Even on a course located in the desert (still the preferred site for testing most self-driving cars, given the lack of weather conditions and better-kept road surfaces that minimize unpredicted external influence), the relation of car to world was less easily negotiated than many had thought.
While the DARPA Grand Challenge was not immediately successful, it set the basis for future collaboration between machine-learning and automotive companies around notions of remote sensing. And it placed front and center the problem, which remains, of how to establish more than a one-dimensional picture of the road ahead so that the car can navigate it most easily. By 2007, the Urban Challenge invited autonomous vehicles to navigate the streets of an urban environment in Victorville, Calif., against moving traffic and obstacles and following traffic regulations, in ways that lifted a corner on the mappability of the future of driverless cars, as if to throw pasta against the wall in the hope that some of it would stick. Although the starting point of self-driving cars on a network of readable roads, equipped with recognizable signage, remains the most profitable area for development, the machine-readable road maps eerily naturalize the parameters of the roads in their content and absent humans from their surface. Despite the recourse to satellite photography and attempts to benefit from aerial views, the notion of a map for the autonomous vehicle was barely conceived. But in the almost fifteen years since, the maps being developed for self-driving cars have grown into an industry of their own, promising to orient cars to machine-readable records of the roadways in real time.