Monthly Archives: March 2013

Mapping Radiation Levels: Toward a Vigilante Cartography or a Model of Data-Sharing?

Few maps today rely entirely on self-reported measurements: the data-rich basis of most maps makes poor controls on data seem an early modern throwback. But the ability to transmit datasets to the internet from local devices has changed all that. The recent proliferation of radioactivity maps is based on the open sourcing of self-reported measurements to form a new picture, placing readings taken with Geiger counters into a framework analogous to a template borrowed from Google Maps. Although the only common instrument to register radiation’s presence is a Geiger counter, and no standards have been developed for representing rises in radiation counts in different regions–or indeed the limits of danger to personal health–the provision of such a map is crucial to disseminating any information about a potential future disaster. By using the internet to upload and broadcast shifting radiation levels, maps of radiation gain a new flexibility and readability through the platform of Google Maps: maps can instantaneously register ambient radiation in air, earth, water, or rainfall, as well as the radioactivity of food, in striking visualizations of geographic space. This came to a head in the maps made in response to the Fukushima Daiichi nuclear disaster of March 2011.

These multiple maps might assemble an otherwise ‘hidden map’ of local radiation levels, exposing the dangers populations face from radiation leaks in a shared database that can be regularly updated online in cases of emergency. Although the measurements of danger are debated by some, mapping radiation levels provides a crucial means to confront meltdowns, breaches of containment walls, or leaks, and to define limits of danger in different regions. Interestingly, such a map stands in inverse relation to the usual mapping of human habitation: rather than chart sites of habitation or of note, it tracks or measures an invisible danger as it travels under varied environmental influences in ways hard to predict. The notion of what such a disaster would be like to map has long been hypothetical–and remains so, to an extent, in datasets like the National Radiation Map, which use the Google Earth platform or available GIS templates to diffuse information not otherwise easily accessible. This is a huge improvement over the poor state of information at the time of the threatened rupture of the containment structure at Three Mile Island, near Harrisburg PA, in 1979, when no sources had a clear idea of what radius around the plant to evacuate, or how best to address health risks: if a radius of 10 miles was chosen in 1979, the Chernobyl disaster required a radius of more than double that. The cleanup of the plant went on from 1980 to 1993; high radiation levels have continued within a 10-mile radius of Harrisburg to this day.

The larger zones that were closed around the more serious and tragic Chernobyl Nuclear Power Plant, which in fact exploded in April 1986, led to a clearly demarcated Zone of Alienation, evacuated three days after the explosion, and to considerable fear of the diffusion of radioactive clouds borne through the environment to Europe and North America. The irregular boundary of immediate contamination, including pockets of radiation hotspots not only in Belarus but in Russia and Ukraine, suggests limited knowledge of the vectors of contamination, and imprecise measurements.


This raised a pressing question:  how to render what resists registration or simple representation–and even consensus–on a map?  And is this in any way commensurate with the sorts of risks that maps might actually try to measure?

The tragic occurrence of the 2011 Fukushima meltdown raised similar questions, but converged with a new basis for an internet-based map of the region. If the incident provided a case-in-point of ready demand for maps, the wide availability of online access in the region led to considerable improvisation with the value of a crowd-sourced map defined not by the local government or nuclear authorities, but by the inhabitants of the region who demanded such a map. The accident that resulted from the tsunami no doubt contributed to a resurgence and perfecting of the crowd-sourced map both in the United States and, in a more flexible way, in Japan, as websites refine the information carried in radiation maps: open-access maps can quickly register the consequences of nuclear disaster–or indeed detect a leak or structural compromise–in the age of the internet, and offer a reassuring (or cautionary) image adequate to the invisible and intangible diffusion of radiation in the local or regional environment.

Demand for such online databases reveals and feeds upon deeper fears of an official failure to share such data. Indeed, the drive to create a map of some authority has dramatically grown in light of recent radiation disasters that were not mapped earlier, in part because of liability issues and fears that government protection of the nuclear industry has compromised its oversight responsibilities. If the growth of online sites is a sensible and effective use of data-sourcing on an open platform, it is also one no doubt fed by a paranoid streak in the American character stoked most heavily these days by folks on the Right. I’ve decided to look at two examples of these maps below, both to reflect on the nature of a crowd-sourced map and to suggest the pluses and minuses of their use of a GIS framework to visualize data.

The emphasis on the map as a shared database and resource to monitor and publicize sensitive information about radiation levels has unsurprisingly grown with the recent threat of contaminated waters that breached containment walls during the meltdown of the Fukushima Daiichi reactor in March 2011–and so have the difficulties of providing a reliable map of radiation. Although reactors are licensed by governments and monitored by government agencies, debates about the public dangers that reactors pose concern both the danger levels of radiation and the ability to collect exact data about its spatial distribution and communication through water, air, and other environmental vectors. The ability to upload such measurements directly to data-sharing platforms allows the relatively low-cost creation of maps that can be shared online among a large group of people in regularly updated formats. Given the low cost of accumulating a large data-set, Safecast concentrated on devising a variety of models to visualize distributions along roads or by interpolating variations in existing maps.

The group-sourced websites showing regional and local fluctuations are not visually or cartographically inventive, but they pose questions about using data feeds to reveal a hidden topography, as it were, of radiation across the country or landscape–as if to remedy the absence of an open-access, trustworthy source of this information that local governments would sponsor or collate. Against a field that notes the sites of reactors with the standard hazard signs designating active reactors, viewers can consult fluctuating readings in circled Arabic numerals to compare the relative intensity measured at each reporting monitor station. While rudimentary, and without adjustments or standardized measurements, this is an idea with legs: the Safecast Project proposes to take mapping radiation in the environment along a crowd-sourced model–an example of either a pluralization of radical cartography or a radical cartography that has morphed into a crowd-sourced or “vigilante” form of mapping radiation levels.

Safecast wants to create a “global sensor network” with the end of “collecting and sharing radiation measurements to empower people with data about their environments.” Its implicit if unspoken message of “Cartography to the People!” echoes a strain in American skepticism, if not paranoia, about information access, and fear of potential radioactive leaks. In a counter-mapping of USGS topographic surveys, the movement to generate such composite maps on the internet is both an exciting dimension of crowd-sourced cartographical information and a potentially destabilizing moment for the authority of the map, or a subversion of its authority as an image produced by a single state.

The interesting balance between authority and cartography is in a sense built into the crowd-sourced model implied by the “global sensor network” that Safecast wants to construct: while such records are not readily available on government-sponsored sites, those interested in obtaining a cartographical record of daily shifting relative radioactive danger can take things into their own hands with a handy App.

The “National Radiation Map” aims at “depicting environmental radiation levels across the USA, updated in real-time every minute.” They boast: “This is the first web site where the average citizen (or anyone in the world) can see what radiation levels are anywhere in the USA at any time.” As impressive are the numbers of reactors that dot the countryside, many concentrated on the US-Canadian border by the Great Lakes, as well as in Tennessee or by Lake Michigan. Although the credible alert level is 100, it’s nice to think that each circle represents some guy with a Geiger counter, looking out for the greater good of his country. The attraction of this DIY cartography–inserting measurements absent from your everyday Google Map or from the Weather Channel–is clear: self-reporting gives a picture of the true lay of the radioactive land, one could say. This is a Jeffersonian individual responsibility of the citizen in the age of uploading one’s own GPS-determined measurements; rather than depending on surveying instruments, however, readings from one’s own counters are uploaded to the ether from coordinates geotagged for public consumption.

Of course, there’s little standardization of measurements here, as these are all self-reported from different models and designs–the site lists the fifteen acceptable models–used to broadcast their own data-measurements or “raw radiation counts,” which leaves the map of limited scientific reliability, with few controls. So while the literally home-made nature of the map has elements of a paranoid conspiracy–as most any map of nuclear reactors across the country would seem to–the juxtaposition of trefoil radiation hazard signs against the bucolic green backdrop oddly renders it charmingly neutral at the same time: the reactors are less the point of the map than the radiation levels around them.
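Comparing those “raw radiation counts” across fifteen different counter models would require converting each device’s counts per minute (CPM) into a common dose-rate scale. A minimal sketch of such a normalization, assuming tube names and conversion factors of my own invention for illustration (not values published by the Radiation Network):

```python
# Illustrative sketch: converting self-reported "raw radiation counts"
# (CPM) from different Geiger counter models to a rough common scale.
# The tube names and factors below are assumptions for illustration,
# not values published by the Radiation Network.

# Approximate counts-per-minute registered per microsievert/hour
CPM_PER_USVH = {
    "SBM-20": 175.0,    # assumed factor for a common Soviet-era tube
    "LND-7317": 334.0,  # assumed factor for a pancake-style tube
}

def normalize_reading(model, cpm):
    """Convert a raw CPM reading to an approximate dose rate in uSv/h,
    or return None for an unknown device rather than guessing."""
    factor = CPM_PER_USVH.get(model)
    if factor is None:
        return None
    return cpm / factor

# 35 CPM under the assumed SBM-20 factor works out to 0.2 uSv/h
print(normalize_reading("SBM-20", 35.0))
```

Returning None for unknown hardware matters here: an unconvertible self-report should be flagged for review, not silently averaged into the map.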


USA map radioactivity


But the subject that is mapped is anything but reassuring. When we focus on one region where the density of self-reported sites gains a finer grain, the Northeast, we can see the concentration of hazard signs noting reactors clustering, oddly, around larger inhabited areas–like the ring around New Jersey, just removed from New York, the nuclear reactors in the triangle of Tennessee and Virginia, or those outside Chicago and in Iowa–and one sees a somewhat high reading near Harrisburg PA. But it’s reassuring that a substantial number of folks were using their Geiger counters at that moment, inputting data into this potentially useful but probably also potentially paranoid site. I hope they do interview them beforehand, given the very divergent readings at some awfully proximate sites.




If we turn to a similarly dense network on the West Coast, the folks at Mineralab offer a similar broad spread among their informants, and reveal the odd location of so many reactors alongside rivers–no doubt using their waters for cooling, but posing potential risks of downriver contamination at the same time.



The view of Southern California is perhaps still more scary, and reminds us that the maps have not taken time to denote centers of population:




And there’s a charming globalism to this project. Things aren’t particularly worse in the USA in terms of the reliance on reactors: if we go to Europe, reporters stand similarly vigilant, Geiger counters at the ready, given the density of those familiar trefoil hazard signs in the local landscape:




The truly scary aspect of that map is no doubt the sheer distribution of reactors, whose hazard signs dot the countryside like scary windmills or danger signs. And, to put in perspective the recent tsunami that breached the walls of the Fukushima reactor, sending material waste and leaching radioactive waters toward California’s shores, consider Japan. An impressive range of reactors dots the countryside, but one vigilant reporter in Sapporo notes the very low levels of radiation that reach his counter:




Within a week after the March 11, 2011 earthquake, the greatest ever to hit Japan, Safecast was born as a volunteer group dedicated to open-platform radiation monitoring in the country and worldwide. In addition to the more than 15,880 dead in the quake and tsunami, the tsunami caused level-7 meltdowns at three reactors in the Fukushima Daiichi Nuclear Power Plant complex, necessitating the evacuation of hundreds of thousands of residents, as at least three reactor buildings were breached by hydrogen-gas explosions after cooling-system failure. While residents who dwelled within a 20 km radius of the Fukushima Daiichi Nuclear Power Plant were asked to evacuate, the United States government urged American citizens living within a radius of up to 80 km (50 mi) of the plant to evacuate. This raised questions about the dispersal of radiation from the plant, and deeper questions about the safety of returning within a set zone, or the need to demarcate a no-entry zone around the closed plant.

The rapid measurement of radiation distributions not only met wide demand but provided a new mode of sharing information about dangerous levels of radiation: as of July 2012, Safecast included some 3,500,000 data points registering radiation levels. In ways that capitalize on how the internet allows a massive amount of data to be uploaded from numerous points around the world, Safecast exploits a model of data-sharing on its open platform, offering different models to visualize the readings’ relation to each other: Safecast allows viewers to visualize the data against a road map, a topographic map, and a map of local population distributions, so that they can better understand their relation to the readings collated online.
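One way to picture how millions of point readings become a readable overlay is to bin them into grid cells and average within each cell. The sketch below assumes a simple latitude/longitude grid and made-up readings; it is not Safecast’s actual tiling code:

```python
# A minimal sketch of binning crowd-sourced readings into grid cells for
# a map overlay. The grid size and sample data are illustrative
# assumptions, not Safecast's actual tiling scheme.
from collections import defaultdict

CELL_DEG = 0.1  # grid cell size in degrees (~11 km of latitude)

def cell_for(lat, lon):
    """Index of the grid cell containing a coordinate."""
    return (int(lat // CELL_DEG), int(lon // CELL_DEG))

def grid_average(readings):
    """Average dose-rate readings (lat, lon, uSv/h) per grid cell,
    smoothing out any single divergent device."""
    sums = defaultdict(lambda: [0.0, 0])
    for lat, lon, usvh in readings:
        acc = sums[cell_for(lat, lon)]
        acc[0] += usvh
        acc[1] += 1
    return {cell: total / n for cell, (total, n) in sums.items()}

# Two nearby drive-by readings fall in the same cell and are averaged
cells = grid_average([(37.05, 140.05, 0.2), (37.06, 140.06, 0.4)])
```

Averaging per cell is also a crude control on the divergent readings of proximate sites noted above: one miscalibrated counter is diluted by its neighbors.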

The process of massing data is what makes Safecast such a pioneer: a large range of readings promises a more comprehensive picture of radiation distribution than the uneven coverage isolated readers might allow. The Safecast team hopes to improve upon its readings by designing and promoting a new Geiger counter, and has made available the handy workhorse bGeigie, although the cost of $1,000 apiece and the time-consuming nature of their assembly is a major obstacle they’re trying to confront. The smaller and handier bGeigie Nano kit creates a dandy device you can easily carry or affix to your car, and whose measurements are easily uploaded to the Safecast website:




The DIY glee of presenting the tool to measure radiation levels with one’s own mini-Geiger is part of the excitement with which Safecast promises to provide a new map of Japan’s safely habitable land. The excitement also derives from a belief in the possibility of “empowering” people to measure and compile data about their environments, rather than trusting a map assembled by “experts” or official sources who have not been that forthcoming with data-measurements themselves. The above smile also reflects the vertiginous success of Safecast in distributing its bGeigie, and its boast to have amassed an open-sourced database for open access.

This seems the new key to revealing knowledge in the multiple visualizations that Safecast offers viewers: with the enthusiasm of great marketing, their website announces with some satisfaction, “attach it to your car and drive around collecting geo-tagged radiation data easily uploaded to Safecast via our API upload page.” This suggests a whole other idea of a road trip, or even of a vacation, in the multiple ‘road-maps’ that volunteers have uploaded for approval on the Safecast site, with over 10,000 data points deriving from bGeigie imports that Safecast can readily convert to a map:
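The conversion from drive log to map that the quote describes can be imagined as turning geo-tagged rows into GeoJSON features that a web map renders. The row layout and property names below are my own assumptions, not Safecast’s actual log or API schema:

```python
# Sketch: converting geo-tagged drive-log rows into GeoJSON features for
# display on a web map. The row layout and property names are
# assumptions for illustration, not Safecast's actual log or API format.
import json

def log_rows_to_geojson(rows):
    """rows: iterable of (iso_timestamp, lat, lon, cpm) tuples."""
    features = [
        {
            "type": "Feature",
            # GeoJSON orders coordinates as [longitude, latitude]
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": {"captured_at": ts, "cpm": cpm},
        }
        for ts, lat, lon, cpm in rows
    ]
    return {"type": "FeatureCollection", "features": features}

rows = [("2011-04-24T10:00:00Z", 35.68, 139.77, 42)]
collection = log_rows_to_geojson(rows)
assert json.dumps(collection)  # serializes cleanly for an upload or tile layer
```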

Tokyo traffic Safequest


This is also quite serious stuff, taking crowd-sourced cartography to a new degree: with some 4,000,000 radiation points detected by the Safecast team, the website is able to assemble a comprehensive map of relatively uniform readings, complementing the sites of radioactivity assembled and culled by the Japanese government with its own independent data from an impressive range of aggregate feeds of environmental data from several NGOs and individual observers across Japan’s coast:



The image of such aggregate data feeds allowed Yahoo! Japan to build their own map displaying the static sensor data of Safecast:

Yahoo Japan feeds

Kailin Kozhuharov has created a detailed map to visualize the distribution of radiation levels across the island through the Safecast database:

 Kozhuharov Visualization of Radiation Levels


The coverage is truly impressive, and the multiplication of data points technically unlimited and potentially comprehensive. While divergent readings may be entered every so often as a Geiger counter wears down or malfunctions, controls are built into the system. Consider the coverage in Japan, again the focus of mapping radioactivity in the wake of the recent Fukushima disaster, where Safecast is based, once again using locally obtained data:


Safecast Japan


The widespread appeal of this device, even more than the Radiation Network, reveals the widespread nature of a belief or suspicion–no doubt with some grounds or justification–that a true map of the dangers or levels of radiation is never already provided or available to citizens, and that governments’ failure to communicate an accurate mapping of radiation demands a privatized response. With its partnership with Keio University, developed after Fukushima, Safecast has developed the “Scanning the Earth” (STE) project, which maps the historical data of radiation readings across the globe. With the Fukushima Prefecture, Safecast has also issued a comprehensive global mapping of the dispersal of high levels of radiation from Fukushima, drawn from its own massive database, to chart the impact of the environmental disaster over time:


Fukushima Prefecture World Map

Although this map reflects ties to the MIT Media Lab, it is informed by a dramatically new local awareness of the importance of creating a map flexible enough to incorporate locally uploaded data measurements for open access. It is also a great example of how an event can create, provoke, or help generate a new sense of how maps can process the relation of local phenomena to the global in a variety of readily viewable formats. The demand for creating this world map clearly proceeded from the local event of the 2011 tsunami: Safecast was in a position to observe the importance of maintaining an open-sourced database (now including some 2,500,000 readings) that offers an unprecedented basis for developing a platform of data-sharing readily available online. Working with the same databases, they also offer some cool visualizations of the data they collect, illustrating differentials in radiation levels in readable ways linked to potential dangers to individual health:

Fukushima?  Safecast

The new facility the internet has created for uploading, sharing, and compiling information from diverse and multiple sites has so lowered the cost of collaboration that it can occur without any reference or dependence on a central governmental authority. This has allowed an immense amount of simultaneous data to be regularly uploaded and stored at almost no extra cost by a group of volunteers, and to be available in transparent ways on an open-access platform. Late in updating this post, I came across an earlier PBS NewsHour episode on Safecast’s interest in data-collection in the wake of the disaster, and the demand of local residents in Japan for further data, given the silence of official government sources on the disaster and its dangers:

(The episode offers great data on using Geiger counters to detect radiation levels at multiple sites near the exclusion zone that rings the reactor, including a restaurant parking lot.)

The means for offering local contributions to a world map of radiation-level distributions reveal an expanded ability to share information and to map the relation of place to environmental disasters. Indeed, the map itself foregrounds new graphical forms of information-sharing. But there are clear problems with the Safecast model, to which Japan is in fact likely to be an exception: Japan already provided online access to large numbers of its population in 2003, offering free wi-fi in trains, airports, and cafés or tea houses. In comparison, the far more limited numbers of people with access to wi-fi or online resources in rural American towns, or even in urban areas, would make such access less possible in the United States, where a similar movement has failed to expand–and not only because of the lack of a disaster of similar proportions. There is the danger that the “freedom of information” they champion is in the end not as openly accessible as one would wish: if roughly one quarter of hotspots worldwide are in the United States, it shared with China and Italy the lowest number of hotspots per person, at lower than 3 per person as of 2007, while Japan had nearly 30 million 3G connections. This creates a significant obstacle to the expansion of the internet as a universal-access service outside urban areas with municipal wireless networks, despite significant plans to expand internet access along interstates. Despite plans to expand free service zones in Asia, Canada, and parts of the Americas, the broadcasting of regional variations in a natural disaster would be limited.

There may be something oddly scary in the fact that Safecast has had its own corporate profile and successful Kickstarter campaign, marking the devolution to the private sphere of the sort of public traditions of cartography formerly undertaken by states for their own populations. For whereas we have in the past treated cartographical records as an accepted public good, there is limited acceptance of accessible data collection and synthesis. As a result, one seems more dependent on active participation in constructing a more accurate map of radiation levels, or upon a network of vigilant vigilante cartographers who can upload data from wi-fi zones. Is there the risk of disenfranchising a larger population, or is data-sharing the best available option?

An alternative model for mapping radiation might be proposed in the compelling map of the oceanic travel of radiation (probably in contaminated waters, but also in physical debris) suggested by vividly compelling cartographical simulations of the long-term dispersal of Cesium-137 (137Cs) from waters surrounding the Fukushima reactor. Although the map is indeed terrifyingly compelling, in relying only on oceanic currents to trace the slow-decaying tracer across the Pacific, the video map seems to arrogate capacities of measuring the dispersal of radioactive material in ocean waters over the next ten years with a degree of empiricism that it does not in fact have. How ethical is that?

For all the beauty of the color-spectrum map of a plume of radiation expanding across ocean waters–and the value of its rhetorical impact in strikingly linking us directly to the reactor’s meltdown–its projected charting of the plume of contaminated waters due to reach the waters of the United States during 2014, if normal currents continue, is far less accurate or communicative than it would seem. To be sure, as the principal investigator and oceanographer Vincent Rossi, a post-doctoral researcher at the Institute for Cross-Disciplinary Physics and Complex Systems in Spain, observes, “In 20 years’ time, we could go out, grab measurements everywhere in the Pacific and compare them to our model.” But for now, this expanding miasma offers an eerie reminder of the threat of widespread circulation of radioactive materials worldwide.



Indeed, the charts that project the spread of radiation over a period of five years, benefitting from the power of computer simulations to map by tracer images the diffusion of radioactive discharge along clear contour lines in the Atlantic, provide a compelling model of how we might want to look at and measure levels of radioactivity in our national waters.
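The logic behind such tracer charts can be caricatured in a few lines: shift a concentration profile downstream with a mean current while decaying it at Cesium-137’s roughly 30-year half-life. Everything here–the grid, the current speed, the initial plume–is an illustrative assumption, not Rossi’s model, which relies on full 3-D ocean circulation:

```python
# Toy sketch of a tracer projection: 1-D advection of a Cs-137 plume by
# a mean current, with radioactive decay. All numbers are illustrative
# assumptions; real simulations use 3-D ocean circulation models.
import math

HALF_LIFE_YEARS = 30.0  # Cs-137 half-life is roughly 30 years
DECAY = math.log(2) / HALF_LIFE_YEARS

def advect_decay(plume, cells_per_year, years):
    """Shift a 1-D concentration profile downstream and decay it."""
    shift = int(round(cells_per_year * years))
    decayed = [c * math.exp(-DECAY * years) for c in plume]
    # cells upstream of the shifted plume are left empty
    return [0.0] * shift + decayed[:len(plume) - shift]

# A plume concentrated near the source, advected for ten years
plume = [1.0, 0.5, 0.1] + [0.0] * 7
after = advect_decay(plume, cells_per_year=0.3, years=10.0)
```

Even this cartoon makes the essay’s point concrete: the decay term is physics, but the advection term is only as good as the assumed currents, which is exactly where the projection’s false air of empiricism creeps in.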


tracer image of radiation from Fukushima


Filed under Cesium-137 dispersal map, Chernobyl, DIY Cartography, Fukushima Daiichi, Global Sensor Network, open-source maps, radiation maps, Safecast, Scanning the Earth, Vigilante Cartography, Vincent Rossi


In late January, my daughter Clara wrote with conviction that “Great adventures in eating must include the all-important meal known as the dessert,” and confessed “I could not live with out dessert.” We could all live without desertification, the expansion of areas of the world that threaten to enlarge existing uncultivable land, which the United Nations in 2007 declared “the greatest environmental challenge” and a particular emergency in sub-Saharan Africa, where it could provoke the displacement of some 50 million people within the decade. Several countries have tried to contain the expanding deserts by planting trees, restoring grasslands, or introducing plants to stem eroding soils; but the huge expense of water–which evaporates as often as it feeds plants–makes this far less effective or practical than it might seem as an ecological bulwark.

Scientists have debated and struggled to understand the causes and origins of the growth of deserts across the world, asking whether the underlying causes lie with declining rainfall or the severe drought that began in the period leading to the 1980s, and how to place local measurements that revealed flourishing vegetation near barren landscapes of desertification. The British ecologist Stephen Prince expressed his frustration at assembling a larger picture of desertification from data he described as “pinpricks in a map,” turning instead to studying vegetation from space, using time-lapse photos of the African Sahel–whose droughts caused famines across sub-Saharan Africa–to reveal relations between local conditions across a huge expanse. This photograph by Andrew Heavens offers a synthetic document of the broad proportions by which the desert has encroached on arable land:




The phenomenon is not limited to sub-Saharan Africa, moreover: the sensitivity to desertification has also been mapped across the variety of microclimates in the fertile region of the Mediterranean basin:


Desertification Sensitivity in EU

The challenge lies in understanding the global proportions of desertification–revealed in the below map that notes expanding deserts by tan bands–in a coherent understanding of the huge variations of local contexts from Asia to Anatolia to Patagonia to Australia to the western United States:



The global risks of desertification–most prominent on five continents–have been dramatically heightened in recent years not only by global warming, but by our own practices of land use, the Zimbabwe-born environmentalist and ecologist Allan Savory notes, describing it as a “cancer” of the world’s drylands, a “perfect storm” resulting from huge increases in population and land turning to desert at a time of climate change. Land is turning to desert not only in dry regions, but wherever the lack of any use of the land leaves it bare and removes it from land-use. Savory has argued in a persuasive recent TED talk that the global dangers of desertification have multiple consequences, of which climate change is only one.

The growth of areas of desertification is apparent in this satellite view, which reveals the extent of a global process not confined to Africa’s Sahara, but already progressed across quite large regions of both North and South America as well, on account of rapidly accelerating changes in micro-climates world-wide:

world deserts satellite view


The expanses of desertification are even more apparent in a global projection of our newfound vulnerability to desertification, which illustrates the massive degree of changes in the world’s land–in part effected by the reduced bunching and moving of grazing animals, as federal governments restricted the lands open to cattle grazing in the belief that good land-management practices meant protecting plants from grazing animals:




The above map, made by the United States Department of Agriculture-NRCS Soil Science Division, reveals the dangers of expanding desertification at an extremely fine grain. We can view this map by highlighting the expanses threatened by increased desertification in this world-wide satellite view, whose regions ringed in red highlight the areas of a dramatic increase of desertification and an apparently unstoppable cascade of deep environmental change and release of carbon gases:


Deserts RInged in Red



The difficulty of understanding the causes of desertification–a process begun 10,000 years ago but rapidly accelerating now–arises from a deeply unholistic ecological view of the nature of microclimates. Savory asks us to relate it to the hugely artificial contraction in the number of herds grazing land in recent years, which creates a very high vulnerability to desertification–noted in bright red–and results in carbon-releasing bare soil, threatening to increase climate change much as does the burning of one million hectares of grasslands in Africa alone. Savory argues that the only alternative open to mankind is to use bunched herds of animals to mimic nature, whose waste, acting as mulch, could help both store carbon and break down the methane gases that would otherwise be released by bare, unfertilized soil.

Such mimicry of nature would effectively repatriate grasslands by introducing planned pasturing of animals and livestock like goats to regions bare of grass or already badly eroding–and has led to the return of grasses, shrubs, and even trees and rivers in regions of Africa, Patagonia, and Mexico, with beneficial consequences for farmers and food supplies. By planning the movements of herds alone, replicating the effects of nature can turn back the threat of desertification: movable herds of sheep and cows, already increased in some areas by 400%, have had the dramatic effect of returning grasslands to denuded regions of crumbly soil and straggling grasses. Even in areas of accelerating decay of grasslands and growth of bare soil, Savory argues, we can both provide more available food and combat hunger through planned pasturing, while reducing a large threat of climate change that would remain even if we eliminated the worldwide use of fossil fuels. He argues that we can take carbon out of the air and restore it to grasslands’ soils at levels that would return us to pre-industrial concentrations, based on a deeper appreciation of the ecological causation of desertification and the replacement of rejected notions of land-management, actively reducing the frontiers of desertification. Although Savory does not note–or perhaps need to call attention to–the risks of the huge displacement of populations and consequent struggles over arable lands, planned repatriation of land by animals would provide mulch and fertilizer to effect a return of grasslands in a matter of only several years.

The expansion of the desert is not mapped in the media as often as the rising of ocean waters.  But it may offer a more accurate map of the alternatives we face over the next fifty years, and hint at their huge attendant consequences:


Human Impact on Deserts

(I’m including a post by Susan Macmillan on Allan Savory’s March TED talk here.)


Filed under Climate Change, Desertification, Global Drought, Global Warming, mapping arable land, Mapping Desertification

Mapping the Universe? (Why a Map?)

Significant celebratory buzz has accompanied the recent images that "map" the distribution of matter and heat in the early universe.  Let's pause on the word map, however, as we admire their content:  it is worth considering how these images came to be regarded as maps–as opposed to simple images or visualizations–for what that tells us about how we see maps.  Back when Arno Penzias and Robert Woodrow Wilson first accidentally recorded background microwave radiation resonating through the universe while at Bell Labs in Murray Hill, NJ, they were so puzzled that they paused before reporting their results, trying to make sense of the buzzing in the background by cleaning their instruments of registration and even scooping some fifty pounds of accumulated pigeon poop out of their massive antenna.  They knew the significance of the sound, but weren't quite sure how to make sense of the measurements.  And so they checked multiple times before they went public with their observations, because there was no clear way to imagine what form that background static took.  They had hardly any images of it–a hum that barely registered, and was not even imagined as something that could be visualized.

The reconstruction of their results was hardly a map, since the detection of background radiation was so slight and the image itself so trace-like and wispily diaphanous.  What radiation did register showed, according to a modern reconstruction, something like a very primitive radar or sonar, or the ghostly shadows of an ultrasound–after all, this is the afterglow, as it were, of the Big Bang explosion that first scattered the universe's matter:


Penzias and Wilson 1965


While a reconstruction, this image provides a basis for a history of background-radiation images, revealing how their forms of visualization came to be seen as retrospectively mapping the distribution of matter just after the Big Bang.  The COBE probe dramatically clarified these measurements by 1989, using satellite readings to differentiate something of a clearer visualization, or intensity map, of the eerie afterglow of remaining background radiation, corresponding to variations in local temperatures.  This is somewhat like an early form of medical imaging, reminiscent of an MRI with isotopes or coloring agents injected into a bloodstream, highlighting areas with a sort of detailed fuzziness.


Cobe Afterglow


The COBE probe gave important confirmation of the "lumpiness" or uneven distribution of matter–in other words, it suggested that there was already a sort of map of the distribution of densities some 10⁻³⁴ seconds after the Big Bang, with long-term consequences for our universe's configuration:  variations in density were defining elements of the background-radiation distribution COBE showed.  The evolution of the universe's size increased this lumpiness, as concentrations of matter in specific places created disparate gravitational pulls.  Other images oddly suggested continents and oceans by their terraqueous color-scheme, which indeed recalled a terrestrial map, if something like a relief map of considerable granularity:




The metaphor of a relief map is helpful.  Even a later 1992 image from the Cosmic Background Explorer probe (COBE) remains blurry; its forms are less reminiscent of a familiar oval terrestrial projection than of a map of heat distribution, plotting on galactic coordinates the temperature changes revealed by the background radiation:  it maps the after-image of a primeval heat distribution, giving a sense of the distribution of matter that gave off light and created the iridescent glow.  But it seemed less a detailed map than a visualization, or an imaging of the considerable lumpiness–the variations in cosmic microwave background radiation–that resulted from that primeval explosion:


1992 MAP


The enormous augmentation of detail in the image that the larger Microwave Anisotropy Probe generated after it was sent into space in June of 2001 provided a far more detailed picture than earlier believed possible.  To be sure, its acronym aside, this didn't look like anything we'd call a map recording a uniform distribution of space.  But the visualization of emissions published in 2003, based on a year of data collection, was of truly stunning detail as "an image of the infant universe"–the map of temperature variations was a big improvement on the first "baby pictures of the universe" taken by the Cosmic Background Imager high in the Andes.  (Each 'map' has been announced to the public with considerable fanfare focussing on the wonder of recording this image of background radiation in any visually readable form.)

To improve the resolution of this "picture," the later WMAP spacecraft–an improved probe named after the cosmologist David Todd Wilkinson, a physicist who was researching background radiation when Penzias and Wilson first got their results–employed reflecting Gregorian dish mirrors to register background radiation at far greater resolution, with a 45-fold greater sensitivity than the COBE probe.  The exquisite heat-variations it compiled record temperature fluctuations from the end of the "Cosmic Dark Ages," 380,000 years after the Big Bang, "mapping" the last scattering of light from the explosion.  The findings set a new basis for imaging the landscape of the early universe, supporting the current Standard Model of Cosmology and revealing a picture of a universe in which dark energy played a significant role.  It depicts with unprecedented granularity a record of heat variations astounding even when shrunk to the size of a 4-by-6 index-card:


background microwave radiation, 2003

Leaving aside the embedding of "map" in the handy acronym of WMAP, or the Wilkinson Microwave Anisotropy Probe, the beauty of its synthesis of data was called an "image" of background cosmic radiation, more than a "map" of the early universe, in 2003, as it was when the three-year data were released in 2006.  The oval projection used to compile an all-sky image of temperature fluctuations, revealing the spots and clusters that grew to form galaxies, was a massive effort of data collection.  Take a moment to click on this image of a sky nearly fourteen billion years past, and examine in detail its variations, which strikingly lack any uniformity:
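The oval frame of these all-sky images is itself a cartographic choice: an equal-area projection (typically Mollweide) of the whole celestial sphere onto a plane, just as a world map flattens the globe.  As a minimal sketch of how such an oval "map" renders temperature fluctuations–using synthetic random data in Python with matplotlib, not actual CMB measurements–one might write:

```python
# Sketch: display temperature fluctuations over the whole sky in an
# oval (Mollweide) equal-area projection, as all-sky CMB images do.
# The fluctuations here are synthetic random modes, NOT real data.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Galactic coordinates: longitude l in [-pi, pi], latitude b in [-pi/2, pi/2]
l = np.linspace(-np.pi, np.pi, 360)
b = np.linspace(-np.pi / 2, np.pi / 2, 180)
L, B = np.meshgrid(l, b)

# Fake "anisotropy": a sum of a few random low-order modes on the sphere
dT = sum(rng.normal() * np.cos(k * L) * np.cos(k * B) for k in range(1, 8))

fig = plt.figure(figsize=(8, 4))
ax = fig.add_subplot(111, projection="mollweide")  # the oval projection
im = ax.pcolormesh(L, B, dT, cmap="coolwarm", shading="auto")
fig.colorbar(im, ax=ax, label="dT (arbitrary units)")
fig.savefig("sky_map.png")
```

The choice of an equal-area projection matters: it preserves the relative sizes of hot and cold patches, so the "lumpiness" of the sky can be compared by eye across the whole image.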




The image of the universe published several days ago, universally presented as a map, is a similar data synthesis, based on data accumulated over fifteen months.  Its sensitivities are revealed below in a heat map of readings from the orbiting Planck space telescope.  They suggest an even more surprisingly variegated distribution of matter, one that looks closer to something like a continental division of the world, as if the macrocosm prefigured the microcosm of earth, but that seems even less uniformly distributed and, as astronomers put it, more "lumpy":




The image, like a world map, is a visual register of knowledge, as well as a basis for future research and theory-checking, condensing substantial empirical information and sightings.  It provides a partial confirmation of the age of the universe, and grounds to support the Big Bang model and the theory of cosmic inflation.  (The image also suggested adjustments in the amount of so-called "Dark Matter" that is around, and caused some adjustment of the dates since that huge explosion.)  As such, the "map" seems to reveal our ability and interest to take stock of progress, to see some new directions for work, and to be satisfied that we have something (at last) like a fine-grained map based on these results, even without the identifying names or signs of orientation we expect from land maps.

How does it offer viewers a map?  For one, it presents a sort of key unlocking a hidden architecture of the world, analogous in its potential meanings to the map of the human genome.  Synthesizing a huge amount of data, much in the manner of the human genome project, both reveal a sort of master-code in meta-data scientific images that seem mythical master-maps of knowledge.  This earliest image of the universe is also stunningly beautiful in the variations it reveals with an almost palpable resolution, and the sheer beauty of delineating a primeval topography of matter is mind-boggling given that it records a skyscape of some billions of years ago:  the sheer "beauty" of the image, admired by cosmologists like David N. Spergel, qualifies the refined synthesis of variations as a "map" as well as a registration, revealing a distribution of local temperatures, even though it is not that comprehensible as a record of space.

But the data compilation also interestingly fits how expectations for mapping have radically shifted since the widespread acceptance of GIS–computer-generated images not based on individual images or transcriptions but compiled as graphics–whereas earlier visualizations would not have been so readily classified as maps.  The broad purchase of maps as intellectual tools and visual bases for further inquiry is reflected in the large number of media maps daily diffused on the web, television, and other platforms of visual consumption.  While we used to consider maps drawn renderings based on hard data or derived from surveying tools, the acceptance of computer-generated images makes us ready to call the synthesis a map.  The refinement of its color schema, and what might be called its greater resolution or granularity, suggests the broad popular currency of something like the projected heat maps of changes in summer temperatures as a result of global warming–or the palpability of the abstract blobs of bright red coloration that denote heat fronts in weather maps.

The maps below, for example, chart the projected growth of summer temperatures for the years 2050 and 2090, and make an immediately felt and particularly vivid argument even in abstract form, in which the earth assumes a temperature in relation to previous summertime highs:


global warming


The Planck image of the universe meets the demand for an actual image of what once was, as well as for a readable synthesis of data.

Of course, it's a much more satisfying picture, and one of considerable technical dexterity and achievement.  More to the point, it is a synthesis of astronomical observations whose measurements are correlated to assemble what seems a continuous and coherent whole.  And that makes it a map, after all, in an age of data correlation.  It doesn't have the symbolic recognizability of anything like a global projection, to be sure, or contain any orienting words or textual signs, but the notion of an early differentiation into degrees of lumpiness of matter is as compelling an image as we're likely to get or could reasonably hope for.  Indeed, the "baby photograph" metaphor is pushed to new heights at the same time–"fatter than expected"–alongside interest in the image's general bumpiness.  The new language of the topography of the Big Bang is what the map allows, and where its beauty lies.

It's amazingly satisfying as an image that allows us to peer so far back in time and try to grasp what it is we see before us on its surface.  The elegance of its detailed data on sources of heat allows us to feel, or sensorily apprehend, aspects like the "lumpiness" and lack of uniformity of the early universe, and to start to grasp what an odd terrain of matter it was.





It’s also a way that the distribution of matter, in its untouched, pristine state, returns to us even in the age of the anthropocene.


Filed under Background Radiation, Big Bang, Cosmic Background Radiation, early universe, Mapping the Big Bang, Microwave Radiation

How Do You Map Your Meat?

The elegance of mapping the sectoring of an animal's carcass onto the form of a living creature is particularly jarring.  It implies a clear category confusion that might be described as deeply unheimlich, or uncanny, in the double existence of the cow–and of all farmed animals–that it creates for the viewer:  in such a carefully segmented diagram, we are invited to make a passage from the living to the dead animal, a doubling of the animal that is portrayed in its domesticated environment by focussing on alternate aspects of our relation to it.  As viewers, we're invited to accept the sectorization of an image of an animal as the surrogate for the actual division of its parts.  The fine dotted red lines that vectorize each region according to its musculature run against the rippling of the individual cow's skin, and seem a striking way to translate the living animal through an artisanal tradition of apportioning an organism into cuts of meat.  The marginal image of the butchered cadaver of a cow reminds viewers of the eventual division of the animal body for its human consumers, while a microscopic view of coliform bacteria reveals the dangers if the sanitary procedures of such a translation are not carefully followed.

The portioning of what appears a living body reveals a sort of doubling that the alchemy of maps is particularly suited to perform.  For the farm animal that seems to have paused while grazing is transformed, for the eye of the learned viewer, into a carcass: the guidelines for portioning the meat of the animal, before its slaughtering, are something of a championing of the dexterity of such division as an art akin to the technical skill of drawing maps–even if the art of butchery is, in general, judged far removed from the art or technical skills of drafting accurate maps, the orientation to the culinary arts served as a translation of the living animal to an edible form.  The tradition of apportioning meat, and of carving it after it is cooked but before it is placed on the table, goes back to the Renaissance; a complex condensation of the civilizing process seems to be projected onto the process of dividing the cow's meat in preparation for the table, in ways we're starting to recover.  Unlike images from the same treatise that are more anatomical in the copious descriptive detail they devote to the sectioning of pork parts–




It is almost as if the doubling of the animal is removed from the objectivity of a medicalized image, and is rather a field that moves from the three dimensions of a living animal's body to a flat surface.

But the skill of dividing farmed animals and of recognizing individual cuts of meat suggests a transformative remapping of meat:  the subjects of good animal husbandry are rendered into regular configurations of cuts after they are slaughtered, in a metamorphosis of meat almost as important as the distinction between raw and cooked.  Indeed, the instruction in accurate cuts of beef was invested with a geometric regularity by Hylas de Puytorac–who would win a certificate of agricultural merit from the state which Jules Ferry had created–in diagrams used to teach readers the nature of the parts of the animals we eat.

The award of merit de Puytorac won reflected his presentation of butchery as a moral message instructing readers to eat in the most healthy manner.  His diagrams create not only a condensation of the civilizing process, but a confluence of a Cartesian sensibility with bourgeois lifestyle and attitudes toward food–they not only translate between the living and the dead, in evidence of the uncanny, but effectively bring the cuts of meat from the butcher's stall to the houses in which they are served.




The elegantly designed image de Puytorac used to imagine the transformation of cow to carcass was a question of domestic economy circa 1920, preparing the animal by distilling the principles of meat division as if they were naturalized in dotted lines atop the skins of living animals.  Long before they became suitable art hangings, charming for the endearing way they represent the meat cuts that entered kitchens or butcher shops, their pedagogic clarity directly translated the bodies of living animals to meat, assigning the living animal the names of meat cuts without needing to convert it to a carcass.

Hylas de Puytorac's elegant line is far removed from the gross familiarity with which bovine animals seem to gaze right into the eyes of customers at some urban burger joints, where they are presented as if emblems of the high quality of the meats used–by virtue, perhaps, of their heft–without any sense of the careful portioning of meat cuts that de Puytorac so prized.

Cow outside Burger Joint.png

Plastic Cow.png

To be sure, there is something unseemly in the failure to draw a boundary between the farm animal whose carcass provides markets with meat and the preparation of food.  For de Puytorac, the naturalization of meat cuts followed a crystal-clear logic, echoed in those crisply defined dotted lines, which almost elided the technical skill of slaughtering and butchering by which the sheep was made lamb and the cow made beef:  the subject of each "map" was the translation of animal to meat, clearly subtitled "the animals that we eat," but the cuts of butchery were replaced by a sanitized map for consideration by the educated or informed public, as if this domesticated and civilized the very process of describing, cutting, and consuming cuts of meat.


Acts of butchery were omitted from the elegant mapping of edible meat because the map instructed viewers in a new relation to the body of the farm animal, even if they lived far from the farm and probably weren't yet familiar with the butcher's stall or the laboratory of the kitchen.  But the rise of meat portioning, recently returned to artisanal butcher shops as well as marketplaces and abattoirs, has led to an increased interest in re-mapping the cuts of meat with an aesthetic elegance that the mass-market of food production had long forgotten.  As the mapped body of meat returns to local butcher shops that gain an aggressive economic presence in select metropolitan areas–where a revisionist art of butchery acquires the new appeal of reintroducing the varieties of beef, lamb, or pork made edible by the linguistic transformation of the living animal into a carcass, and by the mapping of a being's edible elements–one detects not only an aesthetics of the 'whole animal' movement–

Sanagan's Meat Locker, Toronto CA/Letters in Ink

–but the embrace of butchery, cookery, and meat-consumption as a valued aesthetic that has led to the revival of such once-antiquated maps of meat in American and European visual cultures.  Indeed, the linguistic championing of the art of butchery emphasizes the learned, transmitted intellectual status of the names of meat cuts that frame the image of the animal looking straight into the customer's eye at Sanagan's Meat Locker in Toronto's Kensington Market, championing a first-hand familiarity with meat.


Meat Names Sanagans.JPG

The cultural transmission of adept skills of meat-carving is found, in other words, not only at the butcher shop but on the drafting table:  as much as the whole-animal ethos has increased consciousness of the artisanal skills of portioning freshly butchered meat among a new generation of hipster butchers, the division of the animal body was defined in increasingly elegant diagrams of butchery that echo the skills of discrimination encouraged at the dining tables of courts as well as the domestic dining tables of the mid-nineteenth century.  New modes of mapping meat were drawn on the forms of living animals, and widely diffused in detailed engravings that delineated meat cuts with the objectivity of an anatomical diagram–but that maintained the illusion that steers were divided for the table directly from nature, or from the farm, so that "Le Boeuf" stands, hooves planted firmly on the ground, gazing dully ahead as the cuts into which his body will be divided are inscribed on him, discrete cuts to be distinguished by their ratios of taste, toughness, fat, and flesh.


The graphic sectorization of the animal "body" transformed the slaughtered carcass not only into butchered meat, but into a gastronomic culture of increasing and considerable sophistication.  The portioning of meat and the cutting of the cooked body map onto a signifier of socioeconomic class, marking the transformation of the animal body into a recognizable and elegantly edible product.  One might continue the metaphor–or dine out on it–as more than a convenient or apt figure of speech.  Much as mapping is a practice of imposing clear configuration on space to codify spatial relations in a recognized form, the mapping of the cooked and the slaughtered carcass transformed the natural boundaries of the well-husbanded steer or other animal into shapes that we invest with meaning and naturalize by their own geometry, and that were easily renamed in works of popular education–works that might be traced back to Charles Dressiens' hopes to ameliorate the lives of Frenchmen by "addressing their stomachs," codifying "une science de ménage" as precepts of "éducation ménagère."

Vismara_le boeuf_small.jpg

The domestic economy of middle class homes placed a strong emphasis on elegant cutting of cooked meats.  “One of the most important acquisitions in the routine of daily life is the ability to carve well,” advised the 1852 Illustrated London Cookery Book somewhat sanctimoniously; even if “the modes now adopted of sending meats, etc. to table are fast banishing the necessity for promiscuous carving from the elegantly served boards of the wealthy,” it continued, “in circles of middle life . . .  the utility of a skill in the use of a carving knife is sufficiently obvious.”  The accomplished decorum of severing joints, carving birds, and the dexterity of manipulating knife and fork garnered spousal approval and admiration, evidencing an ability to divide meat that designated class differences.


"Carving presents no difficulties; it requires simply knowledge," Frederick Bishop continued to tell readers.  Expertise is, for Bishop, simply a question of good decorum and tasteful bodily comportment.  "All displays of exertion or violence are in very bad taste; for, if not proved an evidence of the want of ability on the part of the carver, they present a very strong testimony of the toughness of a joint or the more than full age of a bird: in both cases they should be avoided.  A good knife of moderate size, sufficient length of handle, and very sharp, is requisite; for a lady it should be light, and smaller than that used by gentlemen. Fowls are very easily carved, and joints, such as loins, breasts, fore-quarters, etc, the butcher-should have strict injunctions to separate the joints well."

The transformation, or passage, of animal carcass into meat suitable for preparation, and the linguistic conversion by which meat cuts are indicated as distinct from an animal, is an ethical question of renaming, but also a deeply cultural process rather than only a mapping of animal parts.  If all mapping is something of a conversion of nature into culture–and a creation of place as a known identity, able to exist as a set of coordinates as well as to be recognized in one's mind–the mapping of meat is more than a transformation of raw to cooked:  it is a once-complex process of rendering meat subject to and fit for human consumption, combining the arts of gastronomy and butchery far more than simple anatomy.  The language of mapping meat hides the work and presence of the butcher and slaughterer, as well as of the cook by whom the tender morsels are prepared, as clear linear divisions were imposed on the steer that was transformed into beef, ready to arrive in the stewing pots illustrated above the animal, or cut into pieces ready for consumption.


Far from only employing a sophisticated language, the mapping of meat is something like a deeply historical and cultural sedimentation of rites of renaming what was once alive in ways that entered local food cultures and prescribed models for the preparation of food that seem eerily akin to recipes.

But the decorum of separating cuts of meat, or meat apportionment, has a long history, and reveals a cultural form of mapping–and the artifice of mapping accurately.  I begin with such a polite and decorous image since I'm moving toward some diagrams of sectioning prime cuts that focus on the separation of cattle into meat cuts–maps that similarly separate the division of animals' bodies by butchers and convert what was a body into portions of edible meat.  Although Nicola poetically described on "Edible Geography" "the sculptural discovery of secret shapes within the familiar architecture of an animal," mapping the carcass is not only a process of unpacking, or of revealing, but a transcription as well as a form of translation of the body of the animal into the provision of cuts of meat–a renaming of body parts as forms of meat.  The transcription converts embodied form to table, dismembering the body by parsing the cow's carcass into prime cuts for the eyes of the chef.  The process of extracting individual cuts of meat from the body, and renaming them, is the ultimate denaturalization, or repackaging of meat cuts for the marketplace–as the unwanted head, horns, ears, and hooves are discarded, not destined for consumption.  And although the map suggests proximity to the steer, few folks who read the image would have first-hand relations with the carcass, but rather a naming of regular configurations once seen in the butcher shop.

everythingbeefflat-careful division

How did this division come to be codified?  As much as a record of how we bring meat to our table, it is a sort of map of how we ingest our meat, and deserves to be examined as such.  Rendering cows as canvasses for maps, and imagining the coat of a European Holstein cow as bearing a "natural" image of the world, may universalize the breed, but it is a pleasant fantasy, neatly naturalizing a global projection by a clever photoshop.


The photoshopped cow was clearly painted by stencil, but we've long mapped the cattle varieties specific to regions–as the Holsteins of France, which seem naturalized by region akin to a map of cheeses or wines, locating different breeds of cattle as if indigenous to the different provinces and landscapes of France.  But rather than deriving from specific regions or terroirs that distinguish the different qualities of wine–perhaps embodied in their local mineralogy, acidity, flavor, and earthiness–the mapping of meat is a more profound conversion of the natural into a cultural product, and indeed an illustration of the mastery over nature of the sort that finds expression in a map.


To be sure, there is considerable defense of the healthful or patriotic properties of “meat” that various nations produce.

100% British.pngTesco Advertisement

But mapping meat into prime cuts acts more like a practical map of cuts, distancing bovine forms by labeling, converting the limbs beneath the skin into an ownable set of parts, and taking possession of them by some alchemy as cuts, renaming the animal as the edible.  The relabelling of the animal indeed turns it into the consumed:  the artistry of the artisan translates the animal into anonymous cuts of meat that can be recognized as discrete.  In this butcher diagram–or butchery map–the cow becomes the territory, removed from its location and subject to division into brightly colored prime cuts with far less specific local knowledge of the neck, cheek, and tongue:

beefcuts400 clear

This process of translation, and of unpacking distinct cuts, is already familiar in the below enumeration of prime cuts, respecting a basic division into chuck, rib, loin, and round, and brisket, plate, flank, and shank, and hinting at the deeper cultural division of mapping styles, even if the act of butchering has been separated from the meat shop:

Cuts-of-Beef 1922 Jane Eayre Fryer

The point is restated at Sanagan's, as well as at the Harlem Shambles, by openly distancing butchery from the slaughterhouse:


This is not so much a mapping of nature as a culturally specific labeling of body parts.  For the naming of meat cuts serves as a way of processing the formerly live cow into discrete areas that arrive in the kitchen or at the chef's table–a distinct language by which to name regions in relation to distinct styles of meal preparation, and indeed the forms in which they might best be consumed.  The idea is less to better know the topography of the steer's divisions than to mark the progression of the steer's body through the slaughterhouse and to convert it into a new lexicon as it moves toward the preparation of food.  The division of the steer's body into many sectors is not merely about labelling, but about naturalizing the division by which the body of the steer is transformed into meat.

Beef Cut Chart.png

Yet the work of translation in such diagrams follows a distinct set of practices–or so goes the hypothesis of this post–as the steer's body and musculature meet distinct markets for meat cuts, in ways mediated by technologies of dividing as well as by the art of butchery.  At one extreme, the best-known divergent division of the cattle carcass is perhaps that mandated by kosher laws of food preparation, whose expert divisions by trained slaughterers do not simply follow bodily sectors–limiting edible meat to the forequarters.  As much as a distinct form of meat-preparation, the sanctioned division of the steer seems to reflect a degree of politesse in its avoidance of the rear quarters of the animal–refusing to acknowledge the flank, rump, or sirloin cut, and sanctioning only the central cavity up to the thirteenth rib, often stopping at the twelfth.  (In Israel, the rear leg is considered kasher, and trained slaughterers take the time to remove the sciatic nerve, leading butchers to sell meat from the leg as kosher and to maximize the amount of meat available–even as they disdain and discard the "non-kosher" nether-quarters of the animal and its head):

primal-cuts-beef.jpgPrimal Cuts

kosher:non-koser -food

The division of the animal results in the celebration of the slow-roasted or braised brisket for the ritual meal.  The quite exact mapping of the sanctioned cuts of meat up to the twelfth rib is distinct from the cuts of meat sold by kosher butchers–from Square Roast to French Roast to Ribs and the Deckle, or the fore shank–but all are separate from the cuts sold off for non-kosher consumption–

kosher beef diagramcr1.jpg

The sectioning of an animal does not simply follow the form of the body, however, or the musculature of the steer:  different cultural styles of butchery may be geographically distinguished within the variety of ways meat is mapped in established diagrams–a rich subject for research that reveals cultural divisions in the preparation of the animal–and sectioning does not only reflect sanctioning.

In Israel, the division of cattle can indeed be far more complex: trained shochets divide the steer's body not only into brisket, shank, ribs, rump, and the like, but prize the delicate tenderloin, reflecting what seems at first glance an old-school style of butchery, no doubt imported from Europe, irrespective of the laws of kashrut:


Scots butcher-cuts are similarly refined and complex, as if suggesting a professional transmission among artisans or animal slaughterers, emphasizing the neck and clod, and different cuts around the round, leaving oddly absent and unnamed the often esteemed tenderloin:

Scotch beef cuts

The no doubt similarly complex division of the cow in Mexico suggests a distinctly artisanal culture of meat division, oddly similar to that in Israel in its emphasis on individual specialized cuts, each destined for a different preparation, in a considerably complex practice of apportionment and distinction of the specific value of meat cuts:


In Greek meat markets, the culture of distinct uses for meat cuts animates butchery around an even more comprehensive ‘whole-animal’ culture of meat consumption:


Wholesale meat sellers at farmers’ markets have given rise to a new sense of the wholesomeness of whole-animal consumption, including the pig’s feet and jowl, as well as the “picnic.”

Wholesale meat cuts

But pride of place in the complexity of artisanal meat cuts must go to the Austrian butchers, whose care in carving carcasses is perhaps a legacy of the Hapsburg court:  dividing the steer’s carcass into some 65 distinct cuts, from the Rostbraten to the Tafelspitz and Waldschinkin, suggests the survival of local ingenuity and refined taste.  Rather than informed by a unique whole-animal ethos, the sophisticated division seems oriented to the distinct preparations different parts of the steer were due in the old empire, and perhaps to the difficulty any but the best butcher will have in locating the desired filet:


The entire denatured and bisected carcass of the steer that arrives from the slaughterhouse is partitioned into distinct portions, often destined for different dishes, betraying a distinct level of refinement in both traditional techniques of meat preparation and the codification of a high local level of butchery skills:


Austrian meat cuts


Of course, the greater simplicity of bisecting the cow’s body and dividing it into quarters is, in many ways, an American invention of which we can be proud:  it is a reflection of the rise of industrial butchery, which processes multiple carcasses along linear saw-lines to create divisions and cross-sections–and of a consequent decline in the taste for specialized cuts of meat.  The shop for buying meat has long since migrated from the site of slaughtering–although there has been a return in some communities to fresh meats.


Slaughtered Daily


The most current divisions of meat preparation have tended toward overtly linear cuts and schematization, mapping cuts readily performed by sawing bovine bodies fresh from the slaughterhouses to showcase “chuck” and “round” as generic qualities of ground meat–as much as cuts–perhaps made more distinct by gradations of marbling than by specific bodily origin:




Or consider an alarmingly denatured schematic image of a side of contemporary “retail beef,” removed from a steer and ready for shrink-wrapped packaging at your local Costco–here the schematic rendering of the steer seems cut by a cleaver, rather than with attention to recipes, as if the cuts would break apart as so many quadrants of a chocolate bar:


Retail Beef Cuts and Recommended Cooking Methods

The meat-packing industry has developed its own form of mapping, easily transferred from veal to beef in ways that render the butchered animal ready to eat:

Beef/Veal: Ready to Eat

A near-identical imagery survives at Costco and in many food-service outlets.  But the range of meat cuts that one associates with the artisanal has made quite a comeback in certain niche markets recently, where the elegance of butchery as a form of sectioning has returned, less often perhaps for the immediate consumption of prepared meats than for the careful preparation of meats in restaurants:

Beef Cuts for Foodservices

But the disembodied icon of denatured cuts is far less disturbing than the brightly packaged glistening cuts stacked in freezer bins in supermarket meat sections, value packs of USDA Choice that serve as opulent illustrations of plenty for consumers unlikely to ever undertake butchering skills.

Value Packs

–or than those recognizable cuts, stacked in rows beside fake plastic grass in butcher windows, fake grass whose presence itself suggests the meat’s remove from the scenes of butchery but oddly conjures the fields where the slaughtered cows presumably once pastured.

There seems to be far less of a lexicon for differentiating meat cuts the more they are denatured from the animal, familiar only as varieties of steaks:

USDA cuts


Removed from the active division of the steer’s body, such meat cuts appear as if shorn of the animal, in the plastic packaging in which one might meet them in the refrigeration section of a supermarket, or beneath butcher glass.


Yet to recall the once-keen interest in distinguishing diverse cuts of meat, take the time to compare the elegant distinctions for dividing beef clarified in a later nineteenth-century image of the range of cuts by which those technically adept could elegantly carve a cow’s carcass, around its neck and shoulders, corresponding to the bull’s musculature, so as to refine desirable neck and chuck meat–cuts that convey the aura of a mustachioed, blade-sharpening, daintily aproned, overweight butcher sipping wine, if not the wisdom that the wonderfully didactic educator de Puytorac would pass on to good raisers of livestock in the later nineteenth century:




Dividing the animal is the clear precursor to eating one.  For Hylas de Puytorac, Chevalier du Mérite agricole, images such as “Le Boeuf” were destined more for schoolkids than for butchers; primarily didactic in nature, they sought to preserve an agricultural knowledge in danger of disappearance.


Tableaux demonstratifs


The multiple images he carefully engraved of pigs, cows, and other animals reflect the basic prime cuts and suggest a tripartite sectioning of each half of the cow, but are elided with a naturalistic rendering of the bull whose musculature he would have recognized–an appreciation of embodied animality not so distant from the art of butchery.  Cuts are clearly subdivided by muscle groups, in ways that recall the deeply artisanal skill set of butchery.  The division of the body of cattle is far more refined in this 1852 engraving of the over forty available cuts on the bullock:


A Bullock marked as cut into joints by the Butcher (1852)


A 1922 division of the cow for butchering is similarly detailed; although its portioning is far less refined, it similarly covers the whole animal, dividing the brisket, as well as the shank, into multiple cuts in ways that suggest a process of repackaging bordering on an almost Tayloristic schematization:



Cuts of Beef (1922), Jane Eayre Fryer



Much modern meat-portioning lacks so much of a map, even in its more refined images, as it privileges “fine” cuts, sadly; the division of animals is performed by saws, and only rarely is the butchery of the entire animal actually on view:





The generic division into sectors seems far more readily sawed for repackaging–and the resulting cuts, disturbingly, somehow acquire their greatest color and tactile proximity only after being segmented:




This occurs in ways that, unhealthily to me, threaten to elide in perpetuity the distinction between cow and beef for customers, and to dismiss as inedible a good amount of bones (and meat) that seem too reminiscent of their bovine origins–a transformation elegantly, if somehow quite inappropriately, condensed in this remapping of the live cow to a carcass in an image of meat cuts from the Encyclopedia Britannica:


cow to cuts


Filed under animals, food history, gastronomy, meat, meat diagrams

Mapping Gun Violence versus Gun Ownership

When State Senator Audrey Gibson of Florida’s Duval County introduced Bill 1678, mandating completion of a two-hour anger-management course before the purchase of a firearm–a course to be retaken every ten years–a flurry of consternation and protest broke loose.  Never mind that the course can be taken online and lasts only two hours:  it was seen as an infringement on sacrosanct individual rights, independent of the traditions of government designed to protect the common good–asserting, with a dogmatism of faith, the protection of individual rights in ways that seem particularly corrosive to the ideal of a representative democracy.

How did this come about, and is there a distinct geography in which liberties of gun ownership are more fiercely protected and agitated for in place of equal protection for citizens?  Such a demand feeds both the elevation of “rights” as an extension of the absence of constraints on individuals and the expression of a politics of true sincerity and purity, based on the protection of the individual against government oversight in any form and on a conviction that government policies must be scrutinized for their infringement on a purely individual concept of liberty.  The misconstrual of gun ownership as an individual right within the Bill of Rights, one that cannot be overridden or revised by state or local government, rests on an interpretation of the Second Amendment long advocated by the NRA and affirmed in District of Columbia v. Heller (2008), which recognized an individual right to own guns within one’s home and on one’s person–as if it were implicit in the framers’ assertion that a “well regulated Militia, being necessary to the [collective] security of a free State, the right of people to keep and bear Arms, shall not be infringed.”   The majority opinion that the thirteen words beginning the Second Amendment constitute merely “prefatory” matter, rather than qualifying the terms of and reasons for the defense of gun ownership, has enshrined a perversely popular interpretation, long sought by the National Rifle Association, that protects individual possession of guns even in the face of a near-epidemic of mass shootings in the United States.

It is not a coincidence that the geography of such claims matches the melding of a dogma of liberty with a fear of the usurpation of rights, and a need to protect such rights from below–by bringing politics directly to “the people,” their true source of expression–in ways rooted in faith rather than in a model of inclusive citizenship or a body of open debate.  It is in specific geographic corridors that gun culture is not only greater, but a “healthy” relation to guns is asserted to exist–despite some significant evidence to the contrary.

In Florida, Gibson’s bill was quickly labeled not only an infringement of (“God”-given) inalienable rights, but “the stupidest thing I have ever heard” by the owner of Jacksonville’s St. Nicholas Gun and Sporting Goods Store, who might be more worried it would hamper sales than lead to a black market in firearms (which would further undercut his business, I suppose).  The serious abuse of the Second Amendment across the country rests in part on how the right to a piece is believed natural.  Indeed, the attempt to align it with populist claims hostile to pluralism, dissent, or liberal traditions is not only parasitical on a tradition of representative democracy, but works against the dispersion of power in a state based on representational democracy:  based on an intentionally polarizing discourse, the interpretation of democracy on which it turns–the trumpeting of individual rights over those of the collective–may be mapped more clearly than one would imagine by normalizing it as opinion and merely placing it at one end of a political spectrum.

Recall the sort of strained logic by which the firearms of abusive spouses are legally protected even after threats to employ them as instruments of attack or violent murder, despite the issuance of restraining orders, civil protection orders, and affidavits attesting to substantial fears of abuse.  “Rights” to bear arms are meanwhile championed as if they were inalienable, and indeed in need of protection from the “insult” of a requisite two-hour anger-management course.  If the intent was to encourage introspection, the course has been taken as fighting words.  While we can both learn about the bill and track it here, we can see a broader set of trends across the country in a Google map served up by Mother Jones that starkly maps the state of our nation, using light brown to designate states in which gun-related deaths already exceed traffic-related fatalities, and a dismal darker brown to designate states where the number of gun-related suicides exceeds the number of deaths caused by automobile accidents:


Car Fatalities v. Deaths by Guns


It’s no secret that the United States leads the world in gun ownership, and that a Gallup poll puts the percentage of Americans owning guns at a massive 34%.  Al Jazeera compiled this map of the less clear correlation between gun ownership and gun deaths, which I suppose reveals the results of local densities of firearms:




Tempting as it is to map this onto a red state/blue state divide and a familiar choropleth map, such a supposition would depend on universal voter registration–which is far from the case.  A more provocative map might use the same statistics to construct a set of maps of the relative deadliness of individual states’ “gun cultures.”  The notion that each state has its own culture of violence is clearly itself a fiction, perpetuated in large part by pollsters, choroplethic mappers, or demographic ingenuity, which oddly erases regional difference or urban specificity–and discards economic variations in favor of a viewable poll that clearly, consciously recalls electoral data.  But the variations could tell us a lot about the universality of gun violence in regional terms, beyond a simple mapping onto zip codes, which predictably intensify in urban areas–like the recent interactive map published by the White Plains Journal News for Westchester and Rockland counties, which created a mini-controversy as an invasion of personal privacy by internet vigilantes who decried its tactics of intimidation rather than news–as if someone would chase down these poor firearm owners.




That site was quickly pulled, of course, though the interactive properties of such a dense map of gun ownership are hardly such a gross invasion of privacy, since the guns are registered in legal databases.  Yet law-abiding gun owners like Keisha Sutton felt that the public map exposed “me, my family, my friends, and others at risk,” presumably from the violent gun-control crowd, and others argued it would increase underground black-market gun sales:  look at the huge number of online responses that the article generated.  It is of course not a problem to post multiple locator maps of your local gun retailers, should you want to purchase one.  (It would be interesting to map these sites’ internet use, of course.)

But let’s return to the question of regional variations that these folks have mapped.  Questions of national variation are considerably complex.  If we start by mapping the number of guns owned as a percentage of each state’s population,




we can get a little more statistically refined as mappers, distinguishing rates of gun and firearm ownership in relation to individual states’ populations:




We can hypothesize an index of the oxymoron of a local “healthy gun culture” that, argues one website, might be a better index of personal or individual safety from guns–and which stands in something like direct inverse relation to the sites of the greatest levels of gun ownership or numbers of guns in circulation:
Color Map of State Gun Cultures


But if we apply what might be called critical thinking, or just comparison:  although the deadly gun culture in Nevada is revealed in the above comparison of the balance of gun-related deaths to traffic accidents, some of the states with “healthy” gun cultures, to use that tragically oxymoronic term–like Oregon, Colorado, or Utah–turn out to be the very places where an anger-management course might have been just the thing to save some lives.  (Let’s table for now the related question of why Colorado is plagued by such a high number of gun-related suicides–or the fact that 75% of gun-related deaths in the state are due to suicide–or the impact that announcing restrictions on firearm ownership might create.)  That at least offers convincing cases where it might actually have been a very good idea to keep guns out of some people’s hands.

The defeat of any of the proposed limitations on the purchase or ownership of guns in this country–whether screening purchases through background checks, checking for mental illness, or restricting online gun shows–turns a blind eye to these maps, insisting that the prevalence of guns in our schools and culture is not a problem–and even that owning firearms can be cast as the protection of a liberty.  Not only is the Senate in “the gun lobby’s grip,” as Gabrielle Giffords put it, but the country and media seem to have turned a blind eye to their responsibilities to regulate access to guns.  Indeed, the notion that the government could even release any information to map gun ownership was wholeheartedly rejected by Senators, who, in response to a request from the Republican senator from Wyoming, penalized local governments for releasing any public registry of gun ownership.  Indeed, the rush to get gun licenses and the sharp increase in weapon sales in response to the consideration of tighter gun-control laws came about “not just because President Obama and his administration are hell-bent on introducing some form of worse-than-useless gun control in the aftermath of the terrorist attack in San Bernardino CA,” as one put it, but because the government “cannot keep you safe.”  The notion that free access to semiautomatic rifles increases safety is so reflexive after major mass shootings in the United States that the demand for purchasing guns at stores like Walmart–currently the nation’s largest gun retailer–suggests the appeal of gun ownership as an assertion of responsibility.



United Artists

It is considerably scary, and incredible, that while convicted felons are prohibited by law from purchasing rounds of ammunition, no identification is required for purchasing gun ammunition.  Although the industry generated $3 billion in revenue, growing 8.9% from 2010 to 2015, the United States government’s Bureau of Alcohol, Tobacco and Firearms sees no easy correlation between the sale of ammunition alone and the growing gun problems that increasingly plague the nation.

In an age of increased data accumulation, data selling, and data compilation, this type of data–the data most clearly able to limit the public circulation and availability of firearms and guns–seems off-limits.  Looking at the lay of the land, could this really be safe?


Gun Owners' Map


The image is truly daunting.  A recent study from the Chicago Crime Lab suggests that rather than curtailing access to guns or their possession, preventing the public carrying of guns is the more critical deterrent to violence.  The study may have a valid point, but the question of where folks will gain access to guns–or exposure to a culture of guns–forces us back to the question of all those red dots in the map above.  If many gangs may change their gun-carrying behavior in response to police pressure against illegal gun carrying, can enforcement against carrying guns be sustained while respecting civil rights?  A study of illegal gun carrying indicates support for the potential effectiveness of this approach, but limiting the ability to procure guns is at the same time a surer restraint on the ability to carry them.

Along these lines, Senator Mark Leno has introduced several bills in California to confiscate guns that are illegally owned–estimated at 40,000–funded by licensing fees for firearms, as well as to introduce mandatory background checks.  While the legal owners of guns are not necessarily tied to those not in legal possession, the widespread possession and acquisition of firearms in the country increases the risk of their illegal circulation and makes it harder to monitor whose hands they enter.  This is a stab at better mapping the circulation, commerce, and traffic of firearms, if not their personal possession–and at mapping such possession onto the essentially populist claims of the protection of individual rights of gun ownership.


Leave a comment

Filed under data visualizations, datamaps, firearms in USA, Gun Control, gun-ownership, Gun-Related Deaths and Traffic-Related Deaths, Right to Bear Arms

Intoxicants! (Choose Your Poison)

Indigenous Intoxicants

“Indigenous” is a bit of a buzz-word, since not much now is.  Expanding the worthy cult of rediscovering the local while reminding us of its historical origins, this “Whole Foods”-style map of the wide world of intoxicants is an appreciation of diversity and a true big picture.  In its most recent issue, addressing the theme of Intoxication, Lapham’s Quarterly has backed a boggling collage of historical snippets of moments of intoxication past–Casanova’s night on the town; Stephen Crane on opium; Honoré de Balzac on the delights of coffee; or the Apple in Eden–with the dimension of space.  The map offers a nice complement to an image of the inhabited world, even if one you won’t see on the walls of elementary school classrooms soon:  is it that where there are inhabitants, there are media of intoxication, or that societies grow up around intoxicants?  (Although given teenagers’ habits of self-experimentation, perhaps it should be mandatory to post it in every US high school to encourage global awareness in a provocative DIY way.)

Intoxicants are a measure of sociability, at least.  Beer seems missing from the list, or diminished, in the face of Michael Jackson’s claim for the “perfectly reasonable academic theory that civilization began with beer” in his World Guide to Beer some years ago–a theory that brewer Dave Alexander of Brickseller Brewery summed up as “beer is probably the reason for civilization.”  Archeologist Brian Hayden of Simon Fraser University has both pursued and refined this argument by suggesting that the Neolithic domestication of cereals was largely for domestic brewing, linking beer to the “emergence of complex societies” and leading Charles Q. Choi to broadcast that “Beer lubricated civilization,” based on archeological evidence that maps beer onto the analysis of human remains found in the Nile delta.  (This is not only an argument in Canada.)

But these theories beg the big picture.  If beer is bread, let’s expand our basket of intoxicants to the cocktails that offer grounds for socialization beyond the six-pack in a site-specific map:  rather than a map of where you can go to get intoxicated, the above map takes a wider view, timed for St. Patrick’s Day, by amply recognizing the Mediterranean grape, honey, the barley of Mesopotamia, and palm wine, beside the grain and hops it calls indigenous to Europe.  It broadens our horizons by embracing the prickly poppy, mushroom, peyote, and beetroot, along with the glorious juniper berry and the Sonoran desert toad, which join cannabis, coca, and the kola nut to picture the origins of human sociability in a more variegated and broader landscape.  No doubt toads and prickly poppies weren’t as easily domesticated, not to mention Arctic club moss, but the big picture provides a nicely bucolic view of varied ecological habitats, as well as a new sort of level for what Italians have come to call Agriturismo, just in time for spring vacation.  It may give fieldwork a good name, even after Napoleon Chagnon took the dark-green slime dripping from the noses of hallucinogen-taking Yanomami as a sign of their state of perpetual warfare.

1 Comment

Filed under Beer, Brickseller Brewery, Cannabis, Coffee, Indigeneous Intoxicants, Lapham's Quarterly, Napoleon Chagnon, St. Patrick's Day, Uncategorized, Yanomani

Re-Mapping Terror

Going to Teheran

It’s hard not to admire the design of this poster advertising the coming talk of Flynt and Hillary Mann Leverett.  It’s no secret to readers of this blog that maps make great, cogent arguments–even a map without roads or national boundary lines, or cities or centers of inhabitation.  This map effectively uses the lay of the land to shift the pragmatics of mapping our relations to a country and region poisoned by political posturing, and to show how–in a world without clear boundaries–it makes no sense to continue to demonize Teheran as a site of terror or the prime site of nuclear threat.  (It’s not as if non-proliferation treaties are in vogue across the rest of the globe.)  After years of some pretty oppositional if not Manichean rhetoric, shifting the map, emptying it of the targeting of danger, is sane and salutary and quite a relief.

It is a counter-map to the militarized maps that distort actual geography, whose cold-war paranoiac cartography imposes the ranges of missiles, arrayed as geometric concentric rings, on the land, designed to provoke panicked visions and activate one’s amygdala.

Map: Iranian Missile Range

If we retain a memory map of the 44 active bases of the United States Army that surround Iran in eleven surrounding countries, most of which share borders with Iran, we could just as easily offer a different and equally compelling map of fear.

US Army Bases around Iran

The all-too-familiar military map below presents an even more haunting spectre of a military scenario all too easy to map.  (The German text might underscore the active belligerence of the red missiles streaking from Tel Aviv toward Fordo, Arak, and Isfahan, or the red Israeli Dolphin U-boats floating in the Arabian Sea.)  The provocatively elevated anti-aircraft guns noted in black in Iran, unsurprisingly, map the seat of militarism and violence, naturalizing its military threat in the landscape of the Middle East.


It’s no surprise that we need to create another and radically different map of the region, and to do so by a counter-map to aggression.  The swirls of yellow between the Persian Gulf, the Gulf of Oman, and the Caspian Sea conjure the muddiness of our maps of the region, and the benefits of, and need for, remapping its place in our spatial imaginary.

1 Comment

Filed under Iran, Map of Fear, News Maps

The Ancient Glaciers of F. E. Matthes’ Cartographical Sublime

Matthes' Map of the Ancient Glaciers of Yosemite Region


François Emile Matthes’ stunning survey of the ancient glaciers of Yosemite was his first geological investigation.  It built on questions raised about the origins of the Valley’s unique form in repeated surveys of its striking topography, and on renewed attention to the role of glaciers, winds, and water in its formation.  It drew on the USGS survey organized by George Montague Wheeler, whose large-scale topographic map of the region immediately preceded the geodetic survey begun in July and August of 1890 to link the region to the transcontinental surveys by means of astronomical observations, and on the steady progress in refining maps of the back-country Sierras since the 1880s.  Wheeler’s men in fact mounted a twenty-inch theodolite on a concrete pier atop Mount Conness, dragging it up by horseback and donkey, to be housed in a small observatory tethered to a rocky peak by sixteen separate twisted wire cables.  The subsequent Sierra Club map of the Yosemite and Hetch Hetchy valleys, begun in the 1890s by U.S. Army officers and finished in 1909, just after Matthes began work, used short seasons to map the Valley as best its makers could–R. B. Marshall and H.E.L. Fussier surveying the Yosemite, Dardanelles, and Mt. Lyell sheets, and two colleagues the Bridgeport Triangle–but faced multiple obstacles in coordinating a completed survey of this hitherto hidden region of the Sierras.

Matthes’ deep appreciation of cartography not only as a survey of the land, but as “an interpretative and synthetic art,” as he often maintained, that depended on “intelligent insight into the nature of land forms” as much as on the accuracy of delineation, suggests how he realized that good mapping was an interpretation based on wielding the line as a tool for excavating a deep understanding of a terrain.  Unlike a topographic relief map, Matthes’ map of the valley’s ancient glaciers charted the progress of the glaciers that defined the Valley by arrows suggesting their ancient progress and movement, viewable in the above map from the National Park Service, delineating how the movements of the Hoffman, Merced, Tenaya, and Yosemite glaciers shaped the valley’s waterfalls and mountains.



Matthes delineated the region’s complex geomorphology with an eye trained to synthesize observations of how glacial progress sculpted and shaped the land, as much as to craft a scientific image that asserted authority in a rhetoric of objectivity.  The historical depth of Matthes’ work sustained his attention to the Valley floor as a geomorphological investigation, distinguishing it from any map of that part of the Sierras made since.  Matthes’ map used new surveying techniques of registering elevations, which he had studied at MIT and wanted to perfect by shading.  His learned cartography approached surveying from a distinctly academic perspective, posing new questions about the terrain as much as mapping its trails, hidden lakes, and the shifting topography of a region then under snow for over half the year, displaying the drama of the landscape in a cartographical form that recalls his own intensive observation of mountainous Alpine topography in Switzerland, nourished in America by studies of his beloved White Mountains and the Grand Canyon.


At the age of eight, François and his twin brother George contracted malaria in their canal-side residence in Amsterdam, where their father had directed “Natura Artis Magistra,” Amsterdam’s zoological gardens.  They belonged to a merchant family that had furnished rubber for the first transatlantic cable.  Just before their tenth birthday, the family doctor prescribed a multi-year stay in the Alps for the twins, far from malarial flies, which gave François both a visual interest in alpine topography and a close familiarity with map use:  when they lived on a mountain overlooking Lake Geneva in 1885-6, in summer months they joined their father to climb mountains using “cloth-mounted military maps (General Dufour’s series) which showed all triangulation stations” that his father brought from Amsterdam–maps which lacked contours but which he remembered as “beautifully finished with hatchures, which brought out the relief in great detail,” much in the manner to which Matthes would later aspire.

His early exposure to maps was somewhat legendary, and perhaps mythologized.  Their father taught him and his twin how to read these maps, leaving them to wander with the maps “without fear of getting lost,” Matthes remembered, and the family regularly returned to the Alps, visiting Chamonix to scale Mont Blanc and its glacial formations on summer expeditions.  Both studied technical drawing from an early age, admiring Frederick Remington’s art and other images of nature, and after studying technical drafting at a German Ober-Realschule in Frankfurt am Main, travelled to America with their family, where François studied at MIT and was active in the Agassiz Association at Boston’s Museum of Natural History.  Influenced by a professor to work with the US Coast and Geodetic Survey, he started surveying in 1895 in Rutland, Vermont, then in the Indian territories and the Grand Canyon, pioneering a telescopic alidade to sketch the form lines and contour lines that reflected alpine glaciation.  Matthes’ survey of 500 square miles of the upper Grand Canyon provided a basis for “perfecting the extreme test of the efficiency of that instrument,” and he petitioned to construct the map at the unheard-of USGS scale of 1:48,000, with 50-foot contour intervals, following his own field map, to reveal its complexity.

And so he arrived in the Yosemite Valley, to which he would devote two seasons of fieldwork.  He interrupted the survey begun in 1902 to travel to the Valley the following June, impressed with the "overwhelming" view from Inspiration Point that he would later photograph: he returned after spending the winter in postgraduate studies in geomorphology at Harvard, studies that informed the dense observations compiled in the map engraved in July 1907.

Yosemite Valley from Wawona Road

The scientific artifice of such unprecedented large-scale maps documented sites of nature like the Yosemite Valley and the Grand Canyon as they first became known, mediating the scientific record through which their nature was apprehended with wonder.

Matthes' desire to chart the genesis of the Valley's landforms led him not only to invest detailed attention in the 1906 map of the Yosemite Valley, but to view its compilation as a study of glacial geomorphology, paired with loving black and white photographs.  "Natura Artis Magistra": before Carleton Watkins' photographs of the Valley, or the later black and white images of Ansel Adams, or John Muir's inspirational 1912 The Yosemite, Matthes' scientific atlas of glacial paths constituted a model of the artifice of recording nature from which others would later depart.


There are many narratives of the valley's mapping.  Matthes' topographic map constitutes at least two narratives, both loving: one of the scientific practices of observation, sighting, and measurement, and, more importantly and compellingly, a deeply historical one about the glacial formation of the Valley itself.  Matthes' close attention to the Valley that he loved, and to the shape of the contours excavated by glacial drifts over time, is sublimated in the detailed maps he drafted to track glacial courses, which showed how "glaciers take advantage, rather, of the fractures already existing in the rock—the joints by which [it] is divided into natural blocks and slabs," carrying objects stuck to the frozen ice: "shod with coarse rock waste frozen in their basal layers," glaciers had such a "strong frictional hold on their beds" that "as they move forward, though at a rate of only an inch or two a day, they dislodge and drag forth entire blocks and slabs."  The natural sculpturing of the Valley was tracked, as if in stop-motion photography, in the small arrows that punctuated his topographic rendering of the floor.

Matthes, trained in Germany, used his first geographic assignment to draft the first accurate topographic map of the Valley sheet in hopes of charting the path of glacial progress across the region.  A scholar of glacial geomorphology, Matthes mapped "the Incomparable Valley" at an unprecedented scale of 1:24,000, or an inch to 2,000 feet.  The map appears to be a relief map of the Valley's topography, but it records the topographic tracks of glacial flows and provides keys to trace the courses of glacial movement that refined the Valley's form and substantiated his theories of its formation.  His attempt to investigate the origins of the Valley, disputed since the 1860s, resulted in a painstaking account of how the "process whereby glaciers excavate to best effect in hard rocks is by plucking, or 'quarrying' entire blocks and slabs," and a map of the process by which glaciers of up to 3,000 feet in depth exerted pressures of some thirty tons per square foot: "shod with coarse rock waste frozen in their basal layers, glaciers have a strong frictional hold on their beds; and so, as they move forward, though at a rate of only an inch or two a day, they dislodge and drag forth entire blocks and slabs" (USGS Professional Paper 160, "The Geologic History of Yosemite Valley").  The detailed rendering of these glacial flows used a unique iconography to reconstruct the movements of the glacial ice mass and retrospectively map its progress seven miles across the Valley for readers to reconstruct its effects.
Glacial Flow


Constructing the map made Matthes realize that he had staked out the scope of his life-long investigations: although it earned him a huge reputation as a topographer, it confirmed his attention to the creation of landforms, rather than their depiction or mapping, and his focus on geomorphology for the remainder of his life; seven years after the survey was published, he finally transferred from the Topographic to the Geographic Branch of the USGS.  The map was reprinted as late as 1946, although much of its topographic detail was lost to overprinting.  The 1946 map of the sculpted floor of the Valley added the shading that he had desired in place of a five-color map, with an artifice and splendor never achieved in later topographic maps of the region, and reveals the amazing attention that he dedicated to the valley in his 1905-6 survey.  Shaded so carefully as to appear a relief map, with exquisite, caring detail given to basin, crater, ridge, and mountain range, the map organized the Valley's formation in historical time, beyond triangulated sightings of mountain peaks.




Filed under geodetic survey, geomorphology, John Muir, Mapping Yosemite, relief map, telescopic alidade, topographic map, Yosemite, Yosemite Valley

Mapping the Materials of the Human Body

The pine planks displaying the configuration of veins in the human body that John Evelyn bought in Italy from an anatomical dissector at the annual university dissection in 1646 gained immediate value as a depiction of the body's inner structure.  Anatomical dissections that divided a corpse into its constituent bones, muscles, veins, arteries, nerves, and organs had been conducted at the medical university for over a century, and a permanent theater had stood in the imposing medical university's building for some fifty years, dissections having long been integrated into the academic teaching calendar; Evelyn felt privileged to have bought the planks onto which the extracted veins had been affixed and preserved, apparently covered with varnish, not only as a souvenir but as a teaching aid to bring back to England.  When he returned to England, where he had the tables shipped, he remembered how he had approached Johann Vesling's surgical assistant, "of whom I purchased those rare Tables of Veines & Nerves & causd him to prepare a third of the Lungs, liver & Nervis . . . with the Gastric vains, which I transported to England," believing them "the first of that kind had ben ever seene in our Country, & for ought I know, in the World."

Evelyn remembered with excitement watching the dissection of a male body, a female body, and a child by Giovanni Leoni d'Este, the dissector working with Vesling, and how the surgeon was expert at the task of "extracting the Veines and other vessels which containe the Blood, spirits &c. out of human bodys, . . . to distend and apply them on Tables according to their natural proportion and situation."  Leoni d'Este showed the elegance and virtuosity in displaying the body that was expected at the annual dissections in Padua's university, long an attraction and destination for medical students since the creation of a permanently standing "theater" of anatomy in 1594.  Vesling was the successor to Giulio Casseri and Girolamo Fabrici, surgical doctors and pioneers of the use of visual aids to anatomy that surpassed the multi-panelled woodblock prints Vesalius had designed in six elegant tables for students in 1537 and expanded to a folio-sized book in 1543.  For his part, Fabricius had created cross-species anatomical atlases in colorful paints.  But these boards, though few survive, seem to have been regular teaching aids in the anatomy lessons that attracted students from all over Europe, held in a permanent wooden theater in the form of an oculus at whose center lay a marble table, ringed with benches of steep grade.

Lauren Fried aptly wrote that the planks "looked like something a cartographer had given some serious time to fantasising about," in a recent review of the multiple exhibits on the dissection of human anatomy showcased this year at both the Hunterian Museum and the Wellcome Collection in London, as well as Resurrection Men at the Royal College of Physicians.  The intensity of a detailed cartographical rendering is evident, whatever the format and resolution of your screen:




The popularity of such traveling exhibits suggests the huge interest in the market for medical museums and the public display of bodily insides, of which Evelyn's "Tables" are something of a seventeenth-century precedent.  But the model of mapping, an innovative tool of demonstration and display in Evelyn's time, was more the prototype and model for the planks Leoni d'Este helped fabricate than the far more spectacular displays of body parts in the contemporary exhibits that have been widely marketed in the United States as "Bodies–the Exhibition" in recent years.

The darkened rooms, wax-museum-like uncanniness, and oddly posed mute expressivity of these plastinated corpses of uncertain provenance make the traveling exhibit something like the spectacle of the anatomy lessons of past years, a new height in our Debordian society of the spectacle.  Given the spectacular origins of the tables that Evelyn procured, it is not surprising that the pine planks he had shipped to London recall the traffic in foreign body parts that makes up the exhibit of plastinated bodies that has been traveling the world.  The exhibits in London seem something like a British response to the traveling exhibits of body parts procured from Chinese prisons, which set up grounds at the Luxor in Las Vegas as well as in Atlanta and New York, and a way to drum up further tourism for the rather eclectic collections of the Wellcome Trust.  Surely these tables, elegant in the extreme, have gained a second life in comparison to the rather gruesome unveiling of flayed corpses in this show, most procured in a dubious manner, that has enjoyed such success with audiences of all ages.

Bodies–the Exhibition is promoted as a pedagogic aid:  "discover how to enlighten, inform, and inspire your students to learn about the human body," enjoins the website's special section for educators, arguing in hygienic (rather than biomedical) terms that by understanding "how the body works" students can better learn how to keep it healthy.  Multiple ethical questions surround the origins of these bodies and the trafficking in their parts, preserved by techniques of polymer plastination pioneered by Gunther von Hagens to forestall putrefaction in organs or tissues by removing all fluids in a (patented) process of "forced vacuum impregnation."  (Ethical questions around their origins abound, but the show was so popular that it is now booked permanently next to artifacts from the Titanic, as if both shockers could put the losses of gamblers into perspective in a bracing fashion.  Both are billed as hands-on educational experiences.)  Recent entries on the exhibit's blog, however, debate the benefits and healthiness of the dark and white meat of turkeys, which is more like what might be going through your mind as you leave the rooms, trying to distance yourself from the fact that these are actual people's body parts, moved under stage lights for public consumption after no doubt tragic if not grisly deaths, as a spectacle for the public good.  (It might even be for the kids on a family trip:  "'This exhibition taught my students more than I could ever teach them with mere words,'" reads an unattributed endorsement displayed prominently on the website, inviting schools in the Las Vegas, New York, or Atlanta areas to book a class trip.  It would at least provide material for discussion that would not lose their attention.)

So what of mapping?  If the "Bodies" exhibit is an immersive spectacle that might nicely punctuate the boring routine of the school day, Fried's comparison of the Evelyn tables to maps has a nice historical ring:  mapping is a classic form of distantiation, and would have been perceived as such.  If we are impressed with the formal parallels Fried noted, this sort of crossing of printed genres was more recognizable during the rapid increase of printed maps and images over the sixteenth century, when the production of new genres of engraved images rapidly grew.  In his 1502 treatise on human anatomy, which lacked illustrations, the Venetian physician Alessandro Benedetti invoked the nautical maps produced by sailors as an analogy to describe the value of transmitting knowledge by images.  At the same time as the ancient geographer Claudius Ptolemy's precepts for conformal map projections were being revised by recent findings registered in sailors' manuscript portolan charts, Benedetti noted the huge value of recent charts of nearby Adriatic islands or the Greek peninsula as wonderful additions to geographic maps, distinguishing the practical knowledge of mariners' maps at a time when images held limited prestige and no clear social niche as media of learning.  Terrestrial and island maps were both powerful models for keeping one's visual distance and objectifying a collective record of sense-perceptions, as much as for offering a comprehensive synthesis of centers of habitation across the continents.

The Evelyn planks are in this sense informed by such a cartographer's attention to setting out the contours of the venous system in all its gory materiality, and by a sensibility of reading the detail of the mapped world–as opposed to the landscape–that paralleled the huge interest in maps and in local skills of territorial and terrestrial mapping during the same period in Italian university milieux, where maps were esteemed as particularly valuable syntheses of overseas continents and their inhabitants, as well as elegantly stylized constructions.  The disembodied structures of the veins, if truly ghostly, offered an accurate if somewhat distorted map of venous anatomy, valuable for one with limited recourse to comparably comprehensive dissections, and an emblem of his learning.




The panels, a set of four, were no doubt prepared as pedagogic devices shortly before the deaths of both Vesling and his surgeon in 1649.  The varnished panels of pine recall Leonardo da Vinci's explicit comparison between the world, or macrocosm, and the microcosm of the body, as well as the varnished surface of an early globe.  The veins the surgeon disembodied and affixed to pine boards for Evelyn indeed resemble roads or routes, and recall the rivers that course over the surface of the macrocosm of the earth in Leonardo's comparison.  Leonardo boasted of plans for a book of anatomy in which "I will . . . divide them into limbs as [Ptolemy] divided the whole world into provinces, [and] then I will speak of the function of each part in every direction, putting before your eyes a description of the whole form and substance of man . . . ," using the map as a concrete metaphor for investing the detailed images of body parts he drew with the weight of a map's knowledge claims, seeking to give them a jointly informative and orienting function.  He seemed to scoff at the futility of doctors' attempts to describe human anatomy in words, and expressed the potential of anatomical images by drawing a likeness between how the works of Ptolemy and Galen relied on graphic artifice to transmit personal observations incommensurate with written texts.  The material reproduction in print of images of human anatomy and of maps both grew in the early sixteenth century in ways linked not only to the prestige or design of engravings, but to a new attention to how engraved images embodied their subject–at the very time that the anatomist Andreas Vesalius, who had himself studied surveying, helped design six large engraved tables of the body's skeleton, veins and arteries, and nervous system.

The "Evelyn tables" echoed anatomical prints also made in Padua for consultation by students who attended public dissections.  Before techniques of wax-injection, such images provided an invaluable pedagogic device to "view" the body's interior; the metaphor of the map also provided a ready model of readability for an image of the body.  The surgeon William Cowper, to whom Evelyn boasted of having obtained the planks, later designed engraved images after their disposition of veins to teach his students, reflecting the huge interest Evelyn had in an image he believed totally foreign to England.  The planks recall the skill with which Andreas Vesalius had crafted a large, detailed image of the body's veins, engraved by the artist Jan Stephan van Calcar in Padua in 1537-8, an often-reprinted image that foregrounds the centrality of the vena cava–whose position was relevant in debates on sites of bleeding to relieve pain in the side of the body, and as a guide to the benefits of bleeding at the inner elbow, forearm, or below the knee.  The vena cava leading to the liver assumes a centrality in this image that is more reminiscent of a caricature than an objective illustration or a map, but which focusses attention on its course:




This was, then, a sort of map for orienting students in methods of phlebotomy by providing a material image of the venous blood's path.  The principles of such a selective guide to internal anatomy would have made this sort of genre-crossing less surprising to educated readers.  Both images are specifically map-like in their selective attention to detail, as well as in their departure from the point of view of individual observation.  Vesalius had himself studied with Gemma Frisius in Louvain, a pioneer of the land-surveying practice of triangulating from base-lines, which later provided a powerful model for envisioning local territories that Venetian cartographers would come to employ to chart inland possessions, a practice increasingly refined from the early sixteenth century by men like Gemma.  Indeed, if maps objectify but also create our concept of territories, the practice of mapping bodily structures constituted the autonomy of the underlying skeletal, venous, and nervous anatomy as networks before their physiological functions were understood or theorized.

Hence Alessandro Benedetti, who demonstrated human anatomy in Venice at the time of Leonardo, invoked nautical maps as accurate records of the shorelines of the coasts and islands of the Adriatic that not only mediated spatial knowledge in visual form but embodied it in engravings.  By the 1530s, not only did Benedetto Bordone, an illuminator and engraver in Venice, publish books of islands in Italian, but men in Padua such as Girolamo da Verrazzano, brother of the explorer, fashioned elegant globes gilded in copper–long before the printing and marketing of maps in Venice peaked in the 1560s–like the one housed in the Morgan Library.  Their maps embodied knowledge in new ways, and with new pressing urgency, for a wide audience.



The following gores, made after the 1530 globe, reveal the precision of delineated coastlines of islands and continents, meeting considerable expectations for detailed cartographical rendering:


Verrazano Gores


The engraved globe is remarkable for its detail–apparent even more in its gores–and for the claims of mimetic visuality that it promised, replicating an expanse whose coastlines and totality readers could readily survey, imagining a relation to the embodied contours as a whole.  Benedetto Bordone's 1528 book claiming to map "the islands of the entire world" included some 111 maps of individual islands.  A shift in the visibility and embodiment of the world occurred around 1535 in Italy and Europe, and particularly in Venice: maps came to be seen not only as registers of information, but as embodying a tactile relation to the world through their synthesis of different registers of spatial information and first-hand observation.  Space is embodied in the maps of Crete and Cyprus printed by Giovan Andrea Vavassore in 1538, in earlier views of Constantinople and the Levant, and in the work of a generation of cartographer-engravers who specialized in woodcut maps, drawing on the many nautical charts in that maritime city:  in 1516 a Venetian engraver first boasted to reconcile the forms of Ptolemaic world maps with charts based on maritime observations, and in 1528 Bordone printed in Venice his popular Isolario, which mapped not only the maritime situation of Venice itself,




but included numerous maps of cities and islands in the New World–among them a famous map of Cuzco, in Italian, which provided the first information about New World inhabitants for many book-buyers, including an image of human sacrifice in its central square:




The print business that created such a market for new curiosities in maps–so unlike the more humanistic maps that accompanied editions of the ancient geographer Claudius Ptolemy's treatise on world-mapping, or Geography–encouraged the above medical images to be engraved as tools that embodied a similar relation to human anatomy, at the expense ["de sumptibus"] of the artist credited with their design, based on the drawings of the anatomist.


Isolario, Bordone


Vesalius was also not the only anatomist to image these parts of the body in ways that synthesized first-hand observations of interior anatomy for viewers to consult, engravings that formed means of visual learning and of teaching human anatomy.  The work of Bartolomeo Eustachi contested Vesalius' account of venous anatomy and of the relations between human and animal anatomy–so central to Vesalius' critique of Galenic medicine, and of Galen's own investigations in the six books of his Anatomical Procedures–by offering his own "map" of the pathway of the azygos vein, including both the thoracic duct and the valvula venae in the heart's right ventricle:  to reveal the material distribution of the azygos vein in relation to the heart, he worked with his disciple Pier Matteo Pini and the engraver Giulio de' Musi, both to better objectify the body's hidden structure and to create a clearer model for debating its form.  The thirty-seven images they drafted employed a numbered grid to better situate the vein in measured coordinates for their readers:




The vast majority of the plates embodying human anatomy that Eustachi had engraved in his life were not printed, although the anatomist prized them enough to leave them to his pupil, Pier Matteo Pini; they can now be viewed among the collections of the U.S. National Library of Medicine's Historical Anatomies on its website.  It is not known who viewed them, if anyone, in the period from Eustachi's death in the 1570s to 1714, when they were printed after being discovered in the Roman hospital of Santo Spirito between the Vatican and the Tiber:  but their attention to embodying the physical structures and organs of the body for viewers is evident in the odd sequence of detailed observations of renal anatomy printed in Venice in 1566, disembodied from a human figure:




The so-called "Evelyn Tables" are the mid-seventeenth-century continuation of this tradition, and a similar materialization of the mapping of venous anatomy for educated readers.  There is a sense of the performative in this figure who exposes his interior, his crudely drawn face turned upward as if oddly to shrug off the pressing question of his own subjectivity, which may echo unspoken curiosity about the source of the veins in the "Table"; but one reads it both as a map and as a mapping of the body's interior space.  There is less distance–and that is the point, perhaps–in the plastinated bodies viewable 24-7 in Las Vegas, created not from the cadavers of locals, but from bodies flown, perhaps from Chinese prisons, halfway round the world.


Filed under cadavers, Evelyn Tables, Gunther von Hagens, Human Anatomy, human dissection, medical maps, surveying, Tenochtitlan, Verrazzano Globe

Maps, Mapping, Globalism: Imaging the Ecumene’s Expanse

That most ancient of words for the inhabited world, the Greek oikoumene, expanded from "oikos," a dwelling or residence, and denoted less the technical abilities of mapping or tools for describing the world than the demarcation of inhabited lands in which civilized people, or members of the church, existed.  But the divulgation and expansion of mapping abilities in recent years, since the explosion of information databases and the intense globalization of the 1980s, have extended the notion of the ecumene beyond the map.  It is increasingly invested as a term with ethical connotations for understanding or foregrounding humanity's relation to its environment–or for retaking the human from the map–at a time when virtually no part of the world is uninhabited.  Indeed, the possibility of drawing frontiers between an uninhabited and an inhabited world–or of defining the limits of the inhabitable world–is so diminished that the concept of bounding such areas is no longer clear; the areas of the earth that remain uninhabited, its "open spaces" or unsettled areas, have catastrophically declined in the past twenty years.

But the continued interest we have in describing how we occupy the world, if not in demarcating its boundaries, is at the center of the data flows and databases we process in GIS, and that increasingly lie at our fingertips.  The instant generation of maps of the inhabited areas of the world has paralleled the catastrophic decline since the 1990s, when a tenth of existing wilderness disappeared, catastrophic losses confirmed at the IUCN World Conservation Congress:  the shocking fact that only 23 percent of wilderness remains does not even include the future effects of global warming, the current crisis among history's tragedies that mankind has created or is on its way to creating.  Indeed, the destruction of wilderness–landscapes deemed intact and mostly free of human disturbance–has perhaps most radically changed the nature of the inhabited world.  Since the "Last of the Wild" map was first published in 2002, the loss of almost a tenth of formerly uninhabited lands in the last decade marks the most rapid expansion of human settlement of the planet, with some 3.3 million sq km of once-uninhabited lands lost, of which 2.7 million sq km are considered globally significant–a loss of carbon biomass in forests that destroys a resource offsetting atmospheric CO2.


But let's return to maps, such realities being too painful for me to contemplate.  Even as the entire earth is now inhabited, much is to be gained from the concept of actively mapping expanse, both by preserving an analytic relation to that image of expanse, too often rendered abstractly in computer-generated cartographical media, and by encouraging an analytic relation to how the material contents of maps embody space.  Crafting an image of the inhabited world as a bound expanse enjoys a somewhat neglected historical lineage as a form of knowing the nature of an inhabited world and of orienting viewers or readers to the expanding unknown, from the Roman empire onward:  the considerable intellectual heft of the term inherited from the ancients–Eratosthenes, Ptolemy, and Strabo–and its signification of the inhabited and inhabitable earth informed most Renaissance maps and atlases, in which practices of mapping gained new epistemic ends as mediators of comprehensive knowledge.

The comprehensive genre of the atlas, an illustrated set of maps promising true global coverage of lands linked by seas, developed in concert with the knowledge that the inhabited world extended beyond earlier imagined confines, and borrowed an expansiveness previously limited to nautical cartography.  The description of the distance to the edges of the world, inherited from antiquity, provided a model for understanding the nature of the discoveries for the educated audiences among whom the first maps of the terrestrial ecumene circulated both in manuscript and print–from the illuminated codices produced in Florence to the massive twelve-sheet wall-map announcing the Columbian discoveries that the erudite Martin Waldseemüller compiled in early sixteenth-century Strasbourg from the school at nearby Saint-Dié-des-Vosges.



The visual qualities of mapping, symbolized as an expansive landscape, cast the embrace of the inhabited world with qualities of perceptual transcendence over its variations and divisions.  Ancient geographic treatises included few maps; but mapping the ecumene created a relation between expanse and an observer's eye in the late fifteenth century by organizing and ordering the globe's inhabitation.  And although it is odd to think of the ecumene as an inheritance of ancient geography still employed, the inheritance of mapping the inhabited earth resonates with Geographic Information Systems–although fashioning an image of the world's geography now has little of the ethical intent it seems to have enjoyed in both the ancient and early modern worlds.  When we daily orient ourselves to how space is inhabited on our computer screens, iPhones, or Androids, we frame an image that bounds a record of how space is inhabited, either to orient us to where we are going or to how the presence of cars, people, bacilli, or weather defines the inhabited world.  Paradoxically, even if the growth of GIS technology has multiplied the ways we can chart the inhabitation and presence of man in space without increasing how we define its continuity, it has also provoked both a renaissance of mapping and a crisis in the authority of the map as a representational record of the ecumene and its bounds, as well as its bounded nature.

The rest of this post isn't exactly heavy lifting, but it is stuff I'm still processing and finding my way around.

1.  The assemblage of maps in a sequence of global coverage was identified with the cultural distinction Ptolemy gave to the project of world-mapping on a graticule of meridians and parallels, to be sure, both compressing a growing sense of the world's navigable expanse and indexing its toponymy along climatic zones.  The term ecumene challenged the mental imagination by encompassing local variety in a capacious global category, ordering a global map in a neatly bounded surface extending beyond the Indian Sea, up to the limits of known land, in a feat of mental dexterity as much as a precise or accurate map of exactly determined scale.  The lower boundary of the map copiously noted "terra incognita," as later projections would, and left it at that:  an expansive white space beyond the sea and the lakes at the Mountains of the Moon that this Florentine map includes, adopting the notion of an extensive northern ocean to frame the inhabited world–even while showing the Indian Ocean as closed.


Indeed, even as the world grew more detailed and other continents were registered as inhabited, as in the Ortelian planisphere, regions of terra incognita expanded, as if to parallel the known regions designated by naturalistic landscapes:  the unknown regions of "America sive India Nova" were paralleled by the imagined "Terra Australis," a later configuration of the mythical Java la Grande.


The ancient Greek astronomer and scientist Claudius Ptolemy proposed using terrestrial maps drawn on geometrically derived parallels and meridians as tools of portmanteau-like capacity to comprehend terrestrial spaciousness, segmenting the world's inhabited surface by degrees of longitude.  The notion of mapping totality was particularly fertile for map-readers in the decade before 1492.  The tools for mapping the ecumene, or inhabited world, provided an ambitious compendium of global knowledge, although the geographic knowledge of the world remained limited–and still was by the time of this world map, illuminated circa 1482:  although a restricted ecumene to modern eyes, its capacious reach extends south to inner Ethiopia and northward, beyond its broken frame, to embrace the northernmost isles beyond Thule.  Rings of uninhabited islands indeed constituted, John Gillis has recently noted, part of the mental furniture on the boundaries of the inhabited world for most fifteenth-century men, and suggested a comforting bounding of the world that seemed to illustrate its protection and insulation, lying as it did between uninhabitable climatic zones and far-off seas.

The ethno-centered ancient term maintained a sense of charting the world’s recognizable inhabitants, or those that mattered to the readers of maps:  so, in the Augustan age, Romans referred to the expanse of the empire as the ecumene, beyond which lived barbarians.  But even as it retained a bounded sense for Renaissance readers, the totalizing image of an ecumene provided a way to imagine the population of an expanse greater than lay within the ken of most–and to understand coherence within a world that included information from far-off lands, even if many fifteenth-century people lacked clear geographic categories for the spatial division of an inhabited terrestrial expanse.  The edges of the earth were oddly clear for a period of limited familiarity with expanse: the monsters and extraordinary riches found there were included in fifteenth-century editions of Ptolemy’s handbook of world geography, including elephants on the island of Taprobane, beyond India, trees that kept their leaves year-round, multitudes of serpents, and cannibals.  These were the signs of the world beyond what humans knew, and included the bare-footed gymnosophists of India.

The compendious divisions of this mental map in a sense informed the engraved world map printed as the sixth page of the 1493 Nuremberg Chronicle, or “Book of Chronicles,” a “universal” history that promised a temporal compendium of world history, embracing historical ages in order to depict the division of continents, from creation and the recession of waters after the Noahic flood through the succession of worldly empires that Augustine and Orosius had famously described–a work that captured the early taste for engravings as mediators of information in Renaissance Nuremberg.   Romans discussed their empire as the ecumene, imitating how Greek geographers described an ecumene at whose fringes lived fundamentally foreign peoples, outside the scope of human concern and beyond the limits of habitability; the world map in the Chronicle placed outside its borders the excluded races of Cynocephali, one-footed Sciopods, reverse-footed Antipodes, bearded women, and one-eyed cyclopean monsters.  These lay outside the three regions divided among Noah’s three sons Shem, Japhet, and Ham–the ecumene–and outside its image of the inhabitable world where humans dwelled; but they also reflected the new world that the recession of the flood’s waters had revealed to human sight, and the projection its editor included registered the shock of the prospectus terrarum, and the ecumene, that the lessening of global waters unveiled:




Hand-illuminated versions suggest significant curiosity about these creatures, placed outside of the map’s ruled boundary, who dwelled in a different space from the river-nourished environment supposed to lie at the edges of the inhabited world:


Secunda Aetas Mundi



The ecumene had of course already expanded dramatically by 1490 or 1493, challenging thought about both its boundedness and uniformity and the cartographical forms available to represent spatial expanse.  It continued to expand dramatically in the following years for readers of maps.  Similar monstrous races were still included on its peripheries:  in the northern limits of Asia, a boundary of the inhabited world, even in Martin Waldseemüller’s learned Carta marina of 1516–in response both to literary sources and travelogues and to the mental furniture of the bounded region of human habitability.  Many of these races were left off the map as “an empirically known space” for the very reason that they challenged and threatened a human space, and the boundaries of the world revealed by maritime exploration were unknown–even if sea monsters were increasingly banished from the edges and unknown areas of the more refined world maps, such as the Carta marina.


Waldseemuller 1516 carta nautica


The consciousness of the limits of habitability or human settlement was a graphic expression of Strabo’s mandate that geographers show the world’s inhabited part, as much as its inhabitants or populations, to readers, both to satisfy curiosity and to respond to a need to describe its limits as much as its totality:  “the geographer must describe the inhabited world in its known parts, neglect its unknown regions, as well as what is out of reach” (II, 5, 5), placing a primacy on describing those parts of the world or communities in which humans live.  Although most fifteenth-century people did not easily domesticate the idea of an extensive space, let alone an undifferentiated expanse, picturing the unity and comprehensiveness of the ecumene became a basis for thinking about expanse and comprehending difference:  the image of the ecumene in the Nuremberg Chronicle became the basis for continuing a rambling, shapeless narrative grounded in a series of embedded or potted histories of place, each defined around an individual city and city view; the ecumene was the landscape, if you will, in which each was situated.  Early Ptolemaic maps often gave only limited notation to the matrix of parallels and meridians in what might be called a readable fashion:  the graticule helped make space legible and material, a convention for understanding the dramatic contraction of global space, but not an index for way-finding or marking place, as in these gores, identified with Waldseemüller’s school of cartography, ostensibly made for a small globe.





What has happened to the notion of the ecumene?  Even as the Ptolemaic ecumene expanded, the community embraced in the map grew, rather than being abandoned, as New Worlds were processed into a map that reduced the prominence of Europe at the center of the inhabited world.  But the expanse of the ecumene held together, as it were, a sequence of regional maps, partly because the concept contained the promise that the whole world could be divided and known in a series of synoptic images that reconciled spatiality and territoriality.  Although mapping the continuity of expanse undergirded Renaissance cartographical images, the precision they offered left considerable wiggle-room, as it was limited only to the known.  But the division of space into bounded records of expanse was influential; the “chorographical” map of community became a counterpart of the totalizing coverage of a geographic projection.  To be sure, such maps responded to the diversity of the ecumene as it was discovered.  And maps provided models to mediate culturally fragmented collectivities, and to fashion coherence across confessionally divided communities–from the national map Oronce Fine designed of France, to the French national atlases of the late sixteenth century, to the English maps of Christopher Saxton, Philip and Peter Apian’s maps of Central Europe, or the cycle of maps of the Italian peninsula that Egnazio Danti organized for a corridor leading to the papal apartments, discussed in an earlier post.  The coherence of each of these regions provided a sort of microcosm of the ancient geographic ecumene as it gestured to the world that Romans civilized.


2.  The second half of this blogpost shifts focus.  In ways less linked to cartographical models, it uses the notion of an ecumene to interrogate the survival of a mapped global space in more modern mapping techniques.  We now lack similar boundary lines, of course, and measure contact among the world’s regions rather than standing awed by the immensity of its expanse.  But the same term gained an ethical heft in Enlightened thought to express a mandate for cosmopolitans to inhabit the world, to become citizens of its entire expanse and cultures.  This shift in meaning, often thought of as a rupture, suggests continuities with the contemplative uses of globes among the ancients as signs of learning or stoic remove.  The modern recuperation of the ecumene, distinct from its sense of the community of Christians or the community of mankind inherited from the Enlightenment, is more striking as a relation to a lived environment, recuperating the ontological category of ecumene to describe the “humanized” world in which we now live–a world whose surface is more fully inhabited than ever before, but whose nature is shaped and informed by humanity both in regional environments and as a whole.

Augustin Berque has emphasized the benefits of attending to the relation, described by Tissier, between man and the planet, in his 1993 article in the journal Persée, striking for how it dispenses with the very category of a map and provocative for how it recuperates the ancient term in an ethical sense.  The term “ecumenical” had oriented the term to continuity in a community of believers.  But the ethos recuperated by Berque refers to what is human in the world, a way of being stripped of a fixed ethnocentric perspective.  By locating the “oikumenal” in terms of a human geography stripped of a cartographical foundation, his sense eerily prefigures the images of the inhabited world that are both the benefits and costs of GIS as a basis for judging one’s own relation to the global world.  Berque has removed this ancient term of encyclopedic or positivistic coverage from a material register of geographic toponymy and the ancient craft of map-making that embodied a fixed relation to the world.  His construction of an ecumene encompassing human society and its relation to the environment melds nature and culture in ways similar to the ancient term in its ethical connotations.  But his usage oddly dispenses with its graphic construction in favor of a global consciousness:  in calling attention to the “ecumene,” he removes mankind’s relation with the earth’s surface from a simple demonstrative function of the map; much as the medium of GIS defines the inhabitation of the world from one slant or subject, Berque asks us to embrace the multiple effects of mankind on the planet.

Berque believed that, with the humanization of the planet complete and the physical planet dominated by the effects of human life, more emphasis should be placed on a phenomenological analysis of the relation of subject and ambient environment under this Greek term, now removed from mapping practices to embrace human geography as a tool for considering the relation of man and his [made] environments.  Putting aside the value of Berque’s point, the disposition of this philosophical standpoint reflects the deconstruction in GIS of the privileged place of the terrestrial map and of geographic knowledge, and the image it perpetuates of the inscription of a human geography.  The relation of man and his planet–or the effects of man on the planet–is now the scope of a wide range of GIS maps of human habitation and Google Earth, or maps of influenza, infections, and disease in data visualizations or geographic metadata catalogues, whose aims shift from physical geography to the place of mankind in it.  Increasingly, we are prepped to see the world with the false immediacy of the nightly news, less focussed on territorial boundaries than on a token of comprehensive coverage, prepped for consumption much as the newscasters who present an account of the “daily novelties” are prepped and outfitted in the apparatus of a news room.


Newscaster prepping


As Bruno Latour and friends put it eloquently (and cleverly), our ideas of territory so clearly derive from maps that the digital ubiquity of mapping places us into a new relation to territory:   we now navigate not based on “some resemblance between the map and the territory but on the detection of relevant cues . . .  to go through a heterogeneous set of datapoints” by which to move from different posts to gain new bearings.  We are always navigating a new relation to territory, or understanding territorial models, without assuming defined and predetermined boundaries.  This notion of the environment is based on an ability to read the signs of its inhabitation and peopling, rather than on reference to previously mapped territories, and is rooted in the ability to navigate by using maps on a screen, rather than on paper–in which the lack of resemblance indeed has further purchase (and persuasive power) as a gain in both certainty and objectivity.


3.  The analytic nature of the reader’s relation to GIS maps is less based on embodying place or expanse in a cartographical manner, because it is not rooted in mimetic qualities.  For the map, in much GIS, is used essentially as the primary field to encrypt variations in data, and removed from any pictorially descriptive function.  Put better, the map is something of a found object, a template, an objective construction in which we sort out the real information that is displayed upon it in an appealingly objective fashion, but one that lacks an orientational power rooted in mimetic claims and indeed turns away from making any actual mimetic claims:



Indeed, the underlying positivism of the objectivity of the map is recycled in most visualizations that are rooted in GIS.  If modernity, as Doreen Massey put it, involved “a particular hegemonic understanding of the nature of space itself, and of the relation between space and society,” drawing expanse on multiple computational platforms in GIS has decoupled space from a precise location:  we now know from a true “view from nowhere.”  The differentiation of terrain or local constructions of space are of less interest than the projection of meaning on a map that is treated as a screen, and several significant local markers may be absent or unnoted.  Shifting scale by moving a cursor does not create a more readable space, but provides a very odd reframing of space as a unit that is not comprehended by the reader, but can be viewed simultaneously at multiple scales of changing parameters, zoomed into and out of, and adjusted on a digitized scale bar.  The National Research Council argues in its report on spatial thinking that “the important thing is that they allow for the spatialization of data and use a range of types and amounts of data,” lending primacy to the readability of data over the analytic or representational basis of map-making.

What is physical geography, after all, in many of these maps?  The prime mandate is to map one’s relation to the environment in a readable fashion, rather than to encode layers of local topography or meaning, and to streamline the map to allow its reconfiguration in different datasets that prepare for readability, rather than granularity or density of meaning.  Again, this is based not in mimesis, and no longer rests on the notion of a mimetic projection of territory:


MacArthur Freeway 11-00

Children's Hospital 11-51

We might speed this up, to look at a sort of time-stop photography of cabs in San Francisco’s downtown area, as did Stamen Design in a pioneering map that combined aesthetics with the abundant database of the surveillance operations of Google Maps; it is based on readings taken from the GPS data of the Yellow Cab Company of San Francisco, and is available also as a film:

Stamen Cabs

Or, in Shawn Allen’s map/photo, which resembles a direct transcription of the taxicab scene in downtown San Francisco on June 15, 2012:

Shawn Allen's map:photo

Does an impoverishment of spatial literacy or toponymy result from such containers of datasets that use maps as formats?  The omniscience and transcendence of the map’s viewer is immeasurably increased, but the viewer becomes the receptacle of data as much as the perceiver of the scene:  new currents are configured and new flows revealed, as data from a variety of sources are richly encrypted into the surface of any given image, compressing the sorts of media to which we might have access into a single screen.  One has a different sort of relation to a screen than to a variegated surface, reading a way of configuring information in different ways:  but the difficulty with the screen in particular is its lack of a sense of spatial embodiment.  Compare it to an earlier map of the same region, not at all sparing with information but bending over backwards to compress legible content within a description of the city’s environment:


These are, perhaps, essentially different modes of data compression, based on distinct tacit presumptions, one angled toward data flows rather than toward the ostensible objectivity of a perceptual model.   But the difficulty of embodying data flows can generate an oddly 2-D superficiality that forsakes the very quality of transcendence to which the earlier ecumene aspired.  Data-streams provide a selective mapping that illuminates one angle of analysis, as it were, rather than aspiring to process an image of an entire city’s or the world’s actual inhabitation.
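The way such visualizations treat the basemap as a mere container can be seen in the simplest choropleth logic: each region's data value is reduced to a class bucket, and the map merely receives the shading. A minimal sketch, with hypothetical region names, values, and class breaks:

```python
# Minimal sketch of a choropleth as a "data container": each region of a
# basemap is assigned a shade bucket from its data value; the map itself
# acts only as a template. Region names and values here are invented.

def shade(value, breaks):
    """Return the index of the first class break the value falls under."""
    for i, b in enumerate(breaks):
        if value <= b:
            return i
    return len(breaks)  # top (unbounded) class

emissions = {"Alameda": 38.1, "Marin": 44.9, "San Francisco": 26.7}
breaks = [30, 40, 50]  # hypothetical class breaks

# Each region is reduced to a bucket 0..3, ready to color on the template
shaded = {region: shade(v, breaks) for region, v in emissions.items()}
```

Everything about place, terrain, and locality is delegated to the template; the visualization itself only pushes values into classes, which is what gives such maps their readability and their thinness at once.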

Let’s however insist on being more concrete.  When used to display shifts in a census, the map below displays data removed from topography or centers of population density: a data visualization without refined conventions to process its content or meaning for viewers, even if its meaning is quite serious and its subject quite human, because it displays information on a static template with little interpretive key.  This map is less an autonomous and self-standing unit of meaning than one that demands to be read against a familiarity with the distribution of the state’s population:

CO2 emissions


The above map of the CO2 emissions of Northern California households elegantly foregrounds one specific reading of the relation of man to the environment.  The challenge raised by such an elegant map, however, is to retain the communicative flexibility of the conventions of terrestrial mapping.  In any GIS map, there is the danger of emptying the format of content such as topographic variations, specific local detail, or the dynamic relations of space and habitation within a map:  the conventions of the format gain an iconic or symbolic register alone, in short, and are considerably impoverished as a description of terrestrial habitation when the map serves as a field to display data flows or project a database.  One issue is to combine the data with the way the analytic framework of the map integrates word and image or creates a structural distribution–something like the poetics of mapping–rather than employing maps as a passive container for spatial information instead of actively creating a way of thinking about space.  The mapping of the results of a census regularly lacks the sense of topographic variation or differentiation of urban and rural population which would render it more meaningful, and give a plasticity to its already remarkable contents as readable content.  This partly lies in the lack of a dynamic relation between the visual field of the map and its reading, as in this map of the regional variations in India’s population per square kilometer:




The map does not exploit its own conventions of orienting readers to space or expanse.  But GIS mapping offers a significant range of angles by which to read and explain its content.  The relevance of clarifying readers’ relations to the environment is in fact pressing, as revealed in this interactive map–which even includes an option for readers to learn in detail what they can do to help:


Scenarios of Global Warming


Set against this pessimistic picture of the actual eventualities of climate change in the age of the anthropocene is the radically shifting nature of a world no longer shaped by proximity or challenged by distance.  The map of internet penetration suggests, rather than a new map of inequalities alone, new obstacles to the penetration of and responses to messages worldwide; it no doubt contributed to the difficulty of transporting needed goods and medical supplies to western Africa during the current epidemic of Ebola, which seems to have left populations scarred as much by the difficulty of transatlantic communication as by the lack of adequate maps on the ground, as OSM-H has shown.




Indeed, the map of internet penetration, for all its unpleasant echoes of a colonializing perspective in which first-world countries receive the greatest coverage, reveals the extreme difficulties of penetration in all of the coastal countries of West Africa–unlike Nigeria–where the highly contagious virus has proved most difficult to contain, and information about the virus less easily disseminated.

Are the edges of the penetration of the internet the most vulnerable edges of the inhabited world, and are the edges of the accessibility and sharing of human information the most vulnerable to cataclysm?


4.  To some extent, this takes us back to Berque’s notion of the ecumene.  But the relative thinness of encrypting a data projection on the map is so much less fine-grained as to impoverish the relation between reader and map, or the registers for engaging readers:  the coarseness of the map is particularly great perhaps because the map’s visual qualities are less closely joined with its textual ones, or because the hypertext only uses the map as a static schema.  With the proliferation of maps as visual media across different venues and platforms there seems a danger of diluting how maps direct our attention to spatial variations and complexity, and of dissipating their authority in demarcating expanse or compacting data in a uniform surface.  Perhaps this recalls Berque’s notion of the ecumene as a set of relations to the environment, which can be read in different ways rather than in one way.

The question of habitation has been turned, like a prism, to illuminate new points of view and angles of perception, so that a topography of habitation indeed seems beside the point.  After all, there are no real areas of the globe that are not inhabited, and the questions of orienting individuals to space seem more pressing than ever on ethical, ecological, and moral grounds alike–if not simply for making sense of the effects with which man inhabits space.  In a somewhat ponderous post, let’s offer a comic conclusion, however, rather than carping about media for mapping in an age of digital reproduction and increasing vectors of data flow.  The GIS map has become a versatile demographic tool to reframe questions and reveal spatial links, possible vectors of influence or pathways of causation, and indeed maps of emotions or violence.  The question is at root at what sort of remove it places the map reader to interpret those vectors on its surface.  There is a temptation to deflate the authority of the descriptive value of such a matrix for its lack of fine grain.  Amidst the attempts to map the Arab Spring there was the inevitable GIS irony of naturalizing political movements with the ephemerality of a weather map–more a mental map of what the media presented, to be sure, than a map designed to orient its content to a reader practiced in interpreting a map’s construction or its conventions.  The map has the value for its viewers of an illusion of transparency and a medium of omniscience:





Or consider GIS-inspired variations on sabre-rattling from the American right, openly alarmist (if not antic) in tone, set against a backdrop from Wikipedia commons:




These pseudo-news maps come from the GIS family of signs, even if they are not based on actual data.  They orient viewers with a wiki-like remove.  It makes sense that at this point the ecumene denotes more of an ethical stance describing man’s relation to the environment, shifting to what that process of inhabitation might mean; there is no demand for graphically rendering the inhabited world, but rather the ways mankind inhabits the earth and has filled and marked its space.  But there is a loss in no longer mapping habitations.  And so map-making in the flexible media of GoogleMaps is no longer an expandable portmanteau of fine grain, but rather a matrix of data streams on which one charts the multiple consequences of inhabitation rather than the local terrain.  If we no longer have Sciopods outside of our human realm, we lose a sense of an ethics of mapping, or even of relating to maps, when we dispense altogether with practices of map-making.


Filed under anthropocene, data visualizations, globalism, Google Maps, metageography, Ptolemaic geography