We wanted to begin with data embodiment; just as we process images differently than tables of numbers, we understand physical objects in their own way. What kind of object could we build to express data in a (literally) tangible way? For this we chose a topic that we feel is interesting in its own right: the geographical distribution of economic activity across the world.
Our first task was to locate a good source of data. Country-level information about economic activity is easy to find, but it obscures most of the interesting variation (think of the difference between the depths of the Amazonas and the area around São Paulo), and more granular data isn’t available for all countries, nor is it necessarily comparable between them.
Luckily, there’s a well-known first-order approximation used by economists: the amount of night illumination visible from space. This tends to correlate well with economic activity, as it reflects levels of wealth as well as population sizes. And NASA has precisely what we needed: eerily beautiful worldwide images of nighttime Earth from space, put together as if it were night on the whole planet.
To turn this image into a map of economic activity for each geographical coordinate, we used the Python library PIL to read RGB values from one of the mid-resolution files, and then turned those into scalar brightness values with a weighted average. This left us with a NumPy array that was a direct encoding of the map we wanted (once you read array indexes as geographical coordinates).
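That step can be sketched as follows; the image below is a small synthetic stand-in for the NASA file (whose name and resolution vary), and the Rec. 601 luma coefficients are one common choice for the weighted average:

```python
import numpy as np
from PIL import Image

# Synthetic stand-in for NASA's nighttime composite (the real file would be
# opened with Image.open instead).
img = Image.new("RGB", (360, 180), color=(10, 20, 30))
rgb = np.asarray(img.convert("RGB"), dtype=np.float64)  # shape (rows, cols, 3)

# A weighted average (Rec. 601 luma weights) collapses RGB to one brightness
# scalar per pixel.
weights = np.array([0.299, 0.587, 0.114])
brightness = rgb @ weights  # 2D array; indexes map to latitude/longitude

print(brightness.shape)  # (180, 360)
```

Note that PIL stores the image as width × height, while the resulting NumPy array is rows × columns, hence the transposed shape.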
The obvious thing to try was to build from this a relief globe, with elevations reflecting brightness/economic activity. To visualize the object digitally, we loaded the data in an R script, applied an inverse Mercator transform to get back points on a 3D surface (modifying the radius to reflect the brightness scalar), and used the rgl library to plot it.
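In Python terms (the original script was in R), the index-to-sphere mapping looks roughly like this; the brightness grid is random stand-in data, the linear latitude spacing is an equirectangular simplification of the inverse Mercator step, and the 0.1 relief scale is arbitrary:

```python
import numpy as np

# Toy brightness grid standing in for the real image-derived array.
rows, cols = 90, 180
brightness = np.random.default_rng(0).random((rows, cols))

# Read row/column indexes as latitude/longitude angles (in radians).
lat = np.linspace(np.pi / 2, -np.pi / 2, rows)[:, None]   # +90 deg .. -90 deg
lon = np.linspace(-np.pi, np.pi, cols)[None, :]           # -180 deg .. +180 deg

# Radius grows with brightness, so economic activity reads as relief.
r = 1.0 + 0.1 * brightness

# Standard spherical-to-Cartesian conversion gives the 3D point cloud.
x = r * np.cos(lat) * np.cos(lon)
y = r * np.cos(lat) * np.sin(lon)
z = r * np.sin(lat)
```

Plotting `(x, y, z)` as a surface then gives back a recognizable globe, bumps and all.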
So far so good (and it’s always a good test to check if an inverse transformation gives a recognizable object back). The next step was to build an STL file to feed to a printer. The rgl library has a function to output STL from a point cloud, which we then loaded into MeshLab.
Here we had a problem: it looked insane, and was impossible to print. We attempted to apply various filters to fix or rebuild the mesh, reduce its resolution, and so on, but the best we could get would have given Lovecraft nightmares.
To gain more control over the process, we decided to generate the STL ourselves. That proved to be quite easy. Although the STL format is more powerful than this, at its core an STL file can be as simple as a text file listing triangles in a somewhat verbose but straightforward (and very human-readable) format. We wrote code that generated what seemed like a nice, printable mesh, tried to feed it to a printer…
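To make that simplicity concrete, here is an illustrative helper (our own `write_stl`, not a library function) that emits the ASCII triangle listing:

```python
def write_stl(triangles, name="relief"):
    """Build an ASCII STL string from an iterable of (v0, v1, v2) triangles,
    where each vertex is an (x, y, z) tuple."""
    lines = [f"solid {name}"]
    for v0, v1, v2 in triangles:
        # Most tools recompute normals, so a zero normal is accepted in practice.
        lines.append("  facet normal 0 0 0")
        lines.append("    outer loop")
        for x, y, z in (v0, v1, v2):
            lines.append(f"      vertex {x:.6f} {y:.6f} {z:.6f}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

# A single triangle in the x-y plane:
print(write_stl([((0, 0, 0), (1, 0, 0), (0, 1, 0))]))
```

A relief surface is then just two such triangles per grid cell, swept across the brightness matrix.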
… it didn’t work. The model’s topology was still broken, and the physical mechanics of it were all wrong. We experimented with some variations and fixes, and came to the conclusion that we should try something easier for the medium, where “medium” is the combination of our available hardware resources, time, data, and software tools (this is something implicit in every expressive medium; you could, in theory, use very fine pens and lots of celluloid to “shoot” a photorealistic full-length remake of Avatar, but that wouldn’t be the best fit between tool and objective).
So we went for a drastic simplification in the final object: instead of a globe, a relief map. This simplified both the generating code and the output, but we still had to deal with a couple of problems:
- The object’s resolution (as encoded in the STL file) was too high for the printer, so we had to apply a coarsening filter to the original matrix data.
- The object’s topology was still wrong. We solved this with extra code that added the necessary variable-height borders and base to the object model.
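Both fixes are easy to sketch in NumPy; the tiny array, block size, and base height below are illustrative stand-ins for the real values:

```python
import numpy as np

# Toy height matrix standing in for the coarsened brightness data.
heights = np.arange(24.0).reshape(4, 6)

# Coarsening filter: average each f-by-f block to reduce resolution.
f = 2
coarse = heights.reshape(4 // f, f, 6 // f, f).mean(axis=(1, 3))

# Topology fix: surround the relief with a ring at base height, so side
# walls can drop straight down to a flat bottom plate and close the solid.
base_height = 0.0
bordered = np.pad(coarse, 1, constant_values=base_height)

print(coarse.shape, bordered.shape)  # (2, 3) (4, 5)
```

The reshape trick works whenever the grid dimensions are divisible by the block size; otherwise the matrix needs cropping first.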
And then, success!
It’s a bit on the small side, with poor resolution (we are considering building a larger version with more fine-grained detail by generating and printing separate tiles for different sections of the world). But, besides the satisfaction of getting a project across the finish line, there’s something quite striking about seeing (not as a projection, but as a thing) the way the US and Europe are almost wholly raised blocks, and comparing them with Central Africa, Siberia, Central Asia, or most of Australia. The familiar observation about the economic relationship between coastal areas and their interior becomes much more memorable when you can see the raised outlines of continents.
Despite the unavoidable technical glitches, it’s exciting to see the beginnings of a new medium. There’s lots of experimentation going on, with people trying to figure out just what to do with this new tool, what can be said and in what new ways, and hopefully we’ll get to see (and touch!) things nobody has ever made before.