
Happy Holidata

The holidays are upon us! 2016 has been a busy year here at Pitch, but we’ve been getting into the spirit of the season as we make our annual holiday cards.

This year’s card features a snowflake generated from two data points: how long we’ve known the recipient and the air quality where we’re sending the card. Each card is unique to its recipient, and no two snowflakes are alike.

After drawing inspiration from dozens of photos of snowflakes, we brainstormed about the different types of symmetry and shapes that would make up our design. We then generated each snowflake with a script that draws a certain number of radial spikes based on how long we’ve known the recipient. The other generation parameters rely on random numbers, ensuring that every snowflake is completely unique.
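
The real generator stays our little secret, but a minimal sketch of the idea looks something like this. Everything below (names, spike counts, branch angles) is invented for illustration; only the principle, one data-driven parameter with the rest left to chance, matches what we did:

```javascript
// Illustrative sketch only: a data-driven SVG snowflake.
// The spike count comes from the data (years we've known the recipient);
// the branch positions and lengths are random, so no two runs match.

function makeSnowflake(yearsKnown, size = 200) {
  const spikes = Math.max(6, yearsKnown * 2); // the data-driven parameter
  const cx = size / 2;
  const cy = size / 2;
  const paths = [];

  for (let i = 0; i < spikes; i++) {
    const angle = (i / spikes) * 2 * Math.PI;
    const len = size * (0.3 + 0.15 * Math.random()); // random spike length

    // The main spike, radiating from the center.
    const x = cx + len * Math.cos(angle);
    const y = cy + len * Math.sin(angle);
    paths.push(`M ${cx} ${cy} L ${x.toFixed(1)} ${y.toFixed(1)}`);

    // A pair of random side branches partway along the spike.
    const t = 0.4 + 0.3 * Math.random();
    const bx = cx + t * len * Math.cos(angle);
    const by = cy + t * len * Math.sin(angle);
    const bLen = 5 + len * 0.2 * Math.random();
    for (const da of [-Math.PI / 4, Math.PI / 4]) {
      const ex = bx + bLen * Math.cos(angle + da);
      const ey = by + bLen * Math.sin(angle + da);
      paths.push(`M ${bx.toFixed(1)} ${by.toFixed(1)} L ${ex.toFixed(1)} ${ey.toFixed(1)}`);
    }
  }

  return `<svg xmlns="http://www.w3.org/2000/svg" width="${size}" height="${size}">
  <path d="${paths.join(' ')}" stroke="steelblue" fill="none"/>
</svg>`;
}

// A recipient we've known for four years gets an eight-spike flake.
console.log(makeSnowflake(4));
```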

[Image: the holiday card design]

After plugging in our recipients’ data, we exported the generated snowflakes to our AxiDraw, a pen plotter that can draw complicated designs with any pen you put in it. Each snowflake took anywhere from five to ten minutes to draw, depending on its complexity. The color we chose for each snowflake depended on the air quality at its destination:

[Image: the air quality color spectrum]
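
We haven’t spelled out the exact palette here, but the mapping itself is a simple interpolation along a spectrum. A hedged sketch: the endpoint colors and AQI scale below are invented for illustration.

```javascript
// Illustrative sketch only: interpolate a pen color from an air-quality index.
function lerp(a, b, t) {
  return a + (b - a) * t;
}

function aqiToColor(aqi, maxAqi = 300) {
  const t = Math.min(Math.max(aqi / maxAqi, 0), 1); // clamp to [0, 1]
  const clean = { r: 80, g: 160, b: 220 }; // invented endpoint: icy blue
  const smoggy = { r: 180, g: 60, b: 60 }; // invented endpoint: warm red
  const r = Math.round(lerp(clean.r, smoggy.r, t));
  const g = Math.round(lerp(clean.g, smoggy.g, t));
  const b = Math.round(lerp(clean.b, smoggy.b, t));
  return `rgb(${r}, ${g}, ${b})`;
}

console.log(aqiToColor(42));  // a destination with fairly clean air
console.log(aqiToColor(180)); // a smoggier destination
```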

Our little AxiDraw robot did a good job of drawing all of the complicated snowflake line paths, and we even used it to draw the addresses on the envelopes and the message on the back of each card.

[Image: the AxiDraw robot at work]

We love that we get to share our art and designs with our wonderful friends and clients. Thank you all, and we wish you a very, very happy holidata!

[Image: the finished holiday cards]

You can see all of the snowflakes we made in this animated snowstorm. The source is available on GitHub as well.

Tilegrams: More Human Maps

About a month ago, while plotting demographic data on a US county map, we became frustrated. The data was about employment in the financial industry, and we expected to see Manhattan light up—but on our map, Manhattan was barely bigger than a freckle. Meanwhile, thinly populated rural counties (ahem, Lincoln County, Nevada) occupied vast swaths of screen space. Much of the West was visually over-represented, while urban areas were not represented enough. There’s an essential rural bias in geographic visualizations that we needed to overcome. The perils of this kind of mis-visualization were captured eloquently by Joshua Tauberer, who concluded that, “No map at all is better than a map that perpetuates injustice.” How could we make a more just map?

The classic data visualization solution to this problem is the cartogram—a map in which each region is resized according to some dataset, typically population. Vox has explained their importance; Danny Dorling has told their history. Typically, cartograms are generated by distorting a geographic map, but the trouble is that even when they’re statistically correct, they just look a little…weird.

We began to hunt for a better fit—and one type of map quickly stood out. In the UK, cartograms recently underwent a resurgence (as documented by Kenneth Field and Andy Kirk), but these cartograms had an interesting visual twist. Rather than showing bulging, stretching boundary lines, they used hexagon tiles to show constituencies in the 2015 UK General Election. The uniformity of the grid made the geographic distortions of the cartogram more palatable. We knew that if we had one for U.S. counties, then we could solve our demographic data mapping problem. We began asking around.

But the answer—about whether anyone had ever made a county-level U.S. hexagonally-tiled cartogram—was negatory. So, without thinking too much about it, we began attempting to produce our own with professional map-making software. On the advice of Mike Bostock, we ran our dataset through ScapeToad; then we loaded the resulting GeoJSON into QGIS, applied a grid with MMQGIS, converted it into TopoJSON with ogr2ogr, and so on… The results were gratifying, but the process was time-consuming. We wanted to streamline it—to democratize the means of production!

We Have the Technology

The result of our work is Tilegrams, an authoring tool for producing these sorts of maps. We decided to build it because, however good the cartogram algorithm, these maps still demand the careful eye of a designer or statistician to verify that the map bears a resemblance to the geographic shape it started with—while maintaining statistical accuracy.

Starting with Shawn Allen’s cartogram.js, and plenty of our own interpretation of the TopoJSON specification, our tool began life by ingesting any US state-level dataset, generating a cartogram and then sampling the output to produce a tiled map. Because these maps require so much human validation, we then implemented a handful of classic drawing tools: drag to move, marquee selection, plus some more specialized ones. Most importantly, we added a sidebar, informing the user of the statistical accuracy of each region’s surface area. (These tools are covered in more detail in the manual.)
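
The heart of that sampling step is a point-in-polygon test: lay a hexagonal grid over the distorted cartogram and hand each hexagon to whichever region contains its center. Here is a rough sketch of the idea, with deliberately simplified data structures; the real Tilegrams source is more involved.

```javascript
// Rough sketch of sampling a cartogram into hex tiles (simplified structures).
// Each region is a single polygon ring of [x, y] vertices, as it might come
// out of the cartogram step.

// Standard ray-casting point-in-polygon test.
function pointInPolygon([px, py], ring) {
  let inside = false;
  for (let i = 0, j = ring.length - 1; i < ring.length; j = i++) {
    const [xi, yi] = ring[i];
    const [xj, yj] = ring[j];
    if ((yi > py) !== (yj > py) &&
        px < ((xj - xi) * (py - yi)) / (yj - yi) + xi) {
      inside = !inside;
    }
  }
  return inside;
}

// Walk a hex grid (odd rows offset by half a step) over the bounding box,
// tagging each hex center with the region that contains it.
function sampleToHexes(regions, width, height, r) {
  const hexes = [];
  const dx = Math.sqrt(3) * r; // horizontal spacing of pointy-top hexes
  const dy = 1.5 * r;          // vertical spacing
  for (let row = 0; row * dy + r < height; row++) {
    const offset = row % 2 ? dx / 2 : 0;
    for (let col = 0; col * dx + offset + r < width; col++) {
      const center = [col * dx + offset + r, row * dy + r];
      const region = regions.find((reg) => pointInPolygon(center, reg.ring));
      if (region) hexes.push({ center, id: region.id });
    }
  }
  return hexes;
}

// Two toy square "states", side by side.
const regions = [
  { id: 'A', ring: [[0, 0], [50, 0], [50, 50], [0, 50]] },
  { id: 'B', ring: [[50, 0], [100, 0], [100, 50], [50, 50]] },
];
console.log(sampleToHexes(regions, 100, 50, 5));
```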

We call these maps “tilegrams”—from “tiled cartograms”. We made our own US state map based on 2015 population (which you can download from the tool) and incorporated others that we saw and appreciated in the media: FiveThirtyEight’s electoral vote tilegram from their 2016 Election Forecast and NPR’s one-tile-per-state tilegram.

This is just a first step. The county-level US tilegram we set out to produce is still a mammoth effort away. (Our 50-state tilegram took a day to produce; how long would a 3,000-county tilegram take?) Our great hope is that news designers will produce new tilegrams for interactive and print pieces, or use the ones we are sharing, and that developers will make open-source contributions to this fledgling effort. We want to share our progress now, at this historic political moment, while demographic studies rule the news.

We are also grateful to Google News Lab for joining us in this crucial effort. Simon Rogers provides an excellent introduction to the tool, too.

Please do let us know (@pitchinc) if you use these tilegrams, or make your own.

Happy (socially just) map-making!

Flying through data views

How do you make a cinematic experience out of real-time dashboard data? If the data is bound to geographic locations, it can be mapped onto a virtual 3D landscape that you can fly over and through, as on a virtual helicopter tour, or peer straight down onto, as from a virtual satellite. And since we’re in a virtual universe, why not have a virtual aircraft that moves seamlessly between both modes?

These are the questions we were asking as we began brainstorming what would become Norfolk Southern PULSE, the large-scale data art installation we created for the lobby of the train company’s Atlanta offices. Our dataset consisted of train “movements”—when a given train passed a given station. We knew we wanted to provide a bird’s-eye “overview” (pun intended) for context, but also to dive down to individual trains on the ground to convey their physical momentum. And since this was a non-interactive installation, the movement of the virtual camera needed to feel natural as it glided around and hovered. We would need to automate our motion design, down to every camera rotation and movement.

The system we came up with was to model different camera “behaviors”. Each behavior models a dynamic camera movement around some point, whether a fixed aerial location or an animation on the surface. A central controller, the same one in charge of fetching and rendering data, was responsible for assigning camera behaviors one at a time. A behavior has no knowledge of what came before it, so its first responsibility is to transition smoothly from any possible position and rotation to its own holding pattern, which might be a standstill or some kind of continuous motion (a revolving movement, for example).
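
Schematically, the controller’s job is just to hand the camera to one behavior at a time. The sketch below is illustrative; the class and method names are ours for this example, not the installation’s actual code:

```javascript
// Illustrative sketch: a controller that cycles camera behaviors.
// Each behavior owns the camera until the controller swaps in the next one;
// the incoming behavior tweens from wherever the camera happens to be.
class CameraController {
  constructor(camera, behaviors, dwellMs = 15000) {
    this.camera = camera;
    this.behaviors = behaviors; // objects implementing begin() and animate()
    this.dwellMs = dwellMs;     // how long each behavior holds the stage
    this.index = -1;
    this.lastSwitch = 0;
  }

  // Called once per frame from the render loop.
  update(timeMs) {
    if (this.index < 0 || timeMs - this.lastSwitch > this.dwellMs) {
      this.index = (this.index + 1) % this.behaviors.length;
      // The new behavior knows nothing about its predecessor: begin() only
      // sees the camera's current state and tweens to its holding pattern.
      this.behaviors[this.index].begin(this.camera);
      this.lastSwitch = timeMs;
    }
    this.behaviors[this.index].animate(this.camera, timeMs);
  }
}
```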

Some Sample Code

We cooked up a small, open-source example, including the “Overhead” bird’s-eye view discussed above, plus another fixed “Landscape” behavior, and two dynamic ones: “Spin” and “Trail”. Check it out on GitHub and give it a spin. Fork it, build some new behaviors (or clean up our hasty implementation), and let us know!

In the sample code, each behavior is encapsulated as its own class in the behaviors/ directory, and each implements two handlers: begin(), which sets up the transition tween from the previous camera location/rotation, and animate(), which handles the frame-by-frame animation. The main work in each case is determining where the camera should head to (__toPosition) and what it should be looking at (__toLookAtPosition). The rest is fairly boilerplate: __storeFromPosition makes a note of where the camera is and what it’s looking at, and then all that remains is to kick off the tween in __beginAnimation.
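
Condensed, a behavior follows roughly this shape. This is a paraphrase of the pattern using THREE.js and tween.js, not a verbatim excerpt from the repository; among other simplifications, it tweens only the position and snaps the look-at point:

```javascript
// Paraphrased sketch of a camera behavior (not the repo's exact code).
import * as THREE from 'three';
import * as TWEEN from '@tweenjs/tween.js';

class OverheadBehavior {
  begin(camera) {
    // Where this behavior wants the camera, and what it should look at.
    this.__toPosition = new THREE.Vector3(0, 500, 0);
    this.__toLookAtPosition = new THREE.Vector3(0, 0, 0);
    this.__holding = false;
    this.__storeFromPosition(camera);
    this.__beginAnimation(camera);
  }

  // Note where the camera is now, so the tween starts from there.
  __storeFromPosition(camera) {
    this.__fromPosition = camera.position.clone();
  }

  // Kick off the transition tween from the stored position to the target.
  __beginAnimation(camera) {
    new TWEEN.Tween(this.__fromPosition)
      .to(this.__toPosition, 3000)
      .easing(TWEEN.Easing.Quadratic.InOut)
      .onUpdate((pos) => {
        camera.position.copy(pos);
        camera.lookAt(this.__toLookAtPosition);
      })
      .onComplete(() => {
        this.__holding = true;
      })
      .start();
  }

  // Frame-by-frame animation: run the tween, then settle into a gentle hover.
  animate(camera, timeMs) {
    TWEEN.update(timeMs);
    if (this.__holding) {
      camera.position.y = this.__toPosition.y + 10 * Math.sin(timeMs / 2000);
      camera.lookAt(this.__toLookAtPosition);
    }
  }
}
```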

The basic THREE.js scene is set up in World.js, which is also responsible for calling animate() on the current camera behavior. The buttons for switching behaviors are implemented as a simple React component called <Selector />.
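
That selector is about as small as React components get. Paraphrased here, not the repository’s exact code:

```javascript
// Paraphrased sketch of the behavior-switching buttons (not the repo's code).
import React from 'react';

export default function Selector({ behaviors, current, onSelect }) {
  return (
    <div className="selector">
      {behaviors.map((name) => (
        <button
          key={name}
          disabled={name === current} // the active behavior can't be re-selected
          onClick={() => onSelect(name)}
        >
          {name}
        </button>
      ))}
    </div>
  );
}
```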

Some Special Challenges

  • The camera is special among objects: you need to manage not just its position but also the point in space it “looks at”. These two points can move independently, but their timing must be well coordinated so that the camera doesn’t glance the wrong way at any point during the transition.
  • Dynamic camera rotation was an especially mind-bending problem. Early on, it was easy to end up with a camera that was upside-down, or pointing out to space, or pointing upside-down into space! The globe turned out to be a grounding reference, however: since it sat at the origin of the scene, we realized that the camera’s “up” direction should always be parallel to its position vector (see the sketch after this list).
  • We used tweening functions to give a more physical feel to the motion, stopping well short of implementing any kind of Earth-like physics. That said, tweening the X, Y, and Z coordinates at the same rate can often feel stiff and unnatural to anyone who’s experienced gravity or air travel. Rather than trying to perfect a unified global physics model, we found that ad hoc solutions based on specific animation requirements were a sounder investment of time.
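
For the record, the “up” trick from the second bullet takes only a couple of lines of THREE.js:

```javascript
// With the globe at the origin, "up" should point away from the Earth's
// center: that is, parallel to the camera's own position vector.
import * as THREE from 'three';

function orientAboveGlobe(camera, lookAtPoint) {
  camera.up.copy(camera.position).normalize(); // radial "up" direction
  camera.lookAt(lookAtPoint);                  // then aim at the target
}

// Example: hover over the globe and look at a point on its surface.
const camera = new THREE.PerspectiveCamera(45, 16 / 9, 0.1, 10000);
camera.position.set(0, 0, 800);
orientAboveGlobe(camera, new THREE.Vector3(100, 100, 300));
```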

This was a huge learning experience for us in the game-like realms of physics and motion design, but applied to data visualization. We were guided by a desire to present not just information but a bit of visual delight in a busy area, where workers take their brown bag lunches or casual meetings. Whatever story the data may tell, we hope to make the telling always feel smooth and unhurried.