Category Archives: Work

Happy Holidata

The holidays are upon us! 2016 has been a busy year here at Pitch, but we’ve been getting into the spirit of the season as we make our annual holiday cards.

This year’s card features a snowflake generated from two data points: how long we’ve known the recipient and the air quality where we’re sending the card. Each card is unique to its recipient, and no two snowflakes are alike.

After drawing inspiration from dozens of photos of snowflakes, we brainstormed the different types of symmetry and shapes that would make up our design. We then generated each snowflake with a script that draws a certain number of radial spikes based on how long we’ve known the recipient. The other generation parameters rely on random numbers, ensuring that each snowflake is completely unique.
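For the curious, here is a simplified sketch of the kind of generation script involved; the spike-count mapping, branching rules, and numbers below are illustrative, not our production code:

```js
// Simplified snowflake generator: the spike count comes from the data,
// everything else from random numbers (all values here are illustrative).
function generateSnowflake(yearsKnown, radius = 100) {
  const spikes = 6 + yearsKnown; // one extra radial spike per year we've known the recipient
  const segments = [];
  for (let i = 0; i < spikes; i++) {
    const angle = (i / spikes) * Math.PI * 2;
    const length = radius * (0.6 + Math.random() * 0.4); // random spike length
    const tip = [Math.cos(angle) * length, Math.sin(angle) * length];
    segments.push([[0, 0], tip]);
    const branches = 2 + Math.floor(Math.random() * 3); // a few side branches per spike
    for (let b = 1; b <= branches; b++) {
      const t = b / (branches + 1); // position along the spike
      const base = [tip[0] * t, tip[1] * t];
      const branchLength = length * 0.25 * Math.random();
      for (const side of [-1, 1]) { // mirror the branches for symmetry
        const a = angle + side * Math.PI / 3;
        segments.push([base, [base[0] + Math.cos(a) * branchLength,
                              base[1] + Math.sin(a) * branchLength]]);
      }
    }
  }
  return segments; // line segments, ready to be written out as paths for the plotter
}
```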

[Image: this year’s holiday card]

After plugging in our recipients’ data, we exported the generated snowflakes to our AxiDraw, a pen plotter that can draw complicated designs with any pen you put in it. Each snowflake took anywhere from 5 to 10 minutes to draw, depending on its complexity. The color we chose for each snowflake depended on the air quality at the card’s destination:

[Image: air quality color spectrum]
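The mapping itself amounted to a bucketed lookup; a minimal sketch, with placeholder AQI cut-offs and colors rather than the actual spectrum shown above:

```js
// Illustrative mapping from a recipient's local air quality index to a pen color
// (the thresholds and colors are placeholders, not the actual spectrum we used).
function penColorForAirQuality(aqi) {
  if (aqi <= 50) return 'blue';    // good
  if (aqi <= 100) return 'green';  // moderate
  if (aqi <= 150) return 'orange'; // unhealthy for sensitive groups
  return 'red';                    // unhealthy and worse
}
```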

Our little AxiDraw robot did a good job of drawing all of the complicated snowflake line paths, and we even used it to draw the addresses on the envelopes and the message on the back of the card.

[Image: the AxiDraw at work]

We love that we get to share our art and designs with our wonderful friends and clients. Thank you to you all and we wish you a very, very happy holidata!

[Image: the finished holiday cards]

You can see all of the snowflakes we made in this animated snowstorm. The source is available on GitHub as well.

Tilegrams: More Human Maps

About a month ago, while plotting demographic data on a US county map, we became frustrated. The data was about employment in the financial industry, and we expected to see Manhattan light up—but on our map, Manhattan was barely bigger than a freckle. Meanwhile, thinly populated rural counties (ahem, Lincoln County, Nevada) occupied vast swaths of screen space. Much of the West was visually over-represented, while urban areas were not represented enough. There’s an essential rural bias in geographic visualizations that we needed to overcome. The perils of this kind of mis-visualization were captured eloquently by Joshua Tauberer, who concluded that, “No map at all is better than a map that perpetuates injustice.” How could we make a more just map?

The classic data visualization solution to this problem is the cartogram: a map in which each region is resized according to some dataset, typically population. Vox has explained their importance; Danny Dorling has told their history. Typically they are generated by distorting a geographic map, but the trouble is that even if they’re statistically correct, they just look a little…weird.

We began to hunt for a better fit—and one type of map quickly stood out. In the UK, cartograms recently underwent a resurgence (as documented by Kenneth Field and Andy Kirk), but these cartograms had an interesting visual twist. Rather than showing bulging, stretching boundary lines, they used hexagon tiles to show constituencies in the 2015 UK General Election. The uniformity of the grid made the geographic distortions of the cartogram more palatable. We knew that if we had one for U.S. counties, then we could solve our demographic data mapping problem. We began asking around.

But the answer—about whether anyone had ever made a county-level U.S. hexagonally-tiled cartogram—was negatory. So, without thinking too much about it, we began attempting to produce our own with professional map-making software. On the advice of Mike Bostock, we ran our dataset through ScapeToad; then we loaded the resulting GeoJSON into QGIS, applied a grid with MMQGIS, and converted it into TopoJSON with ogr2ogr, and so on… The results were gratifying, yet time-consuming. We wanted to streamline the process—to democratize the means of production!

We Have the Technology

The result of our work is Tilegrams, an authoring tool for producing these sorts of maps. We decided to build it because, however awesome the cartogram algorithm, these maps still demand the careful eye of a designer or statistician to verify that the result still bears a resemblance to the geographic shape it started with while maintaining statistical accuracy.

Starting with Shawn Allen’s cartogram.js, and plenty of our own interpretation of the TopoJSON specification, our tool began life by ingesting any US state-level dataset, generating a cartogram and then sampling the output to produce a tiled map. Because these maps require so much human validation, we then implemented a handful of classic drawing tools: drag to move, marquee selection, plus some more specialized ones. Most importantly, we added a sidebar, informing the user of the statistical accuracy of each region’s surface area. (These tools are covered in more detail in the manual.)
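To give a rough idea of what that sidebar is computing, each region’s share of hexagon tiles is compared against its share of the underlying dataset. The helper below is a hypothetical sketch of that check, not the tool’s actual API:

```js
// Hypothetical sketch of the per-region accuracy check behind the sidebar:
// compare each region's share of tiles to its share of the dataset.
function regionAccuracy(tileCounts, dataValues) {
  const totalTiles = Object.values(tileCounts).reduce((a, b) => a + b, 0);
  const totalValue = Object.values(dataValues).reduce((a, b) => a + b, 0);
  const report = {};
  for (const region of Object.keys(dataValues)) {
    const actualShare = (tileCounts[region] || 0) / totalTiles;
    const targetShare = dataValues[region] / totalValue;
    report[region] = actualShare - targetShare; // positive: over-represented, negative: under-represented
  }
  return report;
}

// e.g. regionAccuracy({ CA: 39, WY: 1 }, { CA: 39144818, WY: 586107 })
```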

We call these maps “tilegrams”—from “tiled cartograms”. We made our own US state map based on 2015 population (which you can download from the tool) and incorporated others that we saw and appreciated in the media: FiveThirtyEight’s electoral vote tilegram from their 2016 Election Forecast and NPR’s one-tile-per-state tilegram.

This is just a first step. The county-level US tilegram we set out to produce is still a mammoth effort away. (Our 50-state tilegram took a day to produce; how long would a 3,000-county tilegram take?) Our great hope is that news designers will be able to produce new tilegrams for interactive and print pieces, or use the ones we are sharing. And that developers will make open-source contributions to this fledgling effort. We want to share our progress now at this historic political moment, while demographic studies rule the news.

We are also grateful to Google News Lab for joining us in this crucial effort. Simon Rogers provides an excellent introduction to the tool, too.

Please do let us know (@pitchinc) if you use these tilegrams, or make your own.

Happy (socially just) map-making!

Flying through data views

How do you make a cinematic experience out of real-time dashboard data? If it’s bound to geographic locations, it can be graphically mapped onto a virtual 3D landscape, which you could fly over and through, as on a virtual helicopter tour. Or peer directly down onto it, as on a virtual satellite. And since we’re in a virtual universe, why not have a virtual aircraft that moves seamlessly between both modes?

These are the questions we were asking as we began brainstorming what would become Norfolk Southern PULSE, the large-scale data art installation we created for the lobby of the train company’s Atlanta offices. Our dataset consisted of train “movements”—when a given train passed a given station. On one hand, we knew that we wanted to provide a bird’s-eye “overview” (pun intended) for context—but that we’d also want to dive down to individual trains on the ground, to convey their physical momentum. And since this was a non-interactive installation, the movement of the virtual camera needed to feel natural as it glided around and hovered. We would need to automate our motion design, down to every camera rotation and movement.

The system we came up with was to model different camera “behaviors”. Each behavior models a dynamic camera movement around some point, whether it’s a fixed aerial location or an animation on the surface. A central controller, the same one in charge of fetching and rendering data, was responsible for assigning camera behaviors one at a time. A behavior has no knowledge of what came before, so its first responsibility is to transition smoothly from any possible position/rotation to its own holding pattern, which might be a standstill or some kind of continuous motion (like a revolving movement, for example).

Some Sample Code

We cooked up a small, open-source example, including the “Overhead” bird’s-eye view discussed above, plus another fixed “Landscape” behavior, and two dynamic ones: “Spin” and “Trail”. Check it out on GitHub and give it a spin. Fork it, build some new behaviors (or clean up our hasty implementation), and let us know!

In the sample code, each behavior is encapsulated as its own class in the behaviors/ directory, and each implements two handlers: begin(), which sets up the transition tween from the previous camera location/rotation, and animate(), which handles the frame-by-frame animation. The main work in each case is determining where the camera should head to (__toPosition) and what it should be looking at (__toLookAtPosition). The rest is fairly boilerplate: __storeFromPosition makes a note of where the camera is and what it’s looking at, and then all that remains is to kick off the tween in __beginAnimation.
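Put together, a behavior’s skeleton looks roughly like the condensed sketch below. The handler names match the description above, but the details are simplified and the target positions are made up, so see the repo for the real implementations:

```js
// Condensed skeleton of a camera behavior (assumes THREE is in scope, as in the sample).
// Handler names follow the description above; the bodies here are simplified placeholders.
class OverheadBehavior {
  constructor(camera) {
    this.camera = camera;
  }

  begin() {
    this.__storeFromPosition();                            // note where the camera is and what it's looking at
    this.__toPosition = new THREE.Vector3(0, 200, 0);      // where the camera should head to
    this.__toLookAtPosition = new THREE.Vector3(0, 0, 0);  // what it should be looking at
    this.__beginAnimation();                               // kick off the transition tween
  }

  animate() {
    // frame-by-frame motion once the transition has finished:
    // a holding pattern, a slow drift, or a standstill.
  }

  __storeFromPosition() {
    this.__fromPosition = this.camera.position.clone();
    // (the sample also records the current look-at target here)
  }

  __beginAnimation() {
    // tween from __fromPosition to __toPosition while interpolating the look-at point
    // (the sample project uses a tweening library for this)
  }
}
```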

The basic THREE.js scene is set up in World.js, which is also responsible for calling animate() on the current camera behavior. The buttons for switching behaviors are implemented as a simple React component called <Selector />.

Some Special Challenges

  • The camera is special among objects, since you need to manage not just its position, but the point in space to “look at”. These two points can move independently, but their timing must be well coordinated so that the camera doesn’t glance the wrong way at any point during the transition.
  • Dynamic camera rotation was an especially mind-bending problem. It was easy in the beginning to end up with a camera that was upside-down, or pointing out to space, or pointing upside-down into space! The globe turned out to be a grounding reference, however, once we realized that the camera’s “up” direction should always be parallel to its position vector, since the globe was positioned at the origin of the scene (see the sketch after this list).
  • We used tweening functions to give a more physical feel to the motion, stopping well short of implementing any kind of Earth-like physics. That said, tweening the X, Y, and Z coordinates at the same rate can often feel stiff and unnatural to anyone who’s experienced gravity, or air travel. Rather than trying to perfect a single global physics model, we found that ad hoc solutions based on specific animation requirements were a sounder investment of time.
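In THREE.js terms, the “up” fix from the second bullet boils down to a couple of lines; a minimal sketch:

```js
// Minimal sketch of the orientation fix: with the globe at the origin,
// the camera's "up" should always point away from the globe's center.
function orientCamera(camera, lookAtTarget) {
  camera.up.copy(camera.position).normalize(); // keep "up" parallel to the position vector
  camera.lookAt(lookAtTarget);                 // now lookAt won't flip the camera upside-down
}
```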

This was a huge learning experience for us in the game-like realms of physics and motion design, but applied to data visualization. We were guided by a desire to present not just information but a bit of visual delight in a busy area, where workers take their brown bag lunches or casual meetings. Whatever story the data may tell, we hope to make the telling always feel smooth and unhurried.

Seeing Climate Change through Google Search

We recently released a project in collaboration with some brilliant folks at Google Trends and WebGL extraordinaire Michael Chang. Google Trends, built on Google Search data, takes a search (a term or query) and shows how often it is entered relative to total search volume across various regions of the world. These results can also be viewed as time series going back to 2004, along with each search’s related terms and topics.


When Simon Rogers approached us earlier this summer and asked if we’d be interested in a complex project with a lot of moving parts and unpredictability on a short timeline, we said yes. Google Trends wanted to create an experience based on climate change queries to unveil their new API to the public at the GEN Summit conference in Barcelona.

The Global Editors Network (GEN) is a community of editors and innovators dedicated to creating programs in order to reward, encourage and provide opportunities for media organizations and journalists in the digital newsroom. GEN Summit is an annual conference that brings together the “leading minds of the media industry to discuss the future of news” and showcase some of the innovations being made in the space.

So, a slightly different audience than we are used to. How could we create an interactive experience geared toward journalists? How could we layer billions of searches from all over the globe and add some visual flair, all while respecting journalism’s obligation to inform clearly?

The data from Google Trends is anonymized, aggregated, and normalized, allowing reporters to find and compare salient points in world affairs. We compiled a list of topics related to climate change searches (like Drinking Water, Air Pollution, and Wildlife) and compared the quantitative data (volume of searches over time) across the major cities of the world to see how ‘important’ each topic was in those places. We also had really interesting qualitative data: the literal queries entered into the search field.

[Image: snapshot of query data for New York]

While the detailed data on small towns and the volume of searches in major cities were really interesting, the queries themselves were the home run, so to speak. For journalists who care about the civic health of our cities and nations, they offer the potential to identify trending questions before major issues bubble to the surface without a proper platform to debate them.

The final piece is a linked experience between a multi-touch wall and four Chromebook Pixels.


The multi-touch wall shows queries popping up as the globe spins. Using real data, the piece simulates the activity of users constantly querying Google on our eight topics around global warming. The Chromebook Pixels let users dive into some of the more qualitative data and excerpts of recent publications on those topics in major cities and towns around the world.


As expected, we were prototyping and tweaking the visual and interaction design daily during the final two weeks leading up to the GEN Summit. All our work paid off, though: the Summit went smoothly and the piece was well received. Thanks to our collaboration with Chang, a Chrome Experiment version of the multi-touch wall was showcased a few days later in New York City.


See the project here.
You can also read the Washington Post’s write-up about the project here.

Synaptic Motion: Us and The Brain

The brain is a fascinating mass of connections, or synapses, where neurons constantly form constellations to help us make sense of our world. So when we were approached by Jodi Lomask, founder and choreographer of Capacitor, a performance art company that explores non-traditional combinations of arts and sciences through movement, we were intrigued by the thought of helping visualize what the brain’s activity looks like in a live performance. Their latest project, Synaptic Motion, debuted at the Yerba Buena Center in San Francisco this September.

Synaptic Motion intertwined music, visuals, and dance to show the brain’s many complex processes.


The performance comprised custom-composed audio, a live MC, an array of costumes, dance, large-scale movement sculptures, and floor and wall projections.

Jodi had EEG data recorded from her brain at the UCSF Neuroscape Lab in order to see what her thought processes looked like and how the outputs changed as she moved from her most active states to sitting completely still. We analyzed the scans and began to ideate graphical metaphors representing the transitions of the brain and the different phases of thought. With EEG data in hand and the vision in mind, the new challenge was synchronizing the brain and sound wave data with movement.

Sound artist Oni Martin Dobrzanski composed a soundtrack with specific pieces for each scene of the performance, audibly representing how the brain responds to hopelessness, a caffeine rush, a seizure, or an idea. Each song had so many granular layers that we felt our visualizations would be most complementary as layers of simple shapes that formed grids and dynamic compositions.


Using Processing, we loaded Dobrzanski’s audio and synced our sketches to its frequencies. But as a data visualization studio new to the dance production process, we felt we needed to better understand what it means to work in the performing arts. So our team dropped by Capacitor’s rehearsals to speak with the dancers, scientists, and media artists. We demoed initial sketches at a few of those rehearsals, felt good about the direction we had taken, and continued to iterate.

For one of those iterations, we developed a ‘constellation’ algorithm that generated lines moving through a grid to mimic the connections made between neurons. Whenever the lines passed the composition’s threshold, a new constellation was generated. Despite its simplicity, this visualization ended up being one of the favorites. To make it more dynamic and even more awesome, we made a 3D version.
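For a sense of the mechanics, here is a rough JavaScript re-creation of the idea; the original was a Processing sketch, and the threshold logic below is our loose interpretation rather than the production code:

```js
// Rough re-creation of the constellation idea (illustrative, not the production Processing sketch):
// lines drift through a grid, and each time one crosses a threshold in the composition,
// a new "constellation" of grid nodes is snapped into place and drawn as connected segments.
const GRID = 20;         // grid spacing in pixels
const THRESHOLD_X = 400; // illustrative threshold line across the composition

function stepConstellations(lines, constellations) {
  for (const line of lines) {
    const prevX = line.x;
    line.x += line.vx; // move the line through the grid
    line.y += line.vy;
    if (prevX < THRESHOLD_X && line.x >= THRESHOLD_X) {
      // the line just crossed the threshold: snap a new constellation to nearby grid nodes
      const nodes = [];
      for (let i = 0; i < 4; i++) {
        nodes.push([
          Math.round((line.x + (Math.random() - 0.5) * 6 * GRID) / GRID) * GRID,
          Math.round((line.y + (Math.random() - 0.5) * 6 * GRID) / GRID) * GRID,
        ]);
      }
      constellations.push(nodes); // drawn as connected segments each frame
    }
  }
}
```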

[Image: 3D rotation of the constellation visualization]

We encoded, packaged, and sent our favorite animations to Mary Franck, the projection artist of Synaptic Motion and the mastermind of the show’s visuals. Mary pieced together our work, weaving her graphics and ours into the storyline and overall concept of the shifting states of the mind. It was exciting to build visualizations for a piece where we had no idea what the final output would look like. Only at Yerba Buena’s opening night did we finally see the production in its entirety.

Even as collaborators on this project, we could not have anticipated how powerful and immersive the experience would be for the audience. Throughout the show, the crowd was encouraged to roam and take in all perspectives while dancers emerged from the dark, weaving through the crowd and luring our eyes to the center of it all: the dance floor. As viewers and contributors, we were captivated by the complex idea, completely engaged with the performance, and left feeling like we had just exited a scene from a sci-fi film.

What we valued most about working with Capacitor and all the creatives behind Synaptic Motion is how, collectively, we took something difficult to imagine visually and made it tangible and stimulating.

Our Team on this project:

Wesley Grubbs: Creative Director
Anna Hodgson: Art Direction
Shujian Bu: Lead Engineer
Nick Yahnke: Engineer