How do you make a cinematic experience out of real-time dashboard data? If it’s bound to geographic locations, it can be mapped onto a virtual 3D landscape that you fly over and through, as on a virtual helicopter tour. Or you could peer straight down onto it, as from a virtual satellite. And since we’re in a virtual universe, why not have a virtual aircraft that moves seamlessly between both modes?
These are the questions we were asking as we began brainstorming what would become Norfolk Southern PULSE, the large-scale data art installation we created for the lobby of the train company’s Atlanta offices. Our dataset consisted of train “movements”—when a given train passed a given station. On one hand, we knew that we wanted to provide a bird’s-eye “overview” (pun intended) for context—but that we’d also want to dive down to individual trains on the ground, to convey their physical momentum. And since this was a non-interactive installation, the movement of the virtual camera needed to feel natural as it glided around and hovered. We would need to automate our motion design, down to every camera rotation and movement.
The system we came up with models different camera “behaviors”. Each behavior describes a dynamic camera movement around some point, whether that’s a fixed aerial location or an animation on the surface. A central controller, the same one in charge of fetching and rendering data, assigns camera behaviors one at a time. A behavior has no knowledge of what came before, so its first responsibility is to transition smoothly from any possible position/rotation into its own holding pattern, which might be a standstill or some kind of continuous motion (a revolving movement, for example).
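The behavior-plus-controller arrangement can be sketched roughly as follows. This is illustrative only, not the PULSE source: the class and method names here are our own stand-ins, and the camera is a plain object rather than a THREE.js camera so the sketch runs anywhere.

```javascript
// Illustrative sketch of the behavior contract. Each behavior only learns
// the camera's current state at hand-off time, via begin().
class CameraBehavior {
  // Called once when the controller hands this behavior the camera.
  begin(camera) {
    throw new Error('subclasses must implement begin()');
  }
  // Called every frame while this behavior owns the camera.
  animate(camera, time) {
    throw new Error('subclasses must implement animate()');
  }
}

// The central controller assigns behaviors one at a time.
class CameraController {
  constructor(camera) {
    this.camera = camera;
    this.behavior = null;
  }
  setBehavior(behavior) {
    this.behavior = behavior;
    behavior.begin(this.camera); // transition from wherever the camera was
  }
  tick(time) {
    if (this.behavior) this.behavior.animate(this.camera, time);
  }
}
```

Because each behavior starts from whatever state the camera is in, the controller can switch behaviors in any order without special-case transition logic.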
Some Sample Code
We cooked up a small, open-source example, including the “Overhead” bird’s-eye view discussed above, plus another fixed “Landscape” behavior, and two dynamic ones: “Spin” and “Trail”. Check it out on GitHub and give it a spin. Fork it, build some new behaviors (or clean up our hasty implementation), and let us know!
In the sample code, each behavior is encapsulated as its own class in the `behaviors/` directory, and each implements two handlers: `begin()`, which sets up the transition tween from the previous camera location/rotation, and `animate()`, which handles the frame-by-frame animation. The main work in each case is determining where the camera should head (`__toPosition`) and what it should be looking at (`__toLookAtPosition`). The rest is fairly boilerplate: `__storeFromPosition` makes a note of where the camera is and what it’s looking at, and then all that remains is to kick off the tween.
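A concrete behavior in this shape might look something like the sketch below. The double-underscore names mirror the convention described above, but the class itself and its spin math are hypothetical, and plain `{x, y, z}` objects stand in for `THREE.Vector3` so the sketch is self-contained.

```javascript
// Hypothetical "Spin"-style behavior: transition to a point on a circle
// around a center, looking at that center. Not the actual repo code.
class SpinBehavior {
  constructor(center, radius, height) {
    this.center = center;
    this.radius = radius;
    this.height = height;
  }
  __storeFromPosition(camera) {
    // Note where the camera is and what it's looking at right now.
    this.__fromPosition = { ...camera.position };
    this.__fromLookAtPosition = { ...camera.lookAtPosition };
  }
  begin(camera) {
    this.__storeFromPosition(camera);
    // Where the camera should head: a point on the spin circle...
    this.__toPosition = {
      x: this.center.x + this.radius,
      y: this.center.y + this.height,
      z: this.center.z,
    };
    // ...and what it should be looking at: the center of the spin.
    this.__toLookAtPosition = { ...this.center };
    this.t = 0; // tween progress, 0..1
  }
  animate(camera, dt) {
    this.t = Math.min(1, this.t + dt);
    const lerp = (a, b) => ({
      x: a.x + (b.x - a.x) * this.t,
      y: a.y + (b.y - a.y) * this.t,
      z: a.z + (b.z - a.z) * this.t,
    });
    // Both points tween on the same clock, keeping the gaze coordinated.
    camera.position = lerp(this.__fromPosition, this.__toPosition);
    camera.lookAtPosition = lerp(this.__fromLookAtPosition, this.__toLookAtPosition);
  }
}
```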
The basic THREE.js scene is set up in `World.js`, which is also responsible for calling `animate()` on the current camera behavior. The buttons for switching behaviors are implemented as a simple React component.
Some Special Challenges
- The camera is special among scene objects: you must manage not just its position but also the point in space it should “look at”. These two points can move independently, but their timing must be well coordinated so that the camera doesn’t glance the wrong way at any point during the transition.
- Dynamic camera rotation was an especially mind-bending problem. It was easy in the beginning to end up with a camera that was upside-down, or pointing out to space, or pointing upside-down into space! The globe turned out to be a grounding reference, however, once we realized that the camera’s “up” direction should always be parallel to its position vector, since the globe sat at the origin of the scene.
- We used tweening functions to give a more physical feel to the motion, stopping well short of implementing any kind of Earth-like physics. That said, tweening the X, Y, and Z coordinates at the same rate can often feel stiff and unnatural to anyone who’s experienced gravity, or air travel. Rather than trying to perfect a single global physics model, we found that ad hoc solutions based on specific animation requirements were a sounder investment of time.
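Two of the tricks above can be sketched in a few lines. The function names and easing choices here are our own illustrations, not the installation’s actual code; in THREE.js the first trick would amount to something like `camera.up.copy(camera.position).normalize()`.

```javascript
// Trick 1: keep the camera's "up" parallel to its position vector.
// This works because the globe sits at the origin, so the position
// vector always points "away from the ground".
function upFromPosition(position) {
  const len = Math.hypot(position.x, position.y, position.z);
  return { x: position.x / len, y: position.y / len, z: position.z / len };
}

// Trick 2: rather than tweening every axis at the same linear rate,
// give each axis its own easing curve (an ad hoc choice per animation).
const easeInOutQuad = (t) => (t < 0.5 ? 2 * t * t : 1 - Math.pow(-2 * t + 2, 2) / 2);

function tweenStep(from, to, t) {
  const horiz = easeInOutQuad(t);      // horizontal glide eases in and out
  const vert = 1 - Math.pow(1 - t, 3); // altitude settles early (ease-out cubic)
  const mix = (a, b, e) => a + (b - a) * e;
  return {
    x: mix(from.x, to.x, horiz),
    y: mix(from.y, to.y, vert),
    z: mix(from.z, to.z, horiz),
  };
}
```

Letting altitude arrive ahead of the horizontal glide is one example of the kind of ad hoc tuning that read as more natural to us than a uniform tween.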
This was a huge learning experience for us in the game-like realms of physics and motion design, but applied to data visualization. We were guided by a desire to present not just information but a bit of visual delight in a busy area, where workers take their brown bag lunches or casual meetings. Whatever story the data may tell, we hope to make the telling always feel smooth and unhurried.