Using Motion to Express Change

A lot has been written about the concept of visualizing the user interface as a reflection of the application’s state. This idea has been ushered in over the last couple of years by an emergence of code libraries — leaning on declarative patterns that describe the what instead of the how — that seek to absorb the task of managing changes so that programmers can focus on expressing the different states of their interfaces. This mental model can drastically reduce the complexity of managing views as changes occur through time.

Such a paradigm works particularly well in reducing the overhead of having to imperatively apply updates to an interface as the result of user interactions. However, there is an effect that has generally been at odds with this stance: if the interface is an expression of the application state, then changes to the state tend to be experienced discretely. That is to say, the progression of change is perceived instantaneously by the viewer.

Sometimes this is fine and even the desired outcome. Other times, though, it can lead to an unintuitive experience as the change becomes abrupt and disconcerting. There is no trail explaining how something changed, and the difference between the states has to be worked out by the user, which can impose a considerable cognitive burden and become a distraction.

Even though the instant transformation of the interface is accurate, it may not be the clearest expression of the change that just occurred. The relationship between intention and effect might become harder to grasp. Motion can be a great aid in communicating relationships and clarifying the changes being experienced in a more intuitive way. All in all, we experience reality — quantum substrate aside! — as a continuum that flows through changes. Moving interfaces can help people connect how A transformed into B.

In the realm of the digital, this has become particularly evident with touch devices, where gestures and changes are intertwined in their cause and effect. This can be vastly more intuitive for people, as it emulates more tangibly the fabric of reality they are used to interacting with.


Consider the case of a list containing a set of items: the action of moving, reordering, and so on doesn’t just affect the single item being acted on but the rest of the set as well, particularly the one it is “swapping places” with. Reality conveys to us that in order to put something in the place of something else, both things have to move. The change in overall state for the entire group can be harder to grasp when the order simply changes instantly; it takes a moment to reorient. Transitions and gesture-based interactions generally help connect these two states in a way that makes the interaction (the “what just happened”) more immediately understandable.

Yet there is also a tension between communicating clearly and taking too much time to communicate. Motion, wrongly applied, can lead to a feeling of slowness and unresponsiveness, particularly if the animation is divorced from the interaction itself (its intensity, velocity, direction, etc.). For example, when turning a light on or off, it is generally not necessary to describe the change in an artificially delayed way. The relationship between intention, effect, and familiarity is enough, even though the switch and the light might not be closely connected spatially. (Dimming a light, on the other hand, follows the speed of the action, giving a more tangible feeling of control over the change.)

Once an action has become extremely familiar, the transition between states can be less of a requirement and more of an impediment. That is one of the reasons why gestural motion is so valuable. It is not merely an imperative statement of the kind “animate from A to B” but an actual mapping of the interstitial states, resulting in motion that is concurrent with the action being performed — the faster the action, the faster the transition occurs.

One could say that an animation is worth being present if the clarity of change it provides is greater than the time it would take to adjust cognitively to the new state in its absence.

Beyond the cognitive load benefits, other studies show that animation can improve decision-making and even help people learn and remember spatial relationships. Animating between states can also help prevent change blindness. In short: animation can free up your brain power and make an interface easier to understand—the benefits can’t be ignored.

Val Head, A List Apart.

A Scent of Spring 🦋

It is challenging to bring realistic motion control to the web in a way that is ergonomic to implement and at the same time performant to execute. Mobile operating systems have baked many of these interactions into their frameworks and gesture models, so they are a more natural and expected resource, optimized at a deeper level. The web, however, remains a lot more bare-bones in its animation toolkit.

If declarative patterns proved convenient in managing complex application design, they have shown limitations in representing interstitial states as performant motion. For example, APIs that trigger re-renders for every calculated micro-state between changes, while offering a great development experience, impose too much of a penalty in common usage.
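As a rough sketch of that pattern (assuming React; the component and timings here are made up for illustration), driving an animation through component state means every frame of the tween runs through a full render:

```tsx
import React, { useEffect, useState } from 'react';

// Illustrative only: the animated value lives in state, so each animation
// frame calls setState and forces the component to re-render just to nudge
// an inline style.
function FadeIn({ children }: { children: React.ReactNode }) {
  const [opacity, setOpacity] = useState(0);

  useEffect(() => {
    let frame: number;
    const start = performance.now();
    const tick = (now: number) => {
      // Every micro-state between 0 and 1 becomes a re-render.
      const t = Math.min((now - start) / 300, 1);
      setOpacity(t);
      if (t < 1) frame = requestAnimationFrame(tick);
    };
    frame = requestAnimationFrame(tick);
    return () => cancelAnimationFrame(frame);
  }, []);

  return <div style={{ opacity }}>{children}</div>;
}
```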

On the other hand, one could connect time-defined transitions to the different states of the interface. However, time-based transitions can be quite a poor tool for representing fluid interactivity or natural motion. Because they take duration as a parameter, they tend to escape the control flow and anchor of the view. They are harder to map to consistent interstitial states of the application, they become awkward to interrupt, and they are unable to adapt to the intensity of the user action.

Generalizing physical motion through the application of motion curves is also quite hard to model reliably. Time-based animation gravitates in its own fictional space. So when it comes to portraying realistic motion, traditional CSS transitions based on duration and easing curves are largely ineffective.
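For contrast, here is a minimal, hypothetical sketch of the time-based approach, where a fixed duration and easing curve are decided up front and cannot respond to the interaction that triggered them:

```tsx
import React, { useState } from 'react';

// Illustrative only: 300ms and ease-in-out are guesses made ahead of time.
// The motion cannot adapt to how fast or forcefully the user acted, and
// interrupting it mid-flight starts a new tween from wherever the element
// happens to be, again over the full declared duration.
function Panel() {
  const [open, setOpen] = useState(false);

  return (
    <div
      onClick={() => setOpen(!open)}
      style={{
        transition: 'transform 300ms ease-in-out',
        transform: open ? 'translateX(200px)' : 'translateX(0)',
      }}
    >
      Toggle me
    </div>
  );
}
```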

Some libraries have looked elsewhere to replicate realistic motion consistently and continuously. One such case is the use of “springs” to express natural motion as a result of tension, mass, and resistance rather than duration and curves. But physics-based APIs can be harder to integrate into any sufficiently complex interface system. It is not trivial to express these values within the encapsulation provided by render functions and still retain clean user interface components. This tension has gone largely unresolved in the web space.
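The underlying idea can be sketched in a few lines: a damped spring integrated frame by frame, where the motion emerges from tension, friction, and mass instead of a duration. The names and constants below are illustrative, not any particular library’s API:

```ts
interface SpringState {
  value: number;
  velocity: number;
}

// One integration step of a damped spring pulling `value` toward `target`.
function stepSpring(
  state: SpringState,
  target: number,
  dt: number,     // seconds since the last frame
  tension = 170,  // pulls the value toward the target
  friction = 26,  // resists velocity so the motion settles
  mass = 1
): SpringState {
  const displacement = state.value - target;
  const springForce = -tension * displacement;
  const dampingForce = -friction * state.velocity;
  const acceleration = (springForce + dampingForce) / mass;

  const velocity = state.velocity + acceleration * dt;
  const value = state.value + velocity * dt;
  return { value, velocity };
}

// Driven frame by frame, the value converges on the target with no duration:
// let s: SpringState = { value: 0, velocity: 0 };
// s = stepSpring(s, 100, 1 / 60);
```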

The recent development of React hooks offers one way of encapsulating the physics of spring motion while keeping component views largely uncluttered by the added complexity. There is also a wonderful library called react-spring that integrates all these ideas, seeking to bring together the benefits of physics-based animation, declarative APIs, and performant solutions, and that is quite a joy to use.
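As a small, illustrative example (the component and values are made up), a spring can be declared with react-spring’s useSpring hook while the component remains a plain declarative view:

```tsx
import React, { useState } from 'react';
import { useSpring, animated } from 'react-spring';

function Card() {
  const [open, setOpen] = useState(false);

  // No duration: the motion comes from spring physics, and re-targeting
  // mid-flight carries the current velocity along naturally.
  const style = useSpring({
    transform: open ? 'translateY(0px)' : 'translateY(40px)',
    opacity: open ? 1 : 0.5,
    config: { tension: 210, friction: 20, mass: 1 },
  });

  return (
    <animated.div style={style} onClick={() => setOpen(!open)}>
      Tap to toggle
    </animated.div>
  );
}
```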

Browsers are also bringing more control into the user’s hands — like the reduced-motion media query — to help tune the experience to the user’s needs or preferences. This makes it more feasible to use motion in sophisticated interfaces.
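Honoring that preference can be as simple as checking the media query before driving any motion; how the flag is acted on is up to the interface, so the snippet below is only a sketch:

```ts
// The prefers-reduced-motion media query exposes the user's system setting.
const prefersReducedMotion = window.matchMedia(
  '(prefers-reduced-motion: reduce)'
).matches;

if (prefersReducedMotion) {
  // Skip or simplify animations: apply state changes instantly instead.
}
```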

The tools available for both declaring interfaces and using motion to connect state changes are maturing to the point where it is both convenient and manageable to take advantage of them. It charts a more auspicious path forward for bringing meaningful motion control to web interfaces with a good developer experience.

Demo!

Let’s bring it all together with an example. Consider the case of moving blocks in the new WordPress editor. The ability to move content is very powerful and a crucial part of the editing experience. However, the interaction itself can at times be disorienting because two blocks have instantly swapped places. Under the new state, there is a brief period of reorientation as the user makes sense of where things have moved and what the new order is. This can be exacerbated when moving a group of blocks all at once. Bringing motion into this fundamental interaction could help explain it better.

The first video shows how block reordering looks today in the editor. The second one shows the same interactions but using motion to describe the changes, giving a better sense of flow and clarity.

Note — this is currently an experiment and not indicative of something meant to be shipped as is. Also props to Riad for helping build this prototype.
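For a rough idea of how such a transition can be built (this is not the prototype shown above), a FLIP-style approach measures block positions before and after the reorder and animates each block from its old spot to its new one. A spring could drive the interpolation instead of the fixed tween used here for brevity:

```ts
// Hypothetical helper: `reorder` applies the new block order to the DOM
// instantly (the "new state"); the measured difference is then animated away
// with the Web Animations API.
function animateReorder(container: HTMLElement, reorder: () => void) {
  const blocks = Array.from(container.children) as HTMLElement[];
  const before = new Map(blocks.map((el) => [el, el.getBoundingClientRect()]));

  reorder();

  for (const el of blocks) {
    const first = before.get(el)!;
    const last = el.getBoundingClientRect();
    const dy = first.top - last.top;
    if (dy === 0) continue;

    // Start each block at its previous position and let it glide to the new one.
    el.animate(
      [{ transform: `translateY(${dy}px)` }, { transform: 'translateY(0)' }],
      { duration: 200, easing: 'ease-out' }
    );
  }
}
```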

The Language of Gutenberg

The beginning of my WordCamp Europe presentation last weekend focused on explaining some of the principles behind the design choices for how to gradually introduce blocks in the WordPress editing experience. Miguel Fonseca wrote a great article, titled The Language of Gutenberg, based on an earlier talk we did together at Zaragoza this year that dives more deeply into the subject.

Drawing from Life

For the past year and a half, one of my personal goals has been to start drawing on a regular cadence, forcing myself to learn from observation, develop technique, and so on. I’ve had the good fortune of finding a great academy, taught by Poul Carbajal in Madrid, with demanding standards.

The following work was drawn from life, with Conté pencils and white pastel on grey-toned paper, measuring about 75 × 60 cm.