
Abstractions and the burden of knowledge

When should abstractions be made in a codebase? Since open-sourcing Calypso I’ve spoken on a few occasions about how abstractions lead to burdens of knowledge, and how we need to be careful about the kind of concepts we create. In the development values for the project we write: “we know that no abstractions are better than bad abstractions”. Why is this an important value? In a post a couple of weeks ago, Lance Willett asked me to follow up on what that implies and the kind of impact it can have on learning a project.

From a philosophical point of view, abstractions are concepts we put in place to interpret events, to think about reality through a certain formality instead of dealing with the inherent plurality of reality itself. They represent ideas and act as prejudgements for the shape of our perception. In this regard, they cannot, by definition, stand up to the scrutiny of reality. They mediate our understanding by existing between reality and our representation of it. They are the razors with which we try to make the vastness of reality somehow apprehensible; yet they are inherently false, as they attempt to reduce and formulate what is vast and different under principles and categories.

In the world of development, abstractions are essentially a way of organising complexity. The problem is that complexity rarely vanishes. Instead, it remains hidden under those layers of meaning, shielded by our abstractions. The simplification they seek to bring usually ends up adding complexity on top of existing complexity.

When carefully chosen, they can augment the understanding of how things work by teaching the underlying complexity accurately, but they generally come at a cost: they add to the pile of things you need to know to operate within the codebase. By absorbing structural complexity they gradually take the place of what needs to be learned. Very often, given our propensity to create early (and misguided) abstractions, they solidify practices and force certain meanings that are sometimes best kept loose.

That is where being aware of the kind of abstractions you are forcing people to learn becomes important in a shared project. Abstractions, at their best, manage the increasing complexity of any system, and may be worth the tradeoff in certain situations. But, at their worst, they add a new layer of cognitive burden you need to cope with, distorting what is actually going on and imposing the wrong kind of conceptual hierarchy or sameness among entities. Names and groups, for example, come at the cost of having to decide where something belongs. Does a given thing fall under P or Q? Should we create R for these things that are partially P and Q at the same time? Is our decision to have Ps and Qs forcing us to only create Ps and Qs?

A fresh example of this tradeoff that comes to mind is a small abstraction we created in Calypso to manage application state, called createReducer. Its intention is well meant: simplify the boilerplate and the interface for handling the serialization of pieces of the state tree so developers can move faster. Yet by taking the conceptual higher ground it sometimes conveys more than it was meant to achieve. People looking through the codebase would see it as the semantic way in which they ought to create any new reducer; since the interface appears simple enough, they would default to using it. Was that the intention? Perhaps. But now something that could have been a simpler reducer inherits a complex behaviour by using a utility that appears simple.
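What follows is only a sketch of the shape such a helper can take; it is not Calypso’s actual createReducer code, and the action types, schema handling, and reducer below are invented for illustration.

    // Illustrative sketch only; not Calypso's actual createReducer implementation.
    // A handler map hides the usual switch statement, and here passing a schema also
    // opts the reducer into a hypothetical serialization behaviour: the hidden extra
    // it teaches to everyone who adopts it.
    function createReducer( initialState, handlers, schema ) {
        return ( state = initialState, action ) => {
            // Hypothetical persistence hook wired up behind the scenes.
            if ( schema && ( action.type === 'SERIALIZE' || action.type === 'DESERIALIZE' ) ) {
                return state; // schema validation would happen here
            }
            const handler = handlers[ action.type ];
            return handler ? handler( state, action ) : state;
        };
    }

    // The plain alternative: more boilerplate, but nothing implicit to learn.
    function items( state = {}, action ) {
        switch ( action.type ) {
            case 'ITEM_RECEIVE':
                return { ...state, [ action.item.id ]: action.item };
            default:
                return state;
        }
    }

The helper looks smaller at the call site, yet every reducer written with it quietly inherits the serialization path, which is precisely the kind of teaching a codebase does whether it intends to or not.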

How do you overcome these situations? Naming things properly is of course important, yet no name is strictly perfect; education and documentation help, but understanding what you are reinforcing at a design level may be even more important, since abstractions are always teaching patterns. Which comes back to the first idea: abstractions naturally become burdens of knowledge, and you need to decide when and where you are willing to pay their price.

The Kuleshov Effect

Andrei Tarkovsky once defined cinema as sculpting in time. The most distinct feature of web design, compared to other forms of design, is precisely that it exists in time. If graphic design was often thought of as the corollary of painting, then web and interaction design could very well be the corollary of filmmaking. As such, it seems to be in its infancy when it comes to the lessons discovered by cinema through its short centennial history.

During the early 20th century, film creators realized that editing and montage were core aspects of the craft, a discovery so redefining that many held it to be the one distinct essence of cinema: the one thing that moved it from a mere technical advance in photography and motion study to an art form capable of wonders. One of these discoveries was a particular study known as the Kuleshov effect, demonstrated by Lev Kuleshov, a Russian filmmaker of the golden era of montage exploration.

The experiment was quite simple, but with profound consequences. He chose a fragment of a close-up shot of a Soviet actor staring, one in which the acting was particularly neutral. He then juxtaposed that fragment with other pieces of film: one of a plate of food, another of a dead girl in a coffin, and another of a child playing.

He showed this assembled sequence to an audience, and the reactions were remarkable. People especially lauded the acting, in which the actor could so profoundly represent hunger (looking at the food), sorrow (looking at the dead woman), and nostalgia (looking at the child playing). The aesthetic consequences of this realization are captivating, and extremely vigorous. 1

This was the exact same shot of the actor doing nothing in particular, eminently not acting. However, as with most true art forms, aesthetic representation is synthetic. The viewer fills the gaps, connects the pieces, and infers more than what is plainly in the material. In particular, knowledge, concepts, and experiences are being derived, created in that empty gap where two fragments connect. There was no “hunger” implicit in the acting (it wasn’t even the intention of the actor) nor in the food itself. The sentiment of hunger exists only in the unfolding of time created by the assembling of these two fragments. 2

This, of course, has a gigantic impact on cinema, and good directors know how to work with it to achieve a sublime beauty that goes beyond what seems to be represented in each screen fragment. It also matters greatly to actors, obviously, because their performance is not just what they intend to act, but how their shots are pieced together: what they look at, and what comes before and after. Good actors know this too.

And now, back to the field of interaction design: how does this affect those who practice it? I believe it has significant consequences here all the same, largely unexplored. There’s a distinct lack of thought around this in-between area where connections are created, emerging into something that cannot be reduced to the parts. What happens when someone goes from one page to the other? What happens in the switch of context? Do web designers acknowledge there is more being created in the viewer than what they explicitly intended to put there? There are many studies, theories, and practices revolving around what is laid out on the page, or even around the flow and nominal succession of stages. But what about that invisible instant when two things clash?

The most web developers have concerned themselves with is transition states, usually trying to even out the journey, looking to move smoothly from one state to the other. That’s a pre-editing stage of realization: make everything seem like a continuum, one sequence. Film montage discovered that time (in its cinematic sense) is created beyond the singularity of each screen, beyond shot sequences; that the continuum is sculpted at a higher level, in that whole dimension that transcends individual pieces and gives tremendous creative power.

In a way, web design needs to find its own montage lessons to control its less tangible experiences, to control the effects that are created when screens are switched (especially as the viewer is often choosing the path); when what is being designed is not just the specifics of a screen, or the abstract notion of a flow, but the gap between different fragments of an experience evolving in time.

Notes:

  1. Truth be told, the last fragment was actually supposed to be a woman on a divan, casting an expression of desire on the actor. My apologies. But it functions the same way; I’ve grown accustomed to this variation of the experiment, as it’s the one I’ve come to carry with me, and I enjoy its more complex effect better.
  2. Described with precision by Eisenstein when he said a series of two film fragments is to be regarded not as their sum, but as their product.

Theme Experience (THX) in core WordPress

For the past couple of releases of WordPress I’ve had the privilege of working on revamping the admin theme screens, probably my biggest core contribution since developing Twenty Eleven. Working on the WordPress.com theme showcase was one of the first side projects I did at Automattic, back when the Theme Team was just Lance, Ian, and me; I’ve worked on its many iterations since then, so I’m glad I could bring part of that knowledge to help improve the core theme screens.

The project was initially called THX38 and started as a plugin during the 3.8 release cycle. After some user tests we settled on a complete revamp of the experience, using client-side technologies to make it faster and more responsive. The design removed most of the text on the screen, allowing people to focus primarily on the theme screenshots. (I’m personally fond of the arrow navigation system that lets you browse themes casually with the keyboard, maybe with a coffee cup in the other hand. It’s a somewhat relaxing way of looking at themes.)

It uses Backbone.js to power the client-side code, something many parts of the WordPress administration panels are starting to leverage. It’s a good bridge between the robustness of the current server-side codebase and more dynamic technologies that help achieve better user experiences on the client: searching for an installed theme, for example, is now instant. We managed to get this in very close to the release deadline. Another positive result is that we ended up with leaner code than before, something that will help with future improvements.
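As a rough illustration of why that search can be instant, here is a minimal Backbone-style sketch; it is not the actual wp-admin code, and the collection and attribute names are invented for the example.

    // Minimal sketch, not the wp-admin implementation; names are made up.
    var ThemeCollection = Backbone.Collection.extend( {
        // Filter the models already loaded in the browser; no request to the server.
        search: function( term ) {
            term = term.toLowerCase();
            return this.filter( function( theme ) {
                return theme.get( 'name' ).toLowerCase().indexOf( term ) !== -1;
            } );
        }
    } );

    var themes = new ThemeCollection( [
        { name: 'Twenty Eleven' },
        { name: 'Twenty Fourteen' }
    ] );

    themes.search( 'eleven' ); // returns the Twenty Eleven model without a round trip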

For 3.9 (just released, go grab it or upgrade!) I worked on using the same architecture and UX to power the install-themes screen, which wasn’t touched in 3.8. This meant interacting with the .org API for querying themes. Again, it made it just in time. The end result is a faster, image-focused browsing experience that also paves the way for future iterations in the versions to come. That’s something I really like: we’ve now had two consecutive releases improving core aspects of the theme experience. This pace aligns with the momentum that core WordPress development has been gaining over the last few releases, with rapid cycles during the year, and we are looking at even more improvements around multiple screenshots, the filtering system, and so on. We now have a solid base for them.

Thanks to everyone who helped get this out there through every milestone we passed, with special gratitude to Shaun, Andrew, and Gregory.

Tonesque

During hack day (a day devoted to trying things and exploring ideas at Automattic) I created Tonesque, a script that lets you get an average color representation from an image. It’s inspired by the Duotone theme, but I wanted to make something much easier to integrate with any theme. It also uses somewhat different color-processing logic. I’m putting it to the test here on my image posts to see how well it behaves.
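To give a sense of the underlying idea (Tonesque itself is PHP and its sampling logic is different), here is a rough browser-side sketch that averages an image’s pixels through a canvas.

    // Rough sketch of the idea in browser JavaScript; Tonesque itself is PHP and
    // samples differently. This naively averages every pixel of a same-origin image.
    function averageColor( img ) {
        var canvas = document.createElement( 'canvas' );
        canvas.width = img.naturalWidth;
        canvas.height = img.naturalHeight;

        var context = canvas.getContext( '2d' );
        context.drawImage( img, 0, 0 );

        // getImageData requires the image to be same-origin (or CORS-enabled).
        var data = context.getImageData( 0, 0, canvas.width, canvas.height ).data;
        var r = 0, g = 0, b = 0, count = data.length / 4;

        for ( var i = 0; i < data.length; i += 4 ) {
            r += data[ i ];
            g += data[ i + 1 ];
            b += data[ i + 2 ];
        }

        return 'rgb(' +
            Math.round( r / count ) + ',' +
            Math.round( g / count ) + ',' +
            Math.round( b / count ) + ')';
    }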

A new tale on a past idea

Once again, time to move things around here and continue with a renewed, dustless canvas, evolving a design that was started a while ago. I have since removed most of the JavaScript but kept, for the most part, the direction initiated with the trilogue experiment. The win is less time spent navigating the different content sections and a tighter, more balanced design. It was also a brilliant excuse to work on hacking Toolbox yet again.

With this attempt, I strive to make the reading experience a tad more focused. I also got to redesign the picture gallery display with something more flexible and appealing (to me, naturally), while still making use of WordPress core galleries.

Less friction in content input should mean, theoretically, more content in the end, or perhaps better content. (That has proven to be of great value with the picture gallery, since it is quite easy to upload a new picture and populate that section without having to edit any code at all.)

Another point that encouraged this housekeeping was that the balance among JavaScript, CSS, and content was starting to feel constricted, impeding further changes to the website. It was turning into a crystal tower in which any addition was an invitation to catastrophe.

Note: If you have previously left a comment around here, my apologies for keeping it in the still-pending queue. That was partly due to the unfinished, precarious state of the respond section.

Trilogue

Over the past two iterations of this website I went from a Tim Van Damme presentation (still viewable on my portfolio) to the usual bearings of a blog layout. Both worked fine, looked nice, and I was, for the most part, pretty fond of them. However, I began missing one when viewing the other, and the experience turned into something that felt too disjointed for my taste. I also felt I wanted a more prominent place for my photography, which was somewhat buried after the second iteration.

So a few weeks back I started drawing some initial mockups of an idea to combine those views. The first visual element I constructed was the navigation: writing, portfolio, and photography, as those were the three main areas I wanted to showcase up front on the site (hence the name of this post).

Trilogue mockup

I wanted a clean design, given that I would be adding real-time switching from one section to the other and it needed to feel responsive and smooth (or as responsive and smooth as it could get) without distracting the viewer. The navigation laid the foundation for the main composite elements of the design: the boxes. They would structure the different areas of the front page, and new interface buttons could be created for different functionality with ease and consistency.

It also allowed me to experiment with colors while the bigger picture revolves around white space: the frame responsible for setting the tone. The breathing room allowed by the areas of the design with no elements placed connects well with the text and boxes.

By far the most challenging effort was designing the different instances so as to require only a minimum of animation to switch from one to the other (I am still wrangling with it, though). The idea was that the pacing should follow the flow, as if the site slightly rearranged itself to accommodate the content it is about to display, in a continuum that remains organic to the overall experience. From the start I ruled out having the entire site work this way; I opted instead to display only the latest post on the front and connect that to the main blog logic.

I wanted to leave most of the animation in the CSS and use jQuery just to switch classes and add or remove markup. The added markup would trigger the new CSS and (on Safari and Firefox 4, at least) execute the desired animation. I believe it works quite well and allows for quick changes in the placement of elements and their animation without touching the underlying functionality and JavaScript foundation. Having a manageable set of classes and using clever selectors was of the utmost importance, given that I intend to add some shiny responsiveness in the near future. As a result, I have the markup, the added functionality, and the visualization mostly separated, which is good.
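Something along these lines, simplified and with invented class names rather than the site’s real selectors:

    // Simplified sketch with made-up class names; the real markup differs.
    // jQuery only swaps classes: the CSS transitions attached to those classes
    // (for example, a .show-photography rule animating widths and opacity)
    // do the actual animating.
    jQuery( function( $ ) {
        $( '.section-toggle' ).on( 'click', function() {
            var section = $( this ).data( 'section' ); // "writing", "portfolio" or "photography"

            // A single state class on the container triggers the transition
            // for every element that cares about it.
            $( '#front-page' )
                .removeClass( 'show-writing show-portfolio show-photography' )
                .addClass( 'show-' + section );
        } );
    } );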

For this iteration I started the process with unstyled markup, just polishing the inner workings and the bones of the site. After the main structure was working well I ported it to a pristine Twenty Ten theme, which proved to be extremely flexible (I still have some hardcoded bits I need to turn into functions). Then I went on to add details here and there; always an ongoing process. I also ended up experimenting with adding my latest Twitter post to better balance the front page. The gallery is a nice experiment in ease of use: I am using the Galleria jQuery plugin and applying it to the output of a WordPress gallery. Now I just need to upload a new image to the gallery and voilà.
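The wiring is roughly as follows; the theme path and options are placeholders and the Galleria calls are written from memory, so take it as an approximation rather than the exact code running here:

    // Approximation only: the theme path and options are placeholders.
    jQuery( function( $ ) {
        // WordPress wraps core galleries in a .gallery container of thumbnails.
        if ( $( '.gallery' ).length && window.Galleria ) {
            Galleria.loadTheme( 'js/galleria/themes/classic/galleria.classic.min.js' );
            Galleria.run( '.gallery', { height: 500 } );
        }
    } );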

There are still plenty of rough edges and areas needing further improvement to make the experience as seamless and accessible as it should be. For one, the browser’s back button should take you back when you are on the front page. I have yet to face the issue of touch devices: I’m still thinking about how to convey what the three main buttons stand for on devices deprived of hovering. I also still have to make anchors work as direct links and play nicely with the photo gallery. So, if you see anything out of place, send me a message or leave a comment around here. And if you have an iPad, tell me what behaves or shows up wrong.

Over the next few days I plan on finishing what is not working as expected, clearing out most of the piled-up code clutter, and hopefully starting on the different sections of the blog, which are still severely undeveloped.

I redesign this place more often
than I write on it.