
Abstractions and the burden of knowledge

When should abstractions be made in a codebase? Since open-sourcing Calypso I’ve spoken on a few occasions about how abstractions lead to burdens of knowledge, and how we need to be careful about the kind of concepts we create. In the development values for the project we write: “we know that no abstractions are better than bad abstractions”. Why is this an important value? In a post a couple of weeks ago, Lance Willett asked me to follow up on what that implies and the kind of impact it can have on learning a project.

From a philosophical point of view, abstractions are concepts we put in place to interpret events, to think about reality through a certain formality instead of dealing with the inherent plurality of reality itself. They represent ideas and act as prejudgements for the shape of our perception. In this regard, they cannot, by definition, stand up to the scrutiny of reality. They mediate our understanding by existing between reality and our representation of it. They are the razors with which we try to make the vastness of reality somehow apprehensible; yet they are inherently false, as they attempt to reduce and formulate what is vast and different under principles and categories.

In the world of development, abstractions are essentially a way of organising complexity. The problem is that complexity rarely vanishes. Instead, it remains hidden under those layers of meaning, shielded by our abstractions. The simplification they seek to bring usually ends up adding complexity on top of existing complexity.

When carefully chosen they can augment the understanding of how things work by teaching the underlying complexity accurately, but they do generally come at a cost. They come at the expense of adding to the pile of things you need to know to operate within the codebase. By absorbing structural complexity they gradually take the place of what needs to be learned. Very often, given our propensity to create early (and misguided) abstractions, they solidify practices and force certain meanings that sometimes are best kept loose.

That is where being aware of the kind of abstractions you are forcing people to learn becomes important in a common project. Abstractions, at their best, manage the increasing complexity of any system, and may be worth the tradeoff in certain situations. But, at their worst, they add a new layer of cognitive burden you need to cope with, distorting what is actually going on, and imposing the wrong kind of conceptual hierarchy or sameness among entities. Names and groups, for example, come at the cost of having to decide where something belongs. Does A fall under P or Q? Should we create R for these things that are partially P and Q at the same time? Is our decision to have Ps and Qs forcing us to only create Ps and Qs?

One fresh example that comes to mind for me on this tradeoff is a small abstraction we created in Calypso to manage application state, called createReducer. Its intention is well meant — simplify the boilerplate and the interface with which to handle the serialization of pieces of the state tree so developers can move faster — yet by taking a conceptual higher ground it sometimes conveys more than it was meant to achieve. People looking through the codebase would see it as the semantic way in which they ought to create any new reducer; since the interface appears simple enough, they would default to using it. Was that the intention? Perhaps. But now something that could have been a simpler reducer inherits a complex behaviour by using a utility that appears simple.
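
To make the tradeoff concrete, here is a minimal sketch of the idea behind such a utility. This is an illustration only, not Calypso’s actual implementation; the SERIALIZE and DESERIALIZE action types stand in for whatever persistence machinery the real helper wires in behind the scenes.

```js
// Illustrative sketch, not Calypso's real createReducer. It maps action
// types to handler functions and quietly wires every reducer built this
// way into a hypothetical SERIALIZE/DESERIALIZE persistence mechanism.
function createReducer( initialState, handlers ) {
	return ( state = initialState, action ) => {
		// The implicit behaviour: the subtree is opted into persistence,
		// whether or not the caller realises it.
		if ( action.type === 'SERIALIZE' || action.type === 'DESERIALIZE' ) {
			return state;
		}
		const handler = handlers[ action.type ];
		return handler ? handler( state, action ) : state;
	};
}

// Usage looks deceptively simple, which is exactly the tradeoff described
// above: a trivial reducer now silently carries serialization semantics.
const counter = createReducer( 0, {
	INCREMENT: ( state ) => state + 1,
} );
```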

How do you overcome these situations? Naming things properly is of course important, yet no name is strictly perfect; education and documentation do help, but understanding what you are reinforcing at a design level may be even more important, since abstractions are always teaching patterns. Which comes back to the first idea: abstractions naturally become burdens of knowledge, and you need to decide when and where you are willing to pay their price.

The shallowness of specialisation

There’s a pervasive belief in the ineluctable triumph of expertise and specialisation. We dissect knowledge into areas, crafts into specialties, nature into labels. At their best, these divisions are a valuable way of reducing reality and making it apprehensible to our minds. At their worst, they hinder understanding by filtering everything through a preconceived structure, forcing things to fall into dogmatic places, naturally excluding what doesn’t fit into their parsing of the world. Our civilisations run the risk of fragmenting themselves and their individuals when they relentlessly push towards utilitarian benefits. We seem to stare from a reluctant and blurry distance at Terence, the great Latin poet who stated for posterity: “I am human, and nothing of that which is human is alien to me.”

Haven’t we trapped our mind’s inquisitiveness under the guise of utility? Trapped by our very own reluctance to let it push itself. In a way, the structure of our contemporary learning tends to narrow scope and curiosity instead of expanding them. The necessity to foster interdisciplinary and inclusive efforts is often a sign that our spirit has become tragically fragmented. All in the name of a self-acquired notion of depth, value, and mercantile utility. Unfortunately, it’s also a false sense of depth—yet a very dangerous one.

It’s not hard to see that we may lack any sort of cohesive view, that we struggle at the gaps our grid of instrumental utility conceals. If the splitting of knowledge into areas was a necessity to allow room for diverse and improved practices, now it may be close to losing the thing that bonded them, and relinquishing any sense of inclusiveness. The sciences look down upon philosophy as mere poetic ramblings; all the while philosophy looks down on the sciences as being lost in arbitrary calculations. Both forget that our greatest minds were naturally inclined to pursue both. You can only cut an entity so many ways before it no longer resembles any entity. Pursuing specialisation for too long will only yield fragmentation of knowledge — something that ails every corner of our understanding. Most often, the hardest problems cannot be deciphered within the confines of just one discipline. Sometimes, they cannot even be formulated at all from within their wells.

The promise of specialisation operates on the assumption that a sort of collective geist should arise to achieve what its individuals sacrificed. But who speaks for it? The landscape of our knowledge seems like a field of separate holes where we dig in isolation. The irony is that nobody can tell how close those holes are to each other anymore, or even whether they are close at all, so absorbed we are in their verticality. Could there be a distinct fear that they may forever be isolated islands? What are we to do with this cognitive scenery of moon-like craters? Who is there to dig horizontally so that those holes can reach each other?

It has been said that this overspecialisation is the only way to avoid superficial thoughts in a vast world that is impossible to grasp and comprehend. As with everything, the depth achieved is but a matter of perception. Just turn your head sideways and what is revered as a tunnel of illustrious depth becomes a superficial line of sameness. We are so attached to the holes we are digging that we can’t help being shallow—that is, horizontally speaking.

But even then it seems as if we are in a more dangerous situation. More precarious. We assume that focusing on a specialisation path is a necessity for depth in a time of too many things to learn and explore. But then we lose all sense of curiosity, the very will to explore. We suspend the desire to know whatever lies beyond our field. What can imagination do if you only feed it from the same source? By its very nature it perpetuates a status quo, because there’s no thinking allowed outside of the holes we have chosen. Depth without breadth becomes the epitome of a particular kind of shallowness.

Great individuals seem to have one thing in common — they never deprived their curiosity of its natural state to move in all directions. Categories and disciplines are mental tools we use to superimpose a sense of continuity on a world which is essentially constantly transforming itself into existence. Creating, learning, and teaching are fundamental acts of discovery and development, intuition and inspiration, trust and openness; reaching out to the potential we have come to exist with.

Written for a profile in design.blog.

The colours of the Mona Lisa

What changes in our historic perception of art when we imagine the Parthenon in its former colourful glory — embellished with blues and reds and painted friezes — instead of the desolate white ruins we see today? Time seems to add a layer of distance to ancient art, of spurious reverence towards their long-presumed immaculate presence. For a large part of our history both Greek and Roman sculptures and architecture were considered to have been created in this pristine candidness of idealised purity, so often emulated in neoclassical times. It is now widely recognised that, far from the pure white we see today, they were covered with the splendour of colour. Sadly, it’s something we cannot regain and may at best simulate with the tools we have for the benefit of our imagination and interpretations.

Yet, such a shift in our understanding of the use of colour is not only important for archeological and art history reasons, but also because it shapes our sensibility and development — how we understand colour through art history is part of our aesthetic legacy and commitment to truth. 1 Our understanding has a direct impact on our rendition of the past, and our use of it in the present.

Much later in history, in the domain of painting, we also face similar challenges. It’s only recently that we have acquired the tools necessary to truly inspect the layers, materials, and pigments that paintings were made of. Research has indicated that the background in the Girl with a Pearl Earring by Vermeer is composed of a transparent layer of indigo mixed with weld over a dark black underpainting. This would originally have looked like a deep dark greenish background instead of the almost plain black we can see now, before the paint’s composition degraded and the colours faded as they reacted to light. The same is said to have happened with certain pigments in something as fresh in history as Van Gogh’s work, where chrome yellows have lost intensity and other pigments have faded due to their poor lightfastness. 2

Besides the natural tendency of certain pigments to fade and change through time, there’s also the use of varnishes of several kinds, often applied years or centuries after the works were finished by new owners, restorers, and conservationists, with various effects on the final colour rendition and overall tone of the painting. This becomes particularly evident with the often controversial cleaning work carried out these days by restorers who seek to chemically remove spurious layers of varnish and overpainting to bring the works closer to their original condition. Museums have different policies regarding the extent to which they are willing to attempt the removal of the varnish layers and dirt that obscure the original paint without risking damage to the work itself.

An example of how dramatic the result can be is the restoration of Danae and Venus and Adonis by Titian, carried out by the Prado Museum in Madrid. In the case of Titian, his use of colour (with access to a wide range of pigments in Venice during the high Renaissance) is well documented, the quality of his flesh tones remarked upon by many contemporaries and artists through history. The condition of the work before restoration was but a pale impression of the artist’s intention. Another important aspect of validating these restorations is the contrast some of the works present compared to other contemporary pieces that used the same pigments and techniques, yet for various reasons were not varnished as aggressively or were better preserved in general. Sometimes (even better) this reference comes from other works by the same painter or from within his workshop.


Paintings by Leonardo da Vinci are scarce and often in delicate shape. Titian is estimated to have painted around 400 works of which around 300 survive; this is a stark contrast with Leonardo’s body of pictorial work, which gives even less room to make comparisons within his opus to determine the tonal qualities and guide restoration processes. Yet, the judicious comparison with other Renaissance artists, the insight into the process as well as the materials used — not to mention his own reflections in his Treatise on Painting when he writes about colour or the blueness of aerial perspective — all give a frame of reference. It is hardly an exaggeration to think that when it comes to the great Leonardo da Vinci we have but a glimpse of his outstanding work. 3

A few years ago there was controversy when the Louvre restored The Virgin and Child with St. Anne, one of Leonardo’s latest paintings, removing the obscuring impression the brownish patina had on the work. There was a wave of reactions with mixed impressions. Hard as it is to separate truth from personal estimations of what makes a Leonardo picture, there’s no consensus when it comes to clashing sensibilities and the notion of what a Leonardo “should be”. The reaction to the restored luminosity and richness in colour is comparable to the often visceral rejection the coloured Greek sculptures and buildings produced in the collective mindset when they came to light. To many people, Greek sculptures should have remained white and “pure”. Outside of legitimate concerns over solvents damaging the extraordinarily delicate blending technique of Leonardo should they be too aggressive, it’s also true we have gotten so used to seeing these masterworks through the faded patina of earthy tones, varnishes, and dirt that the striking resurgence of colour from below is often met with initial repulsion and unfamiliarity.

Yet this effort is crucial to bring such a treasure of humanity to clear sight. Mastering colour, mastering its subtleties, its power for reaching sublime balances, is something only a few achieve. In the history of painting those who achieved it stand as giants. Leonardo da Vinci certainly was one — as he was a master in everything he set himself to do. It is a very unfortunate reality that today it is perhaps one of the hardest things to appreciate in his remaining paintings, due to the ageing, layers of oxidized varnish, and mistreatment applied to the works throughout time and by their various possessors. Our sensibility gets used to the current perception of them, and we risk doing a disservice to his works when we distort them by leaving them as they are.

Which brings us to the Mona Lisa. This is a rather accurate photographic representation of the state in which we can contemplate it today. Its former glory is perhaps forever lost to us outside of imagining, as we compare the work with other contemporary pieces of similar techniques and pigments, how it could have looked to its contemporaries.

Mona Lisa

The Mona Lisa, by Leonardo da Vinci.

However, with recent good fortune, we have the unforeseen perspective given by a remarkable discovery, a feat for the academic world of art history. It was not the discovery of a painting but the re-discovery of a Mona Lisa copy, which previously had its background completely covered in dark paint and was considered a later copy of not much value, held in the vaults of the Prado Museum in Madrid. Restoration work carried out by the museum allowed the original landscape to come into sight. And what a sight it was! It unveiled the magnificent landscape, mountains, lapis lazuli sky, earth tones, greens and reds of the dress fabric, and the flesh tones of the model. A copy of remarkable quality, both in how well preserved it is and in how faithful it is to the original. Since it’s suffered a much less eventful life than Leonardo’s work, the colours have remained much closer to being intact.

Restored Prado Mona Lisa

Restored Mona Lisa copy at the Prado Museum, Madrid.

This copy gives us a unique glimpse into a lost quality of one of the most significant artworks of humankind. By looking at it, while lacking the grace of form, shape, and subtle lighting from the hand of Leonardo himself, the observer can still get an idea of the colour magnificence and overall splendour of shape and light the original work may have had. That is enough to captivate the mind. The exercise of transporting such colours into the Louvre’s work is daunting and exhilarating at the same time — it makes the work live in our heads, still able to inspire from behind the yellow-brownish surface of its present reality.

The restoration work also involved meticulous analysis of the underpainting which, according to the studies, has shown pentimenti of the same kind in both the original Leonardo masterpiece and the Prado copy. Infrared reflectography showed remarkable coincidences in the underdrawing, with common lines traced perhaps from a shared cartoon. Corrections applied by the master to this initial transfer were also present in the Prado copy, following the same process. All of this seems to eloquently convey that both works were painted together at the same time, perhaps by one of Leonardo’s pupils or followers working in his workshop, closely replicating what the master was doing.

It’s tragically impossible for us to appreciate the original work in its purest and intended form — we can only piece together its authentic beauty from different sources and through the will of our imagination. The Prado version is thus an invaluable reference. Maybe some day, by the evolving craft of our aesthetic archeology, we can piece this work together in a more excellent way, above the distant depiction behind glass, at the mercy of a thousand uncaring flashes in the Parisian museum.

The following is a reconstructed digital visualisation, made by myself, of what Leonardo’s work could have looked like using the colours from the Prado copy:

Reconstruction of hypothetical colours in original Mona Lisa.

Digital reconstruction of colours in Leonardo’s Mona Lisa.


Notes:

  1. Also an encouragement to reflect on the role of colour in our crafts. From the current trends in filmmaking falling into an uninspired combination of teal and orange, to the excesses of advertising that populate cities, sometimes robbing them mercilessly of their beauty.
  2. How aware the artists themselves were of these phenomena (considering it sometimes took years for these processes to develop), or whether they were knowingly employed to achieve a later effect, remains an open question, of course, though the latter seems a bit intellectually capricious.
  3. Raphael’s work itself, as an example, contrasts in terms of luminosity and colour with the Florentine master quite remarkably in the best preserved paintings.

The importance of taste

There’s an endless conundrum in the arts about the relationship between an artist and their own work. Who ought to be superior, the work or the artist? What does it say about an artist to produce something whose quality exceeds their own ability?

The ability to repeat. This is another crucial aspect of the story, present in every single one of the arts and their techniques. It has its echo in acting, for instance, a discipline where uniqueness and “spontaneity” are so often heralded as virtues. Ingmar Bergman used to say that repetition was inherent to a great actor’s performance — their ability to reenact the nuance of a scene over and over, be it during rehearsals, stage representations, or shot after shot in cinema. A creator who has the ability to replicate their own work is in control of both their skills and the result, even when the result may seem like just a natural effect. Legend has it Marlon Brando used to mumble many of his lines on set to force himself to act again during the later stages of voice recording, given the new context the edited sequence provided for improving a performance.

Oh, but how it is often said that there’s a sense of wonder and irreplicability that occurs precisely in those fine moments by the singular chance of the occasion! 1

While filming Stalker, Tarkovsky had to shoot almost the entire film twice after finding a year’s worth of footage had been improperly developed at the laboratory, rendering the material unusable. He was in a similar situation again years later when he had to remake the complex and iconic sequence of his last film, Sacrifice, after the camera broke during shooting. (Which involved rebuilding the house that burns down during the scene.) Devastating for morale, but a testament to his creative will.

When taste exceeds one’s ability. Considered deeply, that should be the most gratifying reality a craftsman, artist, or creator in general could possess. It means their judgement is sophisticated enough to allow them to indefinitely grow, to indefinitely improve their technique and refine their renditions. The ability to create is swayed by the capacity to perceive a work and know how to improve it.

Leonardo phrased it with eloquence when he wrote: “The painter who entertains no doubt of his own ability, will attain very little. When the work succeeds beyond the judgement, the artist acquires nothing; but when the judgement is superior to the work, he never ceases improving.”

If the work surpasses your own ability to judge it, it either means you’ve reached a ceiling when it comes to the refinement of your practice, or that the excellence of the work was achieved somehow by mistake — in a way despite its creator’s abilities. When taste is below the plenitude of the work the artist is inevitably left behind, a pale shadow of their own work. Yet when taste and judgement prevail, both the work and the creator’s abilities cannot but improve.

Notes:

  1. It’s frequent in art to talk about the sacredness of the moment. Éric Rohmer was adamant about retaining the soundscape from the original scene, since the depth of reality arguably cannot be replicated later in a studio. He used to illustrate this with how the sound of birds is unique and specific to a place. Nevertheless, that goes into quite another subject — the relationship of artifice and nature in a creation and the means of representation. (How excellence and purity in art mean achieving the effect of nature while still being a human product.) It’s not my intent to dive into this now, so I’m just glossing over it. Another subject for another time.

On consciousness as an illusion

What is consciousness if not an illusion of the mind? Perhaps the mere result of a growing need by a living organism to construct intricate responses to external stimuli in its struggle to survive. This requires an acute sense of existence in time, giving rise to both the notion of an indefinite present and the perception of itself as it operates in this state. Has it been a survival trait for complex organisms to develop this real-time figuration of their behaviour, towards a moment where consciousness arises in the mind almost as a side effect of its own activity?

For most living organisms, instinct seems to suffice as the inherent inclination towards a certain behaviour in response to a particular situation. Yet, eventually in the history of a complex being, inherent inclinations are no longer enough to determine actions, because the plurality of the scenarios faced becomes too vast. The responses need to be shaped in the very moment the behaviour occurs; the organism needs to draw from its own accumulated experience to figure out new action paths. Instincts cannot rule general behaviour anymore because they are no longer effective at guiding it. Wouldn’t it be fascinating if this capacity to construct behaviour (of transforming instinct into active reflection) were what ultimately gives rise to what we understand as consciousness?

As such, our decisions and intentions may not proceed from a sense of identity, but actually precede it — and only eventually give rise to it. Are sentience and recognition of this subjective state characteristics of a sufficiently complex consciousness that now also needs to extend its understanding through time? That is, extend its attention to both the past and the possible future as it figures out how to act in the present, with the side effect that extending such attention to the past and the future creates a notion of permanence through time — of something that persists through change. This identification of consciousness with its own operativeness as a permanence may be the root of the illusion of us being the consciousness (while consciousness might have been nothing but an evolutionary tool of the mind). The result is the illusion of a self, an intangible notion of permanence, a product of the mind acting in a state of present awareness that makes consciousness spring.

The representation of the self as an effect of the mind processing and acting on the world also forges one of the most primeval assumptions of understanding: cause and effect. Our understanding seems to thrive in this reality, one which can be dissected in terms of causes and effects as it directs its own behaviour. In such a world the active consciousness is able to reflect on the outcome of its actions to figure out how to act, all of this coalescing simultaneously in a sense of present time and eventually harbouring the notion of will. It is perhaps a necessity of a sufficiently complex mind in a sufficiently complex organism that this simultaneous activity produces the notion of a conscious being.

The ancient problem of free will against a predefined fate for human actions may become, in this sense, a false dichotomy — free will is as much an illusion as predefined actions are spurious.

At a time of concerns around the possibility of humanity creating machines with a resemblance of sentience and intelligence, understanding the development and nature of what we call consciousness is not just a philosophical or speculative effort, but a fundamental background for any conversation around the topic to make any sense at all. Considering consciousness may not be more than an illusion created by the history of the mind as it grew on top of instinctual behaviour, where do we actually place the notion of being? The invention of consciousness may have been a bright spark in time (one which seems at times to hold itself eternal), but nevertheless, nothing more than a moment in our brief existence. Understanding its nature — the reluctance with which it exhibits its illusion — could be another step in our knowing of reality.

On the road to Calypso

A story about WordPress, JavaScript, and open source.

About eighteen months ago my team at Automattic set about building an extravagant experiment for the WordPress.com interface. It was to become the most important, demanding, and rewarding project I’ve worked on at Automattic. Two weeks ago, we were finally able to unveil it to the world, and open sourced the project.

Calypso screenshot.

A modest beginning. Calypso 1 started as an idealized experiment, toying with the idea of what the WordPress UI could be if it were built today, entirely in JavaScript, communicating with data only via an API. Yet, in the early days, no one really knew what it might become — if anything at all — and whether these pretentious goals would translate into a tangible thing that actually worked. Would it be possible to make such a technological leap from the current WordPress interface while retaining the solidity that was honed through years and made WordPress power a staggering 25% of the web? Would it be possible to overcome the legacy that an ageing paradigm of web rendering imposed on the evolution of the user experience, yet retain and foster its spirit? Given the steep learning curve such a shift would entail for everyone involved, it was also a lingering question whether we’d make it through, and whether other developers would come on board to help build it.

This wasn’t the first try in this direction, either. Our previous efforts to push forward the WordPress.com user interface had inevitably faced the fact that the constraints and coupling of the existing codebase were too strong to overcome. Attempting to build a single page application in this landscape ended up as a convoluted exercise, with duplicated state and an awkward reliance on functionality that was not built with the considerations of a pure client application in mind. More importantly, the result was slow, hard to work with, and hard to extend. However, the emergence of the REST API around this period, which allowed a clear separation of responsibilities between the server and the client application, started to show a viable way in which a huge project like WordPress, with years of experience and legacy, could look at fully embracing modern client technologies (and with them faster iterative processes for polishing its user experience) without dropping the solidity and permanence that had made it power such a large part of the web. In other words, an evolution of WordPress as a platform dictated by the divergence of its client application(s) and server services.

Not a framework. Even though we tried many of them, we avoided using “frameworks” to build Calypso because we appreciated the existence of single-purpose libraries that focused on one problem and solved it elegantly. Among many other modules, all pieced together via webpack, we used the small page.js router, custom data modules in raw JavaScript emitting single change events, wpcom.js as an API connector, and React for the view rendering. The philosophies that came with React were also very appealing to us — a declarative view layer, the notion of UI as the predictable reflection of state, the importance of one-way data flows, and composition.
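
To give a feel for that composition, here is a rough sketch of how a page.js route can render a React view. This is illustrative rather than actual Calypso code; the PostsList component and the 'primary' element are hypothetical names.

```js
// Sketch of the routing/rendering composition: page.js for routes,
// React for the view layer. The names below are illustrative.
import page from 'page';
import React from 'react';
import ReactDOM from 'react-dom';

import PostsList from './posts-list'; // hypothetical view component

page( '/posts/:site', ( context ) => {
	// Each route simply renders a component into the layout.
	ReactDOM.render(
		<PostsList site={ context.params.site } />,
		document.getElementById( 'primary' )
	);
} );

page(); // start listening for route changes
```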

The fact that the client was now completely separate from the rest of the codebase would force us — and other engineers — to interact with data purely through the REST API, forcing our application to be designed without “inside knowledge”, while at the same time propelling the API itself to mature alongside the needs of a real and complex application. This decoupling naturally paved the way to fully embracing JavaScript for the entire client, no longer tied to the rendering procedures of all our legacy code, while still leveraging the backend reliability of core WordPress. Around the middle of 2014 we had a lean, modular scaffolding, composed of different JavaScript libraries running happily on our local machines (another aspect that significantly improved the developer experience) while authenticated with WordPress.com.
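
For a sense of what that API-only interaction looks like, here is a minimal sketch using wpcom.js. The token and site names are placeholders, and the exact call shapes may differ between versions of the library.

```js
// Minimal sketch of reading data over the WordPress.com REST API via
// wpcom.js. '<oauth-token>' and 'example.wordpress.com' are placeholders.
import wpcomFactory from 'wpcom';

const wpcom = wpcomFactory( '<oauth-token>' );

// Fetch the latest posts of a site purely through the REST API,
// with no "inside knowledge" of the server codebase involved.
wpcom.site( 'example.wordpress.com' ).postsList( { number: 5 }, ( err, list ) => {
	if ( err ) {
		return console.error( err );
	}
	list.posts.forEach( ( post ) => console.log( post.title ) );
} );
```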

A need for speed. Among all the initial obstacles, there was one main reason that kept us going: how significantly faster the experience was shaping up to be. As the foundation matured, we also started glimpsing the possibilities of crafting interesting solutions that would have been close to insurmountable before, thanks to the benefits of reusable composition and a strong core. During the second half of 2014 we rapidly built the foundation of the application, honed a new — for Automattic — development process, worked on onboarding other developers, and finalised a strong design language. By the end of the year we had somewhat timidly launched a small fraction of it, the first few areas powered by Calypso in WordPress.com, and quietly celebrated the milestone. This served as an internal proof of concept and a test of the reliability of the API running for millions of pageviews. But the work was just starting.

The following year had the aggressive goal of converting most of WP Admin, the default WordPress administration interface, to this new pristine Greek figure that was starting to stretch its legs. At this time, some of the core principles that were guiding us, together with examples of reactivity and composition, served as indicators that this was a direction worth continuing. The success of React in the JavaScript community, a technology we had adopted very early on, was also a good sign for the direction we had settled on. But more importantly, seeing the enthusiasm of other developers at Automattic was invigorating.

There’s no “I” in team. With the certainty given by the technical success of the initial launch, the novelty of such a pure JavaScript application interfacing with the behemoth that’s WordPress, and a fresh look at a cohesive UI, the number of contributors within Automattic rapidly grew — soon covering most of the development teams. It became a huge collective effort to rebuild an administration interface that had been refined through many years and hundreds of contributors, but one that was starting to touch the walls of its inherent limits. The spirit of “I’ll never stop learning” within Automattic was never truer than in these decisive months. We were constructing a complex interface from scratch, with the accumulated experience of years but still entirely novel, and we needed to come together to execute such a difficult task. Even more, it was larger than just a rebuilding; it also brought some significant advancements — especially around site management, since Calypso was from the very start, and at its core, a multi-site endeavour.

I personally admire how strong the sense of shared ownership started to become, with teams crossing their artificial boundaries to help others, fleshing out and refining a singular experience. The engineering usability of Calypso grew significantly as the collective efforts shaped a sizeable library of components, utilities, solutions, expertise, and willingness to help. Diving deep into JavaScript was pushing our engineering literacy forward. None of us would have thought two years ago that we would be writing ECMAScript 2015 in WordPress.

Some fuel for Jetpack. Another demanding aspect of Calypso from the very start was that it had to be a client that treated self-hosted sites running Jetpack on an equal footing with WordPress.com sites. The goal was to let you completely manage your site regardless of where you were hosting it. This required laborious focus on both design and engineering, syncing with Jetpack releases to power our increasing API demands. It makes me really glad that I’m writing this in the new editor in Calypso, syncing to my Jetpack site thanks to all this great effort.

Speaking of which, a trial by fire for the foundation we had built came earlier this year, around March, when we had to build the new WordPress editor to go with Calypso. We were able to accomplish such an intimidating task in a very short amount of time through strong collaboration among teams, and by leveraging everything we had built so far in Calypso to speed up the engineering and design process. The new editor was announced just about a month ago. We were able to introduce a couple of cool features outside of the initial roadmap thanks to this reusability and strong codebase. (I’m personally fond of the drafts panel that allows quick switching between your working drafts from the editor itself, something that wasn’t in the original scope.)

This was a huge bet, incredibly risky, and difficult to execute, but it paid off. Like any disruption it is uncomfortable, and I’m sure will be controversial in some circles. What the team has accomplished in such a short time is amazing, and I’m incredibly proud of everyone who has contributed and will contribute in the future. This is the most exciting project I’ve been involved with in my career.

Matt Mullenweg

I’m glad I was able to be part of this project from the very start as a member of the Calypso core team. It’s even more exciting to see all of this released to the open world, without reservation, with the spirit of putting a piece of human craft out there — for people to look at, learn from, contribute to, and make their own.

Notes:

  1. Ultimately a project with about 26000 commits from around 100 people at the time we open sourced it. See Andy Peatling’s account of the journey.

The Kuleshov Effect

Andrei Tarkovsky once defined cinema as sculpting in time. The most distinct feature of web design, compared to other forms of design, is precisely that it exists in time. If graphic design was often thought of as the corollary of painting, then web and interaction design could very well be the corollary of filmmaking. As such, it seems to be in its infancy when it comes to the lessons discovered by cinema through its short centennial history.

During the early 20th century, film creators realized editing and montage were core aspects of the craft, so redefining that many held it to be the one distinct essence of cinema. The one thing that moved it from a mere technical advance in photography and motion study to an art form capable of wonders. One of these discoveries was a particular study known as the Kuleshov effect, demonstrated by Lev Kuleshov, a Russian filmmaker of the golden era of montage exploration.

The experiment was quite simple, but with profound consequences. He chose a fragment of a close-up shot of a Soviet actor staring, one in which the acting was particularly neutral. He then juxtaposed that fragment with other pieces of film — in particular, one of a plate of food, another of a dead girl in a coffin, and another of a child playing.

He showed this assembled sequence to an audience — and the reactions were remarkable. The people especially lauded the acting, in which the actor could so profoundly represent hunger (looking at the food), sorrow (looking at the dead woman), and nostalgia (looking at the child playing). The aesthetic consequences of this realization are captivating, and extremely vigorous. 1

This was the exact same shot of the actor doing nothing in particular, eminently not-acting. However, as with most true art forms, aesthetic representation is synthetic. The viewer fills the gaps, connects the pieces, and infers more than what is plainly in the material. In particular, knowledge, concepts, and experiences are being derived, created in that empty gap where two fragments connect. There was no “hunger” implicit in the acting (it wasn’t even the intention of the actor) nor in the food itself. The sentiment of hunger exists only in the unfolding of time created by the assembling of these two fragments. 2

This, of course, has a gigantic impact on cinema, and good directors know how to work with it to achieve sublime beauty that goes beyond what seems to be represented in each screen fragment. It also matters greatly to actors, obviously, because their performance is not just what they intend to act, but how their shots are pieced together; what they look at, and what comes before and after. Good actors also know this.

And now, back to the field of interaction design, how does this affect those who practice it? I believe this has significant consequences here all the same. Largely unexplored. There’s a distinctive lack of thought around this in-between area where connections are created, emerging into something that cannot be reduced to the parts. What happens when someone goes from one page to the other? What happens in the switch of context? Do web designers acknowledge there is more being created in the viewer than what they explicitly intended to put there? There are many studies and theories and practices revolving around what is laid out on the page. Or even around the flow and nominal succession of stages. But what about that invisible instant when two things clash?

The most that web developers have concerned themselves with is transition states. Usually trying to even out the journey, looking to smoothly transition from one state to the other. That’s a pre-editing stage of realization — make everything seem like it’s a continuum. One sequence. Film montage discovered that time (in its cinematic sense) is being created beyond the singularity of each screen, beyond shot-sequences. That the continuum is being sculpted at a higher level — in that whole dimension that transcends individual pieces, and gives tremendous creative power.

In a way, web design needs to find its own montage lessons to control its less tangible experiences, to control the effects that are being created when screens are switched (especially as the viewer is often choosing the path). When what is being designed is not just the specifics of a screen, or the abstract notion of a flow, but the gap between different fragments of an experience evolving in time.

Notes:

  1. Truth be told, the last one was actually supposed to be a woman on a divan, casting the expression of desire on the actor. My apologies. But it functions the same way — I’ve grown accustomed to this variation of the experiment, as it’s the one I’ve constructed to carry out myself, and I enjoyed its more complex effect better.
  2. Described with precision by Eisenstein when he said a series of two film fragments is to be regarded not as their sum, but as their product.
