A personal site, or a blog, is more than just a collection of writing. It’s a kind of place - something that feels like home among the streams. Home is a very strong mental model.
Sunday, March 13th, 2022
Sunday, January 2nd, 2022
Out of all of these metaphors, the two most enduring are paper and physical space.
Tuesday, June 29th, 2021
If you download Safari Technology Preview you can test drive features that are on their way in Safari 15. One of those features, announced at Apple’s World Wide Developer Conference, is coloured browser chrome via support for the meta value of “theme-color.” Chrome on Android has supported this for a while but I believe Safari is the first desktop browser to add support. They’ve also added support for the media attribute on that meta element to handle “prefers-color-scheme.”
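To make that concrete, here’s a minimal sketch of the markup (the colour values are placeholders of my own choosing):

```html
<!-- Tint the browser chrome to match the site -->
<meta name="theme-color" content="#b52f27">

<!-- The media attribute lets you supply a different tint
     when the user prefers a dark colour scheme -->
<meta name="theme-color" content="#2e2a27" media="(prefers-color-scheme: dark)">
```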
This is all very welcome, although it does remind me a bit of when Internet Explorer came out with the ability to make coloured scrollbars. I mean, they’re nice features’n’all, but maybe not the most pressing? Safari is still refusing to acknowledge progressive web apps.
That’s not quite true. In her WWDC video Jen demonstrates how you can add a progressive web app like Resilient Web Design to your home screen. I’m chuffed that my little web book made an appearance, but when you see how you add a site to your home screen in iOS, it’s somewhat depressing.
The steps to add a website to your home screen are:
- Tap the “share” icon. It’s not labelled “share.” It’s a square with an arrow coming out of the top of it.
- A drawer pops up. The option to “add to home screen” is nowhere to be seen. You have to pull the drawer up further to see the hidden options.
- Now you must find “add to home screen” in the list:
  - Add to Reading List
  - Add Bookmark
  - Add to Favourites
  - Find on Page
  - Add to Home Screen
It reminds me of this exchange in The Hitchhiker’s Guide To The Galaxy:
“You hadn’t exactly gone out of your way to call attention to them had you? I mean like actually telling anyone or anything.”
“But the plans were on display…”
“On display? I eventually had to go down to the cellar to find them.”
“That’s the display department.”
“With a torch.”
“Ah, well the lights had probably gone.”
“So had the stairs.”
“But look you found the notice didn’t you?”
“Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of The Leopard.’”
Safari’s current “support” for adding progressive web apps to the home screen feels like the minimum possible …just enough to use it as a legal argument if you happen to be litigated against for having a monopoly on app distribution. “Hey, you can always make a web app!” It’s true in theory. In practice it’s …suboptimal, to put it mildly.
It’s a little bit weird that this stylistic information is handled by HTML rather than CSS. It’s similar to the viewport value in that sense. I always thought the plan was to migrate that to CSS at some point, but here we are a decade later and it’s still very much part of our boilerplate markup.
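For comparison, this is the sort of boilerplate I mean: both of these live in markup even though they’re purely presentational (the values here are just the usual defaults and a placeholder colour):

```html
<!-- Stylistic information expressed in HTML rather than CSS -->
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="theme-color" content="#b52f27">
```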
Some people have remarked that the coloured browser chrome can make the URL bar look like part of the site so people might expect it to operate like a site-specific search.
I also wonder if it might blur “the line of death”; that point in the UI where the browser chrome ends and the website begins. Does the unified colour make it easier to spoof browser UI?
Probably not. You can already kind of spoof browser UI by using the right shade of grey. Although the removal of any kind of actual line in Safari does give me pause for thought.
I tend not to think of security implications like this by default. My first thought tends to be more about how I can use the feature. It’s only after a while that I think about how bad actors might abuse the same feature. I should probably try to narrow the gap between those thoughts.
Saturday, May 15th, 2021
The discussions around data policy still feel like they are framing data as oil - as a vast, passive resource that either needs to be exploited or protected. But this data isn’t dead fish from millions of years ago - it’s the thoughts, emotions and behaviours of over a third of the world’s population, the largest record of human thought and activity ever collected. It’s not oil, it’s history. It’s people. It’s us.
Monday, April 19th, 2021
This is a great HTML boilerplate, with an explanation of every line.
Tuesday, March 16th, 2021
Thursday, December 3rd, 2020
My favorite aspect of websites is their duality: they’re both subject and object at once. In other words, a website creator becomes both author and architect simultaneously. There are endless possibilities as to what a website could be. What kind of room is a website? Or is a website more like a house? A boat? A cloud? A garden? A puddle? Whatever it is, there’s potential for a self-reflexive feedback loop: when you put energy into a website, in turn the website helps form your own identity.
Tuesday, October 13th, 2020
James made a radio programme about “the cloud”:
It’s the central metaphor of the internet - ethereal and benign, a fluffy icon on screens and smartphones, the digital cloud has become so naturalised in our everyday life we look right through it. But clouds can also obscure and conceal – what is it hiding? Author and technologist James Bridle navigates the history and politics of the cloud, explores the power of its metaphor and guides us back down to earth.
Wednesday, September 23rd, 2020
This is a handy tool if you’re messing around with Twitter cards and other metacrap.
Tuesday, September 8th, 2020
T E N Ǝ T
Jessica and I went to the cinema yesterday.
Normally this wouldn’t be a big deal, but in our current circumstances, it was something of a momentous decision that involved a lot of risk assessment and weighing of the odds. We’ve been out and about a few times, but always to outdoor locations: the beach, a park, or a pub’s beer garden. For the first time, we were evaluating whether or not to enter an indoor environment, which, given what we now know about the transmission of COVID-19, is certainly riskier than being outdoors.
But this was a cinema, so in theory, nobody should be talking (or singing or shouting), and everyone would be wearing masks and keeping their distance. Time was also on our side. We were considering a Monday afternoon showing—definitely not primetime. Looking at the website for the (wonderful) Duke of York’s cinema, we could see which seats were already taken. Less than an hour before the start time for the film, there were just a handful of seats occupied. A cinema that can seat a triple-digit number of people was going to be seating a single-digit number of viewers.
We got tickets for the front row. Personally, I love sitting in the front row, especially in the Duke of York’s where there’s still plenty of room between the front row and the screen. But I know that it’s generally considered an undesirable spot by most people. Sure enough, the closest people to us were many rows back. Everyone was wearing masks and we kept them on for the duration of the film.
The film was Tenet. We weren’t about to enter an enclosed space for just any ol’ film. It would have to be pretty special—a new Star Wars film, or Denis Villeneuve’s Dune …or a new Christopher Nolan film. We knew it would look good on the big screen. We also knew it was likely to be spoiled for us if we didn’t see it soon enough.
At this point I am sounding the spoiler horn. If you have not seen Tenet yet, abandon ship at this point.
I really enjoyed this film. I understand the criticism that has been levelled at it—too cold, too clinical, too confusing—but I still enjoyed it immensely. I do think you need to be able to enjoy feeling confused if this is going to be a pleasurable experience. The payoff is that there’s an equally enjoyable feeling when things start slotting into place.
The closest film in Christopher Nolan’s back catalogue to Tenet is Inception in terms of twistiness and what it asks of the audience. But in some ways, Tenet is like an inverted version of Inception. In Inception, the ideas and the plot are genuinely complex, but Nolan does a great job in making them understandable—quite a feat! In Tenet, the central conceit and even the overall plot is, in hindsight, relatively straightforward. But Nolan has made it seem more twisty and convoluted than it really is. The ten-minute battle at the end, for example, is filled with hard-to-follow twists and turns, but in actuality, it literally doesn’t matter.
The pitch for the mood of this film is that it’s in the spy genre, in the same way that Inception is in the heist genre. Though there’s an argument to be made that Tenet is more of a heist movie than Inception. But in terms of tone, yeah, it’s going for James Bond.
Even at the very end of the credits, when the title of the film rolled into view, it reminded me of the Bond films that would tease “The end of (this film). But James Bond will return in (next film).” Wouldn’t it have been wonderful if the very end of Tenet’s credits finished with “The end of Tenet. But the protagonist will return in …Tenet.”
The pleasure I got from Tenet was not the same kind of pleasure I get from watching a Bond film, which is a simpler, more basic kind of enjoyment. The pleasure I got from Tenet was more like the kind of enjoyment I get from reading smart sci-fi, the kind that posits a “what if?” scenario and isn’t afraid to push your mind in all kinds of uncomfortable directions to contemplate the ramifications.
Like I said, the central conceit—objects or people travelling backwards through time (from our perspective)—isn’t actually all that complex, but the fun comes from all the compounding knock-on effects that build on that one premise.
In the film, and in interviews about the film, everyone is at pains to point out that this isn’t time travel. But that’s not true. In fact, I would argue that Tenet is one of the few examples of genuine time travel. What I mean is that most so-called time-travel stories are actually more like time teleportation. People jump from one place in time to another instantaneously. There are only a few examples I can think of where people genuinely travel.
The granddaddy of all time travel stories, The Time Machine by H.G. Wells, is one example. There are vivid descriptions of the world outside the machine playing out in fast-forward. But even here, there’s an implication that from outside the machine, the world cannot perceive the time machine (which would, from that perspective, look slowed down to the point of seeming completely still).
The most internally-consistent time-travel story is Primer. I suspect that the Venn diagram of people who didn’t like Tenet and people who wouldn’t like Primer is a circle. Again, it’s a film where the enjoyment comes from feeling confused, but where your attention will be rewarded and your intelligence won’t be insulted.
In Primer, the protagonists literally travel in time. If you want to go five hours into the past, you have to spend five hours in the box (the time machine).
In Tenet, the time machine is a turnstile. If you want to travel five hours into the past, you need only enter the turnstile for a moment, but then you have to spend the next five hours travelling backwards (which, from your perspective, looks like being in a world where cause and effect are reversed). After five hours, you go in and out of a turnstile again, and voila!—you’ve time travelled five hours into the past.
Crucially, if you decide to travel five hours into the past, then you have always done so. And in the five hours prior to your decision, a version of you (apparently moving backwards) would be visible to the world. There is never a version of events where you aren’t travelling backwards in time. There is no “first loop”.
That brings us to the fundamental split in categories of time travel (or time jump) stories: many worlds vs. single timeline.
In a many-worlds story, the past can be changed. Well, technically, you spawn a different universe in which events unfold differently, but from your perspective, the effect would be as though you had altered the past.
The best example of the many-worlds category in recent years is William Gibson’s The Peripheral. It genuinely reinvents the genre of time travel. First of all, no thing travels through time. In The Peripheral only information can time travel. But given telepresence technology, that’s enough. The Peripheral is time travel for the remote worker (once again, William Gibson proves to be eerily prescient). But the moment that any information travels backwards in time, the timeline splits into a new “stub”. So the many-worlds nature of its reality is front and centre. But that doesn’t stop the characters engaging in classic time travel behaviour—using knowledge of the future to exert control over the past.
Time travel stories are always played with a stacked deck of information. The future has power over the past because of the asymmetric nature of information distribution—there’s more information in the future than in the past. Whether it’s through sports results, the stock market or technological expertise, the future can exploit the past.
Information is at the heart of the power games in Tenet too, but there’s a twist. The repeated mantra here is “ignorance is ammunition.” That flies in the face of most time travel stories where knowledge—information from the future—is vital to winning the game.
It turns out that information from the future is vital to winning the game in Tenet too, but the reason why ignorance is ammunition comes down to the fact that Tenet is not a many-worlds story. It is very much a single timeline.
Having a single timeline makes for time travel stories that are like Greek tragedies. You can try travelling into the past to change the present but in doing so you will instead cause the very thing you set out to prevent.
The meat’n’bones of a single timeline time travel story—and this is at the heart of Tenet—is the question of free will.
The most succinct (and disturbing) single-timeline time-travel story that I’ve read is by Ted Chiang in his recent book Exhalation. It’s called What’s Expected Of Us. It was originally published as a single page in Nature magazine. In that single page is a distillation of the metaphysical crisis that even a limited amount of time travel would unleash in a single-timeline world…
There’s a box, the Predictor. It’s very basic, like Claude Shannon’s Ultimate Machine. It has a button and a light. The button activates the light. But this machine, like an inverted object in Tenet, is moving through time differently to us. In this case, it’s very specific and localised. The machine is just a few seconds in the future relative to us. Cause and effect seem to be reversed. With a normal machine, you press the button and then the light flashes. But with the predictor, the light flashes and then you press the button. You can try to fool it but you won’t succeed. If the light flashes, you will press the button no matter how much you tell yourself that you won’t (likewise if you try to press the button before the light flashes, you won’t succeed). That’s it. In one succinct experiment with time, it is demonstrated that free will doesn’t exist.
Tenet has a similarly simple object to explain inversion. It’s a bullet. In an exposition scene we’re shown how it travels backwards in time. The protagonist holds his hand above the bullet, expecting it to jump into his hand as has just been demonstrated to him. He is told “you have to drop it.” He makes the decision to “drop” the bullet …and the bullet flies up into his hand.
This is a brilliant bit of sleight of hand (if you’ll excuse the choice of words) on Nolan’s part. It seems to imply that free will really matters. Only by deciding to “drop” the bullet does the bullet then fly upward. But here’s the thing: the protagonist had no choice but to decide to drop the bullet. We know that he had no choice because the bullet flew up into his hand. The bullet was always going to fly up into his hand. There is no timeline where the bullet doesn’t fly up into his hand, which means there is no timeline where the protagonist doesn’t decide to “drop” the bullet. The decision is real, but it is inevitable.
The lesson in this scene is the exact opposite of what it appears. It appears to show that agency and decision-making matter. The opposite is true. Free will cannot, in any meaningful sense, exist in this world.
This means that there was never really any threat. People from the future cannot change the past (or wipe it out) because it would’ve happened already. At one point, the protagonist voices this conjecture. “Doesn’t the fact that we’re here now mean that they don’t succeed?” Neil deflects the question, not because of uncertainty (we realise later) but because of certainty. It’s absolutely true that the people in the future can’t succeed because they haven’t succeeded. But the protagonist—at this point in the story—isn’t ready to truly internalise this. He needs to still believe that he is acting with free will. As that Ted Chiang story puts it:
It’s essential that you behave as if your decisions matter, even though you know that they don’t.
That’s true for the audience watching the film. If we were to understand too early that everything will work out fine, then there would be no tension in the film.
As ever with Nolan’s films, they are themselves metaphors for films. The first time you watch Tenet, ignorance is your ammunition. You believe there is a threat. By the end of the film you have more information. Now if you re-watch the film, you will experience it differently, armed with your prior knowledge. But the film itself hasn’t changed. It’s the same linear flow of sequential scenes being projected. Everything plays out exactly the same. It’s you who have been changed. The first time you watch the film, you are like the protagonist at the start of the movie. The second time you watch it, you are like the protagonist at the end of the movie. You see the bigger picture. You understand the inevitability.
The character of Neil has had more time to come to terms with a universe without free will. What the protagonist begins to understand at the end of the film is what Neil has known for a while. He has seen this film. He knows how it ends. It ends with his death. He knows that it must end that way. At the end of the film we see him go to meet his death. Does he make the decision to do this? Yes …but he was always going to make the decision to do this. Just as the protagonist was always going to decide to “drop” the bullet, Neil was always going to decide to go to his death. It looks like a choice. But Neil understands at this point that the choice is pre-ordained. He will go to his death because he has gone to his death.
At the end, the protagonist—and the audience—understands. Everything played out exactly as it had to. The people in the future were hoping that reality allowed for many worlds, where the past could be changed. Luckily for us, reality turns out to be a single timeline. But the price we pay is that we come to understand, truly understand, that we have no free will. This is the kind of knowledge we wish we didn’t have. Ignorance was our ammunition and by the end of the film, it is spent.
Nolan has one other piece of misdirection up his sleeve. He implies that the central question at the heart of this time-travel story is the grandfather paradox. Our descendents in the future are literally trying to kill their grandparents (us). But if they succeed, then they can never come into existence.
But that’s not the paradox that plays out in Tenet. The central paradox is the bootstrap paradox, named for the Heinlein short story, By His Bootstraps. Information in this film is transmitted forwards and backwards through time, without ever being created. Take the phrase “Tenet”. In subjective time, the protagonist first hears of this phrase—and this organisation—when he is at the start of his journey. But the people who tell him this received the information via a subjectively older version of the protagonist who has travelled to the past. The protagonist starts the Tenet organisation (and phrase) in the future because the organisation (and phrase) existed in the past. So where did the phrase come from?
This paradox—the bootstrap paradox—remains after the grandfather paradox has been dealt with. The grandfather paradox was a distraction. The bootstrap paradox can’t be resolved, no matter how many times you watch the same film.
So Tenet has three instances of misdirection in its narrative:
- Inversion isn’t time travel (it absolutely is).
- Decisions matter (they don’t; there is no free will).
- The grandfather paradox is the central question (it’s not; the bootstrap paradox is the central question).
I’m looking forward to seeing Tenet again. Though it can never be the same as that first time. Ignorance can never again be my ammunition.
I’m very glad that Jessica and I decided to go to the cinema to see Tenet. But who am I kidding? Did we ever really have a choice?
Wednesday, July 8th, 2020
A trashcan, a typeface, and a tactile keyboard. Marcin gets obsessive (as usual).
Monday, February 3rd, 2020
We’ve industrialized design and are relegated to squeezing efficiencies out of it through our design systems. All CSS changes must now have a business value and user story ticket attached to it.
Dave follows on from my post about design systems and automation.
At the same time, I have seen first hand how design systems can yield improvements in accessibility, performance, and shared knowledge across a willing team. I’ve seen them illuminate problems in design and code. I’ve seen them speed up design and development allowing teams to build, share, and validate prototypes or A/B tests before undergoing costly guesswork in production. There’s value in these tools, these processes.
Wednesday, January 29th, 2020
Architects, gardeners, and design systems
I compared design systems to dictionaries. My point was that design systems—like language—can be approached in a prescriptivist or descriptivist manner. And I favour descriptivism.
A prescriptive approach might give you a beautiful design system, but if it doesn’t reflect the actual product, it’s fiction. A descriptive approach might give a design system with imperfections and annoying flaws, but at least it will be accurate.
I think it’s more important for a design system to be accurate than beautiful.
Meanwhile, over on Frank’s website, he’s been documenting the process of its (re)design. He made an interesting comparison in his post Redesign: Gardening vs. Architecture. He talks about two styles of writing:
In interviews, Martin has compared himself to a gardener—forgoing detailed outlines and overly planned plot points to favor ideas and opportunities that spring up in the writing process. You see what grows as you write, then tend to it, nurture it. Each tendrilly digression may turn into the next big branch of your story. This feels right: good things grow, and an important quality of growth is that the significant moments are often unanticipated.
On the other side of writing is who I’ll call “the architect”—one who writes detailed outlines for plots and believes in the necessity of overt structure. It puts stock in planning and foresight. Architectural writing favors divisions and subdivisions, then subdivisions of the subdivisions. It depends on people’s ability to move forward by breaking big things down into smaller things with increasing detail.
It’s not just me, right? It all sounds very design systemsy, doesn’t it?
This is a false dichotomy, of course, but everyone favors one mode of working over the other. It’s a matter of personality, from what I can tell.
Replace “personality” with “company culture” and I think you’ve got an interesting analysis of the two different approaches to design systems. Descriptivist gardening and prescriptivist architecture.
Frank also says something that I think resonates with the evergreen debate about whether design systems stifle creativity:
It can be hard to stay interested if it feels like you’re painting by numbers, even if they are your own numbers.
I think Frank’s comparison—gardeners and architects—also speaks to something bigger than design systems…
I gave a talk last year called Building. You can watch it, listen to it, or read the transcript if you like. The talk is about language (sort of). There’s nothing about prescriptivism or descriptivism in there, but there’s lots about metaphors. I dive into the metaphors we use to describe our work and ourselves: builders, engineers, and architects.
It’s rare to find job titles like software gardener, or information librarian (even though they would be just as valid as other terms we’ve made up like software engineer or information architect). Outside of the context of open source projects, we don’t talk much about maintenance. We’re much more likely to talk about making.
When tech culture only celebrates creation, it risks ignoring those who teach, criticize, and take care of others.
Anyone who’s spent any time working on design systems can tell you there’s no shortage of enthusiasm for architecture and making—“let’s build a library of components!”
There’s less enthusiasm for gardening, care, communication and maintenance. But that’s where the really important work happens.
In her book The Real World of Technology, the metallurgist Ursula Franklin contrasts prescriptive technologies, where many individuals produce components of the whole (think about Adam Smith’s pin factory), with holistic technologies, where the creator controls and understands the process from start to finish.
In that light, design systems take their place in a long history of dehumanising approaches to manufacturing like Taylorism. The priorities of “scientific management” are the same as those of design systems—increasing efficiency and enforcing consistency.
Humans aren’t always great at efficiency and consistency, but machines are. Automation increases efficiency and consistency, sacrificing messy humanity along the way:
Machine with the strength of a hundred men
Can’t feed and clothe my children.
Historically, we’ve seen automation in terms of physical labour—dock workers, factory workers, truck drivers. As far as I know, none of those workers participated in the creation of their mechanical successors. But when it comes to our work on the web, we’re positively eager to create the systems to make us redundant.
The usual response to this is the one given to other examples of automation: you’ll be free to spend your time in a more meaningful way. With a design system in place, you’ll be freed from the drudgery of manual labour. Instead, you can spend your time doing more important work …like maintaining the design system.
You’ve heard the joke about the factory of the future, right? The factory of the future will have just two living things in it: one worker and one dog. The worker is there to feed the dog. The dog is there to bite the worker if he touches anything.
Roll on snare drum.
Wednesday, January 22nd, 2020
Good morning, everybody. It is a real honour to be here. As Simon said, I was here six, seven, eight years ago attending this conference because it’s such a great conference. I’m kind of feeling the pressure now that I’m up here on the stage speaking at this conference. I’m just glad I’m on first so I can get it over with and then listen to all these great talks.
I’m here today to talk to you …which is kind of weird when you think about it. I mean, first, the fact that it’s me up here on the stage through some clerical error.
But also, I’m going to talk to you. I’m going to vibrate air over my vocal cords and move this big meaty piece of flesh inside my jaw up and down vibrating the airwaves and you’re going to listen to me doing that. It seems like a crazy thing to do except for the fact that, of course, I’ll be using language.
Maybe the great distinguishing feature of our species, language. The great leap forward that happened—who knows—50,000, 100,000 years ago when we, as a species, developed language. With language, by moving those vocal cords and that big piece of flesh in my jaw, we can tell stories. I can recount something that happened in the past.
Perhaps more amazingly, we can imagine things that might come to be. I could tell you something that might happen in the future. So language is a kind of time travel.
It’s all possible because we’re speaking the same codebase. The particular language I’m talking now is English. As long as you can decode English then all these noises I’m making will make sense to you even if there isn’t actually any information in the words. I can say Chomsky’s famous one.
Colourless green ideas sleep furiously.
You can parse that. It doesn’t make any sense, but you can parse it.
Most of the time, the sentences we use also convey some kind of information. Language is not just time travel. Language is also communication.
There can be an idea that’s sitting in my head and I’ll, you know, vibrate the air and vocal cords, flap this big fleshy thing in my jaw around, and transfer the idea from my head to your head. Language is almost like a virus. You can’t help but take the idea in.
I can say to you, “Don’t think of an elephant,” right? Now you’ve just thought of an elephant. It’s the language equivalent of the chicken game which, if you haven’t played before, sorry. You’ve just lost.
This sentence, “Don’t think of an elephant,” is actually the title of a book by George Lakoff. George Lakoff is a linguist. He’s written many books. He wrote Women, Fire, and Dangerous Things. He wrote this, Metaphors We Live By, because he’s kind of obsessed with metaphors.
We use metaphor all the time in language. We use conceptual metaphors: we take one idea and use the language of that idea to talk about a different idea. The classic example is something intangible.
Let’s say time. How do we talk about time when we can’t touch it, we can’t feel it, it’s intangible? Well, we use metaphor.
We talk about time as though it’s a physical object moving through space. We say time flies or time drags or we talk about time as though it’s a resource. We talk about saving time, wasting time.
You can’t do any of those things with time. That’s not how time works. But the metaphor is very helpful.
The other kind of metaphor is the cognitive metaphor. This is what George Lakoff is interested in, particularly in things like political language. How we frame a debate can tip the scales of how that debate would unfold. If we were about to have a debate about tax relief, well, before the debate has even begun, we’ve framed taxation as something you need relief from and the scales have been tipped.
I’m very interested in this idea of metaphor, analogy, and simile and how we talk about the work we do. It’s such a young industry. What we do is we borrow from other industries. We’re not the first to do this. There’s a great book called Understanding Comics by Scott McCloud. Who’s read Understanding Comics? It’s great.
It’s about comics but, really, it’s just a fantastic book. It’s written as a comic. In it, Scott McCloud makes the point that this new medium, comics, had to kind of borrow from the existing mediums that came before. He points out that this isn’t new. He says:
Each new medium begins its life by imitating its predecessors. Many early movies were like filmed stage plays. Much early television was like radio with pictures.
Right? That it takes time.
Now, this idea of a new medium having to borrow the tropes and the language of the medium that came before, this idea pops up again on the web in this article published in the year 2000 by John Allsopp on A List Apart, A Dao of Web Design. Can I get a show of hands of who’s read A Dao of Web Design? Awesome. You are my people. The rest of you, please read it. It’s such a wonderful article.
It’s crazy that I’m standing up here recommending, “Oh, yeah, you should totally read this article from the year 2000,” but it is relevant. It’s amazingly relevant still today. It’s maybe more relevant today than when it was written. In the article, John says:
When a new medium borrows from an existing one, some of what it borrows makes sense, but much of the borrowing is thoughtless, it’s ritual, and it often constrains the new medium. Over time, the new medium develops its own conventions, throwing off existing conventions that don’t make sense.
Now, at the time John was writing this, 2000, of course, we were borrowing from what had come before in the previous medium and that was print. We were trying to figure out how do we get the same level of control that we were used to in the world of print on the web. We did that using clever techniques thanks to David Siegel who wrote this book, Creating Killer Websites. David Siegel, if you don’t know the name, you’re certainly familiar with his work because he’s the guy who came up with the idea of using tables for layout and having a one-pixel by one-pixel spacer gif.
Hey, listen. That was the only way we could do it back then. They were hacks, yes, but they were necessary hacks. He did actually recant. Years later, he wrote a piece that said, the web is ruined and I ruined it. This may be overstating the case, but you know.
He was pointing out we could use these techniques, these hacks, to constrain the web and make it work like print. We could get pixel-perfect control. John Allsopp, in his article, is kind of pushing against that and going, no, no, no:
The web is a new medium. It has emerged from the medium of printing whose skills and design language and convention strongly influence it. It is too often shaped by that from which it sprang. Killer websites are usually those which tame the wildness of the web, constraining pages as if they were made of paper. Desktop publishing for the web.
So, I mean, John totally acknowledges that there is a lot to learn from this rich, rich history of print and, before print, just writing. This is clearly the second great leap of our species. We had language where we could communicate ideas, tell stories, imagine the future—as long as we’re in the same physical space—and then we came up with writing. Now we can communicate ideas, talk about the future and the past, and we don’t even have to be in the same physical place. Someone who died centuries ago can put an idea in your head by putting language onto a medium like vellum or, later, paper.
You can see this evolution over centuries from illuminated manuscripts to the printing press, Gutenberg, until we get to the 20th Century and we really start to refine the design. We got the Swiss School of Design, the fonts, typography, and the grid system. There’s a lot to learn here.
What’s interesting to me, though, is what seems to be this battle of extremes. We’ve got David Siegel talking about desktop publishing for the web, effectively, and John Allsopp talking about, “No, the web is its own medium. It needs to have its own conventions.”
They seem to be at opposite ends of a spectrum. Yet, they actually have a commonality because, on both sides, when they’re talking about this, they’re talking about websites — web sites. Now, that in itself is a metaphor. You don’t have physical sites on the web. It’s intangible like time. Yet, we chose this metaphor. The idea of a site, a place where you go to a physical place.
Site actually is pretty good with connotations of a building site, a construction site. That was literally the metaphor in the ’90s. The web is like a construction site. It kind of is constantly under construction. Oh, you want the full nostalgic effect?
There we go. We’re back to Geocities. But I feel like then we decided to grow out of this metaphor and use more grownup metaphors. We got professional. We had to borrow from other industries, other mediums, and here’s one that people are very fond of borrowing: architecture—describing what we do as architecture.
Whether it’s on the design side or the development side, talking about us as architects. It seems like a very appealing industry to borrow from, which is fascinating. If you ever talk to architects, man, it’s a shitty industry. Spec work, awards, and competition, it’s not a great industry.
But we seem to hold it up as, like, “Oh, yeah, we’re like architects because architects are awesome.” I think of Hollywood because every Hollywood movie that has an architect in it, the architects are always really nice people. They’re always like the protagonist, never the antagonist. The architect is never the villain.
It’s fair enough. It’s fair enough to borrow things from something like architecture. For example, I know plenty of designers who would say that this book is the best book about UX that they’ve ever read, 101 Things I Learned in Architecture School by Matthew Frederick. It was published in 2007. It’s not written for UX designers. It’s not written about the web, but there are lessons in there that are directly applicable.
There are other works from the world of architecture that have definitely influenced the work we are doing today like the classic from Christopher Alexander, A Pattern Language. Now this—I say classic rightly—this is a classic book. A classic book is a book everyone has heard of and nobody has read.
That is certainly the case here. Published in 1977, and it influenced lots of people doing things in the digital space. Ward Cunningham, the inventor of the wiki, he said, yeah, he was really influenced by A Pattern Language.
The idea of a pattern language: it’s architecture, but breaking things down into components with parameters you could tweak, used in public spaces, buildings, things like that. It’s a modular approach. Later on, in the software world, the Gang of Four wrote Design Patterns: Elements of Reusable Object-Oriented Software, and they were directly influenced by Christopher Alexander, this idea of a pattern language, components, patterns, modularity.
What’s interesting is there’s another book by Molly Wright Steenson, who you may remember as a blogger, Girl Wonder. She worked in the world of architecture and she’s written a book about the influence of architects and designers on the digital space: Richard Saul Wurman and information architecture, where there’s a very direct metaphor, but also Christopher Alexander.
She points out, actually, the funny thing is, he’s had way more of an influence in the digital space than he ever had in architecture. Most architects don’t like him. They think he’s a bit preachy. But his influence in the digital space is massive.
Here I am talking about modularity, components, and patterns. Well, I mean, that is so hot right now. Design systems: we’re breaking things down into patterns. In fact, I ended up organizing a conference in 2017 purely about design systems, pattern libraries, styles, all this stuff, called Patterns Day. It was great. We had these wonderful speakers. Jina Anne was there, Rachel Andrew, Alla Kholmatova, Alice Bartlett. It was great.
But, by the end of the day, I was kind of half-joking in saying we should have had a drinking game where, every time someone referenced Christopher Alexander, we had to take a drink, because his spirit loomed large over this. Actually, the full rules of the drinking game I came up with afterward were: any time someone references Christopher Alexander, you take a drink. Any time someone says Lego, you take a drink. Any time someone says that naming things is hard, take a drink. Any time someone says atomic or atomic design, take a drink. Any time someone says Bootstrap, you puke the drink back up.
A Pattern Language is a work of architecture that directly not just influenced but is still influencing our work today; the idea of breaking things down into components to reuse.
Now, there’s another work from the world of architecture that has a big influence on me. It’s a classic book, again, How Buildings Learn. It’s the best book I’ve never read, published in 1994, by Stewart Brand. There was also a TV series that went with this that’s pretty fascinating.
In this, he talks about the work of a British architect named Frank Duffy and Duffy’s idea of something he called shearing layers. What Duffy said was that a building properly conceived is several layers of longevity. He kind of broke these down. You’ve got the site that the building is on. We’re talking about geological time scales.
Then above that, the structure you hope will last for centuries. Then you’ve got the infrastructure inside the building that you might have to swap out every few decades. Change the plumbing. Then you’ve got the walls and the doors. You can change them every so often until you get into the room. You’ve got furniture, which you can move on a daily basis.
The time scales get faster as you move inward. He diagrammed it like this. This is shearing layers diagrammed for the building. I find this really interesting, this idea of different time scales.
But there’s another factor here I’m kind of fascinated by, which is that each layer depends on the layer below. You can’t have a structure until you’ve got a site to build on. You can’t have furniture inside a room until you’ve got the room. You need to have the walls there. Each layer is building on top of what’s come before. You can’t jump straight ahead to furniture without first having all those other layers.
Now, this reminds me of another idea that the writer Steven Johnson talks about a lot in his work, for example, this book, Where Good Ideas Come From. This is the idea of the adjacent possible, that certain inventions leap forward that can’t happen until other things have happened before them.
There’s a reason why the microwave oven wasn’t invented in medieval France. Too many other things had to be invented first before something like the microwave oven becomes inevitable.
Everything we do is kind of built on this idea of the adjacent possible because businesses and services on the web are on top of a whole bunch of layers of adjacent possibilities. You can’t have Twitter, Facebook, or Wikipedia until the web exists. The web itself is built on all of these layers that have to happen first.
We have to have the Industrial Revolution. We have to have electricity. Then somebody has to create circuitry. We have to get to the idea of having computers and then networked computers, something like the Internet. Then the web becomes possible. Once the web is possible, then all these businesses on top of the web become possible.
This idea of the adjacent possible, the shearing layers, they kind of fascinate me because I’m seeing a parallel there.
Now, Stewart Brand, who wrote about shearing layers and architecture, he revisited this idea of shearing layers and took them out from the world of architecture in a later work called The Clock of the Long Now. Stewart Brand is one of the founders of the Long Now Foundation. If you haven’t heard of it, it’s an organization dedicated to long-term thinking. I’m a card-carrying member. The card is designed to last for a few thousand years as well.
They’re currently building a clock that will tell time for 10,000 years. Brian Eno has written an algorithm for the chimes so that when it chimes once a century, it will never be quite the same chime. It’s encouraging long now thinking.
In this book, the full title of the book being The Clock of the Long Now: Time and Responsibility: The Ideas Behind the World’s Slowest Computer, he extrapolates shearing layers into something he calls pace layers. If you take the shearing layers model and look around you, it’s everywhere. It’s kind of like systems thinking, the Donella Meadows idea that systems are everywhere.
It’s kind of true. You look around these pace layers; shearing layers applied to the real world are everywhere. The example he gives is our species. If we look at the human race, we have these different time scales. The slowest is our physical nature as in our DNA, our physiological nature. That takes millennia to change. Physiologically, there’s no difference between a caveman and a spaceman.
Above that, you’ve got culture. This takes centuries, maybe longer, to accumulate over time.
Then systems of governance; not governments — governance. How are we going to run the societies?
Infrastructure, you want that to move faster, but not too fast or it could be very disruptive. Then you get into commerce, trading. Very fast-moving.
Then, finally, you’ve got fashion, which is super-fast. By fashion, he means things like popular music, anything that’s supposed to move fast. If fashion moved slowly, that wouldn’t be a good thing. It’s meant to move fast. It’s meant to try things out. “What about this? No, what about this? Try this.” Right? You don’t want that for the things further down.
He’s mapped this onto these layers. From shearing layers, we go to pace layers. They have different timescales.
I’m talking about the difference between these really fast layers at the top, you know, “What about this? Try this? Today, we’re doing that,” compared to the really slow layers at the bottom that move slowly and are resistant to change.
Fast learns but slow remembers. Fast proposes and slow disposes. Fast is discontinuous but slow is continuous. Fast and small instructs slow and big by accrued innovation and occasional revolution, and slow and big controls small and fast by constraint and constancy. Fast gets all our attention, but slow has all the power.
Now, once I was exposed to this idea and this virus had landed in my head, I found that I couldn’t get it out of my head. I started seeing pace layers everywhere. At Clearleft, where I work, it’s a running joke. On every project, we have a kickoff. It’s like, what’s the time to pace layers? How long will it be before someone makes a pace layer analogy? It’s like my brain has now been rewired to see pace layers everywhere.
It’s like, you know, the first time that someone points out the arrow in the FedEx logo. There was your life before that and there’s your life after that.
You’ve all seen the arrow in the FedEx logo. Yeah.
What about Toblerone? You’ve all seen the bear? Ah, yeah! Right? You will never be able to unsee that.
Consider the duck.
It’s a perfectly normal, ordinary duck. Agreed? But then your brain is exposed to the idea that all ducks are actually wearing dog masks.
All ducks are actually wearing dog masks. Now, when I show you the same picture of the same duck—
—you will never be able to unsee that. That’s how my brain feels when it comes to pace layers. I see them everywhere. It’s like the crazy wall part of the serial killer’s lair in the murder mystery. It’s just pace layers.
I couldn’t help but apply pace layers to the work we do mapping our medium to pace layers. Let’s try it with the World Wide Web.
Well, we build on top of the Internet. We can’t have the web before having Internet. At the very bottom layer, you’ve got the protocols of the Internet itself, you know, TCP/IP, which have been pretty much unchanged for decades. They were there from the ARPANET before the Internet. It’s a good thing that they’re unchanged. You would not want to be swapping out that low layer very quickly.
Above that, we have all the different protocols we use: protocols for email, protocols for file transfer, and protocols for the World Wide Web, HTTP, the hypertext transfer protocol. Now, this has evolved over time. We now have HTTP/2, but it’s been a slow process and that feels right. Again, we shouldn’t be swapping it out too quickly, but it’s a bit faster moving than the Internet protocols.
On top of HTTP, we can put our URLs. Now, I would love it if URLs were right down at the bottom layer and they were permanent and they never changed and they never went away. That is the web I want, but I must acknowledge that, alas, you have to work hard to keep URLs alive. They do change. They do move. They do get destroyed, which is a bit of a shame, but we can work at it, people. We can work on keeping our URLs alive.
What we put at those URLs, at the simplest level, is HTML. It was there from the start. From day one of the web, HTML was there and it’s still there today, but it’s evolved. It’s changed over time. Initially, HTML had 21 elements and now it’s got 121 elements, so it’s evolved.
But it feels like you can keep up with the pace of change. The last big evolution of HTML was in 2010, with HTML5. We do get new additions every now and then, but it’s fine. We can keep up with it.
Then CSS, CSS changes may be more — definitely changes more rapidly than HTML. That feels like a good thing. We kind of want more. Give us some more CSS and now we’ve got Grid and we’ve got Flexbox. We’ve got all these great, new CSS things. Custom properties.
I don’t feel too overwhelmed by that. I still feel like, “Oh, no, this is good. We’ve got new CSS. I’m feeling I can keep on top of this, you know, read the right articles, read the right books, try them out. It’s fine.”
Then there’s JavaScript. At that pace, I constantly feel like I’m falling behind, like, “Oh, I haven’t even heard of this new thing that apparently everybody is using.”
Does anyone else feel overwhelmed by this pace of change? Okay, good. Keep your hands up for a sec and just look around. All right? You are not alone. This turns out to be normal.
Whereas if you see JavaScript as the fast-moving top layer of the stack, it’s, “Oh, okay. It’s supposed to move fast. It would be bad if it moved slow. It’s meant to be trying stuff out. We see what sticks.”
Think of how we used to do client-side form validation in JavaScript; now HTML gives us the required attribute. The pattern, it stuck. The spaghetti stuck to the wall and it moved down the layers into something more stable.
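A sketch of that settling-down, using form validation as the example (the form itself is invented for illustration):

```html
<!-- What once required a JavaScript library is now declarative HTML:
     the browser blocks submission until a valid address is entered -->
<form action="/subscribe" method="post">
  <label for="email">Email</label>
  <input type="email" id="email" name="email" required>
  <button>Subscribe</button>
</form>
```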
Now, the other thing I realized by mapping our technology stack of the web onto this pace layer model is that this is how I build. When I’m building a website, I pretty much start at the third layer. I don’t worry about whether the Internet is on.
This seems to me to make sense as a way of building on the web because it maps to the structure of the pace layers of the web. But it’s also a testament to the flexibility of the web that you don’t have to build this way. If you don’t want to build in this layered way, you don’t have to.
Now, this model makes complete sense in other mediums. I think other mediums have influenced our thinking on the web. Maybe we’ve borrowed the metaphors of these other mediums.
For example, if you’re building a native app, this makes complete sense. If you’re building an iOS app and I have an iOS device, it works great. I get 100% of what you designed. But if you build an iOS app and I have, say, an Android device, it doesn’t work at all. You can’t install an iOS app onto an Android device. Those are your options: either it works great or it doesn’t work at all. This mental model makes complete sense in that field.
On the web, because we can have this layered approach, that means we can build like this. We can go from something that doesn’t work at all to something that just about works—maybe it’s just text on a screen—to something that works fine—maybe it’s missing a bunch of behaviors, but the user can accomplish what they want to do—to something that works well, but maybe the latest and greatest browser APIs aren’t supported by a particular browser—and then to something that works great like the latest browser running the best device, great network.
Most people are going to be somewhere on this continuum. Maybe nobody is going to get 100% of what you hope they get, but no one is going to get zero percent either as long as you’re building in this way, as long as you’re building with the grain of the web, building in layers, one thing on top of the other.
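As a sketch of building in layers (my own example, not from the talk; the overlay function is a hypothetical enhancement):

```html
<!-- Layer one: a plain link that works in any browser -->
<a href="/search" class="search-link">Search</a>

<script>
// Layer two: enhance the link if the browser is capable;
// if any of this fails, the plain link above still works
function openSearchOverlay() {
  // hypothetical enhancement: an in-page search overlay,
  // sketched here as a placeholder
  alert('Search overlay would open here');
}

if ('querySelector' in document) {
  document.querySelector('.search-link')
    .addEventListener('click', function (event) {
      event.preventDefault();
      openSearchOverlay();
    });
}
</script>
```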
I’m going to quote Ethan here. Hi, Ethan. Ethan said:
In a way, this is a way of busting assumptions, the what-ifs. What if something isn’t supported? By building in a layered way, it’s okay. Everything will fall back to the layer below, the adjacent possible.
Now, Ethan, of course, we all know from this article, Responsive Web Design, published on A List Apart. When was that? 2010. My God, nine years ago. Ten years after John Allsopp published A Dao of Web Design on A List Apart. One of the first things Ethan does in this article is to reference A Dao of Web Design. You could say that Ethan was building on top of that foundational layer that was set by John Allsopp.
Architecture again. Responsive web design. The reason why Ethan chose that term was because there was this idea in architecture called responsive architecture about buildings that could respond to the conditions of the people in the buildings. That made a really good metaphor for talking about the web on large screens, small screens, and everything in between.
This architecture thing, as a metaphor, it’s not bad. We can learn from it. I think, just be careful not to take it too far.
It’s not the only metaphor we use. Here’s another one. When we don’t talk about ourselves as architects, we’re engineers. Yeah.
It sounds good. This one predates the web. We’ve been talking about the idea of software engineering for a long time. I’m very partial to this term: software engineering. Not so much for the term itself (not that I think it’s a particularly good metaphor) but for where it comes from, which is fricken’ awesome.
The term “software engineering” comes from Margaret Hamilton. Margaret Hamilton was in charge of the onboard flight software on the Apollo moon landing. This is engineering. That is the code base she’s standing next to there, which would then literally be woven into the computers onboard Apollo.
But as a metaphor, engineering, well, there’s a whole bunch of different kinds of it. What kind of engineer are we talking about here? Is it material engineering, structural engineering, chemical engineering, aeronautical engineering? They all have commonalities. One being, as an engineer, you’ve got to know two things. There’s the materials you’re going to be working with and the tools you’re going to use to shape those materials.
These are obvious tools we use to build the web, but there are less obvious tools. If you’re working on a web project, these tools also get used. You’re going to be talking over email. You’re going to be communicating over Slack, organizing spreadsheets. Spreadsheets, people.
We talk about these as productivity tools, though sometimes I know it feels like they are reducing productivity rather than increasing it. But it’s kind of a misnomer when you think about productivity tools. All tools are productivity tools. That’s literally what tools are for: to make you more productive.
I think we should acknowledge that these are legitimate design tools. You can’t launch a project without putting in some time in some kind of communication tool.
Then when it comes to the actual welding of these materials, we’ve got a whole bunch of tools that sit in our machines or sit in our web servers. Now I feel like I’m back up at that top layer of the pace layers and I’m getting overwhelmed with the task runners, the build tools, the toolchains, the transpilers, and the preprocessors. Apparently, it changes every week. Oh, you’re still using Grunt? No, we’re using Gulp. No, Webpack. That’s what’s so overwhelming.
It also feels like it’s quite complicated. This is complicated stuff, but it’s like we’ve chosen it. We’ve chosen to make our lives complicated, in a way.
I’ll tell you what it reminds me of. Do you remember that startup, Juicero?
Where they sold a big, expensive, complicated machine to make juice, but you had to buy exactly the right juice packets to put in the big, expensive machine to make the juice. It works. It works great. The big, expensive, complicated machine does its job but somebody noticed that you could actually just take the packets and squeeze them by hand and it still produces juice. I’m just saying that squeezing by hand is still an option. You can build websites by squeezing by hand. (I think this metaphor has been stretched just about as far as it can do, so I will leave it there.)
There’s this other kind of spectrum, I guess, between the materials and the tools and then the people that will be exposed to the materials and the tools. They kind of fall into two categories: the engineers themselves and the end-users.
When we’re evaluating our tools and asking, “Is this the right tool to use?” we should evaluate it from our perspective, yes, “Is this going to be a helpful tool to me as an engineer?” if we’re using that metaphor. But I strongly feel we should also ask, “Is this going to be useful for the end-user?”
If those two things come into conflict, what then? Do we privilege our own experience over the user experience? I would hope not. I worry that, in a lot of tool choices, particularly on stuff that gets sent down to the browser. “Oh, I’m going to use a CSS framework.” Great. Good for you. That’s helping you out but now the user has to pay the cost of the benefit that you get from that CSS framework because they have to download the whole CSS framework.
Sometimes these things come into conflict, and I feel like maybe we privilege the developer experience over the user experience, and that worries me. Other times, they don’t come into conflict at all. All those tools like preprocessors and task runners just sit on your own computer; they have no direct effect on the end-user experience. Frankly, use whatever you like.
When we’re evaluating tools, there are all these questions to ask. Who benefits from the tool? If I choose to use this tool, will it benefit the users? Will it benefit the engineers? Neither? Both?
There are other questions to ask, like: just how good is this tool? To evaluate that, we ask: how well does it work? Does this tool do what it says it will do, and do it well?
This, of course, is a completely valid question to ask but there’s a corollary that I think is more valid and that’s to ask not just how well does it work but how well does it fail?
What happens when something goes wrong?
These technologies on the web fail well by design. CSS fails well. Use a CSS property or a CSS value the browser doesn’t understand, and the browser just ignores it. It fails well.
HTML: Make up an HTML element. Throw it into a webpage. The browser doesn’t throw an error. The browser doesn’t stop parsing the webpage. It just ignores it and moves on. It fails well.
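To make that concrete, here’s a minimal illustration (the made-up names are exactly that: made up):

    <style>
      p {
        color: darkslategray;  /* understood, so the browser applies it */
        text-wobble: 42deg;    /* made-up property: the browser ignores this declaration and carries on */
      }
    </style>
    <p>This paragraph still gets its colour.</p>
    <made-up-element>No error is thrown for this unrecognised element; the text inside is rendered and parsing moves on.</made-up-element>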
It actually makes sense to not jump ahead to the powerful stuff, to the top of the pace layers, but to try and build in layers and stay low for as long as possible. This is actually a principle, a principle that underlies the architecture of the web itself called the Principle of Least Power. You should choose the least powerful language for a given purpose, which seems really counterintuitive.
Why would I choose the least powerful language to do something? Surely, I want more power. The idea here is that power comes at an expense: the expense of complexity and fragility. The more powerful a technology is, the more likely it is to fail badly.
Derek Featherstone put it well when he said that, in the web’s front-end stack of HTML, CSS, JavaScript, and ARIA, if you can solve a problem with a simpler solution lower in the stack, you should: it’s less fragile and more foolproof. Do you want something to happen on hover? :hover - done. Right? Oh, you need to make an interactive button? Use the button element. Be lazy.
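As a sketch of what being lazy buys you (doThing is a hypothetical handler):

    <!-- The powerful, fragile way: rebuilding a button out of a div and script -->
    <div class="fake-button" role="button" tabindex="0"
         onclick="doThing()"
         onkeydown="if (event.key === 'Enter' || event.key === ' ') doThing()">
      Do the thing
    </div>

    <!-- The lazy way, lower in the stack: keyboard focus and activation come for free -->
    <button type="button" onclick="doThing()">Do the thing</button>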
This makes a lot of sense, the Principle of Least Power. It makes a lot of sense to me on the web, especially when you combine it with a universal law that definitely applies on the web, and that’s Murphy’s Law:
Anything that can possibly go wrong will go wrong.
This comes directly from the world of engineering. Edward Aloysius Murphy Jr. was an aerospace engineer. It’s because he had this attitude that he never lost anybody on his watch.
I think we tend to dismiss things going wrong as edge cases. We kind of assume the average case. Other industries, when they’re making cars, they test them. They strap crash test dummies in. They smack them into walls at high speed.
To be fair, a lot of the reason why they have to do that is because of regulation. They didn’t necessarily choose to do it, but still. Can you imagine if they went, well, actually, we realize that most people are going to drive cars on roads and people driving into walls is an edge case, so we’re not going to worry too much about that?
Now, obviously, you want to hope for the best but you should prepare for the worst. Trent Walton said:
Like cars designed to perform in extreme heat or on icy roads, websites should be built to face the reality of the web’s inherent variability.
The web’s inherent variability, that gets to the heart of it.
What Dave Siegel was trying to battle with his pixel-perfect table layouts was the web’s inherent variability. What John Allsopp was calling for was for us to embrace the web’s inherent variability. It’s a feature, not a bug.
Are we engineers? Can we call ourselves engineers? Well, let me tell you something from the world of structural engineering.
This is the plan for the Quebec Bridge in Canada, a cantilever bridge. Construction started at the start of the 20th century. There was a competition to see who would get to design and build the bridge, because that’s the way the industry works.
The engineer in charge was named Theodore Cooper. Now, originally, the bridge was meant to be 490 metres long, but Theodore Cooper changed the specification to make it 550 metres long, mostly because the Firth of Forth Bridge, up in Scotland, was the longest cantilever bridge in the world at the time. He wanted this bridge to exceed that, so he made the bridge longer, but he did not recalculate the already high stresses being placed on the material of the bridge.
Oh, also, Theodore Cooper refused to work on site. He was down in New York, supposedly overseeing construction from there. And when it was proposed that somebody should check his calculations, he took it as a personal affront and said, “No, no, no. No, no, that won’t work,” so there were no code reviews happening on this project.
Now, someone was on site: a young engineer named Norman McLure. By August 6th, 1907, he had started to notice that the steel was bending under the stress. By August 27th, it had got worse.
Cooper was notified down in New York. He did send a telegram back to Quebec: “Place no more load on Quebec bridge until all facts considered - stop.” But he was only implying that the work should stop. He never explicitly said, “Stop the work right now,” so the telegram was ignored and work continued.
On August 29th, 1907, the bridge collapsed. It was shortly before the end of the day. The whistle was just about to blow to signal the end of the working day. There were 86 workers on the bridge and 75 of them died.
Now, something started happening in Canada a few years after this, by 1925. Engineering schools in Canada started holding private ceremonies around graduation time. This was a ceremony that was separate from qualifications. This wasn’t about whether you were qualified to be an engineer. This was called The Ritual of the Calling of the Engineer. You would speak an obligation penned by Rudyard Kipling, which I won’t repeat here because it’s meant to stay within the confines of this ritual.
You would also receive an iron ring. This iron ring would be a symbol of pride of being an engineer, but also a symbol of humility. For the longest time, the myth persisted that the iron itself was made from the steel in the Quebec Bridge. It’s not true, but the Quebec Bridge certainly looms over the idea of the iron ring. You’d wear it on the little finger of your working hand, so it would brush against the paper or the computer keyboard during your working day as a constant reminder of your responsibility as an engineer.
When we call ourselves engineers, I do have to ask, have we earned it? Do we take our responsibility seriously?
Maybe we don’t call ourselves engineers, but then what do we call ourselves? Does it even matter?
Well, we could go back to that original metaphor from the ’90s, under construction. Maybe we’re builders. We build things. The web is under construction. We’re the ones constructing it. It’s not so bad, you know, to be the ones literally building the web. It’s kind of awesome when you think about it.
Christopher Alexander, when he was talking about his reason for coming up with A Pattern Language, said:
Most of the wonderful places in the world were not made by architects but by the people.
Maybe we’re at the bottom of the layer stack here as workers just building the web, but maybe we also have all the power — more power than we realize. Our collective power is greater than anything any architect could wield.
Yeah, maybe we’re builders. Maybe we’re bricklayers. I know Simon comes from a long line of bricklayers. It is a noble profession. Think about what our building blocks are, the building blocks of the World Wide Web.
The World Wide Web, I think, is the next great leap forward. We had language, writing, the printing press, and now hypertext in the form of the World Wide Web. Who gets to build it? We do, with this kind of building block: the URL, the link. What an amazing building block that is.
I can make a webpage and put two links on it linking to two different things. That combination of those two links has never existed before in the history of the web. We’ve created something new, link by link, building block by building block, building in layers.
I’m reminded of an apocryphal story, maybe from medieval times (who knows), about a traveler coming across three workers. All three workers are doing the same thing. They’re building. They’re moving stones. They’re putting stones one on top of the other.
The traveler says to the first builder, “What are you doing?”
He says, “Oh, I’m moving stones.”
He says to the second builder, “What are you doing?” He says, “I’m building a wall.”
He says to the third builder, “What are you doing?”
He says, “I’m building a cathedral.”
They’re all doing the same task but thinking about it in different ways. Maybe that’s what we need to do. Forget about labels and metaphors: architecture, engineering, building, whatever. Just think about what a privilege it is to be doing this, to embrace the fact that we are the builders. We are the bricklayers.
Today, for example, we’re going to hear from quite an amazing collection of bricklayers that I’m really looking forward to hearing from. I want to hear what they’re building. I want to hear their stories of how they built it, why they built it.
But to do that, I need to stop moving air over these vocal cords and flapping this fleshy piece of meat around in my mouth and just stop talking. Thank you for listening.
Thursday, January 9th, 2020
This is a great proposal that would make the Cache API even more powerful by adding metadata to cached items, like when it was cached, how big it is, and how many times it’s been retrieved.
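For context, here’s roughly what the Cache API exposes today: you can enumerate entries, but none of that metadata comes with them. A minimal sketch (the cache name is illustrative, and this needs to run inside an async function):

    const cache = await caches.open('v1');
    const requests = await cache.keys();
    for (const request of requests) {
      // You can see what is cached…
      console.log(request.url);
      // …but not when it was cached, how big it is, or how often it’s been retrieved.
    }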
Wednesday, January 8th, 2020
From Xerox PARC to the World Wide Web:
The internet did not use a visual spatial metaphor. Despite being accessed through and often encompassed by the desktop environment, the internet felt well and truly placeless (or perhaps everywhere). Hyperlinks were wormholes through the spatial metaphor, allowing a user to skip laterally across directories stored on disparate servers, as well as horizontally, deep into a file system without having to access the intermediate steps. Multiple windows could be open to the same website at once, shattering the illusion of a “single file” that functioned as a piece of paper that only one person could hold. The icons that a user could arrange on the desktop didn’t have a parallel in online space at all.
Monday, January 6th, 2020
I’ve been thinking about some of the default behaviours that are built into web browsers.
First off, there’s the decision that a browser makes if you enter a web address without a protocol. Let’s say you type in example.com without specifying whether you’re looking for http://example.com or https://example.com.
Browsers default to HTTP rather than HTTPS. Given that HTTP is older than HTTPS, that makes sense. But given that there’s been such a push for TLS on the web, and the huge increase in sites served over HTTPS, I wonder if it’s time to reconsider that default?
Most websites that are served over HTTPS have an automatic redirect from HTTP to HTTPS (enforced with HSTS). There’s an ever so slight performance hit from that, at least for the very first visit. If, when no protocol is specified, browsers were to attempt to reach the HTTPS port first, we’d get a little bit of a speed improvement.
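For reference, the HSTS enforcement mentioned above is a single response header (the max-age value here is just an example):

    Strict-Transport-Security: max-age=31536000; includeSubDomains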
But would that break any existing behaviour? I don’t know. I guess there would be a bit of a performance hit in the other direction. That is, the browser would try HTTPS first, and when that doesn’t exist, go for HTTP. Sites served only over HTTP would suffer that little bit of lag.
Whatever the default behaviour, some sites are going to pay that performance penalty. Right now it’s being paid by sites that are served over HTTPS.
Here’s another browser default that Rob mentioned recently: the viewport meta element.
I thought I might be able to get away with omitting meta name="viewport". Apparently not! Maybe someday.
This all goes back to the default behaviour of Mobile Safari when the iPhone was first released. Most sites wouldn’t display correctly if one pixel were treated as one pixel. That’s because most sites were built with the assumption that they would be viewed on monitors rather than phones. Only weirdos like me were building sites without that assumption.
So the default behaviour in Mobile Safari is to assume a page width of 1024 pixels, and then shrink that down to fit on the screen …unless the developer overrides that behaviour with a meta tag. That default behaviour was adopted by other mobile browsers. I think it’s a universal default.
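That override is the familiar bit of boilerplate:

    <meta name="viewport" content="width=device-width, initial-scale=1">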
But the web has changed since the iPhone was released in 2007. Responsive design has swept the web. What would happen if mobile browsers were to assume width=device-width by default?
The viewport meta element always felt like a (proprietary) band-aid rather than a long-term solution—for one thing, it’s the kind of presentational information that belongs in CSS rather than HTML. It would be nice if we could bid it farewell.
Monday, December 2nd, 2019
A one-stop shop for all the metacrap you can put in the head of your HTML documents.
Tuesday, October 29th, 2019
Official Google Webmaster Central Blog [EN]: More options to help websites preview their content on Google Search
Google’s pissing over HTML again, but for once, it’s not by making up new elements; this time it’s by misusing an existing extension mechanism:
A new way to help limit which part of a page is eligible to be shown as a snippet is the “data-nosnippet” HTML attribute on span, div, and section elements.
This is a direct contradiction of how data-* attributes are intended to be used:
…these attributes are intended for use by the site’s own scripts, and are not a generic extension mechanism for publicly-usable metadata.
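For illustration, here’s what that attribute looks like in use:

    <p>
      This sentence can appear in a search snippet.
      <span data-nosnippet>This one is excluded from snippets.</span>
    </p>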
Friday, September 6th, 2019
This is a brilliant technique by Remy!
If you’ve got a custom offline page that lists previously-visited pages (like I do on my site), you don’t have to store the metadata separately in something like IndexedDB—you can read the metadata straight from the HTML of the cached pages instead!
This seems forehead-smackingly obvious in hindsight. I’m totally stealing this.
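A minimal sketch of the technique, assuming the pages live in a cache named 'pages' (the name, and the crude title extraction, are illustrative; run inside an async function):

    const cache = await caches.open('pages');
    const listing = [];
    for (const request of await cache.keys()) {
      const response = await cache.match(request);
      const html = await response.text();
      // Read the metadata straight out of the cached markup
      const match = html.match(/<title>([^<]*)<\/title>/i);
      listing.push({ url: request.url, title: match ? match[1] : request.url });
    }
    // listing can now be rendered as a list of links to previously-visited pages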