
Wednesday, May 4th, 2022

Why I don’t miss React: a story about using the platform - Jack Franklin

This is a great case study of switching from a framework mindset to native browser technologies.

Though this is quite specific to Jack’s own situation, I do feel like there’s something in the air here. The native browser features are now powerful and stable enough to make the framework approach feel outdated.

And if you do want to use third-party dependencies, Jack makes a great case for choosing smaller single-responsibility helpers rather than monolithic frameworks.

Replacing lit-html would be an undertaking but much less so than replacing React: it’s used in our codebase purely for having our components (re)-render HTML. Replacing lit-html would still mean that we can keep our business logic, ultimately maintaining the value they provide to end-users. Lit-Html is one small Lego brick in our system, React (or Angular, or similar) is the entire box.

You Don’t Need A UI Framework — Smashing Magazine

We noticed a trend: students who pick a UI framework like Bootstrap or Material UI get off the ground quickly and make rapid progress in the first few days. But as time goes on, they get bogged down. The daylight grows between what they need, and what the component library provides. And they wind up spending so much time trying to bend the components into the right shape.

I remember one student spent a whole afternoon trying to modify the masthead from a CSS framework to support their navigation. In the end, they decided to scrap the third-party component, and they built an alternative themselves in 10 minutes.

This tracks with my experience. These kinds of frameworks don’t save time; they defer it.

The one situation where that works well, as Josh also points out, is prototyping.

If the goal is to quickly get something up and running, and you don’t need the UI to be 100% professional, I do think it can be a bit of a time-saver to quickly drop in a bunch of third-party components.

Tuesday, May 3rd, 2022

City of Women London

City of Women encourages Londoners to take a second glance at places we might once have taken for granted by reimagining the iconic Underground map.

I love everything about this …except that there’s no Rosalind Franklin station.

Friday, April 22nd, 2022

Blogging and the heat death of the universe • Robin Rendle

A cautionary tale on why you should keep your dependencies to a minimum and simplify your build process (if you even need one):

If it’s not link rot that gets you then it’s this heat death of the universe problem with entropy setting in slowly over time. And the only way to really defend against it is to build things progressively, to make sure that you’re not tied to one dependency or another. That complex build process? That’s a dependency. Your third party link to some third party font service that depends on their servers running forever? Another dependency.

Tuesday, April 12th, 2022

Starting and finishing

Someone was asking recently about advice for public speaking. This was specifically for in-person events now that we’re returning to actual live conferences.

Everyone’s speaking style is different so there’s no universal advice. That said, just about everyone recommends practicing. Practice your talk. Then practice it again and again.

That’s good advice but it’s also quite time-consuming. Something I’ve recommended in the past is to really concentrate on the start and the end of the talk.

You should be able to deliver the first five minutes of your talk in your sleep. If something is going to throw you, it’s likely to happen at the beginning of your talk. Whether it’s a technical hitch or just the weirdness and nerves of standing on stage, you want to be able to cruise through that part of the talk on auto-pilot. After five minutes or so, your nerves will have calmed and any audio or visual oddities should be sorted.

Likewise you want to really nail the last few minutes of your talk. Have a good strong ending that you can deliver convincingly.

Make it very clear when you’re done—usually through a decisive “thank you!”—to let the audience know that they may now burst into rapturous applause. Beware the false ending. “Thank you …and this is my Twitter handle. I always like hearing from people. So. Yeah.” Remember, the audience is on your side and they want to show their appreciation for your talk but you have to let them know without any doubt when the talk is done.

At band practice we sometimes joke “Hey, as long as we all start together and finish together, that’s what matters.” It’s funny because there’s a kernel of truth to it. If you start a song with a great intro and you finish the song with a tight rock’n’roll ending, nobody’s going to remember if somebody flubbed a note halfway through.

So, yes, practice your talk. But really practice the start and the end of your talk.

Monday, April 4th, 2022

Why Computers Won’t Make Themselves Smarter | The New Yorker

In this piece published a year ago, Ted Chiang pours cold water on the idea of a bootstrapping singularity.

How much can you optimize for generality? To what extent can you simultaneously optimize a system for every possible situation, including situations never encountered before? Presumably, some improvement is possible, but the idea of an intelligence explosion implies that there is essentially no limit to the extent of optimization that can be achieved. This is a very strong claim. If someone is asserting that infinite optimization for generality is possible, I’d like to see some arguments besides citing examples of optimization for specialized tasks.

Sunday, March 27th, 2022

Artifice and Intelligence

Whatever the merit of the scientific aspirations originally encompassed by the term “artificial intelligence,” it’s a phrase that now functions in the vernacular primarily to obfuscate, alienate, and glamorize.

Do “cloud” next!

Tuesday, February 22nd, 2022

Bones, Bones: How to Articulate a Whale

I found this to be thoroughly engrossing. An articulate composition, you might say.

I couldn’t help thinking of J.G. Ballard’s short story, The Drowned Giant.

Wednesday, February 9th, 2022

The web is a miracle

I wonder what kinds of conditions would need to be true for another platform to be built in a similar way? Lots of people have tried, but none of them have the purity of participation for the love of it that the web has.

Thursday, February 3rd, 2022

‘Like an atomic bomb’: So what now for the IAB’s GDPR fix after regulator snafu? - Digiday

Simply put, the popups asking people for consent whenever they land on a site are illegal.

Saturday, January 8th, 2022

Ban embed codes

Prompted by my article on third-party code, here’s a recommendation to ditch any embeds on your website.

Tuesday, January 4th, 2022

Plus Equals #4, December 2021

In which Rob takes a deep dive into isometric projection and then gets generative with it.

Thursday, December 23rd, 2021

Brian Eno on NFTs and Automaticism

Much of the energy behind crypto arises from the very strong need that some people feel to operate outside of a state, and therefore outside of any sort of democratic communal overview. The idea that Ayn Rand, that Nietzsche-for-Teenagers toxin, should have had her whacky ideas enshrined in a philosophy about money is what is terrifying to me.

Friday, December 10th, 2021

Test Your Product on a Crappy Laptop - CSS-Tricks

Eric’s response to Chris’s question—“What is one thing people can do to make their website better?”—dovetails nicely with my own answer:

The two real problems here are:

  1. Third-party assets, such as the very analytics and CRM packages you use to determine who is using your product and how they go about it. There’s no real control over the quality or amount of code they add to your site, and setting up the logic to block them loading their own third-party resources is difficult to do.
  2. The people who tell you to add these third-party assets. These people typically aren’t aware of the performance issues caused by the ask, or don’t care because it’s not part of the results they’re judged by.

Thursday, December 9th, 2021

Ain’t no party like a third party

This was originally published on CSS Tricks in December 2021 as part of a year-end round-up of responses to the question “What is one thing people can do to make their website better?”

I’d like to tell you something not to do to make your website better. Don’t add any third-party scripts to your site.

That may sound extreme, but at one time it would’ve been common sense. On today’s web it sounds like advice from a tinfoil-hat-wearing conspiracy nut. But just because I’m paranoid doesn’t mean they’re not out to get your users’ data.

All I’m asking is that we treat third-party scripts like third-party cookies. They were a mistake.

Browsers are now beginning to block third-party cookies. Chrome is dragging its heels because the same company that makes the browser also runs an advertising business. But even they can’t resist the tide. Third-party cookies are used almost exclusively for tracking. That was never the plan.

In the beginning, there was no state on the web. A client requested a resource from a server. The server responded. Then they both promptly forgot about it. That made it hard to build shopping carts or log-ins. That’s why we got cookies.

In hindsight, cookies should’ve been limited to a same-origin policy from day one. That would’ve solved the problems of authentication and commerce without opening up a huge security hole that has been exploited to track people as they moved from one website to another. The web went from having no state to having too much.
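These days, a server can at least opt in to that same-origin behaviour itself. A sketch of the response header it might send, with a placeholder cookie name and value:

Set-Cookie: sessionid=38afes7a8; SameSite=Strict; Secure; HttpOnly

With SameSite=Strict, the browser won’t attach that cookie to cross-site requests: exactly the restriction that should’ve been there from day one.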

Now that vulnerability is finally being closed. But only for cookies. I would love it if third-party JavaScript got the same treatment.

When you add any third-party file to your website—an image, a style sheet, a font—it’s a potential vector for tracking. But third-party JavaScript files go one further. They can execute arbitrary code.

Just take a minute to consider the implications of that: any third-party script on your site is allowing someone else to execute code on your web pages. That’s astonishingly unsafe.

It gets worse. One of the things this invited intruder can do is pull in yet more third-party scripts.

You might think there’s no harm in adding that one little analytics script. Or that one little Google Tag Manager snippet. It’s such a small piece of code, after all. But in doing that, you’ve handed over your keys to a stranger. And now they’re welcoming in all their shady acquaintances.

Request Map Generator is a great tool for visualizing the resources being loaded on any web page. Try pasting in the URL of an interesting article from a news outlet or magazine that someone sent you recently. Then marvel at the sheer size and number of third-party scripts that sneak in via one tiny script element on the original page.

That’s why I recommend that the one thing people can do to make their website better is to not add third-party scripts.

Easier said than done, right? Especially if you’re working on a site that currently relies on third-party tracking for its business model. But that exploitative business model won’t change unless people like us are willing to engage in a campaign of passive resistance.

I know, I know. If you refuse to add that third-party script, your boss will probably say, “Fine, I’ll get someone else to do it. Also, you’re fired.”

This tactic will only work if everyone agrees to do what’s right. We need to have one another’s backs. We need to support one another. The way people support one another in the workplace is through a union.

So I think I’d like to change my answer to the question that’s been posed.

The one thing people can do to make their website better is to unionize.

Saturday, December 4th, 2021

Ain’t No Party Like a Third Party - CSS-Tricks

Chris is doing another end-of-year roundup. This time the prompt is “What is one thing people can do to make their website better?”

This is my response.

I’d like to tell you something not to do to make your website better. Don’t add any third-party scripts to your site.

Wednesday, November 3rd, 2021

Publishing The State Of The Web

Back in April I gave a talk at An Event Apart Spring Summit. The talk was called The State Of The Web, and I’ve published the transcript. I’ve also published the video.

I put a lot of work into this talk and I think it paid off. I wrote about preparing the talk. I also wrote about recording it. I also published links related to the talk. It was an intense process, but a rewarding one.

I almost called the talk The Overview Effect. My main goal with the talk was to instil a sense of perspective. Hence the references to the famous Earthrise photograph.

On the one hand, I wanted the audience to grasp just how far the web has come. So the first half of the talk is a bit of a trip down memory lane, with a constant return to just how much we can accomplish in browsers today. It’s all very positive and upbeat.

Then I twist the knife. Having shown just how far we’ve progressed technically, I switch gears the moment I say:

The biggest challenges facing the World Wide Web today are not technical challenges.

Then I dive into those challenges, as I see them. It turns out that technical challenges would be preferable to the seemingly intractable problems of today’s web.

I almost wish I could’ve flipped the order: talk about the negative stuff first but then finish with the positive. I worry that the talk could be a bit of a downer. Still, I tried to finish on an optimistic note.

I won’t spoil it any more for you. Watch the video or have a read of The State Of The Web.

Tuesday, November 2nd, 2021

The State Of The Web

The opening presentation from An Event Apart Spring Summit held online in April 2021.

Hello, my friends. I’d like us to try to collectively achieve something today. What I’d like us to achieve is a sense of perspective.

To do this we need to take a step back and cast an eye on the past.

For example, I can look back and say “Wow, what a terrible year!”

A year of death. A year of polarisation. Of inequality. A corrupt government. Protests in the street as people struggled to fight against systemic racism.

Yes, I am of course talking about the year 1968.

The past

By the end of 1968, the United States of America was a nation in turmoil. Civil rights. The war in Vietnam. It felt like the polarising issues of the day were splitting the country in two.

But in the final week of the year, something happened that offered a sense of perspective.

In an audacious move, NASA decided to bring forward the schedule of its Apollo programme. Apollo 7 was a success but that mission was confined to Earth orbit. For Apollo 8, human beings would leave Earth’s orbit for the first time in history. The bold plan was to fly to and around the moon before returning safely to Earth.

From today’s perspective, you might just see it as a dry run for Apollo 11 when human beings would set foot on the moon. But at the time, it was an unbelievably bold move. A literal moonshot.

On the winter solstice, December 21st, 1968, Jim Lovell, Frank Borman, and Bill Anders were launched on their six-day mission to the moon and back.

The mission was a success. Everything went according to plan. But the reason why we remember the Apollo 8 mission today is for something that wasn’t planned.

First of all, after the translunar injection when the crew had left Earth orbit and were on their way to the moon (already the furthest distance ever travelled by our species), someone—probably Bill Anders—pointed a camera back at Earth.

This was the first picture ever taken by a human being of the whole earth. It’s quite a perspective-setting sight, seeing the whole Earth. To us today, it’s almost commonplace. But remember that there was a time when no one had ever seen this view.

In fact, throughout the 1960s activist Stewart Brand had a campaign, handing out buttons with the question, “Why haven’t we seen a photograph of the whole Earth yet?”

I like the “yet” at the end of that. It gives it a conspiracy-tinged edge.

Stewart Brand suspected that if people could see their home planet in one image, it could reset their perspectives. They would truly grok the idea of Spaceship Earth, as Buckminster Fuller would say. The idea came to Brand when he was on a rooftop, tripping on acid, experiencing the horizon curve away from him and giving him quite a sense of perspective.

Later, he would start the Whole Earth Catalog. It was like a print version of Wikipedia, with everything you needed to know to run a commune.

Later still, he went on to found the Long Now Foundation, an organisation dedicated to long-term thinking. I’m a proud member.

Their most famous project is the clock of the long now, which will keep time for 10,000 years. This is just a scale model in the Science Museum in London. The full-size clock is being built inside a mountain on geologically stable ground. Just thinking about the engineering challenges involved is bound to give you a certain sense of perspective.

But let’s snap back from 10,000 years in the future to that Apollo 8 mission in December of 1968.

This picture of the whole earth wasn’t the most important picture taken by Bill Anders on that flight. By Christmas Eve, the crew had reached the moon and successfully entered lunar orbit.

Oh my God! Look at that picture over there! There’s the Earth coming up. Wow, that’s pretty.

Hey, don’t take that, it’s not scheduled.

You got a color film, Jim? Hand me that roll of color quick, would you…

Oh man, that’s…

Quick! Quick!

This is what Bill Anders captured.

Earthrise.

I could try to describe it. But they should’ve sent a poet.

Fifty years later, this poet puts it beautifully. This is Amanda Gorman’s poem Earthrise.

On Christmas Eve, 1968, astronaut Bill Anders

Snapped a photo of the earth

As Apollo 8 orbited the moon.

Those three guys

Were surprised

To see from their eyes

Our planet looked like an earthrise

A blue orb hovering over the moon’s gray horizon,

with deep oceans and silver skies.

It was our world’s first glance at itself

Our first chance to see a shared reality,

A declared stance and a commonality;

A glimpse into our planet’s mirror,

And as threats drew nearer,

Our own urgency became clearer,

As we realize that we hold nothing dearer

than this floating body we all call home.

Astronauts have been known to experience something called the overview effect. It’s a profound change in perspective that comes from seeing the totality of our home planet in all its beauty and fragility.

The Earthrise photograph gave the world a taste of the overview effect, right at a time when it was most needed.

The World Wide Web

I wonder if it’s possible to get an overview effect for the World Wide Web?

There is no photograph of the whole web. We can’t see the web. We can’t travel into space and look back at our online home.

But we can travel back in time. Let’s travel back to 1945.

That was the year that an article was published in The Atlantic Monthly by Vannevar Bush. He was a pop scientist of his day, like Neil deGrasse Tyson or Bill Nye.

The article was called As We May Think. In the article, Bush describes a hypothetical device called a memex.

Imagine a desk filled with reams and reams of microfilm. The operator of this device can find information and also make connections between bits of information, linking them together in whatever way makes sense to them.

This sounds a lot like hypertext. That word would be coined decades later by Ted Nelson to describe “text which is not constrained to be linear.”

Vannevar Bush’s idea of the memex and Ted Nelson’s ideas about hypertext would be a big influence on Tim Berners-Lee, the creator of the World Wide Web.

But his big breakthrough wasn’t just making hypertext into a reality. Other people had already done that.

Douglas Engelbart, who wanted to make the computer equivalent of the memex, had already demonstrated a working hypertext system in 1968 in an astonishing demonstration that came to be known as The Mother Of All Demos.

The idea of hypertext was kind of like a choose-your-own-adventure book. Individual pieces of text in a book are connected with unique identifiers and you can jump from one piece of text to another within the same book.

But what if you could jump between books? That’s the other piece of the puzzle.

The idea of connecting computers together came from the concept of “time-sharing”, which allowed you to remotely access another computer.

With funding from the US Department of Defense’s Advanced Research Projects Agency, time sharing was taken to the next level with the creation of a computer network called the ARPANET.

It grew. And it grew. Until it was no longer just a network of computers. It was a network of networks. Or internetwork. Internet, for short.

Tim Berners-Lee took the infrastructure of the internet and mashed it up with the idea of hypertext. Instead of imagining hypertext as a book with interconnected concepts, he imagined a library of books where you could jump from one idea in one book to another idea in a completely different book in a completely different part of the library.

This was the World Wide Web. And Tim Berners-Lee called it the World Wide Web even when it only existed on his computer. You have to admire the chutzpah of that!

But the really incredible thing is that it worked! In March of 1989 he proposed a global hypertext system, where anybody could create new pages without asking anyone for permission, and anyone could access those pages no matter what kind of device or operating system they were using.

And that’s what we have today. While the World Wide Web might seem inevitable in hindsight, it was anything but. It is a remarkable achievement.

The World Wide Web was somewhat lacking in colour originally.

Colour

When I started making websites in the mid nineties, colour had arrived but it was somewhat limited.

We had a palette of 216 web-safe colours. You knew a colour was “web safe” if its hexadecimal notation was three pairs of repeated digits: 00, 33, 66, 99, CC, or FF. If you altered one of those values even slightly, there was no guarantee that the colour would display consistently on the monitors of the time.

I have a confession to make: I kind of liked this constraint in a weird way. To this day, if I have a colour value that’s almost web-safe, I can’t resist nudging it slightly.

Fortunately, monitors improved. They got flatter for one thing. They were also capable of displaying plenty of colours.

And we also got more and more ways of specifying colours. As well as hexadecimal, we got RGB: Red, Green, Blue. Better yet, we got RGBa …with alpha transparency. That’s opacity to you and me.

Then we got HSL: hue, saturation, lightness. Or should I say HSLa: hue, saturation, lightness, and alpha transparency.

And there are more colour spaces on the way. HWB (hue, whiteness, blackness), LAB, LCH. And there’s work on a color() function so you can specify even more colour spaces.
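To give a flavour of that progression, here’s the same (arbitrary) colour declared in several of those notations, in a style sheet with a made-up class name:

.banner {
  color: #369;                          /* hexadecimal (web safe, as it happens) */
  color: rgb(51, 102, 153);             /* red, green, blue */
  color: rgba(51, 102, 153, 0.8);       /* …with alpha transparency */
  color: hsl(210, 50%, 40%);            /* hue, saturation, lightness */
  color: hwb(210 20% 40%);              /* hue, whiteness, blackness */
  color: color(display-p3 0.2 0.4 0.6); /* roughly the same colour in a wider gamut */
}

Browsers ignore declarations they don’t understand, so the newer notations further down only take effect where they’re supported. The cascade doubles as a fallback strategy.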

Typography

In the beginning, typography on the World Wide Web was non-existent. Your browser used whatever was available on your operating system.

That situation continued for quite a while. You’d have to guess which fonts were likely to be available on Windows or Mac.

If you wanted to use a sans-serif typeface, there was Arial on Windows and Helvetica on the Mac. Verdana was a pretty safe bet too.

For a while your only safe option for a serif typeface was Times New Roman. When Matthew Carter’s Georgia was released, it was a godsend. Here was a typeface specifically designed for the screen.

Later Microsoft released another four fonts designed for the screen. Four new fonts! It felt like we were being spoiled.

But what if you wanted to use a typeface that didn’t come installed with an operating system? Well, you went into Photoshop and made an image of the text. Now the user had to download additional images. The text wasn’t selectable and it was a fixed width.

We came up with all sorts of clever techniques to do what was called “image replacement” for text. Some of the techniques involved CSS and background images. One of the techniques involved Flash. It was called sIFR: Scalable Inman Flash Replacement. A later technique called Cufón converted the letter shapes into paths in Canvas.

All of these techniques were hacks. Very clever hacks, but hacks nonetheless. They were clever and they worked but they always reminded me of Samuel Johnson’s description of a dog walking on its hind legs:

It is not done well but you are surprised to find it done at all.

What if you wanted to use an actual font file in a web page?

There was only one browser that supported font embedding: Microsoft’s Internet Explorer. The catch was that you had to use a proprietary font format called Embedded Open Type.

Both type foundries and browser makers were nervous about allowing regular font files to be embedded in web pages. They were worried about licensing. Wouldn’t this lead to even more people downloading fonts illegally? How would the licensing be enforced?

The impasse was broken with a two-pronged approach. First of all, we got a new font format called Web Open Font Format or WOFF. It could be used to take a regular font file and wrap it in a light veneer of metadata about licensing. There’s a sequel that’s even better than the original, WOFF2.

The other breakthrough was the creation of intermediary services like Typekit and Fontdeck. They would take care of serving the actual font files, making sure they couldn’t be easily downloaded. They could also keep track of numbers to ensure that type foundries were being compensated fairly.

Over time it became clear to type foundries that most web designers wanted to do the right thing when it came to licensing fonts. And so these days, you can probably license a font straight from a type foundry for use on the web and host it yourself.
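Self-hosting is now a one-rule job. A minimal sketch, assuming a made-up typeface name and file path:

@font-face {
  font-family: "Clarity Serif"; /* hypothetical typeface */
  src: url("/fonts/clarity-serif.woff2") format("woff2");
  font-weight: 400;
  font-display: swap; /* show fallback text while the font file loads */
}

body {
  font-family: "Clarity Serif", Georgia, serif;
}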

You might need to buy a few different weights. Regular. Bold. Maybe italic. What about extra bold? Or a light weight? It all starts to add up, especially for the end user who has to download all those files.

I remember being at the web typography conference Ampersand years ago and hearing a talk from Nick Sherman. He asked us to imagine one single font file that could go from light to regular to bold and everything in between. What he described sounded like science fiction.

It is now science fact, indistinguishable from magic. Variable fonts are here. You can typeset text on the web to be light, or regular, or bold, or anything in between.

When you use CSS to declare the font-weight property, you can use keywords like “normal” or “bold” but you can also use corresponding numbers like 400 or 700. There’s a scale with nine options from 100 to 900. But why isn’t the scale simply one to nine?

Well, even though the idea of variable fonts would have been pure fantasy when this part of CSS was being specced, the authors had some foresight:

One of the reasons we chose to use three-digit numbers was to support intermediate values in the future.

With the creation of variable fonts, Håkon Wium Lee added:

And the future is now.

On today’s web you could have 999 font-weight options.
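In practice, that means a variable font can claim a whole range of weights in its @font-face rule, and your CSS can then dial in any value along that axis. A sketch, with the font name and file path as placeholders again:

@font-face {
  font-family: "Clarity Sans";
  src: url("/fonts/clarity-sans-variable.woff2") format("woff2");
  font-weight: 100 900; /* one file covers the whole axis */
}

h1 {
  font-family: "Clarity Sans", sans-serif;
  font-weight: 850; /* an in-between weight: no extra download required */
}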

Images

In the beginning, the World Wide Web was a medium for text only. There were no images and certainly no videos.

In an early mailing list discussion, there was talk of creating a new HTML element for images. Perhaps it should be called “icon”. Or maybe it should be more generic and be called “embed”. Tim Berners-Lee said he imagined using the rel attribute on the A element for embedding images.

While this discussion was happening, Marc Andreessen popped in to say that he had just shipped a new HTML element in the Mosaic browser. It’s called IMG and it takes an attribute called SRC that points to the source of the image.

This was a self-closing tag so there was no way to put fallback content in between the opening and closing tags if the image couldn’t be displayed. So the ALT attribute was introduced instead to provide an alternative description of the image.

For the images themselves, there were really only two choices. JPG for photographic images. GIF for icons or anything that needed basic transparency. GIFs could also do animation and today, that’s pretty much all they’re used for. That’s because there was a concerted campaign to ditch the GIF format on the web. Unisys, who owned the rights to a compression algorithm used by the GIF format, had started to make noises about potentially demanding license fees for its use.

The Portable Network Graphics format—or PNG—was created in response. It was more performant and it allowed you to have proper alpha transparency.

These were all bitmap formats. What if you wanted a vector format for images that would retain crispness at any size or resolution? There was only one option: Flash. You’d have to embed a Flash movie in your web page just to get the benefit of vector graphics.

By the 21st century there were some eggheads working on a text-based vector file format that could be embedded in webpages, but it sounded like a pipe dream. It was called SVG for Scalable Vector Graphics. The format was dreamed up in 2001 but for years, not a single browser supported it. It was like some theoretical graphical Shangri-La.

But by 2011, every major browser supported it. Styleable, scriptable, animatable vector graphics have gone from fantasy to reality.

There’s more choice in the world of bitmap images too. WebP is well supported. AVIF is gaining support.

The IMG element itself has grown too. You can use the srcset attribute to give the browser a range of images to choose from to best suit the user’s device and network connection. You can use the loading attribute to get lazy loading of images for free—no JavaScript required.
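Put together, a responsive, lazily-loaded image needs nothing more than declarative HTML. The file names and widths here are purely illustrative:

<img
  src="/images/earthrise-800.jpg"
  srcset="/images/earthrise-400.jpg 400w,
          /images/earthrise-800.jpg 800w,
          /images/earthrise-1600.jpg 1600w"
  sizes="(min-width: 60em) 50vw, 100vw"
  loading="lazy"
  alt="The Earth rising over the grey horizon of the moon">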

We now have audio in HTML. No JavaScript required. We now have video in HTML. No JavaScript required.

These elements have been designed with more thought than the IMG element. They are not self-closing elements, by design. You can put fallback content between the opening and closing tags.
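So embedding media degrades gracefully by design. Something along these lines, with a hypothetical file path:

<video src="/media/interview.mp4" controls>
  <p>Your browser doesn’t support embedded video.
  <a href="/media/interview.mp4">Download the video</a> instead.</p>
</video>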

The audio and video elements arrived long after the IMG element. For a long time, there was no easy way to do video or audio on the web.

That was very frustrating for me. The first websites I ever built were for bands. The only way to stream music was with a proprietary plug-in like Real Audio.

Or Flash.

While the web standards were still being worked on, Flash delivered the goods with streaming audio and video. This happened over and over. Flash gave us vector graphics, animation, video, and more. But the price was lock-in. Flash was a proprietary format.

Still, Flash showed the web standards bodies the direction of travel. Flash was the hare. Web standards were the tortoise.

We know how that race ended.

In a way, Flash was like the Research and Development incubator for the World Wide Web. We got CSS animations, SVG, and streaming video because Flash showed that there was an appetite for them.

Until web standards provide a way to do something, designers and developers will reach for whatever tool gets the job done. Take layout, for example.

Layout

In the early days of the web, you could have any layout you wanted …as long as it was a single column.

Before long, HTML expanded to provide some rudimentary formatting for that single column of text. Presentational elements and attributes were invented. And even when elements and attributes weren’t meant to be used for formatting, people got creative.

Tables for layout. A single pixel GIF that could be given width and height. These were clever solutions. But they were hacks. And they were in danger of turning HTML into a presentational language instead of a language for structuring content.

CSS came to the rescue. A language specifically for presentation.

But we still didn’t get proper layout tools. There was a lot of debate in the early days about whether CSS should even attempt to provide layout tools or whether that was a job for a separate technology.

We could lay things out using the float property, but really that was just another hack.

Floats were an improvement over tables for layout, but we only swapped one tool for another. Our collective thinking still wasn’t very web-like.

For example, designers and developers insisted on building websites with a fixed width. This started in the era of table layouts and carried over into CSS.

To start with, the fixed width was 640 pixels. Then it was 800 pixels. Then people settled on the magical number of 960 pixels. Designers and developers didn’t seem at all concerned that people had different sized screens.

That was until the iPhone came out. It caused a panic. What fixed width were we supposed to design for now?

The answer was there all along. Even before the web appeared in mobile devices, it was possible to build fluid layouts that would adapt to screen size. It’s just that the majority of designers and developers chose not to build in this way.

I was pleased that mobile came along and shook things up. It exposed the assumptions that people were making. And it forced designers and developers to think in a more fluid, webby way.

Even better, CSS had expanded to include media queries so it was possible to alter layouts at different breakpoints.

Ethan came along and put a nice bow on it with his definition of responsive design: fluid media, fluid layouts, and media queries.
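All three ingredients fit in a few lines of CSS. A minimal sketch, with an arbitrary class name and breakpoint:

img, video {
  max-width: 100%; /* fluid media */
}

.content {
  width: 90%; /* a fluid layout: no magic 960 pixels */
  margin: 0 auto;
}

@media (min-width: 40em) {
  .content {
    width: 60%; /* adapt the layout at a breakpoint */
  }
}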

I fell in love with responsive web design instantly because it matched how I was already thinking about the web. I was one of the handful of weirdos who insisted on building fluid websites when everyone else was using fixed-width layouts.

But I thought that responsive web design would struggle to take hold.

I’m delighted to say that I was wrong. Responsive web design has become the default!

If I could go back to my past self in the mid 2000s, I’d love to tell them that in the future, everyone would be building with fluid layouts (and also that time travel had been invented apparently).

Not only that, but we finally have proper layout tools for the web. Flexbox. Grid. No more hacks. We’re even getting container queries soon (thanks, Miriam!).
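To show how far we’ve come, here’s the kind of layout that once took tables or floats and a prayer, now just a handful of declarations (class name arbitrary):

.gallery {
  display: grid;
  /* as many 15em columns as will fit, sharing out the leftover space */
  grid-template-columns: repeat(auto-fill, minmax(15em, 1fr));
  gap: 1em;
}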

Web browsers now are positively overflowing with fantastic design tools that would have been unimaginable to my past self. Support for these technologies is pretty much universal.

When browsers differ today, it’s only in terms of which standards they don’t yet support. There was a time when browsers differed massively in how they handled basic web technologies.

There was a time when being a web developer meant understanding all the different quirks between browsers.

And browser makers spent a ludicrous amount of time reverse-engineering the quirky behaviour of whichever browser was the market leader.

That changed with HTML5. We remember HTML5 for introducing new APIs, new form fields, and new structural elements. But the biggest innovation was completely invisible. For the first time, error-handling was standardised. Browsers had a set of rules they could work from. Once browsers adopted this consistent approach to error-handling, cross-browser differences dried up.

That was good news for web developers. We were sick of dealing with different browsers taking different approaches. We had been burned with JavaScript.

JavaScript

In the beginning, there was no scripting on the web, just like there was no styling. Tim Berners-Lee wasn’t opposed to the idea of executing arbitrary code on the web. But he pointed out that you’d need everyone to agree on which programming language browsers would use.

You need something really powerful, but at the same time ubiquitous. Remember a facet of the web is universal readership. There is no universal interpreted programming language.

This problem of which language to choose was solved in the usual way. Brendan Eich, who was working at Netscape, created a completely new programming language in just ten days. It would be called… LiveScript. Then the marketing department got involved and because Java was the new hotness at the time, this scripting language was renamed to… JavaScript. Even though it has nothing to do with Java. Java is to JavaScript as ham is to hamster.

The important thing is that multiple browsers implemented it. Then the hype started. We were told about this great new technology called DHTML. The D stood for dynamic! This would allow us to programmatically manipulate elements in a web page.

But… the two major browsers at the time, Netscape Navigator and Internet Explorer, used two completely incompatible syntaxes. For Netscape Navigator you’d use document.layers. For Internet Explorer it was document.all.

This was when developers said enough was enough. We wanted standards. The Web Standards Project was formed and we lobbied browser makers to implement web standards, like CSS and also the Document Object Model. This was a standardised way of manipulating elements in a web page. You could use methods like getElementById and getElementsByTagName.

That worked fine, but it was yet another vocabulary to learn. If you already knew CSS, you already had a syntax for selecting elements by ID or by tag name; the DOM made you learn a different one for the same job.

John Resig created jQuery so that you could use the CSS syntax to do DOM scripting. There were lots of other JavaScript libraries released around the same time, but jQuery was by far the most popular. Clearly, this syntax was something that developers wanted.

Now we no longer need jQuery. We’ve got querySelector and querySelectorAll. But the reason we no longer need jQuery is because of jQuery. Just like Flash, jQuery showed what developers wanted. And just as with Flash, the web standards took more time. But now jQuery is obsolete …precisely because it was so successful.
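The lineage is plain to see when you put the two side by side. A sketch, with arbitrary class names:

// jQuery popularised using CSS selectors for DOM scripting:
// $('.nav a').addClass('current');

// That syntax is now native to the browser:
document.querySelectorAll('.nav a').forEach(function (link) {
  link.classList.add('current');
});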

It’s a similar story with Sass and CSS. There was a time when Sass was the only way to have a feature like variables. But now with custom properties available in CSS, Sass is becoming increasingly obsolete …precisely because it was so successful and showed the direction of travel.
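Custom properties even go one better than Sass variables: they’re live in the browser rather than baked in at build time. A sketch, with an arbitrary colour value:

:root {
  --brand-colour: #bada55; /* what Sass would’ve written as $brand-colour */
}

a {
  color: var(--brand-colour);
}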

In a way, jQuery and Sass (and maybe even Flash) were kind of like polyfills. That’s a term that my friend Remy Sharp coined for JavaScript libraries that fill in the gaps until browsers have implemented web standards.

By way of explanation to anyone in the States, Polyfilla is a brand name in the UK for what you call spackling paste. So JavaScript libraries like jQuery spackled over the gaps in browsers.
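The pattern behind any polyfill is the same: check for the native feature first, and only spackle if the gap is really there. A minimal sketch using String.prototype.trimEnd as the example:

// Only fill the gap if the browser hasn’t already:
if (!String.prototype.trimEnd) {
  String.prototype.trimEnd = function () {
    // Remove trailing whitespace, like the native method does
    return String(this).replace(/\s+$/, '');
  };
}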

But some capabilities can’t be polyfilled. If a browser doesn’t provide API access to a particular sensor, for example, there’s no way to spackle that gap.

For quite a while, if you wanted access to device APIs, you’d have to build a native app. But over time, that has changed. Now browsers are capable of providing app-like experiences. You can get location data. You can access the camera. You can provide notifications. You can even make websites work offline using service workers.
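Progressive enhancement applies here too: feature-test first, then enhance. A sketch of registering a service worker (the file names are up to you):

// In your page’s JavaScript:
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js');
}

// In serviceworker.js: fall back to a cached page when the network fails,
// assuming /offline.html was put into a cache during the install step
addEventListener('fetch', function (event) {
  event.respondWith(
    fetch(event.request).catch(function () {
      return caches.match('/offline.html');
    })
  );
});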

Native apps had all these capabilities before web browsers. Just as with Flash and jQuery, native apps pointed the way. The gap always looks insurmountable to begin with. But over time, the web always manages to catch up.

At the beginning of 2021, Ire said:

By the end of the year, I would predict that any major native mobile application could be instead built using native web capabilities.

The present

The web has come a long way. It has grown and evolved. Browsers have become more and more powerful while maintaining backward compatibility.

In the past we had to hack our way around the technological limitations of the web and we had a long wish list of features we wanted.

I’m not saying we’re done. I’m sure that more features will keep coming. But our wish list has shrunk.

The biggest challenges facing the World Wide Web today are not technical challenges.

Today it is possible to create beautiful websites that make full use of colour, typography, layout, animation, and more. But this isn’t what users experience.

This is what users experience. A tedious frustrating game of whack-a-mole with websites that claim to value our privacy while asking us to relinquish it.

This is not a technical problem. It is a design decision. The decision might not be made by anyone with designer in their job title, but make no mistake, business decisions have a direct effect on user experience.

On the face of it, the problem seems to be with the business model of advertising. But that’s not quite right. To be more precise, the problem is with the business model of behavioural advertising. That relies on intermediaries to amass huge amounts of personal data so that they can supposedly serve up relevant advertising.

But contextual advertising, which serves up ads based on the content you’re looking at, doesn’t require the invasive collection of personal data. And it works. Behavioural advertising, despite being a huge industry that depends on people giving up their privacy, doesn’t even work very well. And on the few occasions when it does work, it just feels creepy.

The problem is not advertising. The problem is tracking. The greatest trick the middlemen ever pulled was convincing us that you can’t have effective advertising without tracking. That is false. But they’ve managed to skew our sense of perspective so that invasive advertising seems inevitable.

Advertising was always possible on the web. You could publish anything and an ad is just one more thing you could choose to publish. But tracking was impossible. That’s because the early web was stateless. A browser requests a resource from a server and once that transaction is done, they both promptly forget about it. That made it very hard to do things like online shopping or logging into an account.

Two technologies were created later that enabled state on the web. Cookies and JavaScript. If these technologies had been limited to a same origin policy, they would have nicely solved the problems of online shopping and authentication.

But these technologies work across domains. Third-party cookies and third-party JavaScript enable users to be tracked as they move from site to site. The web went from having no state to having too much.

There is hope. Browsers like Firefox and Safari are blocking third-party cookies by default. Personally, I’d love it if third-party JavaScript got the same treatment. You can also install add-ons to make your browser more secure, although these add-ons are often labelled ad-blockers, which is a shame. Because the problem is not advertising. The problem is tracking.

Perhaps none of this applies to you anyway. You may be thinking that this is a problem for websites. But you build web apps.

Personally I’m not keen on the idea of dividing the entirety of the World Wide Web into two vaguely-defined categories. I have yet to hear a good definition of “web app” other than “a website that requires JavaScript to work.”

But the phrase “single page app” has a more definite meaning. It refers to an architectural decision. That decision is to reinvent the web browser inside a web browser.

In a sense, it’s a testament to the power of JavaScript that you can choose to do this. Browsers render content and perform navigations, but if you’d rather recreate that functionality from scratch in JavaScript, you can.

But should you? Browsers have increased in complexity so that we can build without complexity. We can use the built-in power of modern HTML, CSS, and JavaScript to make web browsers do the work. If we work with the grain of the web, we can accomplish more and more with less and less code.

But that isn’t what happened. Instead developers have recreated form controls like dropdowns and datepickers from scratch using divs and lashings and lashings of JavaScript.

Perhaps this points to some missing features on the web. It’s still too hard to style native dropdowns and datepickers (but that’s being worked on—there’s standards work underway to give us more styling control over form elements). But that doesn’t explain why developers would choose to recreate something like a button using divs and JavaScript when the button element already exists and can be styled any way you like.
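For comparison, here’s everything it takes to have an on-brand button that keeps all the accessibility and keyboard behaviour of the real thing (the class name and styles are arbitrary):

<button type="button" class="fancy">Save</button>

<style>
  .fancy {
    font: inherit; /* match the surrounding text */
    padding: 0.5em 1.5em;
    border: 0;
    border-radius: 0.5em;
    background-color: #bada55;
  }
</style>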

I think there’s a certain mindset being applied to web development here. And that mindset comes from the world of software. Again, it’s a testament to how far the web has come that it can be treated as a software platform on par with operating systems like iOS, Android, or Windows. There’s a lot to be learned from the world of software development, like testing, for example. But the web is different. When a user navigates to a URL, it shouldn’t feel like they’re installing a piece of software.

We should be aiming to keep our payloads as small as possible. And given how powerful browsers have become, we need fewer and fewer dependencies—fewer and fewer polyfills.

But performance has gotten worse. Payloads have gotten bigger. Dependencies like JavaScript frameworks have become more and more widespread even as they became less and less necessary.

When asked to justify the enormous payloads, web developers have responded by saying that users’ expectations have changed. That is correct, but not in the way that I think they mean.

When I talk to people about using the web—especially on mobile—their expectations are that they will have a terrible experience. That websites will be slow to load. And I guarantee you that none of them are saying, “Well I’d be annoyed if this were a website but seeing as this is a web app, I’m absolutely fine with this terrible experience.”

I said that the biggest challenges facing the World Wide Web today are not technical challenges. I think the biggest challenge facing the web today is people’s expectations.

There is no technical reason for websites or web apps to be so frustrating. But we have collectively led people to expect a bad experience on the web.

Our intentions may have been good. We thought users wanted nice page transitions and form elements that were on-brand. But if you talk to people, you find out that what they want is to accomplish their task without megabytes of JavaScript getting in the way.

There’s a great German word, “Verschlimmbessern”: the act of making something worse in the attempt to make it better. Perhaps we verschlimmbessert the web.

Let’s step back. Get some perspective. Instead of assuming that a single page app architecture is needed, ask what users need to accomplish. Instead of assuming you need a CSS framework or a JavaScript library, see what you can do in browsers today with native CSS and vanilla JavaScript. Don’t include a bunch of dependencies by default just in case you might need them. Instead, as Rachel puts it:

Stop solving problems you don’t yet have.

Lean into what web browsers can accomplish today. If you find something missing, that’s the time to reach for a library …but treat it like a polyfill. Whereas web standards stick around, every library and framework comes with a limited lifespan. Treat them as cattle, not pets.

I understand that tools and frameworks can make your life easier. And if we’re talking about server-side frameworks, then I say “Go for it.” Or if you’re using build tools that sit on your computer to do version control, linting, pre-processing, or transpiling, then I say “Go for it.”

But once you make users download tools or frameworks, you’re making them pay a tax for your developer convenience.

We need to value user needs above developer convenience. If I have the choice of making something the user’s problem or making it my problem, I’ll make it my problem every time. That’s my job.

We need to change people’s expectations of the World Wide Web, especially on mobile. Otherwise, the web will be lost.

The future

Two years ago, I had the great honour of being invited to CERN to mark the 30th anniversary of the original proposal for the World Wide Web. One of the other people there was the journalist Zeynep Tüfekçi. She was on a panel along with Tim Berners-Lee and other luminaries of the early web. At the end of the panel discussion, she was asked:

What would you tell the next generation about how to use this wonderful tool?

She replied:

If you have something wonderful, if you do not defend it, you will lose it. If you do not defend the magic and the things that make it wonderful, it’s just not going to stay magical by itself.

I believe that we can save the web. I believe that we can change people’s expectations. We’ll do that by showing them what the web is capable of.

It sounds like a moonshot. But, y’know, moonshots aren’t made possible by astronauts. They’re made possible by people like Poppy Northcutt in mission control. Katherine Johnson running the numbers. And Margaret Hamilton inventing the field of software engineering to create the software for the lunar lander. Individual people working together on something bigger than any one person.

There’s a story told about the first time President Kennedy visited NASA. While he was getting a tour of the place, he introduced himself to a janitor. And the president asked the janitor what he did. The janitor answered:

I’m helping put a man on the moon.

It’s the kind of story that’s trotted out by company bosses to make you feel good about having your labour exploited for the team. But that janitor’s loyalty wasn’t to NASA, an organisation. He was working for something bigger.

I encourage you to have that sense of perspective. Whatever company or organisation you happen to be working for right now, remember that you are building something bigger.

The future of the World Wide Web is in good hands. It’s in your hands.