Smashing Magazine

Tree-Shaking: A Reference Guide

Fri, 05/14/2021 - 03:30

Before starting our journey to learn what tree-shaking is and how to set ourselves up for success with it, we need to understand what modules are in the JavaScript ecosystem.

Since the language’s early days, JavaScript programs have grown in complexity and in the number of tasks they perform. The need to compartmentalize such tasks into closed scopes of execution became apparent. These compartments of tasks, or values, are what we call modules. Their main purpose is to prevent repetition and to leverage reusability. So, architectures were devised to allow such special kinds of scope, to expose their values and tasks, and to consume external values and tasks.

To dive deeper into what modules are and how they work, I recommend “ES Modules: A Cartoon Deep-Dive”. But to understand the nuances of tree-shaking and module consumption, the definition above should suffice.

What Does Tree-Shaking Actually Mean?

Simply put, tree-shaking means removing unreachable code (also known as dead code) from a bundle. As Webpack version 3’s documentation states:

“You can imagine your application as a tree. The source code and libraries you actually use represent the green, living leaves of the tree. Dead code represents the brown, dead leaves of the tree that are consumed by autumn. In order to get rid of the dead leaves, you have to shake the tree, causing them to fall.”

The term was first popularized in the front-end community by the Rollup team. But authors of all dynamic languages have been struggling with the problem since much earlier. The idea of a tree-shaking algorithm can be traced back to at least the early 1990s.

In JavaScript land, tree-shaking has been possible since the ECMAScript module (ESM) specification in ES2015, previously known as ES6. Since then, tree-shaking has been enabled by default in most bundlers because it reduces output size without changing the program’s behaviour.

The main reason for this is that ESMs are static by nature. Let’s dissect what that means.

ES Modules vs. CommonJS

CommonJS predates the ESM specification by a few years. It came about to address the lack of support for reusable modules in the JavaScript ecosystem. CommonJS has a require() function that fetches an external module based on the path provided, and it adds it to the scope during runtime.

The fact that require is a function like any other makes it hard to evaluate its call outcome at compile time. On top of that, require calls can be added anywhere in the code — wrapped in another function call, within if/else statements, in switch statements, and so on.
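To illustrate, here is a minimal sketch of why require defeats static analysis — the package name in the conditional branch is hypothetical:

```javascript
// CommonJS: the module graph can depend on runtime values,
// so a bundler cannot know statically which branch will run.
function loadSerializer(format) {
  if (format === 'yaml') {
    // hypothetical package; only resolved if this branch executes
    return require('some-yaml-lib');
  }
  return JSON;
}

console.log(loadSerializer('json').stringify({ a: 1 }));
```

A static analyzer cannot prove whether `some-yaml-lib` will ever be needed, so it has to assume it might be.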

With the lessons learned from wide adoption of the CommonJS architecture, the ESM specification settled on a new architecture, in which modules are imported and exported by the respective keywords import and export — so no function calls are involved. ESMs are also allowed only as top-level declarations; nesting them in any other structure is not possible. This is what makes them static: ESMs do not depend on runtime execution.

Scope and Side Effects

There is, however, another hurdle that tree-shaking must overcome to evade bloat: side effects. A function is considered to have side effects when it alters or relies on factors external to its scope of execution. A function with side effects is considered impure. A pure function will always yield the same result, regardless of the context or environment it’s run in.

const pure = (a: number, b: number) => a + b
const impure = (c: number) => window.foo + c // relies on an external value

Bundlers serve their purpose by evaluating the code provided as much as possible in order to determine whether a module is pure. But code evaluation during compiling time or bundling time can only go so far. Therefore, it’s assumed that packages with side effects cannot be properly eliminated, even when completely unreachable.

Because of this, bundlers now accept a key inside the module’s package.json file that allows the developer to declare whether a module has no side effects. This way, the developer can opt out of code evaluation and hint the bundler; the code within a particular package can be eliminated if there’s no reachable import or require statement linking to it. This not only makes for a leaner bundle, but also can speed up compiling times.

{
  "name": "my-package",
  "sideEffects": false
}

So, if you are a package developer, make conscientious use of sideEffects before publishing, and, of course, revise it upon every release to avoid any unexpected breaking changes.
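If only some files in a package have side effects (global CSS, polyfills, and the like), webpack also accepts an array of globs in sideEffects instead of a boolean — the file names below are purely illustrative:

```json
{
  "name": "my-package",
  "sideEffects": ["./src/polyfill.js", "*.css"]
}
```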

In addition to the root sideEffects key, it is also possible to flag purity on a call-by-call basis, by prepending the inline annotation /*@__PURE__*/ to a method call.

const x = /*@__PURE__*/ eliminated_if_not_called()

I consider this inline annotation to be an escape hatch for the consuming developer, to be used when a package has not declared sideEffects: false or when the library does indeed present a side effect on a particular method.

Optimizing Webpack

From version 4 onward, Webpack has required progressively less configuration to get best practices working. The functionality for a couple of plugins has been incorporated into core. And because the development team takes bundle size very seriously, they have made tree-shaking easy.

If you’re not much of a tinkerer or if your application has no special cases, then tree-shaking your dependencies is a matter of just one line.

The webpack.config.js file has a root property named mode. Whenever this property’s value is production, it will tree-shake and fully optimize your modules. Besides eliminating dead code with the TerserPlugin, mode: 'production' will enable deterministic mangled names for modules and chunks, and it will activate the following plugins:

  • flag dependency usage,
  • flag included chunks,
  • module concatenation,
  • no emit on errors.
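The same behaviour can be enabled piecemeal through webpack’s optimization key — a rough sketch of the equivalents, useful when debugging what mode: 'production' actually turns on:

```javascript
// webpack.config.js (a sketch, not a full config)
module.exports = {
  mode: 'none', // opt out of the presets...
  optimization: {
    usedExports: true,        // flag dependency usage
    concatenateModules: true, // module concatenation (scope hoisting)
    minimize: true,           // dead-code elimination via Terser
  },
};
```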

It’s not by accident that the trigger value is production. You will not want your dependencies to be fully optimized in a development environment because it will make issues much more difficult to debug. So I would suggest going about it with one of two approaches.

On the one hand, you could pass a mode flag to the Webpack command line interface:

# This will override the setting in your webpack.config.js
webpack --mode=production

Alternatively, you could use the process.env.NODE_ENV variable in webpack.config.js:

mode: process.env.NODE_ENV === 'production' ? 'production' : 'development'

In this case, you must remember to set NODE_ENV=production in your deployment pipeline.

Both approaches are an abstraction on top of the well-known DefinePlugin from Webpack version 3 and below. Which option you choose makes absolutely no difference.
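For reference, the Webpack 3-era equivalent with DefinePlugin looked roughly like this:

```javascript
// webpack.config.js (Webpack ≤ 3 sketch)
const webpack = require('webpack');

module.exports = {
  plugins: [
    new webpack.DefinePlugin({
      'process.env.NODE_ENV': JSON.stringify('production'),
    }),
  ],
};
```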

Webpack Version 3 and Below

It’s worth mentioning that the scenarios and examples in this section might not apply to recent versions of Webpack and other bundlers. This section considers usage of UglifyJS version 2, instead of Terser. UglifyJS is the package that Terser was forked from, so code evaluation might differ between them.

Because Webpack version 3 and below don’t support the sideEffects property in package.json, all packages must be completely evaluated before the code gets eliminated. This alone makes the approach less effective, but several caveats must be considered as well.

As mentioned above, the compiler has no way of finding out by itself when a package is tampering with the global scope. But that’s not the only situation in which it skips tree-shaking. There are fuzzier scenarios.

Take this package example from Webpack’s documentation:

// transforms.js
import * as mylib from 'mylib';

export const someVar = mylib.transform({
  // ...
});

export const someOtherVar = mylib.transform({
  // ...
});

And here is the entry point of a consumer bundle:

// index.js
import { someVar } from './transforms.js';

// Use `someVar`...

There’s no way to determine whether mylib.transform instigates side effects. Therefore, no code will be eliminated.

Here are other situations with a similar outcome:

  • invoking a function from a third-party module that the compiler cannot inspect,
  • re-exporting functions imported from third-party modules.

A tool that might help the compiler get tree-shaking to work is babel-plugin-transform-imports. It splits all member-style and named imports into default imports, allowing the modules to be evaluated individually.

// before transformation
import { Row, Grid as MyGrid } from 'react-bootstrap';
import { merge } from 'lodash';

// after transformation
import Row from 'react-bootstrap/lib/Row';
import MyGrid from 'react-bootstrap/lib/Grid';
import merge from 'lodash/merge';

It also has a configuration property that warns the developer to avoid troublesome import statements. If you’re on Webpack version 3 or below, have done your due diligence with basic configuration, and added the recommended plugins, but your bundle still looks bloated, then I recommend giving this package a try.
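The plugin is configured per library in your Babel config; transform is a template for rewriting each member import, and preventFullImport triggers the warning mentioned above. The mapping below is a sketch based on the plugin’s documented options:

```json
{
  "plugins": [
    ["transform-imports", {
      "react-bootstrap": {
        "transform": "react-bootstrap/lib/${member}",
        "preventFullImport": true
      },
      "lodash": {
        "transform": "lodash/${member}",
        "preventFullImport": true
      }
    }]
  ]
}
```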

Scope Hoisting and Compile Times

In the time of CommonJS, most bundlers would simply wrap each module within a function declaration and map them inside an object. That’s no different from any other map object:

(function (modulesMap, entry) {
  // provided CommonJS runtime
})({
  "index.js": function (require, module, exports) {
    let { foo } = require('./foo.js')
    foo.doStuff()
  },
  "foo.js": function (require, module, exports) {
    module.exports = {
      doStuff: () => { console.log('I am foo') }
    }
  }
}, "index.js")

Apart from being hard to analyze statically, this is fundamentally incompatible with ESMs, because we’ve seen that we cannot wrap import and export statements. So, nowadays, bundlers hoist every module to the top level:

// moduleA.js
let $moduleA$export$doStuff = () => ({ doStuff: () => {} })

// index.js
$moduleA$export$doStuff()

This approach is fully compatible with ESMs; plus, it allows code evaluation to easily spot modules that aren’t being called and to drop them. The caveat of this approach is that compiling takes considerably more time, because the bundler touches every statement and stores the bundle in memory during the process. That’s a big reason why bundling performance has become an even greater concern to everyone and why compiled languages are being leveraged in tools for web development. For example, esbuild is a bundler written in Go, and SWC is a TypeScript compiler written in Rust that integrates with spack, a bundler also written in Rust.

To better understand scope hoisting, I highly recommend Parcel version 2’s documentation.

Avoid Premature Transpiling

There’s one specific issue that is unfortunately rather common and can be devastating for tree-shaking. In short, it happens when you’re working with special loaders, integrating different compilers into your bundler. Common combinations are TypeScript, Babel, and Webpack — in all possible permutations.

Both Babel and TypeScript have their own compilers, and their respective loaders allow the developer to use them, for easy integration. And therein lies the hidden threat.

These compilers reach your code before code optimization. And whether by default or through misconfiguration, these compilers often output CommonJS modules instead of ESMs. As mentioned in a previous section, CommonJS modules are dynamic and, therefore, cannot be properly evaluated for dead-code elimination.
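Two settings are usually the culprits. For TypeScript, check compilerOptions.module; for Babel, check @babel/preset-env’s modules option. Leaving both as ESM lets the bundler handle the module transform itself:

```json
// tsconfig.json — keep ESM output for the bundler
{ "compilerOptions": { "module": "esnext" } }

// .babelrc — stop preset-env from transpiling modules to CommonJS
{ "presets": [["@babel/preset-env", { "modules": false }]] }
```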

This scenario is becoming even more common nowadays, with the growth of “isomorphic” apps (i.e. apps that run the same code both server- and client-side). Because Node.js does not have standard support for ESMs yet, when compilers are targeted to the node environment, they output CommonJS.

So, be sure to check the code that your optimization algorithm is receiving.

Tree-Shaking Checklist

Now that you know the ins and outs of how bundling and tree-shaking work, let’s draw ourselves a checklist that you can print somewhere handy for when you revisit your current implementation and code base. Hopefully, this will save you time and allow you to optimize not only the perceived performance of your code, but maybe even your pipeline’s build times!

  1. Use ESMs, and not only in your own code base, but also favour packages that output ESM as their consumables.
  2. Make sure you know exactly which (if any) of your dependencies have not declared sideEffects or have them set as true.
  3. Make use of inline annotation to declare method calls that are pure when consuming packages with side effects.
  4. If you’re outputting CommonJS modules, make sure to optimize your bundle before transforming the import and export statements.

Package Authoring

Hopefully, by this point we all agree that ESMs are the way forward in the JavaScript ecosystem. As always in software development, though, transitions can be tricky. Luckily, package authors can adopt non-breaking measures to facilitate swift and seamless migration for their users.

With some small additions to package.json, your package will be able to tell bundlers the environments that the package supports and how they’re supported best. Here’s a checklist from Skypack:

  • Include an ESM export.
  • Add "type": "module".
  • Indicate an entry point through "module": "./path/entry.js" (a community convention).

And here’s an example that results when all best practices are followed and you wish to support both web and Node.js environments:

{
  // ...
  "main": "./index-cjs.js",
  "module": "./index-esm.js",
  "exports": {
    "require": "./index-cjs.js",
    "import": "./index-esm.js"
  }
  // ...
}

In addition to this, the Skypack team has introduced a package quality score as a benchmark to determine whether a given package is set up for longevity and best practices. The tool is open-sourced on GitHub and can be added as a devDependency to your package to perform the checks easily before each release.

Wrapping Up

I hope this article has been useful to you. If so, consider sharing it with your network. I look forward to interacting with you in the comments or on Twitter.

Categories: Design

Frustrating Design Patterns That Need Fixing: Birthday Picker

Wed, 05/12/2021 - 05:42

You’ve seen them before. Confusing and frustrating design patterns that seem to be chasing you everywhere you go, from one website to another. Perhaps it’s a disabled submit button that never communicates what’s actually wrong, or tooltips that — once opened — cover the input field just when you need to correct a mistake. They are everywhere, and they are annoying, often tossing us from one dead-end to another, in something that seems like a well-orchestrated and poorly designed mousetrap.

These patterns aren’t malicious or evil. They don’t have much in common with deceptive cookie prompts or mysterious CAPTCHAs disguised as fire hydrants and crosswalks. They aren’t designed with poor intentions or harm in mind either: nobody wakes up in the morning hoping to increase bounce rates or decrease conversion.

It’s just that over the years, some more-or-less random design decisions have become widely accepted and adopted, and hence they are repeated over and over again — often without being questioned or validated by data or usability testing. They’ve become established design patterns. And often quite poor ones. Showing up again and again and again among user complaints during testing.

In this new series of articles, let’s take a closer look at some of these frustrating design patterns and explore better alternatives, along with plenty of examples and questions to keep in mind when building or designing one. These insights are coming from user research and usability tests conducted by yours truly and colleagues in the community, and of course, they all will be referenced in each of the upcoming posts.

We’ll start with a humble and seemingly harmless pattern that we all had experienced at some point — the infamous birthday picker that too often happens to be inaccessible, slow and cumbersome to use. We’ve written about perfect date and time pickers in much detail already, but birthday pickers deserve a separate conversation.

Frustrating UX: Birthday Dropdown/Widgets Starting In 2021

Every time you apply for a job application, open a bank account or book a flight, you probably will have to type in your date of birth. Obviously, the input is a date, and so it shouldn’t be very surprising to see interfaces using a well-adopted date-picker-calendar-alike widget (native or custom), or a drop-down to ask for that specific input.

We can probably spot the reasons why these options are often preferred. From the technical standpoint, we want to ensure that the input is correct, and catch errors early. Our validation has to be bulletproof enough to validate the input, provide a clear error message and explain what exactly the customer needs to do to fix it. We just don’t have all these issues with a dropdown or a calendar widget. Plus, we can easily prevent any locale or formatting differences by providing only the options that would fit the bill.

So it’s not uncommon to see dropdowns considered the UI of last resort, usually replaced with buttons (e.g. for filters), toggles, segmented controls, or autocomplete boxes that combine the flexibility of a text box with the assurance of a <select>-box. Dropdowns aren’t bad per se; it’s just that users spend way more time than necessary filling in data with them.

And then there is the question of default values. While with dropdowns we often default to no input whatsoever (mm/dd/yyyy), with a date picker we need to provide some starting point for the calendar view. In the latter case, ironically, the “starting” date usually happens to be just around the date when the form is being filled in, e.g. May 15th, 2021. This doesn’t appear optimal, of course, but what should the right date be? We need to start somewhere, right?

Well, there really isn’t a right date though. We could start early or late, 3 months ago or tomorrow, but in the case of a birthday picker, all of these options are pure guesswork. And as such, they are somewhat frustrating: without any input, customers might need to scroll all the way from 1901 to the late 1980s, and with some input set, they’ll need to correct it, often jumping decades back and forth. That interaction will require impeccable precision in scrolling.

No matter what choice we make, we will be wrong almost all the time. This is likely to be different for a hotel booking website, or a food delivery service, and plenty of other use cases — just not birthday input. This brings us to the conversation about how to objectively evaluate how well-designed a form input is.

Evaluating The Quality Of Form Design

Design can be seen as a very subjective matter. After all, everybody seems to have their own opinion and preferences about the right approach for a given problem. But unlike any kind of self-expression or art, design is supposed to solve a problem. The question, then, is how well a particular design solves a particular problem. The more unambiguous the rendering of the designer’s intent, the fewer mistakes customers make and the less they are interrupted — in short, the better the design. These attributes are measurable and objective.

In my own experience, forms are the most difficult aspect of user experience. There are so many difficult facets from microcopy and form layout to inline validation and error messages. Getting forms right often requires surfacing back-end errors and third-party errors properly to the front-end and simplifying a complex underlying structure into a set of predictable and reasonable form fields. This can easily become a frustrating nightmare in complex legacy applications and third-party integrations.

So, when it comes to form design, in our projects, we always try to measure the quality of a particular solution based on the following 9 attributes:

  • Mental model
    How well does our form design fit into the mental model of the customer? When asking for personal details, we need to ask exactly the minimum of what’s required for us to help our customers get started. We shouldn’t ask for any sensitive or personal details (gender, birthday, phone number) unless we have a good reason for it, and explain it in the UI.
  • Complexity
How many input elements do we display per page, on mobile and on desktop? If a form contains 70–80 input fields, rather than displaying them all on one page or using a multi-column layout, it might be a good idea to use a task list pattern to break down complexity into smaller, manageable chunks.
  • Speed of input
How much time and effort does the customer need to fill in data correctly? For a given input, how many taps/keystrokes/operations are required to complete the form with the given data accurately, assuming that no mistakes are made along the way?
  • Accessibility
    When speaking about the speed of input, we need to ensure that we support various modes of interaction, primarily screen reader users and keyboard users. This means properly set labels, large buttons, labels placed above the input field, and errors properly communicated, among many other things.
  • Scalability
    If we ever needed to translate the UI to another language or adapt it for another form factor, how straightforward would it be, and how many issues will it cause? (A typical example of a problematic solution is a floating label pattern, and we’ll talk about it in a separate post.)
  • Severity of interruptions
How often do we interrupt customers, be it with loading spinners, early or late inline validation, freezing parts of the UI to adjust the interface based on provided input (e.g. once a country is selected), the frequency of wrongly pre-filled data, or wrongly auto-corrected data?
  • Form success rate
How many customers successfully complete a form without a single mistake? If a form is well designed, a vast majority of customers shouldn’t ever see any errors at all. For example, this requires that we tap into the browser’s auto-fill, that the tab order is logical, and that making edits is conventional and obvious.
  • Speed of recovery
    How high is the ratio of customers who succeed in discovering the error, fixing it, and moving along to the next step of the form? We need to track how often error messages appear, and what error messages are most common. That’s also why it’s often a good idea to drop by customer support and check with them first what customers often complain about.
  • Form failure rate
How many customers abandon the form? This usually happens not only because of the form’s complexity, but also because customers can’t find a way to fix an error due to aggressive validators or disabled “submit” buttons. It also happens because the form asks for too much sensitive and personal information without a good reason.

To understand how well a form works, we run usability studies with customers accessing the interface on their own machine — be it a mobile device, tablet, laptop or desktop — on their own OS, in their own browser. We ask them to record the screen, if possible, and to use a thinking-aloud protocol, to follow where, how, and why mistakes happen. We also study how fast the customer moves from one form field to another, when they pause and think, and when most mistakes happen.

Obviously, the sheer number of taps or clicks doesn’t always suggest that the input has been straightforward or cumbersome. But some modes of input might be more likely to generate errors or cause confusion, and others might be outliers, requiring just way more time compared to other options. That’s what we are looking for in tests.

Now, let’s see how we can apply it to the birthday input problem.

Designing A Better Birthday Input

If somebody asks you for your birthday, you probably will have a particular string of digits in mind. It might be ordered in dd/mm/yyyy or mm/dd/yyyy, but it will be a string of 8 digits that you’ve been repeating in all kinds of documents since a very young age.

We can tap into this simple model of what a birthday input is with a simple, single-input field which would combine all three inputs — day, month, and year. That would mean that the user would just type a string of 8 numbers, staying on the keyboard all the time.

However, this approach brings up a few issues:

  • we need to support auto-formatting and masking,
  • we need to explain the position of the day/month input,
  • we need to support the behavior of the Backspace button across the input,
  • we need to track and hide/show/permanently display the masking,
  • we need to support jumps into a specific value (e.g. month),
  • we need to minimize rage clicks and navigation within the input to change a specific value on mobile devices,
  • if auto-masking isn’t used, we need to come up with a set of clean-up and validation rules to support any kind of delimiter.
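The first two items on that list — auto-formatting and masking — can be sketched in a few lines; the dd/mm/yyyy order and the function name are assumptions for illustration only:

```javascript
// progressively formats raw keystrokes into dd/mm/yyyy,
// discarding any non-digit characters the user types
function formatBirthday(raw) {
  const digits = raw.replace(/\D/g, '').slice(0, 8);
  const parts = [digits.slice(0, 2), digits.slice(2, 4), digits.slice(4, 8)];
  return parts.filter(Boolean).join('/');
}

console.log(formatBirthday('10051988')); // → 10/05/1988
```

Even this tiny sketch hides decisions (what happens on Backspace across a delimiter, how the caret behaves mid-string) that the list above calls out.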

In his book on Form Design Patterns, Adam Silver argues that using multiple inputs instead of one input is rarely a good idea, but it is a good option for dates. We can clearly communicate what each input represents, and we can highlight the specific input with focus styles. Also, validation is much easier, and we can communicate easily what specific part of the input seems to be invalid, and how to fix it.

We could either automatically transition the user from one input to the next when the input is finished, or allow users to move between fields on their own. At first glance, the former seems better as the input would require just 8 digits, typed one after another. However, when people fix errors, they often need input buffers — space within the input field to correct existing input.

For example, it’s common to see people typing in 01, realizing that they made a mistake, then changing the input to 010, and then removing the first 0, just to end up with a reversed (and correct) string — 10. By avoiding an automatic transition from one field to the next, we might cause less trouble and make the UI just a bit more predictable and easier to deal with.

To explain the input, we’d need to provide labels for the day, month and year, and perhaps also show an example of the correct input. The labels shouldn’t be floating labels but could live comfortably above the input field, along with any hints or examples that we might want to display. Plus, every input could be highlighted on focus as well.
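One reason validation gets easier with three separate fields is that each part can be checked and reported independently, with a single cross-field check for impossible dates. A minimal sketch — the field names and error messages are mine, not from any referenced source:

```javascript
// validates the three-field birthday pattern; returns an
// object of per-field error messages (empty when valid)
function validateBirthday({ day, month, year }) {
  const errors = {};
  const d = Number(day), m = Number(month), y = Number(year);
  if (!/^\d{1,2}$/.test(day) || d < 1 || d > 31) errors.day = 'Day must be between 1 and 31';
  if (!/^\d{1,2}$/.test(month) || m < 1 || m > 12) errors.month = 'Month must be between 1 and 12';
  if (!/^\d{4}$/.test(year)) errors.year = 'Year must have four digits';
  if (Object.keys(errors).length === 0) {
    // cross-field check: 31/02/1990 passes the per-field rules but doesn't exist
    const date = new Date(y, m - 1, d);
    if (date.getDate() !== d || date.getMonth() !== m - 1) {
      errors.day = 'This date does not exist';
    }
  }
  return errors;
}
```

Because every message maps to exactly one field, the error can be announced next to the input that caused it, which is much harder with a single combined field.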

Throughout years of testing, I couldn’t spot a single problem with this solution, and it’s no surprise to see the pattern in use elsewhere as well.

When You Need A Date Picker After All

While the solution above is probably more than enough for a birthday input, it might not be good enough for more general situations. We might need a date input that’s less literal than a birthday, where customers have to pick a day rather than provide it (e.g. “first Saturday in July”). For this case, we could enhance the three input fields with a calendar widget that users could use as well. A default input would depend on either the current date or a future date that most customers tend to choose.

Adam provides a simple code example for the Memorable date pattern in his NoStyle Design System. It saves plenty of development work and avoids plenty of accessibility issues — all of that by avoiding tapping around calendar widgets or unnecessary scrolling through dropdown wheels.

Wrapping Up

Of course, a good form control depends on the kind of date input that we are expecting. For trip planners, where we expect customers to select a date of arrival, a flexible input with a calendar look-up might be useful.

When we ask our customers about their date of birth, though, we are asking for a very specific date — a very specific string, referring to an exact day, month, and year. In that case, a drop-down is unnecessary, and so is a calendar look-up defaulting to a more-or-less random value. If you do need one, avoid native date pickers and native drop-downs if possible and use an accessible custom solution instead. Otherwise, rely on three simple input fields, with labels and explanations placed above the input field.

We’ve also published a lengthy opus on designing a perfect date and time picker, along with checklists you might want to use to design or build one.

Related Articles

If you find this article useful, here’s an overview of similar articles we’ve published over the years — and a few more are coming your way.

Categories: Design

Little Smashing Stories

Wed, 05/12/2021 - 01:00

This is not a regular Smashing article. Over a decade ago, we set out to send a truly smashing newsletter with useful tips and techniques for designers and developers. The first issue was sent out in early 2009. And we would have never imagined that we’d reach 190,000 wonderful folks like you, who read the email every other week. In fact, most newsletters we send out these days have pretty much stayed true to the original course that we set out back in the day.

Today, we have a little celebration for our 300th newsletter edition, which coincides with the birthday of our incredible illustrator Ricardo Gimenes, the creative mind behind all of the Smashing Cats (over 150, and still counting!). Ricardo must be dreaming in cats at this point. Happy birthday, dear Ricardo! (Please sing along, if possible.)

In this post, we share stories of some of the people behind these weekly newsletters and our little magazine. We asked everyone on the team to share a personal story — something from their memories, their childhood, or anything that made a world of difference to them. These stories are the heart of this little article.

But of course you — yes, you, dear reader, and your story — are at the heart of this issue as well. We’d love to hear your story on Twitter and in the comments. When you started reading the newsletter, and perhaps how a little tip in the newsletter helped you in some way.

And of course, thank you so much for being and staying smashing. I would love to meet you and hear your story, and I’m very hopeful that we all will be able to do just that in the near future.

Vitaly (@smashingmag)

Esther Fernández (Sponsorships)

Last week, as my parents were tidying up the family house, they came across some old pictures that they chose to share with me. Amongst them was this old picture of me proudly standing on the top of an olive tree in the wild spaces that once surrounded my hometown.

The photo arrived at the perfect time. As a mirror, it reminded me of who I once was and who I still am. At times when I have to confront some of my deepest fears, this picture proves to me that I have the audacity of climbing and standing, hands-free.

Iris Lješnjanin (Editorial)

I had just turned five when my parents and I moved from Slovenia to the United Arab Emirates where I lived until high school. Later on, with my friends and family scattered all across the globe, I remember missing them so much that I made a promise to myself to write letters and send pictures so that we could stay in touch — even though it sometimes took ages to get one back or I never even heard back from them.

I loved collecting stickers, postcards, and different types of paper to write on, and I even found penpals who shared my passion for writing and lived in Germany, Bosnia, Australia, and even Brunei (just to name a few).

Later on, when communication turned into emails and chatting via various messaging apps (does anyone else still remember mIRC, MSN, and ICQ?), the hand-written letters slowly stopped filling our mailbox and all of the writing was turned into endless typing alongside emoticons and all sorts of ASCII art.

Still, I remember printing out my favorite emails on a continuous form paper (the one with punched holes on the sides), just so that I’d always have them at hand and could read them along with the other letters kept away in my memory box that I kept on the top shelf of my closet.

Now that I’m in my 30s, I still love getting snail mail, and especially in times like these, a letter can be a considerate and gentle way to reach out to someone and not make them feel like they’re pressured to get back to you right away. (Dear Inbox, I’m looking at you.) There’s something special about writing letters. It’s a piece of paper that creates a sort of intimacy and connection that cannot be felt online.

It’s a sign that somebody has actually taken their time to sit down and prepare something just for you. It’s a piece of paper carrying the gentle touch of somebody who wrote meaning into words while thinking about you and put it in a beautifully wrapped envelope — with not just any stamp. That truly makes every letter I’ve ever received quite unique, special, and dear to my heart.

Before I joined Smashing, Vitaly had already started sending out the Smashing Newsletter, and what actually started out as a fun writing project for the entire team, turned into something so precious and valuable that we can’t imagine ourselves without today. Not only is it an opportunity to connect with folks who share their passion for the web, but it also allows us to contribute to designers and developers by shining the spotlight on those who don’t get the credit and attention they truly deserve for their dedication and hard work.

It is with that same enthusiasm of personally writing each and every letter that we (on behalf of the Smashing team) would like to personally say "Thank you" with each and every Smashing email newsletter that we send out. A heartfelt thanks to both those who share their work online, as well as you, dear reader, for sticking around throughout the years while supporting and inspiring others by simply spreading the word.

Alma Hoffmann (Editorial)

I’ve been in a long distance relationship with Smashing since 2010. It all started with a tweet from Vitaly looking for writers. I replied. The rest is history. I met Vitaly, Iris, Markus, Ricardo, Inge, Rachel, Amanda, and many others in person in 2017. It was one of the biggest highlights of my career.

I walked around with Iris looking for a sweater because I was so cold. We hustled as we walked around the streets finding the stores. She took me to stores to buy gifts to bring back home. And we did it all practically under an hour. She gave me a sketchbook which I filled with photos of Freiburg and a canary yellow bag which I still use to carry my art supplies around town. Love my bag! Some years before, I was having a sad day and on the mail was a gift from Smashing. It made my day!

I love working at Smashing. The commitment to quality is not only impressive, but also a unifying element that keeps all of us connected to a single purpose: to be the best magazine about web development and design. I’ve become a better writer because of it.

Jarijn Nijkamp (Membership)

I have worked in or ‘around’ the educational field for the better part of my professional life, and helping people find their path is just the coolest thing. I still feel very happy (and a bit proud) when an old student gets in touch and shares something with me — either personal or professional.

The other day I found this nice graduation photo from the first ‘cohort’ I taught and managed. A very international bunch of great students who have since grown up to be wonderful people!

Vitaly Friedman (Editorial)

I used to travel somewhere almost every week: from one place to another, between projects, conferences, workshops and just random coffee breaks in favorite places in the world. Back in 2013, I moved out of my apartment without moving in anywhere. I gave away all my belongings to a homeless shelter, and decided to run a creative experiment, traveling from one place to another. I’ve been privileged to have visited an incredible amount of places, and meet an incredible amount of remarkable people, and that experiment never really stopped.

Until 2020. It was a difficult and remarkably unsettling transition for me personally, but it did give me an extremely refreshing perspective on how things can be. We move forward by inertia at times, but stopping and looking around and revisiting things is such a healthy exercise in self-understanding. Over the last year, I’ve rediscovered the beauty of a mouse, secondary screen and a comfy external keyboard. I’ve learned about the importance of true, meaningful, deep relationships. Of letting go, and embracing things that lie in your heart. In my case, it’s writing, editing, building, designing.

I even started feeling comfortable in the online space with our online workshops, and having more focused time to write and code and build and design. I still miss traveling a lot, and can’t wait to meet dear friends all over the world in person. But it’s not as bad as I thought it would be a year ago. The new remote world changed my perspective around myself, and if anything, I can now make a more balanced and conscious choice of how to shape the future. And that’s a decision I won’t take lightly.

Amanda Annandale (Events)

I’ve been at Smashing for over four years, but that was all possible because of a small decision that completely changed my life ten years ago. I was a Stage and Event Manager in NYC, and decided to take a freelance job running events. For my first job, I assisted with the 'Carsonified/Future Of...' event while working on the "Avenue Q" stage. Their team was lovely, including their tech guy, who has since become my husband!

After moving to England to be with my husband, I was able to spend more time with the 'Carsonified/Future Of...' friends, and one of them was just moving on from a job at Smashing. She introduced me to the Smashing team, where I joined just a few months later. In an amazing twist, the first SmashingConf I produced was on that very same “Avenue Q” stage, where my Smashing journey began nearly ten years ago — over five years before I joined the team!

We’d Love To Hear Your Story!

These are just a few of our stories, but each of you has one as well. We’d love to hear yours! What changed your point of view of the world? What makes you smile and happy? What memory keeps you optimistic and excited about the future?

Or perhaps you have a story of how you learned about the newsletter in the first place, and when you first started reading it? We’d love to hear your story, perhaps how a little tip in the newsletter helped you in some way. And yet again, thanks for being smashing, everyone!

Categories: Design

A Primer On CSS Container Queries

Tue, 05/11/2021 - 06:30

At present, container queries can be used in Chrome Canary by visiting chrome://flags and searching for and enabling them. A restart will be required.

Note: Please keep in mind that the spec is in progress, and could change at any time. You can review the draft document which will update as the spec is formed.

What Problem Are CSS Container Queries Solving?

Nearly 11 years ago, Ethan Marcotte introduced us to the concept of responsive design. Central to that idea was the availability of CSS media queries which allowed setting various rules depending on the size of the viewport. The iPhone had been introduced three years prior, and we were all trying to figure out how to work within this new world of contending with both mobile screen sizes and desktop screen sizes (which were much smaller on average than today).

Before and even after responsive design was introduced, many companies dealt with the problem of changing layout based on screen size by delivering completely different sites, often under the subdomain of m. Responsive design and media queries opened up many more layout solutions, and many years of creating best practices around responding to viewport sizes. Additionally, frameworks like Bootstrap rose in popularity largely due to providing developers responsive grid systems.

In more recent years, design systems and component libraries have gained popularity. There is also a desire to build once and deploy anywhere, meaning a component developed in isolation is intended to work in any number of contexts, making the building of complex interfaces more efficient and consistent.

At some point, those components come together to make a web page or application interface. Currently, with only media queries, there is often an extra layer involved to orchestrate mutations of components across viewport changes. As mentioned previously, a common solution is to use responsive breakpoint utility classes to impose a grid system, such as frameworks like Bootstrap provide. But those utility classes are a patch solution for the limitations of media queries, and often result in difficulties for nested grid layouts. In those situations, you may have to add many breakpoint utility classes and still not arrive at the most ideal presentation.

Or, developers may be using certain CSS grid and flex behaviors to approximate container responsiveness. But flex and CSS grid solutions are restricted to only loosely defining layout adjustments from horizontal to vertical arrangements, and don’t address the need to modify other properties.

Container queries move us beyond considering only the viewport, and allow any component or element to respond to a defined container’s width. So while you may still use a responsive grid for overall page layout, a component within that grid can define its own changes in behavior by querying its container. Then, it can adjust its styles depending on whether it’s displayed in a narrow or wide container.
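As a minimal sketch in the draft syntax this article targets (the main container matches the later examples, while the .card class and the 40ch breakpoint are made up for illustration; the syntax may change as the spec evolves):

```css
/* Establish a container: descendants can now query this element's inline size.
   Draft syntax at the time of writing. */
main {
  contain: layout inline-size style;
}

/* Illustrative component: stacked by default... */
.card {
  display: grid;
  gap: 1rem;
}

/* ...and side-by-side once the *container* (not the viewport) is wide enough.
   The 40ch threshold is an assumed value. */
@container (min-width: 40ch) {
  .card {
    grid-template-columns: auto 1fr;
  }
}
```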

We also kept main as a container. This means we can add styles for the .article class, but they will be in response to the width of main, not themselves. I anticipate that this ability, where rules within container queries respond to multiple layers of containers, will cause the most confusion during initial implementation, and later when evaluating and sharing stylesheets.

In the near future, updates to browsers’ DevTools will certainly help in making DOM changes that alter these types of relationships between elements and the containers they may query. Perhaps an emerging best practice will be to only query one level up within a given @container block, and to enforce children carrying their container with them, to reduce the potential for negative impact here. The trade-off is the possibility of more DOM elements, as we saw with our article example, and consequently muddier semantics.

In this example, we saw what happened with both nested containers and also the effects of introducing flex or grid layout into a container. What is currently unsettled in the spec is what happens when a container query is defined but there are no actual container ancestors for those queried elements. It may be decided to consider containment to be false and drop the rules, or they may fall back to the viewport. You can track the open issue for fallback containment and even add your opinion to this discussion!

Container Element Selector Rules

Earlier I mentioned that a container cannot itself be styled within a container query (unless it’s a nested container and responding to its ancestor container’s query). However, a container can be used as part of the CSS selector for its children.

Why is this important? It allows retaining access to CSS pseudo-classes and selectors that may need to originate on the container, such as :nth-child.

Given our article example, if we wanted to add a border to every odd article, we can write the following:

@container (min-width: 60ch) {
  .container:nth-child(odd) > article {
    border: 1px solid grey;
  }
}

If you need to do this, you may want to use less generic container class names to be able to identify in a more readable way which containers are being queried for the rule.

Case Study: Upgrading Smashing Magazine’s Article Teasers

If you visit an author’s profile here on Smashing (such as mine) and resize your browser, you’ll notice the arrangement of the article teaser elements change depending on the viewport width.

On the smallest viewports, the avatar and author’s name are stacked above the headline, and the reading time and comment stats are slotted between the headline and article teaser content. On slightly larger viewports, the avatar floats left of all the content, causing the headline to also sit closer to the author’s name. Finally, on the largest viewports, the article is allowed to span nearly the full page width and the reading time and comment stats change their position to float to the right of the article content and below the headline.

By combining container queries with an upgrade to using CSS grid template areas, we can update this component to be responsive to containers instead of the viewport. We’ll start with the narrow view, which also means that browsers that do not support container queries will use that layout.

Now for this demo, I’ve brought the minimum necessary existing styles from Smashing, and only made one modification to the existing DOM which was to move the headline into the header component (and make it an h2).

Here’s a reduced snippet of the article DOM structure to show the elements we’re concerned about re-arranging (original class names retained):

<article class="article--post">
  <header>
    <div class="article--post__image"></div>
    <span class="article--post__author-name"></span>
    <h2 class="article--post__title"></h2>
  </header>
  <footer class="article--post__stats"></footer>
  <div class="article--post__content"></div>
</article>

We’ll assume these are direct children of main and define main as our container:

main { contain: layout inline-size style; }

In the smallest container, we have the three sections stacked: header, stats, and content. Essentially, they are appearing in the default block layout in DOM order. But we’ll go ahead and assign a grid template, and place each of the relevant elements into a named area, because the template is key to our adjustments within the container queries.

.article--post {
  display: grid;
  grid-template-areas:
    "header"
    "stats"
    "content";
  gap: 0.5rem;
}

.article--post header {
  grid-area: header;
}

.article--post__stats {
  grid-area: stats;
}

.article--post__content {
  grid-area: content;
}

Grid is ideal for this job because being able to define named template areas makes it much easier to apply changes to the arrangement. Plus, its layout algorithm is better suited than flexbox to how we want the areas to resize, which may become clearer as we add in the container query updates.

Before we continue, we also need to create a grid template for the header to be able to move around the avatar, author’s name, and headline.

We’ll add onto the rule for .article--post header:

.article--post header {
  display: grid;
  grid-template-areas:
    "avatar name"
    "headline headline";
  grid-auto-columns: auto 1fr;
  align-items: center;
  column-gap: 1rem;
  row-gap: 0.5rem;
}

If you’re less familiar with grid-template-areas, what we’re doing here is ensuring that the top row has one column for the avatar and one for the name. Then, on the second row, we’re planning to have the headline span both of those columns, which we define by using the same name twice.

Importantly, we also define the grid-auto-columns to override the default behavior where each column takes up 1fr or an equal part of the shared space. This is because we want the first column to only be as wide as the avatar, and allow the name to occupy the remaining space.

Now we need to be sure to explicitly place the related elements into those areas:

.article--post__image {
  grid-area: avatar;
}

.article--post__author-name {
  grid-area: name;
}

.article--post__title {
  grid-area: headline;
  font-size: 1.5rem;
}

We’re also defining a font-size for the title, which we’ll increase as the container width increases.

Finally, we will use flex to arrange the stats list horizontally, which will be the arrangement until the largest container size:

.article--post__stats ul {
  display: flex;
  gap: 1rem;
  margin: 0;
}

Note: Safari recently completed support of gap for flexbox, meaning it’s supported for all modern browsers now!

Categories: Design

The Conference Platform We Use For Our Online Events: Hopin

Tue, 05/11/2021 - 03:00

When the Smashing team first heard of COVID-19, we didn’t know what to think. On a Monday, we began making alterations to our event, adding space, cleaning stations, rules, etc. But that Friday we knew this would change the scope of our year.

From there, new discussions began throughout Smashing. What is the heart of "SmashingConf"? We didn’t want to just publish talks. Smashing is about togetherness, community, and learning from one another — through struggles, failures, and achievements. How would we communicate that while each of us joined from around the globe?

This discussion continued as we looked into the many concepts and abilities coming from each platform. With so many options, what factors were most important? It was then we were pointed to a new platform, Hopin.

Why Hopin?

From there, a few months of trial, experimenting, and playing brought us together. Hopin’s Session Rooms gave us a way to bring intimacy to our Q&A sessions. Attendees are able to share their video and audio, ask questions, and have a full conversation with the speakers. But we have taken that further: some of the speakers were kind enough to hold audit sessions for us, where people could share their audio, video, and screen, and get instant feedback on their website — useful information ranging from accessibility and performance to design.

And it wouldn’t be SmashingConf without one of our Smashing parties. Our dearest colleagues, Charis and Jarijn, donned some virtual costumes and hosted a quiz show to end all quiz shows. Attendees either joined on screen or just in the chat, and competed for a range of prizes. From questions to dance parties, there was a feeling of togetherness from those sessions.

Now, I don’t know about you, but one of the best aspects of going to a conference is the people you meet throughout the week. And in person, there are so many ways to do that. From taking a workshop together to just standing in line behind someone for a coffee.

In Hopin, a feature we’re still trying to embrace as much as deserved is "Networking". Here, you decide when you’re "ready" and the platform pairs you with another person to chat for 3 minutes. Sounds daunting? What if there’s a challenge? Smashing is all about the challenges, and at our Holiday Meets event, the attendees had a set of Bingo cards, and had to chat to other attendees to solve the clues!

But of course, some people would prefer not to share their video — and that’s totally fine, of course. But we did have a bit of a challenge there. While Hopin offers multiple chat options (including a DM feature that you can enable or disable), we wanted to share photos and emoji. Here we brought in a new Slack channel (yes, another one). As Hopin progresses, they are aiming to bring their own emoji deck, but we’ve been able to use Get Emoji to copy and paste in, as well as the native Mac emoji keyboard to add reactions, bringing a more varied set of reactions.

As our community is the most important factor, our biggest concern was accessibility. From what we’ve learned in conversations with the Hopin crew, they have had accessibility at the forefront of their minds since the beginning. However, these things take time to get right. While we were concerned about a new platform, we also knew that we had to go with a company that was willing to put the time and energy into it.

While Hopin now has full keyboard integration and we have great reports of how it works with a screen reader, the missing ingredient has been captioning. The community has been told repeatedly that closed captioning is coming, and the team has confirmed it’s currently in development. In the meantime, it is possible to integrate captions on the main stage when using the RTMP streaming function, but the rest is still a little way off.

While we wait for more integration, Smashing has been thrilled to partner with two wonderful companies to help us with our captioning needs:

  • White Coat Captioning
    This tool uses a human transcriber, live, who inputs the text to StreamText, a platform that lets your audience read a live transcript in a separate window, removing the need to integrate captions. For those interested, this is a more expensive option, but you pay for human accuracy.
  • Thisten
    An AI program that also uses a remote transcribing program. This also allows your audience to follow along, as well as an option to let selected users or team members edit words, allowing the AI to learn as it goes. While this is a much more cost-friendly option, it is an AI program and still learning.

Now when it comes to broadcasting, Hopin caught our eye early on. With a dedicated backstage, you are given the option to stream from the platform — no other pieces of technology necessary. But, if you are more technically minded, there’s a streamless RTMP feature that allows you to use programs like Open Broadcaster Software to create and integrate multiple screens, captioning, and so on.

In the beginning, the streaming quality from Hopin did leave a bit to be desired. As the program is browser-based rather than app-based, it works best on Chrome or Firefox. In addition, the bandwidth it needs (paired with the CPU it can take) made it a bit restrictive.

However, this year, Hopin held its 2021 Kickoff with some exciting updates. At the end of 2020, the dynamic streaming platform StreamYard joined the Hopin family. StreamYard is a favorite amongst Twitch streamers and event producers alike. With enhanced production quality, there are ways to incorporate titles, lower thirds, enhancements, questions from the audience, and other labels. This also allows for more personalization, as well as more locations to stream.

Note: Hopin has prepared a guide on how to host a virtual event in 2021 — I recommend checking it out.

Another addition that we’re thrilled about is the integration of Miro. If you haven’t used it before, Miro is a remote whiteboard app that lets users create ideas together. Vitaly Friedman first brought it to our attention as we were developing new ways to engage people in online events and workshops. From here, he developed a set of challenges and competitions where teams of attendees could create and storyboard their new ideas of design and code together. Now, Hopin has made it possible to integrate Miro directly within the platform. No need to exit and work in multiple places. Plus, as there is a free option for Miro, there is no added cost — just added benefit.

But most excitingly, Hopin announced their app for hybrid events. The in-person attendees will be able to join their fellow attendees online and join in on the conversation. Vendors will be able to present in person, but also talk to those who are online whilst widening their reach. If an attendee needed to miss part of a talk, or head outside, a live stream would still be available. If an attendee’s company is headed in person, but personal reasons restrict their travel, they can still join in with the company from the comfort of their own home, and not miss a beat of the discussion.

What’s Next For Smashing Events?

As we are all hoping to get back to what life was like before (especially in terms of in-person events), there are many reasons some will be reluctant to travel or restricted from traveling in the future. And we must say, there are some great benefits to online events. Our team has been able to study and learn remotely and enhance their abilities. Smashing has been able to bring forth its own workshops, bringing the experts to your home office. And most importantly, we have met some wonderful people this year who would not have been able to travel previously. While Smashing can’t wait to get back to in-person events, hybrid may be the future.

If you’re like us and can’t wait to jump into our next Hopin event, be sure to see what our Membership team has to offer! With links to all videos of our previous events — including our Smashing Meets series — you’ll experience Hopin’s great platform and all of the Smashing content you can handle!

Off Hopin, we’re thrilled to continue to connect with you and continually find ways to learn together. If you haven’t already, take a moment to see what online workshops Smashing has to offer. All the energy and education of our in-person sessions, split up over a few days, giving you time to digest and explore.

Have to miss a session? No problem! Videos are available as soon as possible after the workshop — with many being uploaded the next day! Worried about joining on Zoom? We have a moderator in each session to help with any technical issues, so no one is left behind, and no class comes to a halt because of a connection issue. There are sessions for every discipline available — with more being added every day.

Categories: Design

Get Started With React By Building A Whac-A-Mole Game

Mon, 05/10/2021 - 04:00

I’ve been working with React since ~v0.12 was released. (2014! Wow, where did the time go?) It’s changed a lot. I recall certain “Aha” moments along the way. One thing that’s remained is the mindset for using it: we think about things in a different way as opposed to working with the DOM directly.

For me, my learning style is to get something up and running as fast as I can. Then I explore deeper areas of the docs and everything included whenever necessary. Learn by doing, having fun, and pushing things!


The aim here is to show you enough React to cover some of those "Aha" moments, leaving you curious enough to dig into things yourself and create your own apps. I recommend checking out the docs for anything you want to dig into. I won’t be duplicating them.

Please note that you can find all examples in CodePen, but you can also jump to my GitHub repo for a fully working game.

First App

You can bootstrap a React app in various ways. Below is an example:

import React from 'react'
import { render } from 'react-dom'

const App = () => <h1>{`Time: ${new Date().toLocaleTimeString()}`}</h1>

render(<App/>, document.getElementById('app'))

Starting Point

We’ve learned how to make a component and we can roughly gauge what we need.

import React, { Fragment } from 'react'
import { render } from 'react-dom'

const Moles = ({ children }) => <div>{children}</div>
const Mole = () => <button>Mole</button>
const Timer = () => <div>Time: 00:00</div>
const Score = () => <div>Score: 0</div>

const Game = () => (
  <Fragment>
    <h1>Whac-A-Mole</h1>
    <button>Start/Stop</button>
    <Score/>
    <Timer/>
    <Moles>
      <Mole/>
      <Mole/>
      <Mole/>
      <Mole/>
      <Mole/>
    </Moles>
  </Fragment>
)

render(<Game/>, document.getElementById('app'))

Starting/Stopping

Before we do anything, we need to be able to start and stop the game. Starting the game will trigger elements like the timer and moles to come to life. This is where we can introduce conditional rendering.

const Game = () => {
  const [playing, setPlaying] = useState(false)

  return (
    <Fragment>
      {!playing && <h1>Whac-A-Mole</h1>}
      <button onClick={() => setPlaying(!playing)}>
        {playing ? 'Stop' : 'Start'}
      </button>
      {playing && (
        <Fragment>
          <Score />
          <Timer />
          <Moles>
            <Mole />
            <Mole />
            <Mole />
            <Mole />
            <Mole />
          </Moles>
        </Fragment>
      )}
    </Fragment>
  )
}

We have a state variable of playing and we use that to render elements that we need. In JSX, we can use a condition with && to render something if the condition is true. Here we say to render the board and its content if we are playing. This also affects the button text where we can use a ternary.
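The underlying JavaScript behavior is easy to check outside React: `cond && value` evaluates to the value when the condition is true and to the falsy condition otherwise (and React renders nothing for booleans). A quick plain-JavaScript sketch:

```javascript
// Plain JavaScript demonstration of the expressions JSX relies on.
const playing = true

// Logical AND returns the right-hand side when the left is truthy...
console.log(playing && 'board content') // → 'board content'

// ...and the falsy left-hand side otherwise; React skips rendering `false`.
console.log(!playing && 'title') // → false

// The ternary picks the button label.
console.log(playing ? 'Stop' : 'Start') // → 'Stop'
```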

Open the demo at this link and set the extension to highlight renders. Next, you’ll see that the timer renders as time changes, but when we whack a mole, all components re-render.

Loops in JSX

You might be thinking that the way we’re rendering our Moles is inefficient. And you’d be right to think that! There’s an opportunity for us here to render these in a loop.

With JSX, we tend to use 99% of the time to render a collection of things. For example:

const USERS = [
  { id: 1, name: 'Sally' },
  { id: 2, name: 'Jack' },
]

const App = () => (
  <ul>
    {{ id, name }) => <li key={id}>{name}</li>)}
  </ul>
)

The alternative would be to generate the content in a for loop and then render the return from a function.

return ( <ul>{getLoopContent(DATA)}</ul> )

What’s that key attribute for? That helps React determine what changes need to render. If you can use a unique identifier, then do so! As a last resort, use the index of the item in a collection. (Read the docs on lists for more.)

For our example, we don’t have any data to work with. If you need to generate a collection of things, then here’s a trick you can use:

new Array(NUMBER_OF_THINGS).fill().map()

This could work for you in some scenarios.
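You can verify the trick in plain JavaScript: new Array(N) alone creates empty slots that map would skip, while fill() turns them into real (undefined) entries so the callback runs for each one:

```javascript
// .fill() converts the empty slots of new Array(5) into undefined values,
// so .map() visits all five and we can use the index as an id.
const moleIds = new Array(5).fill().map((_, index) => index)
console.log(moleIds) // → [0, 1, 2, 3, 4]
```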

return (
  <Fragment>
    <h1>Whac-A-Mole</h1>
    <button onClick={() => setPlaying(!playing)}>
      {playing ? 'Stop' : 'Start'}
    </button>
    {playing && (
      <Board>
        <Score value={score} />
        <Timer time={TIME_LIMIT} onEnd={() => console.log('Ended')} />
        {new Array(5).fill().map((_, id) => (
          <Mole key={id} onWhack={onWhack} />
        ))}
      </Board>
    )}
  </Fragment>
)

Or, if you want a persistent collection, you could use something like uuid:

import { v4 as uuid } from 'uuid'

const MOLE_COLLECTION = new Array(5).fill().map(() => uuid())

// In our JSX
{ => (
  <Mole key={id} onWhack={onWhack} />
))}

Ending Game

We can only end our game with the Start button. When we do end it, the score remains when we start again. The onEnd for our Timer also does nothing yet.

We’re going to bring in a third-party solution to make our moles bob up and down. This is an example of how to bring in third-party solutions that work with the DOM. In most cases, we use refs to grab DOM elements, and then we use our solution within an effect.

We’re going to use GreenSock(GSAP) to make our moles bob. We won’t dig into the GSAP APIs today, but if you have any questions about what they’re doing, please ask me!

Here’s an updated Mole with GSAP:

import gsap from 'gsap'

const Mole = ({ onWhack }) => {
  const buttonRef = useRef(null)

  useEffect(() => {
    gsap.set(buttonRef.current, { yPercent: 100 }), {
      yPercent: 0,
      yoyo: true,
      repeat: -1,
    })
  }, [])

  return (
    <div className="mole-hole">
      <button className="mole" ref={buttonRef} onClick={() => onWhack(MOLE_SCORE)}>
        Mole
      </button>
    </div>
  )
}

We’ve added a wrapper to the button which allows us to show/hide the Mole, and we’ve also given our button a ref. Using an effect, we can create a tween (GSAP animation) that moves the button up and down.

You’ll also notice that we’re using className, which is the JSX equivalent of the class attribute, to apply class names. Why don’t we target the class name with GSAP instead of using a ref? Because if we had many elements with that className, our effect would try to animate them all. This is why useRef is a great choice to stick with.

See the Pen 8. Moving Moles by @jh3y.

Awesome, now we have bobbing Moles, and our game is complete in a functional sense. But they all move exactly the same, which isn’t ideal; they should operate at different speeds. The points scored should also reduce the longer it takes for a Mole to get whacked.

Our Mole’s internal logic can deal with how scoring and speeds get updated. Passing the initial speed, delay, and points in as props will make for a more flexible component.

<Mole key={index} onWhack={onWhack} points={MOLE_SCORE} delay={0} speed={2} />

Now, for a breakdown of our Mole logic.

Let’s start with how our points will reduce over time. This is a good candidate for a ref: we have a value that doesn’t affect rendering, but whose latest value could get lost in a closure. We create our animation in an effect and it’s never recreated. On each repeat of our animation, we want to decrease the points value by a multiplier. The points value can have a minimum value defined by a pointsMin prop.

const bobRef = useRef(null)
const pointsRef = useRef(points)

useEffect(() => {
  bobRef.current =, {
    yPercent: -100,
    duration: speed,
    yoyo: true,
    repeat: -1,
    delay: delay,
    repeatDelay: delay,
    onRepeat: () => {
      pointsRef.current = Math.floor(
        Math.max(pointsRef.current * POINTS_MULTIPLIER, pointsMin)
      )
    },
  })
  return () => {
    bobRef.current.kill()
  }
}, [delay, pointsMin, speed])
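To get a feel for what the onRepeat decay does to the score, here is a framework-free simulation. The multiplier and minimum values are illustrative picks, not necessarily the article's exact constants.

```javascript
// Hedged sketch: the scoring decay in isolation, outside React and GSAP.
// POINTS_MULTIPLIER and the pointsMin of 10 are illustrative assumptions.
const POINTS_MULTIPLIER = 0.9

const decay = (points, pointsMin) =>
  Math.floor(Math.max(points * POINTS_MULTIPLIER, pointsMin))

let points = 100
const history = []
for (let i = 0; i < 5; i++) {
  points = decay(points, 10)
  history.push(points)
}
// history is [90, 81, 72, 64, 57]: the mole is worth less each bob
```

So a mole whacked on its first bob is worth 90 points here, while a slow whack settles toward the pointsMin floor.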

We’re also creating a ref to keep a reference for our GSAP animation. We will use this when the Mole gets whacked. Note how we also return a function that kills the animation on unmount. If we don’t kill the animation on unmount, the repeat code will keep firing.

See the Pen 9. Score Reduction by @jh3y.

What will happen when a mole gets whacked? We need a new state for that.

const [whacked, setWhacked] = useState(false)

And instead of using the onWhack prop in the onClick of our button, we can create a new function whack. This will set whacked to true and call onWhack with the current pointsRef value.

const whack = () => {
  setWhacked(true)
  onWhack(pointsRef.current)
}

return (
  <div className="mole-hole">
    <button className="mole" ref={buttonRef} onClick={whack}>
      Mole
    </button>
  </div>
)

The last thing to do is respond to the whacked state in an effect with useEffect. Using the dependency array, we can make sure we only run the effect when whacked changes. If whacked is true, we reset the points, pause the animation, and animate the Mole underground. Once underground, we wait a random delay before restarting the animation. The restarted animation runs faster via timeScale, and we set whacked back to false.

useEffect(() => {
  if (whacked) {
    pointsRef.current = points
    bobRef.current.pause()
, {
      yPercent: 100,
      duration: 0.1,
      onComplete: () => {
        gsap.delayedCall(gsap.utils.random(1, 3), () => {
          setWhacked(false)
          bobRef.current
            .restart()
            .timeScale(bobRef.current.timeScale() * TIME_MULTIPLIER)
        })
      },
    })
  }
}, [whacked])

That gives us:

See the Pen 10. React to Whacks by @jh3y.

The last thing to do is pass props to our Mole instances that will make them behave differently. But, how we generate these props could cause an issue.

<div className="moles">
  {new Array(MOLES).fill().map((_, id) => (
    <Mole
      key={id}
      onWhack={onWhack}
      speed={gsap.utils.random(0.5, 1)}
      delay={gsap.utils.random(0.5, 4)}
      points={MOLE_SCORE}
    />
  ))}
</div>

This would cause an issue because the props would change on every render as we generate the moles. A better solution could be to generate a new Mole array each time we start the game and iterate over that. This way, we can keep the game random without causing issues.

const generateMoles = () =>
  new Array(MOLES).fill().map(() => ({
    speed: gsap.utils.random(0.5, 1),
    delay: gsap.utils.random(0.5, 4),
    points: MOLE_SCORE,
  }))

// Create state for moles
const [moles, setMoles] = useState(generateMoles())

// Update moles on game start
const startGame = () => {
  setScore(0)
  setMoles(generateMoles())
  setPlaying(true)
  setFinished(false)
}

// Destructure mole objects as props
<div className="moles">
  {{ speed, delay, points }, id) => (
    <Mole
      key={id}
      onWhack={onWhack}
      speed={speed}
      delay={delay}
      points={points}
    />
  ))}
</div>

And here’s the result! I’ve gone ahead and added some styling along with a few varieties of moles for our buttons.

See the Pen 11. Functioning Whac-a-Mole by @jh3y.

We now have a fully working “Whac-a-Mole” game built in React. It took us less than 200 lines of code. At this stage, you can take it away and make it your own. Style it how you like, add new features, and so on. Or you can stick around and we can put together some extras!

Tracking The Highest Score

We have a working "Whac-A-Mole", but how can we keep track of our highest achieved score? We could use an effect to write our score to localStorage every time the game ends. But what if persisting things were a common need? We could create a custom hook called usePersistentState: a wrapper around useState that reads from and writes to localStorage.

const usePersistentState = (key, initialValue) => {
  const [state, setState] = useState(
    window.localStorage.getItem(key)
      ? JSON.parse(window.localStorage.getItem(key))
      : initialValue
  )

  useEffect(() => {
    window.localStorage.setItem(key, JSON.stringify(state))
  }, [key, state])

  return [state, setState]
}

And then we can use that in our game:

const [highScore, setHighScore] = usePersistentState('whac-high-score', 0)

We use it exactly the same as useState. And we can hook into onWhack to set a new high score during the game when appropriate:

const endGame = points => {
  if (score > highScore) setHighScore(score) // play fanfare!
}

How might we be able to tell if our game result is a new high score? Another piece of state? Most likely.
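One hedged way to capture that as pure logic (the endGameState helper and the newHighScore flag are our own naming, not from the article's code):

```javascript
// Hypothetical helper: derive the end-of-game result in one place, then
// feed newHighScore into a piece of state to show a "New high score!" banner.
const endGameState = (score, highScore) => ({
  newHighScore: score > highScore,
  highScore: Math.max(score, highScore),
})

const win = endGameState(120, 100)
// win is { newHighScore: true, highScore: 120 }
const lose = endGameState(80, 100)
// lose is { newHighScore: false, highScore: 100 }
```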

See the Pen 12. Tracking High Score by @jh3y.

Whimsical Touches

At this stage, we’ve covered everything we need to. Even how to make your own custom hook. Feel free to go off and make this your own.

Sticking around? Let’s create another custom hook for adding audio to our game:

const useAudio = (src, volume = 1) => {
  const [audio, setAudio] = useState(null)

  useEffect(() => {
    const AUDIO = new Audio(src)
    AUDIO.volume = volume
    setAudio(AUDIO)
  }, [src])

  return {
    play: () =>,
    pause: () => audio.pause(),
    stop: () => {
      audio.pause()
      audio.currentTime = 0
    },
  }
}

This is a rudimentary hook implementation for playing audio. We provide an audio src and get back an API to play it. We can add noise when we “whac” a mole. Then the decision becomes: is this part of Mole? Is it something we pass to Mole? Is it something we invoke in onWhack?

These are the types of decisions that come up in component-driven development. We need to keep portability in mind. Also, what would happen if we wanted to mute the audio? How could we do that globally? It might make more sense, as a first approach, to control the audio within the Game component:

// Inside Game
const { play: playAudio } = useAudio('/audio/some-audio.mp3')

const onWhack = () => {
  playAudio()
  setScore(score + points)
}

It’s all about design and decisions. If we bring in lots of audio, renaming the play variable could get tedious. Returning an Array from our hook, as useState does, would allow us to name the variables whatever we want. But it also might be hard to remember which index of the Array accounts for which API method.
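To make that trade-off concrete, here is a framework-free sketch; the factories and the playing flag are stand-ins for illustration, not the real Audio element API.

```javascript
// Object return: destructure and rename, keys stay self-documenting.
const makeAudioObject = (audio) => ({
  play: () => { audio.playing = true },
  stop: () => { audio.playing = false },
})

// Array return (useState-style): free naming, but purely positional.
const makeAudioArray = (audio) => [
  () => { audio.playing = true },
  () => { audio.playing = false },
]

const squeak = { playing: false }
const { play: playSqueak } = makeAudioObject(squeak)
playSqueak() // squeak.playing is now true

const pop = { playing: false }
const [playPop] = makeAudioArray(pop) // index 0 happens to be "play"
playPop() // pop.playing is now true
```

With many sounds, the array form avoids repeated play: playThis renames, at the cost of remembering what each index means.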

See the Pen 13. Squeaky Moles by @jh3y.

That’s It!

That’s more than enough to get you started on your React journey, and we got to make something interesting along the way. We covered a lot:

  • Creating an app,
  • JSX,
  • Components and props,
  • Creating timers,
  • Using refs,
  • Creating custom hooks.

We made a game! And now you can use your new skills to add new features or make it your own.

Where did I take it? At the time of writing, it’s at this stage so far:

See the Pen Whac-a-Mole w/ React && GSAP by @jh3y.

Where To Go Next!

I hope building “Whac-a-Mole” has motivated you to start your React journey. Where next? Well, here are some links to resources to check out if you’re looking to dig in more — some of which are ones I found useful along the way.

Categories: Design

How To Organize Product-Related Assets And Collaborate Better

Fri, 05/07/2021 - 02:55

So, you start working on a new product. It’s crucial to set up a well-organized environment — that is, the space where you and your team interact with all product-related files and documents — right away. The number of your assets will only grow with time, and it becomes almost unbearable to find and fix the right assets on the go (except perhaps for small amendments). Besides, a random structure may slow down your workflow, or collapse at a certain stage of your product’s growth.

If there are already specific rules about product assets management in your team (guidelines, principles and a preferable software), then this article might not be that applicable for your case. But if there’s nothing specific in place just yet, or you start building your product environment from scratch, then the information below can save you a lot of time and stress.

At this point, you might get the feeling that the above-stated issues concern only product owners and managers. Actually, that’s not quite true: to achieve really effective collaboration, the entire product team — not only managers, but researchers, designers, and editors as well — needs to be on the same page about how the workspace and assets are organized. That’s why it’s worth it for every team member to invest some time in learning about product asset management and to agree on collaboration rules.

The good news is that there are some well-established guidelines and best practices around the topic. I have learned some principles the hard way, and below you’ll find an overview of what worked and didn’t work for me. I’m a startup co-founder now, so I handle my own product, and I used to be the managing director of content departments in large teams. Among other things, I led an e-learning product with 700 students and 20 lecturers, and twice built a 10-member remote editorial team from scratch.

It may take you a while to understand why certain ways of working are necessary, but, speaking from my experience, it will pay off. The article will be useful for product specialists in small teams who are starting to build their environment from scratch, or who want to revise their product records management skills.

Compare the two folders above. One is a folder with random files displayed in a Google Drive, and the other provides some structure for all your assets across folders. Of course, it would be easier for you to find, say, your July report for investors if everyone on the team named and structured their assets consistently.

Let’s have a look at how to do that.

Product Records 101

So, as designers and developers, we know how to organize and maintain our code and design assets, but what about overall product records at large? There are plenty of common assets that would fall under the "Product records" category:

  • Market research, business plans
  • UX assets: UX tests, copy docs, etc.
  • Editorial policy
  • Development timeline
  • Various specifications
  • Contracts, invoices and templates
  • Presentations, pitch decks, one-pagers, etc.
  • Investors' reports

It all boils down to adopting a shared understanding of the company’s culture, so that every team member is aligned, and can follow along in their work to avoid costly mistakes. It includes managing documents, working with content, dealing with reports, keeping testing records, collaboration and discussions.

Let’s explore some of the principles that could help get there. And we’ll start with the mindset.


These are some basic principles when building the environment:

  • The records should be accessible any time for everybody who is authorized; there shouldn’t be any “head keeper” of product records (a bottleneck, actually).
  • The records should be accessible from anywhere.
  • The records should be robust, i.e. it should be hard to break something irreversibly by accident.
  • The access control should be pre-defined, understandable, and easily manageable.
  • It should be easy to jump in for newcomers.
  • It should be easy to hand out the assets to outside collaborators.

But how do we achieve all of it in practice?

Concepts And Tools

First, let’s define the subject more precisely. When speaking of environment, we are speaking about product assets, not tasks. To understand what piece of information actually is an asset, and what is a task, we need to ask ourselves whether this information would be relevant during the whole product lifecycle, or will it be archived as soon as it is resolved.

Usually, we use content applications for long-term assets, and task managers for time-limited activities.

What applications do we choose? Any: it’s your personal preference. We can use any tools in any combination — we just need to make sure that it’s a cloud application that synchronizes across different users and devices.

Personally, I’m a fan of Google Drive because of its simplicity, accessibility and ease of use for my not-so-tech-savvy colleagues. Good alternatives to the Google Drive tools are Dropbox and Dropbox Paper. And there’s Notion, another popular application — it’s a combination of a task manager and a content keeper. Of course, it’s not an exhaustive list, but they work well for me.

OK, enough talking, let’s organize our files.

How To Create A Well-Organized Workspace Start With A Simple Arrangement And Adjust It Later

So, you have several files that you want to organize. Don’t spend too much time and effort to create a perfect set of folders or sections at once. Start with the simplest arrangement, then watch how you use your workspace, and add, merge, or remove folders or sections on-the-go.


First, you never know how your product will evolve, and second, keep in mind that all organizational activities are still secondary ones, so we probably don’t want to get exhausted with preparations before we get down to our actual work. So it’s a good idea to start with a general structure, watch how you use it for a while, and then review it. In iterations.

For instance, it’s a good idea to create a folder called Sandbox and move there all the documents that don’t fit in already defined categories — e.g. if you spend more than three seconds wondering where to place or locate them. You can define new rules for these documents later.

Have A Weekly 10-min Clean-up

I can’t express what a time-saver it is! I really mean it. Schedule a 10-minute clean-up every week, and use this time to go through your records, adjust file names, check their consistency, or tweak the structure of your folders and sections.

Oh, is everything perfect? Well, you can use this time to get a coffee or meditate for a while — it’s really useful in all aspects.

The Principles of Files Navigation Create An Index Document With Links To Everything

When the amount of documents and assets grows, it’s a good idea to create an overview page of all sets that exist in the folder. Think of it as a front page of your project that reveals all the main components of it.

What would you include in such a document?

  • Onboarding Tips
    A guide that may feature a list of your team members and their roles, i.e., the information you want to pass to a newcomer. To create it, the next time you onboard a new team member, pay attention and record all the messages you send to them. Create a template from these messages, organize them, and put them together.
  • Design Files
    Links to Figma or Sketch files, illustrations, etc.
  • Information Architecture
    Usually, a link to Miro or Airtable.
  • Product Assets
    A Google Drive or Dropbox folder link.
  • Tech Stack
    An explanation of the technical setup used for a product, perhaps with important tools to install or use.
  • Press Kit And Marketing Materials
    Your branding material, interface demo snapshots, and similar assets.
  • Presentations, One-Pagers, Pitches
    Anything you usually send out to tell more about your product, or pitch your idea.
  • Business Assets
    100-day plan, unit economy calculation, etc.
  • Other resources and documents that you actively use.

Such a list eliminates unnecessary questions and rounds of emails with lost attachments. It also proves especially precious when you’re on a call and want to quickly open a file and share it with your colleagues.

One thing to avoid though is copying assets from one folder to another, or from one application to another. We’ll discuss it later in the article.

Use Tags In Document Names

When you create a document, consider adding a prefix, or tag, in its title. A tag reflects categories that the document belongs to.

This way, you’ll navigate files more easily and won’t get confused if you are simultaneously working on similar documents from different projects — say, when you have an index document in your [First] and [Second] projects.

It also helps to navigate between them with dozens of open browser tabs.

Some useful tags:

  • [Draft] or [WIP]: for documents that are still in progress.
  • [Old] or [Arch]: for documents that are no longer relevant.
  • [EN], [ES], or [UA]: for multi-lingual documents.
  • [Jessy]: if you’re working with outside collaborators.
  • Last but not least, an abbreviation of your project name (e.g. [SM] for Smashing Magazine, and so on).

Again, start with those tags that suit you and add more if necessary. Just keep their consistency in mind — say, if you use the tag [WIP] for documents that aren’t ready yet, stick to it and don’t use [Draft] in such cases. Sounds obvious, but sometimes happens unconsciously. Otherwise, it’ll be hard to find necessary files later.

The same guideline also applies to folder names.

Principles Of Editing Use Emoji For Smart Statuses

This might sound a bit uncommon, but emojis can be a great visual navigation tool because they stand out from the rest of the content. For example, you could add emoji to headings or tables to indicate the current status of a document.

You could use emoji for your sections or folders. And it also could work in very different settings — e.g. in UI editing apps, such as Figma or Sketch.

In fact, you could come up with a special system of labeling: say, a green circle for finished items and an orange diamond for work in progress (don’t forget about accessibility: the elements should differ not only in color but in shape as well). Also, consider adding a legend for everyone to consult if necessary.

Highlight And Comment On Changes

If you change some part of a document, it’s a good practice to let your teammates know about the changes when they see the document next time.

For this, we use highlights.

You can also highlight the parts on which you’re currently working. It’s especially helpful when you’re working with large files and have several unfinished and scattered chunks. This way, you reduce the chances to miss out on some unfinished parts since you’ll spot them when revising the file. It works for all kinds of documents actually — from Google Docs articles to Figma boards.

Besides, at every point, your colleagues will know at what stage each part actually is.

Use Default Styles

There is always a temptation to set your own fonts and colors in a doc. But generally, there’s no use in it; it’s just extra effort. So, agree on general type settings, and stick to these default styles for your internal documentation.

Be Careful With Markup

Every app supports anchors, bookmarks, cross-links, and other types of content elements that you can refer to.

For example, when you share a large document, you might want to point to a particular place there. In apps like Google Docs you can add a bookmark and it will generate a sharing link to that bookmark. On opening such a link, the cursor will jump to the bookmark with a small delay.

There are the same anchor links for comments, too.

They’re all useful features, but keep in mind that these elements are not that robust compared to a regular link, pasted in the body of the document or message. For example, if you look at the image below, in the message to the left I added links to Google Docs bookmarks without any notes. If the bookmark disappears, the person will be redirected to the beginning of the page and will have no idea what spot was meant.

In the message to the right, I used bookmarks as a supporting tool and added a short description of what parts should be updated.

So, the golden rule I tend to stick to is to use links to bookmarks and alike as a supportive tool, but still note the spot-in-question, i.e. the context of what is linked to.

The same applies to references inside a document or a page: if you need to add a link redirecting somewhere, don’t forget to explain the item you’re referring to.

Actually, I borrowed this principle from my university course in translation. It’s called infoglut, and it states that if you need to get a message across asynchronously, you should enclose the main concept at least twice there. This way you minimize the chances of misinterpretation if some bits of the message are lost.

It’s not about redundancy — it’s about being robust.

Version Control Principles Keep Only One Up-To-Date Version

If a document should be featured in two places simultaneously, never duplicate it. Instead, use links or shortcuts.

The same applies to any asset. Especially when you use several tools at a time. For example, if you make an index list of your product assets (the “front page”) in a Google Doc, then don’t create a corresponding list of links in a Notion workspace and vice versa.

The reason is simple: avoid doubling the effort of keeping both files up-to-date. Otherwise, you’re likely to end up in a frustrating situation where you have two active files that differ slightly.

Don’t Multiply Assets

If you have a document that falls under several categories — say, you have a copy doc of your landing page in your Design folder and want to add it to your proofreader’s folder so they can edit it — avoid duplicating this document.

Instead, make a shortcut.

In the example above, I’m using the native shortcut tool in Google Drive, but of course you can create a document with a link to the file in other applications the same way.

Don’t Delete Outdated Documents — Add Backlinks

Ah, that’s another useful tip that proved to be handy for me. Say, you’ve revised the entire document, and now the previous version is no longer relevant.

What to do with the outdated document? Don’t delete it — there’s a good chance that you will be opening this link since it’s already saved in your browser history, or that it is bookmarked by one of your colleagues, or it is linked in another file.

However, it’s not an option to leave it as is. What to do in such a case? Well, there are a few options.

  1. Change its appearance; say, make the text color magenta or apply strikethrough style (or both).
  2. Add a tag in the filename, like [OLD] or [Arch].
  3. Add an explanation and link to the new version.

Access Management Principles Create All New Documents In The Team Folder

What differs between an offline folder and a cloud folder? Collaboration options, of course! Ask your colleagues to make a habit of creating new documents within a team folder instead of moving them there afterwards. It just saves unnecessary emails and text messages.

Share Items Via A Sharing Tool

If you work in a browser window and need to share a link to a file, your first hunch will be to copy it from the address bar.

But be careful: in many apps, this shares a link to your private workspace, not to the public version of the file. Or you might share a restricted document without granting the right access. Of course, this ends up with people waiting for your approval, or unnecessary emails that you have to process one by one.

Make it a habit to share files and documents via their native sharing tool. At some point, it will literally save your life (or, at least, your vacation.)

Collaboration Principles

This section is specific primarily to cloud content editors.

Don’t Resolve Others’ Comments — Reply Done Instead

Say, you’re an author and you’re discussing some points with your editor. You made some changes and believe that the discussion is over. In that case, don’t resolve their comments — just reply DONE and always let the person who started the conversation resolve it.

This way, the person can check the final result and, if they agree with it, resolve the issue themselves. Or, if it were you who started the discussion, then you’ll be able to accept or reject the changes without re-reading the whole document, trying to spot the changes made.

Be Careful With Comments And Suggestions

Suggestions and comments allow an editor to suggest their changes without altering the original text, but a viewer won’t see them if they access the document in a view-only or guest mode.

So use comments and suggestions only for team collaboration but not for leaving regular annotations in the document. For such cases, I prefer leaving my annotations in the body of the text in different fonts and colors.

More Tips For a Productive Workflow Use Shortcuts

This piece of advice isn’t very unexpected, but I just can’t help repeating it. For me, the easiest way to learn new shortcuts is to pick, say, a set of three at a time, use them until it becomes fully automated for you, and only then go for another portion of three. Don’t try to encompass all of them at once, it’s way too difficult.

Thus, if you work with Google Docs, take a look at these five most useful ones:

  • Cmd/Ctrl + F to find some text.
  • Cmd/Ctrl + Alt/Option + 1/2/3 to apply the corresponding heading level.
  • Cmd/Ctrl + Shift + C to show the word and character count statistics of the document and the selected piece.
  • Cmd/Ctrl + Enter for a page break.
  • Cmd/Ctrl + Shift + X for strikethrough text style.

You can also take a look at the whole list of Google Docs shortcuts.

If we take Notion, these are my favourite ones:

  • All / (slash) commands that create certain content blocks; /todo is at the top of my list.
  • Cmd/Ctrl + A to select a block where my cursor is.
  • Using [[ + page name to create a link to it.

The Notion team also provides a comprehensive list of all the Notion shortcuts.

This is what works best for my flow, but surely you can think of another set that suits your needs best.

Set Up A Separate Email Folder For Notifications

A separate email folder will help you avoid being overwhelmed with identical emails and missing the important ones. But I’d strongly recommend not turning off email notifications. What to do, then? Separate folders with rules.

I keep the notification badges, but I have all such emails automatically marked as read. Plus, with such a system in place, you can automatically group emails and get an overview of what requires your attention and what is still in progress.

Use A Grammar Correction Extension

A grammar correction tool highlights your typos and suggests better ways to formulate your sentences. There are several products with a desktop app and/or a browser extension that works in every content environment.

A Product Assets Folder Structure With A Template

Right now, I’m working on my own startup and we’re constantly applying all these principles to our file management processes. I’ve simplified our structure and made a default folder you can use and customize:

With this template, you can easily replicate the same structure in your favorite cloud app — be it Dropbox, Google Drive, or pretty much anything else. If you want to try the Notion app, start by using their templates (“Company home” or “Product Wiki” are useful for long-term assets and “Roadmap” can be used as a task management tool), and adjust it on the go. If you use a different app, you can look at both structures and create your perfect combined workspace.

Let’s get the assets organized! And please share your best practices in the comments!

Categories: Design

Improving The Performance Of Shopify Themes (Case Study)

Thu, 05/06/2021 - 04:00

The dreaded refactor of old code can be challenging. Code evolves over time with more features, new or changing dependencies, or maybe a goal of performance improvements. When tackling a big refactor, what are the things you should focus on and what performance improvements can you expect?

I’ve been building Shopify themes for the better part of a decade. When I worked in-house at Shopify in 2013, themes were fairly simple in terms of code complexity. The hardest part was that Shopify required themes to support IE8, and up until late 2020, IE11. That meant there was a lot of modern JavaScript we couldn’t utilize without sometimes sizable polyfills.

Eight years later, in 2021, themes are infinitely more complex because Shopify has released a ton of new features (to go along with our in-house ideas at Archetype Themes). The problem is that building new performant features will only go so far when some of your codebase is so old that it has old IE polyfills or IE10 CSS hacks. Our themes had pretty good speed scores for how much they offered, but they were undoubtedly bloated.

Our Goal Was Simple

Better performance across the board. Faster time to first paint. Less blocking JS. Less code complexity.

Getting there was the hard part. It included:

  • Remove jQuery and rewrite ~6k lines of JS per theme in Vanilla JS
  • Remove Handlebars.js, as our templating needs were way too small for such a large package
  • Standardize code shared between themes (remove duplication)

Moving away from jQuery was a blessing, but a long process. Thankfully, Tobias Ahlin has a fantastic guide on some of the quick conversions away from jQuery. While going through these changes, it was the perfect time to rethink some more basic issues like how my JS was structured and how elements were initialized.
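For a flavour of those conversions, here are a few common swaps. This is a hedged sketch; the DOM lines are shown as comments since they only run in a browser.

```javascript
// jQuery: $('.mole').addClass('is-up')
// Vanilla: document.querySelector('.mole').classList.add('is-up')

// jQuery: $.extend({}, defaults, options)
// Vanilla equivalent:
const defaults = { speed: 2, delay: 0 }
const options = { delay: 1 }
const settings = Object.assign({}, defaults, options)
// settings is { speed: 2, delay: 1 }

// jQuery: $.each(list, function (i, n) { ... })
// Vanilla equivalent:
const doubled = []
;[1, 2, 3].forEach((n) => doubled.push(n * 2))
// doubled is [2, 4, 6]
```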

Remove jQuery

Writing Vanilla JS always seemed like a pipe dream. We had to support old IE, so it was just so easy to ignore any attempt at removing it. Then IE 11 support was dropped by Shopify and the clouds parted — it was our time.

Why remove jQuery anyway? I’ve heard lots of arguments about this, such as its package size isn’t that bad compared to a framework like React. Well, jQuery isn’t a framework like React so it’s a bit of a non-starter comparison. jQuery is a way of using CSS-like selectors and developer-friendly syntax for things like animations and Ajax requests. Most of all, it helped with cross-browser differences so developers didn’t have to think about it.

We wanted to remove it for a few reasons:

I’m one of those developers who were stuck in the past. I knew jQuery inside and out and could make it pull off nearly anything I tried. Was it perfect? No, of course not. But when you look at the lifecycle of some JS frameworks that flamed out, jQuery has always been steady and that was familiar and safe to me. Removing our reliance on it and untangling it from ~6k lines of code (for each theme) felt insurmountable — especially when I couldn’t know for sure my performance scores would benefit or by how much.

Our approach was to comment out each module we had, remove jQuery, and slowly add in each module or function one at a time while it was rewritten. We started with the simplest file, one with a few functions and a few selectors. Nice and easy, no errors in dev tools, time to move on.

We did this one by one, remembering the easy fixes from the early files when we got to the complex ones like refactoring all of the potential features associated with a product and its add-to-cart form (I counted, it’s 24 unique things). In the end, we got the product JS from 1600 lines of code to 1000. Along the way, we found better ways to do some things and would go back and refactor as needed.

We realized Vanilla JS isn’t scary, it’s just a bit more of an intentional way of writing code than jQuery. We also realized some ancient code was a mess — we needed to organize the JS to be more modular and remove duplicate code (more on that below). But before that, we wanted to play with some of the fun JS we’d only used in other projects.

Intersection Observer API

Shopify themes are powerful in that they let merchants move elements around the page however they want. That means, as the developer, you don’t know where the element is, whether it exists, or how many exist.

To initialize these elements, we had been using scroll events that continuously checked if an element was visible on the page with this function:

theme.isElementVisible = function($el, threshold) {
  var rect = $el[0].getBoundingClientRect();
  var windowHeight = window.innerHeight || document.documentElement.clientHeight;
  threshold = threshold ? threshold : 0;

  // If offsetParent is null, it means the element is entirely hidden
  if ($el[0].offsetParent === null) {
    return false;
  }

  return (
    rect.bottom >= (0 - (threshold / 1.5)) &&
    rect.right >= 0 &&
 <= (windowHeight + threshold) &&
    rect.left <= (window.innerWidth || document.documentElement.clientWidth)
  );
};

Even though these scroll events were throttled, there was a lot of math being done by the browser all the time. It never really felt too sluggish, but it did take up a spot in the call stack which impacted other JS competing for priority. I wish we had done more performance research on this update specifically because I think it’s responsible for many of the improvements in Time to interactive and Total blocking time that you’ll see below.

In comes the Intersection Observer API. Now that IE11 support wasn’t required, I was so happy to be able to fully utilize this. In short, it’s an asynchronous way of knowing when an element is visible in the window. No more sluggish measurements and scroll events.

To initialize an element when it’s visible, we use something as simple as this:

theme.initWhenVisible({
  element: document.querySelector('div'),
  callback: myCallback
});

All of the JS required for the element will be handled inside myCallback, preventing it from doing anything until it’s visible.

This sets up an observer for that element, and then removes the observer once it’s visible. It’s always good to clean up after yourself even if you think there might not be much impact without it. If there’s a callback, we run it and our module is ready to go.

theme.initWhenVisible = function(options) {
  var threshold = options.threshold ? options.threshold : 0;

  var observer = new IntersectionObserver((entries, observer) => {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        if (typeof options.callback === 'function') {
          options.callback();
          observer.unobserve(entry.target);
        }
      }
    });
  }, {rootMargin: '0px 0px ' + threshold + 'px 0px'});

  observer.observe(options.element);
};

You can pass a threshold to initialize the element before it’s on the screen too, which can be handy if you want to preload something like Google’s Map API slightly before the element is visible so it’s ready when it is.

Lazy-Loading Images And object-fit

We use lazysizes for lazy-loading our images. It has some helpful plugins for also loading background images, but requires a lot more markup on your element. While the plugins are quite small, it’s one more thing that’s easily removed with pure CSS.

Using object-fit in CSS meant that we could position an image just like a background image, but as an <img> element and get all the benefits of normal lazy-loading without extra JS. The real benefit in this is we’re one step closer to using native browser lazy-loading (which doesn’t support background images). We’ll still have to load in lazysizes as a fallback when the native approach isn’t supported, but it means removing an entire dependency.
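A minimal sketch of the CSS involved (the class name is made up for illustration): the image fills its container just as a background image with background-size: cover would, while remaining a real <img> element that can be lazy-loaded:

```css
/* Make a real <img> behave like a cover background image */
.media__image {
  width: 100%;
  height: 100%;
  object-fit: cover;       /* crop to fill, like background-size: cover */
  object-position: center; /* like background-position: center */
}
```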

<script>
  if ('loading' in HTMLImageElement.prototype) {
    // Browser supports `loading`
  } else {
    // Fetch and initialize lazysizes
  }
</script>

MatchMedia API

In the past, we used enquire.js to know when breakpoints changed. This is used when resizing elements, changing a module’s arguments for desktop vs mobile, or simply to show/hide elements that you can’t with CSS.

Instead of relying on another package, once again we can go with a native solution in matchMedia.

var query = 'screen and (max-width:769px)';
var isSmall = matchMedia(query).matches;

matchMedia(query).addListener(function(mql) {
  if (mql.matches) {
    isSmall = true;
    document.dispatchEvent(new CustomEvent('matchSmall'));
  } else {
    isSmall = false;
    document.dispatchEvent(new CustomEvent('unmatchSmall'));
  }
});

With just a few lines of code, we can listen for breakpoint changes and change a helpful variable that’s used elsewhere and trigger a custom event that specific modules can listen for.

document.addEventListener('matchSmall', function() {
  // destroy desktop-only features
  // initialize mobile-friendly JS
});

Hunting Down Duplicate Code

As I mentioned at the beginning, we had slowly built features into our themes for years. It didn’t take long for some elements to be built out that were kind of like others, like a full-width homepage video and later videos on your product listing or a popup video modal.

YouTube’s API, for example, initialized differently three times and had nearly identical callbacks and accessibility features built out per-module. It was a bit embarrassing we didn’t build it smarter in the first place, but that’s how you know you’re growing as a developer.

We took this time to consolidate many of our modules to be standalone helpers. YouTube became its own method that all sections from all of our themes could use. It meant refactoring by breaking it down into the most basic parts:

  • Default API arguments (overridable by the initializing module)
  • A div ID to initialize the video onto
  • ID of the YouTube video to load
  • Events (API is ready, video state changed, etc)
  • Play/pause when not in view
  • Handle iOS low power mode when autoplay not supported

My approach was to do this all on paper before coding, which is something that always helps me sort out what’s integral to the module I’m building vs what’s custom by the parent that’s initializing it — a division of labor if you will.

Now our three themes that initialize YouTube videos a total of nine different ways use a single file. That’s a big code complexity win for us, and makes any future updates much easier for me and other developers that might touch the code. By using this same approach for other modules while converting to Vanilla JS, it allowed us to move nearly half of each theme’s JS to a single shared module across them all.

This is something that was invaluable to our team and our multi-project setup and might not be useful to your projects exactly, but I believe the process is. Thinking about simplicity and avoiding duplication will always benefit your project.

We did the same for slideshow modules (image slideshows, testimonials, product page images, announcement bars), drawers and modals (mobile menus, cart drawers, newsletter popups), and many more. One module has one purpose and will share back to the parent only what is required. This meant less code shipped, and cleaner code to develop with.

Performance Stats

Finally, the good stuff. Was this all worth it? Most of this was done blindly with the assumption that less JS, smarter initializing, and more modern approaches would result in faster themes. We weren’t disappointed.

We started all of this work with Motion, our first theme. It had the most bloated JS and the biggest room for improvement.

  • 52% less JS shipped
  • Desktop home page speeds (with heavy elements like multiple videos, featured products, slideshows with large images)
Desktop home page        | Before | After | Change
Lighthouse score         | 57     | 76    | +33%
Total blocking time      | 310ms  | 50ms  | -83.8%
Time to interactive      | 2.4s   | 2.0s  | -16%
Largest contentful paint | 3.8s   | 2.6s  | -31.5%
  • Mobile product pages
Mobile product page      | Before | After | Change
Lighthouse score         | 26     | 65    | +150%
Total blocking time      | 1440ms | 310ms | -78%
Time to interactive      | 11.3s  | 6.1s  | -46%
Largest contentful paint | 13s    | 4.2s  | -67.6%

Then we moved on to Impulse, our second and most feature-heavy theme.

  • 40% less JS shipped
  • 28% faster mobile home page speeds
Desktop home page        | Before | After | Change
Lighthouse score         | 58     | 81    | +39.6%
Total blocking time      | 470ms  | 290ms | -38%
Time to interactive      | 6.1s   | 5.6s  | -8%
Largest contentful paint | 6s     | 2.9s  | -51.6%
  • 30% faster mobile home page and product page speeds
Mobile product page      | Before | After | Change
Lighthouse score         | 32     | 45    | +40.6%
Total blocking time      | 1490ms | 780ms | -47.6%
Time to interactive      | 10.1s  | 8.3s  | -17.8%
Largest contentful paint | 10.4s  | 8.6s  | -17.3%

While you may notice these numbers got a lot better, they’re still not great. Shopify themes are handcuffed by the platform so our starting point is already challenging. That could be an entirely separate article, but here’s the overview:

  • Shopify has a lot of overhead: feature detection, tracking, and payment buttons (Apple Pay, Google Pay, ShopPay). If you’re on a product page with dynamic payment buttons you can be looking at about 187kb of Shopify scripts vs. 24.5kb theme files. Most sites will have Google Analytics, and maybe a Facebook Pixel or other tracking scripts loaded on top of all this.

The good news is that these scripts are loaded fairly efficiently and most don't block the page rendering much. The bad news is that there's still a lot of JavaScript loading on those pages that is out of the theme's control and causes some flags in Lighthouse scores.

  • Apps are a huge bottleneck and store owners, generally, have no idea. We routinely see shops with 20+ apps installed, and even a simple app can drop your Shopify speed score by 10+ points. Here’s the breakdown of our Impulse theme with three apps installed.

Here’s a great case study on apps and their effect on performance.

We’re still in the process of finishing these updates to our third theme, Streamline. Streamline also has some other performance features built in that we’re exploring adding to our other themes, such as loadCSS by Filament Group to prevent the CSS from being a render-blocking resource.

These numbers aren’t insignificant. It’s widely reported that speed matters and even small changes can make big impacts. So while we are happy with all of this progress, it’s not the end. Performance will continue to be a dominant part of our builds and we won’t stop looking for more ways to simplify code.

What’s Next?

Performance is an ongoing challenge, one we’re excited to keep pushing on. A few things on our list are:

Resources For Shopify Developers

If you’re building on Shopify, or want to get started, here are some helpful resources for you:

Categories: Design

Reducing HTML Payload With Next.js (Case Study)

Wed, 05/05/2021 - 03:30

I know what you are thinking. Here’s another article about reducing JavaScript dependencies and the bundle size sent to the client. But this one is a bit different, I promise.

This article is about a couple of challenges that Bookaway faced and how we (as a company in the traveling industry) managed to optimize our pages so that the HTML we send is smaller. Smaller HTML means less time for Google to download and process those long strings of text.

Usually, HTML code size is not a big issue, especially for small pages, pages that are not data-intensive, or pages that are not SEO-oriented. However, our case was different: our database stores lots of data, and we need to serve thousands of landing pages at scale.

You may be wondering why we need such a scale. Well, Bookaway works with 1,500 operators and provides over 20k services in 63 countries, with 200% growth year over year (pre Covid-19). In 2019, we sold 500k tickets, so our operations are complex and we need to showcase them with our landing pages in an appealing and fast manner, both for Google bots (SEO) and for actual clients.

In this article, I’ll explain:

  • how we found the HTML size is too big;
  • how it got reduced;
  • the benefits of this process (i.e. creating improved architecture, improving code organization, providing a straightforward job for Google to index tens of thousands of landing pages, and serving much fewer bytes to the client, which is especially suitable for people with slow connections).

But first, let’s talk about the importance of speed improvement.

Why Is Speed Improvement Necessary To Our SEO Efforts?

Meet “Web Vitals”, but in particular, meet LCP (Largest Contentful Paint):

“Largest Contentful Paint (LCP) is an important, user-centric metric for measuring perceived load speed because it marks the point in the page load timeline when the page’s main content has likely loaded — a fast LCP helps reassure the user that the page is useful.”

The main goal is to have as small an LCP as possible. Part of achieving a small LCP is to let the user download as small an HTML payload as possible. That way, the user can start painting the largest contentful paint ASAP.

While LCP is a user-centric metric, reducing it should be a big help to Google bots, as Google states:

“The web is a nearly infinite space, exceeding Google’s ability to explore and index every available URL. As a result, there are limits to how much time Googlebot can spend crawling any single site. Google’s amount of time and resources to crawling a site is commonly called the site’s crawl budget.”

— “Advanced SEO,” Google Search Central Documentation

One of the best technical ways to improve the crawl budget is to help Google do more in less time:

Q: “Does site speed affect my crawl budget? How about errors?”

A: “Making a site faster improves the users' experience while also increasing the crawl rate. For Googlebot, a speedy site is a sign of healthy servers so that it can get more content over the same number of connections.”

To sum it up, Google bots and Bookaway clients have the same goal — they both want to get content delivered fast. Since our database contains a large amount of data for every page, we need to aggregate it efficiently and send something small and thin to the clients.

Investigations for ways we can improve led to finding that there is a big JSON embedded in our HTML, making the HTML chunky. For that case, we’ll need to understand React Hydration.

React Hydration: Why There Is A JSON In HTML

That happens because of how Server-side rendering works in react and Next.js:

  1. When the request arrives at the server, it needs to make an HTML page based on a data collection. That collection of data is the object returned by getServerSideProps.
  2. React gets the data. Now it kicks into play on the server. It builds the HTML and sends it.
  3. When the client receives the HTML, it is immediately painted on the screen. In the meanwhile, the React JavaScript is being downloaded and executed.
  4. When the JavaScript execution is done, React kicks into play again, now on the client. It builds the HTML again and attaches event listeners. This action is called hydration.
  5. As React builds the HTML again for the hydration process, it requires the same data collection that was used on the server (look back at 1.).
  6. This data collection is made available by inserting the JSON inside a script tag with id __NEXT_DATA__.
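To make the duplication concrete, here is a toy sketch (not Bookaway's or Next.js's actual code; the page shape and field names are invented for illustration) of how a server-rendered response carries the props twice: once as rendered markup, and once more as JSON so the client can hydrate:

```javascript
// Toy model of an SSR response with an embedded hydration payload
function renderPage(props) {
  // 1. The markup "React" renders on the server from the props
  const markup = `<h1>${props.title}</h1><p>${props.routes.length} routes</p>`;

  // 2. The same props, serialized again for the client-side hydration pass
  const nextData = JSON.stringify({ props });

  return `<html><body>${markup}` +
    `<script id="__NEXT_DATA__" type="application/json">${nextData}</script>` +
    `</body></html>`;
}

const props = {
  title: 'Bangkok to Pattaya',
  routes: new Array(500).fill({ supplier: 'ACME', price: 20 }),
};
const html = renderPage(props);

// The embedded JSON dwarfs the visible markup
console.log(html.length);
```

In a real Next.js app the framework generates this script tag automatically from whatever getServerSideProps returns, which is why trimming that object is the main lever for shrinking the HTML.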
What Pages Are We Talking About Exactly?

As we need to promote our offerings in search engines, the need for landing pages has arisen. People usually don’t search for a specific bus line’s name, but more like, “How to get from Bangkok to Pattaya?” So far, we have created four types of landing pages that should answer such queries:

  1. City A to City B
    All the lines stretched from a station in City A to a station in City B. (e.g. Bangkok to Pattaya)
  2. City
    All lines that go through a specific city. (e.g. Cancun)
  3. Country
    All lines that go through a specific country. (e.g. Italy)
  4. Station
    All lines that go through a specific station. (e.g. Hanoi-airport)
Now, A Look At Architecture

Let’s take a high-level and very simplified look at the infrastructure powering the landing pages we are talking about. The interesting parts are steps 4 and 5; that’s where the wasteful parts lie:

Key Takeaways From The Process
  1. The request hits the getInitialProps function. This function runs on the server, and its responsibility is to fetch the data required for the construction of a page.
  2. The raw data returned from the REST servers is passed as is to React.
  3. First, it runs on the server. Since the non-aggregated data was transferred to React, React is also responsible for aggregating the data into something that can be used by the UI components (more about that in the following sections).
  4. The HTML is sent to the client, together with the raw data. Then React kicks into play again on the client and does the same job, because hydration is needed (more about that in the following sections). So React does the data-aggregation job twice.
The Problem

Analyzing our page-creation process led us to the finding of a big JSON embedded inside the HTML. Exactly how big is difficult to say, because each page is slightly different: each station or city has to aggregate a different data set. However, it is safe to say that on popular pages the JSON was hanging around 200-300 kb, sometimes as big as 250 kb on a single page. That is big. It was later reduced to sizes around 5kb-15kb, a considerable reduction.

The big JSON is embedded inside a script tag with an id of __NEXT_DATA__:

<script id="__NEXT_DATA__" type="application/json">
  // Huge JSON here.
</script>

If you want to easily copy this JSON into your clipboard, try this snippet in your Next.js page:


A question arises.

Why Is It So Big? What’s In There?

A great tool, JSON Size Analyzer, can process a JSON file and show where the bulk of its size resides.

These were our initial findings while examining a station page:

There are two issues with the analysis:

  1. Data is not aggregated.
    Our HTML contains the complete list of granular products. We don’t need them for painting on-screen purposes, but we do need them for aggregation methods. For example, we are fetching a list of all the lines passing through this station. Each line has a supplier, but we need to reduce the list of lines into an array of 2 suppliers. That’s it. We’ll see an example later.
  2. Unnecessary fields.
    When drilling down into each object, we saw some fields we don’t need at all, neither for aggregation purposes nor for painting methods. That’s because we fetch the data from a REST API, and we can’t control what data we fetch.
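The two fixes can be sketched in isolation. In this hedged example (the field names are invented for illustration), the server reduces raw line objects to the unique suppliers and strips fields the UI never paints, before anything is handed to React:

```javascript
// Raw objects as a REST API might return them (illustrative shape)
const lines = [
  { id: 1, supplier: 'GreenBus', internalRef: 'x91', legacyField: null, price: 12 },
  { id: 2, supplier: 'GreenBus', internalRef: 'x92', legacyField: null, price: 14 },
  { id: 3, supplier: 'SeaFerry', internalRef: 'y17', legacyField: null, price: 30 },
];

// Aggregate: the page only needs the unique suppliers, not every line
const suppliers = [...new Set(lines.map(line => line.supplier))];

// Strip: keep only the fields the UI actually paints
const slimLines = lines.map(({ id, supplier, price }) => ({ id, supplier, price }));

console.log(suppliers); // ['GreenBus', 'SeaFerry']
console.log(JSON.stringify(slimLines).length < JSON.stringify(lines).length); // true
```

Doing this once on the server means the JSON embedded for hydration carries only the aggregated, slimmed-down shape.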

Those two issues showed that the pages need architecture change. But wait. Why do we need a data JSON embedded in our HTML in the first place?


A Guide To Undoing Mistakes With Git

Tue, 05/04/2021 - 07:30

Working with code is a risky endeavour: There are countless ways to shoot yourself in the foot! But if you use Git as your version control system, then you have an excellent safety net. A lot of “undo” tools will help you recover from almost any type of disaster.

In this first article of our two-part series, we will look at various mistakes — and how to safely undo them with Git!

Discard Uncommitted Changes in a File

Suppose you’ve made some changes to a file, and after some time you notice that your efforts aren’t leading anywhere. It would be best to start over and undo your changes to this file.

The good news is that if you haven’t committed the modifications, undoing them is pretty easy. But there’s also a bit of bad news: You cannot bring back the modifications once you’ve undone them! Because they haven’t been saved to Git’s “database”, there’s no way to restore them!

With this little warning out of the way, let’s undo our changes in index.html:

$ git restore index.html

This command will restore our file to its last committed state, wiping it clean of any local changes.
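If you’d like to try this without risking real work, here is a self-contained sketch (assuming git is installed) that sets up a throwaway repository, makes an uncommitted change, and discards it:

```shell
# Set up a throwaway repository with one committed file
cd "$(mktemp -d)"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "original content" > index.html
git add index.html
git commit -qm "Add index.html"

# Make an uncommitted change, then discard it
echo "experimental change" > index.html
git restore index.html

cat index.html   # back to "original content"
```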

Restore a Deleted File

Let’s take the previous example one step further. Let’s say that, rather than modifying index.html, you’ve deleted it entirely. Again, let’s suppose you haven’t committed this to the repository yet.

You’ll be pleased to hear that git restore is equipped to handle this situation just as easily:

$ git restore index.html

The restore command doesn’t really care what exactly you did to that poor file. It simply recreates its last committed state!

Discard Some of Your Changes

Most days are a mixture of good and bad work. And sometimes we have both in a single file: Some of your modifications will be great (let’s be generous and call them genius), while others are fit for the garbage bin.

Git allows you to work with changes in a very granular way. Using git restore with the -p flag makes this whole undoing business much more nuanced:

$ git restore -p index.html

Git takes us by the hand and walks us through every chunk of changes in the file, asking whether we want to throw it away (in which case, we would type y) or keep it (typing n):

Using the --amend option allows you to change this very last commit (and only this one):

$ git commit --amend -m "A message without typos"

In case you’ve also forgotten to add a certain change, you can easily do so. Simply stage it like any other change with the git add command, and then run git commit --amend again:

$ git add forgotten-change.txt
$ git commit --amend --no-edit

The --no-edit option tells Git that we don’t want to change the commit’s message this time.
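Putting both amend tricks together, a self-contained sketch (file names invented for the demo) in a throwaway repository looks like this:

```shell
# Throwaway repository with a commit whose message has a typo
cd "$(mktemp -d)"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "hello" > app.txt
git add app.txt
git commit -qm "A mesage with typos"

# Fix the message, then sneak in a forgotten file without editing the message again
git commit --amend -m "A message without typos"
echo "forgotten" > forgotten-change.txt
git add forgotten-change.txt
git commit --amend --no-edit

git log --oneline   # still a single commit
```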

Revert the Effects of a Bad Commit

In all of the above cases, we were pretty quick to recognize our mistakes. But often, we only learn of a mistake long after we’ve made it. The bad commit sits in our revision history, peering snarkily at us.

Of course, there’s a solution to this problem, too: the git revert command! And it solves our issue in a very non-destructive way. Instead of ripping our bad commit out of the history, it creates a new commit that contains the opposite changes.

Performing that on the command line is as simple as providing the revision hash of that bad commit to the git revert command:

$ git revert 2b504bee

As mentioned, this will not delete our bad commit (which could be problematic if we have already shared it with colleagues in a remote repository). Instead, a new commit containing the reverted changes will be automatically created.
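Here is a self-contained sketch of that behaviour (using HEAD in place of a concrete hash, since hashes differ per repository):

```shell
# Throwaway repository with a good commit followed by a bad one
cd "$(mktemp -d)"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "good" > file.txt
git add file.txt && git commit -qm "Good commit"
echo "bad" > file.txt
git commit -qam "Bad commit"

# Undo the bad commit non-destructively: history gains a third commit
git revert --no-edit HEAD

cat file.txt        # "good" again
git log --oneline   # good commit, bad commit, and the revert
```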

Restore a Previous State of the Project

Sometimes, we have to admit that we’ve coded ourselves into a dead end. Perhaps our last couple of commits have yielded no fruit and are better off undone.

Luckily, this problem is pretty easy to solve. We simply need to provide the SHA-1 hash of the revision that we want to return to when we use the git reset command. Any commits that come after this revision will then disappear:

$ git reset --hard 2b504bee

The --hard option makes sure that we are left with a clean working copy. Alternatively, we can use the --mixed option for a bit more flexibility (and safety): --mixed will preserve the changes that were contained in the deleted commits as local changes in our working copy.
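A quick throwaway-repository sketch shows the effect of --hard (here addressed relatively via HEAD~2 rather than a SHA-1 hash):

```shell
# Throwaway repository with three commits on one file
cd "$(mktemp -d)"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "v1" > notes.txt
git add notes.txt && git commit -qm "First"
echo "v2" > notes.txt && git commit -qam "Second"
echo "v3" > notes.txt && git commit -qam "Third"

# Throw away the last two commits and their changes entirely
git reset --hard HEAD~2

cat notes.txt       # "v1"
git log --oneline   # only "First" remains
```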

The first thing to know about the reflog is that it’s ordered chronologically. Therefore, it should come as no surprise to see our recent git reset mistake at the very top. If we now want to undo this, we can simply return to the state before it, which is also recorded here, right below!

We can now copy the commit hash of this safe state and create a new branch based on it:

$ git branch happy-ending e5b19e4

Of course, we could have also used git reset e5b19e4 to return to this state. Personally, however, I prefer to create a new branch: It comes with no downsides and allows me to inspect whether this state is really what I want.

Restore a Single File From a Previous State

Until now, when we’ve worked with committed states, we’ve always worked with the complete project. But what if we want to restore a single file, not the whole project? For example, let’s say we’ve deleted a file, only to find out much later that we shouldn’t have. To get us out of this misery, we’ll have to solve two problems:

  1. find the commit where we deleted the file,
  2. then (and only then) restore it.

Let’s go search the commit history for our poor lost file:

$ git log -- <filename>

The output of this lists all commits where this file has been modified. And because log output is sorted chronologically, we shouldn’t have to search for long — the commit in which we deleted the file will likely be topmost (because after deleting it, the file probably wouldn’t show up in newer commits anymore).

With that commit’s hash and the name of our file, we have everything we need to bring it back from the dead:

$ git checkout <deletion commit hash>~1 -- <filename>

Note that we’re using ~1 to address the commit before the one where we made the deletion. This is necessary because the commit where the deletion happened doesn’t contain the file anymore, so we can’t use it to restore the file.
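Both steps can be combined into a small self-contained sketch (the file name is invented for the demo):

```shell
# Throwaway repository where a file is committed and then deleted
cd "$(mktemp -d)"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "precious data" > report.txt
git add report.txt && git commit -qm "Add report"
git rm -q report.txt
git commit -qm "Delete report"

# Find the deletion commit, then restore the file from its parent (~1)
hash=$(git log --format=%H -1 -- report.txt)
git checkout "$hash"~1 -- report.txt

cat report.txt   # "precious data" is back
```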

You Are Now (Almost) Invincible

During the course of this article, we’ve witnessed many disasters — but we’ve seen that virtually nothing is beyond repair in Git! Once you know the right commands, you can always find a way to save your neck.

But to really become invincible (in Git, that is), you’ll have to wait for the second part of this series. We will look at some more hairy problems, such as how to recover deleted branches, how to move commits between branches, and how to combine multiple commits into one!

In the meantime, if you want to learn more about undoing mistakes with Git, I recommend the free “First Aid Kit for Git”, a series of short videos about this very topic.

See you soon in part two of this series! Subscribe to the Smashing Newsletter to not miss that one. ;-)


Smashing Podcast Episode 36 With Miriam Suzanne: What Is The Future Of CSS?

Mon, 05/03/2021 - 22:00

In this episode, we’re starting our new season of the Smashing Podcast with a look at the future of CSS. What new specs will be landing in browsers soon? Drew McLellan talks to expert Miriam Suzanne to find out.

Transcript

Drew McLellan: She’s an artist, activist, teacher and web developer. She’s a co-founder of OddBird, a provider of custom web applications, developer tools, and training. She’s also an invited expert to the CSS Working Group and a regular public speaker and author sharing her expertise with audiences around the world. We know she knows CSS both backwards and forwards, but did you know she once won an egg and spoon race by taking advantage of a loophole involving macaroni? My smashing friends, please welcome Miriam Suzanne. Hi, Miriam. How are you?

Miriam Suzanne: I’m smashing, thank you.

Drew: That’s good to hear. I wanted to talk to you today about some of the exciting new stuff that’s coming our way in CSS. It feels like there’s been a bit of an acceleration over the last five years of new features making their way into CSS and a much more open and collaborative approach from the W3C with some real independent specialists like yourself, Rachel Andrew, Lea Verou and others contributing to the working group as invited experts. Does it feel like CSS is moving forward rapidly or does it still feel horribly slow from the inside?

Miriam: Oh, it’s both, I think. It is moving quite fast and quite fast is still sometimes very slow because there’s just so many considerations. It’s hard to really land something everywhere very quickly.

Drew: It must feel like there’s an awful lot of work happening on all sorts of different things and each of them edging forward very, very slowly, but when you look at the cumulative effect, there’s quite a lot going on.

Miriam: Yeah, exactly, and I feel like I don’t know what kicked off that change several years ago, whether it was grid and flexbox really kicked up interest in what CSS could be, I think, and there’s just been so much happening. But it’s interesting watching all the discussions and watching the specs. They all refer to each other. CSS is very tied together. You can’t add one feature without impacting every other feature and so all of these conversations have to keep in mind all of the other conversations that are happening. It’s really a web to try to understand how everything impacts everything else.

Drew: It feels like the working group very much always looking at what current practice is and seeing what holes people are trying to patch, what problems they’re trying to fix, often with JavaScript, and making a big messy ball of JavaScript. Is that something that’s a conscious effort or does it just naturally occur?

Miriam: I would say it’s very conscious. There’s also a conscious attempt to then step back from the ideas and say, "Okay, this is how we’ve solved them in JavaScript or using hacks, workarounds, whatever." We could just pave that cow path, but maybe there’s a better way to solve it once it’s native to CSS and so you see changes to things like variables. When they move from preprocessors like Sass and Less to CSS, they become something new. And that’s not always the case, sometimes the transition is pretty seamless, it’s more just take what’s already been designed and make it native. But there’s a conscious effort to think through that and consider the implications.

Drew: Yeah, sometimes a small workaround is hiding quite a big idea that could be more useful in itself.

Miriam: And often, hiding overlapped ideas. I was just reading through a lot of the issues around grid today because I’ve been working on responsive components, things like that, and I was like, "Okay, what’s happening in the grid space with this?" And there’s so many proposals that mix and overlap in really interesting ways. It can be hard to separate them out and say, "Okay, should we solve these problems individually or do we solve them as grouped use cases? How exactly should that be approached?"

Drew: I guess that can be, from the outside, that might seem like a frustrating lack of progress when you say, "Why can’t this feature be implemented?" It’s because when you look at that feature, it explodes into something much bigger that’s much harder to solve.

Miriam: Exactly.

Drew: Hopefully, solving the bigger problem makes all sorts of other things possible. I spent a lot of my career in a position where we were just sort of clamoring for something, anything, new to be added to CSS. I’m sure that’s familiar to you as well. It now seems like it’s almost hard to keep track of everything that’s new because there’s new things coming out all the time. Do you have any advice for working front-enders of how they can keep track of all the new arrivals in CSS? Are there good resources or things they should be paying attention to?

Miriam: Yeah, there are great resources if you really want a curated, a sense of what you should be watching. But that’s Smashing Magazine, CSS-Tricks, all of the common blogs and then various people on Twitter. Browser implementers as well as people on the working group as well as people that write articles. Stephanie Eckles comes to mind, ModernCSS. There’s a lot of resources like that. I would also say, if you keep an eye on the release notes from different browsers, they don’t come out that often, it’s not going to spam your inbox every day. You’ll often see a section in the release notes on what have they released related to CSS. And usually in terms of features, it’s just one or two things. You’re not going to become totally overwhelmed by all of the new things landing. They’ll come out six weeks to a couple of months and you can just keep an eye on what’s landing in the browsers.

Drew: Interesting point. I hadn’t thought of looking at browser release notes to find this stuff. Personally, I make efforts to follow people on Twitter who I know would share things, but I find I just miss things on Twitter all the time. There’s lots of cool stuff that I never get to see.

Drew: In that spirit, before we look too far into the future into what’s under development at the moment, there are quite a few bits of CSS that have already landed in browsers that might be new to people and they might be pretty usable under a lot of circumstances. There are certainly things that I’ve been unaware of.

Drew: One area that comes to mind is selectors. There’s this "is" pseudo-class function, for example. Is that like a jQuery "is" selector, if you remember those? I can barely remember those.

Miriam: I didn’t use jQuery enough to say.

Drew: No. Now even saying that, it’s so dusty in my mind, I’m not even sure that was a thing.

Miriam: Yeah, "is" and "where", it’s useful to think of them together, both of those selectors. "Is" sort of landed in most browsers a little bit before "where", but at this point I think both are pretty well-supported in modern browsers. They let you list a number of selectors inside of a single pseudo-class selector. So you say, ":is" or ":where" and then in parentheses, you can put any selectors you want and it matches an element that also matches the selectors inside. One example is, you can say, "I want to style all the links inside of any heading." So you can say "is", H1, H2, H3, H4, H5, H6, put a list inside of "is", and then, after that list say "A" once. And you don’t have to repeat every combination that you’re generating there. It’s sort of a shorthand for bringing nesting into CSS. You can create these nested "like" selectors. But they also do some interesting things around specificity... Sorry, what were you going to say?

Drew: I guess it’s just useful in making your style sheet more readable and easy to maintain if you’re not having to longhand write out every single combination of things.

Miriam: Right. The other interesting thing you can do with it is you can start to combine selectors. So you can say, "I’m only targeting something that matches both the selectors outside of "is" and the selectors inside of "is"". It has to match all of these things." So you can match several selectors at once, which is interesting.
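(A sketch of that combined matching, with hypothetical class names:)

```css
/* Matches only an element that is .card AND is also
   either .featured or .promoted: */
.card:is(.featured, .promoted) {
  border: 2px solid currentColor;
}
```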

Drew: Where does "where" come into it if that’s what "is" does?

Miriam: Right. "Where" comes into it because of the way that they handle specificity. With "is", the entire selector gets the specificity of whatever is the highest-specificity selector inside of "is." "Is" can only have one specificity and it’s going to be the highest of any selector inside. If you put an "id" inside it, it’s going to have the specificity of an "id." Even if you have an "id" and a class, two selectors, inside "is", it’s going to have the specificity of the "id."

Miriam: That defaults to a higher specificity. "Where" defaults to a zero specificity, which I think is really interesting, especially for defaults. I want to style an audio element where it has controls, but I don’t want to add specificity there, I just want to say where it’s called for controls, where it has the controls attribute, add this styling to audio. So a zero-specificity option. Otherwise, they work the same way.
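(Side by side, the specificity difference looks something like this; `#main` and `.sidebar` are hypothetical names:)

```css
/* :is() takes the highest specificity inside: because of #main,
   this whole selector has id-level specificity. */
:is(#main, .sidebar) a {
  color: blue;
}

/* :where() contributes zero specificity: this rule is as easy
   to override as a bare `audio` selector. */
audio:where([controls]) {
  width: 100%;
}
```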

Drew: Okay. So that means with a zero specificity, it means that, then, assuming that somebody tries to style those controls in the example, they’re not having to battle against the styles that have already been set.

Miriam: That’s right, yeah. There’s another interesting thing inside of both of those where they’re supposed to be resilient. Right now, if you write a selector list and a browser doesn’t understand something in that selector list, it’s going to ignore all of the selectors in the list. But if you do that inside of "is" or "where", if an unknown selector is used in a list inside of "is" or "where", it should be resilient and the other selectors should still be able to match.

Drew: Okay, so this is that great property of CSS, that if it doesn’t understand something, it just skips over it.

Miriam: Right.

Drew: And so, you’re saying that if there’s something that it doesn’t understand in the list, skip over the thing it doesn’t understand, but don’t throw the baby out with the bathwater, keep all the others and apply them.

Miriam: Exactly.
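(That resilience can be sketched like this, where `:unknown-thing` stands in for any selector the browser doesn’t recognize:)

```css
/* In a plain selector list, one unknown selector invalidates
   the whole rule — .valid gets no styling: */
.valid, :unknown-thing {
  color: red;
}

/* Inside :is(), the unknown selector is dropped but .valid
   still matches: */
:is(.valid, :unknown-thing) {
  color: red;
}
```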

Drew: That’s fascinating. And the fact that we have "is" and "where" strikes me as one of those examples of something that sounds like an easy problem. "Oh, let’s have an "is" selector." And then somebody says, "But what about specificity?"

Miriam: Right, exactly.

Drew: How are we going to work that out?

Miriam: Yeah. The other interesting thing is that it comes out of requests for nesting. People wanted nested selectors similar to what Sass has and "is" and "where" are, in some ways, a half step towards that. They will make the nested selectors easier to implement since we already have a way to, what they call "de-sugar" them. We can de-sugar them to this basic selector.

Drew: What seems to me like the dustiest corners of HTML and CSS are list items and the markers that they have, the bullets or what have you. I can remember, probably back in FrontPage in the late ’90s, trying to style, usually with proprietary Microsoft properties, for Internet Explorer back in the day. But there’s some good news on the horizon for lovers of markers, isn’t there?

Miriam: Yeah, there’s a marker selector that’s really great. We no longer have to remove the markers by saying... How did we remove markers? I don’t even remember. Changing the list style to none.

Drew: List style, none. Yup.

Miriam: And then people would re-add the markers using "before" pseudo-element. And we don’t have to do that anymore. With the marker pseudo-element, we can style it directly. That styling is a little bit limited, particularly right now, it’s going to be expanding out some, but yeah, it’s a really nice feature. You can very quickly change the size, the font, the colors, things like that.
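(A minimal sketch of styling markers directly; as Miriam notes, the set of allowed properties is limited, and support for `content` in `::marker` may vary:)

```css
/* Style list markers directly, instead of list-style: none
   plus a ::before workaround: */
li::marker {
  color: rebeccapurple;
  font-size: 1.25em;
  content: "→ "; /* generated content — check browser support */
}
```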

Drew: Can you use generated content in there as well?

Miriam: Yes. I don’t remember how broad support is for the generated content, but you should be able to.

Drew: That’s good news for fans of lists, I guess. There’s some new selectors. This is something that I came across recently in a real-world project and I started using one of these before I realized actually it wasn’t as well supported as I thought, because it’s that new. And that’s selectors to help when "focus" is applied to elements. I think I was using "focus within" and there’s another one, isn’t there? There’s-

Miriam: "Focus visible."

Drew: What do they do?

Miriam: Browsers, when they’re handling "focus", they make some decisions for you based on whether you’re clicking with a mouse or whether you’re using a keyboard to navigate. Sometimes they show "focus" and sometimes they don’t, by default. "Focus visible" is a way for us to tie into that logic and say, "When the browser thinks focus should be visible, not just when an item has focus, but when an item has focus and the browser thinks focus needs to be visible, then apply these styles." That’s useful for having outline rings on focus, but not having them appear when they’re not needed, when you’re using a mouse and you don’t really need to know. You’ve clicked on something, you know that you’ve focused it, you don’t need the styling there. "Focus visible" is really useful for that. "Focus within" allows you to say, "Style the entire form when one of its elements has focus," which is very cool and very powerful.
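(Both behaviors Miriam describes can be sketched like this:)

```css
/* Show a focus ring only when the browser decides focus should
   be visible — typically keyboard navigation, not mouse clicks: */
button:focus-visible {
  outline: 2px solid currentColor;
  outline-offset: 2px;
}

/* Style the whole form while any element inside it has focus: */
form:focus-within {
  box-shadow: 0 0 0 4px rgba(0, 0, 255, 0.2);
}
```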

Drew: I think I was using it on a dropdown menu navigation which is-

Miriam: Oh, sure.

Drew: ... a focus minefield, isn’t it?

Miriam: Mm-hmm (affirmative).

Drew: And "focus within" was proven very useful there until I didn’t have it and ended up writing a whole load of JavaScript to recreate what I’d achieved very simply with CSS before it.

Miriam: Yeah, the danger always with new selectors is how to handle the fallback.

Drew: One thing I’m really excited about is this new concept in CSS of aspect ratio. Are we going to be able to say goodbye to the 56% top padding hack?

Miriam: Oh, absolutely. I’m so excited to never use that hack again. I think that’s landing in browsers. I think it’s already available in some and should be coming to others soon. There seems to be a lot of excitement around that.

Drew: Definitely, it’s the classic problem, isn’t it, of having a video or something like that. You want to show it in like a 16 by 9 ratio, but you want to set the dimensions on it. But maybe it’s a 4 by 3 video and you have to figure out how to do it and get it to scale with the right-

Miriam: Right, and you want it to be responsive, you want it to fill a whole width, but then maintain its ratio. Yeah, the hacks for that aren’t great. I use one often that’s create a grid, position generated content with a padding top hack, and then absolute position the video itself. It’s just a lot to get it to work the way you want.
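(The contrast between the old hack and the new property looks roughly like this; class names are hypothetical:)

```css
/* The old padding-top hack for a responsive 16:9 box: */
.video-wrapper {
  position: relative;
  padding-top: 56.25%; /* 9 / 16 = 0.5625 */
}
.video-wrapper iframe {
  position: absolute;
  inset: 0;
  width: 100%;
  height: 100%;
}

/* With aspect-ratio, the hack goes away: */
.video {
  width: 100%;
  aspect-ratio: 16 / 9;
}
```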

Drew: And presumably, that’s going to be much more performant for the layout engines to be able to deal with and-

Miriam: Right. And right away, it’s actually a reason to put width and height values back on to replaced elements like images, in particular, so that even before CSS loads, the browser can figure out what is the right ratio, the intrinsic ratio, even before the image loads and use that in the CSS. We used to strip all that out because we wanted percentages instead and now it’s good to put it back in.

Drew: Yes, I was going to say that when responsive web design came along, we stripped all those out. But I think we lost something in the process, didn’t we, of giving the browser that important bit of information about how much space to reserve?

Miriam: Yeah, and it ties in to what Jen Simmons has been talking about lately with intrinsic web design. The idea with responsive design was basically that we strip out any intrinsic sizing and we replace it with percentages. And now the tools that we have, flex and grid, are actually built to work with intrinsic sizes and it’s useful to put those all back in and we can override them still if we need to. But having those intrinsic sizes is useful and we want them.

Drew: Grid, you mentioned, I think sort of revolutionized the way we think about layout on the web. But it was always sort of tempered a little bit by the fact that we didn’t get subgrid at the same time. Remind us, if you will, what subgrid is all about and where are we now with support?

Miriam: Yeah. Grid establishes a grid parent and then all of its children layout on that grid. And subgrid allows you to nest those and say, "Okay, I want grandchildren to be part of the grandparent grid." Even if I have a DOM tree that’s quite a bit nested, I can bubble up elements into the parent grid, which is useful. But it’s particularly useful when you think about the fact that CSS in general and CSS Grid in particular does this back and forth of some parts of the layout are determined based on the available width of the container. They’re contextual, they’re outside-in. But then also, some parts of it are determined by the sizes of the children, the sizes of the contents, so we have this constant back and forth in CSS between whether the context is in control or whether the contents are in control of the layout. And often, they’re intertwined in very complex ways. What’s most interesting about subgrid is it would allow the contents of grid items to contribute back their sizing to the grandparent grid and it makes that back and forth between contents and context even more explicit.
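(A sketch of bubbling grandchildren up to the grandparent grid, assuming markup like `.grid > .item > .grandchild`:)

```css
.grid {
  display: grid;
  grid-template-columns: repeat(12, 1fr);
}

.item {
  grid-column: span 6;
  display: grid;
  /* The item's own children lay out on the grandparent's
     column tracks rather than a new, independent grid: */
  grid-template-columns: subgrid;
}
```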

Drew: Is that the similar problem that has been faced by container queries? Because you can’t really talk about the future of CSS and ask designers and developers what they want in CSS without two minutes in somebody saying, "Ah, container queries, that’s what we want." Is that a similar issue of this pushing and pulling of the two different context to figure out how much space there is?

Miriam: Yeah, they both are related to that context-content question. Subgrid doesn’t have to deal with quite the same problems. Subgrid actually works. It is actually able to pass those values both directions because you can’t change the contents based on the context. We sort of cut off that loop. And the problem with container queries has always been that there’s a potential infinite loop where if we allow the content to be styled based on its context explicitly, and you could say, "When I have less than 500 pixels available, make it 600 pixels wide." You could create this loop where then that size changes the size of the parent, that changes whether the container query applies and on and on forever. And if you’re in the Star Trek universe, the robot explodes. You get that infinite loop. The problem with container queries that we’ve had to solve is how do we cut off that loop.

Drew: Container queries is one of the CSS features that you’re one of the editors for, is that right?

Miriam: Yeah.

Drew: So the general concept is like a media query, where we’re looking at the size of a viewport, I guess, and changing CSS based on it. Container queries are to do that, but looking at the size of a containing element. So I’m a hero image on a page, how much space have I got?

Miriam: Right. Or I’m a grid item in a track. How much space do I have in this track? Yeah.
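(The syntax was still being prototyped when this conversation took place; as it later shaped up, a container query looks roughly like this, with hypothetical class names:)

```css
/* Establish a containment context on the parent: */
.card-wrapper {
  container-type: inline-size;
}

/* Style the card based on the wrapper's width, not the viewport: */
@container (min-width: 30em) {
  .card {
    display: flex;
  }
}
```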

Drew: It sounds very difficult to solve. Are we anywhere near a solution for container queries now?

Miriam: We are very near a solution now.

Drew: Hooray!

Miriam: There’s still edge cases that we haven’t resolved, but at this point, we’re prototyping to find those edge cases and see if we can solve all of them. But the prototypes we’ve played with so far surprisingly just work in the majority of cases, which has been so fun to see. But it’s a long history. It’s sort of that thing with... Like we get "is" because it’s halfway to nesting. And there’s been so much work over the last 10 years. What looks like the CSS Working Group not getting anywhere on container queries has actually been implementing all of the half steps we would need in order to get here. I came on board to help with this final push, but there’s been so much work establishing containment and all these other concepts that we’re now relying on to make container queries possible.

Drew: It’s really exciting. Is there any sort of timeline now that we might expect them to get into browsers?

Miriam: It’s hard to say exactly. Not all browsers announce their plans. Some more than others. It’s hard to say, but all of the browsers seem excited about the idea. There’s a working prototype in Chrome Canary right now that people can play with and we’re getting feedback through that to make changes. I’m working on the spec. I imagine dealing with some of the complexity in the edge cases. It will take some time for the spec to really solidify, but I think we have a fairly solid proposal overall and I hope that other browsers are going to start picking up on that soon. I know containment, as a half step, is already not implemented everywhere, but I know Igalia is working to help make sure that there’s cross-browser support of containment and that should make it easier for every browser to step up and do the container queries.

Drew: Igalia are an interesting case, aren’t they? They were involved in a lot of the implementation on Grid initially, is that right?

Miriam: Yes. I understand they were hired by Bloomberg or somebody that really wanted grids. Igalia is really interesting. They’re a company that contributes to all of the browsers.

Drew: They’re sort of an outlier, it seems. All the different parties that work on CSS, is mostly, as you’d expect, mostly browser vendors. But yes, they’re there as a sort of more independent developer, which is very interesting.

Miriam: A browser vendor vendor.

Drew: Yes. Definitely. Another thing I wanted to talk to you about is this concept that completely twisted my mind a little bit while I started to think about it. It’s this concept of cascade layers. I think a lot of developers might be familiar with the different aspects of the CSS cascade thing, specificity, source order, importance, origin. Are those the main ones? What are cascade layers? Is this another element of the cascade?

Miriam: Yeah. It is another element very much like those. I think often when we talk about the cascade, a lot of people mainly think of it as specificity. And other things get tied into that. People think of importance as a higher specificity, people think of source order as a lower specificity. That makes sense because, as authors, we spend most of our time in specificity.

Miriam: But these are separate things and importance is more directly tied to origins. This idea of where do styles come from. Do they come from authors like us or browsers, the default styles, or do they come from users? So three basic origins and those layer in different ways. And then importance is there to flip the order so that there’s some balance of control. We get to override everybody by default, but users and browsers can say, "No, this is important. I want control back." And they win.

Miriam: For us, importance acts sort of like a specificity layer because normal author styles and important author styles are right next to each other so it makes sense that we think of them that way. But I was looking at that and I was thinking specificity is this attempt to say... It’s a heuristic. That means it’s a smart guess. And the guess is based on we think the more narrowly targeted something is, probably the more you care about it. Probably. It’s a guess, it’s not going to be perfect, but it gets us partway. And that is somewhat true. The more narrowly we target something, probably the more we care about it so more targeted styles override less targeted styles.

Miriam: But it’s not always true. Sometimes that falls apart. And what happens is, there’s three layers of specificity. There’s id’s, there’s classes and attributes, and then there’s elements themselves. Of those three layers, we control one of them completely. Classes and attributes, we can do anything we want with them. They’re reusable, they’re customizable. That’s not true of either of the other two layers. Once things get complex, we often end up trying to do all of our cascade management in that single layer and then getting angry, throwing up our hands, and adding importance. That’s not ideal.

Miriam: And I was looking at origins because I was going to do some videos teaching the cascade in full, and I thought that’s actually pretty clever. We, as authors, often have styles that come from different places and represent different interests. And what if we could layer them in that same way that we can layer author styles, user styles, and browser styles. But instead, what if they’re... Here’s the design system, here’s the styles from components themselves, here’s the broad abstractions. And sometimes we have broad abstractions that are narrowly targeted and sometimes we have highly repeatable component utilities or something that need to have a lot of weight. What if we could explicitly put those into named layers?

Miriam: Jen Simmons encouraged me to submit that to the working group and they were excited about it and the spec has been moving very quickly. At first, we were all worried that we would end up in a z-index situation. Layer 999,000 something. And as soon as we started putting together the syntax, we found that that wasn’t hard to avoid. I’ve been really excited to see that coming together. I think it’s a great syntax that we have.

Drew: What form does the syntax take on, roughly? I know it’s difficult to mouth code, isn’t it?

Miriam: It’s an "@" rule called "@layer." There’s actually two approaches. You can also use, we’re adding a function to the "@import" syntax so you could import a style sheet into a layer, say, import Bootstrap into my framework layer. But you can also create or add to layers using the "@layer" rule. And it’s just "@layer" and then the name of the layer. And layers get stacked in the order they’re first introduced, which means that even if you’re bringing in style sheets from all over and you don’t know what order they’re going to load, you can, at the top of your document, say, "Here are the layers that I’m planning to load, and here’s the order that I want them in." And then, later, when you’re actually adding styles into those layers, they get moved into the original order. It’s also a way of saying, "Ignore the source order here. I want to be able to load my styles in any order and still control how they should override each other."
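(Put together, the pieces Miriam describes look something like this; the layer names and the Bootstrap import are illustrative:)

```css
/* Declare the layer order up front — later layers win: */
@layer reset, framework, components;

/* Import a whole stylesheet into a named layer: */
@import url("bootstrap.css") layer(framework);

/* Styles added later slot into their declared position,
   regardless of source order: */
@layer components {
  .button {
    background: rebeccapurple;
  }
}

@layer reset {
  /* Still loses to framework and components, even though
     this rule appears last in the file: */
  button {
    all: unset;
  }
}
```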

Drew: And in its own way, having a list, at the top, of all these different layers is self-documenting as well, because anybody who comes to that style sheet can see the order of all the layers.

Miriam: And it also means that, say, Bootstrap could go off and use a lot of internal layers and you could pull those layers in from Bootstrap. They control how their own layers relate to each other, but you could control how those different layers from Bootstrap relate to your document. So when should Bootstrap win over your layers and when should your layers win over Bootstrap? And you can start to get very explicit about those things without ever throwing the "important" flag.

Drew: Would those layers from an imported style sheet, if that had its own layers, would they all just mix in at the point that the style sheet was added?

Miriam: By default, unless you’ve defined somewhere else previously how to order those layers. So still, your initial layer ordering would take priority.

Drew: If Bootstrap, for example, had documented their layers, would you be able to target a particular one and put that into your layer stack to change it?

Miriam: Yes.

Drew: So it’s not an encapsulated thing that all moves in one go. You can actually pull it apart and...

Miriam: It would depend... We’ve got several ideas here. We’ve built in the ability to nest layers that seemed important if you were going to be able to import into a layer. You would have to then say, "Okay, I’ve imported all of Bootstrap into a layer called frameworks," but they already had a layer called defaults and a layer called widgets or whatever. So then I want a way to target that sublayer. I want to be able to say "frameworks widgets" or "frameworks defaults" and have that be a layer. So we have a syntax for that. We think that all of those would have to be grouped together. You couldn’t pull them apart if they’re sublayered. But if Bootstrap was giving you all those as top level layers, you could pull them in at the top level, not group them. So we have ways of doing both grouping or splitting apart.

Drew: And the fact that you can specify a layer that something is imported into that doesn’t require any third-party script to know about layers or have implemented it, presumably, it just pulls that in at the layer you specify.

Miriam: Right.

Drew: That would help with things pretty much like Bootstrap and that sort of thing, but also just with the third party widgets you’re then trying to fight with specificity to be able to re-style them and they’re using id’s to style things and you want to change the theme color or something and you having to write these very specific... You can just change the layer order to make sure that your layers would win in the cascade.

Miriam: Yup. That’s exactly right. The big danger here is backwards compatibility. It’s going to be a rough transition in some sense. I can’t imagine any way of updating the cascade or adding these sort of explicit rules to the cascade without some backwards compatibility issues. But older browsers are going to ignore anything inside a layer rule. So that’s dangerous. This is going to take some time. I think we’ll get it implemented fairly quickly, but then it will still take some time before people are comfortable using it. And there are ways to polyfill it, particularly using "is." The "is" selector gives us a weird little polyfill that we’ll be able to write. So people will be able to use the syntax and polyfill it, generate backwards-compatible CSS, but there will be some issues there in the transition.

Drew: Presumably. And you’re backwards-compatible to browsers that support "is."

Miriam: That’s right. So it gets us a little farther, but not... It’s not going to get us IE 11.

Drew: No. But then that’s not necessarily a bad thing.

Miriam: Yeah.

Drew: It feels like a scoping mechanism but it’s not a scoping mechanism, is it, layers? It’s different because a scope is a separate thing and is actually a separate CSS feature that there’s a draft in the works for, is that right?

Miriam: Yeah, that’s another one that I’m working on. I would say, as with anything in the cascade, they have sort of an overlap. Layers overlap with specificity and both of them overlap with scope.

Miriam: The idea with scope, what I’ve focused on, is the way that a lot of the JavaScript tools do it right now. They create a scope by generating a unique class name, and then they append that class name to everything they consider within a scope. So if you’re using Vue, that’s everything within a Vue component template or something. So they apply it to every element in the HTML that’s in the scope and then they also apply it to every single one of your selectors. It takes a lot of JavaScript managing and writing these weird strings of unique ids.

Miriam: But we’ve taken the same idea of being able to declare a scope using an "@scope" rule that declares not just the root of the scope, not just this component, but also the lower boundaries of that scope. Nicole Sullivan has called this "donut scope", the idea that some components have other components inside of them and the scope only goes from the outer boundaries to that inner hole and then other things can go in that hole. So we have this "@scope" rule that allows you to declare both a root selector and then say "to" and declare any number of lower boundaries. So in a tab component it might be "scope tabs to tab contents" or something so you’re not styling inside of the content of any one tab. You’re only scoping between the outer box and that inner box that’s going to hold all the contents.
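(The proposal syntax at the time looked roughly like this for the tabs example; the class names are hypothetical and the details were still in flux:)

```css
/* "Donut scope": styles apply between the .tabs root and the
   .tab-content lower boundary, but not inside the contents: */
@scope (.tabs) to (.tab-content) {
  .tab {
    background: lavender;
  }
}
```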

Drew: So it’s like saying, "At this point, stop the inheritance."

Miriam: Not exactly, because it doesn’t actually cut off inheritance. The way I’m proposing it, what it does is it just narrows the range of targeted elements from a selector. So any selector you put inside of the scope rule will only target something that is between the root and the lower boundaries and it’s a targeting issue there. There is one other part of it that we’re still discussing exactly how it should work where, the way I’ve proposed it, if we have two scopes, let’s call them theme scopes. Let’s say we have a light theme and a dark theme and we nest them. Given both of those scopes, both of them have a link style, both of those link styles have the same specificity, they’re both in scopes. We want the closer scope to win in that case. If I’ve got nested light and dark and light and dark, we want the closest ancestor to win. So we do have that concept of proximity of a scope.

Drew: That’s fascinating. So scopes are the scope of the targeting of a selector. Now, I mentioned this idea of inheritance. Is there anything in CSS that might be coming or might exist already that I didn’t know about that will stop inheritance in a nice way without doing a massive reset?

Miriam: Well, really, the way to stop inheritance is with some sort of reset. Layers would actually give you an interesting way to think about that because we have this idea of... There’s already a "revert" rule. We have an "all" property, which sets all properties, every CSS property, and we have a "revert" rule, which reverts to the previous origin. So you can say "all revert" and that would stop inheritance. That would revert all of the properties back to their browser default. So you can do that already.

Miriam: And now we’re adding "revert layer", which would allow you to say, "Okay I’m in the components layer. Revert all of the properties back to the defaults layer." So I don’t want to go the whole way back to the browser defaults, I want to go back to my own defaults. We will be adding something like that in layers that could work that way.
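(The two resets Miriam mentions can be sketched like this; `revert-layer` was still being added at the time, and the layer names are illustrative:)

```css
/* Revert every property the whole way back to browser defaults: */
.embed {
  all: revert;
}

/* Revert back to an earlier cascade layer's values instead of
   going the whole way to browser defaults: */
@layer defaults, components;

@layer components {
  .widget {
    all: revert-layer;
  }
}
```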

Miriam: But a little bit, in order to stop inheritance, in order to stop things from getting in, I think that belongs more in the realm of shadow DOM encapsulation. That idea of drawing hard boundaries in the DOM itself. I’ve tried to step away from that with my scope proposal. The shadow DOM already is handling that. I wanted to do something more CSS-focused, more... We can have multiple overlapping scopes that target different selectors and they’re not drawn into the DOM as hard lines.

Drew: Leave it to someone else, to shadow DOM. What stage are these drafts at, the cascade layers and scope? How far along the process are they?

Miriam: Cascade layers, there’s a few people who want to reconsider the naming of it, but otherwise, the spec is fairly stable and there’s no other current issues open. Hopefully, that will be moving to candidate recommendation soon. I expect browsers will at least start implementing it later this year. That one is the farthest along because for browsers, it’s very much the easiest to conceptualize and implement, even if it may take some time for authors to make the transition. That one is very far along and coming quickly.

Miriam: Container queries are next in line, I would say. Since we already have a working prototype, that’s going to help a lot. But actually defining all of the spec edge cases... Specs these days are, in large part, "How should this fail?" That’s what we got wrong with CSS 1. We didn’t define the failures and so browsers failed differently and that was unexpected and hard to work with. Specs are a lot about dealing with those failures and container queries are going to have a lot of those edge cases that we have to think through and deal with because we’re trying to solve weird looping problems. It’s hard to say on that one, because we both have a working prototype ahead of any of the others, but also it’s going to be a little harder to spec out. I think there’s a lot of interest, I think people will start implementing soon, but I don’t know exactly how long it’ll take.

Miriam: Scope is the farthest behind of those three. We have a rough proposal, we have a lot of interest in it, but very little agreement on all the details yet. So that one is still very much in flux and we’ll see where it goes.

Drew: I think it’s amazing, the level of thought and work the CSS Working Group are putting into new features and the future of CSS. It’s all very exciting and I’m sure we’re all very grateful for the clever folks like yourself who spend time thinking about it so that we get new tools to use. I’ve been learning all about what’s coming down the pike in CSS, what have you been learning about lately, Miriam?

Miriam: A big part of what I’m learning is how to work on the spec process. It’s really interesting and I mean the working group is very welcoming and a lot of people there have helped me find my feet and learn how to think about these things from a spec perspective. But I have a long ways to go on that and learning exactly how to write the spec language and all of that. That’s a lot in my mind.

Miriam: Meanwhile, I’m still playing with grids and playing with custom properties. And while I learned both of those, I don’t know, five years ago, there’s always something new there to discover and play with, so I feel like I’m never done learning them.

Drew: Yup. I feel very much the same. I feel like I’m always a beginner when it comes to a lot of CSS.

Drew: If you, dear listener, would like to hear more from Miriam, you can find her on Twitter where she’s @MiriSuzanne, and her personal website is Thanks for joining us today, Miriam. Do you have any parting words?

Miriam: Thank you, it’s great chatting with you.


The Evolution Of Jamstack

Mon, 05/03/2021 - 00:00

It’s been five years since I first presented the idea of the Jamstack architecture at SmashingConf in San Francisco 2016, a talk inspired by many conversations with colleagues and friends in the industry. At that point, the idea of fundamentally decoupling the front-end web layer from the back-end business logic layer was only an early trend, and not yet a named architectural approach.


Static site generators were emerging as a real option for building larger content-driven sites, but the whole ecosystem around them was nascent, and the main generators were pure open-source tools with no commercial presence. Single Page Applications were the basis of some large-scale web apps, like Gmail, but the typical approach to building them was still backend-centric.

Fast forward to 2020, Jamstack hit the mainstream, and we saw millions of developers and major brands like Unilever, Nike, and PayPal embrace the architecture. Vital initiatives like the Covid Tracking Project were able to scale from 0 to 2 million API requests on the Jamstack. Frameworks like Nuxt became commercial businesses, and we celebrated large public companies like Microsoft and Cloudflare as they launched early Jamstack offerings.

As the commercial space has heated up and the developer community has grown, there’s also been more noise, and we’re even starting to test the boundaries of Jamstack’s best practices. It feels like the right time to both revisit the original vision some of us had five years ago, and look ahead at what the changes in the technological landscape will mean for the future of the Jamstack architecture and the web.

Let’s start out by quickly revisiting the core principles that have made the architecture prove popular.

Compiling The UI

In the Jamstack architecture, the UI is compiled. The goal is to do the right work at the right times — with a preference for doing as much work as possible ahead of time. Many times, the entire site can be prerendered, perhaps not even requiring a backend once deployed.

Decoupled Frontends

Decoupling the frontend from back-end services and platforms enforces a clear contract for how your UI communicates with the rest of the system. This defaults to simplicity: your frontend has a limited contact surface with anything outside itself, making it less complicated to understand how external changes will affect its operation.

Pulling Data As Needed

Of course, not everything can be prerendered, and the Jamstack architecture is capable of delivering dynamic, personalized web apps as well as more globally consistent content. Requesting data from the frontend can power some rich and dynamic applications.

A good example is the frontend of our own Netlify UI, which is itself a Jamstack application built and run on Netlify. We pre-compile an app shell, then use asynchronous requests to hit our API to load data about our users and their sites. Whether you’re using REST, GraphQL, or WebSockets, if you’re precompiling as much of the UI as possible and loading data to give your users a dynamic, customized experience, then you’re shipping the Jamstack architecture.
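
As a rough sketch of that pattern (the `/api/user` endpoint and its response shape are hypothetical here, not Netlify’s actual API), a prerendered shell can hydrate itself with a client-side request:

```
<!-- Prerendered app shell, deployed as a static asset. -->
<div id="app">Loading…</div>

<script type="module">
  // Fetch dynamic data after the static shell has rendered.
  // "/api/user" is a hypothetical endpoint for illustration.
  const response = await fetch("/api/user");
  const user = await response.json();
  document.querySelector("#app").textContent = `Hello, ${user.name}`;
</script>
```

The shell itself never changes per user, so it can be cached at the edge; only the small data request is dynamic.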

Jamstack In 2021 And Beyond

There’s more innovation happening across the Jamstack ecosystem than ever before. You can see a rapid evolution of the back-end services, developer tooling, and client-side technologies that are combining to enable development teams to build experiences for the web that would have seemed out of reach only a couple of years ago.

I want to point to three trends I see shaping up for Jamstack developers in the near future:

1. Distributed Persistent Rendering (DPR)

More than anything, Jamstack’s inherent simplicity has made the process of building and deploying web applications much easier to reason about. Code and content updates can be pre-rendered as clean, atomic deployments and pushed right to the edge, creating strong guarantees around reliability and performance without the need to manage complex infrastructure.

But pre-rendering a larger website may also mean waiting several minutes each time there’s a new deployment. That’s why I think we are seeing so much innovation happening to make builds smarter and faster — especially for larger sites and web apps. Take for example the raw speed of esbuild, the new “extremely fast JavaScript compiler.” A production bundle that may take Parcel or Webpack over a minute to compile can be completed by esbuild in under a second. And build tools like Vite and Snowpack lean on native ES modules to make local development feel nearly instantaneous.

In the React ecosystem, some newer frameworks like Remix or Blitz are starting to lean more on the “run everything on a server” approach we’ve all known in the past. There’s a risk of bringing back much of the complexity we’ve worked to escape. Layers of caching can help make server-side apps more performant, but developers lose the guarantees of atomic deployments that make Jamstack apps easy to reason about.

Blitz seems to be moving the monolith into the frontend. This can make full-stack apps runnable on typical Jamstack platforms, but without any clear decoupling between the web experience layer and the back-end business logic layer. I think decoupling the frontend from the backend is fundamental to the Jamstack approach and responsible for unlocking so many of its benefits.

What I see gaining real momentum are the “hybrid” frameworks like Next.js, Nuxt.js, and SvelteKit that allow developers to seamlessly mix pages pre-rendered at build time with other routes that are rendered via serverless functions. The challenge is that serverless functions (while certainly scalable) have their own set of performance implications.

Ultimately, I see the community moving towards an extremely powerful trio that provides Jamstack developers request-level control over the performance profile of any site or application:

  1. Delivering pages entirely pre-rendered at build time,
  2. Delivering pages dynamically via serverless functions, or
  3. Building pages on-demand that then persist as static CDN assets.

Next.js has done quite a bit of work on a concept they call Incremental Static Regeneration. The idea is to ensure high-performance pages by pairing serverless functions with different caching strategies like Stale While Revalidate. While distributing some of the builds to an on-demand approach that still includes strong caching guarantees is a powerful technique, there’s a risk of breaking atomic deploys in this particular implementation, and the benefits are locked into a single framework and, in some cases, a provider.

At Netlify, we see a lot of promise in the idea of allowing developers to render critical pages at build time, while deferring other pages (like older blog posts, for example) to be built only when and if they are requested. We’re calling the approach Distributed Persistent Rendering or DPR. It’s an architecture for incremental builds that can be compatible across almost every framework and Jamstack site generator, from 11ty to Nuxt to Next.js.

DPR will dramatically reduce upfront build times for larger sites, solving a core criticism of static site generation. We’ve opened a Request For Comments to involve the entire community in our efforts to give developers more options for how pages are rendered while adhering closely to the principles that have made Jamstack so popular. By giving this architecture a name and refining it with community input, we can help Jamstack developers build patterns around it — regardless of the framework.

2. Streaming Updates From The Data Layer

If you develop web applications, you’ve likely followed the evolution of state management libraries as developers have built more and more complex web interfaces using tools like React, Vue, and Svelte. But state management has largely been an in-browser and in-memory concern. Each browser tab essentially has its own state, and connecting that local browser state of your application back to the data services that power it can be quite complex.

Luckily, this is improving as more and more services now support real-time data subscriptions. Hasura, OneGraph, and Supabase all offer this capability, and I only expect to see wider adoption across all providers as the underlying data stores are cached and distributed to the edge for fast global performance. Take Twilio’s expanding APIs: they now not only offer streaming video but also streaming “data tracks,” which can be used to create complex collaboration apps that stay continually synchronized across participants.

Finally, new providers are emerging that aggregate data across back-end services. Whether or not you use GraphQL as a query language, it’s really compelling to imagine the power of connecting your UI to a single, standard stream of updates from multiple underlying APIs.

3. Developer Collaboration Goes Mainstream

The Jamstack is built on a Git workflow — an approach that scales really well to larger development teams. But going forward, we’ll start to see how these traditionally developer-focused tools will expand to involve everyone across the company: developers, sure, but also writers, editors, designers, and SEO experts.

When you think of collaboration, you tend to think of synchronous edits — the multiple cursors that fly around a Google Doc, for example. We are seeing that style of live collaboration come to CMS tools like Sanity and design tools like Figma. But so much work often happens asynchronously, and non-developers traditionally haven’t enjoyed the powerful tools that developers use to seamlessly branch, stage, and merge changes with collaborative discussion attached to each pull request.

Early on in the Jamstack, some clever git-based CMS tools emerged to help non-developers manage content like code — perhaps without even knowing that each change they made was being git-committed just like a developer working from the terminal. We’re now starting to see new tools tackle visual page edits in a way that remains compatible with popular Jamstack site generators like Gatsby and Next.js. All of this lowers the bar to collaboration for non-developers and we’ll only see that trend accelerate.

And it’s not just non-developers joining in on the collaboration: deep integrations between tools are bringing more automated contributions into our dev, build, and deploy workflows. Just browse the comment history on a GitHub pull request to see how many tools are now integrated to run automated tests and catch errors before they are deployed.

Updates to Netlify’s docs, for example, aren’t just linted against our code standards, they are also linted against our content standards, ensuring we stay consistent with our style guide for vocabulary, language, and phrasing. Teams can also now easily tie performance budgets and SEO standards to each deployment, again with alerts and logs tied directly to GitHub issues.

I would expect to see those sorts of integrations explode in the coming year, allowing a git-based workflow to underpin not just code changes, but also content, data, design assets — you name it. Friendly interfaces into these Git workflows will allow more contributors to comment, commit, and collaborate and bring developer productivity tooling further into the mainstream.

Enabling Scale And Dynamic Use Cases

While Jamstack stays true to the core concepts of decoupling the frontend from the backend and maintaining immutable and atomic deploys, new build strategies and compute primitives have the potential to unlock extremely large-scale sites and dynamic, real-time web applications.

Jamstack developers — and now entire web teams, marketers, and product managers — have much to look forward to in this space.

Categories: Design

May Is In The Air (2021 Wallpapers Edition)

Fri, 04/30/2021 - 02:00

We always try our best to challenge your creativity and get you out of your comfort zone. In all these years we’ve been running it, our monthly wallpapers challenge has turned out to be the perfect occasion to do just that: to put your creative skills to the test, try out a new technique you haven’t tried before, tell a story that matters to you, or indulge in a little project just for fun. And, well, the submissions that reach us every month always make for a unique collection of community artworks, adorning desktops and phone screens and, who knows, maybe even sparking new ideas.

It wasn’t any different this time around. Created with love by designers and artists from across the globe, the wallpapers in this collection all come in versions with and without a calendar for May 2021. For some extra variety, we also compiled a little best-of with designs from our archives at the end of this post. Thank you to everyone who took on the challenge and shared their wallpapers with us — you’re smashing! Happy May!

  • You can click on every image to see a larger preview,
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t in any way influenced by us but rather designed from scratch by the artists themselves.
Submit a wallpaper!

Did you know that you could get featured in our next wallpapers post, too? We are always looking for creative talent! Join in! →

From Nope To Hope

“Hope helps us define what we want in our futures and is part of the self-narrative about our lives we all have running inside our minds. Whether we think about it or not, hope is a part of everyone’s life. Everyone hopes for something. It’s an inherent part of being a human being.” — Designed by Hitesh Puri from Delhi, India.

Night Falls In Cairo

“Night falls in Cairo and it is the perfect moment to walk through its streets and its pyramids.” — Designed by Veronica Valenzuela from Spain.

Working From Home

“After more than a year of working remotely, we have developed a lasting connection between our decorated home offices, our outdoor spaces, and our beloved pets. Our diverse environments play a role in our newfound identities as we learn to navigate a remote world.” — Designed by Mad Fish Digital from Portland, OR.

The Off-Hours Guardian

“In May, we are marking Labour Day, the international holiday celebrating workers’ achievements and urging fair pay and better working conditions. But for many, this day will be a reminder of countless hours of overtime, stress, and strain caused by tight deadlines, lack of workers’ rights for freelancers, and a paycheck that covers just the basics. Our thoughts are with all of you who will spend International Workers’ Day at their jobs, with freelancers fighting for their rights, and with anyone who feels difficulty maintaining their work-life balance.” — Designed by PopArt Studio from Serbia.

That’s Cracking

Designed by Ricardo Gimenes from Sweden.

Love Japanese Food

Designed by Ricardo Gimenes from Sweden.

American French Fries

Designed by Ricardo Gimenes from Sweden.

Oldies But Goodies

Some bold, others delicate, some minimalist, others witty — the May wallpapers which reached us in the past few years are as diverse as the artists who created them. Here are some of our favorites from the past. Which one is yours? (Please note that these designs come from our archives and, thus, don’t include a calendar.)

Today, Yesterday, Or Tomorrow

“During the last few months we have experienced significant and profound changes in our lifestyle and work habits due to the pandemic. One of the most significant changes is how we relate to time. Our day to day blurs as we try to make the best of this situation while hoping for a better tomorrow. While working on my daily lettering practice, I lost track of time and thought what day is it today? From there, I wondered if it was today, yesterday, or tomorrow. The rest followed as I kept thinking how time and routine or the new routine of being home blurs the days for each of us.” — Designed by Alma Hoffmann from the United States.

Sweet Lily Of The Valley

“The ‘lily of the valley’ came earlier this year. In France, we celebrate the month of May with this plant.” — Designed by Philippe Brouard from France.

Spring Gracefulness

“We don’t usually count the breaths we take, but observing nature in May, we can’t count our breaths being taken away.” — Designed by Ana Masnikosa from Belgrade, Serbia.

Spring Is In The Air

“Spring is the most inspiring time of the year, so I’ve decided to create this wallpaper to celebrate this beautiful season.” — Designed by Hristina Hristova from Bulgaria.

Enjoy May!

“Springtime, especially Maytime is my favorite time of the year. And I like popsicles — so it’s obvious isn’t it?” — Designed by Steffen Weiß from Germany.

Poppies Paradise

Designed by Nathalie Ouederni from France.

May The Force Be With You

“Yoda is my favorite Star Wars character and ‘may’ has a funny double meaning.” — Designed by Antun Hirsman from Croatia.

Birds Of May

“May 4th is a little-known ‘holiday’ known as ‘Bird Day’. It is the first holiday in the United States celebrating birds. Hurray for birds!” — Designed by Clarity Creative Group from Orlando, FL.

April Showers Bring Magnolia Flowers

“April and May are usually when everything starts to bloom, especially the magnolia trees. I live in an area where there are many and when the wind blows, the petals make it look like snow is falling.” — Designed by Sarah Masucci from the United States.

Cool As An Octopus

“A fear I need to conquer inspired this wallpaper design. I never really liked the ocean because of how much we don’t know about it or anything that lives in it. However, I have decided to try and focus on the beautiful parts of this great unknown by choosing an octopus.” — Designed by Alexandra Covaleski from the United States.

Mental Health Awareness Day

Designed by Kay Karremans from Belgium.

Leaves Of Grass

“For many, the month of May is the preferred month of the year. Nature blossoms, birds sing, and leaves of grass sprout all around us. Coincidentally, this is the month at the end of which the poet Walt Whitman was born. As he said in his famous poetry collection: ‘Spontaneous me, Nature, The loving day, the mounting sun, the friend I am happy with, The arm of my friend hanging idly over my shoulder.’ Sounds like a perfect setting to fly a kite — so, let’s get to it!” — Designed by PopArt Studio from Novi Sad, Serbia.

Asparagus Say Hi!

“In my part of the world, May marks the start of seasonal produce, starting with asparagus. I know spring is finally here and summer is around the corner when locally-grown asparagus shows up at the grocery store.” — Designed by Elaine Chen from Toronto, Canada.

Celestial Longitude Of 45°

“Lixia is the 7th solar term according to the traditional East Asian calendars, which divide a year into 24 solar terms. It signifies the beginning of summer in East Asian cultures. It usually begins around May 5 and ends around May 21.” — Designed by Hong, Zi-Cing from Taiwan.

Stone Dahlias

Designed by Rachel Hines from the United States.

Hope Shakes The Branches

“The promise of the return of spring feels normal in our new strange and abnormal world.” — Designed by Jeffrey Berg from the United States.

Make A Wish

Designed by Julia Versinina from Chicago, USA.

Old vs. New

Designed by Wannes De Roy from Belgium.

Be On Your Bike!

“May is National Bike Month! So, instead of hopping in your car, grab your bike and go. Our whole family loves that we live in our bike-friendly community. So, bike to work, to school, to the store, or to the park — sometimes it is faster. Not only is it good for the environment, but it is great exercise!” — Designed by Karen Frolo from the United States.


Designed by Siemon Donvil from Belgium.

Categories: Design

    The Humble `<img>` Element And Core Web Vitals

    Thu, 04/29/2021 - 05:15

    The humble <img> element has gained some superpowers over the years. Given how central it is to image optimization on the web, let’s catch up on what it can do and how it can help improve user experience and the Core Web Vitals. I’ll be covering image optimization in more depth in Smashing Magazine’s new book on Image Optimization.

    Some tips to get us started:

    • For a fast Largest Contentful Paint:
      • Request your key hero image early.
      • Use srcset + efficient modern image formats.
      • Avoid wasting pixels (compress, don’t serve overly high DPR images).
      • Lazy-load offscreen images (reduce network contention for key resources).
    • For a low Cumulative Layout Shift:
      • Set dimensions (width, height) on your images.
      • Use CSS aspect-ratio or aspect ratio boxes to reserve space otherwise.
    • For low impact to First Input Delay:
      • Avoid images causing network contention with other critical resources like CSS and JS. While not render-blocking, they can indirectly impact render performance.
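
The aspect-ratio fallback mentioned above can be a one-liner of CSS; a sketch (the .hero class name is hypothetical):

```
<style>
  /* Reserve a 16:9 box for the image before it loads, even
     without width/height attributes (.hero is hypothetical). */
  .hero { aspect-ratio: 16 / 9; width: 100%; }
</style>
<img class="hero" src="donut.jpg" alt="A delicious pink donut.">
```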

    Note: Modern image components that build on <img>, like Next.js <Image> (for React) and Nuxt image (for Vue) try to bake in as many of these concepts as possible by default. We’ll cover this later. You can of course also do this manually just using the <img> element directly. If using 11ty for your static sites, try the 11ty high-performance blog template.

    How Do Images Impact User Experience And The Core Web Vitals?

    You may have heard of Core Web Vitals (CWV). It’s an initiative by Google to share unified guidance for quality signals that can be key to delivering a great user experience on the web. CWV is part of a set of page experience signals Google Search will be evaluating for ranking. Images can impact the CWV in a number of ways.

    In many modern web experiences, images tend to be the largest visible element when a page completes loading. These include hero images, images from carousels, stories, and banners. Largest Contentful Paint (LCP) is a Core Web Vitals metric that measures when the largest contentful element (images, text) in a user’s viewport, such as one of these images, becomes visible.

    This allows a browser to determine when the main content of the page has finished rendering. When an image is the largest contentful element, how slowly that image loads can impact LCP. In addition to applying image compression (e.g. using Squoosh, Sharp, ImageOptim or an image CDN) and using a modern image format, you can tweak the <img> element to serve the most appropriate responsive version of an image or lazy-load it.
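
One way to request a key hero image early is a preload hint in the document head. Below is a sketch reusing the donut filenames from the examples later in this guide; the imagesrcset and imagesizes attributes require a browser with responsive-preload support:

```
<head>
  <!-- Ask the browser to fetch the hero image before it
       discovers the <img> element while parsing the body. -->
  <link rel="preload" as="image" href="donut-800w.jpg"
        imagesrcset="donut-400w.jpg 400w, donut-800w.jpg 800w"
        imagesizes="(max-width: 640px) 400px, 800px">
</head>
```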

    Layout shifts can be distracting to users. Imagine you’ve started reading an article when all of a sudden elements shift around the page, throwing you off and requiring you to find your place again. Cumulative Layout Shift (CLS, a Core Web Vitals metric) measures the instability of content. The most common causes of CLS include images without dimensions (see below), which can push down content when they load and snap into place; without specified dimensions, the browser may not be able to reserve sufficient space in advance of them loading.

    Generated using Layout Shift GIF Generator. You may also be interested in the CLS Debugger.

    It’s possible for images to consume a user’s bandwidth and CPU on page load. They can get in the way of how critical resources are loaded, in particular on slow connections and lower-end mobile devices, leading to bandwidth saturation. First Input Delay (FID) is a Core Web Vitals metric that captures a user’s first impression of a site’s interactivity and responsiveness. By reducing main-thread CPU usage, FID can also be reduced.


    In this guide, we will be using Lighthouse to identify opportunities to improve the Core Web Vitals. Lighthouse is an open-source, automated tool for improving the quality of web pages. You can find it in the Chrome DevTools suite of debugging tools and run it against any web page, public or requiring authentication. You can also find Lighthouse in PageSpeed Insights, CI and WebPageTest.

    Keep in mind that Lighthouse is a lab tool. While great for looking at opportunities to improve your user experience, always try to consult real-world data for a complete picture of what actual users are seeing.

    The Basics

    To place an image on a web page, we use the <img> element. This is an empty element — it has no closing tag — requiring a minimum of one attribute to be helpful: src, the source. If an image is called donut.jpg and it exists in the same path as your HTML document, it can be embedded as follows:

    <img src="donut.jpg">

    To ensure our image is accessible, we add the alt attribute. The value of this attribute should be a textual description of the image, and is used as an alternative to the image when it can’t be displayed or seen; for example, a user accessing your page via a screen reader. The above code with an alt specified looks as follows:

    <img src="donut.jpg" alt="A delicious pink donut.">

    Next, we add width and height attributes to specify the width and height of the image, otherwise known as the image’s dimensions. The dimensions of an image can usually be found by looking at this information via your operating system’s file explorer (Cmd + I on macOS).

    <img src="donut.jpg" alt="A delicious pink donut." width="400" height="400">

    When width and height are specified on an image, the browser knows how much space to reserve for the image until it is downloaded. Forgetting to include the image’s dimensions can cause layout shifts, as the browser is unsure how much space the image will need.

    Modern browsers now set the default aspect ratio of images based on an image’s width and height attributes, so it’s valuable to set them to prevent such layout shifts.
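
If CSS makes the image responsive, the attribute-derived aspect ratio still helps, as long as one dimension is left to scale automatically. A common sketch:

```
<img src="donut.jpg" alt="A delicious pink donut." width="400" height="400">
<style>
  /* Let the image shrink with its container while the browser
     keeps the 1:1 ratio derived from width/height attributes. */
  img { max-width: 100%; height: auto; }
</style>
```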

    Identify The Largest Contentful Paint Element

    Lighthouse has a “Largest Contentful Paint element” audit that identifies what element was the largest contentful paint. Hovering over the element will highlight it in the main browser window.

    If this element is an image, this information is a useful hint that you may want to optimize the loading of this image. You might also find the LCP Bookmarklet by Annie Sullivan helpful for quickly identifying the LCP element with a red rectangle in just one click.

    Note: The Largest Contentful Paint element candidate can change through the page load. For this reason, it’s valuable to not just look at what synthetic tooling like Lighthouse may say, but also consult what real users see.

    Hovering over an image in the Chrome DevTools Elements panel will display the dimensions of the image as well as the image’s intrinsic size.

    Identify Layout Shifts From Images Without Dimensions

    To limit Cumulative Layout Shift being caused by images without dimensions, include width and height size attributes on your images and video elements. This approach ensures that the browser can allocate the correct amount of space in the document while the image is loading. Lighthouse will highlight images without a width and height:

    See Setting Height And Width On Images Is Important Again for a good write-up on the importance of thinking about image dimensions and aspect ratio.

    Responsive Images

    What about switching image resolution? A standard <img> only allows us to supply a single source file to the browser. But with the srcset and sizes attributes, we can provide many additional source images (and hints) so the browser can pick the most appropriate one. This allows us to supply images that are smaller or larger.

    <img src="donut-800w.jpg" alt="A delicious pink donut."
         width="400" height="400"
         srcset="donut-400w.jpg 400w, donut-800w.jpg 800w"
         sizes="(max-width: 640px) 400px, 800px">

    The srcset attribute defines the set of images the browser can select from, as well as the size of each image. Each image string is separated by a comma and includes:

    • a source filename (donut-400w.jpg);
    • a space;
    • and the image’s intrinsic width specified in pixels (400w), or a pixel density descriptor (1x, 1.5x, 2x, and so on).

    The sizes attribute specifies a set of conditions, such as screen widths, and what image size is best to select when those conditions are met. Above, (max-width: 640px) is a media condition asking “if the viewport width is 640 pixels or less,” and 400px is the width of the slot the image is going to fill when the media condition is true. This typically corresponds to the page’s responsive breakpoints.

    Device Pixel Ratio (DPR) / Pixel Density Capping

    Device Pixel Ratio (DPR) represents how a CSS pixel is translated to physical pixels on a hardware screen. High resolution and retina screens use more physical pixels to represent CSS pixels for imagery that is sharper and has more detailed visuals.

    The human eye may not be capable of distinguishing the difference between images served at a 2x-3x DPR and at even higher resolutions. Serving overly high DPR images is a common problem for sites leveraging <img srcset> and a suite of image sizes.

    It may be possible to use DPR-capping to serve your users an image at a 2x or 3x fidelity to prevent large image payloads. Twitter capped their image fidelity at 2x, resulting in 33% faster timeline image loading times. They found that 2x was a sweet spot, delivering good performance wins with no degradation in quality metrics.
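
A sketch of capping fidelity at 2x using x descriptors (filenames hypothetical): because no higher-density candidate is listed, a 3x device simply receives the 2x asset, the highest on offer.

```
<!-- No 3x candidate is listed, so high-DPR screens are
     capped at the 2x asset. -->
<img src="donut-400.jpg"
     srcset="donut-400.jpg 1x, donut-800.jpg 2x"
     alt="A delicious pink donut." width="400" height="400">
```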

    Note: This approach to DPR-capping is currently not possible if using “w” descriptors.

    Identify Images That Can Be Better Sized

    Lighthouse includes a number of image optimization audits for helping you understand if your images could be better compressed, delivered in a more optimal modern image format, or resized.

    Even those images which are responsive (that is, sized relative to the viewport) should have a width and height set. In modern browsers, these attributes establish an aspect ratio that helps prevent layout shifts, even if the absolute sizes are overridden by CSS.

    When not using an image CDN or framework, I like to use to determine the optimal image breakpoints and generate <img> srcset code for my responsive images.

    Serving Modern Image Formats

    Art direction allows us to serve different images depending on a user’s display. While responsive images load different sizes of the same image, art direction can load very different images based on the display.
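
A minimal art-direction sketch (the crop filenames are hypothetical): the media attribute on <source> selects a wide crop on large viewports and falls back to a square crop elsewhere.

```
<picture>
  <source media="(min-width: 800px)" srcset="donut-wide.jpg">
  <img src="donut-square.jpg" alt="A delicious pink donut."
       width="400" height="400">
</picture>
```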

    The browser can choose which image format to display using the <picture> element. The <picture> element supports multiple <source> elements and a single <img> element, which can reference sources for different formats including AVIF, WebP, and eventually JPEG XL.

    <picture>
      <source srcset="puppy.jxl" type="image/jxl">
      <source srcset="puppy.avif" type="image/avif">
      <source srcset="puppy.webp" type="image/webp">
      <source srcset="puppy.jpg" type="image/jpeg">
      <img src="puppy.jpg" alt="Cute puppy">
    </picture>

    In this example, the browser will begin to parse the sources and will stop when it has found the first supported match. If no match is found, the browser loads the source specified in <img> as the fallback. This approach works well for serving any modern image format not supported in all browsers. Be careful with the ordering of <source> elements, as order matters: don’t place modern sources after legacy formats, but instead put them before. Browsers that understand them will use them, and those that don’t will move on to more widely supported formats.

    Understanding the myriad image format options out there today can be confusing, but you may find Cloudinary’s comparison of modern image formats helpful:

    You may also find Malte Ubl’s AVIF and WebP quality settings picker useful for selecting quality settings to match the quality of a JPEG at a particular given quality setting.

    Identify Images That Could Be Served In A More Modern Format

    Lighthouse (below) highlights potential savings from serving images in a next-generation format.

    Note: We have an open issue to better highlight the potential savings for AVIF in Lighthouse.

    You might also find value in using image auditing tools such as Cloudinary’s image analysis tool for a deeper look at image compression opportunities for all the images on a page. As a bonus, you can download compressed versions of suggested image formats such as WebP:

    I also enjoy using Squoosh for its support of bleeding-edge formats, such as JPEG XL as it offers a low-friction way to experiment with modern formats outside of a CLI or CDN.

    There are multiple ways to approach sizing issues, as srcset and sizes are usable on both <picture> and <img>. When in doubt, use <img> with srcset/sizes for single images that have a simple layout. Use <picture> for serving multiple formats, complex layouts, and art direction.

    Chrome DevTools allows you to disable modern image formats (demo), like WebP, AVIF or JPEG XL, to test differing fallbacks for them in the Rendering panel:

    CanIUse has the latest browser support details for WebP, AVIF and JPEG XL.

    Content Negotiation

    An alternative to manually handling image format selection using <picture> is to rely on the Accept header. This is sent by the client, allowing the server to deliver an image format that is the best fit for the user. CDNs such as Akamai, Cloudinary, and Cloudflare support it.
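
At the HTTP level, the exchange looks roughly like this (headers abbreviated; the Vary: Accept response header tells caches that the body depends on the request’s Accept value):

```
GET /images/donut HTTP/1.1
Accept: image/avif,image/webp,image/*,*/*;q=0.8

HTTP/1.1 200 OK
Content-Type: image/avif
Vary: Accept
```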

    Image Lazy Loading

    What about offscreen images that are not visible until a user scrolls down the page? In the example below, all the images on the page are “eagerly loaded” (the default in browsers today), causing the user to download 1.1 MB of images. This can cause users’ data plans to take a hit in addition to affecting performance.

    Using the loading attribute on <img>, we can control the behavior of image loading. loading="lazy" lazy-loads images, deferring their loading until they reach a calculated distance from the viewport. loading="eager" loads images right away, regardless of their visibility in the viewport. eager is the default so doesn’t need to be explicitly added (that is, just use <img> for eager loading).

    Below is an example of lazy-loading an <img> with a single source:

    <img src="donut.jpg" alt="A delicious pink donut." loading="lazy" width="400" height="400">

    With native <img> lazy-loading, the earlier example now downloads only about 90 KB of images! Just adding loading="lazy" to our offscreen images has a huge impact. You ideally want to lazy-load all images present outside of the initial viewport and avoid it for everything that is within the initial viewport.

    Lazy loading also works with images that include srcset:

    <img src="donut-800w.jpg" alt="A delicious donut" width="400" height="400" srcset="donut-400w.jpg 400w, donut-800w.jpg 800w" sizes="(max-width: 640px) 400px, 800px" loading="lazy">

    In addition to working on srcset, the loading attribute also works on <img> inside <picture>:

<!-- Lazy-load images in <picture>. <img> is the element driving image
     loading, so <picture> and its sources follow from it. -->
<picture>
  <source media="(min-width: 40em)" srcset="big.jpg 1x, big-hd.jpg 2x">
  <source srcset="small.jpg 1x, small-hd.jpg 2x">
  <img src="fallback.jpg" loading="lazy">
</picture>

    The Lighthouse Opportunities section lists any offscreen or hidden images on a page that can be lazy-loaded as well as the potential savings from doing so.

See CanIUse for the latest browser support for native image lazy-loading.

    Request Your Image Early

    Help the browser discover your LCP image early so that it can fetch and render it with minimal delay. Where possible, attempt to solve this by better minimizing the request chains to your LCP image so that the browser doesn’t need to first fetch, parse and execute JavaScript or wait for a component to render/hydrate to discover the image.

    <link rel=preload> can be used with <img> to allow browsers to discover critical resources you want to load as soon as possible, prior to them being found in HTML.

    <link rel="preload" as="image" href="donut.jpg">

If you are optimizing LCP, preload can help boost how soon late-discovered hero images (such as those loaded by JavaScript or background hero images in CSS) are fetched. Preload can make a meaningful difference if you need critical images (like hero images) to be prioritized over the load of other images on a page.

Note: Use preload sparingly and always measure its impact in production. Placing the preload for your image earlier in the document than the image itself can help browsers discover it sooner (and order it relative to other resources). When used incorrectly, preloading can cause your image to delay resources needed for First Contentful Paint (e.g. CSS, fonts), the opposite of what you want. Also note that for such reprioritization efforts to be effective, servers must prioritize requests correctly.

    Preload can be used to fetch sources for an <img> of a particular format:

    <link rel="preload" as="image" href="donut.webp" type="image/webp">

Note: This approach only triggers the preload in browsers that support the given format. It cannot be used to declare several alternative formats, as the browser would preload all of them.

    Preload can also be used to fetch responsive images so the correct source is discovered sooner:

    <link rel="preload" as="image" href="donut.jpg" imagesrcset=" poster_400px.jpg 400w, poster_800px.jpg 800w, poster_1600px.jpg 1600w" imagesizes="50vw">

    Take care not to overuse preload (when each resource is considered important, none of them really are). Reserve it for critical resources which the browser’s preload scanner may not be able to quickly find organically.

    Lighthouse suggests opportunities to apply this optimization in Lighthouse 6.5 and above.

See CanIUse for the latest browser support for link rel=preload.

    Image Decoding

Browsers need to decode the images they download in order to turn them into pixels on your screen. However, how browsers handle image decoding can vary. At the time of writing, Chrome and Safari present images and text together, synchronously, if possible. This looks correct visually, but images have to be decoded, which can mean text isn’t shown until this work is done. The decoding attribute on <img> allows you to signal a preference between synchronous and asynchronous image decoding.

    <img src="donut-800w.jpg" alt="A delicious donut" width="400" height="400" srcset="donut-400w.jpg 400w, donut-800w.jpg 800w" sizes="(max-width: 640px) 400px, 800px" loading="lazy" decoding="async">

    decoding="async" suggests it’s OK for image decoding to be deferred, meaning the browser can rasterize and display content without images while scheduling an asynchronous decode that is off the critical path. As soon as image decoding is complete, the browser can update the presentation to include images. decoding=sync hints that the decode for an image should not be deferred, and decoding="auto" lets the browser do what it determines is best.

Note: See CanIUse for the latest browser support for the decoding attribute.


Image Placeholders

What if you would like to show the user a placeholder while the image loads? The background-image CSS property allows us to set background images on an element, including the <img> tag or any parent container elements. We can combine background-image with background-size: cover to set the size of an element’s background image and scale the image as large as possible without stretching it.

    Placeholders are often inline, Base64-encoded data URLs which are low-quality image placeholders (LQIP) or SVG image placeholders (SQIP). This allows users to get a very quick preview of the image, even on slow network connections, before the sharper final image loads in to replace it.

    <img src="donut-800w.jpg" alt="A delicious donut" width="400" height="400" srcset="donut-400w.jpg 400w, donut-800w.jpg 800w" sizes="(max-width: 640px) 400px, 800px" loading="lazy" decoding="async" style="background-size: cover; background-image: url(data:image/svg+xml;base64,[svg text]);">

    Note: Given that Base64 data URLs can be quite long, [svg text] is denoted in the example above to improve readability.

    With an inline SVG placeholder, here is how the example from earlier now looks when loaded on a very slow connection. Notice how users are shown a preview right away prior to any full-size images being downloaded:

    There are a variety of modern solutions for image placeholders (e.g CSS background-color, LQIP, SQIP, Blur Hash, Potrace). Which approach makes the most sense for your user experience may depend on how much you’re attempting to offer a preview of the final content, display progress (e.g progressive loading) or just avoid a visual flash when the image finally loads in. I’m personally excited for JPEG XL’s support for full progressive rendering.

Ultimately, including an inline data URL for your low-quality placeholder image within the <img>’s styles in the initial HTML avoids the need for an additional network request. I’d consider a placeholder size of <= 1-2 KB to be optimal. LCP takes the placeholder image’s intrinsic size into account, so ideally aim for the preview to match the intrinsic size of the real image being loaded.
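As a sketch of how such an inline placeholder can be produced at build time, here is a hypothetical Node helper that Base64-encodes a tiny single-rectangle SVG into a data URL (real LQIP/SQIP tools generate far more representative previews than this flat color):

```javascript
// Turn an SVG string into an inline data URL suitable for
// background-image. The helper name and the placeholder SVG
// are illustrative, not from any particular tool.
function toDataUrl(svg) {
  const base64 = Buffer.from(svg).toString('base64');
  return `data:image/svg+xml;base64,${base64}`;
}

const placeholder = toDataUrl(
  '<svg xmlns="http://www.w3.org/2000/svg" width="400" height="400">' +
  '<rect width="100%" height="100%" fill="#f3c"/></svg>'
);
```

The resulting string can be dropped straight into the style="background-image: url(...)" shown above, keeping the placeholder in the initial HTML payload.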

    Note: There is an open issue to discuss factoring in progressive loading specifically into the Largest Contentful Paint metric.

    Lazy-render Offscreen Content

Next, let’s discuss the CSS content-visibility property, which allows the browser to skip rendering, layout and paint for elements until they are needed. This can help optimize page load performance if a large quantity of your page’s content is offscreen, including content which uses <img> elements. content-visibility: auto can reduce how much CPU work the browser has to do upfront, including offscreen image decoding.

    section { content-visibility: auto; }

The content-visibility property can take a number of values; however, auto is the one that offers performance benefits. Sections of the page with content-visibility: auto get layout, paint and style containment. Should the element be offscreen, it would also get size containment.

    Browsers don’t paint the image content for content-visibility affected images, so this approach may introduce some savings.

    section { content-visibility: auto; contain-intrinsic-size: 700px; }

    You can pair content-visibility with contain-intrinsic-size which provides the natural size of the element if it is impacted by size containment. The 700px value in this example approximates the width and height of each chunked section.

See CanIUse for the latest browser support for CSS content-visibility.

    Next.js Image Component

Next.js now includes an Image component with several of the above best practices baked in. The Image component handles image optimization, generating responsive images (automating <img srcset>) and lazy-loading, in addition to many other capabilities. This is just one of the optimizations to come out of the Chrome and Next.js teams collaborating, with sites adopting it seeing up to 60% better LCP and 100% better CLS.

    In the below Next.js example, the standard <img> element is first used to load 3 donut images downloaded from Unsplash.

import Head from 'next/head';

export default function Index() {
  return (
    <div>
      <Head>
        <title>Create Next App</title>
      </Head>
      <main>
        <div>
          <img src="/donut1.jpeg" alt="Donut" height={700} width={700} />
          <img src="/donut2.jpeg" alt="Donut" height={700} width={700} />
          <img src="/donut3.jpeg" alt="Donut" height={700} width={700} />
        </div>
      </main>
    </div>
  );
}

When this page is loaded with the DevTools network panel open, we see that our images are very large (325KB + 4.5MB + 3.6MB = 8.4MB in total), that they all load regardless of whether the user can see them, and that they are likely not as optimized as they could be.

    Loading images at these sizes is unnecessary, in particular if our user is on a mobile device. Let’s now use the Next.js image component instead. We import it in from 'next/image' and replace all our <img> references with <Image>.

import Head from 'next/head';
import Image from 'next/image';

export default function Index() {
  return (
    <div>
      <Head>
        <title>Next.js Image Component</title>
      </Head>
      <main>
        <div>
          <Image src="/donut1.jpeg" alt="Donut" height={700} width={700} />
          <Image src="/donut2.jpeg" alt="Donut" height={700} width={700} />
          <Image src="/donut3.jpeg" alt="Donut" height={700} width={700} />
        </div>
      </main>
    </div>
  );
}

We can reload our page and take a second look at the DevTools network panel. Now only one image is loaded by default (the only one visible in the viewport); it’s significantly smaller than the original (~33KB vs 325KB) and uses a more modern format (WebP).

    Note: Next.js will optimize images dynamically upon request and store them in a local cache. The optimized image then gets served for subsequent requests until an expiration is reached.

    Next.js can also generate several versions of the image to serve media to smaller screens at the right size. When loading the page under mobile emulation (a Pixel phone), an even smaller 16KB image gets served for our first image.

    When a user scrolls down the page, the additional images are lazy-loaded in. Note how no additional configuration or tweaking was needed here — the component just did the right thing by default.

    The performance optimizations offered by the Next.js image component can help improve Largest Contentful Paint. To learn more about the component, including the different layout modes it supports, check out the Next.js documentation. A component with similar capabilities is available for Nuxt.js.

    What Are Examples Of Businesses Improving LCP Via Image Optimizations?

    Vodafone found that a 31% improvement in LCP increased sales by 8%. Their optimizations to improve LCP included resizing their hero image, optimizing SVGs and using media queries to limit loading offscreen images.

    Agrofy found that 70% improvement in LCP correlated to a 76% reduction in load abandonment. Their optimizations to LCP included a 2.5s saving from switching their first large image from being behind JavaScript (client-side hydration) to being directly in the main HTML document.

French fashion house Chloé used link preload to preload their 1x and 2x hero images, which were previously bottlenecked by a render-blocking script. This improved their Largest Contentful Paint by 500ms based on Chrome UX Report data over 28 days.

Optimizations to Cumulative Layout Shift helped YAHOO! Japan increase its News page views per session by 15%. They determined shifts were caused after their hero images were loaded and snapped in for the first view. They used aspect ratio boxes to reserve space before their image was loaded.
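As an illustrative sketch (not Yahoo!’s actual code), space for a hero image can be reserved with the modern CSS aspect-ratio property; the selector and the 16:9 ratio below are assumptions:

```css
/* Reserve the hero image's box before it loads, so content
   below it doesn't shift when the image arrives. */
.hero img {
  width: 100%;
  height: auto;
  aspect-ratio: 16 / 9;
}
```

Setting explicit width and height attributes on the <img> itself achieves the same effect in modern browsers, which derive an aspect ratio from them.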

    Lab Data Is Only Part Of The Puzzle. You Also Need Field Data.

    Before we go, I’d love to share a quick reminder about the importance of looking at the image experience your real users might have. Lab tools like Lighthouse measure performance in a synthetic (emulated mobile) environment limited to page load, while field or real-world data (e.g. RUM, Chrome UX Report) are based on real users throughout the lifetime of the page.

    It’s important to check how realistic your lab results are by comparing them against typical users in the field. For example, if your lab CLS is much lower than the 75th percentile CLS in the field, you may not be measuring layout shifts as real users are experiencing them.

    CLS is measured during the full lifespan of a page, so user behavior (interactions, scrolls, clicks) can have an impact on the elements that are shifting. For images, lab CLS may not see an improvement from fixing missing image dimensions if the images happen to be far down a page requiring a scroll. This is just one place where it’s worthwhile consulting real-user data.

    For LCP it is very possible that the candidate element can change depending on factors such as load times (the LCP candidate could initially be a paragraph of text and then a large hero image), personalization or even different screen resolutions. Lighthouse’s LCP audit is a good place to start, but do take a look at what real users see to get the full picture.

    Whenever possible, try to configure your lab tests to reflect real-user access and behavior patterns. Philip Walton has an excellent guide on debugging Web Vitals in the field worth checking for more details.

    Editorial Note: Addy's Book on Image Optimization

We’re happy and honored to have teamed up with Addy to publish a dedicated book on image optimization, and the book is now finally here. It covers modern approaches to image compression and image delivery, current and emerging image formats, how browsers load, decode and render images, CDNs, lazy-loading, adaptive media loading and how to optimize for Core Web Vitals. Everything in one single 528-page book. Download a free PDF sample (12MB).

    Get the book right away — and thank you for your kind support, everyone! ❤️

    Now that we’ve covered the foundations of the modern <img> tag, check out the pre-release of the Image Optimization book to get a deeper understanding of how images can impact performance and UX.

    Throughout the book, we will cover advanced image optimization techniques that expand heavily on the above guidance, as well as how to best use elements like <img> and <picture> to make your images on the web shine.

    You may also be interested in reading Malte Ubl’s guide to image optimization, Jake Archibald’s guide to the AVIF format and Katie Hempenius’ excellent guidance on

    With thanks to Yoav Weiss, Rick Viscomi and Vitaly for their reviews.

    Categories: Design

    Meet Image Optimization, A New Smashing Book By Addy Osmani

    Wed, 04/28/2021 - 06:20

    Images have been a key part of the web for decades. Our brains interpret images much faster than text, which is why high-quality visuals drive conversions and user engagement. Just think about landing pages and product photos, feature panels and hero areas. To be effective, all these images need to be carefully orchestrated to appear on the screen fast — but as it turns out, loading images efficiently at scale isn’t a project for a quiet afternoon.

    Image optimization, loading behavior and rendering in the browser require understanding of image formats and image compression techniques, image decoding and browser rendering, image CDNs and adaptive media loading, not to mention effective caching and preloading. Let’s figure it all out.

For a thorough guide on image optimization, we’ve teamed up with Addy Osmani, an engineering manager working on Google Chrome who has been focusing on performance for decades. The result is a book with everything you need to optimize your images and display them swiftly, without being slowed down along the way.

    About The Book

    Next to videos, images are the heaviest, most requested assets on the web. Some of the biggest size savings and performance improvements can come through a better image optimization strategy. So how do we encode, deploy, serve and maintain images over time? Our new book explores just that. Check free PDF preview (3MB).

    Meet our new book “Image Optimization” with everything you need to know to get images on the web right.

Addy’s new book focuses on what matters: modern approaches to image compression and image delivery, practical tools and techniques to automate optimization, responsive images, current and emerging image formats, how browsers load, decode and render images, CDNs, lazy-loading, adaptive media loading and how to optimize for Core Web Vitals. Everything in one single 528-page book.

    Image Optimization will help you deliver high-quality responsive images in the best format and size, and at the moment when your users need them the most. Packed with useful tips and techniques, the book has a sharp focus on practical application and longevity of your image optimization strategies. Jump to table of contents ↓

You’ll learn:
• image formats,
• image compression,
• how to improve image rendering,
• how to prepare images for a variety of resolutions,
• how to automate image optimization,
• image maintenance,
• lazy-loading and techniques like SQIP,
• image CDNs, and how to set one up,
• AVIF, JPEG XL, and HEIF, their pros and cons,
• Core Web Vitals and how to optimize for them,
• adaptive image delivery for network conditions.

A sneak peek inside the book, with 528 pages on everything image optimization.

Table Of Contents

Images help us tell a story and better engage with our customers. There is no shortage of high-quality photos around us, but how do we encode, deploy, serve and maintain them?

    The 23 chapters of our shiny new book explore just that.

1. The Humble <img> Element

    The humble <img> element has gained some superpowers over the years. Given how central it is to image optimization on the web, let’s catch up on what it can do.

2. Optimizing Image Quality

    Most optimization tools allow you to set the level of quality you’re happy with. Lower quality reduces file size but can introduce artifacts, halos, or blocky degrading.

3. Comparing Image Formats

    Each new image format has a range of effectiveness if you compare and align on a consistent “quality” of experience. In this chapter, we will explore the challenge of defining quality and setting expectations when converting between formats.

4. Color Management

    Ideally, every screen and web browser in the world would display color in exactly the same way. Unfortunately, they don’t. Color management allows us to reach a compromise on displaying color through color models, spaces, and profiles.

5. Image Decoding Performance

    How quickly an image can be decoded determines how soon browsers can show it to the user. Keeping this efficient helps ensure a good user experience. Let’s dig deeper into image decoding to understand how browsers perform behind the scenes and how you can control decoding.

6. Measuring Image Performance

    In this chapter, we will look into how to use Lighthouse to audit for unoptimized images and how to monitor a web performance budget for images.

7. JPEG

    The JPEG may well be the world’s most widely used image format. Let’s examine JPEG’s compression modes as these can have a significant impact on perceived performance.

8. PNG

    From the very basics to palette modes, index and alpha transparency, and compression tips, in this chapter we’ll take an in-depth look at PNGs.

9. WebP

    WebP is a relatively recent image format developed with the aim of offering lower file sizes for lossless and lossy compression at an acceptable visual quality. Let’s explore how to use WebP images in production today.

10. SVG

    There are a number of ways to implement SVGs in a web page. In this chapter, we’ll take a look at different approaches and at how to keep your SVGs lean and performant.

11. Responsive Images

    Using responsive images is a key part of delivering fully responsive web design. This chapter covers techniques for defining responsive images.

12. Progressive Rendering Techniques

    There are many progressive image loading techniques that can shorten perceived load time. In this chapter, we’ll look at different ways of progressively loading images to improve performance and the user experience.

13. Optimizing Network Requests with Caching and Preloading

    Downloading files such as images or videos over the network can be slow and costly. HTTP caching, service workers, image spriting, and preloading help optimize network requests. Let’s explore how they work.

14. Lazy-Loading Offscreen Images

    Web pages often contain a large number of images, contributing to page bloat, data costs, and how fast a page can load. Let’s take a look at how lazy-loading offscreen images can help improve the situation.

15. Replacing Animated GIFs

    If you’re aiming to improve the loading performance of your pages, animated GIFs aren’t very compatible with that goal. But this is an area of loading performance where, without a lot of work, you can get significant gains without a loss of content quality.

16. Image Content Delivery Networks

    For sites with large traffic and a global reach, basic optimizations at build time are usually not enough. CDNs help teams to handle known static assets ahead of time as well as dynamic assets to improve load times and site reliability.

17. HEIF and HEIC

    While other image formats may offer broader compatibility, HEIF is worth being familiar with as you may have users wishing to work with this format on iOS.

18. AVIF

    AVIF aims to produce high-quality compressed images that lose as little quality as possible during compression. Let’s dive deeper into its features, browser support, and performance.

19. JPEG XL

    JPEG XL is an advanced image format aiming to deliver very significant compression benefits over JPEG. In this chapter, we’ll take a closer look at what it has to offer.

20. Comparing New Image File Formats

    While the new image formats support roughly the same sets of capabilities, the strength of each differentiates between them. The tables in this chapter aim to offer a summary of some of the more important features and how well each format handles various image types.

21. Delivering Light Media Experiences with Data Saver

    Browsers with Data Saver features give users a chance to explicitly tell us that they want to use a site without consuming so much data. Let’s see how to make use of Data Saver to deliver light media experiences.

22. Optimize Images for Core Web Vitals

    Which metrics should you focus on to improve the user experience? Core Web Vitals focuses on three key aspects of user experience: page loading experience, interaction readiness, and the visual stability of the page. Let’s take a look at each one of them.

23. Twitter’s Image Pipeline (Case Study)

    Tweets are often accompanied by images to illustrate, amuse, and increase user engagement. That is why Twitter places so much emphasis on a strong image optimization process. This case study focuses on the different steps that Twitter has taken to load images faster while ensuring they are as impactful as intended.

    528 pages. The eBook is available right away (PDF, ePUB, Amazon Kindle). Shipping of the printed book will start late May. Written by Addy Osmani. Designed by Espen Brunborg and Nadia Snopek.

    About the Author

    Addy Osmani is an engineering manager working on Google Chrome. His team focuses on speed, helping keep the web fast. Devoted to the open-source community, Addy’s past open-source contributions include Lighthouse (an auditing tool for performance and web best practices), Workbox (libraries for offline-caching files using service workers), Yeoman (the web scaffolding tool), Critical (the critical-path CSS optimization tool), and TodoMVC. Addy is the author of the book Learning JavaScript Design Patterns.

Reviews and Testimonials

“An incredibly comprehensive overview of image optimization. This book will teach you everything you need to know about delivering effective and performant images on the web.”

Katie Hempenius, Google

“Optimizing image delivery is key to building high-performance web apps. This book explains everything developers should know about choosing the right image format, compressing image assets — and more!”

Mathias Bynens, Google

“Images are the heart and soul of the web; they help create that emotional connection with humans. Yet, it is really easy to ruin that experience through slow loading or worse, over quantizing the pixels and distorting images. Understanding how images work is essential for every engineer; the last thing we want is to deal with open bugs from bad creative or performance experiences.”

Colin Bendell, Shopify

Technical Details
    • ISBN: 978-3-945749-94-4 (print)
    • Quality hardcover, stitched binding, ribbon page marker.
    • Free worldwide airmail shipping from Germany starting late May. You can start reading the eBook right away.
    • eBook is already available as PDF, ePUB, and Amazon Kindle.
    • Get the book right away.
    Community Matters ❤️

    Producing a book takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful community. A huge shout-out to Smashing Members for the kind, ongoing support. The eBook is and always will be free for Smashing Members. Plus, Members get a friendly discount when purchasing their printed copy. Just sayin'! ;-)

    Stay smashing, and thank you for your ongoing support, everyone!

    More Smashing Books & Goodies

    Promoting best practices and providing you with practical tips to master your daily coding and design challenges has always been (and will be) at the core of everything we do at Smashing.

    In the past few years, we were very lucky to have worked together with some talented, caring people from the web community to publish their wealth of experience as printed books that stand the test of time. Paul and Adam are some of these people. Have you checked out their books already?

    TypeScript In 50 Lessons

    Everything you need to know about TypeScript, its type system, generics and its benefits.

    Add to cart $44

    Interface Design Checklists (PDF)

    100 practical cards for common interface design challenges.

    Add to cart $15

    Form Design Patterns

    A practical guide to designing and coding simple and inclusive forms.

    Add to cart $39

    Categories: Design

    Understanding Easing Functions For CSS Animations And Transitions

    Tue, 04/27/2021 - 05:00

    Have you ever noticed how smooth and delightful animations look on a well-made, professional project? I am reminded of the In Pieces website where animations are used not just for decoration, but they also convey the message about the endangered species in an impactful way. Not only is the animation design and style beautiful, but they also flow nicely and harmoniously. It is precisely that flow in combination with the design and presentation which makes the animation look stunning and natural. That is the power of easing functions, which are also called timing functions.

    Animation duration determines the amount of time for the animation to go from the first keyframe to the last. The following graph shows the connection between the animation keyframes and duration.

There are many ways in which an animation can progress between two keyframes. For example, an animation can have a constant speed, or it can move quickly at the start and slow down near the end, or move slowly at the start and then speed up until it reaches the end, etc. This rate, or speed, is defined with the easing functions (timing functions). If we take a look at the previous graph, the easing function is represented by the shape of the line connecting the two points. We’ve used the linear function (straight line) for the previous example, but we can also use a curve to connect the keyframes.

    As you can see, there are lots of possible options and variations for animation easing functions and we’ll take a look at them next.

    Types Of Easing Functions

    There are three main types of easing functions that can be used in CSS:

    • Linear functions (linear),
    • Cubic Bézier functions (includes ease, ease-in, ease-out and ease-in-out),
    • Staircase functions (steps).
    Linear Functions

    We’ve covered linear functions in one of the previous examples, so let’s do a quick recap. With the linear timing function, the animation is going through the keyframes at a constant speed. As you might already know, the linear timing function can be easily set in CSS by using the linear keyword.
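As a minimal sketch of a constant-speed animation (the class and keyframe names below are illustrative):

```css
/* The element moves at a constant rate for the whole duration. */
.loader {
  animation: slide 2s linear infinite;
}

@keyframes slide {
  from { transform: translateX(0); }
  to   { transform: translateX(200px); }
}
```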

Cubic Bézier Functions

A cubic Bézier timing function is described by four control points, P0 to P3. The first (P0) and last (P3) points are fixed to the start (initial animation state) and the end (final animation state) of the curve, as the animation needs to end on a specified keyframe and within the specified duration. With the two remaining points (P1 and P2), we can fine-tune the curve and easing of the function, resulting in non-linear animation speed.

    cubic-bezier(x1, y1, x2, y2)

    X coordinates (x1 and x2) represent time ratio and are limited to values between 0 and 1 (the animation cannot begin sooner or last longer than specified), while Y coordinates (y1 and y2) represent the animation output and their values, which are usually set somewhere between 0 and 1 but are not limited to that range. We can use the y1 and y2 values that are outside the 0 and 1 range to create bouncing effects.
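As a sketch of such a bouncing effect, here is a hypothetical card hover transition whose second Y value is greater than 1, so the motion overshoots its final position before settling (the control-point values and selector are just one illustrative choice):

```css
/* y2 = 1.56 pushes the curve past the final value, producing
   a slight elastic overshoot at the end of the transition. */
.card {
  transition: transform 0.5s cubic-bezier(0.34, 1.56, 0.64, 1);
}

.card:hover {
  transform: translateY(-8px);
}
```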

If the animation consists of several keyframes, defined in a CSS @keyframes rule, the easing function is applied to each segment between two adjacent keyframes. If we apply the ease-out function to an animation with 3 keyframes, the animation will start quickly and decelerate as it approaches the second keyframe, and the same motion will be repeated for the next pair of keyframes (the second keyframe and the last keyframe).
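A sketch of this in CSS, with illustrative names; note that ease-out is re-applied on each keyframe segment rather than once across the whole animation:

```css
.ball {
  animation: drop 2s ease-out;
}

/* ease-out runs twice: once from 0% to 50%, then again
   from 50% to 100%. */
@keyframes drop {
  0%   { transform: translateY(0); }
  50%  { transform: translateY(100px); }
  100% { transform: translateY(200px); }
}
```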

    The following example showcases how various jump terms affect the animation behavior. Various jump terms are applied to the 5-step animation with the same duration.
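A stepped animation with an explicit jump term could be declared as in the following sketch (the selector and step count are placeholders):

```css
.stepped {
  /* five equal intervals; the jump term controls where each jump occurs */
  animation-timing-function: steps(5, jump-end);
}

/*
  The available jump terms are jump-start, jump-end (the default),
  jump-none and jump-both, plus the older start and end keywords.
*/
```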

    Chrome, Safari and Firefox also offer a dedicated Animations tab in developer tools that offers a more detailed overview, including animation properties, duration, timeline, keyframes, delay, etc.

    Useful Tools And Websites

    There are plenty of useful online resources and easing presets that can give much more variety to easing functions.

    Popular online resources include the Easing Functions Cheat Sheet by Andrey Sitnik and Ivan Solovev and the CSS Easing Animation Tool by Matthew Lein. These tools offer a wide range of presets that you can use as a foundation for your easing function and then fine-tune the curve to fit your animation timeline.

    Animations & Accessibility

    When working with easing functions and animations in general, it’s important to address accessibility requirements. Some people prefer browsing the web with reduced motion, so we should provide a proper fallback. This can be easily done with widely-supported prefers-reduced-motion media query. This media query allows us to either remove the animation or assign a different animation based on user preference.

    .animated-element {
      animation: /* Regular animation */;
    }

    @media (prefers-reduced-motion) {
      .animated-element {
        /* Accessible animation with reduced motion */
      }
    }

    I’ve modified an analog clock example by Alvaro Montoro to include alternative animation for users with prefers-reduced-motion flag set.

    See the Pen CSS Analog Clock with prefers reduced motion by Adrian Bece.

    In the default animation, the seconds hand of the clock is constantly moving, which may cause difficulties for some users. We can make the animation much more accessible by changing the animation timing function to steps. In the following example, users with the prefers-reduced-motion flag set will see an animation where the seconds hand ticks every five seconds.

    @media (prefers-reduced-motion) {
      .arm.second {
        animation-timing-function: steps(12);
      }
    }

    Conclusion

    Easing functions, or timing functions, change the animation’s look and feel by affecting the animation rate (speed). Easing functions enable us to create animations that resemble natural motion, which can result in an improved, more delightful UX and leave a better impression on users. We’ve seen how we can use pre-defined values like linear, ease-out, ease, etc. to quickly add a timing function, and how to create custom easing functions with the cubic-bezier function for more impressive and impactful animations. We’ve also covered staircase (steps) functions, which are more rarely used but handy for creating “ticking” animations. When creating animations, it’s important to keep accessibility in mind and provide alternative, less distracting animations with reduced motion for users with the prefers-reduced-motion flag set.

    There are plenty of browser and online tools that can simplify and streamline creating custom easing functions, so creating animations with a beautiful flow is easier than ever. If you haven’t done so already, I would recommend experimenting with various easing functions and creating your own easing function library.

    Categories: Design

    How To Bake Layers Of Accessibility Testing Into Your Process

    Mon, 04/26/2021 - 04:00

    When building websites or apps, accessibility testing is critical to ensure that what you build will work for all your users. This includes users with disabilities and also people with temporary and situational limitations (like that coworker who broke their arm skiing or the customer who is outdoors on their phone with glare on the screen).

    We’re going to share how to "layer" accessibility testing by using a variety of tools and approaches at different stages in the digital product lifecycle to catch accessibility issues early — when it’s easier and cheaper to fix them. Taking a layered approach to testing your site for accessibility also improves the usability of your site — which in turn increases your customer base and reduces customer service inquiries. It can both make and save you money.

    We’ll use a layered cake analogy to talk about the different layers of accessibility testing and when to use them. Food analogies have become quite popular in the accessibility world!

    This approach has worked well for both of us. Mike is a seasoned accessibility advocate and senior strategist at a government technology firm (CivicActions), and Kate is the Head of Services at an accessibility testing platform (Fable).

    While Mike looks at accessibility testing from a more technical angle early in the development phase and scanning for compliance on live sites, Kate focuses on the user experience. Both of us realized that combining many types of accessibility testing throughout the product development life cycle is a powerful way to improve overall product accessibility. In this article, we’ll share some of the things we’ve learned.

    Most organizations approach accessibility in three main ways:

    1. Running tools to check your code and/or user interface.
      This is often referred to as “automated testing” because you use software to automatically test many accessibility issues at once.
    2. Using your computer in a way that is different than you normally do.
      For example, by not using a mouse, zooming your browser to 200%, or switching to Windows high contrast mode.
    3. Using assistive technology and users with disabilities to check for usability issues.
      This is often referred to as “manual testing” because it requires a person to evaluate accessibility issues.

    Far too many organizations rely exclusively on a single accessibility solution to validate their site. No one tool or process can give an organization the confidence that they are actually meeting the needs of the greatest possible number of people.

    How To Secure Buy-In For Accessibility

    In many organizations, in order to do accessibility testing, you’ll need executives to prioritize and support the work. Here are tips on how to make that happen if you don’t already have buy-in for accessibility:

    • Check if there is a legal requirement for your organization to be accessible.
      “Accessibility Act” and “Disability Act” are search terms that should pull up relevant laws in most countries. Sharing the legal risk can be the right incentive for some organizations.
    • Find out what your competitors are doing.
      Check for an accessibility statement on their websites. Most organizations are keen to stay ahead of the competition and knowing that others are prioritizing accessibility could do the trick.
    • Connect with customer service to find out if there are accessibility complaints.
      If possible, reach out to customers directly to hear about their experiences and share those stories with company leaders. Hearing about dissatisfied customers can be a huge motivator. If you can get permission from the customers, record a demo of them facing challenges with your products. A video like that can be very compelling.
    • Explain the financial costs and rewards.
      Many companies think they can’t afford to do accessibility, but it’s much more affordable when it’s integrated into the day-to-day work and not an afterthought. There’s also the potential revenue from people with disabilities — globally they represent more than 3 trillion dollars in disposable income.

    • Find the right champion.
      Chances are there’s already someone at the highest levels of the organization who cares about people and doing the right thing. This may be a Diversity and Inclusion lead, someone fighting for environmental sustainability, or other issues. Perhaps it’s someone with a disabled friend or family member. Making them aware of accessibility may be all that’s needed to add a new focus to their efforts.
    Gathering Your Ingredients

    Accessibility should be baked into your process as early as possible. One place to start is with the procurement process. You can incorporate accessibility as part of the review process for any technology systems you are buying or building. DisabilityIN has some excellent resources on accessible IT procurement.

    Looking for vendor accessibility statements or VPATs for products can help, but so can doing a quick review with some of the tools mentioned in the recipe below. Not all software is created equally, so you want to be sure you’re working with vendors who are actively contributing to tools and processes that help you prioritize accessibility from the start.

    Another way to bake in accessibility early, if you’re creating or updating a design system, is to choose a component library that has been built with accessibility in mind. Look for libraries with a clear accessibility statement and an open issue queue that allows you to review problems.


    • The Angular Components team has built accessibility into the Material UI library. For example, the radio button component uses a radio group with an aria-label. Each radio button reads as checked or not checked to a screen reader user, the buttons can be selected using the arrow keys like standard HTML radio buttons, and the focused state is clearly visible.
    • Reakit for React describes an accessibility warning feature on their accessibility page that will tell developers when an aria-label is needed.
    • The Lion accessible web components library uses an a11y label to tag accessibility issues in GitHub so you can see what’s being improved and open your own issue if needed.

    Another way to embed accessibility into your process is to update one of the personas your team uses to include disabilities. Many people have more than one disability, so creating at least one persona with several disabilities will ensure you keep that audience top-of-mind in all your early design work.

    To flesh out that persona, talk to people with real disabilities — including temporary and situational limitations — to help you understand how they use technology, sites, and apps in the real world. One in five people have a permanent disability, but 100% of the population will be faced with vision, hearing, motion, or cognitive disabilities at some point in their lives. Our personas can reflect:

    • people with allergies, insomnia, or broken bones;
    • people using outdated technology or using their computers outside; or even
    • people who change their technology use according to their location (for example, disabling images when they need to save internet bandwidth).

    Little changes like these can have a big impact on how your team thinks. One way to pitch this change to leadership and teams is to talk about how it will make your personas more reflective of your actual users — which is the whole point of personas. They must be realistic.

    One of the most impactful ways to involve people with disabilities is to have them help co-design services and products. Australia has a free training toolkit on how to do co-design with people with disabilities. There’s also a great case study on how one company ran co-design sessions with people with learning disabilities on behalf of the UK government.

    Legacy IT

    Whether we like it or not, most of the decisions about organizational IT were made months (if not years) ago. Even when you are in the heat of a procurement process, accessibility is typically just one of many considerations. This is to be expected — even in organizations that are passionate about accessibility.

    With legacy technology, the first step is simply to raise awareness with the vendor or team about the importance of accessibility. If you can detail accessibility issues that you want to be fixed using automated tools, it can help adjust how a vendor ranks their issue queue. There isn’t always a community portal to post concerns like this, but there might be a community on Twitter or Reddit where you could bring issues to light.

    Additionally, there might be a customizable theme that could be adjusted to address some of the concerns. Some solutions may offer an Application Programming Interface (API) that would allow a developer to build an accessible user interface around it.

    If a vendor has competitors, it can be useful to highlight the accessibility features that are included in that product. It can be beneficial to remind vendors that you do have options.

    If legacy IT is an internally built product, a good way to quickly evaluate it is using the keyboard only. If you can’t use the product with a keyboard (for example, there’s no visible focus or the UI is only mouse clickable), it’s likely going to be a lot of effort to improve the accessibility of the product.

    Consider offering alternative ways to access the service (e.g. phone support, in-person service, or email) so that people who can’t access the product digitally because of accessibility barriers can still get what they need.

    Think about the organizational roadmap and when it might be feasible to upgrade or retire the product and weigh the cost and effort of accessibility against that. If you have other, newer products that aren’t accessible, it might be more productive to focus your efforts on those products if a legacy tool is nearing the end of life.

    The Recipe

    Here is an example of a comprehensive accessibility testing approach, with five layers for a really delicious accessibility testing cake. Figure out what your budget is going to be and then price out all the various testing approaches. Some are free, others will cost money. In the next section, we provide advice on where to start if all these layers of testing won’t fit in your budget.

    1. Research User Needs
      Ensure the questionnaire that you use to screen potential research participants asks about assistive technology use. This will make it easy to integrate people with disabilities into your existing research process at no extra cost. If you don’t have luck finding participants this way, try reaching out to disability organizations.

      You can also modify your existing user personas to include users with disabilities. You can borrow aspects of user profiles from Gov.UK if you need to do this quickly and cheaply. If you have the budget for it, add people with disabilities into prototype and design reviews. This may be easiest to do if you engage a vendor that offers this type of service, hence the need for a budget. Alternatively, you can pay participants directly.

    2. Refine Your Process
      Encourage developers, designers, and content authors to include accessibility checks as part of their process. Here are ways to do that using free automated testing tools:
      • Download free browser extensions/plug-ins to do page specific testing for design reviews (WAVE or Accessibility Insights)
      • If you use continuous integration testing as part of the build pipeline for developers, make sure you are evaluating accessibility (there are free open-source tools for this like Axe Core and Pa11y)
      • Give content authors tools in the WYSIWYG interface to identify barriers that they have added (HTML Code Sniffer)
      • Ensure you are crawling your site regularly to catch accessibility issues. If possible, run crawlers in both staging and production environments (Purple Hats is a free open source option)
    3. Manual QA
      You don’t have to add extra people to do QA, just integrate it into your existing process. If you only do one thing, then stop using the mouse during your regular QA. You’ll catch accessibility bugs along with other functional bugs. If you want to do more, test with screen readers, and magnifiers too.

      Here are various ways you can do manual accessibility QA without purchasing any tools:

      • Can you access your site without your mouse? Use simple keyboard-only manual testing to evaluate new components and content.
      • Browse your site with magnification set to 200% or greater using the built-in magnification tools in your browser (Ctrl + +).
      • Flip your browser or OS to dark mode and see if your site works well for people with light sensitivity.
      • Perform sprint-level testing with developers and designers using assistive technology (VoiceOver, Microsoft Narrator, and NVDA are free options).
    4. User Testing
      In large corporate environments with a dedicated budget for accessibility, you can pay assistive technology users to test functionality on your staging environment before launch.

      Nothing gives you greater certainty that your product will work for people with disabilities than validating with users. Not even a perfect WCAG compliance score can give you that assurance the way a real person using the product can.

      People with disabilities are often asked to do work for free, which is problematic as many with disabilities are already at an economic disadvantage. If you’re working on a personal project and there’s no budget, look at your network and see if there are people who would be interested in helping in exchange for an equivalent favor.

    5. Specialist Review
      If your organization has an accessibility team, have them do User Acceptance Testing pre-release. This is where you can get detailed feedback on WCAG compliance that you may have missed in earlier steps.

      Think of it as a final check; your accessibility team isn’t doing all of the work on accessibility, everyone has a role to play. Accessibility teams are most effective when they set standards, provide training, give guidance and evaluate compliance. They support accessibility efforts but aren’t the only ones doing accessibility work. That way no one person or team becomes a bottleneck.

      If you don’t have a team, you can hire accessibility professionals to do the reviews prior to release.

    Where To Start

    Start where you are. The goal isn’t perfection, but ongoing improvement. Implementing all layers at once doesn’t have to be the goal. Rather, it’s about starting with one or two layers and then gradually adding more layers as your team gets better at accessibility testing. A small slice of cake is better than no cake.

    • If you are new to accessibility, start by adding a free browser extension to find accessibility issues and start by learning how to fix the errors that are displayed. WebAIM’s WAVE Toolbar is great for this.
    • Start sharing accessibility information that you have found useful. This could be just on Twitter or Reddit, but you could also start a newsletter to help raise awareness.
    • Sign up for webinars or events focused on accessibility so that you can learn more.
    • A team with a strong user-centered design approach might want to start with layer one: interviewing people with disabilities as part of user research.
    • A team with a strong IT compliance process might invest in tighter integration of automated testing in their continuous integration process or a site-wide crawler first.
    • Find ways to incorporate accessibility earlier in the design/development process.
    • Make sure you have meaningful accessibility statements which reflect your organization’s commitment to remove barriers to people with disabilities.
    • Build a champions network that allows a community of practice to grow and learn from each other.
    Limitations Of Automated Tools

    Every baker needs to have an arsenal of tools they can rely on. There are proprietary accessibility tools worth considering, but there are also excellent open-source tools including the ones we mentioned in the “recipe” above that are available for free.

    In modern dynamic sites, it is important to use automated tools to catch accessibility errors early before they are published to the live site. It’s also important to crawl the site to see that all the pages still comply after they’ve been published and continuously updated.

    The challenge is that designers and developers often assume that if the tests don’t report any errors, the site is good. When you give people a test, they tend to work toward passing it. Unfortunately, many designers and developers stop once they’ve eliminated the errors they see in WAVE or Axe.

    To be clear, it is a small fraction of teams that even do this, but if we want to make sites that are perceivable, operable, and understandable for more people using different types of technology, we have to do better.

    Automated tools are great but limited. Even the best available automated tools only catch about 30 to 40% of WCAG compliance accessibility errors. An automated tool can tell you if an image is missing an alternative description, but what it can’t tell you is if the description is entirely inaccurate or used in the wrong context and therefore useless. That still requires a person to evaluate.

    To get past these limits, it’s important to recognize that accessibility doesn’t automatically mean usability for people with disabilities. Think of accessibility as the lowest bar; it works with assistive technology, but to go beyond “it works” to “it’s enjoyable and easy to use” you’ll need to test with real users.

    Many organizations already do usability testing, but most don’t include people with disabilities. If you’re having trouble recruiting more diverse participants, consider working with an organization that has a community of assistive technology users and a platform to make testing quick and easy.

    Let’s Get Baking!

    Use a layered accessibility testing approach when you are working to build an inclusive website. Don’t rely on just one type of testing to find barriers for people with disabilities.

    • Test your ideas with assistive technology users early in the process
    • Integrate regular automated code checks into the process of building the site
    • Do manual testing using assistive technology as part of QA
    • Test with people with disabilities prior to launch
    • Perform comprehensive accessibility reviews on staging

    Remember the goal isn’t to score high in a testing tool, or even to meet a WCAG guideline, but rather to make your content more widely available, including to assistive technology users.

    Ultimately, accessibility statements are the icing on the cake. Include an accessibility statement with contact information on your site to provide a feedback loop. Your users are the experts and everyone should be part of making a site better over time.

    Categories: Design

    A Guide To Newly Supported, Modern CSS Pseudo-Class Selectors

    Fri, 04/23/2021 - 01:30

    Pseudo-class selectors are the ones that begin with the colon character “:” and match based on a state of the current element. The state may be relative to the document tree, or in response to a change of state such as :hover or :checked.


    :any-link

    Although defined in Selectors Level 4, this pseudo-class has had cross-browser support for quite some time. The :any-link pseudo-class will match an anchor hyperlink as long as it has an href. It will match in a way equivalent to matching both :link and :visited at once. Essentially, this may reduce your styles by one selector if you are adding basic properties, such as color, that you’d like to apply to all links regardless of their visited status.

    :any-link { color: blue; text-underline-offset: 0.05em; }

    An important note about specificity is that :any-link will win against a as a selector even if a is placed lower in the cascade since it has the specificity of a class. In the following example, the links will be purple:

    :any-link {
      color: purple;
    }

    a {
      color: red;
    }

    So if you introduce :any-link, be aware that you will need to include it on instances of a as a selector if they will be in direct competition for specificity.


    :focus-visible

    I’d bet that one of the most common accessibility violations across the web is removing the outline on interactive elements like links, buttons, and form inputs for their :focus state. One of the main purposes of that outline is to serve as a visual indicator for users who primarily use keyboards to navigate. A visible focus state is critical as a way-finding tool as those users tab across an interface and to help reinforce what is an interactive element. Specifically, the visible focus is covered in the WCAG Success Criterion 2.4.11: Focus Appearance (Minimum).

    The :focus-visible pseudo-class is intended to only show a focus ring when the user agent determines via heuristics that it should be visible. Put another way: browsers will determine when to apply :focus-visible based on things like input method, type of element, and context of the interaction. For testing purposes via a desktop computer with keyboard and mouse input, you should see :focus-visible styles attached when you tab into an interactive element but not when you click it, with the exception of text inputs and textareas which should show :focus-visible for all focus input types.

    Note: For more details, review the working draft of the :focus-visible spec.

    The latest versions of Firefox and Chromium browsers seem to now be handling :focus-visible on form inputs according to the spec which says that the UA should remove :focus styles when :focus-visible matches. Safari is not yet supporting :focus-visible so we need to ensure a :focus style is included as a fallback to avoid removing the outline for accessibility.

    Given a button and text input with the following set of styles, let’s see what happens:

    input:focus,
    button:focus {
      outline: 2px solid blue;
      outline-offset: 0.25em;
    }

    input:focus-visible {
      outline: 2px solid transparent;
      border-color: blue;
    }

    button:focus:not(:focus-visible) {
      outline: none;
    }

    button:focus-visible {
      outline: 2px solid transparent;
      box-shadow: 0 0 0 2px #fff, 0 0 0 4px blue;
    }

    Chromium and Firefox
    • input
      Correctly remove :focus styles when elements are focused via mouse input in favor of :focus-visible resulting in changing the border-color and hiding the outline on keyboard input
    • button
      Needs the extra button:focus:not(:focus-visible) rule to remove the outline on mouse input; with it in place, the box-shadow is visible only on keyboard input
    Safari
    • input
      Continues using only the :focus styles
    • button
      This seems to now be partially respecting the intent of :focus-visible on the button by hiding the :focus styles on click, but still showing the :focus styles on keyboard interaction

    So for now, the recommendation would be to continue including :focus styles and then progressively enhance up to using :focus-visible which the demo code allows. Here’s a CodePen for you to continue testing with:

    See the Pen Testing application of :focus-visible by Stephanie Eckles.


    :focus-within

    The :focus-within pseudo-class has support among all modern browsers, and acts almost like a parent selector but only for a very specific condition. When attached to a containing element and a child element matches for :focus, styles can be added to the containing element and/or any other elements within the container.

    A practical use of this behavior is styling a form label when the associated input has focus. For this to work, we wrap the label and input in a container, and then attach :focus-within to that container as well as selecting the label:

    .form-group:focus-within label { color: blue; }

    This results in the label turning blue when the input has focus.

    This CodePen demo also includes adding an outline directly to the .form-group container:

    See the Pen Testing application of :focus-within by Stephanie Eckles.


    :is()

    Also known as the “matches any” pseudo-class, :is() can take a list of selectors to try to match against. For example, instead of listing heading styles individually, you can group them under the selector of :is(h1, h2, h3).

    A couple of unique behaviors about the :is() selector list:

    • If a listed selector is invalid, the rule will continue to match the valid selectors. Given :is(-ua-invalid, article, p) the rule will match article and p.
    • The computed specificity will equal that of the passed selector with the highest specificity. For example, :is(#id, p) will have the specificity of the #id — 1.0.0 — whereas :is(p, a) will have a specificity of 0.0.1.

    The first behavior of ignoring invalid selectors is a key benefit. When using other selectors in a group where one selector is invalid, the browser will throw out the whole rule. This comes into play for a few instances where vendor prefixes are still necessary, and grouping prefixed and non-prefixed selectors causes the rule to fail among all browsers. With :is() you can safely group those styles and they will apply when they match and be ignored when they don’t.

    To me, grouping heading styles as previously mentioned is already a big win with this selector. It’s also the type of rule that I would feel comfortable using without a fallback when applying non-critical styles, such as:

    :is(h1, h2, h3) {
      line-height: 1.2;
    }

    :is(h2, h3):not(:first-child) {
      margin-top: 2em;
    }

    In this example (which comes from the document styles in my project SmolCSS), having the greater line-height inherited from base styles or lacking the margin-top is not really a problem for non-supporting browsers. It’s simply less than ideal. What you wouldn’t want to use :is() for quite yet would be critical layout styles such as Grid or Flex that significantly control your interface.

    Additionally, when chained to another selector, you can test whether the base selector matches a descendent selector within :is(). For example, the following rule selects only paragraphs that are direct children of articles. The universal selector is being used as a reference to the p base selector.

    p:is(article > *)

    For the best current support, if you’d like to start using it you’ll also want to double-up on styles by including duplicate rules using :-webkit-any() and :matches(). Remember to make these individual rules, or even the supporting browser will throw it out! In other words, include all of the following:

    :matches(h1, h2, h3) { }
    :-webkit-any(h1, h2, h3) { }
    :is(h1, h2, h3) { }

    Worth mentioning at this point is that along with the newer selectors themselves is an updated variation of @supports which is @supports selector. This is also available as @supports not selector.

    Note: At present (of the modern browsers), only Safari does not support this at-rule.

    You could check for :is() support with something like the following, but you’d actually be losing out on supporting Safari since Safari supports :is() but doesn’t support @supports selector.

    @supports selector(:is(h1)) {
      :is(h1, h2, h3) {
        line-height: 1.1;
      }
    }

    :where()

    The pseudo-class :where() is almost identical to :is() except for one critical difference: it will always have zero-specificity. This has incredible implications for folks who are building frameworks, themes, and design systems. Using :where(), an author can set defaults and downstream developers can include overrides or extensions without specificity clashing.
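As an illustration (the selectors and values are hypothetical, not from any real framework), a theme could ship zero-specificity defaults that any downstream element selector overrides:

```css
/* theme default: wrapped in :where(), so specificity is zero */
:where(ul, ol) {
  padding-inline-start: 1.25em;
}

/* downstream override: a plain element selector (0,0,1) wins */
ul {
  padding-inline-start: 0;
}
```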

    Consider the following set of img styles. Using :where(), even with a higher specificity selector, the specificity remains zero. In the following example, which color border do you think the image will have?

    :where(article img:not(:first-child)) {
      border: 5px solid red;
    }

    :where(article) img {
      border: 5px solid green;
    }

    img {
      border: 5px solid orange;
    }

    The first rule has zero specificity since it’s wholly contained within :where(), so against the second rule, the second rule wins. Introducing the img element-only selector as the last rule, it’s going to win due to the cascade: it computes to the same specificity as the :where(article) img rule, since the :where() portion does not increase specificity. The image therefore gets the orange border.

    Using :where() alongside fallbacks is a little more difficult, precisely because of the zero-specificity feature, which is likely why you would choose it over :is() in the first place. Any fallback rules you add are likely to beat :where() due to its very nature. And :where() itself has better overall browser support than @supports selector(), so trying to use that at-rule to craft a fallback isn’t likely to provide much (if any) gain. Basically, be aware of the inability to correctly create fallbacks for :where(), and carefully check your own data to determine whether it’s safe to begin using it for your unique audience.

    You can further test :where() with the following CodePen that uses the img selectors from above:

    See the Pen Testing :where() specificity by Stephanie Eckles.

    Enhanced :not()

    The base :not() selector has been supported since Internet Explorer 9. But Selectors Level 4 enhances :not() by allowing it to take a selector list, just like :is() and :where().

    The following rules provide the same result in supporting browsers:

    article :not(h2):not(h3):not(h4) { margin-bottom: 1.5em; }
    article :not(h2, h3, h4) { margin-bottom: 1.5em; }

    The ability of :not() to accept a selector list has great modern browser support.

    As we saw with :is(), enhanced :not() can also contain a reference to the base selector as a descendant using *. This CodePen demonstrates this ability by selecting links that are not descendants of nav.

    See the Pen Testing :not() with a descendant selector by Stephanie Eckles.

    Bonus: The previous demo also includes an example of chaining :not() and :is() to select images that are not adjacent siblings of either h2 or h3 elements.

    Proposed but “at risk” — :has()

    The final pseudo-class is a very exciting proposal that no current browser implements, even experimentally: :has(). In fact, it is listed in the Selectors Level 4 Editor’s Draft as “at-risk”, which means it is recognized as having implementation difficulties and may be dropped from the recommendation.

    If implemented, :has() would essentially be the “parent selector” that many CSS folks have longed to have available. It would work with logic similar to a combination of both :focus-within and :is() with descendant selectors, where you are looking for the presence of descendants but the applied styling would be to the parent element.

    Given the following rule, if navigation contained a button, then the navigation would have decreased top and bottom padding:

    nav {
      padding: 0.75rem 0.25rem;
    }

    nav:has(button) {
      padding-top: 0.25rem;
      padding-bottom: 0.25rem;
    }

    Again, this is not currently implemented in any browser even experimentally — but it is fun to think about! Robin Rendle provided additional insights into this future selector over on CSS-Tricks.

    Honorable Mention from Level 3: :empty

    A useful pseudo-class you may have missed from Selectors Level 3 is :empty, which matches an element only when it has no children of any kind, including text nodes.

    The rule p:empty will match <p></p> but not <p>Hello</p>.

    One way you can use :empty is to hide elements that are perhaps placeholders for dynamic content that is populated with JavaScript. Perhaps you have a div that will receive search results, and when it’s populated it will have a border and some padding. But with no results yet, you don’t want it to take up space on the page. Using :empty you can hide it with:

    .search-results:empty { display: none; }

    You may be thinking about adding a message in the empty state and be tempted to add it with a pseudo-element and content. The pitfall here is that generated content may not be available to users of assistive technology, since screen readers are inconsistent about whether they expose it. In other words, to make sure a “no results” type of message is accessible, add it as a real element such as a paragraph (an aria-label would not be accessible on a hidden div).

    Resources for Learning About Selectors

    CSS has many more selectors inclusive of pseudo-classes. Here are a few more places to learn more about what’s available:

    Categories: Design

    Web Design Done Well: The Ordinary Made Extraordinary (Part 1)

    Thu, 04/22/2021 - 07:00

    Great ideas in web design come so thick and fast that it can be easy to miss them if you’re not careful. This series is a small antidote to that, piecing together splashes of inspiration that caught our eye. Whether it’s a mind-bending new feature or simply an old trick delivered with new elegance, they share the quality of making us think a little differently.

    I recently wrote a piece lauding the work of Saul Bass in the world of web design. One of his great gifts was making even the tiniest details beautiful. It is in that same spirit we kick off this series by honing in on website trends and features we’ve grown accustomed to being dull. As you’ll see, they needn’t be. The trick is often in the execution. Just about anything can be beautiful. Why aim for anything less?

    Glasgow International’s Pages Within Pages

    We’re used to plenty of scrolling these days, but the Glasgow International festival website has found a simple, clever way to scratch that itch while keeping pages short:

    On mobile, the same three sections form one big column. It’s a savvy solution to the mobile/desktop relationship, and a pretty stylish one too. (Shout out to the ‘Support’ button, which starts spinning when you hover on it.)

    The CSS behind this is suitably simple. The three sections sit inside a flex container, with all three sharing the values of overflow-y: auto; and height: 100vh; so that they always fit the desktop viewport. The really nice touch here is using scrollbar-width: none; to remove the scrollbars. Because the columns take up the whole screen, you intuitively work out the way the page works as soon as you move your mouse.

    Kenta Toshikura’s Dimension-Bending Portfolio

    A recent site of the week on Awwwards, this portfolio website by Japanese frontend developer Kenta Toshikura is simply breathtaking:

    If in doubt, the tendency is to lean towards flat, modular arrangements, but maybe we should be thinking in three dimensions a little more often. This is a fantastic example of lateral thinking transforming what could easily have been a column of boxes into something truly memorable.

    We may not all be equipped to do something quite this fancy (I’m certainly not) but it’s well worth remembering that web pages aren’t blank canvases so much as they are windows into alternate dimensions.

    Stripe Documentation Is The Teacher We All Want

    Documentation is all too often one of the first casualties of the Web’s mile-a-minute pace. It needn’t be. I have no qualms calling Stripe’s documentation beautiful:

    I’m sure most of us have ground through enough bad documentation to appreciate the effort put into this approach. Clear, hierarchical navigation for the content, bite-sized step-by-step copy, and of course the code snippets. Dynamic previews of code across multiple platforms and languages are above and beyond, but then why shouldn’t they be?

    There are few things more valuable — and more elusive — than quality learning resources. Stripe shows there is a world of possibilities online beyond the standard words on a page. I’ve shared this before (and I’ll share it again) but Write the Docs’ documentation guide is a smashing resource for presenting informative content in useful, dynamic ways.

    Max Böck’s Technicolor Dream

    There is an awful lot to like about Max Böck’s personal website, but for the purposes of this piece, I’m honing in on color schemes. Most websites have one color scheme.

    Light and dark is the new normal, but as Böck himself writes in his blog post about the theme switcher, only Siths deal in absolutes. Through the magic of CSS custom properties, the site switches between color schemes seamlessly. For a full breakdown of how it works, I heartily recommend reading the full post linked above. And for further reading on custom properties, Smashing has plenty too:

    The themes are named after Mario Kart 64 tracks, if you were wondering. Except Hacker News. That’s named after Hacker News, with the marvellous touch of adding ‘considered harmful’ to the end of every single Böck blog post title.

    It’s a fun twist on the traditional light/dark dichotomy, and also speaks to just how fluid sites can be nowadays. The same groundwork could allow you to adjust color schemes depending on where people are visiting the site from, for example.

    Overpass Sells Sales

    Sales isn’t exactly a sector that screams innovation, but credit where credit is due. Overpass’s carousels bounce and shrink and expand so smoothly that it almost feels like you’re interacting with something tactile, like a rubber band.

    Here, both the touch-action CSS property and the translate3d() function are used to great effect, making the cards container something that can be effectively dragged around the screen. When the container is grabbed, all cards use scale(0.95) to recede ever so slightly until the user lets go. It gives the carousel a lovely sense of depth and lightness.

    The audio clips are a nice touch. Multimedia integration has been a running theme in these examples. Always lay the accessibility groundwork, but be bold. At this stage the only real limits are those of our imaginations.

    E-Commerce Meets Long Form Storytelling On Mammut

    From Steve Jobs to Seth Godin, it is often said marketing is a storytelling game. This is something that a lot of e-commerce websites seem to have forgotten, each serving up page after page of glossy products floating in front of perfect white backgrounds. You can almost hear the sucking sound of conversion funnels trying to draw you in.

    It’s refreshing then to see a company like Mammut going all in on storytelling to sell its hiking products. Their long-form expedition articles are as immersive as the finest New York Times feature, with audio clips, maps, and, naturally, stunning photography. Mammut gear features heavily, of course, but it’s done in a way that’s tasteful. More importantly than that, it’s authentic.

    Although there is some super slick styling going on here that’s not why I’ve included it. In a way it’s incredible just how impersonal much of the Web feels these days, with e-commerce being a particularly egregious offender.

    This is the kind of thing people would share even if they had no interest in buying mountaineering gear. It’s superb content. Instagram influencer posts look like child’s play compared to this. Do those prompts to shop take you to the aforementioned squeaky clean e-commerce checkout? Naturally. But, by God do they earn it. Not everyone has the resources for something this cutting edge, but it shows that e-commerce doesn’t have to be sterile and lifeless.

    Axeptio Makes Its Cookies Palatable

    You can’t swing a cat without hitting a disclaimer pop-up these days. It’s bizarre, then, that so many of them are so ugly. More often than not, they feel tacked on and graceless. Now, to be fair, that’s because they are tacked on and graceless, but some genuinely are just there to Improve Your Browsing Experience™.

    Instead of treating its cookie pop-up like a bad odour, web consent solution provider Axeptio walks the walk by making them look stylish, and even rather charming. With GDPR (and basic decency) to think about, it’s essential to weave ethical design into a website’s fabric.

    A lovely touch is that it doesn’t actually pop up until users start moving around the site. Why bother people if they’re not even interested in the content? Notice as well that they’ve dropped the boilerplate cookie lingo in favor of something more conversational.

    Granted, this may not make the mundane ‘extraordinary’ exactly, but it does make it a whole lot classier. It’s a small touch, but one which makes an excellent first impression. Without even touching my mouse, I already have a sense of Axeptio’s attention to detail and commitment to quality. A blocky ‘We care about your privacy’ pop-up would have given a very different impression.

    As far as cookies and pop-ups are necessary, we may as well own them. The same applies to other unsexy staples of the modern web. Do legal consent forms, email signups, and privacy pages have to be ugly and evasive, or do we just need to think a little differently? Share your thoughts below!


    A Complete Guide To Incremental Static Regeneration (ISR) With Next.js

    Wed, 04/21/2021 - 06:55

    A year ago, Next.js 9.3 released support for Static Site Generation (SSG), making it the first hybrid framework. I’d been a happy Next.js user for a few years at that point, but this release made Next.js my new default solution. After working with Next.js extensively, I joined Vercel to help companies like Tripadvisor and Washington Post adopt and scale Next.js.

    In this article, I’d like to explore a new evolution of the Jamstack: Incremental Static Regeneration (ISR). Below you’ll find a guide to ISR — including use cases, demos and tradeoffs.

    The Problem with Static-Site Generation

    The idea behind the Jamstack is appealing: pre-rendered static pages which can be pushed to a CDN and globally available in seconds. Static content is fast, resilient to downtime, and immediately indexed by crawlers. But there are some issues.

    If you’ve adopted the Jamstack architecture while building a large-scale static site, you might be stuck waiting hours for your site to build. If you double the number of pages, the build time also doubles. Is it possible to statically generate millions of products with every deployment?

    Even if every page was statically generated in an unrealistic 1ms, it would still take hours to rebuild the entire site. For large web applications, choosing complete static-site generation is a non-starter. Large-scale teams need a more flexible, personalized, hybrid solution.

    Content Management Systems (CMS)

    For many teams, their site’s content is decoupled from the code. Using a Headless CMS allows content editors to publish changes without involving a developer. However, with traditional static sites, this process can be slow.

    Consider an e-commerce store with 100,000 products. Product prices change frequently. When a content editor changes the price of headphones from $100 to $75 as part of a promotion, their CMS uses a webhook to rebuild the entire site. It’s not feasible to wait hours for the new price to be reflected.

    Long builds with unnecessary computation might also incur additional expenses. Ideally, your application is intelligent enough to understand which products changed and incrementally update those pages without needing a full rebuild.

    Incremental Static Regeneration (ISR)

    Next.js allows you to create or update static pages after you’ve built your site. Incremental Static Regeneration (ISR) enables developers and content editors to use static-generation on a per-page basis, without needing to rebuild the entire site. With ISR, you can retain the benefits of static while scaling to millions of pages.

    Static pages can be generated at runtime (on-demand) instead of at build-time with ISR. Using analytics, A/B testing, or other metrics, you are equipped with the flexibility to make your own tradeoff on build times.

    Consider the e-commerce store from before with 100,000 products. At a realistic 50ms to statically generate each product page, this would take almost 2 hours without ISR. With ISR, we can choose from:

    • Faster Builds
      Generate the most popular 1,000 products at build-time. Requests made to other products will be a cache miss and statically generate on-demand: 1-minute builds.
    • Higher Cache Hit Rate
      Generate 10,000 products at build-time, ensuring more products are cached ahead of a user’s request: 8-minute builds.
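    The build-time estimates behind these options are simple arithmetic, which we can sketch. The figures below are the article’s illustrative numbers (100,000 products at roughly 50ms per page), not measurements; real builds add overhead on top of pure page generation.

    ```javascript
    // Back-of-the-envelope build times for the ISR scenarios above:
    // pages to pre-render at build-time × milliseconds per page.
    function buildMinutes(pages, msPerPage) {
      return (pages * msPerPage) / 1000 / 60;
    }

    console.log(buildMinutes(100_000, 50)); // full build: ~83 minutes of page generation alone
    console.log(buildMinutes(1_000, 50));   // top 1,000 only: under a minute
    console.log(buildMinutes(10_000, 50));  // 10,000 pages: ~8 minutes
    ```

    The tradeoff is exactly the one described above: fewer pages at build-time means faster deploys, more pages means a higher cache hit rate for the first visitors.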

    Let’s walk through an example of ISR for an e-commerce product page.

    Getting Started

    Fetching Data

    If you’ve never used Next.js before, I’d recommend reading Getting Started With Next.js to understand the basics. ISR uses the same Next.js API to generate static pages: getStaticProps. By specifying revalidate: 60, we inform Next.js to use ISR for this page.

    1. Next.js can define a revalidation time per page. Let’s set it at 60 seconds.
    2. The initial request to the product page will show the cached page with the original price.
    3. The data for the product is updated in the CMS.
    4. Any requests to the page after the initial request and before 60 seconds are cached and instantaneous.
    5. After the 60-second window, the next request will still show the cached (stale) page. Next.js triggers a regeneration of the page in the background.
    6. Once the page has been successfully generated, Next.js will invalidate the cache and show the updated product page. If the background regeneration fails, the old page remains unaltered.
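    The six-step timeline above is essentially a stale-while-revalidate cache. Here is a minimal in-memory sketch of that behaviour. To be clear, this is not Next.js’s internal implementation; createIsrCache and its render callback are made-up names used purely to illustrate the idea.

    ```javascript
    // Illustrative stale-while-revalidate cache mirroring the ISR timeline above.
    function createIsrCache(render, revalidateMs) {
      const cache = new Map(); // key -> { html, builtAt }

      return async function get(key) {
        const entry = cache.get(key);

        if (!entry) {
          // First request for this page: generate it and cache the result.
          const html = await render(key);
          cache.set(key, { html, builtAt: Date.now() });
          return html;
        }

        if (Date.now() - entry.builtAt > revalidateMs) {
          // Revalidation window has passed: serve the stale page immediately
          // and regenerate in the background (steps 5 and 6 above).
          render(key)
            .then((html) => cache.set(key, { html, builtAt: Date.now() }))
            .catch(() => {
              // Regeneration failed: the old page remains unaltered.
            });
        }

        return entry.html; // cached (possibly stale) page
      };
    }
    ```

    The key property, as in ISR, is that a request never waits on regeneration after the first build: stale content is served instantly while a fresh page is produced off the request path.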
    // pages/products/[id].js
    export async function getStaticProps({ params }) {
      return {
        props: {
          product: await getProductFromDatabase(params.id)
        },
        revalidate: 60
      }
    }

    Generating Paths

    Next.js defines which products to generate at build-time and which on-demand. Let’s only generate the most popular 1,000 products at build-time by providing getStaticPaths with a list of the top 1,000 product IDs.

    We need to configure how Next.js will “fallback” when requesting any of the other products after the initial build. There are two options to choose from: blocking and true.

    • fallback: blocking (preferred)
      When a request is made to a page that hasn’t been generated, Next.js will server-render the page on the first request. Future requests will serve the static file from the cache.
    • fallback: true
      When a request is made to a page that hasn’t been generated, Next.js will immediately serve a static page with a loading state on the first request. When the data is finished loading, the page will re-render with the new data and be cached. Future requests will serve the static file from the cache.
    // pages/products/[id].js
    export async function getStaticPaths() {
      const products = await getTop1000Products()
      const paths = products.map((product) => ({
        params: { id: product.id }
      }))

      return { paths, fallback: 'blocking' }
    }

    Tradeoffs

    Next.js focuses first and foremost on the end-user. The "best solution" is relative and varies by industry, audience, and the nature of the application. By allowing developers to shift between solutions without leaving the bounds of the framework, Next.js lets you pick the right tool for the project.

    Server-Side Rendering

    ISR isn’t always the right solution. For example, the Facebook news feed cannot show stale content. In this instance, you’d want to use SSR and potentially your own cache-control headers with surrogate keys to invalidate content. Since Next.js is a hybrid framework, you’re able to make that tradeoff yourself and stay within the framework.

    // You can cache SSR pages at the edge using Next.js
    // inside both getServerSideProps and API Routes
    res.setHeader('Cache-Control', 's-maxage=60, stale-while-revalidate');

    SSR and edge caching are similar to ISR (especially if using stale-while-revalidate caching headers), with the main difference being the first request. With ISR, the first request can be guaranteed static if pre-rendered. Even if your database goes down, or there’s an issue communicating with an API, your users will still see the properly served static page. However, SSR will allow you to customize your page based on the incoming request.

    Note: Using SSR without caching can lead to poor performance. Every millisecond matters when blocking the user from seeing your site, and this can have a dramatic effect on your TTFB (Time to First Byte).

    Static-Site Generation

    ISR doesn’t always make sense for small websites. If your revalidation period is larger than the time it takes to rebuild your entire site, you might as well use traditional static-site generation.

    Client-Side Rendering

    If you use React without Next.js, you’re using client-side rendering. Your application serves a loading state, followed by requesting data inside JavaScript on the client-side (e.g. useEffect). While this does increase your options for hosting (as there’s no server necessary), there are tradeoffs.

    The lack of pre-rendered content in the initial HTML leads to slower indexing and weaker Search Engine Optimization (SEO). It’s also not possible to use CSR with JavaScript disabled.

    ISR Fallback Options

    If your data can be fetched quickly, consider using fallback: blocking. Then, you don’t need to consider the loading state and your page will always show the same result (regardless of whether it’s cached or not). If your data fetching is slow, fallback: true allows you to immediately show a loading state to the user.

    ISR: Not Just Caching!

    While I’ve explained ISR through the context of a cache, it’s designed to persist your generated pages between deployments. This means that you’re able to roll back instantly and not lose your previously generated pages.

    Each deployment can be keyed by an ID, which Next.js uses to persist statically generated pages. When you roll back, you can update the key to point to the previous deployment, allowing for atomic deployments. This means that you can visit your previous immutable deployments and they’ll work as intended.

    Here’s an example of reverting code with ISR:

    • You push code and get deployment ID 123.
    • Your page contains a typo “Smshng Magazine”.
    • You update the page in the CMS. No re-deploy needed.
    • Once your page shows “Smashing Magazine”, it’s persisted in storage.
    • You push some bad code and get deployment ID 345.
    • You roll back to deployment ID 123.
    • You still see “Smashing Magazine”.

    Reverts and persisting static pages are outside the scope of Next.js and depend on your hosting provider. Note that ISR differs from server-rendering with Cache-Control headers because, by design, those caches expire. They are not shared across regions and will be purged when reverting.

    Examples of Incremental Static Regeneration

    Incremental Static Regeneration works well for e-commerce, marketing pages, blog posts, ad-backed media, and more.

    • E-commerce Demo
      Next.js Commerce is an all-in-one starter kit for high-performance e-commerce sites.
    • GitHub Reactions Demo
      React to the original GitHub issue and watch ISR update the statically generated landing page.
    • Static Tweets Demo
      This project deploys in 30 seconds, but can statically generate 500M tweets on-demand using ISR.
    Learn Next.js Today

    Developers and large teams are choosing Next.js for its hybrid approach and ability to incrementally generate pages on-demand. With ISR, you get the benefits of static with the flexibility of server-rendering. ISR works out of the box using next start.

    Next.js has been designed for gradual adoption. With Next.js, you can continue using your existing code and add as much (or as little) React as you need. By starting small and incrementally adding more pages, you can prevent derailing feature work by avoiding a complete rewrite. Learn more about Next.js — and happy coding, everyone!

    Further Reading