
How To Empower Design Teams By Measuring Value

By Dave Cunningham · 2020-01-24

The business value of design has been proven at scale by the McKinsey Design Index. The report shows the best design performers increased their revenues and shareholder returns at nearly twice the rate of their industry counterparts.

However, we still see designers struggling with these common problems:

  • Projects stop and start, so designers lose momentum and focus;
  • Potentially impactful design work isn’t put into production;
  • Designers are given solutions and have to try to bring projects back to the problem;
  • Stakeholders aren’t involved or are out of reach;
  • Business strategies and desired outcomes aren’t clear;
  • and the list goes on.

With the right people and enough rigor, patience, and pragmatism, these problems can be solved, but then we often start all over again on the next project.

We are great at measuring usability and analytics. We can show how we’ve increased conversion rates on our e-commerce store, or how far people scroll down a page. In doing so we have created a culture of the world’s best design brains thinking about how to get more people to click a button.

We need to change this: we need to start measuring the potential impact of design in a consistent way. We need a metric that people can see and want to move.

Meet DIET (Design Impact Evaluation Tactic)

DIET asks key questions that are fundamental to designers being impactful in their work. The designers’ answers give a DIET score at the key stages of the project.

Numbers are powerful: they give us an object to point at and discuss, and they keep the conversation focused. The meaning behind a number (the why, not the what) is what matters most to designers. Having the ingredients for good design baked into a number helps bridge the divide between design and business metrics.

Why We Focus On Product Metrics Instead

“The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can’t be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily really isn’t important. This is blindness. The fourth step is to say that what can’t be easily measured really doesn’t exist. This is suicide.”

— “Corporate Priorities: A continuing study of the new demands on business,” Daniel Yankelovich, 1972

Kristin Zibell’s table of UX metrics is a fantastic way to measure the customer experience, linking UX metrics directly to business goals.

Image source: Kristin Zibell

However, the potential value of design can’t be measured using a product metric such as conversion rate or time on task. What we are measuring here is the outcome of a product.

The product could be a square wheel or a chocolate fireguard; improving these products may show better metrics, but it is only a means to an end.

If we as designers are spending all our time trying to move a needle, how and where does innovation live in our work?

Measurement Gives Us A Language

Back in the 1700s, people were measuring temperature using cylinders of water and wine, which resulted in inaccurate measurements. It wasn’t until Fahrenheit came along and introduced constants to measure from that things improved.

In one of his early experiments with constants, Fahrenheit used a glass of icy water as the lower point of the scale, and his wife’s armpit as the upper point. Having the two constants meant he could measure temperatures against them. Fahrenheit went on to invent the mercury thermometer, and his scale is still used in the United States today.

Fahrenheit gives us a consistent language to talk about the measurement of things. “It’s 120 degrees Fahrenheit outside” instantly informs you that it’s going to be very hot today, so don’t forget to wear sunscreen.

So, Like Fahrenheit, Do We Have A Glass Of Icy Water And An Armpit?

Design lives and breathes in different environments in different ways. In larger organizations, projects often balloon, so they get ditched or pivot. Stakeholders or key decision-makers can be out of reach. Business strategy can be unclear and poorly communicated. Teams are fluid and can often be without the skills needed. All of these things are part of the ingredients that either prevent or allow design to thrive and grow.

Diagram showing how design is messy at the start but then things become clearer over time

The British Design Council suggested in 2005 that designers share similar approaches in their process, which it mapped out as the Double Diamond. The Double Diamond consists of a problem space and a solution space. The diamond shape represents divergent thinking (thinking broadly) and convergent thinking (narrowing things down).

With designers on different projects, I mapped the journey of a design project from start to finish.


Here are examples of 3 projects:

  • Team 1 skipped the problem stage entirely, as they didn’t have a researcher at the time. Time passed and eventually the project pivoted.
  • Team 2 followed a Lean UX process and their project evolved.
  • Team 3 used a Google sprint and their project got ditched.

So, some projects evolve or pivot and some get ditched, but we don’t record why.

Looking Beyond The Double Diamond

Often projects come to a design team in the form of a brief. This will be part of the business’s overall strategy. Market research and predictions of future trends inform this strategy space.

Before the Double Diamond we have a strategy segment, and after it a ship and backlog segment.

The ship and backlog space is often managed outside of the design team, and very few designers I spoke to understood how work got prioritized there.

The View From A Stakeholder

Stakeholders are likely to be looking after many projects, which live in the strategy chain.

The strategy segment is linked to many other strategy segments. Stakeholders often focus on delivering the strategy and want a direct path to delivery.

Projects in the strategy chain will be intrinsically linked, and require a holistic view of a business. So it’s not surprising that stakeholders just want to get things through to the delivery chain.

Strategically, the business will look into future trends and consider finance and risk. There will be a lot of technical input, too: Can we build this with our current infrastructure? Should we purchase a third-party solution? How can we reduce the risk and cost to the business?

So what ends up happening is that projects balloon, and time spent in the problem and solution spaces increases.

Projects often die or change direction in the problem and solution space

Designers get asked questions in show-and-tells and presentations, like “Why are we doing research? We already know this is true” and “Why is this taking so long?”

Ultimately, design projects often die in these scenarios, or they lead to poor outcomes.

A solution could be better communication and clearer strategies; shouldn’t all this be managed better in the boardroom? Perhaps, but teams change, and stakeholders come and go. We fight the same fights over and over. We don’t have a common language, like Fahrenheit’s, to describe design impact.

People value data at scale; data makes decision-making easier, as there is something to hang the decision on.

DIET can give you that data.

How DIET Works

At each stage of a project, the team answers five questions and is given a score out of 10. The questions are based on the foundations that designers need in order to be impactful in their work.

Scores are recorded at each stage of the project.

Stage One

The strategy stage is for when you first hear about a project.

  1. We understand the business need/outcome.
  2. We understand the user needs.
  3. The subject matter experts & stakeholders are involved.
  4. The constraints (financial, time & tech) of the project are clear.
  5. Do we have the skills needed?
Stage Two

The problem stage is when you have completed your research on this particular problem.

  1. We know how people currently solve the problem.
  2. The problem resonates with our users.
  3. We are solving the right problem.
  4. The subject matter experts & stakeholders are involved.
  5. Do we have the skills needed?
Stage Three

The solution stage is when you have designed and tested your solution.

  1. Our solution solved the problem.
  2. Our solution is commercially viable.
  3. Our solution is going to be put live.
  4. Have we recorded our learnings?
  5. Have we shared our story?

To be impactful we need to work within our constraints, for our projects to be viable. If we are consistently designing stuff that can’t go live, we just build design debt and frustration.

Here are examples of two projects that have DIET scores:

Outcomes of projects are shown to be poor when the DIET score is low; when it’s high, they are improved.

Project A had low scores all the way through and resulted in a poor outcome. Project B scored highly and the outcome is good.

Having teams score predicted impact and matching that against actual impact gives us a base to show how, why, and even whether our design foundations work. Once you know a constant exists, you can make predictions about things yet to be discovered.

Over time, we have a constant to measure from. We can use this constant to learn where and why design impact succeeds and why it fails.

  • Is attrition rate (staff turnover) linked to low scores?
  • Do low-scoring projects take more time?
  • Do low-scoring projects mean poor team health?
How To Use DIET
  • At each stage of the project (Strategy, Problem & Solution);
  • Every two weeks on a Monday or at the start of a sprint cycle;
  • In the retro at the end of the project.

To get started with DIET, you’ll need a copy of our DIET score Google Sheet to record your scores. You can then start with scoring the strategy.

What We Have Learned

The feedback loop is long, and the effort of keeping teams constantly tracking scores is challenging. However, we have seen these benefits:

  • DIET acts as a good early warning signal;
  • DIET helps teams share knowledge earlier and on a regular basis;
  • DIET gives people the confidence to speak to their managers.

We have a long way to go and are still learning. The project is open source and we’d love to hear your thoughts and get your input to learn more.

One Voice For Design

Good communication across all levels is a common trait in successful teams. Having a tool which gives a constant to communicate design impact at scale gives consistency to our communications. Is DIET a way to do this?


A Complete Guide To WordPress Multisite

By Manish Dudharejia · 2020-01-23

WordPress Multisite is a popular feature of WordPress, which enables you to create and run multiple websites using the same WordPress installation on your server. In other words, you can manage several different WordPress websites from a single dashboard.

However, people are sometimes unsure of how to use this feature. This guide will help to clear up questions related to what WordPress Multisite is, who needs it, and how to install it.

Let's start with the basics.

1. What Is WordPress Multisite?

WordPress Multisite is a feature that allows you to create and run multiple WordPress websites from a single WordPress dashboard. It was previously called WordPress Multi-User or WPMU. WordPress Multisite is not a new feature. It is an advanced feature on the WordPress platform that has been around since the launch of WordPress 3.0. You can use it for a variety of purposes, such as updating all of your websites with a single click or charging your subscribers to create a website on your Multisite network.

2. Key Features Of WordPress Multisite

WordPress Multisite comes with various unique features. For starters, you can run a network of blogs and websites from a single WordPress installation. It enables you to create a network of subdomains, like http://john.example.com, or directories, like http://www.example.com/john/. Alternatively, you can also have a separate domain for each website on the network. It is also easier to replicate functionality across a network of websites.

In WordPress Multisite, you can control the entire network as a Super Admin. As a regular website admin, you can control only one website on the network. As a Super Admin, you control the accessibility of users who want to create an account and set up WordPress blogs or websites of their own.

A Super Admin can install new themes and plugins, make them available to the websites on the network, and also customize the themes for all websites. Another feature is the ability to create websites and online shops intended for specific languages, regions, and currencies.

Both the Super Admin and the website admin can control content. While this control extends over the entire network for a Super Admin, the website admin has the right to choose which content from the main domain gets displayed on their respective website. Plugins are also under the control of a Super Admin. However, a website admin can activate and deactivate plugins on their website if required.

3. Who Should And Shouldn’t Use WordPress Multisite?

Although WordPress Multisite offers several features, it is not always the right choice. The main concern is that the websites on a Multisite network would share the same database. In other words, you can’t back up only a single website. That’s why all of the websites on a network must belong to the same principal domain.

Let me explain with an example. A university could use WordPress Multisite to build different websites for each department, for student and faculty member blogs, and for forums. Because the websites would share their database with the university's main domain, they would be easier to manage on a Multisite network.

Likewise, banks and financial institutions with a national or global network of branches, digital publications with multiple content sections, government offices with multiple departments, hotel chains, stores with multiple outlets, e-commerce companies, and website design companies such as Wix could also use a Multisite network to their advantage.

However, a web designer couldn’t use Multisite to manage several unrelated client projects. If one of the clients decided to move their website elsewhere, it would be a problem because the website would be sharing its database with others on the network. Multisite makes it difficult to back up an individual website on the network. You would be better off using a single installation in this case.

4. Pros And Cons Of WordPress Multisite

Now that we know who should and shouldn’t use WordPress Multisite, let's look at the technical pros and cons. You’ll need to weigh them carefully before making a decision.

Pros
  • The main advantage is the ability to manage multiple websites from a single dashboard. This is useful if you are running multiple websites managed by different teams under one parent domain, such as an e-commerce store with different country-specific sub-sites.
  • You can also assign a different admin to each website on your network.
  • With a single download, you can install and activate plugins and themes for all of the websites on your network.
  • You can also manage updates with a single master installation for all of the websites on your network.
Cons
  • Because all of the websites share the same network resources, they will all go down if the network goes down.
  • A sudden increase in traffic to one website will affect all others on the network. Unfortunately, beginners often find it difficult to manage traffic and server resources on a Multisite network.
  • Similarly, if one website gets hacked, the entire network will get compromised.
  • Not all WordPress plugins support a Multisite network.
  • Likewise, not all web hosting providers have the tools necessary to support a Multisite network.
  • If your hosting provider lacks the server requirements, you won't be able to use the Multisite feature. For example, some hosting providers might not allow you to add a domain to the same hosting server. In that case, you might need to change or upgrade your hosting plan or change providers.
5. Requirements For WordPress Multisite

Knowing the technical pros and cons, you should now be able to decide whether Multisite is the right option for you. If you are going to use it, you will need to meet a few technical requirements first.

One of the first things you will need is a web hosting service provider that can handle multiple domains in a single web hosting plan. Although you could use shared hosting for a couple of websites with low traffic, you should use VPS hosting or a dedicated server, owing to the nature of the WordPress Multisite network.

You will also need to have the fundamental knowledge of how to install WordPress. It would be an added advantage if you already have a WordPress installation. However, you will need to back it up. You will also need to deactivate all of the plugins.

Make sure you have FTP access. You will need to know the basics of editing files using FTP as well. Finally, you will need to activate pretty permalinks. In other words, your URLs should look not like http://example.com/?p=2345, but like http://example.com/my-page.

6. Multisite Domain Mapping

By default, you can create additional websites on your Multisite network as subdomains or subfolders of the main website. They look like this:

subsite.network.com

or like this:

network.com/subsite

However, you might not always want this, because you will be required to create a unique domain name for each website. That's where domain mapping comes to the rescue. You can use this feature within the Multisite network to map additional websites to show as domain.com. Using domain mapping, this is what you will see:

subsite.network.com = domain.com

or:

network.com/subsite = domain.com

Prior to WordPress 4.5, you had to use a domain mapping plugin to map the additional websites. However, in version 4.5+, domain mapping is a native feature.

7. Multisite Hosting And SSL

As you probably know, Secure Sockets Layer (SSL) enables you to transport data over the internet securely. The data remains undecipherable to malicious users, bots, and hackers.

However, some hosting providers offer free SSL certification for the main domain only. You might need to buy it separately for each subdomain. If one of the websites on your multisite network lacks SSL certification, it will compromise the security of all the other websites. Thus, ensure that all websites on your WordPress Multisite network have SSL certificates.

8. Installing And Setting Up WordPress Multisite For New And Existing Websites

First, you will need to install WordPress. Once it’s installed, you will need to enable the Multisite feature. You can also enable it on your existing WordPress website. Before doing so, however, back up your website.

  • Use an FTP client or the cPanel file manager to connect with your website, and open the wp-config.php file for editing.
  • Add the following code to your wp-config.php file, just before the comment line that begins with /*:

/* Multisite */
define( 'WP_ALLOW_MULTISITE', true );
  • Now, save and upload your wp-config.php file back to the server.
  • That’s all!

Next, you will need to set up the Multisite network. If you are already logged into your WordPress dashboard, refresh the page to continue with the next steps. If not, you will need to log in again.

  • When setting up the Multisite network on your existing website, you will need to deactivate all plugins. Go to the “Plugins” » “Installed Plugins” page, and select all plugins. Select the “Deactivate” option from the “Bulk Actions” dropdown menu, and click “Apply”.
  • Now, go to “Tools” » “Network Setup”. If you see a notice that you need Apache’s mod_rewrite module installed on your server, don't be alarmed. All leading WordPress hosting providers keep this module enabled.
  • Choose the domain structure for websites on your network, either subdomains or subdirectories.
  • Add a title for your network.
  • Make sure that the email address for the network admin is correct.
  • Click the “Install” button.
  • You will see some code that you have to add to the wp-config.php and .htaccess files, respectively. Use an FTP client or the file manager in cPanel to copy and paste the code. (A typical example is sketched just after this list.)
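
The exact values are generated for your specific installation; as an illustration only, the wp-config.php additions typically look something like the following, where example.com and the subdirectory choice are placeholders for your own domain and structure:

define( 'MULTISITE', true );
define( 'SUBDOMAIN_INSTALL', false );
define( 'DOMAIN_CURRENT_SITE', 'example.com' );
define( 'PATH_CURRENT_SITE', '/' );
define( 'SITE_ID_CURRENT_SITE', 1 );
define( 'BLOG_ID_CURRENT_SITE', 1 );

The .htaccess rewrite rules differ between subdomain and subdirectory installs, so copy those exactly as WordPress displays them.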

The setup is complete. You will need to log in again to access your Multisite network.

9. WordPress Multisite Configuration And Other Settings

Hold on! You still need to configure the network settings, for which you will need to switch to the Multisite network dashboard.

  • Open the “My Sites” menu in the admin toolbar. Click the “Network Admin” option, and then click the “Dashboard” option to go to the Multisite network dashboard.
  • Click the “Settings” option in the admin sidebar. You will see your website’s title and the admin’s email address. Make sure they are correct before moving on to a few essential configuration settings.
A. Registration Settings

This setting enables you to open your website to user registration and allows existing users to create new websites on your network. Check the appropriate box.

If you check the “Registration Notification” box, you will receive an email notification whenever a new user or website gets registered. Check the “Add New Users” option to enable individual website administrators to add new users to their own websites.

Use the “Limited Email Registration” option to restrict registration to a specific domain. For example, allow only users from your company to register with your website. Likewise, you can also prevent some domains from being registered.

Registration Settings.

B. New Website’s Settings

Here, you can configure the default options, such as welcome emails and the contents of the first default post, page, and comment, for every new website built on your Multisite network. You can update these settings anytime.

New Site Settings.

C. Upload Settings

You can limit the total amount of space each website on your network can use for uploads. This will help you to delegate server resources judiciously. The default value is 100 MB. You can also set the type of files that users can add to their websites, such as images, .doc, .docx, and .odt files, audio and video files, and PDFs. You can also set a size limit for individual files.

Upload Settings.

D. Menu Settings

This setting enables the administrative menu for the plugins section of your network’s websites. Once you enable this setting, users will be able to activate and deactivate plugins, but won’t be able to add new ones. Click “Save Changes” to apply the changes you have made.

10. Resources: Setting Up Themes And Plugins

Because individual website administrators can’t install themes and plugins on their own, you will need to set them up on the network.

A. Themes

Go to “My Sites” » “Network Admin” » “Themes”.

On this page, you will see a list of the themes currently installed. Use the following settings to make your desired changes.

  • “Network Enable”: Make the theme available to website administrators.
  • “Network Disable”: Disable a theme that you have previously made available.
  • “Add New”: Install a new theme on your network.
Change a Default Theme

Add the following code to your wp-config.php file to change the default theme for new websites (replacing your-theme with the name of the theme’s folder):

// Setting default theme for new sites
define( 'WP_DEFAULT_THEME', 'your-theme' );

B. Plugins

Go to “My Sites” » “Network Admin” » “Plugins”.

Click the “Network Activate” option below each plugin to add it to your network. Remember that if you have already enabled the “Plugins Menu” option for website administrators in the “Network Settings”, then admins will not be able to delete or install new plugins. However, they will be able to activate and deactivate existing plugins.

11. How To Add A New Website To The Multisite Dashboard

Go to “My Sites” » “Network Admin” » “Sites”.

Add Sites.

Click the “Add New” button.

Add New Sites.

Fill in the following fields.

  • Add the address (URL) for your new website.
  • Enter your “Site Title”.
  • Enter the email address of the new website’s administrator.
Add Site Button.

Click the “Add Site” button to finish the process.

12. Google Analytics On WordPress Multisite

You can also generate Google Analytics code for all pages on all of the websites on your Multisite network. If you haven't already done so, create a Google Analytics account, and sign into it.

  • Start by creating a property to set up a Google Analytics ID. You will need this ID to install your global site tag (gtag.js).
  • Next, find your Google Analytics ID in the “Property” column of the relevant account in the “Admin” section of your Analytics account.
  • Now, you can copy and paste the global site tag on the relevant web pages. Add the gtag.js tag right after the opening <head> tag. You can have different analytics code for each website on the network, and the Super Admin can manage all of them if needed.
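
For reference, the standard global site tag provided by Google Analytics (at the time of writing) looks like the snippet below, where GA_MEASUREMENT_ID is a placeholder for your own Analytics ID:

<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=GA_MEASUREMENT_ID"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());
  gtag('config', 'GA_MEASUREMENT_ID');
</script>
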
13. Setting Up On Local Host

You can use any WAMP or LAMP software to set up WordPress Multisite on a local system. You’ll need to follow the same steps you did to host a website. However, take care with the domain mapping. You can easily set up a subdirectory website in the local system, but to set up a subdomain or a different domain, you’ll need to set up a virtual host on the WAMP or LAMP server, as sketched below.
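
As a rough sketch only (the hostname and path are placeholders, and you would also need to map the hostname to 127.0.0.1 in your hosts file), an Apache virtual host for a subdomain site might look like this:

<VirtualHost *:80>
    ServerName subsite.network.test
    DocumentRoot "/path/to/wordpress"
    <Directory "/path/to/wordpress">
        # Allow WordPress’s .htaccess rewrite rules to take effect
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>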

14. Useful Plugins For WordPress Multisite And How They Work

You can use a variety of plugins to ensure the smooth operation of your Multisite network.

A. Domain Mapping
This plugin enables you to offer each website on your network its own domain name.

B. WPForms
Create different forms using a simple drag-and-drop tool.

C. Yoast SEO
Optimize the websites on your network for better search engine results. Yoast is a well-known name in the SEO world.

D. Pro Sites
Offer paid upgrades, advertising, and more, thereby monetizing your Multisite network. You can restrict the features of the free website, encouraging users to upgrade.

E. SeedProd
Add customized “Coming soon” and “maintenance mode” landing pages. This will jazz up the network while administrators work on their websites.

F. WP Mail SMTP
Fix the “WordPress is not sending email” issue with this plugin. It allows you to use an SMTP server to send crucial Multisite registration and notification emails.

G. User Switching
Using this plugin, you can switch user accounts as network admin to see what your users are experiencing when working on their websites. It can help you to troubleshoot some functionality issues.

15. Troubleshooting And FAQs

A. Troubleshooting

When setting up a Multisite network, you might encounter a few common problems. Let's see how to troubleshoot these issues.

I. Login Issues

You might encounter a wp-admin login issue if you are using WordPress Multisite with subdirectories, rather than subdomains. If you are not able to log into the WordPress back end of individual websites with subdirectories, find the define( 'SUBDOMAIN_INSTALL', true ); line in your wp-config.php file and change it to define( 'SUBDOMAIN_INSTALL', false );. Note that the value should be the boolean false, not the string 'false', which PHP would treat as true.

II. Find Unconfirmed Users

Sometimes, you might not be able to find registered users who haven’t received an activation email. Usually, poorly configured mail settings are responsible for this problem. The PHP mail() function might send emails to the junk folder because the messages lack proper sender authentication. Instead, you can use SMTP (Simple Mail Transfer Protocol) with proper domain authentication to get activation emails delivered to the inbox. Use any SMTP service provider, such as Mailgun or Gmail.

B. FAQs

1. Can I install plugin “X” in my WordPress Multisite?

Yes, you can install any plugin in Multisite. However, not all plugins support Multisite. Check the plugin’s support before installing it.

2. Can I share user logins and roles across a Multisite network?

Yes, you can share user logins and roles across multiple websites. This comes in handy if you want website admins to manage the content on their own websites in your Multisite network.

3. Is it possible to display the main website’s posts on all websites on the network?

Yes, you can show your main website’s posts across the network.

4. If I am a Super Admin, can I log into all network websites with a single ID?

Yes, Super Admins can use the same credentials to sign into all network websites.

5. As a Super Admin, can I log into another network’s websites?

No, you can’t sign into networks other than your own.

6. Can I add more websites to my network later?

Yes, you can add as many websites as you want, anytime you want.

7. Can I use different plugins for each website, such as Yoast for one and All in One SEO for another?

Yes, you can use different plugins with similar functionality for different websites. However, you must set the plugin for the specific website you want. If you activate it for the entire Multisite network, it will work on all websites automatically.

8. Can I install a plugin on an individual website?

No, you cannot install a plugin directly on an individual website. You have to install it on the network. However, you can activate or deactivate it for a specific website.

9. Can I create a theme and apply it to a specific website?

Yes, you can create as many themes as you like. You can also activate or deactivate themes as a website’s admin.

16. WordPress Multisite Examples

Here are a few well-known brands using a WordPress Multisite network.

  • OpenView Venture Partners
    OpenView Venture Partners is a venture capital firm. The company uses a Multisite installation to run three different websites, including the corporate website, the corporate blog, and a multi-author blog called Labs. The company runs the last two websites under the subdomains blog.openviewpartners.com and labs.openviewpartners.com. Each website has a centralized theme that works perfectly.
  • The University of British Columbia Blogs
    The University of British Columbia (UBC) also uses WordPress Multisite. The purpose here is to enable professors to create course websites, build blogs with multiple contributors, and create portfolios for students as well as staff members. The WordPress Multisite installation gives teachers complete control over their online communities. They can add as many students as they like and take teaching beyond the walls of the classroom.
  • Cheapflights
    Cheapflights is a travel website, offering flight tickets, hotel bookings, and vacation packages. The website uses WordPress Multisite to power its Travel Tips section. The section covers the latest travel news, tips on flying, information on the best places to travel to, and more.
Wrapping Up

As you can see, WordPress Multisite comes with several advantages. You can control and manage several websites from a single dashboard. It can certainly reduce your legwork and make your website monitoring hassle-free. Hopefully, you now have enough knowledge on installing, troubleshooting, and working with applications on a Multisite network to take the plunge.

Have you ever used WordPress Multisite? Will you consider using it for future projects? Let us know in the comments section below.


How To Pass Data Between Components In Vue.js

By Matt Maribojoc · 2020-01-22

Sharing data across components is one of the core functionalities of VueJS. It allows you to design a more modular project, control data scopes, and create a natural flow of data across your app.

Unless you’re creating your entire Vue app in one component (which wouldn’t make any sense), you’re going to encounter situations where you need to share data between components.

By the end of this tutorial, you will know three ways to get this done.

Okay — let’s get right into it!


1. Using Props To Share Data From Parent To Child

VueJS props are the simplest way to share data between components. Props are custom attributes that we can give to a component. Then, in our template, we can give those attributes values and — BAM — we’re passing data from a parent to a child component!

For example, let’s say we’re working on a user profile page and want to have a child component accept a username prop. We’ll need two components.

  1. The child component accepting the prop, let’s call this AccountInfo.vue.
  2. The parent component passing the prop, let’s call this ProfilePage.vue.

Inside AccountInfo.vue, we can declare the props it accepts using the props option. So, inside the component options, let’s make it look like the following.

// AccountInfo.vue
<template>
  <div id='account-info'>
    {{username}}
  </div>
</template>

<script>
export default {
  props: ['username']
}
</script>

Then, to actually pass the data from the parent (ProfilePage.vue), we pass it like a custom attribute.

// ProfilePage.vue
<account-info username='matt' />

Now if we load our page, we can see that our AccountInfo component properly renders the value passed in by its parent.

As when working with other VueJS directives, we can use v-bind to dynamically pass props. For example, let’s say we want to set the username prop to be equal to a variable. We can accomplish this by using the v-bind directive (or its shorthand, :). The code would look a little like this:

<template>
  <div>
    <account-info :username="user.username" />
  </div>
</template>

<script>
import AccountInfo from "@/components/AccountInfo.vue";

export default {
  components: {
    AccountInfo
  },
  data() {
    return {
      user: {
        username: 'matt'
      }
    }
  }
}
</script>

This means that we can change our data, and any child props using that value will also update.

Tip: Always Verify Your Props

If you’re looking to write clearer Vue code, an important technique is to verify your props. In short, this means that you need to specify the requirements for your prop (i.e. type, format, and so on). If one of these requirements is not met (e.g. if the prop is passed an incorrect type), Vue will print out a warning.

Let’s say we want our username prop to only accept Strings. We would have to modify our props object to look like this:

export default {
  props: {
    username: String
  }
}

Verifying props is essential when working in large-scale Vue apps or when designing plugins. It helps ensure that everyone is on the same page and uses props the way that they were intended.

For a full list of the verifications we can include on props, I’d definitely recommend checking out the official documentation for an in-depth review.
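
As an illustration only (the particular rules below are made up for demonstration, not taken from the article), a more thorough declaration combining several checks might look like this:

export default {
  props: {
    username: {
      type: String,
      required: true,
      // Illustrative rule: reject usernames shorter than 3 characters
      validator: value => value.length > 2
    }
  }
}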

Tip: Follow Prop Naming Conventions

According to the VueJS style guide, the best way to name your props is by using camelCase when declaring them in your script and kebab-case when referencing them in template code.

The reasoning behind this is actually quite simple. In JavaScript, camelCase is the standard naming convention, and in HTML, it’s kebab-case.

So, Vue recommends that we stick to the norms of each language. Thankfully, Vue is able to automatically convert between the two styles so there’s no additional setup for developers.

// GOOD
<account-info :my-username="user.username" />

props: {
  myUsername: String
}

// BAD
<account-info :myUsername="user.username" />

props: {
  "my-username": String
}

2. Emitting Events To Share Data From Child To Parent

Now that we have data passing down the hierarchy, let’s pass it the other way: from a child component to a parent. We can’t use props, but we can use custom events and listeners.

Every Vue instance can call a .$emit(eventName) method that triggers an event. Then, we can listen for this event in the same way as any other, using the v-on directive.

Creating a Custom Event

Let’s build on our user profile example by adding a button that changes the username. Inside our child component (AccountInfo.vue), let’s create the button.

Then, when this button is clicked, we’ll emit an event called changeUsername.

<template>
  <div id='account-info'>
    <button @click='changeUsername()'>Change Username</button>
    {{username}}
  </div>
</template>

<script>
export default {
  props: {
    username: String
  },
  methods: {
    changeUsername() {
      this.$emit('changeUsername')
    }
  }
}
</script>

Inside the parent, we handle this event and change the user.username variable. Like we were discussing earlier, we can listen to events using the v-on directive or "@" for short.

<template>
  <div>
    <account-info
      :username="user.username"
      @changeUsername="user.username = 'new name'" />
  </div>
</template>

Let’s try it out. You should see that when you click the button, the username changes to "new name".

Tip: Custom Events Can Accept Arguments

The most common use case for passing arguments to your events is when you want a child component to be able to set a specific value for its prop. You never want to directly edit the value of a prop from the component itself.

Luckily, we can pass arguments with our custom events to make the parent component change values.

Let’s say we want to modify the changeUsername event so that we can pass it a value.

The $emit method takes an optional second parameter for arguments. So all we do is add our new username value after the name of our event.

this.$emit('changeUsername', 'mattmaribojoc')

Then, in our parent component, we can either access these values inline by using a special $event variable, or we can write a handler method that takes a parameter.

<account-info
  :username="user.username"
  @changeUsername="user.username = $event" />

OR

<account-info
  :username="user.username"
  @changeUsername="changeUsername($event)" />

export default {
  ...
  methods: {
    changeUsername (username) {
      this.user.username = username;
    }
  }
}

3. Using Vuex To Create An Application-Level Shared State

Okay — we know how to share data between parents/children, but what about other components? Do we have to create an extremely complex hierarchy system if we want to pass data?

Thankfully not. The wonderful Vuex state management library has been simplifying developers' lives for years. In short, it creates a centralized data store that is accessible by all components.

In the methods we used previously (props / emitting events), each component has its own data state that we then share between components. However, Vuex lets us extract all the shared data into a single state that each component can access easily. This shared state is called a store.

Let’s try it out.

Because Vuex is separate from the core code of Vue, we’ll first have to install it and import it into our project. Start by running npm install vuex --save from the command line in your project directory.

Then, create a src/store folder with an index.js file that contains the following code.

// store/index.js
import Vue from "vue";
import Vuex from "vuex";

Vue.use(Vuex);

export default new Vuex.Store({
  state: {},
  getters: {},
  mutations: {},
  actions: {}
});

To include this in our root Vue instance, we have to import our store/index.js file and pass it in our Vue constructor.

// main.js
import store from "./store";

new Vue({
  store,
  ...

Accessing Vue Store Inside Components

Since we added our Vuex store onto our root Vue instance, it gets injected into all of the root’s children. If we want to access the store from a component, we can do so via this.$store.

Now, let’s dive into the specifics of each of the four parts of a Vuex store.

1. State

The Vuex state is an object that contains application-level data. All Vue instances will be able to access this data.

For our store, let’s create a user object that stores some more user profile data.

export default new Vuex.Store({
  state: {
    user: {
      username: 'matt',
      fullName: 'Matt Maribojoc'
    }
  },
  getters: {},
  mutations: {},
  actions: {}
});

We can access this data inside any instance component like this.

mounted () {
  console.log(this.$store.state.user.username);
}

2. Getters

We use Vuex getters to return a modified value of state data. A good way to think of getters is to treat them like computed properties. For example, getters, like computed properties, cache their results and only re-evaluate when a dependency is modified.

Building onto our earlier store, let’s say we want to make a method that returns a user’s first name based off the full name attribute.

getters: {
  firstName: state => {
    return state.user.fullName.split(' ')[0]
  }
}

Vuex getter properties are available to components on the store.getters object.

mounted () {
  console.log(this.$store.getters.firstName);
}

Tip: Know the Default Getter Arguments

By default, Vuex getters accept two arguments.

  1. state — the state object for our application;
  2. getters — the store.getters object, meaning that we can call other getters in our store.

Every getter you declare will receive the state as its first argument. And depending on how you design your code, your getters can reference each other using the second getters argument.

Let’s make a last name getter that simply removes our first name value from our full name state property. This example would require both the state and getters objects.

lastName (state, getters) {
  return state.user.fullName.replace(getters.firstName, '');
}

Tip: Pass Custom Arguments to Vuex Getters

Another cool feature of getters is that we can pass them custom arguments by making our getter return a method.

prefixedName: (state, getters) => (prefix) => {
  return prefix + getters.lastName;
}

// in our component
console.log(this.$store.getters.prefixedName("Mr."));

3. Mutations

Mutations are the only way to properly change the value of the state object. An important detail to note is that mutations must be synchronous.

Like getters, mutations always accept the Vuex state property as their first argument. They also accept a custom argument — called a payload — as the second argument.

For example, let’s make a mutation to change a user’s name to a specific value.

mutations: {
  changeName (state, payload) {
    state.user.fullName = payload
  }
}

Then, we can call this method from our component using the store.commit method, with our payload as the second argument.

this.$store.commit("changeName", "New Name");

More often than not, you are going to want your payload to be an object. Not only does this mean that you can pass several arguments to a mutation, but also, it makes your code more readable because of the property names in your object.

changeName (state, payload) {
  state.user.fullName = payload.newName
}

There are two different ways to call mutations with a payload.

  1. You can have the mutation type as the first argument and the payload as the second.
  2. You can pass a single object, with one property for the type and another for the payload.
this.$store.commit("changeName", { newName: "New Name 1", }); // or this.$store.commit({ type: "changeName", newName: "New Name 2" });

There isn’t a real difference between how the two work so it’s totally up to personal preference. Remember that it’s always best to be consistent throughout your entire project, so whichever one you choose, stick with it!

4. Actions

In Vuex, actions are fairly similar to mutations because we use them to change the state. However, actions don’t change the values themselves. Instead, actions commit mutations.

Also, while Vuex mutations have to be synchronous, actions do not. Using actions, we can call a mutation after an API call, for example.

Whereas most of the Vuex handlers we’ve seen accept state as their main parameter, actions accept a context object. This context object allows us to access the properties in our Vuex store (e.g. state, commit, getters).

Here’s an example of a Vuex action that waits two seconds and then commits the changeName mutation.

actions: {
  changeName (context, payload) {
    setTimeout(() => {
      context.commit("changeName", payload);
    }, 2000);
  }
}
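
Under the same pattern, an action can also await a real API call. Here is a hedged sketch (the /api/user endpoint and its response shape are hypothetical, used only to illustrate an asynchronous action):

actions: {
  async fetchName (context) {
    // Hypothetical endpoint; replace with your own API
    const response = await fetch('/api/user');
    const data = await response.json();
    // Commit the mutation once the asynchronous work is done
    context.commit('changeName', { newName: data.fullName });
  }
}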

Inside our components, we use the store.dispatch method in order to run our function. We pass arguments just like we did with mutations. We declare the type and we pass any custom arguments in the second argument.

this.$store.dispatch("changeName", { newName: "New Name from Action" }); Wrapping Up

Now, you should know three different ways to share data across components in VueJS: props, custom events, and a Vuex store.

I hope this tutorial helped give you some more insight into some different Vue methods and best practices. Let me know how you’ve implemented them into your projects!

Further Reading

If you’re interested in going even deeper into the technical side/capabilities of each technique, here are some great places to start.


Introducing Our New SmashingConf City Of Austin

By Rachel Andrew · 2020-01-21

We are so excited to be bringing SmashingConf to a new city this year. We’re bringing you SmashingConf Austin where, in addition to our amazing line-up of conference talks and workshops, there will be plenty of fun and things to see and do. The team has been quite busy finding out about the local area, and we hope you’ll be as excited as we are to explore.

The Venue

We’ve found a fantastic conference venue in the Topfer Theatre at The ZACH. This 420-seat theater was constructed in 2011 as part of the complex of theater buildings belonging to the ZACH Theater company. The company was founded in 1932 as The Austin Civic Theater, and is the oldest continuously operating theater company in Texas as well as the tenth oldest in the USA.

The Conference Line-Up

Our line-up for Austin is now complete. The Smashing double-act of Vitaly Friedman and Phil Hawksworth will be making a return — guiding you through the two days and helping you to get the most out of every minute.

In Austin, we have an amazing line-up of returning Smashing friends but also new faces. The topics have been carefully curated to ensure there really is something for every member of your team!

Chris Coyier created and runs CSS-Tricks, and is also the co-founder of CodePen. In his talk “Building Websites,” he will talk about how we can build websites with almost no fancy technology at all — just raw HTML. From that point on, we complicate things as the websites we build have different needs. It’s a reminder to everyone that complications are OK when they are need-based, but also that they have costs and we should minimize the number of things we add.

Frontend UI/UX developer Sara Soueidan returns to the Smashing stage to talk about accessibility and CSS; she will build some components live in order that you can see her process as she develops accessible components for a site.

Consultant Performance Engineer Harry Roberts will take a look at some of the numbers powering the web performance industry from both sides of the table: What do performance improvements mean for your clients, and how do we translate that into a working relationship?

We loved Gemma O’Brien’s fantastic work when she spoke in Toronto, so she is returning to our Smashing stage for Austin. She makes beautiful murals and illustrations (take a look at her Instagram account to see some of her work!).

Gemma on stage in Toronto (Photo credit Marc Thiele)

They will be joined by Brad Frost speaking on Design Patterns, Mandy Michael on Front-end development, Miriam Suzanne on CSS, Rémi Parmentier on HTML email, Robyn Larsen on internationalization, Luke Wroblewski on UX, metrics and conversion, and Zach Leatherman on type and performance.


Recreating The Arduino Pushbutton Using SVG And <lit-element>

By Uri Shaked · 2020-01-20

Today, I am going to take you through the journey of creating an HTML component that mimics a momentary pushbutton component that is commonly used with Arduino and in electronic projects. We will use technologies such as SVG, Web Components and lit-element, and learn how to make the button accessible through some JavaScript-CSS trickery.

Let’s start!

From Arduino To HTML: The Need For A Pushbutton Component

Before we embark on the journey, let’s explore what we are going to create, and more importantly, why. I’m creating an open-source Arduino simulator in JavaScript called avr8js. This simulator is able to execute Arduino code and I will be using it in a series of tutorials and courses that teach makers how to program for Arduino.

The simulator itself only takes care of the program execution — it runs the code instruction by instruction and updates its internal state and a memory buffer according to the program logic. In order to interact with the Arduino program, you need to create some virtual electronic components that can send input to the simulator or react to its outputs.

Running the simulator alone is very much like running JavaScript in isolation. You can’t really interact with the user unless you also create some HTML elements, and hook them to the JavaScript code through the DOM.

Thus, in addition to the simulator of the processor, I’m also working on a library of HTML components that mimic physical hardware, starting with the first two components that you will find in almost any electronics project: an LED and a pushbutton.

The LED and the Pushbutton elements in action

The LED is relatively simple, as it only has two output states: on and off. Behind the scenes, it uses an SVG filter to create the lighting effect.

The pushbutton is more interesting. It also has two states, but it has to react to user input and update its state accordingly, and this is where the challenge comes from, as we will shortly see. But first, let’s nail down the requirements from our component we are going to create.

Defining The Requirements For The Pushbutton

Our component will resemble a 12mm pushbutton. These buttons are very common in electronics starter kits, and come with caps in multiple colors, as you can see in the photo below:

Simon Game with Yellow, Red, Blue and Green pushbuttons

In terms of behavior, the pushbutton should have two states: pressed and released. These are similar to the mousedown/mouseup HTML events, but we must make sure that the pushbuttons can also be used from mobile devices, and are accessible for users without a mouse.

As we will be using the state of the pushbutton as input for Arduino, there is no need to support "click" or "double click" events. It is up to the Arduino program running in the simulation to decide how to act upon the state of the button, and physical buttons do not generate click events.

If you’d like to learn more, check out a talk I held with Benjamin Gruenbaum at SmashingConf Freiburg in 2019: “Anatomy of a Click”.

To summarize our requirements, our pushbutton needs to:

  1. look similar to the physical 12mm pushbutton;
  2. have two distinct states: pressed, and released, and they should be visually discernible;
  3. support mouse interaction, mobile devices and be accessible to keyboard users;
  4. support different cap colors (at least red, green, blue, yellow, white and black).

Now that we have defined the requirements, we can start working on the implementation.

SVG For The Win

Most web components are implemented using a combination of CSS and HTML. When we need more complex graphics, we usually use raster images, in either JPG or PNG format (or GIF if you feel nostalgic).

In our case, however, we will use another approach: SVG graphics. SVG lends itself to complex graphics much more easily than CSS (yeah, I know, you can create fascinating things with CSS, but that doesn’t mean you should). But don’t worry, we are not giving up on CSS entirely. It will help us with styling the pushbuttons, and eventually even with making them accessible.

SVG has another big advantage, in comparison with raster graphics images: it is very easy to manipulate from JavaScript and can be styled through CSS. This means that we can provide a single image for the button and use JavaScript to customize the color cap, and CSS styles to indicate the state of the button. Neat, isn’t it?

Finally, SVG is just an XML document, which can be edited with text editors, and embedded directly into HTML, making it a perfect solution for creating reusable HTML components. Are you ready to draw our pushbutton?

Drawing The Pushbutton With Inkscape

Inkscape is my favorite tool for creating SVG vector graphics. It’s free and packed with powerful features, such as a large collection of built-in filter presets, bitmap tracing, and path binary operations. I started using Inkscape for creating PCB art, but in the past two years, I started using it for most of my graphic editing tasks.

Drawing the pushbutton in Inkscape is pretty straightforward. We are going to draw a top-view illustration of the button and its four metal leads that connect it to other parts, as follows:

  1. A 12×12mm dark gray rectangle for the plastic case, with slightly rounded corners to make it softer.
  2. A smaller, 10.5×10.5mm light gray rectangle for the metal cover.
  3. Four darker circles, one in each corner for the pins that hold the button together.
  4. A large circle in the middle, that is the contour of the button cap.
  5. A smaller circle in the middle for the top of the button cap.
  6. Four light-gray rectangles in a “T” shape for the metal leads of the button.

And the result, slightly enlarged:

Our hand-drawn Pushbutton Sketch

As a final touch, we’ll add some SVG gradient magic to the contour of the button, to give it a 3D feel:

Adding a gradient fill to create a 3D feel

There we go! We have the visuals, now we need to get it to the web.

From Inkscape to Web SVG

As I mentioned above, SVGs are pretty straightforward to embed into HTML — you can just paste the content of the SVG file into your HTML document, open it in a browser, and it will be rendered on your screen. You can see it in action in the following CodePen example:

See the Pen SVG Pushbutton in HTML by @Uri Shaked

However, SVG files saved from Inkscape contain a lot of unnecessary baggage such as the Inkscape version and the window position when you last saved the file. In many cases, there are also empty elements, unused gradients and filters, and they all bloat the file size, and make it harder to work with it inside HTML.

Luckily, Inkscape can clean most of the mess for us. Here is how you do it:

  1. Go to the File menu and click on Clean up document. (This will remove unused definitions from your document.)
  2. Go again to File and click on Save as…. When saving, select Optimized SVG (*.svg) in the Save as type dropdown.
  3. You will see an “Optimized SVG Output” dialog with three tabs. Check all the options, except for “Keep editor data”, “Keep unreferenced definitions” and “Preserve manually created IDs…”.

Removing all these things will create a smaller SVG file that is easier to work with. In my case, the file went from 4593 bytes down to just 2080 bytes, less than half the size. For more complex SVG files, this can be a huge saving of bandwidth and can make a notable difference in the loading time of your webpage.

The optimized SVG is also much easier to read and understand. In the following excerpt, you should be able to easily spot the two rectangles that make the body of the pushbutton:

<rect width="12" height="12" rx=".44" ry=".44" fill="#464646" stroke-width="1.0003"/> <rect x=".75" y=".75" width="10.5" height="10.5" rx=".211" ry=".211" fill="#eaeaea"/> <g fill="#1b1b1b"> <circle cx="1.767" cy="1.7916" r=".37"/> <circle cx="10.161" cy="1.7916" r=".37"/> <circle cx="10.161" cy="10.197" r=".37"/> <circle cx="1.767" cy="10.197" r=".37"/> </g> <circle cx="6" cy="6" r="3.822" fill="url(#a)"/> <circle cx="6" cy="6" r="2.9" fill="#ff2a2a" stroke="#2f2f2f" stroke-opacity=".47" stroke-width=".08"/>

You can even further shorten the code, for instance, by changing the stroke width of the first rectangle from 1.0003 to just 1. It doesn’t make a significant difference in the file size, but it makes the code easier to read.

In general, a manual pass over the generated SVG file is always useful. In many cases, you can remove empty groups or apply matrix transforms, as well as simplify gradient coordinates by switching them from "userSpaceOnUse" (global coordinates) to "objectBoundingBox" (relative to the object). These optimizations are optional, but they reward you with code that is easier to understand and maintain.

From this point on, we’ll put Inkscape away and work with the text representation of the SVG image.

Creating A Reusable Web Component

So far, we have the graphics for our pushbutton, ready to be inserted into our simulator. We can easily customize the color of the button by changing the fill attribute of the smaller circle, as well as the start color of the gradient on the larger circle.

Our next goal is to turn our pushbutton into a reusable Web Component which can be customized by passing a color attribute and reacts to user interaction (press/release events). We will use lit-element, a small library that simplifies the creation of Web Components.

lit-element excels in creating small, stand-alone component libraries. It’s built on top of the Web Components standard, which allows these components to be consumed by any web application, regardless of the framework used: Angular, React, Vue or Vanilla JS would all be able to use our component.

Creating components in lit-element is done using a class-based syntax, with a render() method that returns the HTML code for the element. It's a bit similar to React, if you are familiar with it. However, unlike React, lit-element uses standard JavaScript tagged template literals to define the content of the component.

Here is how you would create a simple hello-world component:

import { customElement, html, LitElement } from 'lit-element';

@customElement('hello-world')
export class HelloWorldElement extends LitElement {
  render() {
    return html`
      <h1>Hello, World!</h1>
    `;
  }
}

This component can then be used anywhere in your HTML code simply by writing <hello-world></hello-world>.
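For instance, a minimal host page could look like this (the module file name here is assumed; it just needs to point at the script that registers the element):

<!-- A sketch of a minimal host page for the component -->
<script type="module" src="./hello-world.js"></script>
<hello-world></hello-world>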

Note: Actually, our pushbutton requires just a bit more code: we need to declare an input property for the color, using the @property() decorator (with a default value of red), and paste the SVG code into our render() method, replacing the color of the button cap with the value of the color property (see example). The important bits are in line 5, where we define the color property: @property() color = 'red';, and in line 35, where we use this property to set the fill color of the circle that makes up the cap of the button, using the JavaScript template literal syntax, written as ${color}:

<circle cx="6" cy="6" r="2.9" fill="${color}" stroke="#2f2f2f" stroke-opacity=".47" stroke-width=".08" /> Making It Interactive

The last piece of the puzzle would be to make the button interactive. There are two aspects we need to consider: the visual response to the interaction as well as the programmatic response to the interaction.

For the visual part, we can simply invert the gradient fill of the button contour, which creates the illusion that the button has been pressed:

Inverting the button’s contour gradient (Large preview)

The gradient for the button contour is defined by the following SVG code, where ${color} is replaced with the color of the button by lit-element, as explained above:

<linearGradient id="grad-up" x1="0" x2="1" y1="0" y2="1">
  <stop stop-color="#ffffff" offset="0" />
  <stop stop-color="${color}" offset="0.3" />
  <stop stop-color="${color}" offset="0.5" />
  <stop offset="1" />
</linearGradient>

One approach to the pressed-button look would be to define a second gradient with the order of the colors inverted, and use it as the fill of the circle whenever the button is pressed. However, there is a nice trick that allows us to reuse the same gradient: we can rotate the circle by 180 degrees using an SVG transform:

<circle cx="6" cy="6" r="3.822" fill="url(#a)" transform="rotate(180 6 6)" />

The transform attribute tells SVG that we want to rotate the circle by 180 degrees, and that the rotation should happen around the point (6, 6), which is the center of the circle (as defined by cx and cy). SVG transforms also affect the fill of the shape, so our gradient will be rotated as well.

We only want to invert the gradient when the button is pressed, so instead of adding the transform attribute directly to the <circle> element, as we did above, we are actually going to set a CSS class for this element, and then take advantage of the fact that SVG attributes can be set through CSS, albeit with a slightly different syntax:

transform: rotate(180deg); transform-origin: 6px 6px;

These two CSS rules do exactly the same as the transform we had above: they rotate the circle 180 degrees around its center at (6, 6). We want these rules to be applied only when the button is pressed, so we'll add a CSS class name to our circle:

<circle class="button-contour" cx="6" cy="6" r="3.822" fill="url(#a)" />

And now we can use the :active CSS pseudo-class to apply a transform to the button-contour whenever the SVG element is clicked:

svg:active .button-contour { transform: rotate(180deg); transform-origin: 6px 6px; }

lit-element allows us to attach a stylesheet to our component by declaring it in a static getter inside our component class, using a tagged template literal:

static get styles() {
  return css`
    svg:active .button-contour {
      transform: rotate(180deg);
      transform-origin: 6px 6px;
    }
  `;
}

Just like the HTML template, this syntax allows us to inject custom values into our CSS code, even though we don't need that here. lit-element also takes care of creating a Shadow DOM for our component, so the CSS only affects the elements within our component and does not bleed into other parts of the application.

Now, what about the programmatic behavior of the button when pressed? We want to fire an event so that the users of our component could figure out whenever the state of the button changes. One way to do this is to listen to mousedown and mouseup events on the SVG element, and fire “button-press”/“button-release” events correspondingly. This is what it looks like with lit-element syntax:

render() {
  const { color } = this;
  return html`
    <svg
      @mousedown=${() => this.dispatchEvent(new Event('button-press'))}
      @mouseup=${() => this.dispatchEvent(new Event('button-release'))}
      ...
    </svg>
  `;
}

However, this is not the best solution, as we’ll shortly see. But first, take a quick look at the code we got so far:

import { customElement, css, html, LitElement, property } from 'lit-element';

@customElement('wokwi-pushbutton')
export class PushbuttonElement extends LitElement {
  @property() color = 'red';

  static get styles() {
    return css`
      svg:active .button-contour {
        transform: rotate(180deg);
        transform-origin: 6px 6px;
      }
    `;
  }

  render() {
    const { color } = this;
    return html`
      <svg
        @mousedown=${() => this.dispatchEvent(new Event('button-press'))}
        @mouseup=${() => this.dispatchEvent(new Event('button-release'))}
        width="18mm"
        height="12mm"
        version="1.1"
        viewBox="-3 0 18 12"
        xmlns="http://www.w3.org/2000/svg"
      >
        <defs>
          <linearGradient id="a" x1="0" x2="1" y1="0" y2="1">
            <stop stop-color="#ffffff" offset="0" />
            <stop stop-color="${color}" offset="0.3" />
            <stop stop-color="${color}" offset="0.5" />
            <stop offset="1" />
          </linearGradient>
        </defs>
        <rect x="0" y="0" width="12" height="12" rx=".44" ry=".44" fill="#464646" />
        <rect x=".75" y=".75" width="10.5" height="10.5" rx=".211" ry=".211" fill="#eaeaea" />
        <g fill="#1b1b1b">
          <circle cx="1.767" cy="1.7916" r=".37" />
          <circle cx="10.161" cy="1.7916" r=".37" />
          <circle cx="10.161" cy="10.197" r=".37" />
          <circle cx="1.767" cy="10.197" r=".37" />
        </g>
        <g fill="#eaeaea">
          <path d="m-0.3538 1.4672c-0.058299 0-0.10523 0.0469-0.10523 0.10522v0.38698h-2.1504c-0.1166 0-0.21045 0.0938-0.21045 0.21045v0.50721c0 0.1166 0.093855 0.21045 0.21045 0.21045h2.1504v0.40101c0 0.0583 0.046928 0.10528 0.10523 0.10528h0.35723v-1.9266z" />
          <path d="m-0.35376 8.6067c-0.058299 0-0.10523 0.0469-0.10523 0.10523v0.38697h-2.1504c-0.1166 0-0.21045 0.0939-0.21045 0.21045v0.50721c0 0.1166 0.093855 0.21046 0.21045 0.21046h2.1504v0.401c0 0.0583 0.046928 0.10528 0.10523 0.10528h0.35723v-1.9266z" />
          <path d="m12.354 1.4672c0.0583 0 0.10522 0.0469 0.10523 0.10522v0.38698h2.1504c0.1166 0 0.21045 0.0938 0.21045 0.21045v0.50721c0 0.1166-0.09385 0.21045-0.21045 0.21045h-2.1504v0.40101c0 0.0583-0.04693 0.10528-0.10523 0.10528h-0.35723v-1.9266z" />
          <path d="m12.354 8.6067c0.0583 0 0.10523 0.0469 0.10523 0.10522v0.38698h2.1504c0.1166 0 0.21045 0.0938 0.21045 0.21045v0.50721c0 0.1166-0.09386 0.21045-0.21045 0.21045h-2.1504v0.40101c0 0.0583-0.04693 0.10528-0.10523 0.10528h-0.35723v-1.9266z" />
        </g>
        <g>
          <circle class="button-contour" cx="6" cy="6" r="3.822" fill="url(#a)" />
          <circle cx="6" cy="6" r="2.9" fill="${color}" stroke="#2f2f2f" stroke-opacity=".47" stroke-width=".08" />
        </g>
      </svg>
    `;
  }
}

You can click each of the buttons and see how they react. The red one even has some event listeners (defined in index.html), so when you click on it you should see some messages written to the console.
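Those listeners would look something like this (a sketch; the exact code in the demo may differ):

// Subscribe to the custom events dispatched by the component:
const pushbutton = document.querySelector('wokwi-pushbutton');
if (pushbutton) {
  pushbutton.addEventListener('button-press', () => console.log('button pressed'));
  pushbutton.addEventListener('button-release', () => console.log('button released'));
}

But wait, what if you want to use the keyboard instead?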

Making The Component Accessible And Mobile-Friendly

Hooray! We created a reusable pushbutton component with SVG and lit-element!

Before we sign off on our work, there are a few issues we should look at. First, the button is not accessible to people who use the keyboard. In addition, the behavior on mobile is inconsistent — the buttons do appear pressed when you hold your finger on them, but the JavaScript events are not fired if you hold your finger for more than one second.

Let’s start by tackling the keyboard issue. We could make the button keyboard-accessible by adding a tabindex attribute to the svg element, making it focusable. A better alternative, in my opinion, is just to wrap the button with a standard <button> element. By using the standard element, we also make it play nicely with screen readers and other assistive technology.
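In lit-element terms, the wrapping is a small change to the template (a sketch, with the SVG markup elided just like before):

render() {
  const { color } = this;
  return html`
    <button>
      <!-- the <svg> element from before goes here, unchanged -->
      <svg ...>...</svg>
    </button>
  `;
}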

This approach has one drawback, though, as you can see below:

Our pretty component encaged inside a <button> element. (Large preview)

The <button> element comes with some built-in styling. This could easily be fixed by applying some CSS to remove these styles:

button {
  border: none;
  background: none;
  padding: 0;
  margin: 0;
  text-decoration: none;
  -webkit-appearance: none;
  -moz-appearance: none;
}

button:active .button-contour {
  transform: rotate(180deg);
  transform-origin: 6px 6px;
}

Note that we also replaced the selector that inverts the gradient of the button's contour, using button:active in place of svg:active. This ensures that the button-pressed style applies whenever the actual <button> element is pressed, regardless of the input device used.

We can even make our component more screen-reader friendly by adding an aria-label attribute that includes the color of the button:

<button aria-label="${color} pushbutton">

There is still one more thing to tackle: the “button-press” and “button-release” events. Ideally, we want to fire them based on the CSS :active pseudo-class of the button, just like we did in the CSS above. In other words, we would like to fire the “button-press” event whenever the button becomes :active, and the “button-release” event to fire whenever it is :not(:active).

But how do you listen to a CSS pseudo-class from JavaScript?

Turns out, it is not so simple. I asked the JavaScript Israel community, and eventually dug one idea that worked out of the endless thread: use the :active selector to trigger a super-short CSS animation, and then listen for it from JavaScript using the animationstart event.

A quick CodePen experiment proved that this actually works reliably. As much as I liked the sophistication of this idea, I decided to go with a different, simpler solution. The animationstart event isn’t available on Edge and iOS Safari, and triggering a CSS animation just for detecting the change of the button state doesn’t sound like the right way to do things.
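For the curious, the rejected trick would look roughly like this (a sketch; the animation name and duration are made up for illustration):

/* Trigger a near-instant, visually imperceptible animation
   whenever the button becomes :active */
svg:active .button-contour {
  animation: press-detect 0.01s;
}

@keyframes press-detect {
  from { opacity: 0.999; }
  to { opacity: 1; }
}

On the JavaScript side, you would then watch for that animation starting (svgElement here is an assumed reference to the rendered <svg>):

// Detect the :active state change via the animationstart event:
svgElement.addEventListener('animationstart', (event: AnimationEvent) => {
  if (event.animationName === 'press-detect') {
    // the button has just been pressed
  }
});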

Instead, we’re going to add three pairs of event listeners to the <button> element: mousedown/mouseup for the mouse, touchstart/touchend for mobile devices, and keyup/keydown for the keyboard. Not the most elegant solution, in my opinion, but it does the trick and works on all browsers.

<button
  aria-label="${color} pushbutton"
  @mousedown=${this.down}
  @mouseup=${this.up}
  @touchstart=${this.down}
  @touchend=${this.up}
  @keydown=${(e: KeyboardEvent) => e.keyCode === SPACE_KEY && this.down()}
  @keyup=${(e: KeyboardEvent) => e.keyCode === SPACE_KEY && this.up()}
>

Where SPACE_KEY is a constant that equals 32, and up/down are two class methods that dispatch the button-press and button-release events:

@property() pressed = false;

private down() {
  if (!this.pressed) {
    this.pressed = true;
    this.dispatchEvent(new Event('button-press'));
  }
}

private up() {
  if (this.pressed) {
    this.pressed = false;
    this.dispatchEvent(new Event('button-release'));
  }
}
  • You can find the full source code here.
We Did It!

It was a pretty long journey that started with outlining the requirements and drawing the illustration for the button in Inkscape, went through converting our SVG file into a reusable web component using lit-element, and, after making sure it's accessible and mobile-friendly, ended with nearly 100 lines of code making up a delightful virtual pushbutton component.

This button is just a single component in an open-source library of virtual electronic components I’m building. You are invited to peek at the source code, or check out the online Storybook where you can see and interact with all the available components.

And finally, if you are interested in Arduino, take a look at the programming Simon for Arduino course I’m currently building, where you can also see the pushbutton in action.

Till next time, then!

(dm, il)
Categories: Design

All Things Smashing: Monthly Update

Fri, 01/17/2020 - 03:30
All Things Smashing: Monthly Update All Things Smashing: Monthly Update Iris Lješnjanin 2020-01-17T11:30:00+00:00 2020-01-19T12:05:42+00:00

We can’t repeat enough how wonderful the web performance community is! There are good folks who help make the web faster, and their efforts matter indeed. With the new year sinking in and everyone’s resolutions still being put to the test, personal goals such as reproducing bugs and fixing issues suddenly become something we all have in common: improving the web for everyone involved.

As various areas of performance become more and more sophisticated and complicated throughout the years, Vitaly refines and updates his front-end performance checklist every year. This guide covers pretty much everything from performance budgets to single-page apps to networking optimizations. It has proved to be quite useful to folks in the past years — anyone can edit it (PDF, MS Word Doc and Apple Pages) and adjust it to their own personal needs or even use it for their organization.

Now, without further ado, let’s see what’s been cooking at Smashing!

Exciting Times: New Smashing Book

Are you ready for the next Smashing book? Well, just like all the printed books we’ve published, each and every one is crafted to deliver in-depth knowledge and expertise shared by experts and practitioners from the industry. The Ethical Design Handbook will not be any different. Written by Trine Falbe, Martin Michael Frederiksen and Kim Andersen, the book will be pre-released in late January.

As always, there will be a pre-order discount available. We expect to ship printed hardcover copies late February, but in the meantime, feel free to subscribe to the book mailing list so that you can be one of the first folks to get your hands on the book!

Less Speaking, More Time For Questions

Our SmashingConfs are known to be friendly, inclusive events where front-end developers and designers come together to attend live sessions and hands-on workshops. From live designing to live debugging, we want you to ask speakers anything — from naming conventions to debugging strategies. For each talk, we’ll have enough time to go into detail, and show real examples from real work on the big screen.

Dan Mall, Brad Frost and Ian Frost coding live on stage at SmashingConf in NYC. (Image credit: Drew McLellan) (Watch video)

If you’re eager not to miss out on one of our SmashingConfs, then early-bird tickets are still available. And if you need a lil’ help convincing your boss to send you to an event, let us know! We’ve got your back.

Categories: Design

The Mythical Mythical Man-Month

Wed, 01/15/2020 - 05:00
The Mythical Mythical Man-Month The Mythical Mythical Man-Month John Foreman 2020-01-15T13:00:00+00:00 2020-01-16T11:00:07+00:00

As a product leader at a tech company, I am a bottomless pit of need. My job as the Chief Product Officer at Mailchimp is to bring the product to market that’s going to win in a very competitive space. Mailchimp’s aspirations are high, and to realize them we need to deliver a substantial amount of product to the market. Oftentimes to many at the company, it feels like we are doing too much. We’re always at the edge of the wheels coming off.

And when you’re doing too much and you decide to do more than even that, you will inevitably begin to hear The Mythical Man-Month referenced. It’s like one of those stress-relief balls where if you squeeze one end, then out pops the Mythical Man-Month at the other end.

Published by Frederick Brooks back in 1975 (you remember 1975, right? When software development 100% resembled software development in 2020?), this book is rather famous amongst software engineers. Specifically, there’s one point in the entire book that’s famous (I’m not convinced people read anything but this point if they’ve read the book at all):

“...adding more men lengthens, not shortens, the schedule.”

Easy fix. I’ll just staff women to projects from now on (see the Return of the King and the fight against the Witch King of Angmar).

But let’s assume that Brooks’ point holds regardless of the gender identification of the software engineers in question. Here’s the point: software is difficult to build with lots of complex interdependencies. And everyone needs to work together to get it done.

As I add people to a team, they need to be onboarded and grafted into the project. Someone’s gotta carve off the right work for them. The team has to communicate to make sure their stuff all works together, and each additional person increases that communication complexity geometrically. And at some point, adding people becomes a burden to the project — not a benefit.

Here’s the graph from the book illustrating that point:

Add people to go slow (Large preview)

This is absolutely a fair point. That’s why I hear it so much at work. Exhausted individual contributors and exhausted leaders alike will toss it out — we can’t go faster, we can’t do more, stop the hiring, read The Mythical Man-Month and despair! The only solution is apparently to stop growing and kill some projects.

When I as CPO say, “we’re going to do this thing!” the reply then is often, “OK, so then what are we going to kill?” The Mythical Man-Month turns product development into a zero-sum game. If we want one thing, we must stop another. Now, that’s an actual myth, and I call hogwash.

And taken to its pathologically misinterpreted (we’ll get to this) conclusion, the book apparently says that the fastest tech company is one that employs all of four people — four men, apparently. Anything more just slows it all down. Someone should send Amazon, Apple, and Google copies of the book, so they can fix their obviously bloated orgs.

The only problem with this approach is that in a space where the competition is growing, iterating, and executing, merely tamping down organizational growth and editing the workload to match can be a recipe for extinction. You’ll be more sane and less stressed — right until you’re out of a job.

And as the owner of product management for my company, I’m not unsympathetic with this need to slow down and focus. We must ruthlessly prioritize! No doubt. But running a product is an exercise in contradiction. I must prioritize what I’ve got while simultaneously scheming to get more done. But what am I to do in the face of the Mythical Man-Month?

Surprisingly, the answer to this question comes from Brooks’ same book. Here’s another graph in the same chapter:

(Large preview)

There is a battle in scaling product development. If the work you’re trying to accomplish is purely partitionable, then go ahead and add people! If your work is all connected, then at some point adding people is just wrong.

If someone says that I absolutely have to kill a project in order to start another one, that’s just not the case. If the two projects require very little communication and coordination, then we can scale away.

This is why adding cores to a CPU can increase the experienced speed of your computer or phone up to a point — something engineers should know all about. Sure, adding cores won’t help me complete a complex single-threaded computation. But it may help me run a bunch of independent tasks.

The conflict for a product executive then between scaling and ruthless prioritization can be managed.

  1. You ruthlessly prioritize in places that are single-threaded (the backlog for a product team, let’s say).
  2. You scale by adding more cores to handle independent work.

Very rarely, however, is anything fully-independent of all else at a company. At the bare minimum, your company is going to centralize supporting functions (global IT, legal, HR, etc.) leading to bottlenecks.

It’s All About Dependency Management

The job of a product executive then becomes not only creating a strategy, but executing in a way that maximizes value for the customer and the business by ensuring throughput and reducing interdependency risk as much as possible. “As much as possible” being key here. That way you can make the company look as much like the latter graph rather than the former. Interdependency is a disease with no cure, but its symptoms can be managed with many treatments.

One solution is to assemble a strategic direction for the company that minimizes or limits dependency through a carefully-selected portfolio of initiatives. The funny thing here is that many folks will push back on this. Let’s say I have two options, one where I can execute projects A, B, and C that have very little coordination (let’s say they impact different products), and another option with projects D1, D2, and D3 that have tons of interdependencies (let’s say they all impact the same product). It’s often the case that the Mythical Man-Month will be invoked against the former plan rather than the latter. Because on paper it looks like more.

Indeed, it’s less “focused.” But it’s actually less difficult from a dependency perspective and hence fares better with added personnel.

Keep in mind, I’m not saying to choose a bunch of work for the company that’s not related. Mailchimp will not be building a microwave oven anytime soon. All work should drive in the same long-term direction. This approach can increase customer experience risk (which we’ll discuss later) as well as the burden on global functions such as customer research. Keep an eye out for that.

Another treatment is to create a product and program management process that facilitates dependency coordination and communication where necessary without over-burdening teams with coordination if not required. Sometimes in attempting to manage coordination so we can do more we end up creating such onerous processes that we end up doing less. It’s a balance between doing too little coordination causing the pieces to not inter-operate and doing too much coordination causing the pieces to never get built because we’re all in stand-ups for eternity.

The contention in the Mythical Man-Month is that as you add folks to a software project, the communication needs to increase geometrically. For example, if you have 3 people on the project, that’s 3 lines of communication. But if you have 4, that’s 6 lines of communication. One extra person, in this case, leads to double the communication! Fun. (This is, of course, an over-simplification of communication on software development projects.)
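In formula form, this is just the number of pairs on the team:

\[ \text{lines of communication} = \binom{n}{2} = \frac{n(n-1)}{2} \]

So 5 people have 10 lines, 10 people have 45, and 20 people have 190; the 21st person alone adds 20 new lines.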

Different people have different roles and hence receive different amounts of autonomy. Perhaps the project manager needs to communicate with everyone on the team. But does an engineer working on the API need to communicate with the product marketer? Or can the marketer just go through the product manager? A good process and meeting cadence can then eliminate unnecessary communication and meetings. The point is that Brooks’ intercommunication formula is an upper bound on coordination, not a death sentence.

Finally, use tools, principles, and frameworks combined with independent work over actual collaboration to combat interdependency symptoms. For example, if I can coordinate two teams’ key performance indicators (KPIs, i.e. measurements of success) to incentivize movement in more-or-less the same direction, then their independent work is more likely to end up “closer together” than if their KPIs incentivize orthogonal movement. This won’t ensure things fit together perfectly, only that the work I need to do to make them fit together in the future is less than it might otherwise be. Other examples might include using “even-over” statements, design systems, and automated testing.

So there’s a start. But interdependencies take on lots of forms beyond code. Let me give an example from Mailchimp.

Customer Experience Risk: The Hidden (But Acceptable?) Cost Of Firewalling Work

Since Mailchimp’s customer is often a small business owner who’s a marketing novice (and there are millions of small business owners turned marketers worldwide), we must deliver an experience that is seamless and immediately understandable end-to-end. We’re not afforded the luxury of assembling a Frankenstein’s monster of clouds via acquisition the way that enterprise players can. We can’t paper over poorly-integrated software with consultants and account managers.

As a consumer product (think Instagram or a Nintendo Switch or a Roomba), we have to be usable out of the box. For an all-in-one marketing platform meant to power your business, that’s hard! And that means each thing Mailchimp builds must be seamlessly connected from an experience perspective.

But, perfectly partitioning projects then introduces experience risk. It’s not that the code can’t be written independently. That can be achieved, but there’s still a risk that the products will look like they’ve been built by different teams, and that experience can be really damn confusing for the user. We bump up against Conway’s law — our customers can tell where one team’s work ends and the other team’s work begins.

So you try to connect everyone’s work together — not just on the back-end but on the front-end, too. In the ecosystem era, dominated by CX excellence from players like Apple, this has become almost table stakes in the consumer space. But this is a Mythical Man-Month nightmare, though not from an engineering perspective this time. It’s from a service design perspective. As we add more people to all of this “end-to-end” connected work, everything slows to a collaborative crawl.

Other than the third fix I noted above, using tools and frameworks rather than over-watchers and stage-gates, there is another release valve: make some deliberate customer experience trade-offs. Specifically, where are we comfortable releasing an experience that’s disconnected from the rest (i.e. that’s sub-par)? Accepting risk and moving forward is the product leader’s job. And so you use some criteria to sort it out (perhaps it’s not holding new, low-traffic areas of the app to the same experience standards as your “cash cows”), and make a decision (e.g. iteration and learning over polish on adjacent innovations). This, of course, extends beyond design.

You can always short-circuit Brooks’ law by choosing to firewall efforts, including efforts that, in a perfect world, shouldn’t be firewalled!

I’ll caveat this by saying the software I build doesn’t kill anyone. I wouldn’t advocate this approach if I were building a medical device. But at a marketing software company, I can deliberately isolate teams knowing that I’ve increased the odds of incompatibility as a trade-off for scaling up personnel and moving faster.

I’m sad to admit that the Mythical Man-Month is a reality at my company, and I suspect at yours as well. But it’s manageable — that’s the bottom line. Parallelization and dependency mitigation offer us a way out that limits the near-mythical status of the Mythical Man-Month. So the next time the stark dichotomy is raised at your company (scale to go slower or give up your aspirations) remember that if you’re smart about how you line up the work, you can still grow big.

Don’t Forget About The Softer Side Of Scaling

Keep in mind that managing the Mythical Man-Month will not stop engineers from invoking it like dark magic. They’re invoking the principle not only because there’s some truth in it, but because scaling just sucks (always) from an emotional and cognitive perspective. If I think I’m paid to write code and solve customer problems, the last thing I wanna do is change up my routine and figure out how to work with new people and a larger team.

As you scale your company, remember to empathize with the pain of scaling and change. A team that adds even a single member becomes a whole new team from a trust and cultural perspective. People are tired of this change. That means that while you go about managing and mitigating the Mythical Man-Month, you’ll need to manage the emotions surrounding growth. That’s perhaps the most critical task of all.

Strong belief in the Mythical Man-Month by a team in and of itself can bring its conclusions into reality. It’s basically the equivalent of the belief in flying in Peter Pan. If the team believes that scaling will slow them and they don’t buy into the change, they will indeed slow down.

So as you work to manage dependencies and introduce tools to help scale, make sure you clearly communicate the why behind the practices. Get folks involved in selecting the work and processes that mitigate man-month issues, because when they’re part of the change and their outlook changes, suddenly scaling becomes at least culturally possible.

(ra, il)
Categories: Design

Smashing Podcast Episode 7 With Amy Hupe: What Is A Government Design System?

Mon, 01/13/2020 - 21:00
Smashing Podcast Episode 7 With Amy Hupe: What Is A Government Design System? Smashing Podcast Episode 7 With Amy Hupe: What Is A Government Design System? Drew McLellan 2020-01-14T05:00:00+00:00 2020-01-14T09:35:20+00:00

Have you ever wondered how design systems are used within a government? Also, if you’d want to document a design system the best way you could, how would you do it? I spoke to Design Systems advocate, Amy Hupe, who shares her advice and lessons learned.

Show Notes Weekly Update Transcript

Drew McLellan: She’s a Content Specialist and Design Systems Advocate who spent the last three years working as Senior Content Designer at the Government Digital Service. In that time, she’s led content strategy for the GOV.UK design system, including a straightforward and inclusive approach to documentation. She’s previously worked for consumer advocacy company, Which? where she wrote about everything from composting to conveyancing. And a new role for 2020 sees her take up the post of Project Manager for Babylon Health’s design system, DNA.

Drew: She’s a skilled cook, an Instagrammer, and knows how to use language to make services accessible and inclusive. But did you know she once sang backing vocals for Billy Ray Cyrus? My smashing friends, please welcome Amy Hupe. Hello Amy. How are you?

Amy Hupe: I am smashing. Thank you.

Drew: So I wanted to talk to you today about the role of design systems within government organizations generally, but specifically the GOV.UK design system, which I know you’ve done a lot of work with. I guess first of all, what does the GOV.UK design system encompass? And what was your involvement with it while you were at GDS?

Amy: So it encompasses all kinds of things. So I think the most obvious representation of it is the kind of website side, which is GOV.UK/design-system. And there you’ll find all of the kind of documentation. So all of the design guidelines, and the components and patterns, and you’ll see some of the code, lots of examples and lots of advice on how to use it. But thinking kind of more broadly than that, it also encompasses things like the prototype kit, which is a prototyping tool that is used in government to make HTML and CSS prototypes. So quite high fidelity prototypes and it also has its own kind of front end framework, which is called GOV.UK Front End. So that’s all the code that they use to build the services.

Amy: But then I like to think of design systems more holistically. So as well as all of that stuff, there’s also all the processes that sit around it. So things like how people contribute to it and how people come to know that it exists. Things like adoption and awareness and all that sort of stuff. So all of the things that enable people to design and build services in government is how I would define it.

Drew: So what was your involvement while you were at GDS with that? Where did you slot into that system?

Amy: It all kind of happened by chance, I guess. So I joined as a content designer in January 2017, and my intention when I came to GDS was actually to join the GOV.UK content teams. So I thought I was going in to start writing guidance for GOV.UK, and that was my dream. That was what I wanted to do. Then I arrived on day one and got plunked into this little project team, called the Service Manual Patterns and Tools team.

Amy: At that point the design system didn’t exist, but we had our design patterns and some bits and pieces knocking about in different places. There was an ambition to try and pull those things together. So I was put into that team as a content designer. I didn’t know what a design pattern was, didn’t know anything about code, didn’t know anything really about web design at all. All I really knew was content.

Amy: So it was a pretty steep learning curve and I spent the next six months to a year, I think, helping the team to prototype it and figure out how it would be organized and laid out and how we would write our guidance, and all that sort of stuff. Then, in the midst of all of that, as well as working on the content, I also started to look into the contribution side. So how people would contribute to it, how people would come to discover it and get in touch with us, and what we would do when they did get in touch with us to try and make it better.

Drew: So what does designing content in that sort of context to be involve? What were the sort of daily tasks you were tackling?

Amy: So all kinds of things really. I mean, there were weeks at a time, I think, where I didn’t write a single word, and it was more just going out to research and meeting our users and trying to sort of understand what it was that they wanted from a design system. So yeah, without getting too far into it, there had been attempts to make something like the GOV.UK design system before, which is how we ended up with this kind of slightly disparate set of resources.

Amy: For one reason or another, these things ended up quite spread out, and none of them was ever really seen as the central place to go for this stuff. So a lot of it was just trying to understand what had happened before and why those things hadn’t necessarily taken off in the way that we had hoped they would. Trying to understand which bits of our existing landscape were working for people and which bits weren’t.

Amy: So a lot of it was going out with our research [inaudible 00:05:07], and sitting in user research interviews, and taking notes and talking to people, and just understanding what it was that they needed. Then there were days where I did actually get to sit at a keyboard and write some guidance about some stuff, which was nice too. But yeah, it was very different for me. As you mentioned in the intro, my background was working at Which? So it was much more a traditional editorial role and I was used to working on long form content, and just writing really long articles, and pieces. So yeah, it was quite a big change. It was a big leap from that.

Drew: So your users in this context are people who are working in different government organizations? Is that right? Different departments within the government?

Amy: Yeah. Yeah, that’s right. So people working in, I think there’s 25 different ministerial departments in government, and then there’s lots of agencies and local government departments as well. So we were trying to spread out and talk to a really wide range of people from across the civil service. So yeah, lots of traveling in those early days.

Drew: Do you think that designing or working on a design system for a government, essentially, is any different from a design system for a small company or a big sort of enterprise company?

Amy: I think so. I mean, I think from what I can gather from conversations I’ve had, and conferences I’ve been to and stuff, every design system is slightly different and the context is always slightly different, and government is no different in that respect. But yeah, I suppose one of the unique challenges of working on something for government is, first of all, the scale of it. The audience is probably the biggest that you could have, because government is so big, with all the different kinds of departments and the geographical spread of those organizations. So the scale of it is definitely something that’s slightly different.

Amy: I think also the fact that it’s not commercially competitive. So we weren’t trying to keep everything under wraps. Everything was done in the open as far as possible. Yeah, it’s all run as a big open source project, which was a slightly unusual concept for me. It took me a little while to get used to that.

Amy: Certainly when we first released it, we would see bits of our guidance and code popping up in other people’s design systems. It took a little while for me to feel all right about that. I think at first I was like, “What’s going on? Why are these people taking our stuff?” But actually now, I really like that. I see that as a big compliment, and I think it’s really good to reuse what you can. But yeah, that’s a strange kind of world to enter when you’ve been used to working in a more commercial setting, I guess.

Drew: I suppose the fact that it’s an essentially publicly funded system means that it is uniquely suited to the public taking it and using it, but also worldwide. Did you see a lot of use outside of the UK?

Amy: Yeah, yeah, there’s been some really exciting projects across the world that have picked it up. So I know that the New Zealand government have used quite a lot of it. I’m not sure what stage they’re at at the moment, but certainly I saw their early-stage design system and they really used a lot of our guidance and our code, our layouts and things. I think the Dutch government is also using the GOV.UK design system primarily as its first proof of concept. The Australian government started with all of our contribution guidelines and have sort of adapted them based on their research. So we’ve been able to take some of that stuff back in. Yeah, so it’s gone pretty global. It’s exciting.

Drew: Would you factor in the fact that people would be using it when making decisions about the next phase of things? Would it factor into your decisions that actually your audience suddenly isn’t just UK government, it could potentially be a worldwide audience?

Amy: It’s definitely a consideration, and I think at times that definitely made us as a team quite nervous about certain things that we were doing, because our audience and the scope of it suddenly got much bigger when we were thinking about all the different people that were using it. But personally, I think you can’t get too caught up in that; primarily, we are there to serve the UK government. So it’s not practical to consider all of the potential audiences for it. I kind of think it’s up to the teams to adapt it how they need to for their own users. But yeah, it definitely does make you think quite carefully about just throwing things out there before they’re ready and tested and stuff.

Drew: So were there any other sort of surprises in working on this design system other than the fact that it was then taken and used more broadly than you’d initially expected? Did anything else spring out and surprise you about it?

Amy: One thing that definitely stood out to me was the range of people in our audience. So not just the size of the audience, but the variation in people’s level of knowledge, their skills, their confidence, the different kinds of jobs that they did and the kind of contexts in which they were working. I think there’s definitely a lot of variation in there. I think my perception going in was that I had this vision of this designer or front-end developer in my head, somebody who has lots of technical knowledge, and actually that’s just one type of user. There are lots of other people, like content designers, who weren’t necessarily an expected audience for it, but have turned out to be key users.

Amy: So I think, yeah, that was definitely a surprise to me. Then, I guess, alongside that, just how much people seemed to adopt it as their own. So pretty quickly after we launched, I was really pleasantly surprised at how many people I would see going out and advocating for it within their own departments and teams, and people trying to contribute to it, and people getting in touch with us to ask how they could adapt it for their own users. It felt really community-owned from day one, and that was not necessarily something I expected, but something that was really good to see.

Drew: I guess much of the role of a design system is as a way of documenting the design decisions that have been made, so that those decisions can be implemented, understood, and used by people. So I guess a design system is, as much as anything, a documentation artifact, isn’t it? It’s taking those decisions that have been made and explaining them in a way that people can reuse them. How did you approach the design system as a documentation artifact, as a team? How did you document what you were doing?

Amy: So I think it was about getting as clear a picture as we could of what people needed from that documentation. So this comes back to that point that I made about it being quite a broad-reaching audience, because there’s a whole range of different needs. People talk about documenting a component or a pattern like it’s a single task, but actually there are loads of different ways that you can do that, and there are loads of different needs that you need to take into consideration. So we have people who, for example, would just say, “Oh, I want to see the research behind this.” For some people that means a number. They want to know that it’s being used in 20 different services so that they can tell their product manager that it’s worth investing the time and the money in implementing it within their service.

Amy: And for them, it’s just about getting that evidence-based backing for the decision that they’re trying to push through. But then there’s other people who really care about understanding the research and whether it’s appropriate for their context, and what additional research they might need to do to fill any gaps that have been missed, or perhaps that they are dealing with in their unique situation. So I think the approach was to try and understand all of those different needs, to try and get a sense of priority amongst them, and to understand how we could cater to all of the various different requirements that people had of the documentation. It’s not just one thing that fits everybody.

Amy: So figuring out how to kind of address all of those needs and to signpost the content really well in a way that meant that people could skip over the bits that weren’t relevant for them as well. Because when you are trying to serve such a broad audience, obviously you end up providing quite a lot of information. So making sure it’s really well signposted and organized I think was quite key to what we were doing.

Drew: So am I right in understanding that different departments within the government aren’t actually compelled to adopt the design system? You actually have to effectively sell it into them and persuade them to use it?

Amy: Yeah, so it’s slightly complicated. So in government there’s something called the government service standard, and it’s a standard which all government services with over a certain number of users are required to meet in order to get funding and then to go into Alpha and then Beta and then live. One of the points on the service standard (I left three weeks ago and it’s already dropped out of my head which number it is) talks about reusing patterns and components and trying to reuse what’s there already. So sort of under that point they are compelled to use it, but it’s loose and it depends on who the assessor is. It’s not heavily mandated. We would always advocate for doing what’s best in the specific context, rather than just reusing patterns out of the box for the sake of ticking a point on a service assessment. So it’s difficult to force it. So the approach was always much more collaborative, and it was always about building support and building advocacy for the design system, not shoving it down people’s throats.

Drew: I guess to that end, one of the ways that you’ve managed to do that is by encouraging contribution. Is that right?

Amy: Yeah, definitely. So I’m a big fan of contribution to design systems. I think it’s something that’s really interesting, and yeah, certainly in the team we did a lot of work to make it possible to contribute to the GOV.UK design system. One of the real benefits that we saw from that was the net number of advocates for the design system increasing. So when you get somebody to contribute to it, they then feel more invested in it, and what we saw was that those people would then go out to their teams and become our best salespeople, almost, because they’d feel like they had a little piece of it and they had something to show people, and they would then encourage more people to contribute. So that effect ends up being quite exponential. Yeah. So we put a lot of effort into making that possible.

Drew: What sort of things did you do to encourage contribution?

Amy: We started really early. So way before we had a public design system, we started to engage with people who we thought would be interested contributors. I should mention here, we had a brilliant service designer on the team. She joined us in, I’m not going to get the dates correct in any way at the moment, but I think she worked with us for the whole of 2018, and her name’s Ignatia. She just did a fantastic job of going around and engaging people. So one of the things that she did was to go and identify all of the different patterns in government and all of the different variations of those patterns. So going out and saying, okay, there’s 10 different ways to ask for an address in government. Let’s look at them all together and decide which we think is the most appropriate approach.

Amy: How can we consolidate these into one? She ran a big workshop to try and get people looking at those and doing that kind of consolidation as a team. I think her approach to building collaboration in, way before we actually released anything to the public, really helped with that, because it meant that people already had that awareness of it, and many people had already contributed to it in some fashion or another before we actually took it public. So that put us a few steps ahead. So I think that was really important. And just persistence, like a lot of persistence from the whole team in helping people to contribute. I think there’s an idea that if you get people to contribute to a design system, that’s a pretty sweet gig, because you can just get people to do all the work for you.

Amy: And you just sit there and you make your low-level code fixes and everybody’s actually giving you all the good stuff. But actually, as anyone who’s worked on a design system will know, it’s incredibly complex. It’s very difficult to make a centralized solution that works for multiple different teams, and really, unless you’ve worked on a design system, it’s not reasonable to expect anyone to really understand what that takes. So there’s a lot of hand-holding. There’s a lot of work involved in supporting contributors to contribute. I think I’ve said this before, but it probably takes longer to help somebody to contribute to a design system than it would to just make the thing yourself in the centralized team. But I think you have to recognize the value that it brings and be persistent in your efforts to make people aware of contribution, to help them do it, and to help them feel motivated to do it. I think, yeah, that persistence was really key to our success in that area.

Drew: And just practically speaking with managing those contributions from the community, were there any tools or processes or anything that helped with that?

Amy: Yeah, so we had quite a strict process, I would say. Strict insofar as... maybe strict is the wrong word; comprehensive is probably a better word. So yeah, we have a set of contribution criteria which are in the design system. So everything’s as open as possible, so people know what to expect. So there’s a set of criteria that we developed with various people from the government community outside of our team, so again, trying to involve people in the creation of these processes I think is really important. So there’s a set of criteria that all contributions to the design system have to meet, and to make sure that we were being fairly unbiased, I suppose, and fair in terms of making decisions about whether things met those criteria or not, we enlisted the support of a working group, which was a panel of representatives from across government, all from different departments and different disciplines, and people with different levels of seniority.

Amy: So everybody would have a slightly different perspective on the contributions and we would get together with them once a month and ask them to review any new contributions and decide whether or not they had met the criteria. So yeah, it was a sort of process designed to try and democratize the design of the design system I suppose, and to make it representative and ensure that it wasn’t just our team sitting in the middle making all the decisions without really understanding how it would affect the teams using those things.

Amy: Yeah, that was our sort of process. One more thing I should mention is there’s a community backlog on GitHub, which anybody can use. You don’t have to work in government to go and see it. It’s accessible from the design system, and it’s basically a place where we try to host all of the research and all of the experimental stuff and the examples that go into the components and patterns in the design system. So again, it’s about pushing for that transparency and working in the open as much as possible, so that people can have a voice and they can influence things before they’ve actually been published.

Drew: And do you think that process has worked well? If you were embarking on the same thing again, do you think you’d adopt a similar process or is there anything that didn’t work?

Amy: I think I would adopt a similar process, but perhaps go into it with slightly different expectations, maybe slightly more realistic ones. Having said what I said about how we think that contributing will make things easier and faster, I was definitely in that camp. I think I thought that there would be a spike of work in the beginning to get people familiar with contributing, and then over time we’d be able to be more hands-off and people would just get the hang of it and it would be fine. But actually, that never really materialized. There was always a lot of work involved in helping people to contribute, and as I say, I think that that’s to be expected. I don’t think you can really get away from that, but I still think it’s valuable.

Amy: I still think it’s worth investing that time, but perhaps not with an idea that you’re going to speed things up or that you’re going to be able to scale quicker or more from having contribution. So yeah, I think the process worked well. I do think it needs to be tailored to different organizations, so I’m starting a new role on Monday funnily enough, I’m working on another design system and I don’t expect to be able to pick up that process and just move it over there. I think everything has to be tailored to the organization and the context that you’re dealing with, but there’s definitely elements of it that I would like to try and bring over. But yeah, with slightly tempered expectations, I think.

Drew: I talked a couple of episodes ago with Heydon Pickering about designing components, particularly within a design system, to be accessible. That’s something you’ve got a lot of experience with too, I believe. Obviously accessibility is really, really crucial when working within a government design system, but many of us would argue that it’s really crucial wherever you’re working. Do you think design systems play a role in the accessibility of a design, or the implementation of a design?

Amy: So there’s a brilliant talk by Tatiana Mac about building inclusive design systems that touches on this, and that was really influential for me. She talks about the sort of multiplication effect of design systems. With design systems, we’re telling people what good looks like, and we’re giving people quick ways to implement what we’re telling them best practice is. So that can work either way. It can work really well: if you give people good, accessible design, then you have the potential to multiply that accessible design and to make things more accessible and more inclusive by default.

Amy: If you make decisions that exclude people in a design system, in that centralized space, which becomes the start point for people designing services, then you really have the potential to proliferate that exclusionary design. So I definitely think that design systems play a role in promoting and multiplying accessibility. But I think that it all starts with the intention of the teams working on and using the design system to make that happen. A design system is really just the vehicle, I suppose, and the intention needs to be there to make things accessible.

Drew: One of the things that always fascinates me, particularly with design systems that have such a large and varied audience like the GOV.UK design system, is the process of proliferating changes across the system. So if you, for example, find an accessibility improvement that you could make in a particular pattern, and you make it in the design system, how do you ensure that it gets rolled out across such a broad audience? Is that something you’ve got any experience with?

Amy: Yeah. So again, in the GOV.UK design system team, we put a lot of consideration into how that would work. I have to be honest, a lot of it is to do with how it’s technically implemented, and I’m definitely not the right person to talk so much about the technical aspect. I find there are sort of two camps with design systems. There’s a camp which says: let’s get stuff out there as quickly as possible, let’s just make it open as soon as we can, that will stop duplication of effort, and then we can iterate as we go along. Then I think there’s a slightly more let’s-move-a-bit-more-slowly camp, which I think I’m in, which favors holding off on releasing stuff until you have a certain level of confidence in it.

Amy: And I think that’s quite important because, in general, if you’re designing products and services, then starting with the minimal thing and iterating as you go works great. But when you’re building something central that’s designed for lots and lots of people to reuse, serving lots of different audiences, you very quickly lose control of the thing and the way that it’s being used. So I think that having a certain amount of confidence in something before you release it, and having a kind of assurance process in place that gives you some confidence that it’s accessible before it goes out there, is quite key. Then hopefully the thing is slightly more stable, and I think that’s really important for trust. Trust is quite important when we’re talking about making changes to design systems, because if we’re releasing changes all the time, then that makes the system quite unstable to use, and I think that breaks down trust, and then people aren’t so likely to install updates and things.

Amy: Whereas I think if you can show that you’re being considerate about what you’re releasing, and you’re releasing changes only when necessary, then you have that goodwill and people are more willing to make updates, I think. But yeah, I know that a lot of work went into making sure that the update process was fairly smooth and easy to implement in the GOV.UK design system. I’m just not the right person to talk about it, I think.

Drew: So we talked briefly about documentation. If I was looking to document a design system and if I wanted to do a really good job of it, is there anything that you would advise me to do to make sure I was documenting stuff well?

Amy: I never think it’s possible to just give a blanket statement here, because it really does need to cater to the specific audience that you’re dealing with. My thing is to always aim to be just a little bit more inclusive than you maybe feel you need to be. Especially in a smaller organization that’s scaling, I think you have to be just as considerate of your future audience and your potential audience as of your current audience. So if you have a small organization and you’ve got 10 front-end developers who all know the same sort of stuff and are able to talk to each other and communicate fairly freely, then your documentation may not need to be as comprehensive as it would within a larger organization.

Amy: But I think that in order to help a design system scale, and to make sure that it’s equipped to do that, you have to think about who might join the organization in the future and who you need to leave the door open for. Who do you need to make things clear to? So I think always aim to be a little bit clearer than you feel you need to be in the moment. I think really testing documentation a lot is useful. There are lots of different ways to test content and documentation, and I think it’s really important to go out and make sure that it makes sense to other people. I think Caroline Jarrett always says that if you understand it well enough to know it’s correct, then you know too much to say that it’s clear.

Amy: Have I said that correctly? If you know it well enough to know it’s correct, then you know it too well to say that it’s clear. That’s better, I think. And I really agree with that. I think that to write good documentation you have to have pretty good subject matter knowledge, or you end up developing that subject matter knowledge over time and through the process of writing it. By the time you’ve got that subject matter knowledge, it’s really hard to judge whether or not you’ve conveyed it in a way that’s clear to somebody who doesn’t have it. So going out and testing it with people who don’t know the subject matter like you do, and getting them to actually try and use it in a practical task, I think is really important. Yeah, that’s my number one thing. You’ll learn way more by putting it in front of people than you’ll learn by reading around and looking at what other people have done, I think.

Drew: And in doing that you’re obviously going to get feedback on that documentation. Do you have any suggestions for how you would approach fixing things based on that feedback? Is there anything specific that you’d be looking for in the feedback to understand how well your documentation had worked?

Amy: Yeah, I mean there are a few things to watch out for. I think it’s really important to separate preferences, and people perhaps not liking the documentation, from people actually not being able to use it. So I think task-based testing with documentation is better, because it might be that somebody complains their way through an entire guide but still completes the task that you’ve set them. That’s not to say that doesn’t matter; if they wanted to do the thing but they actually hated the process, then you of course need to take that into consideration. But I think that some people, and I’m probably one of them, just can’t help themselves, especially if it’s a content designer; I think we can’t ever quite put that content design mentality aside.

Amy: So I definitely have a tendency to start live editing stuff if I’m supposed to be participating as a research participant. So yeah, separating preference from actual usability and blockers is quite important. I also think making sure that you’re really interrogating the need to make changes and to update things. Sometimes, if somebody is particularly engaged with a design system, depending on the sort of person they are, they can be quite vocal about how they think it could be better, or how they would’ve done it, or how it could be clearer. Especially if you’re trying to build goodwill and you’re in that early stage with the design system, it can be quite tempting to just immediately respond to that feedback and do what they say or try and make it clearer.

Amy: But then you can end up building it too far in the direction of the loud minority. I think you should actually ask: how many people have got this problem? What evidence do we have that this isn’t working for people? And does that warrant an update? So yeah, trying to resist the temptation to respond to every comment and bit of criticism that you receive is quite important too.

Drew: I suppose a common theme here, with design systems that enable consistent design and give you a reusable resource, with accepting contributions that make those designs stronger, with implementing accessible design choices, and with documenting your design to make it easy to access and use, is that it really all comes back to inclusion. Would you say that’s fair, that it’s about including people as much as possible?

Amy: Definitely. Yeah. I mean, I think that a good design system is a representative design system, and I don’t think it’s possible to achieve representation by acting on people’s behalf. I think you really need to try and involve people in the process as much as possible. Often, for people working on design systems, and it was certainly the case for us at the GOV.UK design system, you tend to be one step removed from your organization’s end users. For the GOV.UK design system, the people that the design system is ultimately there to serve are members of the public, citizens, and people using government services. But we in our team were rarely working directly with those people; most of the time our direct users were people working in the civil service. So making sure that you’ve got really strong feedback loops between your direct users and then their users, to ensure that it’s representative, I think is really important, and I think that’s where inclusion comes in. Yeah, I completely agree. I think it’s a really central thing; I can’t imagine how you could build a successful design system without a focus on that.

Drew: Is there anything else that you’d like to share with us about your work on the GOV.UK design system?

Amy: I think my main takeaway from working on it is that, and I hate using the word physical when I’m talking about anything on the web, the visual representation of a design system can end up being the thing that we all get really fixated on. We look at how it’s coded, how it’s organized, what it looks like, how it’s documented, and what the design is. Obviously that stuff is really important; it’s the thing that you can look at and show people and share, so it’s easy to see why we get fixated on it. But I really think that the most important factor is the people: having inclusive processes, making sure that you’re fostering safe discussion spaces, and giving people an opportunity to get involved in the work, to participate, to feel motivated to help you with it, and to feel a sense of ownership over it.

Amy: I think all of that stuff is really important and all of that stuff really happens outside of the code and outside of the documentation. So yeah, I think my key takeaway from working on the GOV.UK design system is how much of it is really just people work and not really anything to do with guidance and code.

Drew: Here at Smashing, we’re all about learning. So what have you been learning lately?

Amy: Lately I’ve been learning a lot about productivity and focus. I think definitely towards the end of last year I became aware that I was really plate spinning and luckily I don’t think I smashed any of those plates but I found myself kind of working quite chaotically and moving around lots of different projects and saying yes to everything. So this year is the year that I want to really improve my focus. So I’m trying to learn a little bit about mindfulness and organization and how to say no to things strategically so that I don’t get overwhelmed and too distracted. I’ve started bullet journaling so I’ve really become the full 2020 cliche at this point. So that’s what I’m learning at the moment.

Drew: If you dear listener, would like to hear more from Amy, you can follow her on Twitter where she’s @Amy_Hupe or find her on the web at amyhupe.co.uk. Thanks for joining us today. Amy, do you have any parting words for us?

Amy: Stay cool. What? Why did I say that? Just came out, it just came out.

(dm, ra, il)

An Introduction To React’s Context API

Mon, 01/13/2020 - 03:30
By Yusuff Faruq

For this tutorial, you should have a fair understanding of hooks. Still, before we begin, I’ll briefly discuss what they are and the hooks we’ll be using in this article.

According to the React Docs:

“Hooks are a new addition in React 16.8. They let you use state and other React features without writing a class.”

That is basically what a React hook is. It allows us to use state, refs and other React features in our functional components.

Let us discuss the two hooks we will encounter in this article.

The useState Hook

The useState hook allows us to use state in our functional components. A useState hook takes the initial value of our state as the only argument, and it returns an array of two elements. The first element is our state variable and the second element is a function which we can use to update the value of the state variable.

Let’s take a look at the following example:

import React, { useState } from "react";

function SampleComponent() {
  const [count, setCount] = useState(0);
}

Here, count is our state variable and its initial value is 0 while setCount is a function which we can use to update the value of count.

The useContext Hook

I will discuss this later in the article but this hook basically allows us to consume the value of a context. What this actually means will become more apparent later in the article.
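As a quick preview, here is the general shape of a useContext call. This is a minimal sketch; SomeContext is a hypothetical context object created elsewhere with React.createContext:

import React, { useContext } from "react";
import SomeContext from "./SomeContext"; // hypothetical context module

function Preview() {
  // Reads the current value provided for SomeContext higher up the tree.
  const value = useContext(SomeContext);
  return <p>{value}</p>;
}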


Why Do We Need The Context API?

We want to build a “theme toggler” component which toggles between light mode and dark mode for our React app. Every component has to have access to the current theme mode so that it can be styled accordingly.

Normally, we would provide the current theme mode to all the components through props and update the current theme using state:

import React from "react";
import ReactDOM from "react-dom";

function App() {
  return (
    <div>
      <Text theme="blue" />
    </div>
  );
}

function Text({ theme }) {
  return (
    <h1 style={{ color: `${theme}` }}>{theme}</h1>
  );
}

const rootElement = document.getElementById("root");
ReactDOM.render(<App />, rootElement);

In the code sample above, we created a Text Component which renders an h1 element. The color of the h1 element depends on the current theme mode. Currently, the theme is blue. We can toggle between blue and red themes by using state.

We will create a state called “theme” using the useState hook. The useState hook will return the current value of the theme and a function which we can use to update the theme.

So, let us create our theme state:

const [theme, setTheme] = React.useState("blue");

We will also add a button element to our App component. This button will be used to toggle the themes and it needs a click event handler. So, let us write the click event handler like so:

const onClickHandler = () => {
  setTheme();
}

Now, we want to set the new theme to Red if the current theme is Blue, and vice versa. Instead of using an if statement, a more convenient way to do this is with the help of the ternary operator in JavaScript.

setTheme(theme === "red" ? "blue" : "red");

So now, we have written our onClick handler. Let’s add this button element to the App component:

<button onClick = {onClickHandler}>Change theme</button>

Let us also change the value of the theme props of the Text component to the theme state.

<Text theme={theme}/>

Now, we should have this:

import React from "react";
import ReactDOM from "react-dom";
import "./styles.css";

function App() {
  const [theme, setTheme] = React.useState("blue");

  const onClickHandler = () => {
    setTheme(theme === "red" ? "blue" : "red");
  }

  return (
    <div>
      <Text theme={theme} />
      <button onClick={onClickHandler}>Change theme</button>
    </div>
  );
}

function Text({ theme }) {
  return (
    <h1 style={{ color: `${theme}` }}>{theme}</h1>
  );
}

const rootElement = document.getElementById("root");
ReactDOM.render(<App />, rootElement);

We can now toggle between our two themes. However, if this was a much larger application, it would be difficult to use the theme in deeply nested components and the code becomes unwieldy.

Introducing The Context API

Let me introduce the Context API. According to the React documentation:

“Context provides a way to pass data through the component tree without having to pass props down manually at every level.”

For a more in-depth definition, it provides a way for you to make particular data available to all components throughout the component tree no matter how deeply nested that component may be.

Let us look at this example:

const App = () => {
  return (
    <ParentComponent theme="light" />
  );
}

const ParentComponent = (props) => (
  <Child theme={props.theme} />
)

const Child = (props) => (
  <Grandchild theme={props.theme} />
)

const Grandchild = (props) => (
  <p>Theme: {props.theme}</p>
)

In the example above, we specified the application theme using a prop called theme, passed to ParentComponent. We had to pass that prop down the component tree to get it where it is needed, which is the Grandchild component. The Child component had nothing to do with the theme prop; it was just used as an intermediary.

Now, imagine the Grandchild component was more deeply nested than in the example above. We would have to pass the theme prop the same way we did here, which would be cumbersome. This is the problem that Context solves. With Context, every component in the component tree has access to whatever data we decide to put in our context.

Let’s Get Started With Context

It’s time to replicate the theme toggling button we built at the beginning of the article with the Context API. This time, our theme toggler will be a separate component. We will build a ThemeToggler component which switches the theme of our React app using Context.

First, let us initialize our React app. (I prefer using create-react-app but you can use whatever method you prefer.)

Once you have initialized your React project, create a file called ThemeContext.js in your /src folder. You can also create a folder called /context and place your ThemeContext file in there if you want.

Now, let us move on.

Creating Your Context API

We will create our theme context in our ThemeContext.js file.

To create a context, we use React.createContext, which creates a context object. You can pass in anything as an argument to React.createContext; whatever you pass in becomes the default value of the context. In this case, we are going to pass in a string which is the current theme mode. So now our current theme mode is the “light” theme mode.

import React from "react";

const ThemeContext = React.createContext("light");

export default ThemeContext;

To make this context available to all our React components, we have to use a Provider. What is a Provider? According to the React documentation, every context object comes with a Provider React component that allows consuming components to subscribe to context changes. It is the provider that allows the context to be consumed by other components. That said, let us create our provider.

Go to your App.js file. In order to create our provider, we have to import our ThemeContext.

Once the ThemeContext has been imported, we have to enclose the contents of our App component in ThemeContext.Provider tags and give the ThemeContext.Provider component a prop called value, which will contain the data we want to make available to our component tree.

function App() {
  const theme = "light";
  return (
    <ThemeContext.Provider value={theme}>
      <div>
      </div>
    </ThemeContext.Provider>
  );
}

So now the value of “light” is available to all our components (which we will write soon).

Creating Our Theme File

Now, we will create our theme file that will contain the different color values for both our light and dark themes. Create a file in your /src folder called Colors.js.

In Colors.js, we will create an object called AppTheme. This object will contain the colors for our themes. Once you are done, export the AppTheme object like so:

const AppTheme = {
  light: {
    textColor: "#000",
    backgroundColor: "#fff"
  },
  dark: {
    textColor: "#fff",
    backgroundColor: "#333"
  }
}

export default AppTheme;

Now it’s time to start creating our different React components.

Creating Our React Components

Let’s create the following components:

  • Header
  • ThemeToggler
  • MainWithClass
Header.jsx

import React from "react";
import ThemeToggler from "./ThemeToggler";

const headerStyles = {
  padding: "1rem",
  display: "flex",
  justifyContent: "space-between",
  alignItems: "center"
}

const Header = () => {
  return (
    <header style={headerStyles}>
      <h1>Context API</h1>
      <ThemeToggler />
    </header>
  );
}

export default Header;

ThemeToggler.jsx

(For now, we will just return an empty div.)

import React from "react";
import ThemeContext from "../Context/ThemeContext";

const themeTogglerStyle = {
  cursor: "pointer"
}

const ThemeToggler = () => {
  return (
    <div style={themeTogglerStyle}>
    </div>
  );
}

export default ThemeToggler;

Consuming Context With Class-Based Components

Here, we will use the value of our ThemeContext. As you may already know, we have two methods of writing components in React: through functions or classes. The process of consuming context is different for each, so we will create two components to serve as the main section of our application: MainWithClass and MainWithFunction.

Let us start with MainWithClass.

MainWithClass.jsx

We will have to import our ThemeContext and AppTheme. Once that is done, we will write a class that returns our JSX from a render method. Now we have to consume our context. There are two methods to do this with class-based components:

  1. The first method is through Class.contextType.

    To use this method, we assign the context object from our ThemeContext to contextType property of our class. After that, we will be able to access the context value using this.context. You can also reference this in any of the lifecycle methods and even the render method.

    import React, { Component } from "react";
    import ThemeContext from "../Context/ThemeContext";
    import AppTheme from "../Colors";

    class Main extends Component {
      static contextType = ThemeContext;

      render() {
        const currentTheme = AppTheme[this.context];
        return (
          <main></main>
        );
      }
    }
    After assigning ThemeContext to the contextType property of our class, I saved the current theme object in the currentTheme variable.

    Now, we will grab the colors from the currentTheme variable and use them to style some markup.
    render() {
      const currentTheme = AppTheme[this.context];
      return (
        <main style={{
          padding: "1rem",
          backgroundColor: `${currentTheme.backgroundColor}`,
          color: `${currentTheme.textColor}`,
        }}>
          <h1>Heading 1</h1>
          <p>This is a paragraph</p>
          <button>This is a button</button>
        </main>
      );
    }
    That’s it! This method, however, limits you to consuming only one context.
  2. The second method involves the use of a Consumer: ThemeContext.Consumer. Each context object also comes with a Consumer React component which can be used in a class-based component. The Consumer component takes a function as a child, and that function returns a React node. The current context value is passed to that function as an argument.

    Now, let us replace the code in our MainWithClass component with this:
    class Main extends Component {
      render() {
        return (
          <ThemeContext.Consumer>
            {(theme) => {
              const currentTheme = AppTheme[theme];
              return (
                <main style={{
                  padding: "1rem",
                  backgroundColor: `${currentTheme.backgroundColor}`,
                  color: `${currentTheme.textColor}`,
                }}>
                  <h1>Heading 1</h1>
                  <p>This is a paragraph</p>
                  <button>This is a button</button>
                </main>
              );
            }}
          </ThemeContext.Consumer>
        );
      }
    }
    As you can see, we used the current value of our ThemeContext, which we aliased as “theme”, grabbed the color values for that theme mode, and assigned them to the variable currentTheme. With this method, you can use multiple Consumers.

Those are the two methods of consuming context with class-based components.

Consuming Context With Functional Components

Consuming context with functional components is easier and less tedious than doing so with class-based components. To consume context in a functional component, we will use a hook called useContext.

Here is what consuming our ThemeContext with a functional component would look like:

import React, { useContext } from "react";
import ThemeContext from "../Context/ThemeContext";
import AppTheme from "../Colors";

const Main = () => {
  const theme = useContext(ThemeContext);
  const currentTheme = AppTheme[theme];
  return (
    <main style={{
      padding: "1rem",
      backgroundColor: `${currentTheme.backgroundColor}`,
      color: `${currentTheme.textColor}`,
    }}>
      <h1>Heading 1</h1>
      <p>This is a paragraph</p>
      <button>This is a button</button>
    </main>
  );
}

export default Main;

As you can see, all we had to do was use our useContext hook with our ThemeContext passed in as an argument.

Note: You have to use these different components in the App.js file in order to see the results.

Updating Our Theme With The ThemeToggler Component

Now we are going to work on our ThemeToggler component. We need to be able to switch between the light and dark themes. To do this, we are going to need to edit our ThemeContext.js. Our React.createContext call will now take an array resembling the result of a useState hook as its argument.

const ThemeContext = React.createContext(["light", () => {}]);

We passed an array to the React.createContext function. The first element in the array is the current theme mode and the second element is the function that would be used to update the theme. As I said, this just resembles the result of a useState hook but it is not exactly the result of a useState hook.

Now we will edit our App.js file. We need to change the value passed to the provider to a useState hook. Now the value of our Theme Context is a useState hook whose default value is “light”.

function App() {
  const themeHook = useState("light");
  return (
    <ThemeContext.Provider value={themeHook}>
      <div>
        <Header />
        <Main />
      </div>
    </ThemeContext.Provider>
  );
}

Writing Our ThemeToggler Component

Let us now actually write our ThemeToggler component:

import React, { useContext } from "react";
import ThemeContext from "../Context/ThemeContext";

const themeTogglerStyle = {
  cursor: "pointer"
}

const ThemeToggler = () => {
  const [themeMode, setThemeMode] = useContext(ThemeContext);
  return (
    <div
      style={themeTogglerStyle}
      onClick={() => setThemeMode(themeMode === "light" ? "dark" : "light")}
    >
      <span title="switch theme">
        {/* Label is assumed; the source snippet was truncated at this point. */}
        {themeMode === "light" ? "dark mode" : "light mode"}
      </span>
    </div>
  );
}

export default ThemeToggler;

Understanding CSS Grid: Grid Lines

Fri, 01/10/2020 - 03:30
By Rachel Andrew

In the first article in this series, I took a look at how to create a grid container and the various properties applied to the parent element that make up your grid. Once you have a grid, you have a set of grid lines. In this article, you will learn how to place items against those lines by adding properties to the direct children of the grid container.

We will cover:

  1. The placement properties grid-column-start, grid-column-end, grid-row-start, grid-row-end and their shorthands grid-column and grid-row.
  2. How to use grid-area to place by line number.
  3. How to place items according to line name.
  4. The difference between the implicit and explicit grid when placing items.
  5. Using the span keyword, with a bit of bonus subgrid.
  6. What to watch out for when mixing auto-placed and placed items.
Basic Concepts Of Line-Based Positioning

To place an item on the grid, we set the line on which it starts, then the line on which we want it to end. Therefore, with a five-column, five-row grid, if I want my item to span the second and third column tracks, and the first, second and third row tracks, I would use the following CSS. Remember that we are targeting the line, not the track itself.

.item {
  grid-column-start: 2;
  grid-column-end: 4;
  grid-row-start: 1;
  grid-row-end: 4;
}

This could also be specified as a shorthand: the value before the forward slash is the start line, and the value after it is the end line.

.item {
  grid-column: 2 / 4;
  grid-row: 1 / 4;
}

On CodePen you can see the example, and change the lines that the item spans.

See the Pen Grid Lines: placement shorthands by Rachel Andrew (@rachelandrew) on CodePen.

Note that the reason our box background stretches over the entire area is that the initial values of the alignment properties align-self and justify-self are stretch.

If you only need your item to span one track, then you can omit the end line, as the default behavior is for items to span one track. We saw this when auto-placing items in the last article: each item goes into a cell, spanning one column track and one row track. So to cause an item to span from line 2 to 3, you could write:

.item { grid-column: 2 / 3; }

It would also be perfectly correct to miss off the end line:

.item {
  grid-column: 2;
}

The grid-area Shorthand

You can also place an item using grid-area. We’ll encounter this property again in a future article, however, when used with line numbers it can be used to set all four lines.

.item { grid-area: 1 / 2 / 4 / 4; }

The order of those line numbers is grid-row-start, grid-column-start, grid-row-end, grid-column-end. If working in a horizontal language, written left to right (like English), that’s top, left, bottom, right. You may have realized this is the opposite of how we normally specify shorthands such as margin in CSS - these run top, right, bottom, left.

The reason for this is that grid works in the same way no matter which writing mode or direction you are using, and we’ll cover this in detail below. Therefore, setting both starts then both ends makes more sense than mapping the values to the physical dimensions of the screen. I don’t tend to use this property for line-based placement, as I think the two-value shorthands of grid-column and grid-row are more readable when scanning through a stylesheet.
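Since the order trips people up, here is the same declaration with the shorthand annotated; a commented restatement of the example above:

.item {
  /* grid-row-start / grid-column-start / grid-row-end / grid-column-end */
  grid-area: 1 / 2 / 4 / 4;
}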

Lines On The Explicit Grid

I mentioned the explicit versus the implicit grid in my last article. The explicit grid is the grid that you create with the grid-template-columns and grid-template-rows properties. By defining your column and row tracks, you also define lines between those tracks and at the start and end edges of your grid.

Those lines are numbered. The numbering starts from 1 at the start edge in both the block and inline direction. If you are in a horizontal writing mode, with sentences which begin on the left and run towards the right this means that line 1 in the block direction is at the top of the grid, and line 1 in the inline direction is the left-hand line.

The item placed on the grid

If you are working in a horizontal RTL language - as you might be if working in Arabic - then line 1 in the block direction is still at the top, but line 1 in the inline direction is on the right.

The same placement with direction: rtl

If you are working in a Vertical Writing Mode, and in the image below I have set writing-mode: vertical-rl, then line 1 will be at the start of the block direction in that writing mode, in this case on the right. Line 1 in the inline direction is at the top.

The same placement in writing-mode: vertical-rl

Therefore, grid lines are tied to the writing mode and script direction of the document or component.

The end line of your explicit grid is number -1 and lines count back in from that point, making line -2 the second from the last line. This means that if you want to span an item across all tracks of the explicit grid you can do so with:

.item {
  grid-column: 1 / -1;
}

Lines On The Implicit Grid

If you have created implicit grid tracks then they also count up from 1. In the example below, I have created an explicit grid for columns, however, row tracks have been created in the implicit grid, where I am using grid-auto-rows to size these to 5em.

The item with a class of placed has been placed to span from row line 1 to row line -1. If we were working with an explicit grid for our two rows, the item would span both rows. Because the row tracks have been created in the implicit grid, however, line -1 resolves to line 2, and not line 3.

See the Pen Grid Lines: explicit vs. implicit grid by Rachel Andrew (@rachelandrew) on CodePen.

There is currently no way to target the last line of the implicit grid, without knowing how many lines you have.

Placing Items Against Named Lines

In the last article I explained that in addition to line numbers, you can optionally name lines on your grid. You name the lines by adding a name or names inside square brackets between your tracks sizes.

.grid {
  display: grid;
  grid-template-columns: [full-start] 1fr [main-start] 2fr 2fr [main-end full-end];
}

Once you have some named lines, you can swap out the line number for a name when placing your items.

.item { grid-column: main-start / main-end; }

See the Pen Grid Lines: naming lines by Rachel Andrew (@rachelandrew) on CodePen.

If your line has several names, you can pick whichever one you like when placing your item, all of the names will resolve to that same line.

Note: There are some interesting things that happen when you name lines. Take a look at my article “Naming Things In CSS Grid Layout” for more.

What Happens If There Are Multiple Lines With The Same Name?

You get some interesting behavior if you have multiple lines that have the same name. This is a situation that could happen if you name lines within repeat() notation. In the example below I have an 8 column grid, created by repeating 4 times a pattern of 1fr 2fr. I have named the line before the smaller track sm and the larger track lg. This means that I have 4 lines with each name.

In this situation, we can then use the name as an index. So to place an item starting at the second line named sm and stretching to the third line named lg I use grid-column: sm 2 / lg 3. If you use the name without a number that will always resolve to the first line with that name.
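A sketch of the grid and placement just described, matching the track pattern and line names above:

.grid {
  display: grid;
  /* 8 columns; each repeat contributes one line named sm and one named lg */
  grid-template-columns: repeat(4, [sm] 1fr [lg] 2fr);
}

.item {
  /* from the 2nd line named sm to the 3rd line named lg */
  grid-column: sm 2 / lg 3;
}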

See the Pen Grid Lines: naming lines by Rachel Andrew (@rachelandrew) on CodePen.

Using The span Keyword

There are situations where you know that you want an item to span a certain number of tracks, but you don’t know exactly where it will sit on the grid. An example would be where you are placing items using auto-placement but want them to span multiple tracks rather than the default 1. In this case, you can use the span keyword. In the example below, my item starts on line auto (the line where auto-placement would put it) and then spans 3 tracks.

.item { grid-column: auto / span 3; }

See the Pen Grid Lines: span keyword by Rachel Andrew (@rachelandrew) on CodePen.

This technique will become very useful once we have wide support of the subgrid value for grid-template-columns and grid-template-rows. For example, in a card layout where the cards have a header and a main content area which you want to align across cards, you can cause each card to span 2 rows while still allowing for the usual auto-placement behavior. The individual cards will use subgrid for their rows (i.e. getting two rows each). You can see this in the example below if you use Firefox, and read my article “CSS Grid Level 2: Here Comes Subgrid” to learn more about subgrid.

See the Pen Grid Lines: span keyword and subgrid by Rachel Andrew (@rachelandrew) on CodePen.

The example in Firefox using the Grid Inspector

Layering Items With Line-Based Placement

Grid will auto-place items into empty cells on the grid; it won’t stack items into the same cell. However, by using line-based placement you can put items into the same grid cell. In this next example, I have an image that spans two row tracks, and a caption which is placed in the second track and given a semi-transparent background.

See the Pen Grid Lines: card with layered elements by Rachel Andrew (@rachelandrew) on CodePen.

Items will stack up in the order that they appear in the document source. So in the above example, the caption comes after the image and therefore displays on top of the image. If the caption had come first then it would end up displaying behind the image and we wouldn’t be able to see it. You can control this stacking by using the z-index property. If it was important for the caption to be first in the source, then you can use z-index, with a higher value for the caption than the image. This would force the caption to display on top of the image so that it can be read.
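A minimal sketch of that source-order fix, assuming .image and .caption classes for the two stacked items:

.image {
  grid-column: 1;
  grid-row: 1 / 3; /* the image spans both row tracks */
}

.caption {
  grid-column: 1;
  grid-row: 2;  /* shares the second cell with the image */
  z-index: 2;   /* keeps the caption on top even if it comes first in the source */
}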

Mixing Line-Based And Auto-Placement

You need to take a little extra care if you are mixing placed items with auto-placed ones. When items are fully auto-placed in grid, they will place themselves sequentially onto the grid, each finding the next available empty space to put themselves into.

See the Pen Grid Lines: auto-placement by Rachel Andrew (@rachelandrew) on CodePen.

The default behavior is always to progress forwards, and to leave a gap if an item does not fit on the grid. You can control this behavior by using the property grid-auto-flow with a value of dense. In this case, if there is an item that fits a gap already left in the grid, it will be placed out of source order in order to fill the gap. In the example below using dense packing, item 3 is now placed before item 2.
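Turning on dense packing is a single property on the grid container; a sketch using the behavior described above:

.grid {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  /* backfill earlier gaps, even if that displays items out of source order */
  grid-auto-flow: dense;
}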

See the Pen Grid Lines: auto-placement and dense packing by Rachel Andrew (@rachelandrew) on CodePen.

Note that this behavior can cause problems for users who are tabbing through the document as the visual layout will be out of sync with the source order that they are following.

Auto-placement works slightly differently if you have already placed some items. The placed items will be positioned first, and auto-placement will then look for the first available gap to start placing items. If you have left some whitespace at the top of your layout by way of an empty grid row, then introduce some items which are auto-placed, they will end up in that track.

To demonstrate, in this final example I have placed items 1 and 2 with the line-based positioning properties, leaving the first row empty. Later items have moved up to fill the gaps.

See the Pen Grid Lines: auto-placement mixed with placed items by Rachel Andrew (@rachelandrew) on CodePen.

This behavior is worth understanding, as it can mean that items end up in strange places if you introduce some new elements to your layout which haven’t been given a placement on the grid.

Wrapping Up

That is pretty much all you need to know about grid lines. Remember that you always have numbered lines; no matter how else you are using grid, you can always place an item from one line number to another. The other methods we will look at in future articles are alternative ways to specify your layout, but they are based on the grid created by numbered lines.

(il)

How To Create And Deploy Angular Material Application

Thu, 01/09/2020 - 04:30
By Shubham

Angular is one of the popular choices when creating new web applications. Moreover, the Material Design specs have become a go-to choice for creating minimal and engaging experiences today. Thus, a new Angular project will often use the official Angular Material library for components that follow the Material Design specifications. From smooth animations to proper interaction feedback, all of this is already available as part of the official Material Design library for Angular.

After the web application is developed, the next step is to deploy it. That is where Netlify comes into the picture. With its easy-to-use interface, automatic deployments, traffic splitting for A/B testing, and various other features, Netlify is surely a great tool.

The article will be a walkthrough of creating an Angular 8 web application using the official Angular Material Design library. We will be creating a QR Code generator web application completely based on Angular while hosted on Netlify.

Files for this tutorial can be found on GitHub and a demo version is deployed here.

Getting Started
  1. Install Angular 8,
  2. Create a Github account,
  3. Install Git on your computer,
  4. Create a Netlify account.

Note: I will be using VSCode and Microsoft Windows as the preferred IDE and OS, though the steps would be similar for any other IDE on any other OS.

After the above prerequisites are complete, let’s begin!

Mocks & Planning

Before we begin creating the project, it would be beneficial to plan ahead: What kind of UI would we want in our application? Will there be any reusable pieces? How will the application interact with external services?

First, check the UI mocks.

Homepage (Large preview)
Creating a QR page (Large preview)
History page (Large preview)

These are the three different pages which will be contained in the application. The homepage will be the starting point of our application. Creating a QR page should deal with the creation of a new QR code. The History page will show all the saved QR codes.

The mockups not only provide an idea of the look and feel of the application, but they also segregate the responsibility of each page.

One observation (from the mocks) is that it seems that the top navigation bar is common across all the pages. Thus, the navigation bar can be created as a reusable component and reused.

Now that we have a fair bit of an idea as to how the application will look and what can be reused, let’s start.

Creating A New Angular Project

Launch VSCode, then open a terminal window in VSCode to generate a new Angular project.

Terminal in VSCode (Large preview)

The terminal will open with a default path as shown in the prompt. You can change to a preferred directory before proceeding; in the case of Windows, I will use the cd command.

Navigating to the preferred path (Large preview)

Moving forward, angular-cli has a command to generate new projects ng new <project-name>. Just use any fancy project name you like and press enter, e.g. ng new qr.

This will trigger the angular-cli magic; it will provide a few options to configure some aspects of the project, for instance, adding angular routing. Then, based on the selected options, it will generate the whole project skeleton which can be run without any modification.

For this tutorial, enter Yes for routing and select CSS for styling. This will generate a new Angular project:

Creating a new Angular project (Large preview)

We now have got ourselves a fully working Angular project. In order to make sure everything is working properly, we can run the project by entering this command in the terminal: ng serve. Uh oh, but wait, this results in an error. What could have happened?

ng serve error (Large preview)

Don’t worry. Whenever you create a new project using angular-cli, it generates the whole skeleton inside a folder named after the project name specified in the command ng new qr. Here, we will have to change the current working directory to the one just created. In Windows, use the command cd qr to change directory.

Now, try running the project again with the help of ng serve:

Project running (Large preview)

Open a web browser, go to the URL http://localhost:4200 to see the project running. The command ng serve runs the application on port 4200 by default.

TIP: To run it on a different port, use the command ng serve --port <any-port>, for instance, ng serve --port 3000.

This ensures that our basic Angular project is up and running. Let’s move on.

We need to add the project folder to VSCode. Go to the “File” menu and select “Open Folder” and select the project folder. The project folder will now be shown in the Explorer view on the left.

Adding Angular Material Library

To install the Angular material library, use the following command in the terminal window: ng add @angular/material. This will (again) ask some questions, such as which theme you want, whether you want default animations, and whether touch support is required. We will just select the default Indigo/Pink theme, and answer Yes to adding the HammerJS library and browser animations.

Adding Angular material (Large preview)

The above command also configures the whole project to enable support for the material components.

  1. It adds project dependencies to package.json,
  2. It adds the Roboto font to the index.html file,
  3. It adds the Material Design icon font to your index.html,
  4. It also adds a few global CSS styles to:
    • Remove margins from the body,
    • Set height: 100% on the html and body,
    • Set Roboto as the default application font.

Just to be sure that everything is fine you can run the project again at this point, though you will not notice anything new.

Adding Home Page

Our project skeleton is now ready. Let’s start by adding the homepage.

(Large preview)

We want to keep our homepage simple, just like the above picture. This home page uses a few angular material components. Let’s dissect.

  1. The top bar is a simple HTML nav element which contains material style button, mat-button, with an image and a text as its child. The bar color is the same as the primary color which was selected while adding Angular material library;
  2. A centered image;
  3. Another, mat-button, with just a text as its child. This button will allow users to navigate to the history page;
  4. A count badge, matBadge, attached to the above button, showing the number of QR codes saved by the user;
  5. A floating action button, mat-fab, at the bottom right corner having the accent color from the selected theme.

Digressing a little, let’s add other required components and services first.

Adding Header

As planned previously, the navigation bar should be reusable, so let’s create it as a separate Angular component. Open the terminal in VSCode, type ng g c header (short for ng generate component header) and press Enter. This will create a new folder named “header” which will contain four files:

  • header.component.css: used to provide styling for this component;
  • header.component.html: for adding HTML elements;
  • header.component.spec.ts: for writing test cases;
  • header.component.ts: to add the Typescript-based logic.
Header component (Large preview)

To make the header look like as it was in the mocks, add the below HTML in header.component.html:

<nav class="navbar" [class.mat-elevation-z8]=true>
  <div>
    <button *ngIf="showBackButton" aria-hidden=false mat-icon-button routerLink="/">
      <mat-icon style="color: white;">
        <i class="material-icons md-32">arrow_back</i>
      </mat-icon>
    </button>
    <span style="padding-left: 8px; color: white;">{{currentTitle}}</span>
  </div>
  <button *ngIf="!showBackButton" aria-hidden=false mat-button class="button">
    <img src="../../assets/qr-icon-white.png" style="width: 40px;">
    <span style="padding-left: 8px;">QR Generator</span>
  </button>
  <button *ngIf="showHistoryNav" aria-hidden=false mat-button class="button" routerLink="/history">
    <span style="padding-left: 8px;">History</span>
  </button>
</nav>

TIP: To add elevation for any material component use [class.mat-elevation-z8]=true, the elevation value can be changed by changing z value, in this case it is z8. For instance, to change the elevation to 16, use [class.mat-elevation-z16]=true.

In the above HTML snippet, there are two Angular material elements being used: mat-icon and mat-button/mat-icon-button. Their usage is very simple; first, we need to add those two as modules in our app.module.ts as shown below:

Module import for mat-icon and mat-button (Large preview)

This will allow us to use these two Angular material elements anywhere in any component.

For adding material buttons, the following HTML snippet is used:

<button mat-button> Material Button </button>

There are different types of material button elements available in the Angular material library such as mat-raised-button, mat-flat-button, mat-fab and others; just replace the mat-button in the above code snippet with any other type.

Types of material buttons. (Large preview)

The other element is mat-icon which is used to show icons available in the material icon library. When the Angular material library was added in the beginning, then a reference to the material icon library was added as well, which enabled us to use icons from the vast array of icons.

The usage is as simple as:

<mat-icon style="color: white;">
  <i class="material-icons md-32">arrow_back</i>
</mat-icon>

The nested <i> tag can be used to change the icon size (here it’s md-32) which will make the icon size 32px in height and width. This value can be md-24, md-48, and so on. The value of the nested <i> tag is the name of the icon. (The name can be found here for any other icon.)

Accessibility

Whenever icons or images are used, it is imperative that they provide sufficient information for accessibility purposes or for a screen-reader user. ARIA (Accessible Rich Internet Applications) defines a way to make web content and web applications more accessible to people with disabilities.

One point to note is that HTML elements which have their own native semantics (e.g. nav) do not need ARIA attributes; the screen reader already knows that nav is a navigation element and reads it as such.

The ARIA spec is split into three categories: roles, states and properties. Let’s say that a div is used to create a progress bar in the HTML code. It does not have any native semantics; an ARIA role can describe the widget as a progress bar, an ARIA property can denote a characteristic, such as the fact that it can be dragged, and an ARIA state can describe its current state, such as the current value of the progress bar. See the snippet below:

<div id="percent-loaded"
     role="progressbar"
     aria-valuenow="75"
     aria-valuemin="0"
     aria-valuemax="100">
</div>

Similarly, there is the very commonly used ARIA attribute aria-hidden=true/false; the value true makes the element invisible to screen readers.

Since most of the UI elements used in this application have native semantic meaning, the only ARIA attributes used are to specify ARIA visibility states. For detailed information, refer to this.

The header.component.html does contain some logic to hide and show the back button depending on the current page. Moreover, the Home button also contains an image/logo which should be added to the /assets folder. Download the image from here and save it in the /assets folder.

For styling of the navigation bar, add the below css in header.component.css:

.navbar {
  position: fixed;
  top: 0;
  left: 0;
  right: 0;
  z-index: 2;
  background: #3f51b5;
  display: flex;
  flex-wrap: wrap;
  align-items: center;
  padding: 12px 16px;
}

.button {
  color: white;
  margin: 0px 10px;
}

As we want to keep the header component reusable across other components, we will require parameters from those components to decide what should be shown. This requires the @Input() decorator, which will bind to the variables we used in header.component.html.

Add these lines in the header.component.ts file:

// Add these three lines above the constructor entry.
@Input() showBackButton: boolean;
@Input() currentTitle: string;
@Input() showHistoryNav: boolean;

constructor() { }

The above three bindings will be passed as parameters from the components in which the header component is used, as sketched below. Their usage will become clearer as we move forward.
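As a preview, here is a hypothetical usage of the header with all three inputs bound; the values shown are placeholders, not the ones the tutorial uses later:

<app-header
  [showBackButton]="true"
  [currentTitle]="'History'"
  [showHistoryNav]="false">
</app-header>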

Moving on, we need to create a homepage that can be represented by an Angular component. So let’s start by creating another component; type ng g c home in the terminal to auto-generate the home component. As previously, a new folder named “home” will be created containing four different files. Before proceeding to modify those files, let’s add some routing information to angular routing module.

Adding Routing

Angular provides a way to map a URL to a specific component. Whenever navigation happens, the Angular framework monitors the URL and, based on the information present in the app-routing.module.ts file, initializes the mapped component. This way, different components do not need to shoulder the responsibility of initializing other components. In our case, the application has three pages navigable by clicking on different buttons. We achieve this by leveraging the routing support provided by the Angular framework.

The home component should be the starting point of the application. Let’s add this information to the app-routing.module.ts file.

Routing Home Component. (Large preview)

The path property is set as an empty string; this enables us to map the application URL to the homepage component, something like google.com which shows the Google homepage.

TIP: A path value never starts with a “/”; the root path uses an empty string, and a nested path can look like search/coffee.
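For reference, a sketch of what the routes array in app-routing.module.ts can look like; the create and history entries and the import paths are assumptions based on the components this tutorial creates:

import { Routes } from '@angular/router';

// Paths and component names assumed from the folder layout used in this tutorial.
import { HomeComponent } from './home/home.component';
import { CreateQrComponent } from './create-qr/create-qr.component';
import { HistoryComponent } from './history/history.component';

const routes: Routes = [
  { path: '', component: HomeComponent },           // homepage
  { path: 'create', component: CreateQrComponent }, // QR creation page
  { path: 'history', component: HistoryComponent }, // saved QR codes
];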

Moving back to the homepage component, replace the content of home.component.html with this:

<app-header [showBackButton]="false" [currentTitle]=""></app-header>
<app-profile></app-profile>
<!-- FAB Fixed -->
<button mat-fab class="fab-bottom-right" routerLink="/create">
  <mat-icon>
    <i class="material-icons md-48">add</i>
  </mat-icon>
</button>

There are three parts to the home component:

  1. The reusable header component, <app-header>.
  2. The profile component, <app-profile>.
  3. The floating action button at the bottom right.

The above HTML snippet shows how the reusable header component is used in other components; we just use the component selector and pass in the required parameters.

The profile component is created to be used as the body of the home page; we will create it soon.

The floating action button with + icon is a kind of Angular material button of type mat-fab on the bottom right of the screen. It has the routerLink attribute directive which uses the route information provided in the app-routing.module.ts for navigation. In this case, the button has the route value as /create which will be mapped to create component.

To make the create button float on bottom right, add the below CSS code in home.component.css:

.fab-bottom-right {
  position: fixed;
  left: auto;
  bottom: 5%;
  right: 10%;
}

Since the profile component is supposed to manage the home page body, we will leave home.component.ts intact.

Adding Profile Component

Open terminal, type ng g c profile and press enter to generate profile component. As planned earlier, this component will handle the main body of the home page. Open profile.component.html and replace its content with this:

<div class="center profile-child">
  <img class="avatar" src="../../assets/avatar.png">
  <div class="profile-actions">
    <button mat-raised-button
            matBadge="{{historyCount}}"
            matBadgeOverlap="true"
            matBadgeSize="medium"
            matBadgeColor="accent"
            color="primary"
            routerLink="/history">
      <span>History</span>
    </button>
  </div>
</div>

The above HTML snippet shows how to use the matBadge element of the material library. To be able to use it here, we need to follow the usual drill of adding MatBadgeModule to the app.module.ts file. Badges are small pictorial status descriptors for UI elements such as buttons, icons, or text. In this case, a badge is used with a button to show the count of QR codes saved by the user. The Angular material badge has various other properties, such as matBadgePosition to set the position of the badge, matBadgeSize to specify its size, and matBadgeColor to set the badge color.
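For illustration, a hypothetical button exercising those extra badge properties; the values shown are just examples:

<button mat-raised-button
        matBadge="4"
        matBadgePosition="above before"
        matBadgeSize="small"
        matBadgeColor="warn">
  History
</button>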

One more image asset needs to be added to the project. Download the image and save it to the assets folder of the project.

Open profile.component.css and add this:

.center {
  top: 50%;
  left: 50%;
  position: absolute;
  transform: translate(-50%, -50%);
}

.profile-child {
  display: flex;
  flex-direction: column;
  align-items: center;
}

.profile-actions {
  padding-top: 20px;
}

.avatar {
  border-radius: 50%;
  width: 180px;
  height: 180px;
}

The above CSS will achieve the UI as planned.

Moving on, we need some logic to update the history count value, as it is reflected in the matBadge used earlier. Open profile.component.ts and add the following:

export class ProfileComponent implements OnInit {

  historyCount = 0;

  constructor(private storageUtilService: StorageutilService) { }

  ngOnInit() {
    this.updateHistoryCount();
  }

  updateHistoryCount() {
    this.historyCount = this.storageUtilService.getHistoryCount();
  }
}

We have referenced StorageutilService, but we have not created that service yet. Ignoring the error for now, we have completed our profile component, which also finishes off our home page component. We will revisit the profile component after creating our storage utility service, so let's do that next.

Local Storage

HTML5 provides the Web Storage feature for storing data locally, and it offers much more room than cookies: at least 5MB versus 4KB. There are two types of web storage with different scope and lifetime: local and session. The former persists data indefinitely, while the latter lives only for a single session. Which one to pick depends on the use case; since we want saved QR codes to survive across sessions, we will go with local storage.
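The difference in a nutshell (both are standard browser APIs, no imports required):

// localStorage survives browser restarts and is shared across tabs
localStorage.setItem('qr-text', 'https://example.com');
const saved = localStorage.getItem('qr-text');   // 'https://example.com'

// sessionStorage is scoped to the current session and cleared when it ends
sessionStorage.setItem('draft', 'temporary value');

// Note: both stores hold strings only; objects need JSON.stringify()/JSON.parse()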

Each piece of data is stored as a key/value pair. We will use the text for which the QR code is generated as the key, and the QR image encoded as a base64 string as the value. Create an entity folder; inside it, create a new qr-object.ts file and add the code snippet as shown:

QR entity model.

The content of the class:

export class QR {
  text: string;
  imageBase64: string;

  constructor(text: string, imageBase64: string) {
    this.imageBase64 = imageBase64;
    this.text = text;
  }
}

Whenever the user saves the generated QR, we will create an object of the above class and save that object using the storage utility service.
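In other words, the save flow will boil down to something like this (a sketch; qrText and qrImageBase64 are hypothetical local variables, and the actual call lives in the create component's saveQR() method further below):

const qr = new QR(qrText, qrImageBase64);                  // the text plus the base64-encoded image
this.storageService.saveHistory(qr.text, qr.imageBase64);  // persist the pair to localStorage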

Create a new services folder; we will be creating several services, and it's better to group them together.

Services Folder.

Change the current working directory to services with cd services. To create a new service, use ng g s <any name>, a shorthand for ng generate service <any name>: type ng g s storageutil and press Enter. This will create two files, storageutil.service.ts and storageutil.service.spec.ts; the latter is for writing unit tests. Open storageutil.service.ts and add this:

private historyCount: number;

constructor() { }

saveHistory(key: string, item: string) {
  localStorage.setItem(key, item);
  this.historyCount = this.historyCount + 1;
}

readHistory(key: string): string {
  return localStorage.getItem(key);
}

readAllHistory(): Array<QR> {
  const qrList = new Array<QR>();
  for (let i = 0; i < localStorage.length; i++) {
    const key = localStorage.key(i);
    const value = localStorage.getItem(key);
    if (key && value) {
      const qr = new QR(key, value);
      qrList.push(qr);
    }
  }
  this.historyCount = qrList.length;
  return qrList;
}

getHistoryCount(): number {
  if (this.historyCount) {
    return this.historyCount;
  }
  this.readAllHistory();
  return this.historyCount;
}

deleteHistory(key: string) {
  localStorage.removeItem(key);
  this.historyCount = this.historyCount - 1;
}

Import the QR class from qr-object.ts to correct any errors; the import line is sketched below. To use the local storage feature, there is no need to import anything new: simply use the global localStorage object to save or read a value by its key.
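Assuming the entity folder sits next to services under src/app (the exact location is not pinned down above), the import at the top of storageutil.service.ts would be:

import { QR } from '../entity/qr-object';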

Now open the profile.component.ts file again and import the StorageutilService class to properly finish off the profile component.

Running the project, we can see the home page is up as planned.

Adding Create QR Page

We have our homepage ready, though the create/add button does not do anything yet. Worry not: part of the logic is already written. We used a routerLink directive to change the URL path to /create, but no mapping was added to the app-routing.module.ts file. Let's create a component to deal with the creation of new QR codes: type ng g c create-qr and press Enter to generate it.

Open the app-routing.module.ts file and add the below entry to the routes array:

{ path: 'create', component: CreateQrComponent },

This maps CreateQrComponent to the URL /create.

Open create-qr.component.html and replace its contents with this:

<app-header [showBackButton]="showBackButton" [currentTitle]="title"
            [showHistoryNav]="showHistoryNav"></app-header>

<mat-card class="qrCard" [class.mat-elevation-z12]=true>
  <div class="qrContent">

    <!-- Close button section -->
    <div class="closeBtn">
      <button mat-icon-button color="accent" routerLink="/" matTooltip="Close">
        <mat-icon>
          <i class="material-icons md-48">close</i>
        </mat-icon>
      </button>
    </div>

    <!-- QR code image section -->
    <div class="qrImgDiv">
      <img *ngIf="!showProgressSpinner" style="padding: 5px 5px;" src={{qrCodeImage}}
           width="200px" height="200px">
      <mat-spinner *ngIf="showProgressSpinner"></mat-spinner>
      <div class="actionButtons" *ngIf="!showProgressSpinner">
        <button mat-icon-button color="accent" matTooltip="Share this QR" style="margin: 0 5px;">
          <mat-icon>
            <i class="material-icons md-48">share</i>
          </mat-icon>
        </button>
        <button mat-icon-button color="accent" (click)="saveQR()" matTooltip="Save this QR" style="margin: 0 5px;">
          <mat-icon>
            <i class="material-icons md-48">save</i>
          </mat-icon>
        </button>
      </div>
    </div>

    <!-- Textarea to write any text or link -->
    <div class="qrTextAreaDiv">
      <mat-form-field style="width: 80%;">
        <textarea matInput [(ngModel)]="qrText" cdkTextareaAutosize
                  cdkAutosizeMinRows="4" cdkAutosizeMaxRows="4"
                  placeholder="Enter a website link or any text..."></textarea>
      </mat-form-field>
    </div>

    <!-- Create button -->
    <div class="createBtnDiv">
      <button class="createBtn" mat-raised-button color="accent"
              matTooltip="Create new QR code" matTooltipPosition="above"
              (click)="createQrCode()">Create</button>
    </div>

  </div>
</mat-card>

The above snippet uses many elements of the Angular Material library. As planned, it starts with the header component, to which the required parameters are passed. Next is the main body of the create page: a single Material card, mat-card, centered and elevated to 12px via [class.mat-elevation-z12]=true. A Material card is just another kind of container that can be used like any other div tag, although the library also provides properties for laying out well-defined information in a mat-card, such as image placement, title, subtitle, description and actions, as can be seen below.

Card example.

In the above html snippet, we have used mat-card just as any other container.

Another Material library element used here is matTooltip: an easy-to-use tooltip, displayed when the user hovers over or long-presses an element. To show a tooltip, just use:

matTooltip="Any text you want to show"

It can be used with icon buttons or any other UI element to convey extra information. In our application, it describes the close icon button. To change the placement of the tooltip, use matTooltipPosition:

matTooltip="Any text you want to show" matTooltipPosition="above"

Besides matTooltip, mat-spinner is used to show loading progress. When the user clicks the create button, a network call is made; that is when the progress spinner is shown. When the network call returns with a result, we hide the spinner. It can be used as simply as this:

<mat-spinner *ngIf="showProgressSpinner"></mat-spinner>

showProgressSpinner is a boolean variable used to show or hide the progress spinner. The library also provides other parameters, such as [color]='accent' to change the color and [mode]='indeterminate' to change the spinner type. An indeterminate progress spinner does not show the progress of the task, while a determinate one can take different values to reflect task progress. Here an indeterminate spinner is used, as we do not know how long the network call will take.

The Material library provides a variant of textarea conforming to the Material guidelines, but it can only be used as a descendant of mat-form-field. Using the Material textarea is just as simple as the default HTML one:

<mat-form-field> <textarea matInput placeholder="Hint text"></textarea> </mat-form-field>

matInput is a directive that allows a native input or textarea tag to work with mat-form-field. The placeholder property adds hint text for the user.

TIP: Use the cdkTextareaAutosize property to make a textarea auto-resizable, with cdkAutosizeMinRows and cdkAutosizeMaxRows to set the minimum and maximum number of rows; all three together make the textarea auto-resize until it reaches the set row limit.

To use all these Material library elements, we need to add their modules to the app.module.ts file.

Create QR module imports.
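The screenshot itself is not reproduced here, but judging from the elements used in the template, the additions would look roughly like this (a sketch; import paths assume a recent Angular Material version, and FormsModule comes from @angular/forms for [(ngModel)]):

import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { MatCardModule } from '@angular/material/card';
import { MatTooltipModule } from '@angular/material/tooltip';
import { MatProgressSpinnerModule } from '@angular/material/progress-spinner';
import { MatFormFieldModule } from '@angular/material/form-field';
import { MatInputModule } from '@angular/material/input';

@NgModule({
  imports: [
    // ...modules added earlier (MatIconModule, MatButtonModule, MatBadgeModule, ...)
    FormsModule,
    MatCardModule,
    MatTooltipModule,
    MatProgressSpinnerModule,
    MatFormFieldModule,
    MatInputModule,
  ],
})
export class AppModule { }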

A placeholder image is used for the QR preview; download it and save it to the assets folder (the component code below expects it at assets/download.png).

The above HTML also requires CSS styling; open the create-qr.component.css file and add:

.qrCard {
  display: flex;
  flex-direction: column;
  align-items: center;
  position: absolute;
  top: 50%;
  left: 50%;
  transform: translate(-50%, -50%);
  width: 20%;
  height: 65%;
  padding: 50px 20px;
}

.qrContent {
  display: flex;
  flex-direction: column;
  align-items: center;
  width: 100%;
}

.qrTextAreaDiv {
  width: 100%;
  display: flex;
  flex-direction: row;
  justify-content: center;
  padding: 0px 0px;
  position: absolute;
  bottom: 10%;
}

.createBtn {
  left: 50%;
  transform: translate(-50%, 0px);
  width: 80%;
}

.createBtnDiv {
  position: absolute;
  bottom: 5%;
  width: 100%;
}

.closeBtn {
  display: flex;
  flex-direction: row-reverse;
  align-items: flex-end;
  width: 100%;
  margin-bottom: 20px;
}

.closeBtnFont {
  font-size: 32px;
  color: rgba(0,0,0,0.75);
}

.qrImgDiv {
  top: 20%;
  position: absolute;
  display: flex;
  flex-direction: column;
  align-items: center;
  justify-content: center;
  width: 100%;
}

.actionButtons {
  display: flex;
  flex-direction: row;
  padding-top: 20px;
}

Let’s wire up the UI with logic, open create-qr.component.ts file and add the below code, leaving those lines which are already present:

export class CreateQrComponent implements OnInit {

  qrCodeImage = '../../../assets/download.png';
  showProgressSpinner = false;
  qrText: string;
  currentQR;
  showBackButton = true;
  title = 'Generate New QR Code';
  showHistoryNav = true;

  constructor(private snackBar: MatSnackBar,
              private restutil: RestutilService,
              private storageService: StorageutilService) { }

  ngOnInit() { }

  createQrCode() {
    // Check if any value is given for the qr code text
    if (!!this.qrText) {
      // Make the http call to load the qr code
      this.loadQRCodeImage(this.qrText);
    } else {
      // Show snackbar
      this.showSnackbar('Enter some text first');
    }
  }

  public loadQRCodeImage(text: string) {
    // Show the progress spinner while the request is being made
    this.showProgressSpinner = true;
    // Trigger the API call
    this.restutil.getQRCode(text).subscribe(image => {
      // Received the result as an image blob, which requires parsing
      this.createImageBlob(image);
    }, error => {
      console.log('Cannot fetch QR code from the url', error);
      // Hide the spinner - show a proper error message
      this.showProgressSpinner = false;
    });
  }

  private createImageBlob(image: Blob) {
    // Create a file reader to read the image blob
    const reader = new FileReader();
    // The "load" listener is invoked once the blob reading is complete
    reader.addEventListener('load', () => {
      this.qrCodeImage = reader.result.toString();
      // Hide the progress spinner
      this.showProgressSpinner = false;
      this.currentQR = reader.result.toString();
    }, false);
    // Read the image blob if it is not null or undefined
    if (image) {
      reader.readAsDataURL(image);
    }
  }

  saveQR() {
    if (!!this.qrText) {
      this.storageService.saveHistory(this.qrText, this.currentQR);
      this.showSnackbar('QR saved');
    } else {
      // Show snackbar
      this.showSnackbar('Enter some text first');
    }
  }

  showSnackbar(msg: string) {
    // Show snackbar
    this.snackBar.open(msg, '', {
      duration: 2000,
    });
  }
}

To give users contextual information, we also use MatSnackBar from the Material library. A snackbar pops up from the bottom of the screen and stays for a few seconds before disappearing. It is not an element but a service that is invoked from TypeScript code. The showSnackbar method above shows how to open a snackbar; before it can be used, we need to add the MatSnackBarModule entry to the app.module.ts file, just as we did for the other Material library elements.

TIP: In recent Angular Material versions, there is no straightforward way to change the snackbar styling; instead, two additions are needed. First, use the CSS below to alter the background and foreground colors:

::ng-deep snack-bar-container.snackbarColor {
  background-color: rgba(63, 81, 181, 1);
}

::ng-deep .snackbarColor .mat-simple-snackbar {
  color: white;
}

Second, use the panelClass property to apply the above CSS class:

this.snackBar.open(msg, '', {
  duration: 2000,
  panelClass: ['snackbarColor']
});

Together, these two additions apply custom styling to the Material snackbar component.

This nearly completes the create QR page, but one piece is still missing; checking the create-qr.component.ts file will show an error about it. The missing piece of the puzzle is the RestutilService, which is responsible for fetching the QR code image from the third-party API.

In the terminal, change the current directory to services, type ng g s restutil and press Enter. This will create the RestutilService files; open restutil.service.ts and add this snippet:

private edgeSize = '300';
private BASE_URL = 'https://api.qrserver.com/v1/create-qr-code/?data={data}&size={edge}x{edge}';

constructor(private httpClient: HttpClient) { }

public getQRCode(text: string): Observable<Blob> {
  // Create the url with the provided data and other options
  let url = this.BASE_URL;
  url = url.replace('{data}', text).replace(/{edge}/g, this.edgeSize);
  // Make the http api call to the url
  return this.httpClient.get(url, { responseType: 'blob' });
}

The above service fetches the QR image from the third-party API; since the response is not JSON but an image, we specify the responseType as 'blob'.
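One caveat worth flagging: the text is interpolated into the URL as-is, so characters such as & or # would break the query string. A safer variant (a sketch, not part of the service above):

// Encode the user's text so it survives as a single query-string parameter
url = url
  .replace('{data}', encodeURIComponent(text))
  .replace(/{edge}/g, this.edgeSize);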

Angular provides the HttpClient class for communicating with any HTTP server. It offers many features, such as intercepting requests before they are fired and processing responses via callbacks. To use it, add an entry for HttpClientModule to the app.module.ts file.

Finally, import this service in the create-qr.component.ts file to complete the create page.

There is one problem with the above logic: if the user generates a QR code for the same text again and again, every attempt results in a network call. One way to redress this is to cache requests, serving the response from the cache whenever the request text is the same.

Caching Request

Angular provides a simplified way of making HTTP calls, HttpClient, along with HttpInterceptors for inspecting and transforming HTTP requests and responses to and from servers. Interceptors can be used for authentication, caching and similar concerns, and multiple interceptors can be added and chained for further processing. In this case, we intercept requests and serve responses from the cache when the QR text is the same.

Create an interceptor folder, then create a file cache-interceptor.ts:

Cache interceptor.

Add the below code snippet to the file:

import { Injectable } from '@angular/core';
import { HttpInterceptor, HttpResponse, HttpRequest, HttpHandler, HttpEvent } from '@angular/common/http';
import { tap } from 'rxjs/operators';
import { of, Observable } from 'rxjs';

@Injectable({
  providedIn: 'root'
})
export class CacheInterceptor implements HttpInterceptor {

  private cacheMap = new Map<string, HttpResponse<any>>();

  constructor() { }

  intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
    // Serve from the cache if this exact URL (including the QR text) was requested before
    const cachedResponse = this.cacheMap.get(req.urlWithParams);
    if (cachedResponse) {
      return of(cachedResponse);
    }
    // Otherwise forward the request and cache the response on the way back
    return next.handle(req).pipe(tap(event => {
      if (event instanceof HttpResponse) {
        this.cacheMap.set(req.urlWithParams, event);
      }
    }));
  }
}

In the above code snippet, we keep a map with the request URL as the key and the response as the value. We check whether the current URL is present in the map; if it is, we return the cached response (the rest is handled automatically). If the URL is not in the map, we forward the request and store the response in the map once it arrives.
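One possible hardening, not part of the code above: caching every request blindly would also cache POSTs and other side-effecting calls, so interceptors like this are often restricted to GET requests. A sketch of the guarded version of the intercept method:

intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
  // Only cache idempotent GET requests; pass everything else straight through
  if (req.method !== 'GET') {
    return next.handle(req);
  }
  const cachedResponse = this.cacheMap.get(req.urlWithParams);
  if (cachedResponse) {
    return of(cachedResponse);
  }
  return next.handle(req).pipe(tap(event => {
    if (event instanceof HttpResponse) {
      this.cacheMap.set(req.urlWithParams, event);
    }
  }));
}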

We are not done yet: an entry in app.module.ts is required for the interceptor to take effect. Add the snippet below:

import { HttpClientModule, HTTP_INTERCEPTORS } from '@angular/common/http';
import { CacheInterceptor } from './interceptor/cache-interceptor';

providers: [
  { provide: HTTP_INTERCEPTORS, useClass: CacheInterceptor, multi: true }
],

This adds the caching feature to our application. Let's move on to the third page, the history page.

Adding History Page

All the saved QR codes will be visible here. To create another component, open the terminal, type ng g c history and press Enter.

Open history.component.css and add the below code:

.main-content {
  padding: 5% 10%;
}

.truncate {
  width: 90%;
  white-space: nowrap;
  overflow: hidden;
  text-overflow: ellipsis;
}

.center-img {
  position: absolute;
  top: 50%;
  left: 50%;
  transform: translate(-50%, -50%);
  display: flex;
  flex-direction: column;
  align-items: center;
}

Open history.component.html and replace the content with this:

<app-header [showBackButton]="showBackButton" [currentTitle]="title"
            [showHistoryNav]="showHistoryNav"></app-header>

<div class="main-content">
  <mat-grid-list cols="4" rowHeight="500px" *ngIf="historyList.length > 0">
    <mat-grid-tile *ngFor="let qr of historyList">
      <mat-card>
        <img mat-card-image style="margin-top: 5px;" src="{{qr.imageBase64}}">
        <mat-card-content>
          <div class="truncate">
            {{qr.text}}
          </div>
        </mat-card-content>
        <mat-card-actions>
          <button mat-button (click)="share(qr.text)">SHARE</button>
          <button mat-button color="accent" (click)="delete(qr.text)">DELETE</button>
        </mat-card-actions>
      </mat-card>
    </mat-grid-tile>
  </mat-grid-list>

  <div class="center-img" *ngIf="historyList.length == 0">
    <img src="../../assets/no-see.png" width="256" height="256">
    <span style="margin-top: 20px;">Nothing to see here</span>
  </div>
</div>

As usual, we have the header component at the top. The rest of the body is a grid list that shows all the saved QR codes as individual mat-card elements. For the grid view, we are using mat-grid-list from the Angular Material library; as per the drill, we first have to add its module (MatGridListModule) to the app.module.ts file.

A mat-grid-list acts as a container with multiple tile children called mat-grid-tile. In the above HTML snippet, each tile is built from a mat-card, using a few of its properties for the generic placement of other UI elements. We can provide the number of columns and the rowHeight; the tile width is calculated automatically. In the snippet above, both the number of columns and the rowHeight value are provided.

A placeholder image is shown when the history is empty; download it and add it to the assets folder (the template expects assets/no-see.png).

To implement the logic that populates all this information, open the history.component.ts file and add the snippet below inside the HistoryComponent class:

showBackButton = true;
title = 'History';
showHistoryNav = false;
historyList;

constructor(private storageService: StorageutilService,
            private snackbar: MatSnackBar) { }

ngOnInit() {
  this.populateHistory();
}

private populateHistory() {
  this.historyList = this.storageService.readAllHistory();
}

delete(text: string) {
  this.storageService.deleteHistory(text);
  this.populateHistory();
}

share(text: string) {
  this.snackbar.open(text, '', { duration: 2000, });
}

The above logic fetches all the saved QR codes and populates the page with them. Users can delete a saved QR code, which removes the entry from local storage. This finishes off our history component, or does it? We still need to add the route mapping for this component: open app-routing.module.ts and add a mapping for the history page as well:

{ path: 'history', component: HistoryComponent },

The whole routes array should look like this by now:

const routes: Routes = [
  { path: '', component: HomeComponent },
  { path: 'create', component: CreateQrComponent },
  { path: 'history', component: HistoryComponent },
];

This is a good time to run the application and check the complete flow: open the terminal, type ng serve and press Enter, then go to localhost:4200 to verify that everything works.

Add To Github

Before proceeding to the deployment step, it would be good to add the project to a Github repository.

  1. Open Github.
  2. Create a new repository. (Screenshot: Github new repository.)
  3. In VS Code, use the terminal and follow the first set of commands mentioned in the quick start guide to push all the project files. (Screenshot: Github add project.)

Refresh the page to check that all the files are visible. From this point on, any git actions, such as commits and pushes, will be reflected in this newly created repository.

Netlify & Deployment

Our application runs on our local machine, but to let others access it, we should deploy it to a cloud platform and register it under a domain name. This is where Netlify comes into play: it provides continuous deployment services, Github integration and many more features. Right now we want to enable global access to our application, so let's get started.

  1. Sign-up on Netlify.
  2. From the dashboard, click on the New site from Git button. (Screenshot: Netlify new site.)
  3. Click on Github in the next screen. (Screenshot: Netlify select git provider.)
  4. Authorize Netlify to be able to access your Github repositories. (Screenshot: Netlify github authorization.)
  5. Search for and select the newly created qr repository. (Screenshot: Netlify github repository selection.)
  6. In the next step, Netlify allows us to choose the Github repository branch for deployments. Normally one uses the master branch, but one can also have a separate release branch which contains only release-related and stable features. (Screenshot: Netlify build and deploy.)

Since this is an Angular web application, add ng build --prod as the build command. The publish directory will be dist/qr, as specified in the angular.json file.

Angular build path.

Now click on the Deploy site button, which triggers a project build with the command ng build --prod and outputs the files to dist/qr.

Since we provided the path information to Netlify, it automatically picks up the correct files for serving the web application. By default, Netlify assigns a random subdomain to our application.

Netlify site deployed.

You can click the link shown on that page to access our application from anywhere. Finally, our application is deployed.

Custom Domain

The image above shows the URL for our application; its subdomain is randomly generated. Let's change it. Click on the Domain settings button, then in the custom domains section click on the 3-dot menu and select Edit site name.

Custom domain.

This opens a popup where a new site name can be entered; the name must be unique across the Netlify domain. Enter any available site name and click save.

Site name.

Now the link to our application will be updated with the new site name.

Split Testing

Another cool feature offered by Netlify is split testing. It enables traffic splitting, so that different sets of users interact with different deployments of the application. We can add new features on a separate branch, split part of the traffic to that branch's deployment, analyze the traffic, and then merge the feature branch into the main deployment branch. Let's configure it.

The prerequisite for enabling split testing is a Github repository with at least two branches. Head over to the app repository created earlier on Github and create a new branch named a.

Create new branch.

The repository will now have a master branch and an a branch. Netlify needs to be configured for branch deployments: open the Netlify dashboard and click on Settings; on the left side, click on Build & Deploy, then Continuous Deployment; then, on the right side in the Deploy contexts section, click on Edit settings.

Branch deployments.

In the Branch deploys sub-section, select the option Let me add individual branches, enter the branch names and save.

Branch deploys are another useful feature provided by Netlify: we can select which Github repository branches to deploy, and we can also enable a preview for every pull request to the master branch before merging. This is a neat feature that lets developers test their changes live before adding them to the main deployment branch.

Now click on the Split Testing tab at the top of the page; the split testing configuration is presented there.

Split testing.

We can select a branch other than the production branch, in this case a. We can also play around with the traffic-splitting settings: based on the traffic percentage each branch has been allotted, Netlify will route some users to the application deployed from the a branch and others to the one deployed from master. After configuring, click on the Start test button to enable traffic splitting.

TIP: Netlify may not recognize that the connected Github repository has more than one branch and may give this error:

Split testing error.

To resolve this, just reconnect to the repository from the Build & Deploy options.

Netlify provides a lot of other features as well; we went through just some of the useful ones to demonstrate how easily its different aspects can be configured.

This brings us to the end of our journey: we have successfully created an Angular Material based web application and deployed it on Netlify.

Conclusion

Angular is a great and popular framework for web application development, and the official Angular Material library makes it much easier to create applications that adhere to the Material Design spec for a very natural interaction with users. Moreover, an application developed with a great framework deserves a great deployment platform, and Netlify is just that: with constant evolution, great support and a plethora of features, it is an excellent platform for bringing web applications or static sites to the masses. Hopefully, this article helps you take a new Angular project from first thought all the way to deployment.

Next Steps

(dm, yk, il)

The Split Personality Of Brutalist Web Development

Wed, 01/08/2020 - 03:00
By Frederick O'Brien

Of all the design trends to hit the internet in recent years, brutalism is surely the most eye-catching, and the most poorly defined. A variety of major brands have embraced ‘brutalist’ aesthetics online. There are even directories for those interested in seeing a selection of them in one place. The style has well and truly entered the mainstream.

Bloomberg.com’s stark, no-nonsense design went live in 2016 and was refined in 2018. It is often touted as a leading example of brutalism’s growth online.

Indeed, brutalist web design has grown so quickly that there does not seem to be a clear consensus on what the style actually is. To some it means practicality, to others audacity. Much like the architecture it takes its name from, brutalist web development has become two competing philosophies in one. Neither is necessarily ‘right’, but knowing the difference is important. It may even be sensible to start calling them different things.

A Brief(ish) History Of Brutalism

Before we get ahead of ourselves, let’s recap the term ‘brutalist’ — where it came from and what it means. Brutalism is a style of architecture that took off after World War II, reaching its peak in the ‘50s and ‘60s. Championing simple, geometric designs and bared building materials, the trend was in large part a reaction against the ornate, over-designed structures of preceding decades.

The name comes from béton brut, which is French for raw or rough concrete. Concrete is a common material for brutalist structures, lending itself as it does to the style’s no-frills approach. Other materials can be and are used, but concrete is especially common. Whatever structures are made of, embellishments are deemed unnecessary. The form and the materials are enough.

The United Kingdom, with its fondness for grey, drab things, particularly took to the style. Notable examples of brutalist architecture here include the Royal National Theatre, the Barbican Estate, and Balfron Tower. It has proven especially popular for public buildings — libraries, theatres, universities, housing estates, and so on.

The National Theatre in London. Designed by Denys Lasdun and opened in 1976, it is a textbook example of brutalist architecture. It is both one of the most hated and most loved buildings in Britain. Photograph by Henry Hemming.

Although there is not a catch-all definition that everyone agrees on, deference is often paid to English architectural critic Reyner Banham, whose 1955 essay ‘The New Brutalism’ attempted to outline the core ideas of the style. In anticipation of those of you who would not read the whole thing, Banham boiled the philosophy down to the following:

In the last resort what characterizes the New Brutalism in architecture as in painting is precisely its brutality, its je-m’en-foutisme, its bloody-mindedness.

Loosely translated, je-m’en-foutisme means a ‘don’t give a damn’ attitude. To be sure, brutalist buildings are unconcerned with conventional standards of beauty. They are also rather divisive. Where some gush over their firm, utilitarian character, others decry ugliness, impersonality, and, well, brutality.

Love it or hate it, brutalist architecture celebrates rawness. Indeed, Banham opened his essay with a quote by Swiss-French architect Le Corbusier: “Architecture is the establishing of moving relationships with raw materials.” Le Corbusier’s Unité d'habitation housing designs inspired a generation of brutalist architects.

Balconies at La Maison du Fada in Marseille, France. Designed by Le Corbusier and completed in 1952, it was a pioneering example of brutalist residential design. Photograph by Jean-Pierre Dalbéra.

So in short, brutalist architecture not only reduces construction to its fundamental materials, it finds beauty in that simplicity. Critics say it’s a bit in your face, a bit impersonal, a bit totalitarian even. The dual meaning of ‘raw’ and ‘brutal’ has clouded the definition, but as a rule, the goal is rawness and the result is perceived by some as brutal.

The style has waned in popularity since its postwar heyday, but it endures as one of the most distinctive around. A good few have attained listed status, and I for one am glad they have. A city of brutalist buildings would be a bit much, but a city without any is poorer for it.

Further Reading

Practicality Or Audacity?

So what’s all this got to do with web development? The philosophy, mainly, and the way it has splintered. Brutalism has found new life online, especially in the last three or four years. A slew of sites have taken on the brutalist moniker, and with the trend’s rise have come accompanying aw(ww)ards, articles, and directories.

Browsing through these things you may well get the impression that not everyone is talking about the same thing. That’s because they aren’t. In the world of web development, ‘brutalist’ has grown to encompass a variety of styles. It’s a disservice to designers to keep lumping together such different approaches. I have separated web brutalism into two types below, but as we shall see even that may not be enough.

Type One: l'Internet Brut

The first type of brutalist web design has much more in common with its architectural forebears. Think of it as l'Internet brut, where the raw materials are HTML and, to a lesser extent, CSS. The backgrounds are light, the text is black, and the hyperlinks are blue. There’s some wriggle room, but that’s the gist of it. No faffing about. Short of displaying actual code you couldn’t get much rougher.

In the beginning was the Style, and the Style was with Brutalism, and the Style was Brutalism.

There are countless examples of this style, big and small. The first ever website is an inadvertent disciple, while more recently Brutalist Web Design by programmer David Bryant Copeland puts forward a lovely little manifesto for the style.

Going up in the world, other websites with strong brutalist streaks include:

Although the raw materials of these sites are very similar, they don’t all look the same. They are shaped around their content and purpose. Like their architectural cousins, there’s actually a huge variety in form.

A proper l'Internet brut website. It even has grey! (Large preview)

As you can see with the Craigslist homepage above, there is very little in the way of excess, and possibly even less in the way of style. It’s barely changed in 20(!) years, because it hasn’t needed to. Take a look at the code and even a novice like me can follow how the pages are put together. You don’t have to guess how it’s built because it’s all right there in front of you.

With sites like this you’ll often notice a ‘publicly minded’ leaning: marketplaces, forums, encyclopedias. It’s oddly appropriate to see a site like Wikipedia take on the digital form of, say, Robin Hood Gardens. Bloomberg has plenty of company in the news space as well. Papers like The New York Times and The Washington Post have embraced similarly blunt, functional designs in recent years. News design has always had a strong brutalist streak.

The Washington Post’s revamped website rolled out in 2015. Newspaper designer Mario Garcia praised it at the time for ‘avoiding clutter and crowdedness.’

It bears mentioning that several of the sites used as examples didn’t set out to be brutalist. Much like Villa Göth, which is widely considered the source of the term brutalism, they set out to be practical and simple. They were adopted, so to speak. Their success is what inspired (and continues to inspire) architects and developers alike. They are so unconcerned about appearances that they became shining examples of brutalist design without even realizing it!

Sites in this vein don’t always scream beauty, but there is an elegance to their functionality. They are unconcerned and unpretentious, shaped for their purpose using the raw elements of the web. (Pun intended.)

Type Two: l’Internet Fou

This is the split. Right here. Those of you with even a passing interest in web design trends will know what we’ve looked at so far fails to account for a huge number of ‘brutalist’ sites. As Vitaly Friedman noted in Smashing Book 6:

Brutalism in architecture is characterized by unconcerned aesthetics, not intentionally broken aesthetics. When applied to web design, this style often goes along with deliberately broken design conventions and guiding principles.

The rise of ‘brutalist’ design over the last few years has had a lot more to do with brutalness than with rawness. This is the madder world, at times bordering on anarchy. Here design conventions are subverted and usability is an afterthought — and that’s not when it’s being actively sabotaged. These are the sites that prompt articles titled ‘Style Over Substance’, and for The Washington Post to sum up the style as ‘intentionally ugly.’

The homepage of artists’ magazine Toiletpaper.

JI SOO EOM, another site found in the Brutalist Websites directory.

In the suitably migraine-inducing article ‘Brutalism: BrutAl wEbsIteS for mOdern dAy webMAsTeRS’, Awwwards describes this second strain as follows:

Brutalism in web design laughs in the face of rationalism and functionality, in the world of design it can be defined as freestyle, ugly, irreverent, raw, and superficially decorative, etc.

I hope it isn’t controversial to say that this is an altogether different approach to type one. At a stretch, you might argue that this approach is more the domain of artists and graphic designers, and that art is, therefore, the rawest form their websites can take, but that would be a stretch. There’s no question brutalist architecture can drift into ‘statement’ territory, but that’s not its natural realm.

The Brutalist Websites directory suggests, ‘Brutalism [online] can be seen as a reaction by a younger generation to the lightness, optimism, and frivolity of today's web design.’ There are shades of the founding brutalist ethos in this, but it is more irreverent and subversive. They can be very beautiful in their own way, but also cut from a completely different cloth from the Craigslists of this world.

This Town Ain’t Big Enough For The Two Of Us

So there you have it. When brutalist web design isn’t going all in on rationalism and functionality, it’s laughing in the face of rationalism and functionality. All clear?

The term has grown to encompass approaches that are in many senses at odds with each other. Indeed, Pascal Deville, the founder of the Brutalist Websites directory, thinks the style has splintered into three micro-stylistics: the purists, the UX minimalists and the anti-ists (or artists). Having vetted hundreds of submissions over the years, he’d know better than just about anyone else. He says:

The purists reference strongly to the architectural characteristics of Web Brutalism, such as the concept of ‘truth to materials’ and the use of the purest markup elements available. The UX minimalists, in contrast, see efficiency and performance as the main driver of Web Brutalism and even believe that the radical limitation of possibilities can boost conversions. The ‘anti-ists’ or artists see web design as an (still) undervalued form of art, don’t show much respect for the status quo, and mostly get bad press.

What is a ‘proper’ brutalist website? To an extent, the answer depends on the context. If a website belongs to an artist then something brash may be more appropriate than something unconcerned. Generally speaking, though, it seems to me that the sensibilities of the ‘anti-ist’ type are actually much closer to something like Dadaism, with all its absurdity and mirth and mess, or the avant-garde leanings of Expressionism.

Small Dada Evening by Kurt Schwitters and Theo van Doesburg.

Typical Vertical Mess as Depiction of the Dada Baargeld by Johannes Theodor Baargeld. Which branch of ‘brutalist’ web design do these look more like to you?

I don’t want this to come across as a game of semantics, where different styles are filed away neatly into little boxes. What I am more concerned with is highlighting different approaches so that each may be given the space needed to flourish. As Deville acknowledges, the creative potential of the web is still being explored. ‘It's a new form of art and I'm very happy to experience first hand,’ he says. ‘It's happening now.’

This has practical consequences as well. Whether you’re a developer talking to a client or a client talking to a developer, it pays to be clear which version of brutalist web design you’re on about. If you’re a real champ you’ll naturally refer them to this article. Otherwise, visual examples like the one below are likely your best bet for getting to the point.

They’re both ‘brutalist’, so be sure what you’re asking for. The design on the left is a project by Constantin Grosnov.

Beyond that, maybe we should start giving different styles different names. I appreciate this would be rather inconvenient to a lot of people. Domains have been bought, awards awarded, and articles written, but going forward the label seems too restrictive. It can no longer contain so many approaches. If nothing else, the split personality of brutalist web development shows how much terrain remains to be explored in web design.

There are countless schools of art — Brutalism, Expressionism, Romanticism, Art Deco, Futurism, Dadaism, Impressionism, absurdism, modernism, minimalism, and on and on and on. They find form through paintings, buildings, literature… why not websites? As the links below show, I’m not alone in asking this. With every new development style, ‘anti-mainstream’ becomes less adequate for describing what designers are doing. They are starting to explore the philosophy of web design in ways that haven’t been done before.

The ‘Dadaist’ strain of brutalist web design has one thing absolutely right: the scope for what a website can be is far, far too narrow. The web is an infinite sandbox, and embracing the breadth of possibilities within it can only be a good thing. That starts with expanding our vocabulary.

(ra, yk, il)

Why You Should Choose HTML5 <article> Over <section>

Tue, 01/07/2020 - 03:30
By Bruce Lawson

A few days ago, I was having a chat with some friends, one of whom asked me the difference between <article> and <section> in HTML. This is one of the eternal mysteries of web development, up there with “why is it white-space: nowrap, not white-space: no-wrap?” and “why is CSS ‘gray’ a darker color than ‘darkgray’?”.

I gave my usual answer: think of <article> not just as a newspaper article, or a blog post, but as an article of clothing — a discrete entity that can be reused in another context. So your trousers are an article, and you can wear them with a different outfit; your shirt is an article, and can be worn with different trousers; your knee-length patent leather stiletto boots are an article (you wouldn’t wear just one of them, would you?).

The spec says:

“The article element represents a complete, or self-contained, composition in a document, page, application, or site and that is, in principle, independently distributable or reusable, e.g. in syndication. This could be a forum post, a magazine or newspaper article, a blog entry, a user-submitted comment, an interactive widget or gadget, or any other independent item of content.”

So a homepage with a list of blog posts would be a <main> element wrapping a series of <article> elements, one for each blog post. You would use the same structure for a list of videos (think YouTube) with each video being wrapped in an <article>, a list of products (think Amazon) and so on. Any of those <article>s is conceptually syndicatable — each could stand alone on its own dedicated page, in an advert on another page, as an entry in an RSS feed, and so on.

Apple’s WatchOS contains Reader, which uses the <article> element to determine the primary content of your page. Apple says:

“We’ve brought Reader to watchOS 5 where it automatically activates when following links to text-heavy web pages. It’s important to ensure that Reader draws out the key parts of your web page by using semantic markup to reinforce the meaning and purpose of elements in the document. Let’s walk through an example. First, we indicate which parts of the page are the most important by wrapping it in an article tag.”

Combining <article> with HTML5 microdata helps Reader construct the optimal display for small watch screens:

“Specifically, enclosing these header elements inside the article ensure that they all appear in Reader. Reader also styles each header element differently depending on the value of its itemprop attribute. Using itemprop, we’re able to ensure that the author, publication date, title, and subheading are prominently featured.”

So What About <section>?

My usual advice continues: don’t bother with <section> or worry about how it differs from <article>. It was invented as a generic wrapper for headings so that the browser could determine the HTML5 document outline.

The what? The document outline algorithm is a way to use only one heading tag — <h1> — and have it magically “become” the correct level of heading (e.g. turn into an <h2>, <h3>, etc.), depending on how deeply it’s nested in HTML5 sectioning elements: <article>, <section>, and so on.

So, for example, here’s what you’ve typed into your CMS:

<h1>My Fabulous article</h1>
<p>Lorem Ipsum Trondant Fnord</p>

This works brilliantly when shown as a standalone article. But what about on your homepage, which is a list of your latest articles?

<h1>My latest posts</h1>

<article>
  <h1>My fabulous article</h1>
  <p>Lorem Ipsum Trondant Fnord</p>
</article>

<article>
  <h1>Another magnum opus</h1>
  <p>Magnum solero paddlepop</p>
</article>

In this example, according to the specification, the <h1>s inside the <article> elements “become” logical <h2>s, because <article>, like <section>, is a sectioning element.

Note: This isn’t a new idea. Way back in 1991, Sir Uncle Timbo wrote:

“I would in fact prefer, instead of <h1>, <h2>, etc. for headings [those come from the AAP DTD] to have a nestable <SECTION>...</SECTION> element, and a generic <H>...</H> which at any level within the sections would produce the required level of Heading.”

Unfortunately, however, no browser implements HTML5 outlining, so there is no point in using <section>. At one point, the JAWS screen reader attempted to implement the document outlining algorithm (in IE, but not on Firefox) but implemented it buggily. It seems that browser developers simply aren’t interested (more sordid details in the Further Reading section for true anoraks).

“But,” interjected another friend in the conversation, “now browsers display different sizes of font depending on how deeply the <h1> is nested in <section>s”, and proceeded to prove it. Mind blown!

Here’s a similar demo. The left column shows four <h1>s, nested in sections; the right column shows an <h1>, <h2>, <h3>, <h4> with no nesting. The Firefox screenshot shows that the nested <h1>s default to the same font as the traditional <h1>…<h4> tags:

A comparison of <h1>s nested in <section> elements and <h1>, <h2>, <h3>, <h4>

The results are the same in Chrome, Chromium derivatives such as Edge beta for Mac, and Safari on Mac.

So does this mean that we should all happily start using <h1> as our only heading element, nesting it in <section>s?

No. Because this is only a change in the visual styling of the h1s. If we crack open the Firefox Accessibility inspector in devtools, we can see that the text “level 2” is styled to look like an H2, but is still set at “level 1” — the Accessibility Tree hasn’t been changed to be level 2.

Firefox’s Accessibility Inspector shows that a nested <h1> appears visually the same as an <h2> but its aria-level is incorrectly set to “1”, not “2”

Compare this with the Real H2 in the right column:

Firefox’s Accessibility Inspector shows that a real <h2> has a computed aria-level of “2”, which is correct

This shows the accessibility tree has been correctly informed that this is a level 2 heading. In fact, Mozilla did try communicating the computed level to the accessibility tree:

“We experimented a bit with that... but had to revert it because people in our a11y team complained about too many regressions (accidentally lowering <h1> levels and such).”

For assistive technology users, a proper hierarchy of headings is vital. As the eighth WebAIM screenreader user survey shows,

“The usefulness of proper heading structures is very high, with 86.1% of respondents finding heading levels very or somewhat useful.”

Therefore, you should continue to use <h1> through <h6>, and ignore <section> for outlining purposes.

Never Say Never

“But..” you might now be spluttering indignantly, “there’s a <section> element right on this very page!”. And you would be right, dear reader. The “quick summary” is wrapped in a <section>, for accessibility reasons. When screen reader user Léonie Watson gave her webinar “How A Screen Reader User Accesses The Web”, she pointed out an area where Smashing Magazine’s markup could be tweaked to make her experience better.

As you can see from the screenshot, Smashing articles are preceded by a quick summary, followed by a horizontal line separating the summary from the article proper.

The Smashing “Quick Summary” is separated from its full article by a horizontal line.

But the separator is purely decorative, so Léonie couldn’t tell where the summary ends and the article begins. She suggested a fix: we wrapped the summary in a <section> element:

<section aria-label="quick summary">
  Summary text
</section>

In most screen readers, a <section> element isn’t announced unless it has an accessible name; in this case, that name is the text of the aria-label. Now her screen reader announced “Quick summary region”, and after the summary, “Quick summary region end”. This simple markup also makes it possible for a screen reader user to jump over the summary if they want to.

We could have used a simple <div> but then, as Marco Zehe writes,

“As a rule of thumb, if you label something via aria-label or aria-labelledby, make sure it has a proper widget or landmark role.”

So rather than use <div role="region" aria-label="quick summary">, we chose <section> as that has a built-in role of region and Bruce’s infallible law of ARIA™ applies: built-in beats bolt-on. Bigly.

Conclusion

Hopefully, you’ve come away with these take-homes:

  • Don’t use loads of <h1>s. Make <h1> the main heading of your page, then use <h2>, <h3>, <h4>, etc. in a proper hierarchy without skipping levels.
  • <section> can be used with aria-label to signal to a screen reader user where a particular sub-part of an article begins and ends. Otherwise, forget about it, or use another element, such as <aside aria-label="quick summary"> or <div role="region" aria-label="quick summary">.
  • <main>, <header>, <footer> and <nav> are very useful for screen reader users, and entirely transparent to those who don’t use assistive technology. So use them.
  • <article> isn’t just for blog posts — it’s for any self-contained thing. It also helps WatchOS display your content properly.

I gratefully acknowledge Léonie Watson’s help writing this article. Any errors are totally her fault.

Further Reading
  • “Headings And Sections,” HTML 5.2 W3C Recommendation (14 Dec. ’17)
    Note its warning: “There are currently no known native implementations of the outline algorithm … Therefore the outline algorithm cannot be relied upon to convey document structure to users. Authors should use heading rank (h1-h6) to convey document structure.”
  • “There Is No Document Outline Algorithm,” Adrian Roselli
    All the gory details of how the specification for the sectioning algorithm has changed.
  • “ARIA in HTML,” W3C Editor’s Draft (19 Dec. ’19)
    Rules to live by if you do find yourself adding ARIA roles and attributes to HTML.
  • “The Practical Value Of Semantic HTML,” Bruce Lawson
    My own article, linking to details of how WatchOS uses HTML5 and microdata.
(ra, il)

Front-End Performance Checklist 2020 [PDF, Apple Pages, MS Word]

Mon, 01/06/2020 - 02:30
By Vitaly Friedman

Web performance is a tricky beast, isn’t it? How do we actually know where we stand in terms of performance, and what our performance bottlenecks exactly are? Is it expensive JavaScript, slow web font delivery, heavy images, or sluggish rendering? Is it worth exploring tree-shaking, scope hoisting, code-splitting, and all the fancy loading patterns with intersection observer, server push, clients hints, HTTP/2, service workers and — oh my — edge workers? And, most importantly, where do we even start improving performance and how do we establish a performance culture long-term?

Back in the day, performance was often a mere afterthought. Often deferred till the very end of the project, it would boil down to minification, concatenation, asset optimization and potentially a few fine adjustments on the server’s config file. Looking back now, things seem to have changed quite significantly.

Performance isn’t just a technical concern: it affects everything from accessibility to usability to search engine optimization, and when baking it into the workflow, design decisions have to be informed by their performance implications. Performance has to be measured, monitored and refined continually, and the growing complexity of the web poses new challenges that make it hard to keep track of metrics, because metrics will vary significantly depending on the device, browser, protocol, network type and latency (CDNs, ISPs, caches, proxies, firewalls, load balancers and servers all play a role in performance).

So, if we created an overview of all the things we have to keep in mind when improving performance — from the very start of the process until the final release of the website — what would that list look like? Below you’ll find a (hopefully unbiased and objective) front-end performance checklist for 2020 — an updated overview of the issues you might need to consider to ensure that your response times are fast, user interaction is smooth and your sites don’t drain user’s bandwidth.

Table Of Contents

(You can also just download the checklist PDF (166 KB) or download editable Apple Pages file (275 KB) or the .docx file (151 KB). Happy optimizing, everyone!)

Getting Ready: Planning And Metrics

Micro-optimizations are great for keeping performance on track, but it’s critical to have clearly defined targets in mind — measurable goals that would influence any decisions made throughout the process. There are a couple of different models, and the ones discussed below are quite opinionated — just make sure to set your own priorities early on.

  1. Establish a performance culture.
    In many organizations, front-end developers know exactly what common underlying problems are and what loading patterns should be used to fix them. However, as long as there is no established endorsement of the performance culture, each decision will turn into a battlefield of departments, breaking up the organization into silos. You need business stakeholder buy-in, and to get it, you need to establish a case study, or a proof of concept using the Performance API, on how speed benefits the metrics and Key Performance Indicators (KPIs) they care about.

    Without a strong alignment between dev/design and business/marketing teams, performance isn’t going to sustain long-term. Study common complaints coming into the customer service and sales teams, and study analytics for high bounce rates and conversion drops. Explore how improving performance can help relieve some of these common problems. Adjust the argument depending on the group of stakeholders you are speaking to.

    Run performance experiments and measure outcomes — both on mobile and on desktop (for example, with Google Analytics). It will help you build up a company-tailored case study with real data. Furthermore, using data from case studies and experiments published on WPO Stats will help increase sensitivity for business about why performance matters, and what impact it has on user experience and business metrics. Stating that performance matters alone isn’t enough though — you also need to establish some measurable and trackable goals and observe them over time.

    How to get there? In her talk on Building Performance for the Long Term, Allison McKnight shares a comprehensive case study of how she helped establish a performance culture at Etsy (slides). More recently, Tammy Everts has spoken about the habits of highly effective performance teams in both small and large organizations.

Performance budget builder by Brad Frost and Jonathan Fielding’s Performance Budget Calculator can help you set up your performance budget and visualize it.
  2. Goal: Be at least 20% faster than your fastest competitor.
    According to psychological research, if you want users to feel that your website is faster than your competitor’s website, you need to be at least 20% faster. Study your main competitors, collect metrics on how they perform on mobile and desktop and set thresholds that would help you outpace them. To get accurate results and goals though, make sure to first get a thorough picture of your users' experience by studying your analytics. You can then mimic the 90th percentile’s experience for testing.

    To get a good first impression of how your competitors perform, you can use Chrome UX Report (CrUX, a ready-made RUM data set, video introduction by Ilya Grigorik and detailed guide by Rick Viscomi) or Treo Sites, a RUM monitoring tool that is powered by Chrome UX Report. Alternatively, you can also use Speed Scorecard (also provides a revenue impact estimator), Real User Experience Test Comparison or SiteSpeed CI (based on synthetic testing).

    Treo Sites provides competitive analysis based on real-world data. (Large preview)

    Note: If you use PageSpeed Insights (no, it isn’t deprecated), you can get CrUX performance data for specific pages instead of just the aggregates. This data can be much more useful for setting performance targets for specific page types like a landing page or product listing. And if you are using CI to test the budgets, you need to make sure your tested environment matches CrUX if you used CrUX for setting the target (thanks Patrick Meenan!).

    If you need some help to show the reasoning behind the prioritization of speed, or you'd like to visualize conversion rate decay or increases in bounce rate with slower performance, or perhaps you need to advocate for a RUM solution in your organization, Sergey Chernyshev has built a UX Speed Calculator, an open-source tool that helps you simulate data and visualize it to drive your point across.

    Just when you need to make a case for performance to drive your point across: the UX Speed Calculator visualizes the impact of performance on bounce rates, conversion and total revenue, based on real data. (Large preview)

    Collect data, set up a spreadsheet, shave off 20%, and set up your goals (performance budgets) this way. Now you have something measurable to test against. If you’re keeping the budget in mind and trying to ship down just the minimal payload to get a quick time-to-interactive, then you’re on a reasonable path.

    Need resources to get started?

    Once you have a budget in place, incorporate it into your build process with Webpack Performance Hints and Bundlesize, Lighthouse CI, PWMetrics or Sitespeed CI to enforce budgets on pull requests and provide a score history in PR comments.
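    For example, a minimal sketch of such a budget with Webpack's built-in performance hints might look like this (the 170KB figure mirrors the critical file size budget discussed below; note that Webpack checks emitted, i.e. uncompressed, asset sizes):

    // webpack.config.js: a sketch of failing the build when the budget is blown
    module.exports = {
      // ...the rest of your configuration...
      performance: {
        maxEntrypointSize: 170 * 1024, // bytes, uncompressed
        maxAssetSize: 170 * 1024,      // bytes, uncompressed
        hints: 'error',                // use 'warning' for a softer rollout
      },
    };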

    To expose performance budgets to the entire team, integrate performance budgets in Lighthouse via Lightwallet. And if you need something custom, you can use webpagetest-charts-api, an API of endpoints to build charts from WebPagetest results.

    Performance awareness shouldn’t come from performance budgets alone though. Just like Pinterest, you could create a custom ESLint rule that disallows importing from files and directories that are known to be dependency-heavy and would bloat the bundle. Set up a listing of "safe" packages that can be shared across the entire team.
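    As a rough sketch of the idea (the restricted packages and messages below are illustrative, not Pinterest's actual list), ESLint's built-in no-restricted-imports rule can encode such a policy:

    // .eslintrc.js: disallow known dependency-heavy imports across the team
    module.exports = {
      rules: {
        'no-restricted-imports': ['error', {
          paths: [
            { name: 'moment', message: 'Heavy; prefer date-fns or dayjs.' },
            { name: 'lodash', message: 'Import per-method modules, e.g. lodash/merge.' },
          ],
        }],
      },
    };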

    Also, think about critical customer tasks that are most beneficial to your business. Study, discuss and define acceptable time thresholds for critical actions and establish "UX ready" user timing marks that the entire organization has approved. In many cases, user journeys will touch on the work of many different departments, so alignment in terms of acceptable timings will help support or prevent performance discussions down the road. Make sure that additional costs of added resources and features are visible and understood.
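    As a minimal sketch, such "UX ready" marks can be set with the User Timing API (the mark name and the /rum endpoint below are placeholders):

    // Fire a mark once the critical content of a key customer task has rendered.
    performance.mark('ux-ready:product-list');

    // Later, a RUM script picks the mark up and beacons it home.
    const [mark] = performance.getEntriesByName('ux-ready:product-list');
    if (mark) {
      navigator.sendBeacon('/rum', JSON.stringify({ name: mark.name, time: mark.startTime }));
    }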

    Align performance efforts with other tech initiatives, ranging from new features of the product being built to refactoring to reaching new global audiences. So every time a conversation about further development happens, performance is a part of that conversation as well. It’s much easier to reach performance goals when the code base is fresh or is just being refactored. Once you've established a strong performance culture in your organization, aim for being 20% faster than your former self to keep priorities intact as time passes by. (thanks, Guy Podjarny!)

    Also, as Patrick Meenan suggested, it’s worth planning out a loading sequence and trade-offs during the design process. If you prioritize early on which parts are more critical, and define the order in which they should appear, you will also know what can be delayed. Ideally, that order will also reflect the sequence of your CSS and JavaScript imports, so handling them during the build process will be easier. Also, consider what the visual experience should be in "in-between" states, while the page is being loaded (e.g. when web fonts aren’t loaded yet).

    Planning, planning, planning. It might be tempting to get into quick "low-hanging fruit" optimizations early on — and it might be a good strategy for quick wins — but it will be very hard to keep performance a priority without planning and setting realistic, company-tailored performance goals.

The difference between First Paint, First Contentful Paint, First Meaningful Paint, Visual Complete and Time To Interactive. Large view. Credit: @denar90

New metrics landing in Lighthouse v6 in early 2020: First Meaningful Paint (FMP) is now deprecated, and Largest Contentful Paint (LCP) and Total Blocking Time (TBT) are coming to Lighthouse soon. (Large preview)
  3. Choose the right metrics.
    Not all metrics are equally important. Study what metrics matter most to your application: usually, it will be defined by how fast you can start to render the most important pixels of your interface and how quickly you can provide input responsiveness for these rendered pixels. This knowledge will give you the best optimization target for ongoing efforts. In the end, it’s not the load events or server response times that define the experience, but the perception of how snappy the interface feels.

    What does it mean? Rather than focusing on full page loading time (via onLoad and DOMContentLoaded timings, for example), prioritize page loading as perceived by your customers. That means focusing on a slightly different set of metrics. In fact, choosing the right metric is a process without obvious winners.

    Based on Tim Kadlec’s research and Marcos Iglesias’ notes in his talk, traditional metrics could be grouped into a few sets. Usually, we’ll need all of them to get a complete picture of performance, and in your particular case some of them might be more important than others.

    • Quantity-based metrics measure the number of requests, weight and a performance score. Good for raising alarms and monitoring changes over time, not so good for understanding user experience.
    • Milestone metrics use states in the lifetime of the loading process, e.g. Time To First Byte and Time To Interactive. Good for describing the user experience and monitoring, not so good for knowing what happens between the milestones.
    • Rendering metrics provide an estimate of how fast content renders (e.g. Start Render time, Speed Index). Good for measuring and tweaking rendering performance, but not so good for measuring when important content appears and can be interacted with.
    • Custom metrics measure a particular, custom event for the user, e.g. Twitter’s Time To First Tweet and Pinterest’s PinnerWaitTime. Good for describing the user experience precisely, not so good for scaling the metrics and comparing them with competitors.

    To complete the picture, we’d usually look out for useful metrics among all of these groups. Usually, the most specific and relevant ones are:

    • Time to Interactive (TTI)
      The point at which layout has stabilized, key webfonts are visible, and the main thread is available enough to handle user input — basically the time mark when a user can interact with the UI. The key metric for understanding how long a user has to wait to use the site without a lag.
    • First Input Delay (FID), or Input responsiveness
      The time from when a user first interacts with your site to the time when the browser is actually able to respond to that interaction. Complements TTI very well as it describes the missing part of the picture: what happens when a user actually interacts with the site. Intended as a RUM metric only. There is a JavaScript library for measuring FID in the browser.
    • Largest Contentful Paint (LCP)
      Marks the point in the page load timeline when the page’s important content has likely loaded. The assumption is that the most important element of the page is the largest one visible in the user’s viewport. If elements are rendered both above and below the fold, only the visible part is considered relevant. Currently a hidden metric in Lighthouse, to be rolled out if it proves to be valuable.
    • Total Blocking Time (TBT)
      A new metric that helps quantify the severity of how non-interactive a page is prior to it becoming reliably interactive (that is, the main thread has been free of any tasks running over 50ms (long tasks) for at least 5s). The metric measures the total amount of time between the first paint and Time to Interactive (TTI) where the main thread was blocked for long enough to prevent input responsiveness. No wonder, then, that a low TBT is a good indicator for good performance. (thanks, Artem, Phil)
    • Cumulative Layout Shift (CLS)
      The metric highlights how often users experience unexpected layout shifts (reflows) when accessing the site. It examines unstable elements and their impact on the overall experience. The lower the score, the better.
    • Speed Index
      Measures how quickly the page contents are visually populated; the lower the score, the better. The Speed Index score is computed based on the speed of visual progress, but it’s merely a computed value. It’s also sensitive to the viewport size, so you need to define a range of testing configurations that match your target audience. Note that it is becoming less important with LCP coming in as a new metric (thanks, Boris, Artem!).
    • CPU time spent
      A metric that shows how often and how long the main thread is blocked, working on painting, rendering, scripting and loading. High CPU time is a clear indicator of a janky experience, i.e. when the user experiences a noticeable lag between their action and a response. With WebPageTest, you can select "Capture Dev Tools Timeline" on the "Chrome" tab to expose the breakdown of the main thread as it runs on any device using WebPageTest.
    • Component-Level CPU Costs
      Just like with the CPU time spent, this metric, proposed by Stoyan Stefanov, explores the impact of JavaScript on CPU. The idea is to use CPU instruction count per component to understand its impact on the overall experience, in isolation. Could be implemented using Puppeteer and Chrome.
    • FrustrationIndex
      While many metrics featured above explain when a particular event happens, Tim Vereecke's FrustrationIndex looks at the gaps between metrics instead of looking at them individually. It looks at the key milestones perceived by the end-user, such as Title is visible, First content is visible, Visually ready and Page looks ready and calculates a score indicating the level of frustration while loading a page. The bigger the gap, the bigger the chance a user gets frustrated. Potentially a good KPI for user experience. Tim has published a detailed post about FrustrationIndex and how it works.
    • Ad Weight Impact
      If your site depends on the revenue generated by advertising, it’s useful to track the weight of ad related code. Paddy Ganti’s script constructs two URLs (one normal and one blocking the ads), prompts the generation of a video comparison via WebPageTest and reports a delta.
    • Deviation metrics
      As noted by Wikipedia engineers, data on how much variance exists in your results can inform you how reliable your instruments are, and how much attention you should pay to deviations and outliers. Large variance is an indicator of adjustments needed in the setup. It also helps you understand when certain pages are more difficult to measure reliably, e.g. due to third-party scripts causing significant variation. It might also be a good idea to track browser versions to understand bumps in performance when a new browser version is rolled out.
    • Custom metrics
      Custom metrics are defined by your business needs and customer experience. It requires you to identify important pixels, critical scripts, necessary CSS and relevant assets and measure how quickly they get delivered to the user. For that one, you can monitor Hero Rendering Times, or use Performance API, marking particular timestamps for events that are important for your business. Also, you can collect custom metrics with WebPagetest by executing arbitrary JavaScript at the end of a test.
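    Many of these metrics can be collected in the field with a few lines of JavaScript. As a rough sketch, here's how LCP and CLS could be observed with PerformanceObserver (the /rum endpoint is a placeholder, and libraries such as web-vitals wrap this up more robustly):

    // Track the latest Largest Contentful Paint candidate.
    let lcp = 0;
    new PerformanceObserver((list) => {
      const entries = list.getEntries();
      lcp = entries[entries.length - 1].startTime;
    }).observe({ type: 'largest-contentful-paint', buffered: true });

    // Accumulate layout shifts that weren't caused by recent user input.
    let cls = 0;
    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        if (!entry.hadRecentInput) cls += entry.value;
      }
    }).observe({ type: 'layout-shift', buffered: true });

    // Beacon the values once the page gets hidden (closed or backgrounded).
    addEventListener('visibilitychange', () => {
      if (document.visibilityState === 'hidden') {
        navigator.sendBeacon('/rum', JSON.stringify({ lcp, cls }));
      }
    });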

    Note that the First Meaningful Paint (FMP) doesn't appear in the overview above. It used to provide an insight into how quickly the server outputs any data. Long FMP usually indicated JavaScript blocking the main thread, but could be related to back-end/server issues as well. However, the metric has been deprecated recently as it appears not to be accurate in about 20% of the cases. It will no longer be supported in the next versions of Lighthouse.

    Steve Souders has a detailed explanation of most of these metrics. It’s important to notice that while Time-To-Interactive is measured by running automated audits in the so-called lab environment, First Input Delay represents the actual user experience, with actual users experiencing a noticeable lag. In general, it’s probably a good idea to always measure and track both of them.

    Depending on the context of your application, preferred metrics might differ: e.g. for Netflix TV UI, key input responsiveness, memory usage and TTI are more critical, and for Wikipedia, first/last visual changes and CPU time spent metrics are more important.

    Note: both FID and TTI do not account for scrolling behavior; scrolling can happen independently since it’s off-main-thread, so for many content consumption sites these metrics might be much less important (thanks, Patrick!).

User-centric performance metrics provide a better insight into the actual user experience. First Input Delay (FID) is a new metric that tries to achieve just that. (Large preview)
  4. Gather data on a device representative of your audience.
    To gather accurate data, we need to choose the devices we test on carefully. In most companies, that means looking into analytics and creating user profiles based on the most common device types. Yet often, analytics alone doesn’t provide a complete picture. A significant portion of the target audience might be abandoning the site (and not returning) just because their experience is too slow, and their devices are unlikely to show up as the most popular devices in analytics. So additionally conducting research on common devices in your target group might be a good idea.

    Globally in 2018–2019, according to the IDC, 87% of all shipped mobile phones were Android devices. The average consumer upgrades their phone every 2 years, and in the US the phone replacement cycle is 33 months. The average bestselling phone worldwide costs under $200.

    A representative device, then, is an Android device that is at least 24 months old, costing $200 or less, running on slow 3G, 400ms RTT and 400kbps transfer, just to be slightly more pessimistic. This might be very different for your company, of course, but that’s a close enough approximation of a majority of customers out there. In fact, it might be a good idea to look into current Amazon Best Sellers for your target market. (Thanks to Tim Kadlec, Henri Helvetica and Alex Russell for the pointers!)

    What test devices to choose then? The ones that fit well with the profile outlined above. It’s a good option to choose a Moto G4/G5 Plus, a mid-range Samsung device (Galaxy A50, S8), a good middle-of-the-road device like a Nexus 5X, Xiaomi Mi A3 or Xiaomi Redmi Note 7 and a slow device like Alcatel 1X or Cubot X19, perhaps in an open device lab. For testing on slower thermal-throttled devices, you could also get a Nexus 4, which costs just around $100.

    Also, check the chipsets used in each device and do not over-represent one chipset: a few generations of Snapdragon and Apple chipsets, as well as low-end Rockchip and MediaTek, would be enough (thanks, Patrick!).

    If you don’t have a device at hand, emulate mobile experience on desktop by testing on a throttled 3G network (e.g. 300ms RTT, 1.6 Mbps down, 0.8 Mbps up) with a throttled CPU (5× slowdown). Eventually switch over to regular 3G, slow 4G (e.g. 170ms RTT, 9 Mbps down, 9Mbps up), and Wi-Fi. To make the performance impact more visible, you could even introduce 2G Tuesdays or set up a throttled 3G/4G network in your office for faster testing.
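    If you automate your tests with Puppeteer, a sketch of the same throttling through the Chrome DevTools Protocol could look like this (the numbers mirror the settings above):

    const puppeteer = require('puppeteer');

    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      const client = await page.target().createCDPSession();

      await client.send('Network.emulateNetworkConditions', {
        offline: false,
        latency: 300,                                // 300ms RTT
        downloadThroughput: (1.6 * 1024 * 1024) / 8, // 1.6 Mbps, in bytes/s
        uploadThroughput: (0.8 * 1024 * 1024) / 8,   // 0.8 Mbps, in bytes/s
      });
      await client.send('Emulation.setCPUThrottlingRate', { rate: 5 }); // 5× slowdown

      await page.goto('https://example.com');
      // ...run your measurements here...
      await browser.close();
    })();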

    Keep in mind that on a mobile device, we should be expecting a 4×–5× slowdown compared to desktop machines. Mobile devices have different GPUs, CPUs, memory and battery characteristics. That’s why it’s important to have a good profile of an average device and always test on such a device.

Introducing the slowest day of the week: Facebook has introduced 2G Tuesdays to increase visibility and sensitivity of slow connections. (Image source)

    Luckily, there are many great options that help you automate the collection of data and measure how your website performs over time according to these metrics. Keep in mind that a good performance picture covers a set of performance metrics, lab data and field data:

    • Synthetic testing tools collect lab data in a reproducible environment with predefined device and network settings (e.g. Lighthouse, Calibre, WebPageTest) and
    • Real User Monitoring (RUM) tools evaluate user interactions continuously and collect field data (e.g. SpeedCurve, New Relic — the tools provide synthetic testing, too).

    The former is particularly useful during development as it will help you identify, isolate and fix performance issues while working on the product. The latter is useful for long-term maintenance as it will help you understand your performance bottlenecks as they are happening live — when users actually access the site.

    By tapping into built-in RUM APIs such as Navigation Timing, Resource Timing, Paint Timing, Long Tasks, etc., synthetic testing tools and RUM together provide a complete picture of performance in your application. You could use PWMetrics, Calibre, SpeedCurve, mPulse and Boomerang, or Sitespeed.io, all of which are great options for performance monitoring. Furthermore, with the Server Timing header, you could even monitor back-end and front-end performance all in one place.
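    As a small sketch, assuming your back end emits a Server-Timing header such as Server-Timing: db;dur=53, cache;desc="hit";dur=0.2, those values are readable from the regular timing entries on the client:

    // Read back-end timings exposed via the Server-Timing response header.
    for (const entry of performance.getEntriesByType('navigation')) {
      for (const timing of entry.serverTiming || []) {
        console.log(timing.name, timing.duration, timing.description);
      }
    }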

    Note: It’s always a safer bet to choose network-level throttlers, external to the browser, as, for example, DevTools has issues interacting with HTTP/2 push, due to the way it’s implemented (thanks, Yoav, Patrick!). For macOS, we can use Network Link Conditioner; for Windows, Windows Traffic Shaper; for Linux, netem; and for FreeBSD, dummynet.

Lighthouse, a performance auditing tool integrated into DevTools. (Large preview)
  1. Set up "clean" and "customer" profiles for testing.
    While running tests in passive monitoring tools, it’s a common strategy to turn off anti-virus and background CPU tasks, remove background bandwidth transfers and test with a clean user profile without browser extensions to avoid skewed results (Firefox, Chrome).

    However, it’s also a good idea to study which extensions your customers are using frequently, and test with a dedicated "customer" profile as well. In fact, some extensions might have a profound performance impact (study) on your application, and if your users use them a lot, you might want to account for it up front. "Clean" profile results alone are overly optimistic and can be crushed in real-life scenarios.

  6. Share the performance goals with your colleagues.
    Make sure that performance goals are familiar to every member of your team to avoid misunderstandings down the line. Every decision — be it design, marketing or anything in-between — has performance implications, and distributing responsibility and ownership across the entire team would streamline performance-focused decisions later on. Map design decisions against performance budget and the priorities defined early on.

Setting Realistic Goals
  7. 100-millisecond response time, 60 fps.
    For an interaction to feel smooth, the interface has 100ms to respond to the user’s input. Any longer than that, and the user perceives the app as laggy. RAIL, a user-centric performance model, gives you healthy targets: to allow for a response in under 100 milliseconds, the page must yield control back to the main thread at least every 50 milliseconds. Estimated Input Latency tells us if we are hitting that threshold, and ideally, it should be below 50ms. For high-pressure points like animation, it’s best to do nothing else where you can and the absolute minimum where you can't.

    RAIL, a user-centric performance model.

    Also, each frame of animation should be completed in less than 16 milliseconds, thereby achieving 60 frames per second (1 second ÷ 60 = 16.6 milliseconds) — preferably under 10 milliseconds. Because the browser needs time to paint the new frame to the screen, your code should finish executing before hitting the 16.6 milliseconds mark. We’re starting to have conversations about 120fps (e.g. iPad Pro’s screens run at 120Hz) and Surma has covered some rendering performance solutions for 120fps, but that’s probably not a target we’re looking at just yet.

    Be pessimistic in performance expectations, but be optimistic in interface design and use idle time wisely (check idlize and idle-until-urgent). Obviously, these targets apply to runtime performance, rather than loading performance.

  8. FID < 100ms, TTI < 5s on 3G, Speed Index < 3s, Critical file size budget < 170KB (gzipped).
    Although it might be very difficult to achieve, a good ultimate goal would be Speed Index under 3s and Time to Interactive under 5s, and for repeat visits, aim for under 2s (achievable only with a service worker). Aim for Largest Contentful Paint of under 1s and minimize Total Blocking Time and Cumulative Layout Shift. An acceptable First Input Delay (highlighted as Max Potential First Input Delay in Lighthouse) is under 100–130ms. As mentioned above, we’re considering the baseline being a $200 Android phone (e.g. Moto G4) on a slow 3G network, emulated at 400ms RTT and 400kbps transfer speed.

    We have two major constraints that effectively shape a reasonable target for speedy delivery of the content on the web. On the one hand, we have network delivery constraints due to TCP Slow Start. The first 14KB of the HTML — 10 TCP packets, each 1460 bytes, making around 14.25 KB, albeit not to be taken literally — is the most critical payload chunk, and the only part of the budget that can be delivered in the first roundtrip (which is all you get in 1 sec at 400ms RTT due to mobile wake-up times).

    (Note: as TCP generally under-utilizes the network connection by a significant amount, Google has developed TCP Bottleneck Bandwidth and RTT (BBR), a relatively new delay-controlled TCP flow control algorithm. Designed for the modern web, it responds to actual congestion, rather than packet loss like TCP does, it is significantly faster, with higher throughput and lower latency — and the algorithm works differently. It's still important to prioritize critical resources as early as possible, but 14 KB might not be that relevant with BBR in place.) (thanks, Victor, Barry!)

    On the other hand, we have hardware constraints on memory and CPU due to JavaScript parsing times (we’ll talk about them in detail later). To achieve the goals stated in the first paragraph, we have to consider the critical file size budget for JavaScript. Opinions vary on what that budget should be (and it heavily depends on the nature of your project), but a budget of 170KB JavaScript gzipped already would take up to 1s to parse and compile on an average phone. Assuming that 170KB expands to 3–4× that size when decompressed (roughly 0.5–0.7MB of code), that already could be the death knell of a "decent" user experience on a Moto G4/G5 Plus.

    If you want to target growing markets such as South East Asia, Africa or India, you'll have to look into a very different set of constraints. Addy Osmani covers major feature phone constraints, such as few low cost, high-quality devices, unavailability of high-quality networks and expensive mobile data — along with PRPL-30 budget and development guidelines for these environments.

    According to Addy Osmani, a recommended size for lazy-loaded routes is also less than 35KB. (Large preview)

    Addy Osmani suggests a PRPL-30 performance budget (30KB gzipped + minified initial bundle) when targeting a feature phone. (Large preview)

    In fact, Google’s Alex Russell recommends aiming for 130–170KB gzipped as a reasonable upper boundary. In the real world, most products aren’t even close: the median bundle size today is around 417KB, which is up 42% compared to early 2015. On a middle-class mobile device, that accounts for 15–25 seconds of Time-To-Interactive.

    Geekbench CPU performance benchmarks for the highest selling smartphones globally in 2019. JavaScript stresses single-core performance (remember, it’s inherently more single-threaded than the rest of the Web Platform) and is CPU bound. From Addy’s article “Loading Web Pages Fast On A $20 Feature Phone”. (Large preview)

    We could also go beyond the bundle size budget though. For example, we could set performance budgets based on the activities of the browser’s main thread, i.e. paint time before start render, or track down front-end CPU hogs. Tools such as Calibre, SpeedCurve and Bundlesize can help you keep your budgets in check, and can be integrated into your build process.

    Finally, a performance budget probably shouldn’t be a fixed value. Performance budgets should adapt depending on the network connection, but a payload on a slower connection is much more "expensive", regardless of how it’s used.

    Note: It might sound strange to set such rigid budgets in times of wide-spread HTTP/2, upcoming 5G, rapidly evolving mobile phones and flourishing SPAs. However, they do sound reasonable when we deal with the unpredictable nature of the network and hardware, including everything from congested networks to slowly developing infrastructure, to data caps, proxy browsers, save-data mode and sneaky roaming charges.

From Fast By Default: Modern loading best practices by Addy Osmani (Slide 19)

Performance budgets should adapt depending on the network conditions for an average mobile device. (Image source: Katie Hempenius) (Large preview)

Defining The Environment
  9. Choose and set up your build tools.
    Don’t pay too much attention to what’s supposedly cool these days. Stick to your environment for building, be it Grunt, Gulp, Webpack, Parcel, or a combination of tools. As long as you are getting results you need and you have no issues maintaining your build process, you’re doing just fine.

    Among the build tools, Rollup is gaining traction, but Webpack seems to be the most established one, with literally hundreds of plugins available to optimize the size of your builds. Getting started with Webpack can be tough though, so if you're new to it, there are some great resources out there:

  10. Use progressive enhancement as a default.
    Still, after all these years, keeping progressive enhancement as the guiding principle of your front-end architecture and deployment is a safe bet. Design and build the core experience first, and then enhance the experience with advanced features for capable browsers, creating resilient experiences. If your website runs fast on a slow machine with a poor screen in a poor browser on a sub-optimal network, then it will only run faster on a fast machine with a good browser on a decent network.

    In fact, with adaptive module serving, we seem to be taking progressive enhancement to another level, serving "lite" core experiences to low-end devices, and enhancing with more sophisticated features for high-end devices. Progressive enhancement isn't likely to fade away any time soon, so it seems.
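    A rough sketch of adaptive module serving (the thresholds are illustrative and the module names hypothetical) could branch on the Network Information API and device memory:

    // Serve a "lite" experience to constrained devices, the full one otherwise.
    const connection = navigator.connection || {};
    const saveData = !!connection.saveData;
    const slowNetwork = /(^|-)2g$/.test(connection.effectiveType || '');
    const lowMemory = (navigator.deviceMemory || 8) <= 1; // in GB; Chrome-only

    if (saveData || slowNetwork || lowMemory) {
      import('./experience-lite.js');
    } else {
      import('./experience-full.js');
    }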

    If you need a practical implementation of the strategy on mid-scale and large-scale projects, Scott Jehl’s Modernizing our Progressive Enhancement Delivery article is a good place to start.

  11. Choose a strong performance baseline.
    With so many unknowns impacting loading — the network, thermal throttling, cache eviction, third-party scripts, parser blocking patterns, disk I/O, IPC latency, installed extensions, antivirus software and firewalls, background CPU tasks, hardware and memory constraints, differences in L2/L3 caching, RTTs — JavaScript has the heaviest cost of the experience, next to web fonts blocking rendering by default and images often consuming too much memory. With the performance bottlenecks moving away from the server to the client, as developers, we have to consider all of these unknowns in much more detail.

    With a 170KB budget that already contains the critical-path HTML/CSS/JavaScript, router, state management, utilities, framework, and the application logic, we have to thoroughly examine network transfer cost, the parse/compile-time and the runtime cost of the framework of our choice. Luckily, we’ve seen a huge improvement over the last few years in how fast browsers can parse and compile scripts. Yet the execution of JavaScript is still the main bottleneck, so paying close attention to script execution time and network can be impactful.

    As noted by Seb Markbåge, a good way to measure start-up costs for frameworks is to first render a view, then delete it and then render again as it can tell you how the framework scales. The first render tends to warm up a bunch of lazily compiled code, which a larger tree can benefit from when it scales. The second render is basically an emulation of how code reuse on a page affects the performance characteristics as the page grows in complexity.
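    A rough sketch of that measurement with React (any framework works similarly; the root component and mark names are placeholders):

    import React from 'react';
    import ReactDOM from 'react-dom';
    import App from './App'; // hypothetical root component

    const root = document.getElementById('root');

    performance.mark('cold-start');
    ReactDOM.render(React.createElement(App), root); // warms up lazily compiled code
    performance.mark('cold-end');

    ReactDOM.unmountComponentAtNode(root);

    performance.mark('warm-start');
    ReactDOM.render(React.createElement(App), root); // emulates code reuse as the page grows
    performance.mark('warm-end');

    performance.measure('cold render', 'cold-start', 'cold-end');
    performance.measure('warm render', 'warm-start', 'warm-end');
    console.table(performance.getEntriesByType('measure')
      .map(({ name, duration }) => ({ name, duration })));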

  12. Evaluate frameworks and dependencies.
    Now, not every project needs a framework and not every page of a single-page-application needs to load a framework. In Netflix’s case, "removing React, several libraries and the corresponding app code from the client-side reduced the total amount of JavaScript by over 200KB, causing an over-50% reduction in Netflix’s Time-to-Interactivity for the logged-out homepage." The team then utilized the time spent by users on the landing page to prefetch React for subsequent pages that users were likely to land on (read on for details).

    It might sound obvious but worth stating: some projects can also benefit from removing an existing framework altogether. Once a framework is chosen, you’ll be staying with it for at least a few years, so if you need to use one, make sure your choice is informed and well considered.

    Inian Parameshwaran has measured the performance footprint of the top 50 frameworks (against First Contentful Paint — the time from navigation to the time when the browser renders the first bit of content from the DOM). Inian discovered that, out there in the wild, Vue and Preact are the fastest across the board — both on desktop and mobile — followed by React (slides). You could examine your framework candidates and the proposed architecture, and study how most solutions out there perform, e.g. with server-side rendering or client-side rendering, on average.

    Baseline performance cost matters. According to a study by Ankur Sethi, "your React application will never load faster than about 1.1 seconds on an average phone in India, no matter how much you optimize it. Your Angular app will always take at least 2.7 seconds to boot up. The users of your Vue app will need to wait at least 1 second before they can start using it." You might not be targeting India as your primary market anyway, but users accessing your site with suboptimal network conditions will have a comparable experience. In exchange, your team gains maintainability and developer efficiency, of course. But this consideration needs to be deliberate.

    You could go as far as evaluating a framework (or any JavaScript library) with Sacha Greif’s 12-point scoring system, exploring features, accessibility, stability, performance, package ecosystem, community, learning curve, documentation, tooling, track record, team, compatibility and security, for example. But on a tough schedule, it’s a good idea to consider at least the total cost on size + initial parse times before choosing an option; lightweight options such as Preact, Inferno, Vue, Svelte or Polymer can get the job done just fine. The size of your baseline will define the constraints for your application’s code.

    There are many tools to help you make an informed decision about the impact of your dependencies and viable alternatives:

    A good starting point is to choose a good default stack for your application. Gatsby (React), VuePress (Vue), Preact CLI (Preact), and PWA Starter Kit provide reasonable defaults for fast loading out of the box on average mobile hardware. Also, take a look at web.dev framework-specific performance guidance for React and Angular that's supposed to be expanded later this year (thanks, Phillip!).

CPU and compute performance of top-selling phones (Image credit: Addy Osmani) (Large preview)
  13. Consider using PRPL pattern and app shell architecture.
    Different frameworks will have different effects on performance and will require different strategies of optimization, so you have to clearly understand all of the nuts and bolts of the framework you’ll be relying on. When building a web app, look into the PRPL pattern and application shell architecture. The idea is quite straightforward: Push the minimal code needed to get interactive for the initial route to render quickly, then use service worker for caching and pre-caching resources and then lazy-load routes that you need, asynchronously.
PRPL stands for Pushing critical resources, Rendering the initial route, Pre-caching remaining routes and Lazy-loading remaining routes on demand. An application shell is the minimal HTML, CSS and JavaScript powering a user interface.
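A minimal sketch of the caching half of that pattern in a service worker (asset names are placeholders; in practice, a tool like Workbox generates this for you):

    // sw.js: pre-cache the app shell, then serve it cache-first.
    const SHELL_CACHE = 'app-shell-v1';
    const SHELL_ASSETS = ['/', '/shell.css', '/shell.js'];

    self.addEventListener('install', (event) => {
      event.waitUntil(
        caches.open(SHELL_CACHE).then((cache) => cache.addAll(SHELL_ASSETS))
      );
    });

    self.addEventListener('fetch', (event) => {
      event.respondWith(
        caches.match(event.request).then((hit) => hit || fetch(event.request))
      );
    });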
  14. Have you optimized the performance of your APIs?
    APIs are communication channels for an application to expose data to internal and third-party applications via so-called endpoints. When designing and building an API, we need a reasonable protocol to enable the communication between the server and third-party requests. Representational State Transfer (REST) is a well-established, logical choice: it defines a set of constraints that developers follow to make content accessible in a performant, reliable and scalable fashion. Web services that conform to the REST constraints are called RESTful web services.

    As with good ol' HTTP requests, when data is retrieved from an API, any delay in server response will propagate to the end user, hence delaying rendering. When a resource wants to retrieve some data from an API, it will need to request the data from the corresponding endpoint. A component that renders data from several resources, such as an article with comments and author photos in each comment, may need several roundtrips to the server to fetch all the data before it can be rendered. Furthermore, the amount of data returned through REST is often more than what is needed to render that component.

    If many resources require data from an API, the API might become a performance bottleneck. GraphQL provides a performant solution to these issues. Per se, GraphQL is a query language for your API, and a server-side runtime for executing queries by using a type system you define for your data. Unlike REST, GraphQL can retrieve all data in a single request, and the response will be exactly what is required, without over- or under-fetching data as typically happens with REST.

    In addition, because GraphQL is using schema (metadata that tells how the data is structured), it can already organize data into the preferred structure, so, for example, with GraphQL, we could remove JavaScript code used for dealing with state management, producing a cleaner application code that runs faster on the client.
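    As a small sketch of the single-request idea (the /graphql endpoint and the schema with article, comments and author are hypothetical), a component could fetch exactly the fields it needs:

    const query = `
      query ArticleWithComments($id: ID!) {
        article(id: $id) {
          title
          body
          comments { text author { name photoUrl } }
        }
      }`;

    fetch('/graphql', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query, variables: { id: '42' } }),
    })
      .then((response) => response.json())
      .then(({ data }) => renderArticle(data.article)); // renderArticle is your own code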

    If you want to get started with GraphQL, Eric Baer published two fantastic articles on yours truly Smashing Magazine: A GraphQL Primer: Why We Need A New Kind Of API and A GraphQL Primer: The Evolution Of API Design (thanks for the hint, Leonardo!).

A difference between REST and GraphQL, illustrated via a conversation between Redux + REST on the left, and Apollo + GraphQL on the right. (Image source: Hacker Noon) (Large preview)
  15. Will you be using AMP or Instant Articles?
    Depending on the priorities and strategy of your organization, you might want to consider using Google’s AMP, Facebook’s Instant Articles or Apple News. You can achieve good performance without them, but AMP does provide a solid performance framework with a free content delivery network (CDN), while Instant Articles will boost your visibility and performance on Facebook.

    The seemingly obvious benefit of these technologies for users is guaranteed performance, so at times they might even prefer AMP, Apple News or Instant Articles links over "regular" and potentially bloated pages. For content-heavy websites that are dealing with a lot of third-party content, these options could potentially help speed up render times dramatically.

    Unless they don't. According to Tim Kadlec, for example, "AMP documents tend to be faster than their counterparts, but they don’t necessarily mean a page is performant. AMP is not what makes the biggest difference from a performance perspective."

    A benefit for the website owner is obvious: discoverability of these formats on their respective platforms and increased visibility in search engines. You could build progressive web AMPs, too, by reusing AMPs as a data source for your PWA. Downside? Obviously, a presence in a walled garden places developers in a position to produce and maintain a separate version of their content, and in the case of Instant Articles and Apple News, without actual URLs (thanks Addy, Jeremy!).

  16. Choose your CDN wisely.
    Depending on how much dynamic data you have, you might be able to "outsource" some part of the content to a static site generator, pushing it to a CDN and serving a static version from it, thus avoiding database requests. You could even choose a static-hosting platform based on a CDN, enriching your pages with interactive components as enhancements (JAMStack). In fact, some of those generators (like Gatsby on top of React) are actually website compilers with many automated optimizations provided out of the box. As compilers add optimizations over time, the compiled output gets ever smaller and faster.

    Notice that CDNs can serve (and offload) dynamic content as well, so restricting your CDN to static assets is not necessary. Double-check whether your CDN performs compression and conversion (e.g. image optimization in terms of formats, compression and resizing at the edge), whether it provides support for service workers and edge-side includes, which assemble static and dynamic parts of pages at the CDN’s edge (i.e. the server closest to the user), and other tasks. If you want to be on the edge, check if your CDN supports HTTP over QUIC (HTTP/3) as well.

    Note: based on research by Patrick Meenan and Andy Davies, HTTP/2 prioritization is effectively broken on many CDNs, so be careful when choosing a CDN. Patrick has more details in his recent talk on HTTP/2 Prioritization (thanks, Barry!).

Assets Optimizations
  17. Use Brotli for plain text compression.
    In 2015, Google introduced Brotli, a new open-source lossless data format, which is now supported in all modern browsers. In practice, Brotli appears to be much more effective than Gzip and Deflate. It might be (very) slow to compress, depending on the settings, but slower compression will ultimately lead to higher compression rates. Still, it decompresses fast. You can also estimate Brotli compression savings for your site.

    Browsers will accept it only if the user is visiting a website over HTTPS. Brotli is widely supported, and many CDNs support it (Akamai, AWS, KeyCDN, Fastly, Cloudflare, CDN77) and you can enable Brotli even on CDNs that don’t support it yet (with a service worker).

    The catch is that compressing all assets with Brotli is quite expensive, so a number of servers can’t use it just because of the cost overhead it produces. In fact, at the highest level of compression, Brotli is so slow that any potential gains in file size could be nullified by the amount of time it takes for the server to begin sending the response as it waits to dynamically compress the asset. With static compression, however, higher compression settings are preferred.

    If you can bypass the cost of dynamically compressing static assets, it’s worth the effort. Brotli can be used for any plaintext payload — HTML, CSS, SVG, JavaScript, and so on.

    The strategy? Pre-compress static assets with Brotli+Gzip at the highest level and compress (dynamic) HTML on the fly with Brotli at level 3–5. Make sure that the server handles content negotiation for Brotli or gzip properly.
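    In Node.js, for instance, a sketch of that two-tier approach could use the built-in Brotli support in zlib (available since Node 11.7; file names are placeholders):

    const fs = require('fs');
    const zlib = require('zlib');

    // Build time: compress a static asset once, at the highest quality.
    const asset = fs.readFileSync('app.js');
    fs.writeFileSync('app.js.br', zlib.brotliCompressSync(asset, {
      params: { [zlib.constants.BROTLI_PARAM_QUALITY]: 11 },
    }));

    // Request time: compress dynamic HTML cheaply and quickly.
    function compressHtml(html) {
      return zlib.brotliCompressSync(html, {
        params: { [zlib.constants.BROTLI_PARAM_QUALITY]: 4 },
      });
    }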

Both on desktop and on mobile, only 15% of all requests are compressed with Brotli. Around 65% are compressed with gzip. The rest isn’t compressed at all. (Image source: Web Almanac: Compression) (Large preview)
  18. Use responsive images and WebP.
    As far as possible, use responsive images with srcset, sizes and the <picture> element. While you’re at it, you could also make use of the WebP format (supported in all modern browsers except Safari and iOS Safari) by serving WebP images with the <picture> element and a JPEG fallback (see Andreas Bovens' code snippet) or by using content negotiation (using Accept headers). Ire Aderinokun has a very detailed tutorial on converting images to WebP, too.
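    As a minimal sketch of that markup (file names, sizes and breakpoints are placeholders):

    <picture>
      <source type="image/webp"
              srcset="hero-480.webp 480w, hero-960.webp 960w"
              sizes="(max-width: 600px) 480px, 960px">
      <img src="hero-960.jpg"
           srcset="hero-480.jpg 480w, hero-960.jpg 960w"
           sizes="(max-width: 600px) 480px, 960px"
           width="960" height="540" alt="Hero image">
    </picture>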

    Sketch natively supports WebP, and WebP images can be exported from Photoshop using a WebP plugin for Photoshop. Other options are available, too. If you’re using WordPress or Joomla, there are extensions to help you easily implement support for WebP, such as Optimus and Cache Enabler for WordPress and Joomla’s own supported extension (via Cody Arsenault).

    It’s important to note that while WebP file sizes compare well against equivalent Guetzli- and Zopfli-optimized files, the format doesn’t support progressive rendering like JPEG does, which is why users might see an actual image faster with a good ol' JPEG although WebP images might get faster through the network. With JPEG, we can serve a "decent" user experience with half or even a quarter of the data and load the rest later, rather than have a half-empty image as in the case of WebP. Your decision will depend on what you are after: with WebP, you’ll reduce the payload, and with JPEG you’ll improve perceived performance.

    On Smashing Magazine, we use the postfix -opt for image names — for example, brotli-compression-opt.png; whenever an image contains that postfix, everybody on the team knows that the image has already been optimized. And — shameless plug! — Jeremy Wagner even published a Smashing book on WebP.

The Responsive Image Breakpoints Generator automates images and markup generation.
  19. Are images properly optimized?
    When you’re working on a landing page on which it’s critical that a particular image loads blazingly fast, make sure that JPEGs are progressive and compressed with mozJPEG (which improves the start rendering time by manipulating scan levels) or take a look at Guetzli, Google’s new open-source encoder focusing on perceptual performance, and utilizing learnings from Zopfli and WebP. The only downside: slow processing times (a minute of CPU per megapixel). For PNG, we can use Pingo, and for SVG, we can use SVGO or SVGOMG. And if you need to quickly preview and copy or download all the SVG assets from a website, svg-grabber can do that for you, too.

    Every single image optimization article would state it, but keeping vector assets clean and tight is always worth a reminder. Make sure to clean up unused assets, remove unnecessary metadata and reduce the number of path points in artwork (and thus SVG code). (Thanks, Jeremy!)

    There are more advanced options though. You could:

    • Use Squoosh to compress, resize and manipulate images at the optimal compression levels (lossy or lossless),
    • Use the Responsive Image Breakpoints Generator or a service such as Cloudinary or Imgix to automate image optimization. Also, in many cases, using srcset and sizes alone will reap significant benefits.
    • To check the efficiency of your responsive markup, you can use imaging-heap, a command line tool that measures the efficiency across viewport sizes and device pixel ratios.
    • Lazy load images and iframes with hybrid lazy-loading, utilizing native lazy-loading and lazyload, a library that detects any visibility changes triggered through user interaction (with IntersectionObserver which we’ll explore later).
    • For offscreen images, we can display a placeholder first, and when the image is within the viewport, using IntersectionObserver, trigger a network call for the image to be downloaded in the background. We can then defer render until decode with img.decode() or download the image if the Image Decode API isn't available. When rendering the image, we can use fade-in animations, for example (see the sketch after this list). Katie Hempenius and Addy Osmani share more insights in their talk Speed at Scale: Web Performance Tips and Tricks from the Trenches.
    • You can add automatic image compression to your Pull Requests, so no image can hit production uncompressed. The action uses mozjpeg and libvips that work with PNGs and JPGs.
    • Watch out for images that are loaded by default, but might never be displayed — e.g. in carousels, accordions and image galleries.
    • Consider Swapping Images with the Sizes Attribute by specifying different image display dimensions depending on media queries, e.g. to manipulate sizes to swap sources in a magnifier component.
    • Review image download inconsistencies to prevent unexpected downloads for foreground and background images.
    • Sometimes optimizing images alone won't do the trick. To improve the time needed to start the rendering of a critical image, lazy-load less important images and defer any scripts to load after critical images have already rendered.
    • To optimize storage internally, you could use Dropbox’s new Lepton format for losslessly compressing JPEGs by an average of 22%.
    • Watch out for the aspect-ratio property in CSS and the intrinsicsize attribute, which will allow us to set aspect ratios and dimensions for images, so the browser can reserve a pre-defined layout slot early and avoid layout jumps during page load.
    • If you feel adventurous, you could chop and rearrange HTTP/2 streams using edge workers, essentially a real-time filter living on the CDN, to send images faster through the network. Edge workers use JavaScript streams with chunks you can control (they are JavaScript that runs on the CDN edge and can modify streaming responses), so you can control the delivery of images. With a service worker it’s too late, as you can’t control what’s on the wire, but it does work with edge workers. So you can use them on top of static JPEGs saved progressively for a particular landing page.
    A sample output by imaging-heap, a command line tool that measures the efficiency across viewport sizes and device pixel ratios. (Image source) (Large preview)
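    To make the lazy-loading pattern from the list above concrete, here's a rough sketch of the placeholder + IntersectionObserver + img.decode() approach (it assumes markup like <img class="lazy" src="placeholder.svg" data-src="photo.jpg" alt="...">):

    const io = new IntersectionObserver((entries, observer) => {
      for (const entry of entries) {
        if (!entry.isIntersecting) continue;
        const img = entry.target;
        observer.unobserve(img);

        const full = new Image();
        full.src = img.dataset.src;
        // Decode ahead of paint where supported, then swap the source in.
        const ready = full.decode ? full.decode() : Promise.resolve();
        ready.catch(() => {}).then(() => {
          img.src = full.src;
          img.classList.add('is-loaded'); // hook for a CSS fade-in
        });
      }
    }, { rootMargin: '200px' }); // start fetching slightly before the viewport

    document.querySelectorAll('img.lazy').forEach((img) => io.observe(img));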

    The future of responsive images might change dramatically with the adoption of client hints. Client hints are HTTP request header fields, e.g. DPR, Viewport-Width, Width, Save-Data, Accept (to specify image format preferences) and others. They are supposed to inform the server about the specifics of user’s browser, screen, connection etc. As a result, the server can decide how to fill in the layout with appropriately sized images, and serve only these images in desired formats. With client hints, we move the resource selection from HTML markup and into the request-response negotiation between the client and server.

    As Ilya Grigorik noted, client hints complete the picture — they aren’t an alternative to responsive images. "The <picture> element provides the necessary art-direction control in the HTML markup. Client hints provide annotations on resulting image requests that enable resource selection automation. Service Worker provides full request and response management capabilities on the client." A service worker could, for example, append new client hints headers values to the request, rewrite the URL and point the image request to a CDN, adapt response based on connectivity and user preferences, etc. It holds true not only for image assets but for pretty much all other requests as well.

    For clients that support client hints, one could measure 42% byte savings on images and 1MB+ fewer bytes for 70th+ percentile. On Smashing Magazine, we could measure 19-32% improvement, too. Unfortunately, client hints still have to gain some browser support. Still under consideration in Firefox. However, if you supply both the normal responsive images markup and the <meta> tag for Client Hints, then the browser will evaluate the responsive images markup and request the appropriate image source using the Client Hints HTTP headers.
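    A minimal sketch of that combination (the hint list and file name are illustrative):

    <!-- Opt into client hints; supporting browsers then send DPR, Width, etc. -->
    <meta http-equiv="Accept-CH" content="DPR, Width, Viewport-Width">
    <!-- The sizes attribute lets the browser compute the Width hint. -->
    <img src="/img/hero.jpg" sizes="100vw" alt="Hero image">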

    Not good enough? Well, you can also improve perceived performance for images with the multiple background images technique. Keep in mind that playing with contrast and blurring out unnecessary details (or removing colors) can reduce file size as well. Ah, you need to enlarge a small photo without losing quality? Consider using Letsenhance.io.

    These optimizations so far cover just the basics. Addy Osmani has published a very detailed guide on Essential Image Optimization that goes very deep into details of image compression and color management. For example, you could blur out unnecessary parts of the image (by applying a Gaussian blur filter to them) to reduce the file size, and eventually you might even start removing colors or turn the picture into black and white to reduce the size even further. For background images, exporting photos from Photoshop with 0 to 10% quality can be absolutely acceptable as well. Ah, and don’t use JPEG-XR on the web — "the processing of decoding JPEG-XRs software-side on the CPU nullifies and even outweighs the potentially positive impact of byte size savings, especially in the context of SPAs".

Addy Osmani recommends to replace animated GIFs with looping inline videos. The file size difference is noticeable (80% savings). (Large preview)
  20. Are videos properly optimized?
    We covered images so far, but we’ve avoided a conversation about good ol' GIFs. Frankly, instead of loading heavy animated GIFs which impact both rendering performance and bandwidth, it’s a good idea to switch either to animated WebP (with GIF being a fallback) or replace them with looping HTML5 videos altogether. Yes, unlike with images, browsers do not preload <video> content, but HTML5 videos tend to be lighter and smaller than GIFs. Not an option? Well, at least we can add lossy compression to GIFs with Lossy GIF, gifsicle or giflossy.

    Tests show that inline videos within img tags display 20× faster and decode 7× faster than the GIF equivalent, in addition to being a fraction in file size.

    In the land of good news, video formats have been advancing massively over the years. For a long time, we had hoped that WebM would become the format to rule them all, and WebP (which is basically one still image inside of the WebM video container) will become a replacement for dated image formats. But despite WebP and WebM gaining support these days, the breakthrough didn’t happen.

    In 2018, the Alliance of Open Media released a new promising video format called AV1. AV1 has compression similar to the H.265 codec (the evolution of H.264) but unlike the latter, AV1 is free. The H.265 license pricing pushed browser vendors to adopt the comparably performant AV1 instead: AV1 (just like H.265) compresses twice as well as WebM.

    AV1 has good chances of becoming the ultimate standard for video on the web. (Image credit: Wikimedia.org) (Large preview)

    In fact, Apple currently uses HEIF format and HEVC (H.265), and all the photos and videos on the latest iOS are saved in these formats, not JPEG. While HEIF and HEVC (H.265) aren’t properly exposed to the web (yet?), AV1 is — and it’s gaining browser support. So adding the AV1 source in your <video> tag is reasonable, as all browser vendors seem to be on board.

    For now, the most widely used and supported encoding is H.264, served by MP4 files, so before serving the file, make sure that your MP4s are processed with multi-pass encoding, blurred with the frei0r iirblur effect (if applicable), that the moov atom metadata is moved to the head of the file, and that your server accepts byte serving. Boris Schapira provides exact instructions for FFmpeg to optimize videos to the maximum. Of course, providing WebM format as an alternative would help, too.

    Need to start rendering videos faster, but video files are still too large? For example, whenever you have a large background video on a landing page? A common technique is to show the very first frame as a still image first, or to display a heavily optimized, short looping segment that could be interpreted as a part of the video, and then, whenever the video has buffered enough, start playing the actual video. Doug Sillars has written a detailed guide to background video performance that could be helpful in that case. (Thanks, Guy Podjarny!)
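    A rough sketch of that poster-first technique (the element IDs are placeholders):

    // Show a still frame immediately; start the video once it can play through.
    const video = document.querySelector('#hero-video');
    const poster = document.querySelector('#hero-poster');

    video.preload = 'auto';
    video.addEventListener('canplaythrough', () => {
      poster.hidden = true;
      video.play();
    }, { once: true });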

    Video playback performance is a story on its own, and if you’d like to dive into it in detail, take a look at Doug Sillars’ other series, The Current State of Video and Video Delivery Best Practices, which includes details on video delivery metrics, video preloading, compression and streaming.

Zach Leatherman’s Comprehensive Guide to Font-Loading Strategies provides a dozen options for better web font delivery.
  21. Are web fonts optimized?
    The first question worth asking is whether you can get away with using UI system fonts in the first place — just make sure to double-check that they appear correctly on various platforms. If that's not the case, chances are high that the web fonts you are serving include glyphs and extra features and weights that aren’t being used. You can ask your type foundry to subset web fonts or, if you are using open-source fonts, subset them on your own with Glyphhanger or Fontsquirrel. You can even automate your entire workflow with Peter Müller’s subfont, a command line tool that statically analyses your page in order to generate the most optimal web font subsets, and then inject them into your page.

    WOFF2 support is great, and you can use WOFF as fallback for browsers that don’t support it — or perhaps legacy browsers could be served well enough with system fonts instead. There are many, many, many options for web font loading, and you can choose one of the strategies from Zach Leatherman’s "Comprehensive Guide to Font-Loading Strategies," (code snippets also available as Web font loading recipes).

    Probably the better options to consider today are Critical FOFT with preload and "The Compromise" method. Both of them use a two-stage render for delivering web fonts in steps — first a small supersubset required to render the page fast and accurately with the web font, and then load the rest of the family async. The difference is that "The Compromise" technique loads polyfill asynchronously only if font load events are not supported, so you don’t need to load the polyfill by default. Need a quick win? Zach Leatherman has a quick 23-min tutorial and case study to get your fonts in order.

    In general, it might be a good idea to use the preload resource hint to preload fonts, but in your markup include the hints after the link to critical CSS and JavaScript. With preload, there is a puzzle of priorities, so consider injecting rel="preload" elements into the DOM just before the external blocking scripts. According to Andy Davies, "resources injected using a script are hidden from the browser until the script executes, and we can use this behaviour to delay when the browser discovers the preload hint." Otherwise, font loading will cost you in the first render time.
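    A sketch of that injection technique (the font URL is a placeholder):

    // Injected preloads are discovered only when this script runs,
    // i.e. after the critical CSS and JavaScript above it.
    const link = document.createElement('link');
    link.rel = 'preload';
    link.as = 'font';
    link.type = 'font/woff2';
    link.crossOrigin = 'anonymous'; // required for fonts, even same-origin
    link.href = '/fonts/subset.woff2';
    document.head.appendChild(link);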

    It’s a good idea to be selective and choose files that matter most, e.g. the ones that are critical for rendering or that would help you avoid visible and disruptive text reflows. In general, Zach advises to preload one or two fonts of each family — it also makes sense to delay some font loading if they are less critical.

    It has become quite common to use local() value (which refers to a lo­cal font by name) when defining a font-family in the @font-face rule:

    /* Warning! Probably not a good idea! */
    @font-face {
      font-family: Open Sans;
      src: local('Open Sans Regular'),
           local('OpenSans-Regular'),
           url('opensans.woff2') format('woff2'),
           url('opensans.woff') format('woff');
    }

    The idea is reasonable: some popular open-source fonts such as Open Sans are coming pre-installed with some drivers or apps, so if the font is avail­able lo­cally, the browser does­n’t need to down­load the web font and can dis­play the lo­cal font im­me­di­ately. As Bram Stein noted, "though a lo­cal font matches the name of a web font, it most likely isn't the same font. Many web fonts dif­fer from their "desk­top" ver­sion. The text might be ren­dered dif­fer­ently, some char­ac­ters may fall back to other fonts, Open­Type fea­tures can be miss­ing en­tirely, or the line height may be dif­fer­ent."

    Also, as typefaces evolve over time, the locally installed version might be very different from the web font, with characters looking very different. So, according to Bram, it's better to never mix lo­cally in­stalled fonts and web fonts in @font-face rules.

    Nobody likes waiting for the content to be displayed. With the font-display CSS descriptor, we can control the font loading behavior and enable content to be readable immediately (font-display: optional) or almost immediately (font-display: swap). However, if you want to avoid text reflows, we still need to use the Font Loading API, specifically to group repaints, or when you are using third party hosts. Unless you can use Google Fonts with Cloudflare Workers, of course.
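
    As a sketch (file paths are illustrative), the descriptor sits right in the @font-face rule:

    @font-face {
      font-family: 'Open Sans';
      src: url('/fonts/opensans.woff2') format('woff2'),
           url('/fonts/opensans.woff') format('woff');
      /* Render fallback text immediately, swap the web font in once it loads */
      font-display: swap;
    }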

    Talking about Google Fonts: although the support for font-display was added recently, consider using google-webfonts-helper, a hassle-free way to self-host Google Fonts. Always self-host your fonts for maximum control if you can.

    In general, if you use font-display: optional, it might not be a good idea to also use preload as it will trigger that web font request early (causing network congestion if you have other critical path resources that need to be fetched). Use preconnect for faster cross-origin font requests, but be cautious with preload as preloading fonts from a different origin will incur network contention. All of these techniques are covered in Zach’s Web font loading recipes.

    It might be a good idea to opt out of web fonts (or at least second stage render) if the user has enabled Reduce Motion in accessibility preferences or has opted in for Data Saver Mode (see Save-Data header). Or when the user happens to have slow connectivity (via Network Information API). Eventually, we might also be able to use prefers-reduced-data CSS media query to not define font declarations if the user has opted into data-saving mode. The media query would basically expose if the Save-Data request header from the Client Hint HTTP extension is on/off to allow for usage with CSS. Not quite there yet though.

    To measure the web font loading performance, consider the All Text Visible metric (the moment when all fonts have loaded and all content is displayed in web fonts), Time to Real Italics, as well as Web Font Reflow Count after first render. Obviously, the lower these metrics are, the better the performance is. Note that variable fonts might require significant performance consideration. They give designers a much broader design space for typographic choices, but it comes at the cost of a single serial request, as opposed to a number of individual file requests. That single request might be slow, blocking the rendering of the content on a page. So subsetting and splitting the font into character sets will still matter. On the good side though, with a variable font in place, we’ll get exactly one reflow by default, so no JavaScript will be required to group repaints.

    Now, what would make a bulletproof web font loading strategy then? Subset fonts and prepare them for the 2-stage-render, declare them with a font-display descriptor, use Font Loading API to group repaints and store fonts in a persistent service worker’s cache. On the first visit, inject the preloading of scripts just before the blocking external scripts. You could fall back to Bram Stein’s Font Face Observer if necessary. And if you’re interested in measuring the performance of font loading, Andreas Marschke explores performance tracking with Font API and UserTiming API.

    Finally, don’t forget to include unicode-range to break down a large font into smaller language-specific fonts, and use Monica Dinculescu’s font-style-matcher to minimize a jarring shift in layout, due to sizing discrepancies between the fallback and the web fonts.
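
    For example, a subset declaration scoped to Basic Latin might look like this (the range and path are illustrative):

    @font-face {
      font-family: 'Open Sans';
      src: url('/fonts/opensans-latin.woff2') format('woff2');
      /* The browser downloads this file only if the page
         actually uses characters in this range */
      unicode-range: U+0000-00FF;
    }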

    Does the future look bright? With progressive font enrichment, eventually we might be able to "download only the required part of the font on any given page, and for subsequent requests for that font to dynamically ‘patch’ the original download with additional sets of glyphs as required on successive page views", as Jason Pamental explains it. Incremental Transfer Demo is already available, and it’s work in progress.

Build Optimizations
  1. Set your priorities straight.
    It’s a good idea to know what you are dealing with first. Run an inventory of all of your assets (JavaScript, images, fonts, third-party scripts and "expensive" modules on the page, such as carousels, complex infographics and multimedia content), and break them down in groups.

    Set up a spreadsheet. Define the basic core experience for legacy browsers (i.e. fully accessible core content), the enhanced experience for capable browsers (i.e. the enriched, full experience) and the extras (assets that aren’t absolutely required and can be lazy-loaded, such as web fonts, unnecessary styles, carousel scripts, video players, social media buttons, large images). A while back, we published an article on "Improving Smashing Magazine’s Performance," which describes this approach in detail.

    When optimizing for performance we need to reflect our priorities. Load the core experience immediately, then enhancements, and then the extras.

  2. Use native JavaScript modules in production.
    Remember the good ol' cutting-the-mustard technique to send the core experience to legacy browsers and an enhanced experience to modern browsers? An updated variant of the technique could use ES2015+ <script type="module">, also known as module/nomodule pattern.

    As Philip Walton writes, "the technique uses bundlers and transpilers to generate two versions of your codebase, one with modern syntax (loaded via <script type="module">) and one with ES5 syntax (loaded via <script nomodule>)." Modern browsers would interpret the script as a JavaScript module and run it as expected, while legacy browsers wouldn’t recognize the attribute and ignore it because it’s unknown HTML syntax. We can ship significantly less code to module-supporting browsers, and it’s now supported by most frameworks and CLIs.
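
    In its simplest form, the pattern looks like this (bundle names are illustrative):

    <!-- Modern browsers load the module and ignore the nomodule script -->
    <script type="module" src="/js/app.mjs"></script>
    <!-- Legacy browsers skip type="module" and load the transpiled bundle -->
    <script nomodule src="/js/app.legacy.js"></script>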

    One note of warning though: the module/nomodule pattern can backfire on some clients, so you might want to consider a workaround: Jeremy Wagner's less risky differential serving pattern which, however, sidesteps the preload scanner and could affect performance in ways one might not anticipate. (Thanks, Jeremy!)

    In fact, Rollup supports modules as an output format, so we can both bundle code and deploy modules in production. Parcel has just added module support in Parcel 2. Webpack isn't quite there yet. Also, watch out for Pika, which is looking into simplifying the build process and management of JavaScript modules.

    Note: It’s worth stating that feature detection alone isn’t enough to make an informed decision about the payload to ship to that browser. On its own, we can’t deduce device capability from browser version. For example, cheap Android phones in developing countries mostly run Chrome and will cut the mustard despite their limited memory and CPU capabilities.

    Eventually, using the Device Memory Client Hints Header, we’ll be able to target low-end devices more reliably. At the time of writing, the header is supported only in Blink (that goes for client hints in general). Since Device Memory also has a JavaScript API which is available in Chrome, one option could be to feature detect based on the API, and fall back to the module/nomodule technique if it’s not supported (thanks, Yoav!).

  3. Are you using tree-shaking, scope hoisting and code-splitting?
    Tree-shaking is a way to clean up your build process by only including code that is actually used in production, eliminating unused imports in Webpack. With Webpack and Rollup, we also have scope hoisting that allows both tools to detect where import chaining can be flattened and converted into one inlined function without compromising the code. With Webpack, we can use JSON Tree Shaking as well.

    Also, you might want to consider learning how to avoid bloat and expensive styles. Feeling like going beyond that? You can also use Webpack to shorten the class names and use scope isolation to rename CSS class names dynamically at the compilation time.

    Code-splitting is another Webpack feature that splits your codebase into "chunks" that are loaded on demand. Not all of the JavaScript has to be downloaded, parsed and compiled right away. Once you define split points in your code, Webpack can take care of the dependencies and outputted files. It enables you to keep the initial download small and to load code on demand when the application requests it. Alexander Kondrov has a fantastic introduction to code-splitting with Webpack and React.
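
    As a quick sketch, a split point is just a dynamic import() (the module name and renderChart() are placeholders):

    // chart.js lands in its own chunk and is fetched only
    // when the user actually asks for the dashboard.
    button.addEventListener('click', () => {
      import(/* webpackChunkName: "chart" */ './chart.js')
        .then((module) => module.renderChart())
        .catch((err) => console.error('Chunk failed to load', err));
    });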

    Consider using preload-webpack-plugin that takes routes you code-split and then prompts the browser to preload them using <link rel="preload"> or <link rel="prefetch">. Webpack inline directives also give some control over preload/prefetch. (Watch out for prioritization issues though.)

    Where to define split points? By tracking which chunks of CSS/JavaScript are used, and which aren’t. Umar Hansa explains how you can use Code Coverage from DevTools to achieve it.

    If you aren’t using Webpack, note that Rollup shows significantly better results than good ol' Browserify exports. While we’re at it, you might want to check out rollup-plugin-closure-compiler and Rollupify, which converts ECMAScript 2015 modules into one big CommonJS module — because small modules can have a surprisingly high performance cost depending on your choice of bundler and module system.

    When dealing with single-page applications, we need some time to initialize the app before we can render the page. Your setup will require a custom solution, but you could watch out for modules and techniques to speed up the initial rendering time. For example, here’s how to debug React performance and eliminate common React performance issues, and here’s how to improve performance in Angular. In general, most performance issues come from the initial time to bootstrap the app.

    So, what’s the best way to code-split aggressively, but not too aggressively? According to Phil Walton, "in addition to code-splitting via dynamic imports, [we could] also use code-splitting at the package level, where each imported node module gets put into a chunk based on its package’s name." Phil provides a tutorial on how to build it as well.

  4. Can you offload JavaScript into a Web Worker?
    To reduce the negative impact on Time-to-Interactive, it might be a good idea to look into offloading heavy JavaScript into a Web Worker or caching via a Service Worker.

    As the code base keeps growing, the UI performance bottlenecks will show up, slowing down the user’s experience. That’s because DOM operations are running alongside your JavaScript on the main thread. With web workers, we can move these expensive operations to a background process that’s running on a different thread. Typical use cases for web workers are prefetching data and Progressive Web Apps to load and store some data in advance so that you can use it later when needed. And you could use Comlink to streamline the communication between the main page and the worker. Still some work to do, but we are getting there.

    How to get started? Here are a few resources that are worth looking into:

    Note that Web Workers don’t have access to the DOM because the DOM is not "thread-safe", and the code that they execute needs to be contained in a separate file.
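
    A minimal sketch of the hand-off (the file name, message shape and helper functions are illustrative):

    // main.js: move the expensive work off the main thread
    const worker = new Worker('worker.js');
    worker.postMessage({ items: largeDataSet });
    worker.onmessage = (event) => {
      renderResults(event.data); // back on the main thread, where the DOM lives
    };

    // worker.js: no DOM access here, just computation
    self.onmessage = (event) => {
      const processed = event.data.items.map(expensiveTransform);
      self.postMessage(processed);
    };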

  5. Can you offload "hot paths" to WebAssembly?
    We could offload computationally heavy tasks off to WebAssembly (WASM), a binary instruction format, designed as a portable target for compilation of high-level languages like C/C++/Rust. Its browser support is remarkable, and it has recently become viable as function calls between JavaScript and WASM are getting faster. Plus, it’s even supported on Fastly’s edge cloud.

    Of course, WebAssembly isn’t supposed to replace JavaScript, but it can complement it in cases when you notice CPU hogs. For most web apps, JavaScript is a better fit, and WebAssembly is best used for computationally intensive web apps, such as web games.
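
    Loading a module can be as simple as this sketch (the file name and the exported compute() function are placeholders):

    // Compiles and instantiates straight from the network stream,
    // without buffering the entire .wasm file first.
    WebAssembly.instantiateStreaming(fetch('/hot-path.wasm'), {})
      .then(({ instance }) => {
        console.log(instance.exports.compute(42));
      });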

    If you’d like to learn more about WebAssembly:

Milica Mihajlija provides a general overview of how WebAssembly works and why it’s useful. (Large preview)
  1. Are you using an ahead-of-time compiler?
    Make sure to use an ahead-of-time compiler to offload some of the client-side rendering to the server and, hence, output usable results quickly. Finally, consider using Optimize.js for faster initial loading by wrapping eagerly invoked functions (it might not be necessary any longer, though).
  2. Serve legacy code only to legacy browsers.
    With ES2015 being remarkably well supported in modern browsers, we can use babel-preset-env to only transpile ES2015+ features unsupported by the modern browsers you are targeting. Then set up two builds, one in ES6 and one in ES5. As mentioned above, JavaScript modules are now supported in all major browsers, so use <script type="module"> to let browsers with ES module support load the file, while older browsers could load legacy builds with <script nomodule>. And we can automate the entire process with Webpack ESNext Boilerplate.
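
    For the modern build, a sketch of the Babel configuration might look like this (assuming @babel/preset-env 7.x):

    // babel.config.js for the modern bundle
    module.exports = {
      presets: [
        ['@babel/preset-env', {
          // Only transpile what browsers with native ES module
          // support are missing, which is typically very little
          targets: { esmodules: true }
        }]
      ]
    };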

    Note that these days we can write module-based JavaScript that runs natively in the browser, without transpilers or bundlers. <link rel="modulepreload"> provides a way to initiate early (and high-priority) loading of module scripts. Basically, it’s a nifty way to help in maximizing bandwidth usage, by telling the browser about what it needs to fetch so that it’s not stuck without anything to do during those long roundtrips. Also, Jake Archibald has published a detailed article with gotchas and things to keep in mind with ES Modules that’s worth reading.

    For lodash, use babel-plugin-lodash that will load only modules that you are using in your source. Your dependencies might also depend on other versions of Lodash, so transform generic lodash requires to cherry-picked ones to avoid code duplication. This might save you quite a bit of JavaScript payload.
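
    The difference is easy to see in a sketch:

    // Pulls the entire library into the bundle:
    import _ from 'lodash';
    // Cherry-picked import: only the code for throttle gets bundled
    import throttle from 'lodash/throttle';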

    Shubham Kanodia has written a detailed low-maintenance guide on smart bundling: shipping legacy code only to legacy browsers in production, with code snippets you can use right away.

Jake Archibald has published a detailed article with gotchas and things to keep in mind with ES Modules, e.g. inline scripts are deferred until blocking external scripts and inline scripts are executed. (Large preview)
  1. Are you using module/nomodule pattern for JavaScript?
    We want to send just the necessary JavaScript through the network, yet it means being slightly more focused and granular about the delivery of those assets. A while back Philip Walton introduced the idea of module/nomodule pattern (also introduced by Jeremy Wagner as differential serving). The idea is to compile and serve two separate JavaScript bundles: the “regular” build with Babel transforms and polyfills, served only to legacy browsers that actually need it, and another bundle (with the same functionality) that has no transforms or polyfills.

    As a result, we help reduce blocking of the main thread by reducing the amount of scripts the browser needs to process. Jeremy Wagner has published a comprehensive article on differential serving and how to set it up in your build pipeline, from setting up Babel, to what tweaks you’ll need to make in Webpack, as well as the benefits of doing all this work.

  2. Identify and rewrite legacy code with incremental decoupling.
    Long-living projects have a tendency to gather dust and dated code. Revisit your dependencies and assess how much time would be required to refactor or rewrite legacy code that has been causing trouble lately. Of course, it’s always a big undertaking, but once you know the impact of the legacy code, you could start with incremental decoupling.

    First, set up metrics that track whether the ratio of legacy code calls stays constant or goes down, not up. Publicly discourage the team from using the library and make sure that your CI alerts developers if it’s used in pull requests. Polyfills could help transition from legacy code to a rewritten codebase that uses standard browser features.

  3. Identify and remove unused CSS/JS.
    CSS and JavaScript code coverage in Chrome allows you to learn which code has been executed/applied and which hasn't. You can start recording the coverage, perform actions on a page, and then explore the code coverage results. Once you’ve detected unused code, find those modules and lazy load with import() (see the entire thread). Then repeat the coverage profile and validate that it’s now shipping less code on initial load.

    You can use Puppeteer to programmatically collect code coverage and Canary already allows you to export code coverage results, too. As Andy Davies noted, you might want to collect code coverage for both modern and legacy browsers though.

    There are many other use-cases for Puppeteer, such as automatic visual diffing or monitoring unused CSS with every build. If you’re looking for a detailed guide to Puppeteer, Nitay Neeman has a very comprehensive overview of Puppeteer, with examples and use cases.

    Furthermore, purgecss, UnCSS and Helium can help you remove unused styles from CSS. And if you aren’t certain if a suspicious piece of code is used somewhere, you can follow Harry Roberts' advice: create a 1×1px transparent GIF for a particular class and drop it into a dead/ directory, e.g. /assets/img/dead/comments.gif. After that, you set that specific image as a background on the corresponding selector in your CSS, sit back and wait for a few months to see if the file appears in your logs. If there are no entries, nobody had that legacy component rendered on their screen: you can probably go ahead and delete it all.

    For the I-feel-adventurous-department, you could even automate the gathering of unused CSS through a set of pages by monitoring DevTools using DevTools.

  4. Trim the size of your JavaScript bundles.
    As Addy Osmani noted, there’s a high chance you’re shipping full JavaScript libraries when you only need a fraction, along with dated polyfills for browsers that don’t need them, or just duplicate code. To avoid the overhead, consider using webpack-libs-optimizations that removes unused methods and polyfills during the build process.

    Add bundle auditing into your regular workflow as well. There might be some lightweight alternatives to heavy libraries you’ve added years ago, e.g. Moment.js could be replaced with native Internationalization API, date-fns or Luxon. Benedikt Rötsch’s research showed that a switch from Moment.js to date-fns could shave around 300ms for First paint on 3G and a low-end mobile phone.
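
    For example, simple date formatting may not need a library at all; a sketch with the built-in Internationalization API:

    // Zero library payload: the browser formats the date itself
    const formatted = new Intl.DateTimeFormat('en-GB', {
      year: 'numeric', month: 'long', day: 'numeric'
    }).format(new Date());
    // e.g. "24 January 2020"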

    That’s where tools like Bundlephobia could help find the cost of adding an npm package to your bundle. size-limit extends the basic bundle size check with details on JavaScript execution time. You can even integrate these costs with a Lighthouse Custom Audit. This goes for frameworks, too. By removing or trimming the Vue MDC Adapter (Material Components for Vue), styles drop from 194KB to 10KB.

    Feeling adventurous? You could look into Prepack. It compiles JavaScript down to equivalent code, but unlike Babel or Uglify, it lets you write normal JavaScript code and outputs equivalent JavaScript that runs faster.

    As an alternative to shipping the entire framework, you could even trim your framework and compile it into a raw JavaScript bundle that does not require additional code. Svelte does it, and so does the Rawact Babel plugin which transpiles React.js components to native DOM operations at build-time. Why? Well, as the maintainers explain, "react-dom includes code for every possible component/HTMLElement that can be rendered, including code for incremental rendering, scheduling, event handling, etc. But there are applications that do not need all these features (at initial page load). For such applications, it might make sense to use native DOM operations to build the interactive user interface."

size-limit provides a basic bundle size check with details on JavaScript execution time as well. (Large preview) In his article, Benedikt Rötsch showed that a switch from Moment.js to date-fns could shave around 300ms for First paint on 3G and a low-end mobile phone. (Large preview)
  1. Are you using predictive prefetching for JavaScript chunks?
    We could use heuristics to decide when to preload JavaScript chunks. Guess.js is a set of tools and libraries that use Google Analytics data to determine which page a user is most likely to visit next from a given page. Based on user navigation patterns collected from Google Analytics or other sources, Guess.js builds a machine-learning model to predict and prefetch JavaScript that will be required on each subsequent page.

    Hence, every interactive element is receiving a probability score for engagement, and based on that score, a client-side script decides to prefetch a resource ahead of time. You can integrate the technique into your Next.js application, Angular and React, and there is a Webpack plugin which automates the setup process as well.

    Obviously, you might be prompting the browser to consume unneeded data and prefetch undesirable pages, so it’s a good idea to be quite conservative in the number of prefetched requests. A good use case would be prefetching validation scripts required in the checkout, or speculative prefetch when a critical call-to-action comes into the viewport.

    Need something less sophisticated? DNStradamus does DNS prefetching for outbound links as they appear in the viewport. Quicklink and Instant.page are small libraries that automatically prefetch links in the viewport during idle time in an attempt to make next-page navigations load faster. Quicklink is data-considerate, so it doesn’t prefetch on 2G or if Data-Saver is on, and so is Instant.page if the mode is set to use viewport prefetching (which is a default).
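
    Wiring up Quicklink could look like this sketch (assuming version 2.x of the library; the ignore pattern is illustrative):

    import { listen } from 'quicklink';
    // Prefetch in-viewport links during idle time;
    // Quicklink already backs off on 2G and when Save-Data is on.
    window.addEventListener('load', () => {
      listen({ ignores: [/\/logout/] });
    });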

  2. Take advantage of optimizations for your target JavaScript engine.
    Study what JavaScript engines dominate in your user base, then explore ways of optimizing for them. For example, when optimizing for V8 which is used in Blink-browsers, Node.js runtime and Electron, make use of script streaming for monolithic scripts.

    Script streaming allows async or defer scripts to be parsed on a separate background thread once downloading begins, hence in some cases improving page loading times by up to 10%. Practically, use <script defer> in the <head>, so that the browsers can discover the resource early and then parse it on the background thread.

    Caveat: Opera Mini doesn’t support script deferment, so if you are developing for India or Africa, defer will be ignored, resulting in blocking rendering until the script has been evaluated (thanks Jeremy!).

    You could hook into V8’s code caching as well, by splitting out libraries from code using them, or the other way around, merge libraries and their uses into a single script, group small files together and avoid inline scripts. Or perhaps even use v8-compile-cache.

    Firefox’s recently released Baseline Interpreter has sped up Firefox, and there are a few JIT Optimization Strategies available as well.

Progressive booting means using server-side rendering to get a quick first meaningful paint, but also include some minimal JavaScript to keep the time-to-interactive close to the first meaningful paint.
  1. Client-side rendering or server-side rendering? Both!
    That's quite a heated conversation to have. Ultimately, the decision has to be driven by the performance of the application. The ultimate approach would be to set up some sort of progressive booting: Use server-side rendering to get a quick first meaningful paint, but also include some minimal necessary JavaScript to keep the time-to-interactive close to the first meaningful paint. If JavaScript is coming too late after the First Meaningful Paint, the browser will lock up the main thread while parsing, compiling and executing late-discovered JavaScript, hence handcuffing the interactivity of the site or application.

    To avoid it, always break up the execution of functions into separate, asynchronous tasks, and where possible use requestIdleCallback. Consider lazy loading parts of the UI using WebPack’s dynamic import() support, avoiding the load, parse, and compile cost until users really need them (thanks Addy!).

    In its essence, Time to Interactive (TTI) tells us the time between navigation and interactivity. The metric is defined by looking at the first five-second window after the initial content is rendered, in which no JavaScript tasks take longer than 50ms. If a task over 50ms occurs, the search for a five-second window starts over. As a result, the browser will first assume that it reached Interactive, just to switch to Frozen, just to eventually switch back to Interactive.

    Once we reached Interactive, we can then — either on demand or as time allows — boot non-essential parts of the app. Unfortunately, as Paul Lewis noticed, frameworks typically have no simple concept of priority that can be surfaced to developers, and hence progressive booting isn't easy to implement with most libraries and frameworks.

    Still, we are getting there. These days there are a couple of choices we can explore, and Houssein Djirdeh and Jason Miller provide an excellent overview of these options in their talk on Rendering on the Web. The overview below is based on their talk.

    • Full Server-Side Rendering (SSR)
      In classic SSR, such as WordPress, all requests are handled entirely on the server. The requested content is returned as a finished HTML page and browsers can render it right away. Hence, SSR-apps can't really make use of the DOM APIs, for example. The gap between First Contentful Paint and Time to Interactive is usually small, and the page can be rendered right away as HTML is being streamed to the browser. However, we end up with longer server think time and consequently a longer Time To First Byte, and we don't make use of the responsive and rich features of modern applications.

    • Static SSR (SSR)
      We build out the product as a single page application, but all pages are prerendered to static HTML with minimal JavaScript as a build step. Thus, we can display a landing page quickly and then prefetch a SPA-framework for subsequent pages. Netflix has adopted this approach, decreasing loading time and Time-to-Interactive by 50%.

    • Server-Side Rendering With (Re)Hydration (SSR + CSR)
      With hydration in the mix, the HTML page returned from the server also contains a script that loads a fully-fledged client-side application.

      With React, we can use ReactDOMServer module on a Node server like Express, and then call the renderToString method to render the top level components as a static HTML string. With Vue, we can use the vue-server-renderer to render a Vue instance into HTML using renderToString. In Angular, we can use @nguniversal to turn client requests into fully server-rendered HTML pages.

      A fully server-rendered experience can also be achieved out of the box with Next.js (React) or Nuxt.js (Vue).

      The approach has its downsides: we do gain full flexibility of client-side apps while providing faster server-side rendering, but we also end up with a longer gap between First Meaningful Paint and Time To Interactive, and increased First Input Delay. Rehydration is very expensive, and usually this strategy alone will not be good enough as it heavily delays Time To Interactive.
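
      A minimal sketch of the React flavor with Express (the component and bundle names are illustrative, and the server file is assumed to be transpiled for JSX):

      import express from 'express';
      import React from 'react';
      import { renderToString } from 'react-dom/server';
      import App from './App';

      const server = express();
      server.get('*', (req, res) => {
        const html = renderToString(<App />);
        // The client bundle hydrates the markup inside #root
        res.send(`<!DOCTYPE html><div id="root">${html}</div>
          <script src="/client.js"></script>`);
      });
      server.listen(3000);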

    • Streaming Server-Side Rendering With Progressive Hydration (SSR + CSR)
      To minimize the gap between Time To Interactive and First Contentful Paint, we render multiple requests at once and send down content in chunks as they get generated. So we don't have to wait for the full string of HTML before sending content to the browser, and hence improve Time To First Byte.

      In React, instead of renderToString, we can use renderToNodeStream to pipe the response and send the HTML down in chunks. In Vue, we can use renderToStream that can be piped and streamed. With React Suspense on the horizon, we might use asynchronous rendering for that purpose, too.

      On the client-side, rather than booting the entire application at once, we boot up components progressively. Sections of the applications are first broken down into standalone scripts with code splitting, and then hydrated gradually (in order of our priorities). In fact, we can hydrate critical components first, while the rest could be hydrated later. The role of client-side and server-side rendering can then be defined differently per component. We can then also defer hydration of some components until they come into view, or are needed for user interaction, or when the browser is idle.

      For Vue, Markus Oberlehner has published a guide on reducing Time To Interactive of SSR apps using hydration on user interaction as well as vue-lazy-hydration, an early-stage plugin that enables component hydration on visibility or specific user interaction. The Angular team works on progressive hydration with Ivy Universal. You can implement partial hydration with Preact and Next.js, too.

      For React, partial hydration is on the Suspense roadmap (and it looks promising!). If you feel adventurous, Jason Miller has published working demos on how progressive hydration could be implemented with React, so you can use them right away: demo 1, demo 2, demo 3 (also available on GitHub). Plus, you can look into the react-prerendered-component library.

    • Trisomorphic Rendering
      With service workers in place, we can use streaming server rendering for initial/non-JS navigations, and then have the service worker taking on rendering of HTML for navigations after it has been installed. In that case, service worker prerenders content and enables SPA-style navigations for rendering new views in the same session. Works well when you can share the same templating and routing code between the server, client page, and service worker.

    Trisomorphic rendering, with the same code rendering in any 3 places: on the server, in the DOM or in a service worker. (Image source: Google Developers) (Large preview)
    • CSR With Prerendering
      Prerendering is similar to server-side rendering but rather than rendering pages on the server dynamically, we render the application to static HTML at build time.

      Gatsby, an open source static site generator that uses React, uses the renderToStaticMarkup method instead of the renderToString method during builds, with the main JS chunk being preloaded and future routes prefetched, without DOM attributes that aren't needed for simple static pages. For Vue, we can use Vuepress to achieve the same goal. You can also use prerender-loader with Webpack.

      The result is a better Time To First Byte and First Contentful Paint, and we reduce the gap between Time To Interactive and First Contentful Paint. We can't use the approach if the content is expected to change much. Plus, all URLs have to be known ahead of time to generate all the pages. So some components might be rendered using prerendering, but if we need something dynamic, we have to rely on the app to fetch the content.

    • Full Client-Side Rendering (CSR)
      All logic, rendering and booting are done on the client. The result is usually a huge gap between Time-To-Interactive and First Contentful Paint. As a result, applications feel sluggish as the entire app has to be booted on the client to render anything. In general, SSR is faster than CSR. Yet it's still the most frequent implementation for many apps out there.

    So, client-side or server-side? In general, it's a good idea to limit the use of fully client-side frameworks to pages that absolutely require them. For advanced applications, it's not a good idea to rely on server-side rendering alone either. Both server-rendering and client-rendering are a disaster if done poorly.

    Whether you are leaning towards CSR or SSR, make sure that you are rendering important pixels as soon as possible and minimize the gap between that rendering and Time To Interactive. Consider prerendering if your pages don't change much, and defer the booting of frameworks if you can. Stream HTML in chunks with server-side rendering, and implement progressive hydration for client-side rendering — and hydrate on visibility, interaction or during idle time to get the best of both worlds.

The spectrum of options for client-side versus server-side rendering. Also, check Jason’s and Houssein’s talk at Google I/O on Performance Implications of Application Architecture. (Image source: Jason Miller) (Large preview) AirBnB has been experimenting with progressive hydration; they defer unneeded components, load on user interaction (scroll) or during idle time, and tests show that it can improve TTI. (Large preview)
  1. Always prefer to self-host third-party assets.
    In general, it’s a good rule of thumb to self-host your static assets by default. It's common to assume that if many sites use the same public CDN and the same version of a JavaScript library or a web font, then the visitors would land on our site with the scripts and fonts already cached in their browser, speeding up their experience considerably. However, it's very unlikely to happen.

    For security reasons, to avoid fingerprinting, browsers have been implementing partitioned caching that was introduced in Safari back in 2013, and in Chrome last year. So if two sites point to the exact same third-party resource URL, the code is downloaded once per domain, and the cache is "sandboxed" to that domain due to privacy implications (thanks, David Calhoun!). Hence, using a public CDN will not automatically lead to better performance.

    Furthermore, it's worth noting that resources don't live in the browser's cache as long as we might expect, and first-party assets are more likely to stay in the cache than third-party assets. Therefore, self-hosting is usually more reliable and secure, and better for performance, too.

  2. Constrain the impact of third-party scripts.
    With all performance optimizations in place, often we can’t control third-party scripts coming from business requirements. Third-party scripts often aren’t influenced by end-user experience, so too often one single script ends up calling a long tail of obnoxious third-party scripts, hence ruining a dedicated performance effort. To contain and mitigate performance penalties that these scripts bring along, it’s not enough to just load them asynchronously (probably via defer) and accelerate them via resource hints such as dns-prefetch or preconnect.

    57% of all JavaScript code execution time is spent on third-party code, so regularly auditing your dependencies and tag managers is important.

    As Yoav Weiss explained in his must-watch talk on third-party scripts, in many cases these scripts download resources that are dynamic. The resources change between page loads, so we don’t necessarily know which hosts the resources will be downloaded from and what resources they would be.

    What options do we have then? Consider using service workers by racing the resource download with a timeout, and if the resource hasn’t responded within a certain timeout, return an empty response to tell the browser to carry on with parsing of the page. You can also log or block third-party requests that aren’t successful or don’t fulfill certain criteria. If you can, load the third-party script from your own server rather than from the vendor’s server and lazy load it. E.g. Zendesk has avoided a 2.3 MB chat widget on page load by creating a fake chat button which downloads the script only on click, because the majority of users don’t engage.

    Another option is to establish a Content Security Policy (CSP) to restrict the impact of third-party scripts, e.g. disallowing the download of audio or video. The best option is to embed scripts via <iframe> so that the scripts are running in the context of the iframe and hence don’t have access to the DOM of the page, and can’t run arbitrary code on your domain. Iframes can be further constrained using the sandbox attribute, so you can disable any functionality that the iframe may do, e.g. prevent scripts from running, prevent alerts, form submission, plugins, access to the top navigation, and so on.

    For example, it’s probably going to be necessary to allow scripts to run with <iframe sandbox="allow-scripts">. Each of the limitations can be lifted via various allow values on the sandbox attribute (supported almost everywhere), so constrain them to the bare minimum of what they should be allowed to do.
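
    A sketch of such a constrained embed (the widget URL is a placeholder):

    <!-- The embedded script can run, but it can't navigate the top page,
         submit forms, open popups or reach into our DOM -->
    <iframe src="https://widget.example.com/embed"
            sandbox="allow-scripts"
            title="Third-party widget"></iframe>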

    Consider using Intersection Observer; that would enable ads to be iframed while still dispatching events or getting the information that they need from the DOM (e.g. ad visibility). Watch out for new policies such as Feature policy, resource size limits and CPU/Bandwidth priority to limit harmful web features and scripts that would slow down the browser, e.g. synchronous scripts, synchronous XHR requests, document.write and outdated implementations.

    To stress-test third parties, examine bottom-up summaries in the Performance panel in DevTools, and test what happens if a request is blocked or has timed out — for the latter, you can use WebPageTest’s Blackhole server blackhole.webpagetest.org that you can point specific domains to in your hosts file. Preferably self-host and use a single hostname, but also generate a request map that exposes fourth-party calls and detect when the scripts change. You can use Harry Roberts' approach for auditing third parties and produce spreadsheets like this one. Harry also explains the auditing workflow in his talk on third-party performance and auditing.

    Have to deal with almighty Google Tag Manager? Barry Pollard provides some guidelines to contain the impact of Google Tag Manager. Also, Christian Schaefer explores strategies for loading ads in 2020.

Casper.com published a detailed case study on how they managed to shave 1.7 seconds off the site by self-hosting Optimizely. It might be worth it. (Image source) (Large preview)
  1. Set HTTP cache headers properly.
    Double-check that expires, max-age, cache-control, and other HTTP cache headers have been set properly. In general, resources should be cacheable either for a very short time (if they are likely to change) or indefinitely (if they are static) — you can just change their version in the URL when needed.

    Use Cache-control: immutable, designed for fingerprinted static resources, to avoid revalidation (supported in Firefox, Edge and Safari). In fact, according to Web Almanac, "its usage has grown to 3.4%, and it’s widely used in Facebook and Google third-party responses."

    Remember the stale-while-revalidate? As you probably know, we specify the caching time with the Cache-Control response header, e.g. Cache-Control: max-age=604800. After 604800 seconds have passed, the cache will re-fetch the requested content, causing the page to load slower. This slowdown can be avoided by using stale-while-revalidate; it basically defines an extra window of time during which a cache can use a stale asset as long as it revalidates it async in the background. Thus, it "hides" latency (both in the network and on the server) from clients.
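
    As a sketch, the two common cases might look like this (the values are illustrative):

    Cache-Control: max-age=31536000, immutable
    (a fingerprinted static asset: cached for a year, never revalidated)

    Cache-Control: max-age=604800, stale-while-revalidate=86400
    (fresh for a week, then served stale for up to a day while revalidating in the background)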

    In June–July 2019, Chrome and Firefox launched support of stale-while-revalidate in HTTP Cache-Control header, so as a result, it should improve subsequent page load latencies as stale assets are no longer in the critical path. Result: zero RTT for repeat views.

    You can use Heroku’s primer on HTTP caching headers, Jake Archibald’s "Caching Best Practices" and Ilya Grigorik’s HTTP caching primer as guides. Also, be wary of the vary header, especially in relation to CDNs, and watch out for the Key header which helps avoiding an additional round trip for validation whenever a new request differs slightly (but not significantly) from prior requests (thanks, Guy!).

    Also, double-check that you aren’t sending unnecessary headers (e.g. x-powered-by, pragma, x-ua-compatible, expires and others) and that you include useful security and performance headers (such as Content-Security-Policy, X-XSS-Protection, X-Content-Type-Options and others). Finally, keep in mind the performance cost of CORS requests in single-page applications.

Delivery Optimizations
  1. Do you load all JavaScript libraries asynchronously?
    When the user requests a page, the browser fetches the HTML and constructs the DOM, then fetches the CSS and constructs the CSSOM, and then generates a rendering tree by matching the DOM and CSSOM. If any JavaScript needs to be resolved, the browser won’t start rendering the page until it’s resolved, thus delaying rendering. As developers, we have to explicitly tell the browser not to wait and to start rendering the page. The way to do this for scripts is with the defer and async attributes in HTML.

    In practice, it turns out we should prefer defer to async (at a cost to users of Internet Explorer up to and including version 9, because you’re likely to break scripts for them). According to Steve Souders, once async scripts arrive, they are executed immediately. If that happens very fast, for example when the script is in the cache already, it can actually block the HTML parser. With defer, the browser doesn’t execute scripts until the HTML is parsed. So, unless you need JavaScript to execute before the start of rendering, it’s better to use defer.
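
    In markup, the difference is a single attribute (file names are illustrative):

    <!-- Downloads in parallel, executes in document order once parsing is done -->
    <script defer src="/js/app.js"></script>
    <!-- Downloads in parallel, executes as soon as it arrives,
         potentially interrupting the parser -->
    <script async src="/js/analytics.js"></script>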

    Also, as mentioned above, limit the impact of third-party libraries and scripts, especially with social sharing buttons and <iframe> embeds (such as maps). Size Limit helps you prevent JavaScript libraries bloat: If you accidentally add a large dependency, the tool will inform you and throw an error. You can use static social sharing buttons (such as by SSBG) and static links to interactive maps instead.

    You might want to revise your non-blocking script loader for CSP compliance.

  2. Lazy load expensive components with IntersectionObserver and priority hints.
    In general, it’s a good idea to lazy-load all expensive components, such as heavy JavaScript, videos, iframes, widgets, and potentially images. Native lazy-loading is already available for images and iframes, and we can use the importance attribute (high or low) on a <script>, <img>, or <link> element (Blink only). In fact, it’s a great way to deprioritize images in carousels, as well as re-prioritize scripts. However, sometimes we might need a bit more granular control.

    The most performant way to lazy load scripts is by using the Intersection Observer API that provides a way to asynchronously observe changes in the intersection of a target element with an ancestor element or with a top-level document’s viewport. Basically, you need to create a new IntersectionObserver object, which receives a callback function and a set of options. Then we add a target to observe.

    The callback function executes when the target becomes visible or invisible, so when it intersects the viewport, you can start taking some actions before the element becomes visible. In fact, we have granular control over when the observer’s callback should be invoked, with rootMargin (margin around the root) and threshold (a single number or an array of numbers which indicate at what percentage of the target’s visibility we are aiming).
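
    A minimal sketch for lazy-loading images (the data-src convention is just one common pattern):

    const observer = new IntersectionObserver((entries, obs) => {
      entries.forEach((entry) => {
        if (!entry.isIntersecting) return;
        const img = entry.target;
        img.src = img.dataset.src; // kick off the actual download
        obs.unobserve(img);        // each image only needs to load once
      });
    }, {
      rootMargin: '200px 0px', // start loading 200px before the viewport
      threshold: 0
    });

    document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));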

    Alejandro Garcia Anglada has published a handy tutorial on how to actually implement it, Rahul Nanwani wrote a detailed post on lazy-loading foreground and background images, and Google Fundamentals provide a detailed tutorial on lazy loading images and video with Intersection Observer as well. Remember art-directed storytelling long reads with moving and sticky objects? You can implement performant scrollytelling with Intersection Observer, too.

    Also, watch out for the Feature policy: LazyLoad will provide a mechanism that allows us to force opting in or out of LazyLoad functionality on a per-domain basis (similar to how Content Security Policies work).

    Check again what else you could lazy load. Even lazy-loading translation strings and emoji could help. By doing so, Mobile Twitter managed to achieve 80% faster JavaScript execution from the new internationalization pipeline.

By lazy-loading translation strings, Mobile Twitter managed to achieve 80% faster JavaScript execution from the new internationalization pipeline. (Image credit: Addy Osmani) (Large preview)
  1. Load images progressively.
    You could even take lazy loading to the next level by adding progressive image loading to your pages. Similarly to Facebook, Pinterest and Medium, you could load low quality or even blurry images first, and then as the page continues to load, replace them with the full quality versions by using the LQIP (Low Quality Image Placeholders) technique proposed by Guy Podjarny.

    Opinions differ if these techniques improve user experience or not, but it definitely improves time to first meaningful paint. We can even automate it by using SQIP that creates a low quality version of an image as an SVG placeholder, or Gradient Image Placeholders with CSS linear gradients. These placeholders could be embedded within HTML as they naturally compress well with text compression methods. In his article, Dean Hume has described how this technique can be implemented using Intersection Observer.

    Browser support? Decent, with Chrome, Firefox, Edge and Samsung Internet being on board, while WebKit support is currently in preview. Fallback? If the browser doesn’t support Intersection Observer, we can still lazy load a polyfill or load the images immediately. And there is even a library for it.

    Want to go fancier? You could trace your images and use primitive shapes and edges to create a lightweight SVG placeholder, load it first, and then transition from the placeholder vector image to the (loaded) bitmap image.

SVG lazy loading technique by José M. Pérez. (Large preview)
  2. Do you send critical CSS?
    To ensure that browsers start rendering your page as quickly as possible, it’s become a common practice to collect all of the CSS required to start rendering the first visible portion of the page (known as "critical CSS" or "above-the-fold CSS") and add it inline in the <head> of the page, thus reducing roundtrips. Due to the limited size of packages exchanged during the slow start phase, your budget for critical CSS is around 14KB. (This specific limitation doesn't apply with TCP BBR in place although it's still important to prioritize critical resources and load them as early as possible).

    If you go beyond that, the browser will need additional roundtrips to fetch more styles. CriticalCSS and Critical enable you to discover critical CSS. You might need to do it for every template you’re using.

    You can then inline critical CSS and lazy-load the rest with critters Webpack plugin. If possible, consider using the conditional inlining approach used by the Filament Group, or convert inline code to static assets on the fly.

    You could also load your full CSS asynchronously with libraries such as loadCSS, but it’s not really necessary: with media="print" on the link, you can trick the browser into fetching the CSS asynchronously but applying it to the screen environment once it loads. (Thanks, Scott!)
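
    The pattern looks like this (the file name is illustrative):

    <!-- Fetched at low priority because "print" doesn't match the screen;
         once loaded, it's switched over to all media -->
    <link rel="stylesheet" href="/css/full.css" media="print" onload="this.media='all'">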

    With HTTP/2, critical CSS could be stored in a separate CSS file and delivered via a server push without bloating the HTML. The catch is that server pushing is troublesome with many gotchas and race conditions across browsers. It isn’t supported consistently and has some caching issues (see slide 114 onwards of Hooman Beheshti’s presentation). The effect could, in fact, be negative and bloat the network buffers, preventing genuine frames in the document from being delivered. Also, it appears that server pushing is much more effective on warm connections due to the TCP slow start.

    Even with HTTP/1, putting critical CSS (and other important assets) in a separate file on the root domain has benefits, sometimes even more than inlining due to caching. Chrome speculatively opens a second HTTP connection to the root domain when requesting the page, which removes the need for a TCP connection to fetch this CSS (thanks, Philip!)

    A few gotchas to keep in mind: unlike preload that can trigger preload from any domain, you can only push resources from your own domain or domains you are authoritative for. It can be initiated as soon as the server gets the very first request from the client. Server-pushed resources land in the Push cache and are removed when the connection is terminated. However, since an HTTP/2 connection can be re-used across multiple tabs, pushed resources can be claimed by requests from other tabs as well. We can't depend on it though, especially in Safari and Edge (thanks, Inian, Barry!).

    At the moment, there is no simple way for the server to know if pushed resources are already in one of the user’s caches, so resources will keep being pushed with every user’s visit. You may then need to create a cache-aware HTTP/2 server push mechanism. If fetched, you could try to get them from a cache based on the index of what’s already in the cache, avoiding secondary server pushes altogether.

    For a while, the cache-digest specification was considered to help negate the need to manually build such "cache-aware" servers, basically declaring a new frame type in HTTP/2 to communicate what’s already in the cache for that hostname. However, cache-digest spec was abandoned, so the solution isn't quite in sight yet. (thanks, Barry!).

    For dynamic content, when a server needs some time to generate a response, the browser isn’t able to make any requests since it’s not aware of any sub-resources that the page might reference. For that case, we can warm up the connection and increase the TCP congestion window size, so that future requests can be completed faster. Also, all inlined assets are usually good candidates for server pushing. In fact, Inian Parameshwaran did remarkable research comparing HTTP/2 Push vs. HTTP Preload, and it’s a fantastic read with all the details you might need.

    Bottom line: As Sam Saccone noted, preload is good for moving the start download time of an asset closer to the initial request, while Server Push is good for cutting out a full RTT (or more, depending on your server think time) — if you have a service worker to prevent unnecessary pushing, that is.

  3. Experiment with regrouping your CSS rules.
    We’ve got used to critical CSS, but there are a few optimizations that could go beyond that. Harry Roberts conducted remarkable research with quite surprising results. For example, it might be a good idea to split the main CSS file out into its individual media queries. That way, the browser will retrieve critical CSS with high priority, and everything else with low priority — completely off the critical path.

    Also, avoid placing <link rel="stylesheet" /> before async snippets. If scripts don’t depend on stylesheets, consider placing blocking scripts above blocking styles. If they do, split that JavaScript in two and load it either side of your CSS.

    Scott Jehl solved another interesting problem by caching an inlined CSS file with a service worker, a problem familiar if you’re using critical CSS. Basically, we add an ID attribute onto the style element so that it’s easy to find using JavaScript, then a small piece of JavaScript finds that CSS and uses the Cache API to store it in a local browser cache (with a content type of text/css) for use on subsequent pages. To avoid inlining on subsequent pages and instead reference the cached assets externally, we then set a cookie on the first visit to a site. Voilà!

    It's worth noting that dynamic styling can be expensive, too, but usually only in cases when you rely on hundreds of concurrently rendered composed components. So if you're using CSS-in-JS, make sure that your CSS-in-JS library optimizes the execution when your CSS has no dependencies on theme or props, and don't over-compose styled components. Aggelos Arvanitakis shares more insights into performance costs of CSS-in-JS.

Do we stream responses? With streaming, HTML rendered during the initial navigation request can take full advantage of the browser’s streaming HTML parser.
  1. Do you stream responses?
    Often forgotten and neglected, streams provide an interface for reading or writing asynchronous chunks of data, only a subset of which might be available in memory at any given time. Basically, they allow the page that made the original request to start working with the response as soon as the first chunk of data is available, and use parsers that are optimized for streaming to progressively display the content.

    We could create one stream from multiple sources. For example, instead of serving an empty UI shell and letting JavaScript populate it, you can let the service worker construct a stream where the shell comes from a cache, but the body comes from the network. As Jeff Posnick noted, if your web app is powered by a CMS that server-renders HTML by stitching together partial templates, that model translates directly into using streaming responses, with the templating logic replicated in the service worker instead of your server. Jake Archibald’s The Year of Web Streams article highlights how exactly you could build it. Performance boost is quite noticeable.
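
    A hedged sketch of that idea (URLs are placeholders; the shell partial is assumed to be precached and the body to be served as an HTML fragment):

    self.addEventListener('fetch', (event) => {
      if (event.request.mode !== 'navigate') return;
      const stream = new ReadableStream({
        async start(controller) {
          const push = async (response) => {
            const reader = response.body.getReader();
            for (;;) {
              const { done, value } = await reader.read();
              if (done) break;
              controller.enqueue(value);
            }
          };
          // The shell arrives instantly from the cache, the body from the network
          await push(await caches.match('/shell-start.html'));
          await push(await fetch('/article-body.html'));
          controller.close();
        }
      });
      event.respondWith(new Response(stream, {
        headers: { 'Content-Type': 'text/html; charset=utf-8' }
      }));
    });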

    One important advantage of streaming the entire HTML response is that HTML rendered during the initial navigation request can take full advantage of the browser’s streaming HTML parser. Chunks of HTML that are inserted into a document after the page has loaded (as is common with content populated via JavaScript) can’t take advantage of this optimization.

    Browser support? Getting there, with partial support for the API in Chrome, Firefox, Safari and Edge, and Service Workers being supported in all modern browsers.

  2. Consider making your components connection-aware.
    Data can be expensive and with growing payload, we need to respect users who choose to opt into data savings while accessing our sites or apps. The Save-Data client hint request header allows us to customize the application and the payload to cost- and performance-constrained users. In fact, you could rewrite requests for high DPI images to low DPI images, remove web fonts, fancy parallax effects, preview thumbnails and infinite scroll, turn off video autoplay, server pushes, reduce the number of displayed items and downgrade image quality, or even change how you deliver markup. Tim Vereecke has published a very detailed article on data-s(h)aver strategies featuring many options for data saving.

    The header is currently supported only in Chromium, on the Android version of Chrome or via the Data Saver extension on a desktop device. Finally, you can also use the Network Information API to deliver low/high resolution images and videos based on the network type. Network Information API and specifically navigator.connection.effectiveType use RTT, downlink, effectiveType values (and a few others) to provide a representation of the connection and the data that users can handle.

    In this context, Max Stoiber speaks of connection-aware components and Addy Osmani speaks of adaptive module serving. For example, with React, we could write a component that renders differently for different connection types. As Max suggested, a <Media /> component in a news article might output (a rough sketch follows the list below):

    • Offline: a placeholder with alt text,
    • 2G / save-data mode: a low-resolution image,
    • 3G on non-Retina screen: a mid-resolution image,
    • 3G on Retina screens: high-res Retina image,
    • 4G: an HD video.
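
    A rough sketch of such a component, assuming the Network Information API and the Save-Data hint (this is not Max's exact implementation; prop names and thresholds are mine):

    function effectiveConnection() {
      // Fall back to '4g' in browsers without the Network Information API.
      const connection = navigator.connection || {};
      return { type: connection.effectiveType || '4g', saveData: !!connection.saveData };
    }

    function Media({ altText, lowResSrc, midResSrc, highResSrc, videoSrc }) {
      const { type, saveData } = effectiveConnection();
      if (!navigator.onLine) return <img alt={altText} />;  // offline: placeholder with alt text
      if (saveData || type === 'slow-2g' || type === '2g') {
        return <img src={lowResSrc} alt={altText} />;       // low-resolution image
      }
      if (type === '3g') {
        // Serve the Retina-quality image only to high-DPI screens.
        return <img src={window.devicePixelRatio > 1 ? highResSrc : midResSrc} alt={altText} />;
      }
      return <video src={videoSrc} controls />;             // 4g: video
    }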

    Dean Hume provides a practical implementation of similar logic using a service worker. For a video, we could display a video poster by default, and then display the "Play" icon, the video player shell and the video's meta-data on better connections. As a fallback for non-supporting browsers, we could listen to the canplaythrough event and use Promise.race() to time out the source loading if the canplaythrough event doesn’t fire within 2 seconds.

  3. Consider making your components device memory-aware.
    Network connection gives us only one perspective on the context of the user though. Going further, you could also dynamically adjust resources based on available device memory, with the Device Memory API. navigator.deviceMemory returns how much RAM the device has in gigabytes, rounded down to the nearest power of two. The API also features a Client Hints header, Device-Memory, that reports the same value.
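
    A quick sketch of how that could look in practice; the module paths are made up:

    // navigator.deviceMemory reports RAM in GB, rounded to a power of two.
    const memory = navigator.deviceMemory || 4; // assume a mid-range device if unsupported
    if (memory <= 1) {
      import('./components/gallery-lite.js');   // lightweight fallback component
    } else {
      import('./components/gallery-full.js');   // full experience
    }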

    Bonus: Umar Hansa shows how to defer expensive scripts with dynamic imports to change the experience based on device memory, network connectivity and hardware concurrency.

The 'Priority' column in DevTools. Image credit: Ben Schwarz, The Critical Request
  1. Warm up the connection to speed up delivery.
    Use resource hints to save time on dns-prefetch (which performs a DNS lookup in the background), preconnect (which asks the browser to start the connection handshake (DNS, TCP, TLS) in the background), prefetch (which asks the browser to request a resource) and preload (which prefetches resources without executing them, among other things).
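
    In markup, the four hints look like this (the hosts and file names below are placeholders):

    <link rel="dns-prefetch" href="https://fonts.example.com">
    <link rel="preconnect" href="https://fonts.example.com" crossorigin>
    <link rel="prefetch" href="/js/next-page.js" as="script">
    <link rel="preload" href="/fonts/main.woff2" as="font" type="font/woff2" crossorigin>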

    Remember prerender? The resource hint used to prompt the browser to build out the entire page in the background for the next navigation. The implementation had quite problematic issues, ranging from a huge memory footprint and bandwidth usage to multiple registered analytics hits and ad impressions.

    Unsurprisingly, it was deprecated, but the Chrome team has brought it back as the NoState Prefetch mechanism. In fact, Chrome treats the prerender hint as a NoState Prefetch instead, so we can use it today. As Katie Hempenius explains in that article, "like prerendering, NoState Prefetch fetches resources in advance; but unlike prerendering, it does not execute JavaScript or render any part of the page in advance." NoState Prefetch only uses ~45MiB of memory, and subresources will be fetched with an IDLE Net Priority. Since Chrome 69, NoState Prefetch adds the header Purpose: Prefetch to all requests in order to make them distinguishable from normal browsing.

    Most of the time these days, we’ll be using at least preconnect and dns-prefetch, and we’ll be cautious with using prefetch, preload and prerender; prefetch should only be used if you are confident about what assets the user will need next (for example, in a purchasing funnel).

    Note that even with preconnect and dns-prefetch, the browser has a limit on the number of hosts it will look up/connect to in parallel, so it’s a safe bet to order them based on priority (thanks Philip Tellis!).

    In fact, using resource hints is probably the easiest way to boost performance, and it works well indeed. When to use what? As Addy Osmani has explained, it's a good idea to preload resources that we have high confidence will be used on the current page, and to prefetch resources likely to be used for future navigations across multiple navigation boundaries, e.g. Webpack bundles needed for pages the user hasn’t visited yet.

    Addy’s article on "Loading Priorities in Chrome" shows how exactly Chrome interprets resource hints, so once you’ve decided which assets are critical for rendering, you can assign high priority to them. To see how your requests are prioritized, you can enable a "priority" column in the Chrome DevTools network request table (as well as Safari).

    (Image credit: Pat Meenan) (Large preview)

    Since fonts usually are important assets on a page, sometimes it's a good idea to request the browser to download critical fonts with preload. However, double check if it actually helps performance as there is a puzzle of priorities when preloading fonts: as preload is seen as high importance, it can leapfrog even more critical resources like critical CSS. (thanks, Barry!)

    You could also load JavaScript dynamically, effectively lazy-loading execution. Also, since <link rel="preload"> accepts a media attribute, you could choose to selectively prioritize resources based on @media query rules.
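
    For example, a sketch of scoping preloads with the media attribute (the file names and breakpoints are examples):

    <link rel="preload" href="/css/mobile.css" as="style" media="(max-width: 48em)">
    <link rel="preload" href="/css/desktop.css" as="style" media="(min-width: 48.01em)">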

    A few gotchas to keep in mind: preload is good for moving the start download time of an asset closer to the initial request, but preloaded assets land in the memory cache which is tied to the page making the request. preload plays well with the HTTP cache: a network request is never sent if the item is already there in the HTTP cache.

    Hence, it’s useful for late-discovered resources, a hero image loaded via background-image, inlining critical CSS (or JavaScript) and pre-loading the rest of the CSS (or JavaScript). Also, a preload tag can initiate a preload only after the browser has received the HTML from the server and the lookahead parser has found the preload tag.

    Preloading via the HTTP header could be a bit faster since we don’t have to wait for the browser to parse the HTML to start the request (it's debated though). Early Hints will help even further, enabling preload to kick in even before the response headers for the HTML are sent (on the roadmap in Chromium, Firefox). Plus, Priority Hints will help us indicate loading priorities for scripts.
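
    As a sketch, a preload sent via the HTTP response header looks like this (the asset path is a placeholder):

    Link: </assets/critical.css>; rel=preload; as=style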

    Beware: if you’re using preload, the as attribute must be defined or nothing loads; preloaded fonts without the crossorigin attribute will be fetched twice.

  2. Use service workers for caching and network fallbacks.
    No performance optimization over a network can be faster than a locally stored cache on a user’s machine. If your website is running over HTTPS, use the "Pragmatist’s Guide to Service Workers" to cache static assets in a service worker cache and store offline fallbacks (or even offline pages) and retrieve them from the user’s machine, rather than going to the network. Also, check Jake’s Offline Cookbook and the free Udacity course "Offline Web Applications."

    Browser support? As stated above, it’s widely supported and the fallback is the network anyway. Does it help boost performance? Oh yes, it does. And it’s getting better, e.g. with Background Fetch allowing background uploads/downloads from a service worker.

    There are a number of use cases for a service worker. For example, you could implement a "Save for offline" feature, handle broken images, introduce messaging between tabs or provide different caching strategies based on request types. In general, a common reliable strategy is to store the app shell in the service worker’s cache along with a few critical pages, such as an offline page, the front page and anything else that might be important in your case.
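
    Here's a minimal cache-first sketch with an offline fallback; the cache name and precached URLs are assumptions:

    const CACHE = 'static-v1';
    const PRECACHE = ['/', '/offline.html', '/css/app.css', '/js/app.js'];

    // Precache the app shell and a few critical pages at install time.
    self.addEventListener('install', event => {
      event.waitUntil(caches.open(CACHE).then(cache => cache.addAll(PRECACHE)));
    });

    // Serve from the cache first; fall back to the network, then to the offline page.
    self.addEventListener('fetch', event => {
      event.respondWith(
        caches.match(event.request).then(cached =>
          cached || fetch(event.request).catch(() => caches.match('/offline.html'))
        )
      );
    });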

    There are a few gotchas to keep in mind though. With a service worker in place, we need to beware of range requests in Safari (if you are using Workbox for a service worker, it has a range requests module). If you ever stumble upon a DOMException: Quota exceeded. error in the browser console, then look into Gerardo’s article When 7KB equals 7MB.

    As Gerardo writes, “If you are building a progressive web app and are experiencing bloated cache storage when your service worker caches static assets served from CDNs, make sure the proper CORS response header exists for cross-origin resources, you do not cache opaque responses with your service worker unintentionally, you opt-in cross-origin image assets into CORS mode by adding the crossorigin attribute to the <img> tag.”

    A good starting point for using service workers would be Workbox, a set of service worker libraries built specifically for building progressive web apps.

  3. Are you using service workers on the CDN/Edge, e.g. for A/B testing?
    At this point, we are quite used to running service workers on the client, but with CDNs implementing them on the server, we could use them to tweak performance on the edge as well.

    For example, in A/B tests, when HTML needs to vary its content for different users, we could use Service Workers on the CDN servers to handle the logic. We could also stream HTML rewriting to speed up sites that use Google Fonts.
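
    A rough sketch of that edge-side A/B logic, written against a Cloudflare-Workers-style API (the cookie name and variant paths are assumptions):

    addEventListener('fetch', event => {
      event.respondWith(handleABTest(event.request));
    });

    async function handleABTest(request) {
      const cookie = request.headers.get('Cookie') || '';
      // Stick returning visitors to their variant; assign new visitors randomly.
      const variant = cookie.includes('ab=b') ? 'b'
                    : cookie.includes('ab=a') ? 'a'
                    : (Math.random() < 0.5 ? 'a' : 'b');
      const url = new URL(request.url);
      url.pathname = '/variants/' + variant + url.pathname;
      const response = await fetch(url.toString(), { headers: request.headers });
      const headers = new Headers(response.headers);
      headers.append('Set-Cookie', 'ab=' + variant + '; Path=/');
      return new Response(response.body, {
        status: response.status,
        statusText: response.statusText,
        headers
      });
    }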

Time series of service worker installations. Only 0.44% of all desktop pages register a service worker, according to Web Almanac. (Large preview)
  1. Optimize rendering performance.
    Isolate expensive components with CSS containment — for example, to limit the scope of the browser’s styles, of layout and paint work for off-canvas navigation, or of third-party widgets. Make sure that there is no lag when scrolling the page or when an element is animated, and that you’re consistently hitting 60 frames per second. If that’s not possible, then at least making the frames per second consistent is preferable to a mixed range of 60 to 15. Use CSS’ will-change to inform the browser of which elements and properties will change.
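
    For instance, a sketch of containment and will-change in CSS (the selectors are placeholders):

    /* Isolate layout and paint work for self-contained widgets. */
    .off-canvas-nav,
    .third-party-widget {
      contain: layout paint;
    }

    /* Promote the element shortly before it animates; remove the hint afterwards. */
    .panel--animating {
      will-change: transform, opacity;
    }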

    Also, measure runtime rendering performance (for example, in DevTools). To get started, check Paul Lewis’ free Udacity course on browser-rendering optimization and Georgy Marchuk’s article on Browser painting and considerations for web performance.

    If you want to dive deeper into the topic, Nolan Lawson has shared tricks to accurately measure layout performance in his article, and Jason Miller suggested alternative techniques, too. We also have a lil' article by Sergey Chikuyonok on how to get GPU animation right.

    Note: changes to GPU-composited layers are the least expensive, so if you can get away by triggering only compositing via opacity and transform, you’ll be on the right track. Anna Migas has provided a lot of practical advice in her talk on Debugging UI Rendering Performance, too.

  2. Have you optimized rendering experience?
    While the sequence of how components appear on the page, and the strategy of how we serve assets to the browser matter, we shouldn’t underestimate the role of perceived performance, too. The concept deals with psychological aspects of waiting, basically keeping customers busy or engaged while something else is happening. That’s where perception management, preemptive start, early completion and tolerance management come into play.

    What does it all mean? While loading assets, we can try to always be one step ahead of the customer, so the experience feels swift while there is quite a lot happening in the background. To keep the customer engaged, we can test skeleton screens (implementation demo) instead of loading indicators, add transitions/animations and basically cheat the UX when there is nothing more to optimize. Beware though: skeleton screens should be tested before deploying as some tests showed that skeleton screens can perform the worst by all metrics.

  3. Do you prevent layout shifts and repaints?
    In the realm of perceived performance, probably one of the more disruptive experiences is layout shifting, or reflows, caused by rescaled images and videos, web fonts, injected ads or late-discovered scripts that populate components with actual content. As a result, a customer might start reading an article just to be interrupted by a layout jump above the reading area. The experience is often abrupt and quite disorienting; that's probably a case of loading priorities that need to be reconsidered.

    The community has developed a couple of techniques and workarounds to avoid reflows. Always set width and height attributes on images, so modern browsers allocate the box and reserve the space by default (Firefox, Chrome).
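
    For example (the dimensions are placeholders):

    <img src="/images/hero.jpg" width="1200" height="675" alt="Hero image">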

    For both images and videos, we can use an SVG placeholder to reserve the display box in which the media will appear. That way, the area will be reserved properly when you need to maintain its aspect ratio as well.

    Instead of lazy-loading images with external scripts, consider using native lazy-loading, or hybrid lazy-loading when we load an external script only if native lazy-loading isn't supported.
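
    Native lazy-loading is a one-attribute change, sketched below:

    <img src="/images/product.jpg" width="800" height="600" loading="lazy" alt="Product photo">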

    As mentioned above, always group web font repaints and transition from all fallback fonts to all web fonts at once; just make sure that the switch isn’t too abrupt by adjusting line-height and spacing between the fonts with font-style-matcher. (Note that adjustments get complicated with complex font stacks though.)

    To ensure that the impact of reflows is contained, measure the layout stability with the Layout Instability API. With it, you can calculate the Cumulative Layout Shift (CLS) score and include it as a requirement in your tests, so whenever a regression appears, you can track it and fix it.

    To calculate the layout shift score, the browser looks at the viewport size and the movement of unstable elements in the viewport between two rendered frames. Ideally, the score would be close to 0. There is a great guide by Milica Mihajlija and Philip Walton on what CLS is and how to measure it. It's a good starting point to measure and maintain perceived performance and avoid disruption, especially for business-critical tasks.
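
    A minimal sketch of collecting the score in the browser with the Layout Instability API:

    let cumulativeScore = 0;
    new PerformanceObserver(list => {
      for (const entry of list.getEntries()) {
        // Shifts caused by recent user input don't count towards CLS.
        if (!entry.hadRecentInput) cumulativeScore += entry.value;
      }
      console.log('CLS so far:', cumulativeScore.toFixed(4));
    }).observe({ type: 'layout-shift', buffered: true });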

    Bonus: if you want to reduce reflows and repaints, check Charis Theodoulou's guide to Minimising DOM Reflow/Layout Thrashing and Paul Irish's list of What forces layout / reflow as well as CSSTriggers.com, a reference table on CSS properties that trigger layout, paint and compositing.

Networking and HTTP/2
  1. Is OCSP stapling enabled?
    By enabling OCSP stapling on your server, you can speed up your TLS handshakes. The Online Certificate Status Protocol (OCSP) was created as an alternative to the Certificate Revocation List (CRL) protocol. Both protocols are used to check whether an SSL certificate has been revoked. However, the OCSP protocol does not require the browser to spend time downloading and then searching a list for certificate information, hence reducing the time required for a handshake.
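
    On nginx, for example, stapling comes down to a few directives; a sketch (the certificate path and resolver addresses are examples):

    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/nginx/ssl/full-chain.pem;
    resolver 1.1.1.1 8.8.8.8 valid=300s;
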
  2. Have you adopted IPv6 yet?
    Because we’re running out of space with IPv4 and major mobile networks are adopting IPv6 rapidly (the US has reached a 50% IPv6 adoption threshold), it’s a good idea to update your DNS to IPv6 to stay bulletproof for the future. Just make sure that dual-stack support is provided across the network, allowing IPv6 and IPv4 to run simultaneously alongside each other. After all, IPv6 is not backwards-compatible. Also, studies show that IPv6 has made websites 10 to 15% faster thanks to neighbor discovery (NDP) and route optimization.
  3. Make sure all assets run over HTTP/2.
    With Google pushing towards a more secure HTTPS web over the last few years, a switch to HTTP/2 environment is definitely a good investment. In fact, according to Web Almanac, 54% of all requests are running over HTTP/2 already.

    It’s important to understand that HTTP/2 isn’t perfect and has prioritization issues, but it’s supported very well, it isn’t going anywhere; and, in most cases, you’re better off with it.

    If you’re still running on HTTP, the most time-consuming task will be to migrate to HTTPS first, and then adjust your build process to cater for HTTP/2 multiplexing and parallelization. For the rest of this article, I’ll assume that you’re either switching to or have already switched to HTTP/2.

54% of all requests are served over HTTP/2 in late 2019, according to Web Almanac — just 4 years after its formal standardization. (Image source: Web Almanac) (Large preview)
  1. Properly deploy HTTP/2.
    Again, serving assets over HTTP/2 can benefit from a partial overhaul of how you’ve been serving assets so far. You’ll need to find a fine balance between packaging modules and loading many small modules in parallel. At the end of the day, the best request is still no request; however, the goal is to find a fine balance between quick first delivery of assets and caching.

    On the one hand, you might want to avoid concatenating assets altogether, instead breaking down your entire interface into many small modules, compressing them as a part of the build process and loading them in parallel. A change in one file won’t require the entire style sheet or JavaScript bundle to be re-downloaded. It also minimizes parsing time and keeps the payloads of individual pages low.

    On the other hand, packaging still matters. By using many small scripts, overall compression will suffer. The compression of a large package will benefit from dictionary reuse, whereas small separate packages will not. There’s standards work to address that, but it’s far out for now. Secondly, browsers have not yet been optimized for such workflows. For example, Chrome will trigger inter-process communications (IPCs) linear to the number of resources, so including hundreds of resources will have browser runtime costs.

    To achieve the best results with HTTP/2, consider loading CSS progressively, as suggested by Chrome’s Jake Archibald. In fact, in-body CSS no longer blocks rendering in Chrome. There are some prioritization issues, so it’s not as straightforward, but it’s worth experimenting with.

    You could get away with HTTP/2 connection coalescing, which allows you to use domain sharding while benefiting from HTTP/2, but achieving this in practice is difficult, and in general, it’s not considered to be good practice. Also, HTTP/2 and Subresource Integrity don't always get on.

    What to do? Well, if you’re running over HTTP/2, sending around 6–10 packages seems like a decent compromise (and isn’t too bad for legacy browsers). Experiment and measure to find the right balance for your website.

  2. Do your servers and CDNs support HTTP/2?
    Different servers and CDNs support HTTP/2 differently. Use Is TLS Fast Yet? to check your options, or quickly look up how your servers are performing and which features you can expect to be supported.

    Consult Pat Meenan’s incredible research on HTTP/2 priorities (video) and test server support for HTTP/2 prioritization. According to Pat, it’s recommended to enable BBR congestion control and set tcp_notsent_lowat to 16KB for HTTP/2 prioritization to work reliably on Linux 4.9 kernels and later (thanks, Yoav!). Andy Davies did similar research on HTTP/2 prioritization across browsers, CDNs and cloud hosting services.

    While at it, double-check that your kernel supports TCP BBR and enable it if possible. It's currently used on Google Cloud Platform, Amazon CloudFront and Linux (e.g. Ubuntu).
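
    As a sketch, on a Linux 4.9+ server that might look like this:

    # fq is the recommended queueing discipline for BBR.
    sysctl -w net.core.default_qdisc=fq
    sysctl -w net.ipv4.tcp_congestion_control=bbr
    # Pat Meenan's recommendation for reliable HTTP/2 prioritization.
    sysctl -w net.ipv4.tcp_notsent_lowat=16384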

  3. Do your servers and CDNs support HTTP over QUIC (HTTP/3)?
    If you feel adventurous or cutting-edge, you might want to check if your servers or CDNs support HTTP over QUIC (also known as HTTP/3). While HTTP/2 has brought significant improvements, it doesn't perform particularly well when the network is slow or unreliable (i.e. with significant packet loss).

    To address the issue, Google has been working on Google QUIC, the protocol used by Chrome for many Google services today. Google then brought many of these learnings to the IETF in 2015, where the protocol is now being standardized.

    QUIC and HTTP/3 are better and more bulletproof: they offer faster handshakes, better and more extensive encryption, more reliable independent streams, and 0-RTT if the client previously had a connection with the server. However, QUIC is quite CPU-intensive (2-3x the CPU usage for the same bandwidth), UDP stacks are unoptimized, and there are some unresolved issues with hardware and the TLS layer.

    HTTP/3 is expected to ship as a standard in early 2020. Chrome and Safari have confirmed in-house implementations, with HTTP/3 available in Chrome Canary and Firefox Nightly. Some CDNs support QUIC and HTTP/3 already. Neither Apache, nginx nor IIS supports it yet, but that might change in 2020.

Is TLS Fast Yet? allows you to check your options for servers and CDNs when switching to HTTP/2. (Large preview)
  1. Is HPACK compression in use?
    If you’re using HTTP/2, double-check that your servers implement HPACK compression for HTTP response headers to reduce unnecessary overhead. Because HTTP/2 servers are relatively new, they may not fully support the specification, with HPACK being an example. H2spec is a great (if very technically detailed) tool to check that. HPACK’s compression algorithm is quite impressive, and it works.
  2. Make sure the security on your server is bulletproof.
    All browser implementations of HTTP/2 run over TLS, so you will probably want to avoid security warnings or some elements on your page not working. Double-check that your security headers are set properly, eliminate known vulnerabilities, and check your HTTPS setup. Also, make sure that all external plugins and tracking scripts are loaded via HTTPS, that cross-site scripting isn’t possible and that both HTTP Strict Transport Security headers and Content Security Policy headers are properly set.
Testing And Monitoring
  1. Have you optimized your auditing workflow?
    It might not sound like a big deal, but having the right settings in place at your fingertips might save you quite a bit of time in testing. Consider using Tim Kadlec’s Alfred Workflow for WebPageTest for submitting a test to the public instance of WebPageTest. In fact, WebPageTest has many obscure features, so take the time to learn how to read a WebPageTest Waterfall View chart and how to read a WebPageTest Connection View chart to diagnose and resolve performance issues faster.

    You could also drive WebPageTest from a Google Spreadsheet and incorporate accessibility, performance and SEO scores into your Travis setup with Lighthouse CI or straight into Webpack.

    And if you need to debug something quickly but your build process seems to be remarkably slow, keep in mind that "whitespace removal and symbol mangling accounts for 95% of the size reduction in minified code for most JavaScript — not elaborate code transforms. You can simply disable compression to speed up Uglify builds by 3 to 4 times."

Integrating accessibility, performance and SEO scores into your Travis setup with Lighthouse CI will highlight the performance impact of a new feature to all contributing developers. (Image source) (Large preview)
  1. Have you tested in proxy browsers and legacy browsers?
    Testing in Chrome and Firefox is not enough. Look into how your website works in proxy browsers and legacy browsers. UC Browser and Opera Mini, for instance, have a significant market share in Asia (up to 35%). Measure the average Internet speed in your countries of interest to avoid big surprises down the road. Test with network throttling, and emulate a high-DPI device. BrowserStack is fantastic, but test on real devices as well.
k6 allows you to write performance tests that feel like unit tests.
  1. Have you tested the impact on accessibility?
    When the browser starts to load a page, it builds a DOM, and if there is an assistive technology like a screen reader running, it also creates an accessibility tree. The screen reader then has to query the accessibility tree to retrieve the information and make it available to the user — sometimes by default, and sometimes on demand. And sometimes it takes time.

    When talking about fast Time to Interactive, usually we mean an indicator of how soon a user can interact with the page by clicking or tapping on links and buttons. The context is slightly different with screen readers. In that case, fast Time to Interactive means how much time passes until the screen reader can announce navigation on a given page and a screen reader user can actually hit the keyboard to interact.

    Léonie Watson has given an eye-opening talk on accessibility performance and specifically the impact slow loading has on screen reader announcement delays. Screen readers are used to fast-paced announcements and quick navigation, and therefore might potentially be even less patient than sighted users.

    Large pages and DOM manipulations with JavaScript will cause delays in screen reader announcements. This is a rather unexplored area that could use some attention and testing, as screen readers are available on literally every platform (JAWS, NVDA, VoiceOver, Narrator, Orca).

  2. Is continuous monitoring set up?
    Having a private instance of WebPageTest is always beneficial for quick and unlimited tests. However, a continuous monitoring tool such as Sitespeed, Calibre or SpeedCurve, with automatic alerts, will give you a more detailed picture of your performance. Set your own user-timing marks to measure and monitor business-specific metrics. Also, consider adding automated performance regression alerts to monitor changes over time.

    Look into RUM solutions to monitor changes in performance over time. For automated load testing with a unit-test-like scripting API, you can use k6. Also, look into SpeedTracker, Lighthouse and Calibre.
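
    A minimal k6 sketch (the URL and thresholds are examples):

    import http from 'k6/http';
    import { check } from 'k6';

    export let options = {
      vus: 10,            // 10 concurrent virtual users
      duration: '30s',
      thresholds: { http_req_duration: ['p(95)<500'] }, // fail the run if p95 exceeds 500ms
    };

    export default function () {
      const res = http.get('https://example.com/');
      check(res, { 'status is 200': r => r.status === 200 });
    }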

Quick Wins

This list is quite comprehensive, and completing all of the optimizations might take quite a while. So, if you had just 1 hour to get significant improvements, what would you do? Let’s boil it all down to 15 low-hanging fruits. Obviously, before you start and once you finish, measure results, including start rendering time and Time To Interactive on a 3G and cable connection.

  1. Measure the real-world experience and set appropriate goals. Good goals to aim for are Largest Contentful Paint < 1s, Speed Index < 3s and Time to Interactive < 5s on slow 3G; for repeat visits, TTI < 2s. Optimize for start rendering time and Time to Interactive.
  2. Prepare critical CSS for your main templates, and include it in the <head> of the page. For CSS/JS, operate within a critical file size budget of max. 170KB gzipped (0.7MB decompressed).
  3. Trim, optimize, defer and lazy-load as many scripts as possible, check lightweight alternatives and limit the impact of third-party scripts.
  4. Serve legacy code only to legacy browsers with <script type="module"> and module/nomodule pattern.
  5. Experiment with regrouping your CSS rules and test in-body CSS.
  6. Add resource hints to speed up delivery with faster dns-lookup, preconnect, prefetch, preload and prerender.
  7. Subset web fonts and load them asynchronously, and utilize font-display in CSS for fast first rendering.
  8. Optimize images with mozjpeg, guetzli, pingo and SVGOMG, and consider serving WebP with an image CDN.
  9. Check that HTTP cache headers and security headers are set properly.
  10. Enable Brotli compression on the server. (If that’s not possible, don’t forget to enable Gzip compression.)
  11. Enable TCP BBR congestion control as long as your server is running on Linux kernel version 4.9+.
  12. Enable OCSP stapling and IPv6 if possible.
  13. If HTTP/2 is available, enable HPACK compression and enable HTTP/3 if it's available on CDNs.
  14. Cache assets such as fonts, styles, JavaScript and images in a service worker cache.
  15. Explore options to avoid rehydration, use progressive hydration and streaming server-side rendering for your SPA.
Download The Checklist (PDF, Apple Pages)

With this checklist in mind, you should be prepared for any kind of front-end performance project. Feel free to download the print-ready PDF of the checklist as well as an editable Apple Pages document to customize the checklist for your needs:

If you need alternatives, you can also check the front-end checklist by Dan Rublic, the "Designer’s Web Performance Checklist" by Jon Yablonski and the FrontendChecklist.

Off We Go!

Some of the optimizations might be beyond the scope of your work or budget or might just be overkill given the legacy code you have to deal with. That’s fine! Use this checklist as a general (and hopefully comprehensive) guide, and create your own list of issues that apply to your context. But most importantly, test and measure your own projects to identify issues before optimizing. Happy performance results in 2020, everyone!

A huge thanks to Guy Podjarny, Yoav Weiss, Addy Osmani, Artem Denysov, Denys Mishunov, Ilya Pukhalski, Jeremy Wagner, Colin Bendell, Mark Zeman, Patrick Meenan, Leonardo Losoviz, Andy Davies, Rachel Andrew, Anselm Hannemann, Barry Pollard, Patrick Hamann, Gideon Pyzer, Maria Prosvernina, Tim Kadlec, Rey Bango, Matthias Ott, Peter Bowyer, Phil Walton, Mariana Peralta, Jean Pierre Vincent, Philip Tellis, Ryan Townsend, Ingrid Bergman, Mohamed Hussain S. H., Jacob Groß, Tim Swalling, Bob Visser, Kev Adamson, Adir Amsalem, Aleksey Kulikov and Rodney Rehm for reviewing this article, as well as our fantastic community, which has shared techniques and lessons learned from its work in performance optimization for everybody to use. You are truly smashing!

(ra, il)
Categories: Design

Understanding CSS Grid: Creating A Grid Container

Fri, 01/03/2020 - 03:30
Understanding CSS Grid: Creating A Grid Container Rachel Andrew 2020-01-03T11:30:00+00:00 2020-01-03T14:06:17+00:00

This is the start of a new series here at Smashing Magazine concentrating on CSS Grid Layout. While Grid has been available in browsers since 2017, many developers won’t have had a chance to use it on a project yet. There seem to be a lot of new properties and values associated with CSS Grid Layout. This can make it seem overwhelming. However, quite a lot of the specification details alternate ways to do things, meaning that you don’t have to learn the entire spec to get started. This series aims to take you from grid novice to expert — with lots of practical usage tips along the way.

This initial article will cover what happens when you create a grid container and the various properties that you can use on the parent element to control that grid. You will discover that there are several use cases that are fulfilled only with the properties that you apply to the grid container.

In this article, we will cover:

  • Creating a grid container with display: grid or display: inline-grid,
  • Setting up columns and rows with grid-template-columns and grid-template-rows,
  • Controlling the size of implicit tracks with grid-auto-columns and grid-auto-rows.

Creating A Grid Container

Grid, like Flexbox, is a value of the CSS display property. Therefore to tell the browser that you want to use grid layout you use display: grid. Having done this, the browser will give you a block-level box on the element with display: grid and any direct children will start to participate in a grid formatting context. This means they behave like grid items, rather than normal block and inline elements.

However, you may not immediately see a difference on your page. As you haven’t created any rows or columns, you have a one-column grid. Enough rows are being generated to hold all of your direct children, and they are displaying one after the other in that single column. Visually they look just like block elements.

You will see a difference if you had any string of text, not wrapped in an element, and a direct child of the grid container, as the string will be wrapped in an anonymous element and become a grid item. Any element which is normally an inline element, such as a span, will also become a grid item once its parent is a grid container.

The example below has two block-level elements, plus a string of text with a span in the middle of the string. We end up with five grid items (a sketch of the markup follows the list below):

  • The two div elements,
  • The string of text before the span,
  • The span,
  • The string of text after the span.
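
Roughly, the markup and CSS for that example look like this (the class name is mine):

<div class="grid">
  <div>Item one</div>
  <div>Item two</div>
  A string of text with <span>a span</span> in the middle.
</div>

.grid {
  display: grid;
}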

See the Pen Grid Container: Direct children and strings of text become grid items by Rachel Andrew (@rachelandrew) on CodePen.

If you inspect the grid using the Firefox Grid Inspector, you can see the five-row tracks that have been created for the items.

The Grid Inspector is useful to help you see how many rows have been created

You can also create an inline grid by using display: inline-grid; in this case, your grid container becomes an inline-level box. However, the direct children are still grid items and behave in the same way as grid items inside a block-level box (it is only the outer display type that changes). That is why the grid container behaves the way it does above when it is alongside other boxes on the page.

This next example has a grid followed by a string of text. As this is an inline-level grid, the text can display alongside it. Inline-level things do not stretch to take up all the space in the inline dimension in the way that block-level things do.

See the Pen Grid Container: inline-grid by Rachel Andrew (@rachelandrew) on CodePen.

Note: In the future, we will be able to better describe our layout by using display: block grid in order to create our block-level container, and display: inline grid to create an inline-level container. You can read about this change to the display specification in my article, “Digging Into The Display Property: The Two Values Of Display”.

Columns And Rows

To get something that looks like a grid, we will need to add columns and rows. These are created using the grid-template-columns and grid-template-rows properties. These properties are defined in the spec as accepting a value called a track-list.

These properties specify, as a space-separated track list, the line names and track sizing functions of the grid. The grid-template-columns property specifies the track list for the grid’s columns, while grid-template-rows specifies the track list for the grid’s rows.

Some valid track-list values are as follows:

  • grid-template-columns: 100px 100px 200px;
    Creates a three-column grid: the first column is 100px, the second 100px, the third 200px.
  • grid-template-columns: min-content max-content fit-content(10em);
    Creates a three-column grid: the first column is the min-content size for that track, the second the max-content size. The third is max-content, unless the content is larger than 10em, in which case it is clamped to 10em.
  • grid-template-columns: 1fr 1fr 1fr;
    Creates a three-column grid using the fr unit. The available space in the grid container is divided into three and shared equally between the columns.
  • grid-template-columns: repeat(2, 10em 1fr);
    Creates a four-column grid with a repeating pattern of 10em 1fr 10em 1fr, as the track-list in the repeat statement is repeated twice.
  • grid-template-columns: repeat(auto-fill, 200px);
    Fills the container with as many 200px columns as will fit, leaving a gap at the end if there is spare space.
  • grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
    Fills the container with as many 200px columns as will fit, then distributes the remaining space equally between the created columns.
  • grid-template-columns: [full-start] 1fr [content-start] 3fr [content-end] 1fr [full-end];
    Creates a three-column grid: the first and third columns have 1 part each of the available space, while the middle column has 3 parts. The lines are named by putting line names in square brackets.

As you can see there are many ways to create a track listing! Let’s have a look at exactly how these all work, with a few tips in terms of why you might use each one.

Using Length Units

You can use any length units, or a percentage to create your tracks. If the size of the tracks adds up to less than is available in the grid container, then by default the tracks will line up at the start of the container and the spare space will go to the end. This is because the default value of align-content and justify-content is start. You can space out the grid tracks, or move them to the end of the container using the alignment properties, which I explain in detail in my article “How To Align Things In CSS”.

See the Pen Grid Container: length units by Rachel Andrew (@rachelandrew) on CodePen.

You can also use the keywords min-content, max-content and fit-content(). Using min-content will give you a track that is as small as it can be without causing overflow. Therefore, when used as a column size, the content will softly wrap wherever possible, and the track becomes the size of the longest word in the column or the largest fixed-size element.

Using max-content will cause the content not to do any soft-wrapping at all. In a column, any string of text will display unwrapped, which may cause overflow.

The fit-content keyword can only be used by passing in a value. That value becomes the max that this track will grow to. Therefore, the track will act like max-content with the content unwrapping and stretching out until it hits the value you passed in. At that point, it will start wrapping as normal. So your track may be smaller than the value you pass in, but never larger.

See the Pen Grid Container: min-content, max-content, fit-content() by Rachel Andrew (@rachelandrew) on CodePen.

You can find out more about sizing in Grid and other layout methods in my article “How Big Is That Box? Understanding Sizing In CSS Layout”.

If you end up with tracks that take up more space than you have in your container, they will overflow. If you use percentages then, as with percentage-based float or flex layouts, you will need to take care that the total percentage is not more than 100% if you want to avoid overflow.

The fr Unit

Grid Layout includes a method that can save you calculating percentages for yourself — track sizing with the fr unit. This unit isn’t a length, and therefore can’t be combined with calc(); it is a flex unit and represents the available space in the grid container.

This means that with a track-list of 1fr 1fr 1fr; the available space is divided into three and shared evenly between the tracks. With a track-list of 2fr 1fr 1fr, the available space is divided into four and two parts are given to track one — one part each to tracks two and three.

See the Pen Grid Container: fr by Rachel Andrew (@rachelandrew) on CodePen.

Something to watch out for is that what is being shared out by default is available space which is not the total space in the container. If any of your tracks contain a fixed-size element or a long word that can’t be wrapped, this will be laid out before the space is shared out.

In the next example, I removed the spaces between the words of ItemThree. This made a long unbreakable string so space distribution happens after the layout of that item has been accounted for.

See the Pen Grid Container: fr with larger content by Rachel Andrew (@rachelandrew) on CodePen.

You can mix the fr unit with fixed length tracks, and this is where it becomes very useful. For example, you could have a component with two fixed-sized columns and a center area that stretches:
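
In CSS, that could be as simple as the following sketch (the track sizes are examples):

.component {
  display: grid;
  /* Two fixed-size side columns with a stretching center area. */
  grid-template-columns: 200px 1fr 200px;
}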

See the Pen Grid Container: mixing fr units and fixed-size tracks by Rachel Andrew (@rachelandrew) on CodePen.

You can have a component with one track set to fit-content(300px) and the other to 1fr. This makes for a component that can have something smaller than 300px in the first track, in which case it only takes the space it needs and the fr unit expands to take up the rest of the space.

If you add something larger (such as an image with max-width: 100%), the first track will stop growing at 300px and the fr unit takes the rest of the space. Mixing the fr unit with fit-content is a way to make some very flexible components for your site.

See the Pen Grid Container: mixing fr and fit-content() by Rachel Andrew (@rachelandrew) on CodePen.

The repeat() Function

Using repeat() in your track-list can save typing out the same value or values over and over again. For example the following two lines are the same:

grid-template-columns: 1fr 1fr 1fr 1fr 1fr 1fr 1fr 1fr 1fr 1fr 1fr 1fr;
grid-template-columns: repeat(12, 1fr);

When using repeat(), the value before the comma is the number of times to repeat the track-list that comes after the comma. That track-list can be multiple values, which means you can repeat a pattern of tracks.

You can use the repeat() function for part of a track-list. For example, the following line would give you a 1fr track, 3 200px tracks, and a final 1fr track.

grid-template-columns: 1fr repeat(3, 200px) 1fr;

In addition to a number before the comma to indicate a fixed number of times to repeat the pattern, you can also use the keywords auto-fill or auto-fit. Using one of these keywords means that instead of a fixed number of tracks, your grid container will be filled with as many tracks as will fit.

See the Pen Grid Container: auto-fill by Rachel Andrew (@rachelandrew) on CodePen.

Using a fixed-length unit means that, unless the container is able to be exactly divided by that size, you will end up with some spare space remaining. In the example above my container is 500px wide, so I get two 200px tracks plus space at the end.

We can use another grid function to make the value a minimum, with any spare space distributed across all of the tracks. The minmax() function takes a minimum and a maximum size. With a minimum of 200px and a max of 1fr, we get as many 200px tracks as will fit and because the max is 1fr, which we already know will share out the space evenly, the extra is distributed across the tracks.

See the Pen Grid Container: auto-fill and minmax() by Rachel Andrew (@rachelandrew) on CodePen.

I mentioned there are two possible keywords: auto-fill and auto-fit. If you have enough content to fill the first row of cells, then these will behave in exactly the same way. If, however, you do not (e.g. if we remove all but one item inside the container above), then they behave differently.

Using auto-fill will maintain the available track sizing even if there is no content to go into it.

See the Pen Grid Container: auto-fill and minmax() with one item by Rachel Andrew (@rachelandrew) on CodePen.

If, instead, you use auto-fit, the empty tracks will be collapsed:

See the Pen Grid Container: auto-fit and minmax() with one item by Rachel Andrew (@rachelandrew) on CodePen.

By using the Firefox Grid Inspector, you can see that the tracks are still there, but have been collapsed to zero. The end line of our grid is still line 3 as we can fit two tracks.

The track is still there but collapsed

Named Lines

My final example above used the named lines approach. When using Grid, you always have line numbers; however, you can also name the lines. Lines are named inside square brackets. You can have multiple names for one line; in that case, a space separates them. For example, in the following track-list, all of my lines have two names.

grid-template-columns: [main-start sidebar-start] 1fr [sidebar-end content-start] 4fr [content-end main-end]

You can name your lines anything that you like, except the word span as that is a reserved word due to being used when placing items on the grid.

Note: In the next article in this series, I’ll be talking more about line-based placement and how named lines are used. In the meantime, read my article on “Naming Things in CSS Grid Layout” to help you learn more on the topic.

The Explicit vs The Implicit Grid

When creating a grid using grid-template-columns and grid-template-rows with a track-list, you are creating what is referred to as the explicit grid. This is the grid you have defined which has the sizing you have chosen for each track.

If you have more items than will fit, or place an item so it falls outside of the bounds of the grid you have created, Grid will create tracks in the implicit grid. These implicit tracks will be auto-sized by default. We saw this implicit grid in action when I declared display: grid on the parent element and grid created rows, one for each item. I didn’t define these rows, but as there were grid items, the row tracks were created to give them somewhere to go.

You can set a size for implicit rows or columns by using the grid-auto-rows or grid-auto-columns properties. These properties take a track-listing, so if you want all implicit rows to be at least 200 pixels tall, but to grow if there is more content, you could use the following:

grid-auto-rows: minmax(200px, auto)

If you want the first implicit row to be auto-sized, and the second to be 100px tall, and so on (the pattern repeating until all of the grid items have been accommodated), you can pass in multiple values:

grid-auto-rows: auto 100px

See the Pen Grid Container: grid-auto-rows by Rachel Andrew (@rachelandrew) on CodePen.

Using A Grid With Auto-Placement

Creating a grid (and allowing the browser to auto-place items) gets you a long way in terms of the useful patterns you can achieve. We have not yet looked at placing items on the grid, but many layouts that make use of Grid don’t do any placement. They simply rely on placing the items in source order — one in each grid cell.

If you are new to CSS Grid, then playing with different track sizes and seeing how the items place themselves into the cells you create is a great way to start.

(il)
Categories: Design

How To Decide Which PWA Elements Should Stick

Thu, 01/02/2020 - 04:30
How To Decide Which PWA Elements Should Stick Suzanne Scacca 2020-01-02T12:30:00+00:00 2020-01-02T13:35:56+00:00

As the number of website visitors and shoppers grows on mobile, it’s important to consider how small additions to your design will encourage them to do more than just research and browse. One of the elements I think mobile designers — for PWAs and mobile websites — need to do more with is the sticky bar.

What exactly do I mean by “more”? Well, I mean using the fixed navigation bar at the top or bottom of a mobile site for more than just navigation or branding.

Today, I’m going to show you some creative uses for sticky elements in mobile design, so you can help more of your visitors to take action.

Sticky Element Inspiration For Mobile Design

Think about the main challenge we face when it comes to mobile. While users are more than willing to take their first steps on a website or PWA from their mobile devices, conversion often happens on desktop (if they remember to do it at all).

When used properly, fixed elements can encourage more mobile visitors to take action right where they are. And this works for all kinds of websites.

1. Make the Top Sticky Bar Useful

The sticky bar at the top of your mobile site shouldn’t just be there for branding.

That said, I get that it can be tricky using that space when the logo may end up comprising a good chunk of that space. But if you design it thin enough, you can stack another banner beside it. Just make sure it’s useful.

The Lancome PWA is an interesting example because it simultaneously does this well and poorly:

Lancome has three sticky bars at the top of its PWA. (Source: Lancome) (Large preview)

There are three sticky bars at the top of the PWA:

  • A banner promoting a special offer,
  • A standard navigation bar,
  • A secondary navigation bar with shop categories.

The two navigation bars are great. Together, they don’t take up too much space and they make it much easier for users to find what they’re looking for and to complete their purchases. However, that promotional banner is not very well executed.

For starters, it’s too big and demands too much attention. Secondly, there’s no way to dismiss the message. It just stays there, stuck to the top of the PWA, no matter where the visitor goes.

If you’re going to use a sticky bar to promote an offer — no matter its size — give your users the option to move it out of the way if it’s irrelevant or if they’ve already collected the pertinent details from it.

George.com is another e-commerce web app that takes advantage of the top sticky bar. This one, however, doesn’t waste the space with distracting elements.

George.com uses a standard navigation bar and sticky search bar on its PWA. (Source: George.com) (Large preview)

On the home page, George.com attaches a sticky and voice-enabled search bar to the top of the page. This is great as it caters to a number of visitor types:

  • Visitors that prefer to use the standard navigation from the menu.
  • Visitors that prefer to type a quick search to the exact item they need.
  • Visitors that want to use their voice to search for something.

It checks off all the boxes.

In addition to providing a great search experience for its store, George.com also customizes this sticky element as visitors go deeper into the site:

George.com provides shoppers with a sticky Sort and Filter bar. (Source: George.com) (Large preview)

As shoppers peruse product pages, the sticky search bar becomes a Sort and Filter bar that follows them down the page. For big online stores, this is a useful tool so mobile users don’t have to scroll all the way to the top to adjust their search results.

The top sticky bar isn’t just useful for e-commerce stores as you’ll see in the rest of the examples in this article. However, when it comes to mobile, there’s a greater opportunity for e-commerce sites to pack extra value into this space, so take advantage of it.

2. Add a Bottom Navigation Bar with Quick-Tap Actions

Okay, so we’ve established what makes for a good sticky top bar. But what about a bottom bar? Is it even necessary?

One of the benefits of designing a PWA instead of a mobile site is that we can give it the top and bottom wrapper. But it’s not always needed. As a general rule of thumb, I’d say to include a bottom bar when there are commonly used actions you want users to have easy access to.

Let’s start with an example that’s a mix of the good and the eh: Twitter.

Twitter places its sticky navigation bar on the bottom of the PWA. (Source: Twitter) (Large preview)

Twitter has chosen a different placement for its navigation bar. While the sticky bar at the top provides a place to access user settings, the bottom is for:

  • Visiting one’s news feed;
  • Searching for posts, people, hashtags, etc.;
  • Checking on notifications and direct messages.

For a social media app, this design makes a lot of sense. It’s not as though users are going to spend much time updating their settings, so why not put it out of the thumb zone and keep the regularly used elements within reach?

The issue I take with Twitter’s sticky elements is the click-to-tweet button (the big blue button in the bottom-left). While it’s not high enough to cover content being read at the top of the page, it does cover part of it down below.

It’s awfully reminiscent of those floating social icons that used to cover content on mobile. You don’t really see that anymore and I think it was for that exact reason.

If you’re thinking about adding a free-standing sticky element of your own to your site, make sure it doesn’t cover any content. Twitter may be able to get away with it, but your brand might not.

As for other examples of bottom bars, let’s turn our attention to the Weather Channel PWA:

The Weather Channel PWA uses both a sticky top and bottom bar. (Source: Weather Channel) (Large preview)

What’s nice about the top bar, in particular, is that it prioritizes the user experience instead of its own branding. Once a visitor enters their location, the rest of the site’s content is personalized, which is great.

As for the bottom navigation, Weather Channel has done a really nice job with this. Similar to how Twitter places commonly used buttons in its bottom bar, the same idea is present here. After all, it’s not as though Weather Channel visitors are coming to the site to read about Dover Federal Credit Union. They want to get precise predictions for upcoming weather.

Now, the two examples above show you how to use the bottom navigation bar as a permanent fixture on a mobile site. But you can also use it as a custom feature on your internal pages as job search site The Muse does:

The Muse uses a sticky bar to shortcut various actions visitors might want to take. (Source: The Muse) (Large preview)

This bottom sticky bar appears only on job listings pages. Notice how it doesn’t just say “Apply”.

I’m willing to bet The Muse designer spent time studying its user journey and how frequently job seekers actually apply for a position the first time they see it. By including “Email Myself” and “Save” buttons in this action bar, it addresses the fact that job seekers might need time to mull the decision over or to prepare the application before filling it out.

So, while you can certainly use a sticky bottom bar as a type of secondary navigation for commonly-clicked pages, I’d also suggest looking at it the way The Muse has: by designing a sticky bar that’s tailor-made for your own user’s journey.

3. Simplify Order Customization with Sticky Elements

Remember the days when you’d have to call up your local restaurant to place an order for delivery or when, gulp, you had to actually visit a store to buy something? Online ordering is an amazing thing — but it could be even better if we set up our mobile sites and PWAs the right way for it.

Again, I want to start with an example that kinda gets it right.

This is the PWA for MINI USA:

Users customize their Mini Cooper on a page with an oversized sticky element. (Source: MINI USA) (Large preview)

This is what users go through when they want to customize their car before purchasing. Looking at it from this screenshot, it looks nice. You can see the car in its customized state along with the updated price.

However, that entire section — down to the “Review” and “Save” buttons — is fixed. That means that all customization takes place on about a third to a quarter of the screen down below. It’s not an easy customization experience, to say the least.

While the customization screen needs some work, it’s the final Review screen that is done nicely:

The MINI USA Review page adds a sticky action bar to the bottom. (Source: MINI USA) (Large preview)

Here the top bar has gone back to a normal size while a new action bar has been added to the bottom. This is similar to what The Muse does to streamline the next steps with job applicants. In this case, MINI gives potential customers the ability to choose one of a number of options, even if they don’t lead to an immediate sale.

There are other types of PWAs and mobile sites that can and should simplify the online ordering process. Like MINI, Uber Eats uses custom sticky elements to help users put together their orders.

Uber Eats includes a top menu navigation bar in its PWA. (Source: Uber Eats) (Large preview)

When a user has selected a restaurant to order from, a sticky menu bar appears at the top of the page. This is especially useful for lengthy menus as well as to help users quickly navigate to the kind of food they’re jonesing for.

Assuming the user has found an item they want, the next page removes the top sticky bar and adds an “Add to Order” button/bar instead.

Uber Eats places an “Add to Order” button at the bottom of its web app. (Source: Uber Eats) (Large preview)

This way, the distraction of other menu categories is gone and now the user only has to focus on customizing the selected item before placing it in the cart.

Again, what this comes down to is being able to predict your users’ steps before they even get there. You can use either the top or bottom navigation to aid in this process, but it’s best to place initial steps in a sticky top bar and later steps at the bottom as they near conversion.

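To make the pattern concrete, here’s a minimal TypeScript sketch of that top/bottom split (the .step-nav and .action-bar selectors are hypothetical placeholders, not any of the sites above):

    // Pin a step navigation to the top and a conversion bar to the
    // bottom of the viewport: the script equivalent of the CSS rules
    // `position: sticky; top: 0` and `position: fixed; bottom: 0`.
    const stepNav = document.querySelector<HTMLElement>('.step-nav');
    const actionBar = document.querySelector<HTMLElement>('.action-bar');

    if (stepNav) {
      stepNav.style.position = 'sticky';
      stepNav.style.top = '0';
    }

    if (actionBar) {
      actionBar.style.position = 'fixed';
      actionBar.style.bottom = '0';
      actionBar.style.left = '0';
      actionBar.style.right = '0';
    }

In practice you’d put these rules in your stylesheet; the script form simply makes the two positioning schemes easy to compare side by side.
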
4. Display “Sidebar” Widgets On Digital Publications

Without a sidebar on mobile, you might try to tuck the widgets that would otherwise be there at the bottom of your content. But unless you know that your content is going to be read all the way through and that visitors will keep scrolling for more, there’s no guarantee they’ll see anything you put down there.

So, when it makes sense to do so, use sticky bars to add only the most essential sidebar-esque content.

Let’s take Inc., for example.

Inc.’s PWA comes with a sticky subscription bar, banner ad and secondary hamburger menu. (Source: Inc.) (Large preview)

There are three sticky elements that appear around Inc.’s articles:

  • A subscription form (which can be dismissed),
  • A banner ad (which cannot),
  • A floating hamburger menu.

The first two elements are fine since at least one of them is dismissible. However, the floating hamburger menu is problematic since it covers part of the content. Considering this is a content-centric site, it’s probably not a good idea to cover any part of the page.

The only way we might be able to excuse the placement of this fixed element is if it were to add extra value to the content. However, all it does is give readers more articles to read:

Inc.’s floating hamburger menu contains more articles to read. (Source: Inc.) (Large preview)

The goal on any content website is to get visitors to actually read the content. But if you’re presenting them with other options straight away, you’re only giving them more content to get distracted by.

The concept of this floating menu is a good one, but the execution isn’t great. I’d recommend displaying it only once visitors have scrolled at least 75% of the way down the page. That way, it comes into view right when they should be looking for related content to read.

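Here’s a minimal sketch of that reveal in TypeScript (the .related-menu element and is-visible class are hypothetical placeholders, not Inc.’s actual markup):

    // Reveal a floating "related content" menu only after the reader
    // has scrolled roughly 75% of the way down the page.
    const menu = document.querySelector<HTMLElement>('.related-menu');

    function onScroll(): void {
      if (!menu) return;
      const scrolled = window.scrollY + window.innerHeight;
      const progress = scrolled / document.documentElement.scrollHeight;
      menu.classList.toggle('is-visible', progress >= 0.75);
    }

    document.addEventListener('scroll', onScroll, { passive: true });

The listener is registered as passive so the reveal logic never blocks scrolling on slower phones.
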
As for publications that get the sticky elements right, look for ones that keep it simple.

The New Yorker, for instance, does a nice job of using the sticky navigation bar and a darker, less distracting bottom bar to promote its subscriptions:

The New Yorker uses sticky bars to promote its paid subscriptions. (Source: The New Yorker) (Large preview)

If it’s important to you to get subscribers for your publication — especially paid ones — this is a good way to make use of the fixed bars on mobile.

If, instead, you’re more focused on getting the word out about your content, then a sticky bar like the one The Billings Gazette uses would be better:

The Billings Gazette prioritizes sharing its content over subscriptions. (Source: The Billings Gazette) (Large preview)

This is really well done. Social media sharing options are limited to the ones that make the most sense for mobile users. The same goes for the other share options here: WhatsApp, text, and email. When clicked, the corresponding app opens, so readers don’t have to use their browser sharing options or copy-and-paste the link.

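That kind of native hand-off is straightforward to build. Here’s a sketch in TypeScript, using the Web Share API where the browser supports it and falling back to WhatsApp’s standard share URL (the function name is a placeholder, not the Gazette’s actual code):

    // Open the native share sheet if available; otherwise fall back
    // to WhatsApp's documented share URL.
    async function shareArticle(title: string, url: string): Promise<void> {
      if (navigator.share) {
        await navigator.share({ title, url });
      } else {
        const text = encodeURIComponent(`${title} ${url}`);
        window.open(`https://wa.me/?text=${text}`, '_blank');
      }
    }
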
In all honesty, I’m not sure it should be an either/or. I think you could use the top bar to promote your subscription so long as it’s easy to dismiss. Then, the bottom bar could be used for sharing links. Just make sure one of the bars moves out of the way so you can maximize the reading space.

Wrapping Up

Bottom line? It’s time to start using your sticky mobile elements for more than just storage of a logo, hamburger menu or search bar.

As we’ve seen here today, the key is to figure out what your users need most from you. Then, use your sticky elements to build a shortcut that makes a difference in their experience.

(ra, yk, il)
Categories: Design

2019: A Smashing Year In Review

Tue, 12/31/2019 - 05:30
By Rachel Andrew

2019 has been quite a productive (sometimes challenging, but ultimately very successful) year for the Smashing team. In this annual round-up, I’d like to share some of my thoughts and those of some of the Smashing team, as we look back on the past year as well as look forward to 2020.

Travel And Friendships

As always, my 2019 has involved a lot of travel. In addition to my conference speaking engagements and travel to W3C meetings, I attended all four of our Smashing conferences; I ran CSS Layout workshops in Toronto, New York and San Francisco. The conferences are a time when most of the team is together in person.

The home of Smashing is in Freiburg, Germany, and before SmashingConf Freiburg, we held a big team meeting, with almost everyone who is involved with Smashing able to take part. There have been many changes in the Smashing Team this year, and that meeting in Freiburg was a chance for us all to come together; I believe that it was one of the most valuable things we have done this year.

There are many challenges in doing all of the things we do as a small (mainly part-time and remote) team. However, if we keep talking and keep the Smashing community at the heart of everything we do, the past year demonstrates that we can achieve amazing things!

The Conferences

The SmashingConf team of Amanda Annandale, Charis Rooda and Mariona Jones are a force of nature. They seem to achieve the impossible and (as Charis told me) still have time to enjoy the surroundings of the places they visit.

The SmashingConf team in Toronto

I’m always blown away when I walk into the venue and see what has been achieved — even before the event starts. Artwork created by the very talented Ricardo Gimenes is everywhere — such as the movie posters from Toronto, and the artwork in the theater we use as a venue in New York.

Our movie posters in Toronto (Photo credit Marc Thiele)
The signage in the theater in New York (Photo credit Drew McLellan)

One of my favorite things to do at the conferences is to lead the Smashing Run which we normally manage to do on both conference days. This is becoming quite a fixture, with several attendees and speakers running and chatting for half an hour before breakfast. I’m already looking forward to our inaugural run in Austin in 2020, although it may be a bit of a warm one!

I sometimes help the conference team out when words need writing or editing, and sometimes when the legality of balloons is called into question. As Amanda Annandale (Senior Event Manager) remembers:

“September marked my third year at Smashing, and while it provided a whole new set of challenges, it also provided a huge sense of accomplishments. The conference team sat down at the end of 2018 and was able to make some big plans for the future.

“It’s been amazing to see these plans (from organization to side-events to new locations), and our team, come together. But, new tasks can bring about some hilarious roadblocks. Smashing is on a long and necessary quest to reduce our carbon footprint. BUT, Vitaly is rather partial to balloons.

“For those who may not know (because Rachel Andrew and I were shocked to learn), foil balloons are heavily regulated in the state of California. They are (we discovered while spending a disproportionate amount of time researching eco-balloons versus plastic balloons) obviously bad for the environment. We’ve never been so happy to find a company making fully eco-friendly balloons that are fully biodegradable in a very short amount of time! This experience definitely strengthened our resolve.

“We are now working with a company out of Austin to improve our printing processes to be more eco-friendly, and working with each of our caterers to reduce our waste. We still have a way to go, but we’re aiming for a Smashing impact in 2020!”

The (eco-friendly) balloons are deployed in San Francisco (Photo credit Marc Thiele)

Conferences are expensive to produce and we are fortunate to have some wonderful partners who help us to create these events. They are looked after by our partnerships manager, Mariona Jones, who has been joined this year by Esther Fernández. Between them, they are working to bring together all of the Smashing properties in order to create new partnership opportunities. Mariona told me,

“The most exciting moment this year has been being able to create, together with the whole team, the Smashing Media platform, bringing together events, the magazine, the publishing house, membership and Smashing TV. The highlight of the year is undoubtedly the birth of the partnerships and data office and the addition to the Smashing family of my dear colleague Esther.”

Esther adds,

“Joining the Smashing team has been one of the highlights of the year. It’s been a pleasure to enter this community and to make the Smashing conferences happen.”

I’m looking forward to working together with Mariona and Esther this year as we open up new opportunities for partnerships that cross the boundaries of the different parts of the platform!

Smashing Magazine

The heart of what I do at Smashing is the online magazine; as Editor in Chief, my role here is to try to bring you web design and development content that will inform you, help with your day-to-day work, and also make you think. We publish almost every weekday, so always have a large list of articles moving through the writing, editing and publishing process.

Looking through our analytics, I pulled up a list of the most popular articles published in the last year. The range of topics making it to the top may surprise you, and demonstrates the wide range of subjects we cover here. We have the Front-End Performance Checklist, an article comparing Sketch, Figma, and Adobe XD, and two articles about designing tables: Table Design Patterns On The Web and How To Architect A Complex Web Table. HTML and CSS are always popular, with How To Align Things In CSS, How To Learn CSS and HTML5 Input Types: Where Are They Now? all getting a top spot. They are joined by Styling An Angular Application With Bootstrap and Using Vue.js To Create An Interactive Weather Dashboard. That’s quite the range of subject matter!

Covering such a broad spectrum of web design and development is certainly a challenge and one I couldn’t do alone. My subject editors Alma Hoffmann, Chui Chui Tan, Drew McLellan and Michel Bozgounov bring their expertise to the topics they help curate. Copy editors Andrew Lobo and Owen Gregory help preserve the tone of voice of our authors while ensuring the content is easy to understand for an international audience. Cosima Mielke ensures that the newsletter is well researched along with many other roles (including eBook production), and Yana Kirilenko does a great job of getting articles from Google Docs, Dropbox Paper and various Markdown apps into the CMS. Senior editor Iris Lješnjanin does an amazing job of keeping everything on track, fielding the email, hitting publish on most of the pieces, and making sure that we are all using smashingly correct punctuation! I am very grateful for all of their work.

Vitaly and I are well-known faces in the web community, however, there is a whole cast of folk working behind the scenes to keep the magazine running successfully. I don’t say thank you enough, but I sincerely appreciate all the work that goes into the magazine across the team.

Smashing Magazine turned 13 this year, and to mark the occasion I shared personal stories from the team — you can read more about the people behind the Smashing scenes over here.

This year, I’ve tried to bring the various facets of the business into the magazine. For example, each conference results in a set of high-quality videos of the presentations, which were previously hidden away on Vimeo. This year, I’ve published a write-up of each event, listing all of the videos. I hope that this means more people can benefit from the wisdom of our speakers, and it also shows the brilliant work the conference team does in curating and putting on these events.

Something that I really enjoy is to publish articles by folks who have never written for a large publication before and to help their articles go through the process. Earlier this year, I wrote an article on Pitching Your Writing To Publications. If your 2020 goals include writing for Smashing Magazine, drop us a line with an outline of your idea. We would love to work with you!

Smashing Books And Our First Print Magazine

In 2019, we published two printed books, plus our very first print magazine. Art Direction For The Web was published in the spring, and at the end of the year, we began shipping Inclusive Components.

In the middle of the launch of Inclusive Components, we welcomed a new team member, Ari Stiles. She told me,

“It was challenging and fun to start working on the Smashing Library right after Heydon’s book was released, when promotion was already in full swing. A bit like stepping in front of a firehose — but in a good way! It helps that Inclusive Components is a well-written, timely book. I love helping people discover new and helpful resources like this one, and I’m excited about all of our new books for 2020.”

Selecting a topic for our first print magazine was tricky. We wanted these magazines to be a snapshot of the industry at a certain time, but also to have a longer shelf life than tutorials on topics that will be out of date in a few months. Ultimately, for issue one, we chose a subject that was at the forefront of many minds in 2019 — that of ethics and privacy. The collection of essays I commissioned is designed to make you think, and we still have a few print copies and the digital version, if you would like to read them.

Categories: Design

New Adventures Ahead! (January 2020 Wallpapers)

Tue, 12/31/2019 - 04:30
By Cosima Mielke

Let’s welcome 2020 with a new wallpaper! After all, the new year is the perfect occasion to tidy up your desktop and start on a fresh, blank slate — no clutter, just the things you really need and space for what’s about to come. And some inspiration, of course.

As every month for more than nine years now, artists and designers from across the globe once again took out their favorite tools to create wallpapers to inspire you, make you smile, think, or simply add a blob of color to a dark winter day. The wallpapers are available in versions with and without a calendar for January 2020 and can be downloaded for free. And since this little challenge has brought forth so many unique artworks over the past few years, we also assembled a selection of older January favorites at the end of this post. We wish you a wonderful start into the new year and a lot of exciting adventures to come your way in 2020!

Please note that:

  • All images can be clicked on and lead to the preview of the wallpaper,
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t in any way influenced by us but rather designed from scratch by the artists themselves.
Submit your wallpaper

Do you have an idea for a February wallpaper design? We are always looking for creative talent to be featured in our wallpapers posts. Don’t be shy, join in! →

Month of the Garnet Birthstone

“I wanted to approach the monthly challenge from a more unique perspective. Instead of choosing a cliché subject that has been done way too many times before, I chose a lesser-known, special subject. The birthstone. Enjoy.”

Designed by Bram Copermans from Belgium

Rubber Ducky Day

“Winter can be such a gloomy time of the year. The sun sets earlier, the wind feels colder and our heating bills skyrocket. I hope to brighten up your month with my wallpaper for Rubber Ducky Day!”

Designed by Ilya Plyusnin from Belgium

New Year, New Beginnings

— Designed by MasterBundles from USA

Good Intentions

— Designed by Jonathan Verhaegen from Belgium

The King Of Rock And Roll

“On January 8th 1935, the creator of the rock ‘n’ roll genre was born and he’s back for his final debut on this wallpaper!”

Designed by Bailey Lievens from Belgium

Aquaman

“Aquaman relaxing in the ‘nice’ January weather.”

Designed by Ricardo Gimenes from Sweden

It’s Snowing!

“It is January, it’s cold and snowy… In the house, the fireplace is on and it’s hot. Only penguins are enjoying time out there.”

Designed by Veronica Valenzuela from Spain

Laughter Is An Instant Vacation

“These are polarized times we’re living through. It seems like there is division all around. So sometimes you just have to find the time to remember that the one thing that connects us all is a good laugh. And there is no better laugh than the belly laugh! So on the 24th of January, let’s celebrate one of life’s truly great joys and let it all hang out!”

Designed by Ever Increasing Circles from the United Kingdom

Chocolate Cake Day

“I really love chocolate cake, so when I found out about “Chocolate Cake Day” I had to make a wallpaper about it!”

Designed by Aaron Claes from Belgium

National Popcorn Day

“In this epic Netflix and Chill era, nothing has gotten more important than popcorn during the newest blockbuster! Time to gain awareness about celebrating our delicious guilty pleasure during the movies!

A little story around the wallpaper: Mr Popcorn and Mrs Popcorn are enjoying a great movie with… You guessed it right, popcorn!! But Mr Popcorn is somewhat annoyed as his Mrs eats Popcorn from his head instead of eating from the bucket right in between the loved ones.”

Designed by Nicolas van der Straten Ponthoz from Belgium

Earth’s Rotation Day

— Designed by Bob Storms from Belgium

Go Green & Save The Earth

“Taking care of the Earth is not just a responsibility, it’s a necessity. So I designed a wallpaper to remind everyone what we can do to help save the planet.”

Designed by Farhat Asif from India

Stickers from the 70’s

“I didn’t want to make a typical New Year themed wallpaper for January so I started looking for fun ideas. Apparently January 13th is day of the stickers and so I made something original with this. I wanted to add an extra challenge that was totally out of my comfort zone. So I designed everything in seventies style. Grooovy baby!”

Designed by Bastien Corens from Belgium

The Wolf’s Month

“I love wolves!”

Designed by Morgane Van Achter from Belgium

Oldies But Goodies

The beginning of something new, the colors of winter, local New Year’s traditions — these are just a few of the things that inspired people to create a January wallpaper over the years. Below you’ll find a selection of designs from our archives that are just too good to be forgotten. Enjoy! (Please note that these wallpapers don’t come with a calendar.)

Start Somewhere

“If we wait until we’re ready, we’ll be waiting for the rest of our lives. Start today — somewhere, anywhere.”

Designed by Shawna Armstrong from the United States

Facts

“I was reminded of a simple fact while I was browsing for inspiration for this wallpaper. I’ve read on Wikipedia that January is the coldest month on most of the northern hemisphere and the hottest one on most of the southern hemisphere. I found it fascinating that someone in Australia is enjoying a surf while I am watching the first snowflakes of the winter. I was hoping to create a wallpaper that will serve as a reminder of the fact that we live in a fascinating world full of varieties and contrasts. The old-worn-out-encyclopedia style hopefully emphasizes the educational theme of the wallpaper.”

Designed by Danijel Gajan from Serbia

Hidden Gem

“Kingfishers are called ‘ijsvogels’ (ice-birds) in Dutch. Not because they like the winter cold, but because of the intense blue and teal colors…”

Designed by Franke Margrete from the Netherlands

Freedom

“It is great to take shots of birds and think about the freedom they have. Then I start dreaming of becoming one and flying around the world with their beautiful wings.”

Designed by Marija Zaric from Belgrade, Serbia

January Is The Month For Dreaming

“It can be very hot in Australia and very cold in Europe so I think that it is a good month for dreaming and making plans.”

Designed by Tazi from Australia

Travel And Explore

“For once you have tasted flight you will walk the earth with your eyes turned skywards, for there you have been and there you will long to return. (Leonardo da Vinci)”

Designed by Dipanjan Karmakar from India

A New Beginning

“I wanted to do a lettering-based wallpaper because I love lettering. I chose January because for a lot of people the new year is perceived as a new beginning and I wish to make them feel as positive about it as possible! The idea is to make them feel like the new year is (just) the start of something really great.”

Designed by Carolina Sequeira from Portugal

Open The Doors Of The New Year

“January is the first month of the year and usually the coldest winter month in the Northern hemisphere. The name of the month of January comes from ‘ianua’, the Latin word for door, so this month denotes the door to the new year and a new beginning. Let’s open the doors of the new year together and hope it will be the best so far!”

Designed by PopArt Studio from Serbia

Pop, Fizz, Clink

“I wanted to try my hand at creating a low poly illustration and thought a champagne glass would be fun.”

Designed by Denise Johnson from Chicago

Oaken January

“In our country, Christmas is celebrated in January when oak branches and leaves are burnt to symbolize the beginning of the New Year and new life. It’s the time when we gather with our families and celebrate the arrival of the new year in a warm and cuddly atmosphere.”

Designed by PopArt Studio from Serbia

Snowy Octopus

— Designed by Karolina Palka from Poland

Soft Wishes

“Let yourself be carried away by the delicate desires of your heart…”

Designed by Katia Piccinni from Italy

Boom!

— Designed by Elise Vanoorbeek from Belgium

The Early January Bird

“January is the month of a new beginning, hope and inspiration. That’s why it reminds me of an early bird.”

Designed by Zlatina Petrova from Bulgaria

Winter Leaves

— Designed by Nathalie Ouederni from France

Caucasian Mountains

“From Caucasus with love!”

Designed by Ilona from Russia

Wolf-Month

“Wolf-month (in Dutch “wolfsmaand”) is another name for January.”

Designed by Chiara Faes from Belgium

Be Awesome Today

“A little daily motivation to keep your cool during the month of January.”

Designed by Amalia Van Bloom from the United States

Blue Neon Sign

— Designed by Jong S. Kim from the United States

Join In Next Month!

Thank you to all designers for their participation. Join in next month!

(cm, il)
Categories: Design

Smashing Podcast Episode 6 With Luca Mezzalira: What Are Micro-Frontends?

Mon, 12/30/2019 - 21:00
By Drew McLellan

In this episode of the Smashing Podcast, we’re talking about micro-frontends. What is a micro-frontend and how is that different from the sort of approach we might be taking at the moment? Drew McLellan finds out from micro-frontend pioneer, Luca Mezzalira.

Transcript

Drew McLellan: He’s a Google Developer Expert on web technologies and manager of the London JavaScript community. With more than 15 years’ experience, he currently works as VP of Architecture at DAZN, the sports video platform that delivers live and on-demand sports content to millions of users. He’s the author of the book Front-End Reactive Architectures for Apress and is also a technical reviewer for Apress, Packt Publishing, Pragmatic Bookshelf, and O’Reilly, as well as being an experienced public speaker at technical conferences all around the world. He’s Italian, sports a handsome beard, and clearly has deep knowledge of the web platform. But did you know he once crossed South America on an ostrich? My smashing friends, please welcome Luca Mezzalira. Hi, Luca. How are you?

Luca Mezzalira: I’m smashing.

Drew: I want to talk to you today about the subject of micro-frontends. Now this is a concept that’s completely new to me, certainly by the name, and I expect it’s new to a lot of our listeners, too. Before we get into micro-frontends themselves, I guess we should understand the problem that you’re looking to address with them. So perhaps you could tell us a little bit about how you see applications in a more traditional way and what sort of problems do those things hit that maybe micro-frontends might be the solution to?

Luca: Okay, that’s a good starting point, in my opinion. Usually when you design a new greenfield project and you want to build front-end applications, you have a few architectures that you can leverage. You can use a single-page application, you can use a server-side rendered application, or you can use a multi-page application composed of simple HTML pages. Obviously those are all valid options, widely used by many, many developers. The real problem we’re trying to solve here is how you scale these concepts with distributed teams, to hundreds of developers working on the same codebase. Because the reality is, when you’re working on these platforms, and in particular on a SaaS platform, you need multiple developers and multiple teams working on the same project. And obviously the way you do, for instance, acquisition or retention is completely different from the way you expose the catalog or show a specific part of the platform.

Luca: In my experience, I’ve worked a lot with single-page applications and with server-side rendered applications, but at some point at DAZN I was asked to think about a way to scale our technical team. For the back end we have a solution, microservices, so we can scale our APIs independently, taking into consideration the scale and volume for a specific throughput on a specific API. On the front end it’s really more difficult, because if you think about it, you don’t have a technical problem to solve when you need to scale a single-page application, for instance. For server-side rendering you probably have some, but a single-page application is distributed by nature, because it runs on the client side, on many different clients.

Luca: So the only things being loaded are some static files, like CSS, HTML, and JavaScript, served by a CDN, and in that case you can scale accordingly; it’s not a big challenge. The real challenge is how you scale the teams working on the same platform, because sometimes the challenges faced by one team are completely different from the challenges faced by another team, and usually you end up making a lot of tradeoffs between them. Think through a normal use case: usually when you start a platform, you start small. You create a quick single-page application alongside your monolith, you set everything up in your CI/CD just once for front end and back end, and then you iterate on your logic. But the reality is that when you have success, you need to evolve this part, and keeping the same architecture isn’t always what creates the benefit for your business, because you may hit some bottlenecks.

Luca: So now, going back to the single-page application part: if we want to scale a single-page application, the challenge is not technical, it’s with humans, if you want. How can we scale teams working on the same application? What I did a few years ago was start to look at possible architectures and principles that would allow me to scale the front end as well as the back end. On the back end you have principles like microservices for this. I started to look at different solutions, and I came up with micro-frontends, which at the beginning we didn’t even call by that name, because four years ago there wasn’t, let’s say, a name for that specific architecture.

Luca: But the reality is taking a monolith, a single-page application, and slicing it in a way that allows us to focus on a tiny problem, smaller than the entire application, and trying to solve that in the best way possible, technically speaking. Obviously that leads to having independent pieces of your front-end application that can be deployed to production without affecting all the others. So the challenge for micro-frontends is basically figuring out a way to take a single-page application or a server-side rendered application and create an artifact that is, let’s say, as close as possible to a business domain, and that can be deployed independently.

Drew: So, I mean, you mentioned microservices on the back end. Conceptually, this is a similar sort of solution to the problem that microservices solve on the back end, but ported over to the front end. Is that a rough equivalence, or is it more involved than that?

Luca: Yes, it’s a way to solve the same problem that microservices are trying to solve on the back end, but this time on the front end. When I started this journey, you start to think about it and to evaluate different approaches, and in the last few months I came up with what I call the micro-frontend decision framework, which is basically four steps you use in order to identify an approach for micro-frontends. Because up to now, we usually pick one framework that designs the architecture for us, and we implement on top of that architecture. Think about Angular, or React with Redux: you have all the pieces that are needed, but you don’t take architectural decisions. You take design decisions, or decide how you implement on top of that specific architecture.

Luca: With micro-frontends you need to take a step back. First, you need to think about how you want to slice your application. There are usually two options for that: you can slice horizontally, so you have multiple micro-frontends in the same view, or you can slice vertically, so you always load one micro-frontend at a time. Those decisions are quite key, because they cascade into the other options you have, based on that initial decision. So the first step, as I said, is deciding how you want to slice your application. The second is how you want to compose your application. You could have, for instance, an app shell that loads one or more micro-frontends in the same view; or an application server that composes different fragments of your application, different micro-frontends, and then serves the final view to your user; or you could use Edge Side Includes, a standard supported by CDNs for composing a page and serving it from there.

Luca: Those are three of the options that you have. Then, apart from composing, you need to think about how you want to route: how you route from, I don’t know, /login or /signin to the catalog part or a specific detail object. Here, too, you have three options. You can do it at the application server; you can do it at the CDN level, with Lambda@Edge or Cloudflare Workers or anything else; or you can do it client-side, so you have logic inside your app shell that says, okay, for this specific URL you need to load another view or another micro-frontend.

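As an editorial aside, here is roughly what that client-side option can look like in TypeScript: a bare-bones app shell that loads one vertically sliced micro-frontend per URL. The routes and bundle paths are hypothetical, illustrative only, not DAZN’s actual code.

    // Map each route to the bundle of the micro-frontend that owns it.
    const routes: Record<string, string> = {
      '/signin': '/bundles/signin.js',
      '/catalog': '/bundles/catalog.js',
    };

    async function loadMicroFrontend(path: string): Promise<void> {
      const bundle = routes[path] ?? routes['/catalog'];
      // Each bundle is assumed to mount itself into the page on load.
      await import(bundle);
    }

    window.addEventListener('popstate', () => loadMicroFrontend(location.pathname));
    loadMicroFrontend(location.pathname);
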
Luca: And the last bit is how you communicate between micro-frontends. Usually, if you have multiple micro-frontends on the same page, there is higher complexity in managing this communication, because you need to keep the different micro-frontends independent. That means you cannot have any reference to how the page is structured. So you usually use things like custom events, or an event emitter that is injected inside each single micro-frontend, and the micro-frontends communicate through that. Obviously, in the other case, when you have a vertical split of your micro-frontends, it’s way easier, because the representation of a vertical micro-frontend is essentially a single-page application or a single page, and in that case it’s easier to create and share state across the entire micro-frontend.

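To make that concrete, here’s a minimal sketch of the custom-events option in TypeScript (the event name and payload are hypothetical): one micro-frontend dispatches an event on window, and another reacts to it without any reference to how the page is structured.

    // Micro-frontend A announces that something happened...
    window.dispatchEvent(
      new CustomEvent('catalog:item-selected', { detail: { id: 'abc-123' } })
    );

    // ...and micro-frontend B, deployed independently, listens for it.
    window.addEventListener('catalog:item-selected', (event: Event) => {
      const { id } = (event as CustomEvent<{ id: string }>).detail;
      console.log(`Selected item: ${id}`);
    });
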
Luca: If you have multiple micro-frontends together, on the other hand, you should avoid having state that is shared across micro-frontends, because otherwise you are coupling things, whereas the whole concept here is decoupling and independent deployment. So the challenges of a horizontal split, which is the first decision you could take, and of a vertical split are completely different, and we need to be well aware which one fits our use case.

Drew: So rather than a specific technical solution, micro-frontends are very much a design pattern that you’d implement in whatever technology is appropriate for the particular problem you’re trying to solve?

Luca: Yeah, more than technology, I would say we pick the right architecture for the right job. Just to give you an example: there is a famous framework, a fairly new one for micro-frontends, called the Luigi framework, which was open-sourced by SAP. What it does is create some iframes in which the micro-frontends are wrapped. Now, if you think about using iframes nowadays on a public website, where maybe SEO or other features are mandatory, that could be problematic.

Luca: But in the case of SAP, if you think about it, they have enterprise applications where they can control the browser that the user is using, they can control the environment, and they don’t need to be available on a multitude of different browser versions. So for them, this allows certain areas of the application to stay constant while other areas change independently, without any problem. But obviously an iframe solution wouldn’t work in other situations. To give another example, Spotify was using iframes at the beginning. In fact, the desktop application is still composed of multiple iframes, and each single iframe is a tiny application that does just one thing: the music player, say, or the recommendations, whatever it is. They tried the same approach on the web, but they dismissed it this year in order to move back to a single-page application.

Luca: They explained why in their technical blog: if you apply that approach to millions of users, each iframe needs to load the same vendor files every time. You get a lot of duplicated dependencies, and the time to interact with your page gets longer. So in reality, this approach can fit certain use cases, but it won’t fit all of them. That’s why I’m saying, as I described before, if you have a decision framework that helps you address these things, you can say: okay, I sliced the application in this way, therefore I have these options available when I want to compose, when I want to route, when I want to communicate. It should guide you towards the right decision at the right time.

Luca: Then obviously, apart from those four decisions, there are many others, like how you create consistency in the design system across all the micro-frontends, or how you manage dependencies and avoid dependency clashes inside the micro-frontends. But the reality is that those four decisions I mentioned before will allow you to take all the others, I won’t say in an easy way, but in a quicker way than reviewing the whole spectrum of possibilities, because you’ve already set the cornerstones, the four pillars, that inform every other decision.

Drew: You mentioned before the way that micro-frontends can help with the structure of teams within your organization, and having lots of people working on the same application. What are some of the implications, then? Does it make any difference if your teams are distributed or co-located, and are there any particular challenges faced there?

Luca: Yes. I can tell you the story of DAZN, the company where I work. At DAZN we had a nice challenge: currently we have over 300 people working on the front end and back end of our platform. It’s an OTT platform that streams live sports events globally. The interesting bit is that with microservices we more or less know how to manage distributed teams, and we have four dev centers. We wanted to put front-end teams in each dev center, and we tried this approach and it worked pretty well. With micro-frontends we were able to assign different business domains to different locations and allow cross-communication between teams inside a specific business domain. The worst-case scenario there is that if you have to speak with another team in your same business domain, they’re within walking distance of your desk. If instead you need to discuss something with a distributed team, with somebody in London rather than Amsterdam, or in Poland, you just organize a call.

Luca: But those kinds of communication are rarer than the ones happening across teams inside the same location, and that’s why we started working this way. The first thing we did was look at how our users were interacting with our website and how the company was structured. We identified the four key areas we are working on, which are currently: acquisition and retention; porting the core application to multiple TVs and mobile devices; the core domain, which for us is the video player and the discovery phase of our content; and finally all the back-office elements. Once I had identified those four areas, we assigned each of them to a single dev center.

Luca: Obviously there are some points of contact between those areas, but there are ways to mitigate that: having some initial workshops with the different teams in different locations, then working towards the same API contract, for instance, or the same goal, with some checkpoints during development. The nice thing about the micro-frontend approach is that we finally understood deeply how our system was working. We sat down and analyzed how we were structured, and we changed not only the way we were architecting things but also how the company was working. That was a different approach from what I had seen so far, but it’s proving to work pretty well, in the sense that each single team can interact with the teams in the same location, in the same domain.

Luca: So they are talking exactly the same language, if you think in terms of domain-driven design. And if they need to interact with other teams, they can share a workshop or fly to another dev center, and it’s not a problem. In this way we increased the throughput and reduced the communication overhead and the tangle of dependencies that were holding things up before.

Drew: And do all these teams need to be using a standardized JavaScript framework? Do they all need to be coding in the same thing, all React or all Angular, to enable interoperability between them, or can people use different things for different micro-frontends?

Luca: Yeah. At DAZN we decided to slice our micro-frontends vertically, and that decision gave us the freedom to pick the technology we need for each single micro-frontend, considering that we always load one micro-frontend at a time. This means, for instance, that how we build the landing page is different from the sign-in/sign-up journey. We’re mainly using React at the moment, but, for instance, I remember when React 16 was released, we were able to put React 16 into production, even though it wasn’t yet the stable version, for just the landing page, and see if it was working without affecting all the other teams.

Luca: And then, at their own speed, at their own pace, each team was updating their own stuff. That also allows us to try new technologies, or new assumptions we have about the existing application, with a certain amount of users, because we also implement candidate releases for the front end, and several practices that allow us to try things in production at certain times and see how they are working.

Luca: The beauty of this approach is that we can independently choose the right tool for the right job, rather than a common denominator across the entire stack. Because, as you can imagine, when you start to work on a project, the decisions you make in the first few years are probably different from the decisions you make on a trajectory where the company is growing, the business is evolving and maturing, and the challenges are completely different. It wouldn’t be flexible enough, or agile enough, to stick with the same decisions we took two years ago, particularly in a company like DAZN, which moved from basically zero to 3,000 employees in three years. As you can imagine, that was massive growth, for the company as well as for the user base.

Drew: Is there an established way for the different micro-frontends to share data and communicate with each other, for example, to keep each other in step with the same view?

Luca: Yes, there is. It depends which path of the decision framework you’re going to take, because if you take the vertical slice, it becomes very easy. What we have in order to communicate between micro-frontends is an app shell that loads the micro-frontends inside itself. What it does is store everything that has to be shared across different micro-frontends, either in web storage, session or local storage, or in memory. Based on that, the micro-frontend that is loaded can retrieve this information from the app shell and then consume it, amend it, or change it. It’s completely up to how you slice the application, but to give an example: when users need to authenticate, they go to the sign-in page; when they sign in, the APIs are consumed and they provide a JWT token; the micro-frontend passes this to the app shell, and the app shell saves it in web storage.

Luca: Then, after that, the app shell loads the new authenticated area of the application, and those authenticated areas retrieve the JWT token from the app shell and perform either a refresh of the access token or a validation of some data stored inside the JWT token. So it’s basically using information that was produced by another micro-frontend, at will.

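A rough sketch of that token hand-off in TypeScript, assuming the app shell uses session storage (the key and function names are hypothetical, not DAZN’s implementation):

    // The sign-in micro-frontend hands the fresh JWT to the app shell...
    function onSignInSuccess(jwt: string): void {
      sessionStorage.setItem('authToken', jwt);
      window.dispatchEvent(new CustomEvent('auth:signed-in'));
    }

    // ...and an authenticated micro-frontend later retrieves it before
    // calling an API or refreshing the token.
    function getAuthToken(): string | null {
      return sessionStorage.getItem('authToken');
    }
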
Drew: It sounds like a very interesting concept and I can potentially see lots of big advantages to working this way, but it can’t be without its challenges, surely. Are there any particular things that are more difficult to deal with when architecting things in this way?

Luca: I think first and foremost, the main challenge I see is the shift of mindset. Before, we were used to having tech leads or lead developers deciding everything around an entire application, taking all the decisions. Now we finally move from this centralized entity to a decentralized one that is local to each domain. As you can imagine, this brings some challenges: if before we had someone tracing the path, now we have multiple people at the top, each defining the right path inside their own domain, and that is a huge shift of mindset.

Luca: On the other side, I think the complexity is in accepting that sometimes the wrong abstraction can be more expensive than duplicated code. That’s something I found very challenging in managing developers, because they think, “Okay, now I can reuse this object or this specific library hundreds of times inside the project,” but the reality is very different. I’ve seen component libraries that were abstracted, where a lot of time was spent making them the best code ever, in perfect shape, but in reality they were used just twice, so the effort wasn’t worth it. On the other side, I’ve seen libraries that started with a couple of use cases for a specific component; then those use cases became ten, and the code became unmaintainable.

Luca: So trying to add a new function inside the same component can be more of a risk than a benefit. I think the other thing we need to understand with micro-frontends is how much we want to share and how much we want to duplicate. And there is no harm, honestly, in duplicating. In our case, for instance, we duplicate the footer and header, and we did that mainly because we changed the header three times in four years. As you can imagine, centralizing these, assigning them to a team, and creating an external dependency for all the teams, the hundreds of developers that we have, is more of an issue than a benefit for the company, because we are not adding enormous value.

Luca: At the same time, we are currently refactoring our shared libraries, the payment library for example, because obviously payment has some logic behind it, and if we want to change it once, we don’t want to apply that change in multiple parts of the code. We want just one library that is the source of truth. But for the header and footer, even if there is a discrepancy of a pixel, or a function is deployed a week later, it won’t hurt the application.

Drew: So are there some telltale signs that people should look for when evaluating an application and thinking, “Oh yes, this would be a good candidate to move to a micro-frontend sort of architecture”?

Luca: Yeah, my suggestion would be, first and foremost, that I wouldn’t start a greenfield project with micro-frontends unless we know exactly how it should be built, and usually it’s very unlikely that you have this information, particularly if it’s a new platform or a new project that you’re working on for the first time; finding this information can be nontrivial. What I usually suggest is starting with an existing architecture, a single-page application, and then evolving it. In particular, I’ve found that using micro-frontends for legacy applications, or when we want to replace specific parts of the application, or when we have a project that we want to evolve and scale for multiple teams, are three use cases that I feel very strongly could suit the micro-frontend architecture. Obviously, that doesn’t mean that from now on everything should be built with micro-frontends, because micro-frontends are not a silver bullet at all.

Luca: What they are is an additional architecture that we can leverage in the front-end world. Up to now we had a certain number of architectures; now we have an additional one. But it brings a lot of challenges, because for server-side rendering or single-page applications there are clear patterns that were explored and then implemented by several frameworks and so on. With micro-frontends there isn’t yet just one established way to do things, but having the decision framework should allow people to make the right decisions for their use cases. There are a lot of misconceptions about what micro-frontends are and how they should be used, and lots of people think that maybe they are evil for, I don’t know, having too many libraries in one view, or other things.

Luca: The reality is, you need to understand the concept deeply, understand how to implement it, and then you can start to work on it. I fully agree that there are technical challenges and a lot of decisions to make; you cannot just start straight away with an editor in front of you, writing code and thinking, okay, now I’m creating a micro-frontend architecture. You need to understand the concept, understand the context, and also create governance around it, because the complexity is not just in writing the code; it’s also in understanding how all the pieces fit together, in the CI/CD part, the SEO part, and so on.

Luca: So micro-frontends do provide a level of flexibility, but they require a lot of effort to get the governance right. Because when you have the governance right, everything is smooth. Often, unfortunately I would say too often, I’ve seen companies that don’t spend enough time on the governance side, understanding the CI/CD for instance, because they don’t think it’s important. But for micro-frontends, as for microservices, getting the automation right will allow you to speed up development. If you don’t spend enough time on the automation bit, you risk having more burden than benefit.

Drew: I guess it’s like so many things in the web development world, where people are in danger of diving in with a technical solution before they’ve really understood the problem. It sounds like with micro-frontends it’s very much a case of needing to see the problem and then implement the solution, so you know what problem you’re solving. I guess the very nature of micro-frontends makes it very easy to start integrating them into an existing application, to spot a small problem and swap it out with a micro-frontend in order to solve that problem. Is that a reasonable suggestion?

Luca: Yeah, I would say so. In that case, the only thing I would suggest, if you start this way, is to look more at vertical slicing than horizontal slicing, because otherwise you have many additional problems to solve. Let’s assume, for instance, that you’re using Angular and you want to move to a new version of Angular: having two Angular versions living together without using iframes could be complicated, or even not possible. So approach the challenge not from the technical point of view but from the business point of view. Maybe, for instance, you take the sign-in part, which you want to rewrite with a different framework or a more up-to-date version of a framework, and then you route to it through the path. That can be a good way to replace a specific application, slowly but steadily.

Luca: What we have done in our case is basically apply the strangler pattern, a well-known pattern for microservices, but on the front end, based on the URL, the browser, and the country of the user. So slowly but steadily we were killing the monolith, which in this case was a single-page application, releasing our new application more often and watching the behavior of our users, whether it was improving the experience or causing any problems in our system. That allowed us to provide immediate value to the business, but at the same time to test our assumptions and see whether we were going in the right direction or not.

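A hedged sketch of that strangler-style routing decision in TypeScript (paths, countries, and origins are illustrative only, not DAZN’s configuration):

    interface RequestInfo {
      path: string;
      country: string;
    }

    // Routes already migrated to the new front end, and the countries
    // where the rollout is currently active.
    const MIGRATED_PATHS = ['/signin', '/signup'];
    const ROLLOUT_COUNTRIES = ['IT', 'DE'];

    function selectOrigin(req: RequestInfo): string {
      const useNewApp =
        MIGRATED_PATHS.some((p) => req.path.startsWith(p)) &&
        ROLLOUT_COUNTRIES.includes(req.country);
      return useNewApp ? 'https://new.example.com' : 'https://legacy.example.com';
    }

The same check can run in an application server or in an edge worker, which is what lets the monolith shrink route by route.
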
Drew: It sounds like a very attractive solution to some problems I’m sure a lot of people are facing. If I, as a developer, wanted to start investigating micro-frontends, where would be a good place to start?

Luca: Yes, currently I’m spending a lot of my spare time advocating for this architecture, because I think there are a lot of misconceptions. I’ve written several articles that are available on my Medium account, and I’ve recorded a lot of conference talks that you can find on YouTube without any problem. The other thing I would suggest, if you’re looking for some code examples and frameworks, is that the one I’d recommend starting with is single-spa, mainly because it has a vertical slicing approach, it’s easier to pick up, and you can start to understand the benefits of this architecture. Then there are many others available, like the Luigi framework I mentioned before, as well as many others currently out there that allow you to compose multiple micro-frontends in the same view.

Luca: Another interesting one that comes to mind is TailorJS. And there is OpenComponents, developed by OpenTable. But in general, if you start to search for micro-frontends, there are plenty of options out there.

Drew: That sounds great. Is there anything else that you wanted to talk about with regard to micro-frontends?

Luca: Yeah. Personally, I would suggest taking an open-minded approach to learning this architecture and this technical approach, mainly because I believe there is a lot of good in it, but we need to invest a bit of time to deeply understand how things work. Because obviously there isn’t just one way to do things. We’re used to picking up a framework and immediately starting to work with it, and it’s super productive, if you think about it.

Luca: But in this case you need to spend a bit more time understanding, as you said a couple of times, the problem. Understand which pattern allows you to best express the solution you have in mind, not only from a technical point of view but also from an organizational point of view.

Drew: So I’ve been learning all about micro-frontends today. What have you been learning about lately?

Luca: Recently there are two things I’ve been learning. Last week I was in Las Vegas for the AWS event, which is obviously a cloud conference. Pretty amazing: 70,000 people, and a lot of sessions spread across several hotels in Vegas. For me, serverless is the paradigm I’m studying the most, because I truly believe that in the future that will be the way we design and implement software, and obviously AWS is very prominent in this approach.

Luca: And the second topic is around management and how to be the leader of a tech team. Obviously I’m SVP of Architecture, I have a team of architects that I’m leading, and you can never rest, because you need not only to stay on top of the technical side but also to understand the people problems and how you can make your people successful. Because if they are successful, you are successful. You are first a technical facilitator, to a certain extent. Those, for me, are the two things I’m currently studying, on top of exploring the micro-frontend world.

Drew: If you, dear listener, would like to hear more from Luca, you can follow him on Twitter where he’s @LucaMezzalira or find his activities collected together at Lucamezzalira.com. Thanks for joining us today, Luca. Did you have any parting words?

Luca: No, but thank you very much for listening up to now.

(dm, ra, il)
Categories: Design

5 Things To Stop Doing In Mobile App Design

Mon, 12/30/2019 - 03:00
By Suzanne Scacca

I move to a new state every two to three years, so it’s important for me to live “light”. Every time I prepare to move, I go through the “Do I really need to keep this?” exercise. Although I’ve been doing this for almost 20 years, it never gets any easier. I wonder things like:

What if I sell my bed and never have a comfortable night’s sleep again? What if I get rid of the fancy dress I wore once but might need for some hypothetical future event? What if I decide to start baking cupcakes again and don’t have my cupcake tin anymore?

It’s easy to get attached to things when they served you well at one time or another. But if you take a closer look at the “stuff” you’ve accumulated, you’ll realize that a lot of it has lost its usefulness along the way.

I think it’s important to run through a similar kind of decluttering exercise in the work you do as a designer. That way, the apps you build always look fresh and modern instead of being weighed down by antiquated features or functionality that once had a purpose.

Before you start charging ahead into the new year, take a moment to reflect on how you approach mobile app design. If you’re still holding onto components or functionality that no longer serve any purpose or, worse, intrude on the user experience, it’s time for a change.

Want some help? I’m going to run through some elements you can afford to scrap from mobile app builds in 2020 and beyond.

1. Harmful FOMO Elements

You know why marketers, influencers, and designers use FOMO (i.e. it can be really effective in boosting sales). However, you also know how damaging it can be for users’ mindsets (not to mention the distrust they feel towards brands as a result).

You could avoid FOMO altogether, but it’s a tricky thing, isn’t it?

You know that (when left to their own devices) mobile app users may forget that your app even exists on their phones without something to pull them back in. But it’s too easy to go overboard with FOMO-inducing components.

For example, this is ToonBlast:

The ToonBlast gaming app has numerous modules and countdown timers to induce constant FOMO. (Image source: ToonBlast) (Large preview)

The home screen is incredibly overwhelming. Worse, those ticking clocks (there are four of them) are a nightmare for users who can’t help but click on things they feel they’ll otherwise miss out on. And even users who can ignore the timers won’t escape them completely: the game displays pop-up reminders for each of the countdowns, which are impossible to ignore.

This is FOMO at its absolute worst.

Even if reminders for each of the countdowns were sent as push notifications instead of disruptive pop-ups, it still would be bad for the user experience. There are just too many things competing for the user’s attention and each of the clocks is like a ticking time bomb.

I know it might seem like giving app users more reasons to engage is a good idea, especially if you’re struggling to attract and retain users. But if that’s really an issue, then you need to work on improving the core product first and foremost.

Going forward, I think we’d all do well to move away from harmful FOMO elements and embrace more simplified and stronger core products.

If you’re not sure what that looks like, I’d recommend turning your attention to Instagram:

Instagram is working on removing FOMO from its app. (Image source: Instagram) (Large preview)

Instagram is a simple and straightforward product. Users turn their news feeds into personal curations of people and accounts they want to follow while sharing their own content with the world.

Now, Instagram isn’t completely FOMO-free as you can see from the Stories bar at the top of the page. However, there’s nothing really urgent about the way these stories are displayed. They don’t take up much space in the app (unlike the way Facebook handles it, for instance) nor are there any screaming alarms that say, “Hey! So-and-so’s story is about to expire! Watch it now!”

That said, Instagram is working to remove the harmful effects of FOMO in its app by doing away with like counters and cracking down on influencers and companies that don’t mark ads as such. If you want to create a strong yet simple product that keeps harmful FOMO elements out of the picture, keep this one on your radar.

2. Out-of-Context Access Requests

Unlike mobile websites and PWAs, mobile apps have the ability to get in front of 100% of users who activate push notifications. But that’s the catch. Your users have to be willing to press “OK” or “Allow” when you display that push notification (or phone access) request pop-up.

So, how do you get more of them to do that without constantly shoving those requests down their throats?

Some brands haven’t figured this out yet, to be honest. Take Snapchat, for example.

Snapchat doesn’t like when users disable notifications and phone access. (Image source: Snapchat) (Large preview)

This is one of those apps that just goes way overboard when it comes to requesting access to users’ devices. It wants to:

  • Send push notifications.
  • Use the camera.
  • Use the microphone.
  • Access saved photos.
  • Enable location tracking.
  • And so on.

Rather than ask for access when it’s relevant, it often sends a deluge of requests first thing when users sign into the app. That’s the wrong way to create a welcoming environment for your users.

A better way to go about asking for access or permissions would be to place it in the context of the app — and only when it makes sense. I’ll show you a couple of examples.

This is the app for ParkWhiz:

ParkWhiz reminds users about the benefits of enabling location tracking. (Image source: ParkWhiz) (Large preview)

Look at the section called “Help Us Find You” toward the bottom.

Not only does ParkWhiz gently remind users to enable location tracking on their devices, but it does so by explaining the reasons why it would benefit them to do so. Notice also that this isn’t displayed in an intrusive pop-up at the point of entry. Instead, it’s in a spot in the app where, when enabled, it can help streamline the search experience.

YouTube is another app that does this well.

YouTube displays a small tooltip to remind users to turn on notifications. (Image source: YouTube) (Large preview)

In this example, YouTube quickly displays a tooltip over the disabled notification icon. The notice reads:

“You’re missing out on subscriptions! Tap the bell to turn on notifications.”

They’re right. I’m subscribed to this channel and, yet, I haven’t received notifications (push or email) about new videos for a while. I hadn’t realized this until I saw this reminder.

The way this is handled is nice. It makes users stop and think about what they’re missing out on instead of rushing to close out another request pop-up. It also doesn’t force them to turn on push for everything. They can customize which notifications they receive.

Push notifications are supposed to be helpful. And access to your users’ phones is supposed to enhance their experience. That’s why it’s important to ask for their cooperation in enabling these features within the right context. Rather than bombard them with request after request at the outset of installing or opening an app, deliver them within the experience as in-line elements.
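To make the “in context” advice concrete, here’s a rough sketch of the pattern using the browser’s standard Notification API (the native iOS and Android permission APIs follow the same principle). The button, its copy, and the trigger moment are hypothetical:

  // Sketch: only trigger the system permission dialog after the user
  // taps an in-app element that explains why notifications help them.
  // The #enable-updates button is an invented example.
  const enableButton =
    document.querySelector<HTMLButtonElement>("#enable-updates");

  enableButton?.addEventListener("click", async () => {
    // Respect an earlier "no": don't re-trigger the system dialog.
    if (Notification.permission === "denied") {
      return;
    }

    // The dialog appears at a moment when its value is obvious,
    // e.g. right after the user subscribes to a channel.
    const result = await Notification.requestPermission();

    if (result === "granted") {
      new Notification("You're set! We'll let you know about new videos.");
    }
  });

The key design choice is that the system dialog only ever appears in response to a user action whose benefit has just been explained, so a “no” is informed rather than reflexive.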

3. Unnecessary Icon Labels

Note that this point is called unnecessary icon labels and not just a sweeping generalization of all of them. That’s because there are certain parts of an app where icon labels still work well. Like the navigation bar.

However, I’ve noticed an alarming trend lately whereby apps pair every page or tab name with a matching icon. There are a number of reasons why this is an issue and I’m going to use the GEICO app to demonstrate them.

The GEICO mobile app home page comes with a list of services and modules paired with icons. (Image source: GEICO) (Large preview)

This home page makes it easy for users to take advantage of their auto insurance and related services on the go. Let’s focus on the “Vehicle Trouble” section though.

There are four tabs:

  • Emergency Roadside Service represented by a tow truck icon,
  • Start a New Claim represented by a car with what looks like a crash symbol,
  • Report Glass Damage represented by a car with a crack on the windshield,
  • View Recent Claims represented by a clipboard with the letter “C” on it.

The icons aren’t all that easy to decipher (except the tow truck) and I’m just not sure they add any value here. Really, if you can’t think of anything better than putting a letter “C” on a clipboard to represent claims, maybe icons aren’t needed after all?

Next, let’s take a look at the GEICO app’s list of settings:

The GEICO app’s navigation pairs each page with an icon. (Image source: GEICO) (Large preview)

There are a lot of settings pages here. Not only that, they’re not the kinds of pages you’d typically see in other mobile apps, so the designer has had to get creative in pairing them with icons.

If this navigation didn’t have icons, I think it would be much easier to read through the options. The same goes for the home page. Without the icons, the font size could be increased so the focus could strictly be on the page names and insured users could get to the information they need more quickly. As it stands now, the icons are just wasted space.

Let’s look at another example.

Rover is an app that pet owners can use to book pet sitting and walking services. Icons are used sparingly through the app to distinguish services from one another as well as to label the navigation pages.

The Rover mobile app includes icons to label its navigation and services. (Image source: Rover) (Large preview)

I don’t think the icons on this page are necessary in terms of expediting user selection (e.g. “I need overnight house sitting so I’m going to choose the moon-over-the-house icon.”). That said, I don’t think the icons detract from the button text since each option is clearly labeled with big, bold font. What’s more, the icons do a nice job of bringing balance to the buttons so there aren’t huge white gaps in the middle.

Now, let’s look at what the designer has chosen to do under the “More” tab:

Rover’s list of “More” settings doesn’t include icons like the rest of the app. (Image source: Rover) (Large preview)

This is similar to GEICO’s slide-out navigation menu. But notice how Rover’s is text only. Considering how common these settings are from app to app, it would’ve been easy enough to add icons to each, but the designer chose to leave them off and I think that was a good decision.

There’s a time and place when icons serve a purpose. But as far as labeling a secondary navigation menu in your app goes, it’s time to do away with that. I’d also urge caution about labeling pages with icons when it’s a struggle to find a match: that should be a sign that they’re not needed to begin with.

4. Excessively Long Home Pages

In web design, we’re seeing much shorter home pages than in years past, thanks to the need for more efficient mobile experiences. So, why isn’t this something we’re doing in mobile app design?

There are some apps where this isn’t an issue. Namely, ones where there’s no scrolling at all (e.g. dating apps, gaming apps, etc.). And there are some cases where endless scrolling on the home page is fine (e.g. news and social media apps).

But what about other apps?

Listings apps (like for real estate or travel) sometimes have a hard time with this. For example, this is the top of the Airbnb mobile app:

Airbnb app home page with search bar and categories. (Image source: Airbnb) (Large preview)

This part of the page is well done and includes everything users need to find what they’re looking for:

  • A search bar,
  • A list of travel categories to swipe through,
  • Quick links to recent search queries.

But for some reason, Airbnb has designed this home page to go on and on and on with sections for:

  • Top-rated experiences,
  • Airbnb Plus places to stay,
  • Introducing Airbnb Adventures,
  • Places to stay around the world,
  • Featured Airbnb Plus destinations,
  • Stay with a superhost,
  • Unique places to stay for your next trip,
  • Explore New York City,
  • And on and on it goes.
Airbnb includes over a dozen sections of content on the home page of its app. (Image source: Airbnb) (Large preview)

I’m not sure what the logic was here. While I understand wanting to help your users by providing them with useful recommendations, this goes way overboard. It’s not even as though this is personalized content based on the user’s profile or recent searches. It’s just a smattering of categories that, if anything, are going to overload and overwhelm users with options.

If the app you’re building or have built runs into a similar problem, check out Hotels.com for inspiration:

The Hotels.com app provides users with a search bar and recently viewed hotels upon entering the app. (Image source: Hotels.com) (Large preview)

Unlike Airbnb, Hotels.com’s home “Discover” page is short. All it takes is three swipes to get to the bottom of the page. Users see sections for:

  • Recent searches,
  • A city guide (based on a recent query),
  • Last-minute deals,
  • Current bookings,
  • Hotels.com Rewards standings (if relevant).

For the most part, the content is 100% relevant to the user and not just meant to promote every possible service or feature of the app.

If you really feel like users would benefit from seeing every possible feature, create a secondary navigation for it. That way, they can quickly scan through the options and pick the one(s) they’re most interested in. When you give them an endless home page to scroll through and too many listings and buttons to click, you’re only going to make it harder for them to take action.

5. Dark Patterns in Ads

You have to monetize a mobile app if you’re going to make the original investment worth your while. It’s as simple as that.

But I’ve recently encountered some very scary dark patterns in mobile app monetization — specifically, in the way ads are designed. And it’s got me wondering whether third-party ad networks are really the smartest way to monetize if they’re going to compromise everything you’ve otherwise done to create an awesome in-app experience.

Now, I understand that app designers usually don’t have any role in designing the ads that appear. That said, do you really think your users know anything about ad networks and how those ad placements get inside your app? Of course not!

So, when one of your users has a bad experience with an ad, what do you think is going to happen? They’re not going to think:

“Oh, that advertiser is terrible for doing that.”

Instead, they’re going to think:

“If I see one more ad like this, I’m uninstalling this app.”

Let me show you some examples of ads that will push the limits of your users’ patience.

This is Wordscapes, a gaming app I’m quite fond of:

A banner ad at the bottom of the Wordscapes app is cut off. (Image source: Wordscapes) (Large preview)

I’ve been playing Wordscapes for a long time and when I first started, it was great. The banner ads were there, but they never really got in the way. And the interstitial video ads only appeared every few rounds or so. They were always easy to dismiss, too.

Over the past year or so, however, the quality of ads has majorly deteriorated. Take the banner ad above. That’s actually a video ad that doesn’t fit in the allotted space.

Then, you have this badly designed banner ad for Jynarque:

A banner ad that’s too dark appears at the bottom of Wordscapes. (Image source: Wordscapes) (Large preview)

Neither of these banner ads is really a dark pattern. However, they do suggest there’s something not quite right about where Wordscapes is sourcing its ad content from.

Now, I’m going to show you some of the more deceptive ads I’ve come across.

This is an ad from Showtime to promote the TV show Shameless:

Wordscapes shows an interstitial video ad for Showtime’s Shameless. (Image source: Wordscapes) (Large preview)

See the number “5” in the top-right corner? That’s a countdown timer, which should tell users how long they have to wait until they can dismiss the ad. However, when the timer is up, this icon appears:

A Showtime ad provides users with an untraditional escape after the timer runs out. (Image source: Wordscapes) (Large preview)

The timer gets to “0” and is replaced by this button. It’s not the traditional “X” that app users are accustomed to when it comes to watching ads, so they might not realize this will take them back into the game. In fact, they might misinterpret this “Next” symbol as a “Play” button and watch the ad in full. While it’s nice that Showtime gives users an exit, it would be better if the iconography were consistent with other video ads.

Then, there’s this interstitial ad for DoorDash:

A DoorDash ad includes two fake “X” buttons. (Image source: Wordscapes) (Large preview)

This is what the ad looks like the second it appears on the screen, which is actually encouraging.

“An ad that’s going to let us exit out of it right away! Woohoo!”

But that’s not the case at all. Notice how there are two X’s in the top-right corner. One of them looks fake (the plain “X” symbol) while the other looks like an “X” you’d use to dismiss an ad.

The first time I saw this, I clicked on the good “X”, hoping my finger would be small enough to miss the fake target. Yet, this is where I ended up:

An ad for DoorDash tries to take Wordscapes visitors to the app store (Image source: Wordscapes) (Large preview)

The click takes users out of the Wordscapes app and tries to move them to the app store. After hitting “Cancel” and sitting through five more seconds of the DoorDash ad, this new “X” appears in the top-right corner:

DoorDash finally displays the real exit button for its mobile ad. (Image source: Wordscapes) (Large preview)

At this point, I can’t imagine users are very happy with DoorDash or Wordscapes for this experience.

These examples of bad ads and dark patterns in monetization are just the tip of the iceberg, too. There are ads that:

  • Provide no timer or indication of when the ad will end.
  • Switch the placement of the “X” so users unintentionally click the ad instead of leaving it.
  • Auto-play sound even when the device’s sound is turned off.

I know I’m picking on Wordscapes because I spend the most time inside the app, but it’s not the only one whose reputation is being hurt by third-party ad content.

Again, I recognize that you have no say in the design or execution of ads that come from ad networks. That said, I’d really urge you to talk to your clients about being more discerning about where they source their ads from. If mobile ads continue to be this bad, it might be worth sourcing your own ad content from partners and sponsors you trust instead of random companies that use deceptive advertising tactics.

Wrapping Up

There are a ton of reasons to declutter your mobile app designs. But if these examples have demonstrated anything, it’s that the most important reason to clean up is to get rid of useless and sometimes harmful design elements and techniques.

And if you’re having a hard time getting rid of the excess, I’d encourage you to reevaluate the core product. If it’s not strong enough to stand on its own, in its simplest of forms, then it’s time to go back to the drawing board because no amount of distractions you fill it with will make it a worthwhile download for your users.

(ra, il)
Categories: Design